Abstract:
Hypertrophic cardiomyopathy (HCM) and cardiac amyloidosis (CA) are heart diseases with similar echocardiographic presentations, and because CA is rare in clinical practice, the two are easily misdiagnosed for one another. Recently, deep learning-based echocardiographic diagnostic methods have been proposed. These data-driven methods require large amounts of training data to achieve high diagnostic accuracy, so they are poorly suited to conditions with sparse clinical data, such as CA. To address the poor model performance caused by scarce training data, we propose a multimodal model (USCL) that combines multi-view echocardiograms with text to differentiate HCM from CA. Specifically, USCL consists of two main parts: one part employs unimodal supervision (UMS) to obtain unimodal representations and prediction results for the echocardiogram and text modalities, respectively; the other part uses a designed multimodal contrastive learning (MMCL) approach to align the representations of the two modalities, thereby learning more reliable multimodal representations in the few-shot setting. We built a dataset for training and evaluation comprising 209 patients with HCM, 203 patients with CA, and 200 patients with normal cardiac function. Experiments show that USCL achieves promising results.
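The abstract gives no implementation details, so the following is only a minimal sketch of how the two training signals described above (UMS and MMCL) could be combined, assuming pre-extracted echocardiogram and text features, per-modality cross-entropy classification heads, and a CLIP-style InfoNCE contrastive objective. All names here (USCLSketch, uscl_loss, the feature dimensions) are hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class USCLSketch(nn.Module):
    """Hypothetical sketch: projection heads into a shared space (for MMCL)
    plus one classifier per modality (for UMS)."""
    def __init__(self, echo_dim=512, text_dim=768, embed_dim=256, num_classes=3):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.echo_proj = nn.Linear(echo_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        # Unimodal supervision (UMS): separate prediction head per modality.
        self.echo_head = nn.Linear(embed_dim, num_classes)
        self.text_head = nn.Linear(embed_dim, num_classes)
        # Learnable temperature, initialized near ln(1/0.07) as in CLIP.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))

    def forward(self, echo_feat, text_feat):
        z_e = F.normalize(self.echo_proj(echo_feat), dim=-1)
        z_t = F.normalize(self.text_proj(text_feat), dim=-1)
        return z_e, z_t, self.echo_head(z_e), self.text_head(z_t)

def uscl_loss(z_e, z_t, logits_e, logits_t, labels, logit_scale):
    # UMS term: cross-entropy on each modality's own prediction.
    ums = F.cross_entropy(logits_e, labels) + F.cross_entropy(logits_t, labels)
    # MMCL term: symmetric InfoNCE aligning paired echo/text embeddings.
    sims = logit_scale.exp() * z_e @ z_t.t()          # (B, B) similarity matrix
    targets = torch.arange(z_e.size(0), device=z_e.device)
    mmcl = (F.cross_entropy(sims, targets) + F.cross_entropy(sims.t(), targets)) / 2
    return ums + mmcl

# Usage with dummy pooled features (batch of 8 paired echo/text samples):
model = USCLSketch()
echo = torch.randn(8, 512)              # e.g. pooled multi-view video features
text = torch.randn(8, 768)              # e.g. pooled report embeddings
labels = torch.randint(0, 3, (8,))      # HCM / CA / normal
z_e, z_t, le, lt = model(echo, text)
loss = uscl_loss(z_e, z_t, le, lt, labels, model.logit_scale)
loss.backward()
```

Under these assumptions, the contrastive term acts as a regularizer in the few-shot regime: it ties the scarce CA examples in one modality to their paired descriptions in the other, so the shared embedding space is shaped by cross-modal correspondence rather than by the small labeled set alone.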
Date of Conference: 03-06 December 2024
Date Added to IEEE Xplore: 10 January 2025