Addressing imaging accessibility by cross-modality transfer learning
4 April 2022
Abstract
Multiple imaging modalities are typically available for the diagnosis/prognosis of a disease such as Alzheimer's Disease (AD), but they differ in accessibility and accuracy. MRI is part of the standard of care and is therefore highly accessible to patients. In contrast, imaging of the pathologic hallmarks of AD, such as amyloid-PET and tau-PET, has low accessibility due to cost and other practical constraints, even though it is expected to provide higher diagnostic/prognostic accuracy than standard clinical MRI. We propose Cross-Modality Transfer Learning (CMTL) for accurate diagnosis/prognosis based on a standard imaging modality with high accessibility (mod_HA), using a novel training strategy that leverages not only data from mod_HA but also knowledge transferred from a model based on an advanced imaging modality with low accessibility (mod_LA). We applied CMTL to predict conversion from Mild Cognitive Impairment (MCI) to AD using the Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets, demonstrating improved performance of the MRI-based (mod_HA) model when it leverages knowledge transferred from the tau-PET-based (mod_LA) model.
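The abstract does not describe CMTL's transfer mechanism in detail. As one illustration of the general idea, the minimal sketch below assumes a cross-modal soft-label distillation setup: a tau-PET (mod_LA) "teacher" is trained first, and an MRI (mod_HA) "student" is then trained on its own labeled data plus the teacher's soft predictions on paired subjects. The network architectures, feature dimensions, hyperparameters, and synthetic data are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of cross-modality knowledge transfer via soft-label
# distillation (one possible realization; not the paper's actual method).
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, hidden=64, n_classes=2):
    # Small classifier used for both modalities (assumed architecture).
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, n_classes))

# Synthetic stand-ins for paired subjects with both modalities available.
n, mri_dim, tau_dim = 256, 120, 90            # hypothetical feature dimensions
mri_x  = torch.randn(n, mri_dim)              # mod_HA: MRI-derived features
tau_x  = torch.randn(n, tau_dim)              # mod_LA: tau-PET-derived features
labels = torch.randint(0, 2, (n,))            # MCI-to-AD conversion (0/1)

# 1) Train the mod_LA (tau-PET) teacher on labeled data.
teacher = mlp(tau_dim)
opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-3)
for _ in range(200):
    opt_t.zero_grad()
    F.cross_entropy(teacher(tau_x), labels).backward()
    opt_t.step()

# 2) Train the mod_HA (MRI) student on its own labels plus knowledge
#    transferred from the teacher's soft predictions on paired subjects.
student = mlp(mri_dim)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
T, alpha = 2.0, 0.5                           # distillation temperature / weight
with torch.no_grad():
    soft_targets = F.softmax(teacher(tau_x) / T, dim=1)
for _ in range(200):
    opt_s.zero_grad()
    logits = student(mri_x)
    hard = F.cross_entropy(logits, labels)    # supervised loss on mod_HA data
    soft = F.kl_div(F.log_softmax(logits / T, dim=1),
                    soft_targets, reduction="batchmean") * T * T
    (alpha * hard + (1 - alpha) * soft).backward()
    opt_s.step()

# At deployment only the MRI-based student is needed, preserving accessibility.
```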
© 2022 Society of Photo-Optical Instrumentation Engineers (SPIE).
Zhiyang Zheng, Yi Su, Kewei Chen, David A. Weidman, Teresa Wu, Ben Lo, Fleming Lure, and Jing Li "Addressing imaging accessibility by cross-modality transfer learning", Proc. SPIE 12033, Medical Imaging 2022: Computer-Aided Diagnosis, 120330X (4 April 2022); https://doi.org/10.1117/12.2611791
KEYWORDS
Magnetic resonance imaging, Data modeling, Performance modeling, Alzheimer's disease, Cognitive modeling, Statistical modeling, Mathematical modeling