Multi-modal discriminative dictionary learning for Alzheimer's disease and mild cognitive impairment

https://doi.org/10.1016/j.cmpb.2017.07.003

Highlights

  • A multi-modal discriminative dictionary learning (DL) algorithm for AD/MCI classification is proposed.

  • mSCDDL yields recognition rates superior to or comparable with those of several other algorithms.

  • A weighted-combination framework for multi-feature fusion is incorporated into the DL scheme.

Abstract

Background and objective

Differentiating mild cognitive impairment (MCI), the prodromal stage of Alzheimer's disease (AD), from normal controls (NC) is important because recent research emphasizes the early pre-clinical stage, where disease abnormalities may be identified, treated, and possibly prevented.

Methods

The current study puts forward a multi-modal extension of the supervised within-class-similarity discriminative dictionary learning algorithm (SCDDL) that we introduced previously, for distinguishing MCI from NC. The proposed algorithm is based on weighted combination and is termed multi-modality SCDDL (mSCDDL). Structural magnetic resonance imaging (sMRI), fluorodeoxyglucose positron emission tomography (FDG-PET) and florbetapir-PET data of 113 AD patients, 110 MCI patients and 117 NC subjects from the Alzheimer's Disease Neuroimaging Initiative database were used for classification of MCI vs. NC and of AD vs. NC.

Results

With mSCDDL, the classification accuracy reached 98.5% for AD vs. NC and 82.8% for MCI vs. NC, results superior to or comparable with those of other state-of-the-art approaches reported in recent multi-modality publications.

Conclusions

The mSCDDL procedure is a promising tool for assisting early disease diagnosis using neuroimaging data.

Introduction

Studies of Alzheimer's disease (AD) and mild cognitive impairment (MCI) have explored various neuroimaging modalities with promising results. These include structural magnetic resonance imaging (sMRI) [41], [52], functional MRI (fMRI) [15], [32], fluorodeoxyglucose positron emission tomography (FDG-PET) [23], and amyloid PET with tracers such as Pittsburgh compound B (PiB) [51], florbetapir [27], and flutemetamol [36]. As each single modality offers specific information about MCI or AD, combining the complementary information from different modalities may enhance understanding of AD and MCI [10], [29], [43], [48].

sMRI provides structural information about the cerebrum, and sMRI studies have shown that regions such as the hippocampus and parahippocampus are well suited for differentiating MCI from normal controls (NC) [6], [11], [22], [41]. FDG-PET measures glucose metabolism, and FDG-PET studies have indicated that regions such as the superior frontal gyrus and middle cingulate cortex discriminate well between MCI and NC [9], [18], [23]. Amyloid-PET non-invasively measures amyloid accumulation in the brain and has suggested that regions such as the posterior cingulate and lateral temporal cortices are more affected in MCI than in NC [4], [30]. These studies demonstrate that each brain-imaging technique provides a specific view of brain function or structure [3]. In other words, biomarkers from these modalities offer different and potentially complementary information about various aspects of a given disease process [2], [5], [27]. Indeed, multi-modality neuroimaging has become an established research approach in neuroscience [31], [53].

Numerous studies have reported various ways of combining multi-modality data for efficient classification [8], [10], [16], [33], [34], [42], [50] and for better differentiation of patients with AD or MCI from cognitively healthy individuals. For example, a weighted multiple kernel learning (MKL) model has been applied to combine cerebrospinal fluid (CSF), MRI, and PET measures into more powerful classifiers of MCI [16]. A linear weighted random forest (RF) model has been used to combine different modalities and discriminate AD or MCI from NC effectively [8]. To exploit both simple basic features and complex latent representations, multi-kernel SVM learning has been applied [33], [34], [42]. These studies demonstrate that weighted combination is a simple yet effective way to integrate information from multi-modality data.
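As a minimal illustration of the kernel-level variant of weighted combination (a generic sketch, not the specific MKL or RF models cited above; the modality weights, toy data, and function name below are hypothetical), per-modality Gram matrices can be fused by a convex combination and passed to a standard SVM:

    # Sketch of weighted kernel-level fusion across modalities (illustrative only;
    # the weights would normally be selected by cross-validation).
    import numpy as np
    from sklearn.svm import SVC

    def fuse_kernels(kernels, weights):
        """Convex combination K = sum_m w_m * K_m of per-modality Gram matrices."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                              # enforce sum-to-one weights
        return sum(wm * K for wm, K in zip(w, kernels))

    # Toy data: 3 modalities, 20 subjects, linear kernel per modality.
    rng = np.random.default_rng(0)
    X_mods = [rng.standard_normal((20, 90)) for _ in range(3)]   # e.g. 90 ROI features
    y = np.array([0] * 10 + [1] * 10)                            # NC vs. MCI labels
    K = fuse_kernels([X @ X.T for X in X_mods], weights=[0.4, 0.3, 0.3])
    clf = SVC(kernel="precomputed").fit(K, y)                    # SVM on the fused kernel
    print(clf.score(K, y))                                       # training accuracy on toy data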

Recently, weighted multi-modality sparse representation-based classification (mSRC) was introduced to the neuroimaging community and has demonstrated feasibility and effectiveness in discriminating AD or MCI from NC using data from sMRI, FDG-PET, and florbetapir-PET [44]. However, mSRC has some weaknesses because it builds on the SRC method, which uses all the training data as the dictionary. With large training sets, computing the sparse representation over such a dictionary can be time-consuming [13]. In addition, when the training samples are not very representative, classification accuracy suffers because the dictionary is built directly from them [46].
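To make the cost of using all training samples as the dictionary concrete, the following simplified single-modality SRC sketch (illustrative code with hypothetical variable names, not the mSRC implementation of [44]) codes a test sample over the stacked training set and assigns the class with the smallest reconstruction residual; the sparse-coding step grows with the training-set size because every training sample is an atom:

    # Simplified sparse representation-based classification (SRC) sketch:
    # the dictionary is the stack of unit-norm training samples.
    import numpy as np
    from sklearn.preprocessing import normalize
    from sklearn.decomposition import sparse_encode

    def src_predict(X_train, y_train, x_test, n_nonzero=10):
        D = normalize(X_train)                                   # rows = unit-norm training atoms
        code = sparse_encode(x_test.reshape(1, -1), D,
                             algorithm="omp", n_nonzero_coefs=n_nonzero)[0]
        residuals = {}
        for c in np.unique(y_train):
            mask = (y_train == c)
            recon = code[mask] @ D[mask]                         # rebuild from class-c atoms only
            residuals[c] = np.linalg.norm(x_test - recon)
        return min(residuals, key=residuals.get)                 # class with smallest residual

    # Toy usage: 60 training subjects, 90 ROI features, two classes (0 = NC, 1 = MCI).
    rng = np.random.default_rng(1)
    X_train, y_train = rng.standard_normal((60, 90)), np.repeat([0, 1], 30)
    print(src_predict(X_train, y_train, rng.standard_normal(90)))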

A method recently introduced in pattern recognition and machine learning by Xu et al., supervised within-class-similarity discriminative dictionary learning (SCDDL), has proved robust and efficient for face recognition [45]. To improve recognition accuracy, SCDDL incorporates a within-class-similarity term on the representation coefficients into the dictionary-learning objective function, together with a linear classification error term; in this sense it combines ideas from Fisher Discrimination Dictionary Learning (FDDL) [47] and from Discriminative K-SVD (D-KSVD) [49] or Label Consistent K-SVD (LC-KSVD) [12], and it aims to derive a more compact dictionary than SRC [45]. Although its first application was to face recognition (2-dimensional data), we believe that SCDDL holds promise for neuroimaging data, especially multi-modal data, for differentiating AD or MCI from NC.
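Schematically, a dictionary-learning criterion of this kind augments the usual reconstruction and sparsity terms with a within-class-similarity penalty on the codes and a linear classification error term. The following is a generic formulation with assumed notation, not the authors' exact objective (Y: training features; D: dictionary; X: sparse codes with columns x_i; m_{c(i)}: mean code of the class of sample i; H: label matrix; W: linear classifier; lambda_1 to lambda_3: trade-off parameters):

    \min_{D,\,W,\,X}\;
        \lVert Y - DX \rVert_F^2
        + \lambda_1 \lVert X \rVert_1
        + \lambda_2 \sum_i \lVert x_i - m_{c(i)} \rVert_2^2
        + \lambda_3 \lVert H - WX \rVert_F^2

Here the second term enforces sparsity, the third pulls codes of the same class toward their class mean (the within-class-similarity term), and the fourth is the linear classification error.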

The contributions of this study are as follows. First, the SCDDL method was extended from a single modality to multiple modalities via weighted combination (mSCDDL) and was examined for robustness and accuracy in differentiating NC from MCI or AD. In this study, the multi-modal data used in SCDDL and mSCDDL were sMRI, FDG-PET, and florbetapir-PET. Second, the mSCDDL method was compared with other state-of-the-art multi-modality classification algorithms for performance in differentiating NC from MCI or AD.
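As a minimal sketch of the score-level weighted combination underlying such a multi-modality extension (illustrative Python; the per-modality scores stand in for the outputs of per-modality classifiers, and the weights are hypothetical rather than those learned in this study):

    # Score-level fusion across modalities (illustrative sketch).
    import numpy as np

    def fuse_scores(scores_per_modality, weights):
        """scores_per_modality: list of (n_samples, n_classes) arrays, one per modality."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                                  # normalize modality weights
        fused = sum(wm * s for wm, s in zip(w, scores_per_modality))
        return fused.argmax(axis=1)                      # predicted class per sample

    # Toy usage: 3 modalities, 5 test subjects, 2 classes (NC vs. MCI).
    rng = np.random.default_rng(2)
    scores = [rng.random((5, 2)) for _ in range(3)]      # e.g. per-modality classifier scores
    print(fuse_scores(scores, weights=[0.5, 0.25, 0.25]))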

Section snippets

Participants

The datasets used in this study were downloaded from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (http://www.loni.ucla.edu/ADNI/). A $60 million 5-year project, the ADNI was launched in 2003 by the National Institute on Aging (NIA), National Institute of Biomedical Imaging and Bioengineering (NIBIB), Food and Drug Administration (FDA), private pharmaceutical companies, and non-profit organizations.

According to ADNI protocols, the severity of cognitive impairment was assessed using

Algorithm

All the features extracted from the three modalities of data (sMRI, FDG-PET, and florbetapir-PET) were included for the differentiation of AD and MCI from NC. The supervised within-class-similarity discriminative dictionary learning (SCDDL) method was presented for the first time in this section and was applied to neuroimaging data to classify MCI and AD against NC. Further, the extended multi-modality framework based on SCDDL, termed multi-modality SCDDL (mSCDDL), was applied to the combined multi-modality data to differentiate NC from MCI and AD
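For orientation, a generic alternating dictionary-learning loop is sketched below (sparse coding followed by a closed-form MOD-style dictionary update; the supervised within-class-similarity and classification terms of SCDDL, and the multi-modality weighting, are deliberately omitted, so this is not the authors' optimization):

    # Generic alternating dictionary-learning loop (illustrative only).
    import numpy as np
    from sklearn.decomposition import sparse_encode

    def learn_dictionary(Y, n_atoms=20, n_iter=10, n_nonzero=5, seed=0):
        """Y: (n_samples, n_features) training matrix; returns (dictionary, codes)."""
        rng = np.random.default_rng(seed)
        D = rng.standard_normal((n_atoms, Y.shape[1]))
        D /= np.linalg.norm(D, axis=1, keepdims=True)            # unit-norm atoms
        for _ in range(n_iter):
            X = sparse_encode(Y, D, algorithm="omp",
                              n_nonzero_coefs=n_nonzero)         # codes: (n_samples, n_atoms)
            # MOD-style closed-form dictionary update, then renormalize atoms.
            D = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_atoms), X.T @ Y)
            D /= np.linalg.norm(D, axis=1, keepdims=True)
        return D, X

    # Toy usage: one modality, 60 subjects, 90 ROI features, 20 atoms (as in the comparison below).
    Y = np.random.default_rng(3).standard_normal((60, 90))
    D, X = learn_dictionary(Y)
    print(D.shape, X.shape)                                      # (20, 90) (60, 20)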

Comparison with single-modality SCDDL

To facilitate comparison, the dictionary size was set to 20 atoms for both SCDDL and mSCDDL. The performance of single-modality SCDDL (SCDDL-sMRI, SCDDL-FDG-PET, and SCDDL-florbetapir-PET) and of multi-modality mSCDDL (sMRI + FDG-PET + florbetapir-PET) was evaluated. As shown in Fig. 1 and Table 2, multi-modality mSCDDL achieved higher accuracy in classifying MCI or AD from NC than all the single-modality methods.

For classifying MCI from NC, the mSCDDL achieved an accuracy of

Discussion

In this study, a multi-modality classification method, mSCDDL, was extended from single-modality SCDDL and compared with other state-of-the-art multi-modality methods (MKL, JRC, and mSRC) for identifying AD and MCI. Three modalities, namely sMRI, FDG-PET, and florbetapir-PET, were used. The results demonstrated the effectiveness of mSCDDL in identifying AD and MCI (97.36% for AD and 77.66% for MCI). The mSCDDL method was also compared with other state-of-the-art multi-modality classification

Limitations

This study had several limitations. First, in addition to sMRI, FDG-PET, and florbetapir-PET, many other data sources are useful for AD or MCI classification, such as cerebrospinal fluid (CSF) measures [28], [35], [50]. Second, only the weighted combination method was used for the multi-modality analysis; further studies could extend SCDDL to multiple modalities within a multi-kernel learning framework. Third, this study did not include the cognition information in the

Conclusions

This study proposed a multi-modality supervised within-class-similarity discriminative dictionary learning classifier, termed mSCDDL, to combine multi-modality features (sMRI, FDG-PET, and florbetapir-PET) for differentiating AD and MCI from NC. The results suggest that the mSCDDL procedure is a promising tool for classification, especially in helping to diagnose diseases from neuroimaging data.

Funding information

This work was supported by the Funds for International Cooperation and Exchange of the National Natural Science Foundation of China [grant number 61210001], the General Program of National Natural Science Foundation of China [grant number 61571047], and the Fundamental Research Funds for the Central Universities [grant number 2017EYT36].

Acknowledgements

The data set used in preparation of this paper was obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.ucla.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in the analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.ucla.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf.

Declaration of interest

(1) There are no actual or potential conflicts of interest.

(2) No author's institution has contracts relating to this research through which it or any other organization may stand to gain financially now or in the future.

(3) There are no other agreements of the authors or their institutions that could be seen as involving a financial interest in this work.

Submission declaration and verification

The work described has not been published previously and will not be submitted elsewhere while it is under consideration by this journal.

All authors have reviewed the contents of the manuscript being submitted, approve of its contents and validate the accuracy of the data.

References (56)

  • N. Tzourio-Mazoyer, Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain, Neuroimage (2002).
  • K.B. Walhovd, Multi-modal imaging predicts memory performance in normal aging and cognitive decline, Neurobiol. Aging (2010).
  • C.Y. Wee, Enriched white matter connectivity networks for accurate identification of MCI patients, Neuroimage (2011).
  • C.Y. Wee, Identification of MCI individuals using structural and functional connectivity networks, Neuroimage (2012).
  • E. Westman, Combining MRI and CSF measures for classification of Alzheimer's disease and prediction of mild cognitive impairment conversion, Neuroimage (2012).
  • L. Xu, Multi-modality sparse representation-based classification for Alzheimer's disease and mild cognitive impairment, Comput. Methods Programs Biomed. (2015).
  • L. Yuan, Multi-source feature learning for joint analysis of incomplete multiple heterogeneous neuroimaging data, Neuroimage (2012).
  • D. Zhang, Multimodal classification of Alzheimer's disease and mild cognitive impairment, Neuroimage (2011).
  • X. Zhu, A novel matrix-similarity based loss function for joint regression and classification in AD diagnosis, Neuroimage (2014).
  • C.M. Bauer, Multimodal Analysis in Normal Aging, Mild Cognitive Impairment, and Alzheimer's Disease: Group Differentiation, Baseline Cognition and Prediction of Future Cognitive Decline (2013).
  • V.D. Calhoun, A method for multitask fMRI data fusion applied to schizophrenia, Hum. Brain Mapp. (2006).
  • V. Camus, Using PET with 18F-AV-45 (florbetapir) to quantify brain amyloid load in a clinical environment, Eur. J. Nucl. Med. Mol. Imaging (2012).
  • K.R. Gray, Random forest-based similarity measures for multi-modal classification of Alzheimer's disease, Neuroimage (2012).
  • S.D. Gretel, Glucose metabolism during resting state reveals abnormal brain networks organization in the Alzheimer's disease and mild cognitive impairment, PLoS One (2013).
  • V. Hinrichs, Predictive markers for AD in a multi-modality framework: an analysis of MCI progression in the ADNI population, Neuroimage (2011).
  • C.R. Jack, Prediction of AD with MRI-based hippocampal volume in mild cognitive impairment, Neurology (1999).
  • Z. Jiang, Label consistent K-SVD: learning a discriminative dictionary for recognition, IEEE Trans. Pattern Anal. Mach. Intell. (2013).
  • S.M. Landau, Comparing predictors of conversion and decline in mild cognitive impairment, Neurology (2010).