
Medical Image Analysis

Volume 73, October 2021, 102160

Research paper
Deep cross-view co-regularized representation learning for glioma subtype identification

https://doi.org/10.1016/j.media.2021.102160

Highlights

  • A deep cross-view co-regularized representation learning framework for glioma subtyping.

  • A unified framework integrating view representation learning and multiple constraints.

  • Fusing view-specific and view-sharable representations improves identification performance.

  • Extensive experiments demonstrate the superior performance of the proposed method.

Abstract

The new subtypes of diffuse gliomas are recognized by the World Health Organization (WHO) on the basis of genotypes, e.g., isocitrate dehydrogenase and chromosome arms 1p/19q, in addition to the histologic phenotype. Glioma subtype identification can provide valuable guidance for both risk-benefit assessment and clinical decision-making. Feature representations of gliomas in magnetic resonance imaging (MRI) have been widely used to reveal underlying subtype status. However, since gliomas are highly heterogeneous tumors with quite variable imaging phenotypes, learning discriminative feature representations of gliomas in MRI remains challenging. In this paper, we propose a deep cross-view co-regularized representation learning framework for glioma subtype identification, in which view representation learning and multiple constraints are integrated into a unified paradigm. Specifically, we first learn latent view-specific representations based on cross-view images generated from MRI via a bi-directional mapping connecting the original imaging space and the latent space, where a view-correlated regularizer and an output-consistent regularizer are employed to explore view correlation and derive view consistency, respectively. We further learn view-sharable representations that can exploit the complementary information of multiple views by projecting the view-specific representations into a holistically shared space and enhancing them via an adversarial learning strategy. Finally, the view-specific and view-sharable representations are incorporated for identifying glioma subtype. Experimental results on multi-site datasets demonstrate that the proposed method outperforms several state-of-the-art methods in detecting glioma subtype status.

Introduction

Gliomas are the most frequent infiltrative brain neoplasms, accounting for 80% of malignant tumors originating from glial cells in the central nervous system (Ostrom et al., 2014). In addition to the histologic phenotype, the new subtypes of diffuse gliomas are recognized by the World Health Organization (WHO) based on genotypes, in which isocitrate dehydrogenase (IDH) mutation and chromosome arms 1p/19q codeletion are considered crucial genetic parameters (Eckel-Passow, Lachance, Molinaro, et al., 2015, Bieńkowski, Wöhrer, Moser, et al., 2018). Specifically, gliomas are classified into five subtypes: 1) lower-grade gliomas (LGG: grade II and III) with mutant IDH and codeleted 1p/19q (LGG-mut-codel), 2) LGG with mutant IDH and intact 1p/19q (LGG-mut-intac), 3) LGG with wild-type IDH (LGG-wt), 4) glioblastoma (GBM: grade IV) with mutant IDH (GBM-mut), and 5) GBM with wild-type IDH (GBM-wt) (Louis, Perry, Reifenberger, et al., 2016, Thurnher, 2009). In general, most work has conducted subtype identification by distinguishing LGG from GBM, mutant IDH from wild-type IDH, and codeleted 1p/19q from intact 1p/19q (van Lent, van Baarsen, Snijders, et al., 2020, Eckel-Passow, Lachance, Molinaro, et al., 2015, Fellah, Caudal, De Paula, et al., 2013), since different glioma subtypes entail different treatment decisions and survival rates (Houillier, Wang, Kaloshi, et al., 2010, Beiko, Suki, Hess, et al., 2014). For example, gliomas with mutant IDH are driven by specific epigenetic alterations, which makes them sensitive to therapeutic interventions that are less effective against gliomas with wild-type IDH (Songtao et al., 2012). Current studies have suggested that patients with 1p/19q-codeleted gliomas benefit significantly more from gross total tumor resection than from partial resection, and the survival benefit achieved by gross total resection may be even greater for gliomas with intact 1p/19q (Jansen et al., 2012).
Therefore, identification of glioma subtype can provide valuable guidance for both risk-benefit assessment and clinical decision-making (van der Voort, Incekara, Wijnenga, et al., 2019, Riemenschneider, Jeuken, Wesseling, et al., 2010).

Previous studies have revealed the feasibility of using feature representations in magnetic resonance imaging (MRI) to probe underlying histologic phenotype and genotype of gliomas (van Lent et al., 2020). Generally, these feature representations can be roughly categorized into two classes, including 1) qualitative feature representations and 2) quantitative feature representations. The qualitative feature representations of gliomas are obtained by neuroradiologists’ evaluation and usually involve tumor location, margin and calcification (Patel, Poisson, Brat, et al., 2017, Qi, Yu, Li, et al., 2014). However, these qualitative imaging characteristics commonly depend on the knowledge and experience of neuroradiologists, which results in low identifiability and poor repeatability (Foltyn et al., 2020). As an alternative solution, quantitative feature representations have been used to represent gliomas, based on which many learning-based methods (e.g., traditional learning-based and deep learning-based approaches) have been developed for subtyping gliomas (Sollini, Antunovic, Chiti, et al., 2019, Korfiatis, Erickson, 2019). Specifically, traditional learning-based methods first extract hand-crafted feature representations, such as textural (e.g., histogram, gray-level co-occurrence matrix [GLCM], and neighborhood gray-tone difference matrix [NGTDM]) and non-textural (e.g., tumor size, solidity, and volume) features, to quantify the entire tumor (Lotan et al., 2019). Subsequently, feature analysis approaches (i.e., feature selection/reduction algorithms and classification methods) are designed to perform prediction based on these hand-crafted feature representations.
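To make the hand-crafted texture features mentioned above concrete, the following is a minimal sketch of a gray-level co-occurrence matrix (GLCM) and the derived contrast statistic. It is an illustrative toy implementation, not the pipeline of any cited work; the function name and defaults are our own.

```python
import numpy as np

def glcm(image, levels=4, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset
    (a schematic version of a classic texture descriptor)."""
    dy, dx = offset
    h, w = image.shape
    mat = np.zeros((levels, levels))
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                # Count co-occurrence of gray levels at the two positions.
                mat[image[y, x], image[y2, x2]] += 1
    return mat / mat.sum()

# Tiny quantized image (4 gray levels), horizontal neighbor offset.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
# Contrast: expected squared gray-level difference between neighbors.
contrast = sum(P[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
print(round(contrast, 3))  # 0.583
```

In practice such statistics (contrast, homogeneity, energy, etc.) are computed over multiple offsets and angles and pooled over the tumor region before feature selection and classification.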

Recently, deep feature representations of gliomas have been learned by deep learning-based models, especially convolutional neural networks (CNNs), achieving state-of-the-art performance compared with other advanced approaches (Lotan, Jain, Razavian, et al., 2019, Li, Wang, Yu, et al., 2017, Akkus, Ali, Sedlář, et al., 2017). Several studies derived deep feature representations from 2D-level axial slices of MRI, in which all 2D slices were assigned the same class labels as the corresponding patients (Matsui, Maruyama, Nitta, et al., 2020, Chang, Grinband, Weinberg, et al., 2018, Li, Wang, Yu, et al., 2017). Although using 2D slices as the input of CNNs provides a natural data augmentation strategy that can alleviate the big-data requirement of network training, it might not represent gliomas well due to the lack of other planes (i.e., coronal and sagittal planes). Therefore, some works used the whole volume of gliomas as input to learn 3D-level deep feature representations, resampling (e.g., downsampling or upsampling) the glioma volume into 3D patches of a specified size (Liang, Zhang, Liang, et al., 2018, Khened, Anand, Acharya, Shah, Krishnamurthi, 2019). More recently, several 2.5D-level multi-view deep feature representations (an intermediate level between the 2D and 3D levels) have been proposed to represent gliomas (Chang, Bai, Zhou, et al., 2018, Banerjee, Mitra, Masulli, et al., 2020), in which the axial, coronal and sagittal planes of gliomas in MRI are extracted as multi-channel inputs of CNNs. Although these 2.5D-level representations are more informative than 2D-level ones and more efficient than 3D-level ones, they may yield suboptimal performance because 1) a single set of axial, coronal and sagittal planes may not sufficiently represent the 3D information, and 2) the complementary information of different planes lacks appropriate consideration.
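The 2.5D multi-view idea can be sketched as extracting the three central orthogonal planes from a 3D tumor patch and stacking them as channels. This is a minimal illustration of the general strategy, not the authors' preprocessing code; the function name and patch size are hypothetical.

```python
import numpy as np

def extract_orthogonal_planes(patch: np.ndarray) -> dict:
    """Extract the three central orthogonal planes (axial, coronal,
    sagittal) from a 3D tumor patch, giving a 2.5D multi-view set."""
    assert patch.ndim == 3
    cz, cy, cx = (s // 2 for s in patch.shape)
    return {
        "axial":    patch[cz, :, :],   # fix the z index: axial plane
        "coronal":  patch[:, cy, :],   # fix the y index: coronal plane
        "sagittal": patch[:, :, cx],   # fix the x index: sagittal plane
    }

# A synthetic 64^3 patch yields three 64x64 views, stackable as a
# 3-channel CNN input.
patch = np.random.rand(64, 64, 64)
views = extract_orthogonal_planes(patch)
stacked = np.stack([views["axial"], views["coronal"], views["sagittal"]])
print(stacked.shape)  # (3, 64, 64)
```

Richer variants slide the three cutting indices over the patch to generate many such plane triplets, which is closer in spirit to the cross-view image generation described below.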

In this paper, we propose a deep cross-view co-regularized representation learning framework for glioma subtype identification. Here, cross-view learning is considered a variant of multi-view learning (Tang et al., 2019) that is devoted to learning the relations (e.g., correlations) or complementary information of different views. In the framework, we first generate a number of cross-view images by extracting a series of axial, coronal and sagittal planes from 3D patches of MRI, then jointly learn discriminative view-specific and view-sharable feature representations and the subsequent classifier in an end-to-end manner, through which both view representation learning and multiple constraints are integrated into a unified co-regularized paradigm. Specifically, after cross-view image generation, we first learn latent view-specific representations for each view via a bi-directional mapping, in which view-correlated and output-consistent regularizers are developed for exploring view correlation and deriving view consistency, respectively. Then, we project the view-specific representations into a holistically shared space to learn informative view-sharable representations, which are simultaneously enhanced by an adversarial learning strategy. Finally, the view-specific and view-sharable representations are incorporated together for glioma subtype identification. The proposed method is evaluated on three binary classification tasks for glioma subtype identification, namely, the LGG vs. GBM task, the IDHmut vs. IDHwt task, and the 1p/19q codel vs. 1p/19q intac task. Experimental results on multi-site MRI datasets demonstrate that the proposed method outperforms several state-of-the-art methods in detecting glioma subtype status.
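The two co-regularizers can be sketched schematically: a view-correlated term that pulls the view-specific latent codes together, and an output-consistent term that penalizes disagreement between per-view predictions. This is a simplified numpy sketch under our own assumptions (mean-squared penalties, hypothetical function names), not the paper's actual loss formulation or network code.

```python
import numpy as np

def view_correlated_reg(latents):
    """Average pairwise squared distance between view-specific latent
    codes: encourages correlated representations across views."""
    loss, n = 0.0, len(latents)
    for i in range(n):
        for j in range(i + 1, n):
            loss += np.mean((latents[i] - latents[j]) ** 2)
    return loss / (n * (n - 1) / 2)

def output_consistent_reg(probs):
    """Variance of per-view predicted probabilities around their mean:
    encourages view-consistent outputs."""
    mean_p = np.mean(probs, axis=0)
    return float(np.mean((probs - mean_p) ** 2))

# Three views, batch of 4 samples, latent dimension 8, binary probs.
rng = np.random.default_rng(0)
latents = [rng.standard_normal((4, 8)) for _ in range(3)]
probs = rng.random((3, 4))
# Both terms are non-negative and vanish when views agree exactly.
print(view_correlated_reg(latents) >= 0, output_consistent_reg(probs) >= 0)  # True True
```

In the full framework such terms would be weighted and added to the classification loss, with the latent codes and predictions produced by the per-view networks rather than random arrays.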

The rest of this paper is organized as follows. In Section 2, we briefly review related studies. We introduce the datasets used in this study and give a detailed description of our method in Section 3. In Section 4, we present experimental settings and results. The discussion and conclusion are provided in Section 5 and Section 6, respectively.

Section snippets

Related work

In this section, we first present an overview of related studies on MRI-based glioma subtype identification. Then, we review multi-view learning approaches and their applications in the medical image analysis field.

Material and methods

In this part, we first introduce the datasets used in our work (Section 3.1). Then, we present the proposed deep cross-view co-regularized representation learning method (Section 3.2).

Experiments

In this section, we first introduce experimental settings, including competing methods, implementation tools, and evaluation strategy. We then show the experimental results of glioma subtype identification, validate the effectiveness of each component in our framework via ablation experiments and analyze the influence of parameters.

Discussion

In this section, we discuss the performance of our proposed method on mono-modality and multi-modality data. We further investigate the model's performance on hierarchical binary classification and multi-class classification tasks. In addition, we clarify the advantages, limitations and future work.

Conclusion

In this paper, we presented a deep cross-view co-regularized representation learning framework for glioma subtype identification using MRI. The main advantage of the framework is its capability of jointly learning discriminative view-specific and view-sharable feature representations and subsequent classifier in an end-to-end manner, through which both view representation learning and multiple constraints are integrated into a unified co-regularized paradigm. Experimental results on a large

CRediT authorship contribution statement

Zhenyuan Ning: Conceptualization, Methodology, Software, Formal analysis, Writing – review & editing. Chao Tu: Data curation, Software, Visualization, Validation. Xiaohui Di: Data curation, Writing – review & editing. Qianjin Feng: Supervision. Yu Zhang: Conceptualization, Supervision, Writing – review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This manuscript was finished while I (Dr. Ning) was nursing my grandmother (Ruzhen Zhu), and I deeply appreciate the crucial part she has played in my life.

References (68)

  • G. Carneiro et al.

    Automated analysis of unregistered multi-view mammograms with deep learning

    IEEE Trans Med Imaging

    (2017)
  • K. Chang et al.

    Residual convolutional neural network for the determination of IDH status in low- and high-grade gliomas from MR imaging

    Clinical Cancer Research

    (2018)
  • P. Chang et al.

    Deep-learning convolutional neural networks accurately classify genetic mutations in gliomas

    American Journal of Neuroradiology

    (2018)
  • B. Chen et al.

    Mixed high-order attention network for person re-identification

    2019 IEEE/CVF International Conference on Computer Vision (ICCV)

    (2019)
  • H. Chen et al.

    Inferring group-wise consistent multimodal brain networks via multi-view spectral clustering

    IEEE Trans Med Imaging

    (2013)
  • J. Eckel-Passow et al.

    Glioma groups based on 1p/19q, IDH, and TERT promoter mutations in tumors

    N. Engl. J. Med.

    (2015)
  • J. Farquhar et al.

    Two view learning: SVM-2K, theory and practice

    Advances in Neural Information Processing Systems (NIPS)

    (2006)
  • S. Fellah et al.

    Multimodal MR imaging (diffusion, perfusion, and spectroscopy): is it possible to distinguish oligodendroglial tumor grade and 1p/19q codeletion in the pretherapeutic diagnosis?

    American Journal of Neuroradiology

    (2013)
  • M. Foltyn et al.

    T2/FLAIR-Mismatch sign for noninvasive detection of IDH-mutant 1p/19q non-codeleted gliomas: validity and pathophysiology

    Neuro-Oncology Advances

    (2020)
  • I.J. Goodfellow et al.

    Generative adversarial nets

    Advances in Neural Information Processing Systems (NIPS)

    (2014)
  • D.R. Hardoon et al.

    Canonical correlation analysis: an overview with application to learning methods

    Neural Comput

    (2005)
  • M. Heusel et al.

    GANs trained by a two time-scale update rule converge to a local Nash Equilibrium

    Advances in Neural Information Processing Systems (NIPS)

    (2017)
  • C. Houillier et al.

    IDH1 Or IDH2 mutations predict longer survival and response to temozolomide in low-grade gliomas

    Neurology

    (2010)
  • K.L.C. Hsieh et al.

    Radiomic model for predicting mutations in the isocitrate dehydrogenase gene in glioblastomas

    Oncotarget

    (2017)
  • K.L.C. Hsieh et al.

    Computer-aided grading of gliomas based on local and global MRI features

    Comput Methods Programs Biomed

    (2017)
  • N.L. Jansen et al.

    Prediction of oligodendroglial histology and LOH 1p/19q using dynamic [18F]FET-PET imaging in intracranial WHO grade II and III gliomas

    Neuro-oncology

    (2012)
  • D.R. Johnson et al.

    Genetically defined oligodendroglioma is characterized by indistinct tumor borders at MRI

    American Journal of Neuroradiology

    (2017)
  • A. Klami et al.

    Bayesian canonical correlation analysis

    Journal of Machine Learning Research

    (2013)
  • B. Kocak et al.

    Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status

    Eur Radiol

    (2020)
  • P. Korfiatis et al.

    Deep learning can see the unseeable: predicting molecular markers from MRI of brain gliomas

    Clin Radiol

    (2019)
  • P.L. Lai et al.

    Kernel and nonlinear canonical correlation analysis

    Int J Neural Syst

    (2000)
  • D.I. van Lent et al.

    Radiological differences between subtypes of WHO 2016 grade II-III gliomas: a systematic review and meta-analysis

    Neuro-Oncology Advances

    (2020)
  • Z. Li et al.

    Deep learning based radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma

    Sci Rep

    (2017)
  • S. Liang et al.

    Multimodal 3D DenseNet for IDH genotype prediction in gliomas

    Genes (Basel)

    (2018)

    This work was supported in part by the National Natural Science Foundation of China under Grant 61971213 and Grant 61671230, in part by the Basic and Applied Basic Research Foundation of Guangdong Province under Grant 2019A1515010417, and in part by the Guangdong Provincial Key Laboratory of Medical Image Processing under Grant No.2020B1212060039.
