Research paper

Deep cross-view co-regularized representation learning for glioma subtype identification☆
Introduction
Gliomas are the most frequent infiltrative brain neoplasms, accounting for 80% of malignant brain tumors, and originate from glial cells in the central nervous system (Ostrom et al., 2014). In addition to histologic phenotype, the new subtypes of diffuse gliomas recognized by the World Health Organization (WHO) are defined by genotype, in which isocitrate dehydrogenase (IDH) mutation and chromosome arms 1p/19q codeletion are considered crucial genetic parameters (Eckel-Passow, Lachance, Molinaro, et al., 2015, Bieńkowski, Wöhrer, Moser, et al., 2018). Specifically, gliomas are classified into five subtypes: 1) lower-grade gliomas (LGG: grades II and III) with mutant IDH and codeleted 1p/19q (LGG-mut-codel), 2) LGG with mutant IDH and intact 1p/19q (LGG-mut-intac), 3) LGG with wild-type IDH (LGG-wt), 4) glioblastoma (GBM: grade IV) with mutant IDH (GBM-mut), and 5) GBM with wild-type IDH (GBM-wt) (Louis, Perry, Reifenberger, et al., 2016, Thurnher, 2009). Most previous work has performed subtype identification by distinguishing LGG from GBM, mutant IDH from wild-type IDH, and codeleted 1p/19q from intact 1p/19q (van Lent, van Baarsen, Snijders, et al., 2020, Eckel-Passow, Lachance, Molinaro, et al., 2015, Fellah, Caudal, De Paula, et al., 2013), and the different glioma subtypes call for different treatment decisions and show different survival rates (Houillier, Wang, Kaloshi, et al., 2010, Beiko, Suki, Hess, et al., 2014). For example, gliomas with mutant IDH are driven by specific epigenetic alterations, which makes them sensitive to therapeutic interventions that are less effective for gliomas with wild-type IDH (Songtao et al., 2012). Current studies have suggested that patients with 1p/19q-codeleted gliomas benefit significantly more from gross total tumor resection than from partial resection, and the survival benefit of gross total resection may be even greater for gliomas with intact 1p/19q (Jansen et al., 2012).
Therefore, identification of glioma subtypes can provide valuable guidance for both risk-benefit assessment and clinical decision-making (van der Voort, Incekara, Wijnenga, et al., 2019, Riemenschneider, Jeuken, Wesseling, et al., 2010).
Previous studies have revealed the feasibility of using feature representations derived from magnetic resonance imaging (MRI) to probe the underlying histologic phenotype and genotype of gliomas (van Lent et al., 2020). These feature representations can be roughly categorized into two classes: 1) qualitative feature representations and 2) quantitative feature representations. Qualitative feature representations of gliomas are obtained from neuroradiologists' evaluations and usually involve tumor location, margin, and calcification (Patel, Poisson, Brat, et al., 2017, Qi, Yu, Li, et al., 2014). However, these qualitative imaging characteristics depend on the knowledge and experience of neuroradiologists, which results in limited identifiability and poor repeatability (Foltyn et al., 2020). As an alternative, quantitative feature representations have been used to represent gliomas, based on which many learning-based methods (e.g., traditional learning-based and deep learning-based approaches) have been developed for subtyping gliomas (Sollini, Antunovic, Chiti, et al., 2019, Korfiatis, Erickson, 2019). Specifically, traditional learning-based methods first extract hand-crafted feature representations, such as textural (e.g., histogram, gray-level co-occurrence matrix [GLCM], and neighborhood gray-tone difference matrix [NGTDM]) and non-textural (e.g., tumor size, solidity, and volume) features, to quantify the entire tumor (Lotan et al., 2019). Feature analysis approaches (i.e., feature selection/reduction algorithms and classification methods) are then applied to these hand-crafted feature representations to perform prediction.
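To make the textural features concrete, the following is a minimal NumPy sketch of a GLCM computation and two derived statistics (contrast and homogeneity) for a single 2D slice and a single horizontal offset. It is illustrative only; radiomics toolkits typically aggregate over multiple offsets, angles, and normalization schemes.

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Toy gray-level co-occurrence matrix (GLCM) sketch.

    `img` is a 2D array of intensities in [0, 1); it is quantized to
    `levels` gray levels, and co-occurrences are counted for pixel
    pairs separated by `offset` (here: horizontal neighbors).
    """
    q = np.minimum((img * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    glcm /= glcm.sum()  # normalize to a joint probability table

    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)             # local intensity variation
    homogeneity = np.sum(glcm / (1.0 + (i - j) ** 2))  # closeness to the diagonal
    return contrast, homogeneity

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for one MRI slice
contrast, homogeneity = glcm_features(img)
print(contrast, homogeneity)
```

High contrast indicates strong local intensity differences; homogeneity approaches 1 when neighboring pixels share the same quantized gray level.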
Recently, deep feature representations learned by deep learning-based models, especially convolutional neural networks (CNNs), have achieved state-of-the-art performance for gliomas compared with other advanced approaches (Lotan, Jain, Razavian, et al., 2019, Li, Wang, Yu, et al., 2017, Akkus, Ali, Sedlář, et al., 2017). Several studies derived deep feature representations from 2D-level axial slices of MRI, in which all 2D slices were assigned the same class labels as the corresponding patients (Matsui, Maruyama, Nitta, et al., 2020, Chang, Grinband, Weinberg, et al., 2018, Li, Wang, Yu, et al., 2017). Although using 2D slices as the input of CNNs provides a natural data augmentation strategy that can alleviate the big data requirement of network training, it might not represent gliomas well due to the absence of the other planes (i.e., coronal and sagittal planes). Therefore, some works used the whole volume of gliomas as input to learn 3D-level deep feature representations, resampling (e.g., downsampling or upsampling) the glioma volumes into 3D patches of a specified size (Liang, Zhang, Liang, et al., 2018, Khened, Anand, Acharya, Shah, Krishnamurthi, 2019). More recently, several 2.5D-level multi-view deep feature representations (an intermediate level between 2D and 3D) have been proposed to represent gliomas (Chang, Bai, Zhou, et al., 2018, Banerjee, Mitra, Masulli, et al., 2020), in which the axial, coronal and sagittal planes of gliomas in MRI are extracted as multi-channel inputs of CNNs. Although these 2.5D-level representations are more informative than 2D-level ones and more efficient than 3D-level ones, they may yield suboptimal performance because 1) a single set of axial, coronal and sagittal planes may be insufficient to represent the 3D information, and 2) the complementary information among different planes is not appropriately considered.
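The 2.5D idea above can be sketched in a few lines: extract the three central orthogonal slices of a 3D patch and stack them as channels. This is an illustrative simplification (the cited works may select multiple slices per plane and use different orderings), not the exact pipeline of any one paper.

```python
import numpy as np

def center_planes(volume):
    """Stack the central axial, coronal and sagittal slices of a 3D
    patch as a 3-channel 2.5D representation.

    Assumes a cubic patch indexed as (axial, coronal, sagittal); the
    channel ordering here is an illustrative choice.
    """
    d = np.array(volume.shape) // 2
    axial = volume[d[0], :, :]
    coronal = volume[:, d[1], :]
    sagittal = volume[:, :, d[2]]
    return np.stack([axial, coronal, sagittal], axis=0)  # shape (3, H, W)

patch = np.random.rand(32, 32, 32)  # stand-in for a resampled glioma patch
views = center_planes(patch)
print(views.shape)  # (3, 32, 32)
```

A (3, H, W) stack plugs directly into a standard 2D CNN as a three-channel input, which is what makes 2.5D representations cheaper than full 3D convolutions.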
In this paper, we propose a deep cross-view co-regularized representation learning framework for glioma subtype identification. Here, cross-view learning is considered a variant of multi-view learning (Tang et al., 2019) that is devoted to learning the relations (e.g., correlations) and complementary information among different views. In the framework, we first generate a number of cross-view images by extracting a series of axial, coronal and sagittal planes from 3D patches of MRI, and then jointly learn discriminative view-specific and view-sharable feature representations together with the subsequent classifier in an end-to-end manner, through which both view representation learning and multiple constraints are integrated into a unified co-regularized paradigm. Specifically, after cross-view image generation, we first learn latent view-specific representations for each view via a bi-directional mapping, in which view-correlated and output-consistent regularizers are developed for exploring view correlation and deriving view consistency, respectively. Then, we project the view-specific representations into a holistically shared space to learn informative view-sharable representations, which are simultaneously enhanced by an adversarial learning strategy. Finally, the view-specific and view-sharable representations are incorporated together for glioma subtype identification. The proposed method is evaluated on three binary classification tasks for glioma subtype identification, namely, the LGG vs. GBM task, the IDHmut vs. IDHwt task, and the 1p/19q codel vs. 1p/19q intac task. We have evaluated the effectiveness of our proposed method using MRI on multi-site datasets, and the experimental results demonstrate that the proposed method outperforms several state-of-the-art methods in detecting glioma subtype status.
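To give a feel for the co-regularized paradigm, the following is a minimal NumPy sketch of two generic co-regularization terms: one that couples view-specific latent representations across views, and one that penalizes disagreement between per-view prediction outputs. These are simple squared-difference penalties chosen for illustration; the paper's actual view-correlated and output-consistent regularizers, bi-directional mapping, and adversarial term are not reproduced here.

```python
import numpy as np

def co_regularizers(view_feats, view_probs):
    """Illustrative co-regularization terms (not the paper's exact losses):
      * a view-correlated-style term pulling the view-specific latent
        representations of the same samples toward each other;
      * an output-consistent-style term penalizing disagreement among
        the per-view class-probability outputs.
    Both are averaged over all pairs of views.
    """
    n_views = len(view_feats)
    corr, cons, pairs = 0.0, 0.0, 0
    for i in range(n_views):
        for j in range(i + 1, n_views):
            corr += np.mean((view_feats[i] - view_feats[j]) ** 2)
            cons += np.mean((view_probs[i] - view_probs[j]) ** 2)
            pairs += 1
    return corr / pairs, cons / pairs

rng = np.random.default_rng(0)
feats = [rng.normal(size=(4, 16)) for _ in range(3)]       # 3 views, 4 samples, 16-d latents
probs = [rng.dirichlet(np.ones(2), size=4) for _ in range(3)]  # per-view binary predictions
corr, cons = co_regularizers(feats, probs)
print(corr, cons)
```

In a full training objective, terms like these would be weighted and added to the classification loss, so that gradient descent jointly shapes each view's representation and keeps the views' outputs consistent.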
The rest of this paper is organized as follows. In Section 2, we briefly review related studies. We introduce the datasets used in this study and give a detailed description of our method in Section 3. In Section 4, we present the experimental settings and results. The discussion and conclusion are provided in Sections 5 and 6, respectively.
Related work
In this section, we first present the overview of related studies on MRI-based glioma subtype identification. Then, we review multi-view learning approaches and their applications in the medical image analysis field.
Material and methods
In this part, we first introduce the datasets used in our work (Section 3.1). Then, we present the proposed deep cross-view co-regularized representation learning method (Section 3.2).
Experiments
In this section, we first introduce the experimental settings, including competing methods, implementation tools, and evaluation strategy. We then show the experimental results of glioma subtype identification, validate the effectiveness of each component in our framework via ablation experiments, and analyze the influence of parameters.
Discussion
In this section, we discuss the performance of our proposed method on mono-modality and multi-modality data, and we further investigate the performance of the model on hierarchical binary classification and multi-class classification tasks. In addition, we clarify the advantages, limitations, and future work.
Conclusion
In this paper, we presented a deep cross-view co-regularized representation learning framework for glioma subtype identification using MRI. The main advantage of the framework is its capability of jointly learning discriminative view-specific and view-sharable feature representations and subsequent classifier in an end-to-end manner, through which both view representation learning and multiple constraints are integrated into a unified co-regularized paradigm. Experimental results on a large
CRediT authorship contribution statement
Zhenyuan Ning: Conceptualization, Methodology, Software, Formal analysis, Writing – review & editing. Chao Tu: Data curation, Software, Visualization, Validation. Xiaohui Di: Data curation, Writing – review & editing. Qianjin Feng: Supervision. Yu Zhang: Conceptualization, Supervision, Writing – review & editing.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
This manuscript was finished while I (Dr. Ning) was nursing my grandmother (Ruzhen Zhu), and I deeply appreciate the crucial part she has played in my life.
References (68)
- et al. (2019). 3D convolution neural networks for molecular subtype prediction in glioblastoma multiforme. Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications.
- et al. (2020). Hierarchical fully convolutional network for joint atrophy localization and Alzheimer's disease diagnosis using structural MRI. IEEE Trans. Pattern Anal. Mach. Intell.
- et al. (2020). Prediction of lower-grade glioma molecular subtypes using deep learning. J. Neurooncol.
- (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences.
- et al. (2017). Predicting deletion of chromosomal arms 1p/19q in low-grade gliomas from MR images using machine intelligence. J. Digit. Imaging.
- et al. (2020). Glioma classification using deep radiomics. SN Computer Science.
- et al. (2019). Neuroimaging-based classification algorithm for predicting 1p/19q-codeletion status in IDH-mutant lower grade gliomas. American Journal of Neuroradiology.
- et al. (2014). IDH1 mutant malignant astrocytomas are more amenable to surgical resection and have a survival benefit associated with maximal surgical resection. Neuro-Oncology.
- et al. (2018). Molecular diagnostic testing of diffuse gliomas in the real-life setting: a practical approach. Clin. Neuropathol.
- (2010). Large-scale machine learning with stochastic gradient descent. Proceedings of COMPSTAT'2010.
- Automated analysis of unregistered multi-view mammograms with deep learning. IEEE Trans. Med. Imaging.
- Residual convolutional neural network for the determination of IDH status in low- and high-grade gliomas from MR imaging. Clinical Cancer Research.
- Deep-learning convolutional neural networks accurately classify genetic mutations in gliomas. American Journal of Neuroradiology.
- Mixed high-order attention network for person re-identification. 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
- Inferring group-wise consistent multimodal brain networks via multi-view spectral clustering. IEEE Trans. Med. Imaging.
- Glioma groups based on 1p/19q, IDH, and TERT promoter mutations in tumors. N. Engl. J. Med.
- Two view learning: SVM-2K, theory and practice. Advances in Neural Information Processing Systems (NIPS).
- Multimodal MR imaging (diffusion, perfusion, and spectroscopy): is it possible to distinguish oligodendroglial tumor grade and 1p/19q codeletion in the pretherapeutic diagnosis? American Journal of Neuroradiology.
- T2/FLAIR-mismatch sign for noninvasive detection of IDH-mutant 1p/19q non-codeleted gliomas: validity and pathophysiology. Neuro-Oncology Advances.
- Generative adversarial nets. Advances in Neural Information Processing Systems (NIPS).
- Canonical correlation analysis: an overview with application to learning methods. Neural Comput.
- GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems (NIPS).
- IDH1 or IDH2 mutations predict longer survival and response to temozolomide in low-grade gliomas. Neurology.
- Radiomic model for predicting mutations in the isocitrate dehydrogenase gene in glioblastomas. Oncotarget.
- Computer-aided grading of gliomas based on local and global MRI features. Comput. Methods Programs Biomed.
- Prediction of oligodendroglial histology and LOH 1p/19q using dynamic [18F]FET-PET imaging in intracranial WHO grade II and III gliomas. Neuro-Oncology.
- Genetically defined oligodendroglioma is characterized by indistinct tumor borders at MRI. American Journal of Neuroradiology.
- Bayesian canonical correlation analysis. Journal of Machine Learning Research.
- Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status. Eur. Radiol.
- Deep learning can see the unseeable: predicting molecular markers from MRI of brain gliomas. Clin. Radiol.
- Kernel and nonlinear canonical correlation analysis. Int. J. Neural Syst.
- Radiological differences between subtypes of WHO 2016 grade II-III gliomas: a systematic review and meta-analysis. Neuro-Oncology Advances.
- Deep learning based radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma. Sci. Rep.
- Multimodal 3D DenseNet for IDH genotype prediction in gliomas. Genes (Basel).
☆ This work was supported in part by the National Natural Science Foundation of China under Grants 61971213 and 61671230, in part by the Basic and Applied Basic Research Foundation of Guangdong Province under Grant 2019A1515010417, and in part by the Guangdong Provincial Key Laboratory of Medical Image Processing under Grant 2020B1212060039.