Abstract
Deep learning shows high potential for many medical image analysis tasks. Neural networks can work with full-size data without extensive preprocessing and hand-crafted feature generation and, thus, without information loss. Recent work has shown that morphological differences in specific brain regions can be detected on MRI by means of Convolutional Neural Networks (CNNs). However, interpretation of the existing models is based on a region of interest and cannot be extended to voxel-wise interpretation of the whole image. In the current work, we consider a classification task on a large-scale open-source dataset of young healthy subjects: an exploration of brain differences between men and women. We extend previous findings on gender differences from diffusion tensor imaging to T1 brain MRI scans. We provide a voxel-wise 3D CNN interpretation, comparing the results of three interpretation methods: Meaningful Perturbations, Grad-CAM and Guided Backpropagation, and contribute an open-source library.
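To make the gradient-based attribution concrete, below is a minimal sketch of Grad-CAM generalized to volumetric input in plain PyTorch. The model and layer names (`model`, `conv_layer`) are hypothetical placeholders, not the API of the paper's library.

```python
# Minimal Grad-CAM sketch for a 3D CNN classifier (assumed generic PyTorch model).
import torch
import torch.nn.functional as F

def grad_cam_3d(model, volume, target_class, conv_layer):
    """volume: (1, 1, D, H, W) tensor; returns a (D, H, W) attention map."""
    activations, gradients = [], []

    # Capture the chosen conv layer's activations and their gradients.
    fwd = conv_layer.register_forward_hook(
        lambda m, i, o: activations.append(o))
    bwd = conv_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.append(go[0]))

    model.zero_grad()
    logits = model(volume)
    logits[0, target_class].backward()
    fwd.remove(); bwd.remove()

    acts, grads = activations[0], gradients[0]           # (1, C, d, h, w)
    weights = grads.mean(dim=(2, 3, 4), keepdim=True)    # global-avg-pooled grads
    cam = F.relu((weights * acts).sum(dim=1))            # weighted sum over channels
    cam = F.interpolate(cam.unsqueeze(1), size=volume.shape[2:],
                        mode='trilinear', align_corners=False)
    return cam[0, 0].detach()
```

For comparison, Guided Backpropagation differs only in how gradients pass through ReLUs (negative gradients are zeroed), while Meaningful Perturbations instead optimize a mask over the input that maximally drops the classifier score.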
References
Cahill, L.: Why sex matters for neuroscience. Nature Rev. Neurosci. 7(6), 477–484 (2006)
Chen, X., et al.: Microsoft COCO captions: data collection and evaluation server. arXiv preprint arXiv:1504.00325 (2015)
Cosgrove, K.P., Mazure, C.M., Staley, J.K.: Evolving knowledge of sex differences in brain structure, function, and chemistry. Biol. Psychiatr. 62(8), 847–855 (2007)
Dou, Q., et al.: Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Trans. Med. Imaging 35(5), 1182–1195 (2016)
Fan, L., et al.: The human Brainnetome Atlas: a new brain atlas based on connectional architecture. Cereb. Cortex 26(8), 3508–3526 (2016)
Fischl, B.: Freesurfer. Neuroimage 62(2), 774–781 (2012)
Fong, R.C., Vedaldi, A.: Interpretable explanations of black boxes by meaningful perturbation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3429–3437 (2017)
Gong, E., Pauly, J.M., Wintermark, M., Zaharchuk, G.: Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J. Magn. Reson. Imaging 48(2), 330–340 (2018)
Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016)
Lasič, S., Szczepankiewicz, F., Eriksson, S., Nilsson, M., Topgaard, D.: Microanisotropy imaging: quantification of microscopic diffusion anisotropy and orientational order parameter by diffusion MRI with magic-angle spinning of the q-vector. Front. Phys. 2, 11 (2014)
Liu, Y., et al.: Gender differences in language and motor-related fibers in a population of healthy preterm neonates at term-equivalent age: a diffusion tensor and probabilistic tractography study. Am. J. Neuroradiol. 32(11) (2011)
Milletari, F., Navab, N., Ahmadi, S.A.: V-Net: fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571. IEEE (2016)
Mori, S., Wakana, S., Nagae-Poetscher, L., Van Zijl, P.: MRI atlas of human white matter. Am. J. Neuroradiol. 27(6), 1384 (2006)
Pawlowski, N., Glocker, B.: Is texture predictive for age and sex in brain MRI? arXiv preprint arXiv:1907.10961 (2019)
Pominova, M., Artemov, A., Sharaev, M., Kondrateva, E., Bernstein, A., Burnaev, E.: Voxelwise 3D convolutional and recurrent neural networks for epilepsy and depression diagnostics from structural and functional MRI data. In: 2018 IEEE International Conference on Data Mining Workshops (ICDMW), pp. 299–307. IEEE (2018)
Pominova, M., et al.: Ensemble of 3D CNN regressors with data fusion for fluid intelligence prediction. In: Pohl, K.M., Thompson, W.K., Adeli, E., Linguraru, M.G. (eds.) ABCD-NP 2019. LNCS, vol. 11791, pp. 158–166. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31901-4_19
Rescher, B., Rappelsberger, P.: Gender dependent EEG-changes during a mental rotation task. Int. J. Psychophysiol. 33(3), 209–222 (1999)
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. Int. J. Comput. Vis. 128(2), 336–359 (2019). https://doi.org/10.1007/s11263-019-01228-7
Sharaev, M., et al.: Pattern recognition pipeline for neuroimaging data. In: 8th IAPR TC3 Workshop on Artificial Neural Networks in Pattern Recognition, pp. 306–319 (2018)
Sharaev, M., et al.: MRI-based diagnostics of depression concomitant with epilepsy: in search of the potential biomarkers. In: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pp. 555–564. IEEE (2018)
Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806 (2014)
Suk, H.I., Lee, S.W., Shen, D., Alzheimer's Disease Neuroimaging Initiative: Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. NeuroImage 101, 569–582 (2014)
Xin, J., Zhang, X.Y., Tang, Y., Yang, Y.: Brain differences between men and women: evidence from deep learning. Front. Neurosci. 13, 185 (2019)
Yuan, L., Kong, F., Luo, Y., Zeng, S., Lan, J., You, X.: Gender differences in large-scale and small-scale spatial ability: a systematic review based on behavioral and neuroimaging research. Front. Behav. Neurosci. 13, 128 (2019)
Zanto, T.P., Gazzaley, A.: Fronto-parietal network: flexible hub of cognitive control. Trends Cogn. Sci. 17, 602–603 (2013)
Zhang, W., et al.: Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. NeuroImage 108, 214–224 (2015)
Acknowledgements
The reported study was funded by RFBR according to the research project № 20-37-90149. We also acknowledge the participation of Ruslan Rakhimov in the development of the meaningful perturbation method on MRI data.
A The First Hidden Layer of 3D CNN Attention Analysis
We analyzed the features obtained in the first hidden layer of the 3D CNN, as in [23], even though we used T1 MRI images, in contrast to the DWI modality and fractional anisotropy (FA) images used in previous studies.
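A minimal sketch of how such first-layer feature maps can be collected with a forward hook is shown below; `model.conv1` and `loader` are hypothetical names, not taken from the paper's code.

```python
# Collect first-hidden-layer feature maps for all subjects via a forward hook.
import torch

feature_maps = []

def save_features(module, inputs, output):
    # output: (batch, n_features, D, H, W) activations of the first layer
    feature_maps.append(output.detach().cpu())

hook = model.conv1.register_forward_hook(save_features)
with torch.no_grad():
    for batch in loader:      # batches of T1 volumes, shape (B, 1, D, H, W)
        model(batch)
hook.remove()

features = torch.cat(feature_maps)               # (N_subjects, n_features, D, H, W)
mean_per_feature = features.mean(dim=(2, 3, 4))  # mean voxel value per feature map
```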
Similar to the results shown on FA images, we found that the mean voxel values of 31 features differ significantly between the men and women groups after multiple-comparison correction, with 10 features larger for women and 21 larger for men (see Fig. 3). This reproduces the previously reported result, supporting the claim that “men’s brains likely have more complex features as reflected by significantly higher entropy” and that important gender-related patterns are likely spread across the whole-brain grey and white matter. It also highlights the importance of the results discussed in the main paper, as the attention maps compared across the different approaches are extracted from whole-brain imagery, without any region-of-interest removal, as in [23].
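The group comparison described above can be sketched as follows. The exact test and correction procedure used in the paper are not specified here, so this assumes per-feature two-sample t-tests with Bonferroni correction, applied to the `mean_per_feature` matrix from the previous sketch and a hypothetical boolean `is_male` indicator.

```python
# Per-feature group comparison of mean activations with Bonferroni correction.
import numpy as np
from scipy import stats

x = mean_per_feature.numpy()          # (N_subjects, n_features)
men, women = x[is_male], x[~is_male]

t, p = stats.ttest_ind(men, women, axis=0)   # one t-test per feature
alpha = 0.05 / x.shape[1]                     # Bonferroni-corrected threshold
significant = p < alpha

larger_for_men = significant & (men.mean(axis=0) > women.mean(axis=0))
larger_for_women = significant & (men.mean(axis=0) < women.mean(axis=0))
print(f"{significant.sum()} significant features: "
      f"{larger_for_men.sum()} larger for men, {larger_for_women.sum()} for women")
```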
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Kan, M. et al. (2021). Interpretation of 3D CNNs for Brain MRI Data Classification. In: van der Aalst, W.M.P., et al. Recent Trends in Analysis of Images, Social Networks and Texts. AIST 2020. Communications in Computer and Information Science, vol 1357. Springer, Cham. https://doi.org/10.1007/978-3-030-71214-3_19
DOI: https://doi.org/10.1007/978-3-030-71214-3_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-71213-6
Online ISBN: 978-3-030-71214-3
eBook Packages: Computer Science (R0)