Abstract
Purpose
Glioblastoma (GBM), the most common primary intracranial tumor, is a malignancy that originates from neuroepithelial tissue and accounts for 40–50% of brain tumors. Accurate survival prediction for patients with GBM not only helps patients and doctors formulate treatment plans, but also helps researchers understand the progression of the disease and advances medical research.
Methods
In view of the tedious manual feature extraction and selection required by traditional radiomics, we propose an end-to-end survival prediction model based on DenseNet that extracts features from magnetic resonance images, including T1-weighted post-contrast and T2-weighted images, through a two-branch network. After the region of interest is segmented, the original image, the tumor-region image, and the image with the tumor removed are combined into three-channel input samples. For patients who have only one of the T1- or T2-weighted images, One2One CycleGAN is used to generate the T1 image from the T2 image, or vice versa. Flipping and rotation are also applied for sample augmentation.
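As an illustration of the pipeline described above, the sketch below shows how the three-channel samples and the two-branch DenseNet could be assembled. It is a minimal sketch only: PyTorch and torchvision are assumptions (the paper does not name its framework), and the DenseNet-121 depth, pooling, and fusion head are placeholder choices rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121  # assumed depth; the paper's exact DenseNet config may differ


def make_three_channel(slice_2d: torch.Tensor, tumor_mask: torch.Tensor) -> torch.Tensor:
    """Pack one MRI slice into the described 3-channel sample:
    original slice, tumor-only slice, and slice with the tumor removed."""
    tumor_only = slice_2d * tumor_mask            # keep only the segmented tumor region
    tumor_removed = slice_2d * (1 - tumor_mask)   # zero out the tumor region
    return torch.stack([slice_2d, tumor_only, tumor_removed], dim=0)  # shape (3, H, W)


class TwoBranchDenseNet(nn.Module):
    """One DenseNet branch per modality (T1, T2), fused by a shared classifier head."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.t1_branch = densenet121(weights=None).features
        self.t2_branch = densenet121(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2 * 1024, num_classes)  # 1024 = DenseNet-121 feature channels

    def forward(self, t1: torch.Tensor, t2: torch.Tensor) -> torch.Tensor:
        f1 = self.pool(self.t1_branch(t1)).flatten(1)        # (B, 1024) features from the T1 branch
        f2 = self.pool(self.t2_branch(t2)).flatten(1)        # (B, 1024) features from the T2 branch
        return self.classifier(torch.cat([f1, f2], dim=1))   # fuse branches, predict high- vs. low-risk
```

Because each modality is packed into three channels, the standard three-channel DenseNet stem can be reused without modification.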
Results
Trained on the augmented sample set, the two-branch DenseNet survival prediction model reaches a classification accuracy of up to 94%, and the Kaplan–Meier survival curves indicate that the model can stratify patients into high-risk and low-risk groups according to whether they survive for more than three years.
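The Kaplan–Meier stratification mentioned above could be reproduced along the following lines once model predictions are available; the lifelines package and the variable names here are assumptions for illustration, not the tooling reported in the paper.

```python
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter


def plot_risk_groups(survival_days, event_observed, high_risk_flags):
    """Plot Kaplan-Meier curves for predicted high-risk vs. low-risk patients.
    All three arguments are equal-length sequences indexed by patient."""
    fig, ax = plt.subplots()
    kmf = KaplanMeierFitter()
    groups = [("high risk", [i for i, h in enumerate(high_risk_flags) if h]),
              ("low risk",  [i for i, h in enumerate(high_risk_flags) if not h])]
    for label, idx in groups:
        kmf.fit([survival_days[i] for i in idx],
                [event_observed[i] for i in idx],
                label=label)
        kmf.plot_survival_function(ax=ax)
    ax.axvline(3 * 365, linestyle="--", color="gray")  # 3-year threshold used for grouping
    ax.set_xlabel("Survival time (days)")
    ax.set_ylabel("Estimated survival probability")
    plt.show()
```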
Conclusion
The classification results and the survival analysis demonstrate that the proposed model achieves strong classification performance that doctors and patients' families can use as a reference when developing treatment plans. Improving the loss function and enlarging the sample size could further improve the predictions, and these are the targets of our subsequent research.
Funding
This study was supported by the National Natural Science Foundation of China (Grant No. 61773205).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants performed by any of the authors.
Informed consent
Informed consent was obtained from all individual participants included in the study.
Cite this article
Fu, X., Chen, C. & Li, D. Survival prediction of patients suffering from glioblastoma based on two-branch DenseNet using multi-channel features. Int J CARS 16, 207–217 (2021). https://doi.org/10.1007/s11548-021-02313-4