Segmentation-based multi-scale attention model for KRAS mutation prediction in rectal cancer

  • Original Article
  • International Journal of Machine Learning and Cybernetics

Abstract

Identifying Kirsten RAS (KRAS) gene mutations is of great clinical significance for formulating rectal cancer treatment plans. Recent advances in deep learning have substantially improved computer-aided diagnosis, but deep learning models are usually designed for a single task, ignoring the potential benefits of performing segmentation and prediction jointly. In this paper, we propose a joint network, the segmentation-based multi-scale attention model (SMSAM), to predict the mutation status of the KRAS gene in rectal cancer. The network performs the segmentation and prediction tasks at the same time, and the two tasks transfer knowledge to each other through a shared encoder. Two universal multi-scale attention blocks are introduced so that the network focuses more on the region of interest, and an entropy branch is proposed to provide more discriminative features for the model. The method is evaluated on internal and external datasets, and the results show that the overall performance of SMSAM is better than that of existing methods. The code and model are publicly available.
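
To make the joint design concrete, here is a minimal sketch of a two-headed network with a shared encoder, a segmentation decoder, and a mutation-status classification head, written with tensorflow.keras. It is illustrative only, not the released SMSAM code: the input shape, layer widths, loss weights, and the simple spatial attention gate (a stand-in for the paper's multi-scale attention blocks) are assumptions, and the entropy branch is omitted.

```python
# Minimal sketch of a joint segmentation + classification network with a
# shared encoder (illustrative; not the authors' released SMSAM code).
from tensorflow.keras import layers, Model


def conv_block(x, filters):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x


def attention_gate(x):
    """Simple spatial attention: a 1x1 conv yields a sigmoid mask that
    re-weights the feature map (a stand-in for the multi-scale attention
    blocks described in the paper)."""
    mask = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return layers.Multiply()([x, mask])


def build_joint_model(input_shape=(256, 256, 1)):
    inputs = layers.Input(shape=input_shape)

    # Shared encoder: both tasks read from the same feature hierarchy,
    # which is how they exchange knowledge during training.
    e1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(e1)
    e2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(e2)
    bottleneck = attention_gate(conv_block(p2, 128))

    # Segmentation decoder: upsample back to input resolution and
    # predict a tumor mask.
    d2 = layers.UpSampling2D()(bottleneck)
    d2 = conv_block(layers.Concatenate()([d2, e2]), 64)
    d1 = layers.UpSampling2D()(d2)
    d1 = conv_block(layers.Concatenate()([d1, e1]), 32)
    seg_out = layers.Conv2D(1, 1, activation="sigmoid", name="seg")(d1)

    # Classification head: pool the attended shared features and
    # predict KRAS mutation status (mutant vs. wild type).
    g = layers.GlobalAveragePooling2D()(bottleneck)
    g = layers.Dense(64, activation="relu")(g)
    cls_out = layers.Dense(1, activation="sigmoid", name="kras")(g)

    model = Model(inputs, [seg_out, cls_out])
    # Joint training: a weighted sum of the two losses (weights are
    # illustrative) couples the tasks through the shared encoder.
    model.compile(
        optimizer="adam",
        loss={"seg": "binary_crossentropy", "kras": "binary_crossentropy"},
        loss_weights={"seg": 1.0, "kras": 0.5},
    )
    return model


model = build_joint_model()
model.summary()
```

Training against both losses at once is what lets the segmentation supervision shape the features that the mutation-prediction head relies on, which is the intuition behind the shared-encoder design described above.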


Data availability statement

The data that support the findings of this study are available from the Department of Radiology of Shanxi Province Cancer Hospital, but restrictions apply to their availability: the data were used under license for the current study and so are not publicly available. The data are, however, available from the authors upon reasonable request and with the permission of the Department of Radiology of Shanxi Province Cancer Hospital.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 61972274) and the Natural Science Foundation of Shanxi (Grant No. 201801D121139).

Author information

Corresponding author

Correspondence to Juanjuan Zhao.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Song, K., Zhao, Z., Wang, J. et al. Segmentation-based multi-scale attention model for KRAS mutation prediction in rectal cancer. Int. J. Mach. Learn. & Cyber. 13, 1283–1299 (2022). https://doi.org/10.1007/s13042-021-01447-w

