Multi-scale boundary neural network for gastric tumor segmentation

  • Original Article
  • Published in The Visual Computer

Abstract

Gastric cancer patients currently account for a large proportion of all tumor patients. Gastric tumor image segmentation can provide a reliable additional basis for the clinical analysis and diagnosis of gastric cancer. However, existing gastric cancer image datasets suffer from small size and labeling difficulty. Moreover, most existing CNN-based methods cannot generate satisfactory segmentation masks without accurate labels, owing to the limited context information and insufficiently discriminative feature maps produced by consecutive pooling and convolution operations. This paper presents a gastric cancer lesion dataset for gastric tumor image segmentation research. A multi-scale boundary neural network (MBNet) is proposed to automatically segment the real tumor area in gastric cancer images. MBNet adopts an encoder–decoder architecture. In each encoder stage, a boundary extraction refinement module first obtains multi-granularity edge information and refines the stage features. A selective fusion module then selectively fuses features from different stages. By cascading the two modules, richer context and fine-grained features are encoded at each stage. Finally, atrous spatial pyramid pooling is improved to capture long-range dependencies across the overall context as well as fine spatial structure information. Experimental results show that the model reaches an accuracy of 92.3% and a Dice similarity coefficient (DICE) of 86.9%, and the proposed method also outperforms existing approaches on the CVC-ClinicDB and Kvasir-SEG datasets.
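
To make the described pipeline concrete, the following is a minimal PyTorch sketch of how the three components outlined above could be cascaded. The module names (BoundaryRefine, SelectiveFusion, ASPP), their internals, and all tensor shapes are illustrative assumptions inferred from the abstract, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the encoder-stage pipeline described above.
# Module internals are placeholders inferred from the abstract, not the
# authors' code: an edge-guided refinement, a gated cross-stage fusion,
# and a plain (unimproved) atrous spatial pyramid pooling head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BoundaryRefine(nn.Module):
    """Boundary extraction refinement (placeholder): predict a coarse edge
    map from stage features and use it to re-weight and refine them."""
    def __init__(self, ch):
        super().__init__()
        self.edge = nn.Conv2d(ch, 1, kernel_size=3, padding=1)
        self.refine = nn.Conv2d(ch, ch, kernel_size=3, padding=1)

    def forward(self, x):
        e = torch.sigmoid(self.edge(x))   # boundary attention in [0, 1]
        return self.refine(x + x * e)     # edge-guided feature refinement


class SelectiveFusion(nn.Module):
    """Selective fusion (placeholder): gate a deeper-stage feature map
    before merging it with the current stage."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, cur, deep):
        deep = F.interpolate(deep, size=cur.shape[-2:],
                             mode="bilinear", align_corners=False)
        g = torch.sigmoid(self.gate(torch.cat([cur, deep], dim=1)))
        return g * cur + (1 - g) * deep   # learn what to keep from each stage


class ASPP(nn.Module):
    """Standard atrous spatial pyramid pooling; the paper's improvements for
    long-range context are not reproduced here."""
    def __init__(self, ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, kernel_size=3, padding=r, dilation=r)
             for r in rates])
        self.project = nn.Conv2d(len(rates) * ch, ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    cur = torch.randn(1, 64, 64, 64)    # current encoder stage features
    deep = torch.randn(1, 64, 32, 32)   # deeper (coarser) stage features
    fused = SelectiveFusion(64)(BoundaryRefine(64)(cur), deep)
    print(ASPP(64)(fused).shape)        # -> torch.Size([1, 64, 64, 64])
```

In the full network the fused stage features would feed the decoder; only the per-stage cascade and the pooling head are sketched here.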



Acknowledgements

The data were provided by the Digestive Endoscopy Center of the General Hospital of the People’s Liberation Army.

Funding

This work was supported by the National Key R&D Program of China (2017YFB0403801) and the National Natural Science Foundation of China (NSFC) (61835015).

Author information


Corresponding authors

Correspondence to Dongzhi He or Zhiqiang Wang.

Ethics declarations

Conflicts of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, P., Li, Y., Sun, Y. et al. Multi-scale boundary neural network for gastric tumor segmentation. Vis Comput 39, 915–926 (2023). https://doi.org/10.1007/s00371-021-02374-1

