
Semi-supervised Attention-Guided VNet for Breast Cancer Detection via Multi-task Learning

  • Conference paper
Image and Graphics (ICIG 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12889)

Abstract

Due to the rapidly increasing incidence of breast cancer, the automated breast volume scanner (ABVS) has been developed to detect breast cancer quickly and accurately by scanning the whole breast automatically with little manual operation. However, it remains challenging for clinicians to segment the tumor region and further distinguish benign from malignant tumors in ABVS images, since these images are large in size and low in quality. For this reason, we propose an effective 3D deep convolutional neural network for multi-task learning from ABVS data. Specifically, a new VNet structure is designed with a deep attentive module to boost performance. In addition, a semi-supervised mechanism is introduced to address the shortage of labeled training data. Because tumor sizes vary widely, we adopt a two-stage process and handle small tumors via a volume refinement block for further performance improvement. Experimental results on our self-collected data show that our model achieves a Dice coefficient of 0.764 for 3D segmentation and an F1-score of 81.0% for classification, outperforming related algorithms.
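The abstract reports two evaluation metrics: a Dice coefficient of 0.764 for 3D segmentation and an F1-score of 81.0% for classification. As a minimal sketch of how these standard metrics are computed (this is illustrative, not the authors' evaluation code, and the function names are hypothetical):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) over binary segmentation volumes."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from classification counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)
```

A perfect segmentation yields a Dice score of 1.0, while non-overlapping volumes score near 0; the `eps` term guards against division by zero on empty masks.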



Acknowledgements

This work was supported partly by National Natural Science Foundation of China (Nos. 62001302 and 61871274), Key Laboratory of Medical Image Processing of Guangdong Province (No. 2017B030314133), Guangdong Basic and Applied Basic Research Foundation (Nos. 2021A1515011348, 2019A1515111205), and Shenzhen Key Basic Research Project (Nos. JCYJ20170818 094109846, JCYJ20190808145011259, RCBS20200714114920379).

Author information


Correspondence to Baiying Lei.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, Y., Yang, Y., Jiang, W., Wang, T., Lei, B. (2021). Semi-supervised Attention-Guided VNet for Breast Cancer Detection via Multi-task Learning. In: Peng, Y., Hu, S.M., Gabbouj, M., Zhou, K., Elad, M., Xu, K. (eds) Image and Graphics. ICIG 2021. Lecture Notes in Computer Science, vol. 12889. Springer, Cham. https://doi.org/10.1007/978-3-030-87358-5_45


  • DOI: https://doi.org/10.1007/978-3-030-87358-5_45

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87357-8

  • Online ISBN: 978-3-030-87358-5

  • eBook Packages: Computer Science, Computer Science (R0)
