Cross-Modal Attention-Guided Convolutional Network for Multi-modal Cardiac Segmentation

  • Conference paper
  • Conference: Machine Learning in Medical Imaging (MLMI 2019)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11861)

Abstract

To leverage the correlated information between modalities and thereby benefit cross-modal segmentation, we propose a novel cross-modal attention-guided convolutional network for multi-modal cardiac segmentation. In particular, we first employ cycle-consistent generative adversarial networks to perform bidirectional image generation (i.e., MR to CT and CT to MR), which helps reduce modal-level inconsistency. Then, with the generated and original MR and CT images, a novel convolutional network is proposed in which (1) two encoders learn individual features separately and (2) a common decoder learns shareable features between modalities for a final consistent segmentation. We also propose a cross-modal attention module between the encoders and the decoder to leverage the correlated information between modalities. Our model can be trained in an end-to-end manner. In extensive evaluation on unpaired CT and MR cardiac images, our method outperforms the baselines in terms of segmentation performance.
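To make the described architecture concrete, below is a minimal PyTorch sketch of the dual-encoder / shared-decoder design with a cross-modal attention module between the encoders and the decoder. The layer sizes, the gating form of the attention, and all names (DualEncoderSegNet, CrossModalAttention, conv_block) are illustrative assumptions, not the authors' implementation; in the paper, each modality's input would be an original slice paired with its CycleGAN-translated counterpart.

# Minimal sketch (assumed design): two modality-specific encoders, a cross-modal
# attention gate, and one shared decoder. Hypothetical names and sizes throughout.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class CrossModalAttention(nn.Module):
    """Re-weights one modality's features with a gate computed from both modalities."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, feat_main, feat_other):
        attn = self.gate(torch.cat([feat_main, feat_other], dim=1))
        return feat_main * attn  # attended features for the main modality


class DualEncoderSegNet(nn.Module):
    """Two modality-specific encoders feeding one shared segmentation decoder."""

    def __init__(self, n_classes=4, base=32):
        super().__init__()
        self.enc_mr = conv_block(1, base)   # encoder for MR slices
        self.enc_ct = conv_block(1, base)   # encoder for CT slices
        self.attn_mr = CrossModalAttention(base)
        self.attn_ct = CrossModalAttention(base)
        # The decoder is shared: it consumes attended features from either modality.
        self.decoder = nn.Sequential(conv_block(base, base), nn.Conv2d(base, n_classes, 1))

    def forward(self, x_mr, x_ct):
        f_mr, f_ct = self.enc_mr(x_mr), self.enc_ct(x_ct)
        seg_mr = self.decoder(self.attn_mr(f_mr, f_ct))
        seg_ct = self.decoder(self.attn_ct(f_ct, f_mr))
        return seg_mr, seg_ct


if __name__ == "__main__":
    # Example usage with random tensors standing in for co-registered MR/CT slices.
    model = DualEncoderSegNet(n_classes=4)
    mr = torch.randn(2, 1, 128, 128)
    ct = torch.randn(2, 1, 128, 128)
    seg_mr, seg_ct = model(mr, ct)
    print(seg_mr.shape, seg_ct.shape)  # torch.Size([2, 4, 128, 128]) each

The sketch omits the CycleGAN generators, the encoder/decoder down- and up-sampling stages, and the segmentation losses; it only illustrates how two modality-specific encoders can share a single decoder through a cross-modal attention gate.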

This work is supported by the National Natural Science Foundation of China (Nos. 61603193, 61673203, 61876087), the Natural Science Foundation of Jiangsu Province (No. BK20171479), the Jiangsu Postdoctoral Science Foundation (No. 1701157B), and the CCF-Tencent Open Research Fund (RAGR20180114). Wanqi Yang and Yinghuan Shi are co-corresponding authors. Ziqi Zhou and Xinna Guo are co-first authors.

Author information

Correspondence to Wanqi Yang or Yinghuan Shi.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhou, Z., et al. (2019). Cross-Modal Attention-Guided Convolutional Network for Multi-modal Cardiac Segmentation. In: Suk, HI., Liu, M., Yan, P., Lian, C. (eds) Machine Learning in Medical Imaging. MLMI 2019. Lecture Notes in Computer Science, vol 11861. Springer, Cham. https://doi.org/10.1007/978-3-030-32692-0_69


  • DOI: https://doi.org/10.1007/978-3-030-32692-0_69

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32691-3

  • Online ISBN: 978-3-030-32692-0

  • eBook Packages: Computer Science, Computer Science (R0)
