Abstract
To leverage the correlated information between modalities for cross-modal segmentation, we propose a novel cross-modal attention-guided convolutional network for multi-modal cardiac segmentation. Specifically, we first employ cycle-consistent generative adversarial networks for bidirectional image generation (i.e., MR to CT and CT to MR), which helps reduce modality-level inconsistency. Then, taking the generated and original MR and CT images as input, we propose a novel convolutional network in which (1) two encoders learn modality-specific features separately and (2) a common decoder learns shareable features across modalities for a final consistent segmentation. In addition, we place a cross-modal attention module between the encoders and the decoder to exploit the correlated information between modalities. The model can be trained in an end-to-end manner. In extensive evaluation on unpaired CT and MR cardiac images, our method outperforms the baselines in segmentation performance.
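The cross-modal attention idea described above can be illustrated with a minimal sketch: an attention map is computed from the concatenated features of both modality encoders and used to gate one modality's features before they enter the shared decoder. This is an assumption-laden toy version in NumPy, not the authors' exact module; the function name `cross_modal_attention` and the single-channel 1x1-convolution weights `w` are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_modal_attention(feat_a, feat_b, w, b=0.0):
    """Gate feat_b with an attention map derived from both modalities.

    feat_a, feat_b: (C, H, W) encoder feature maps from the two modalities
                    (e.g., MR and CT branches).
    w: (2C,) weights of a 1x1 convolution that collapses the concatenated
       features to a single-channel attention map (hypothetical parameters).
    """
    stacked = np.concatenate([feat_a, feat_b], axis=0)              # (2C, H, W)
    # 1x1 conv over channels == weighted sum across the channel axis.
    attn = sigmoid(np.tensordot(w, stacked, axes=([0], [0])) + b)   # (H, W) in (0, 1)
    return feat_b * attn[None, :, :]                                # gated features

# Toy usage with random "encoder" outputs.
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
f_mr = rng.standard_normal((C, H, W))
f_ct = rng.standard_normal((C, H, W))
w = rng.standard_normal(2 * C)
gated = cross_modal_attention(f_mr, f_ct, w)
assert gated.shape == (C, H, W)
```

Because the sigmoid keeps the attention map in (0, 1), the gate can only attenuate features, letting the shared decoder emphasize regions where the two modalities agree.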
This work is supported by the National Natural Science Foundation of China (Nos. 61603193, 61673203, 61876087), the Natural Science Foundation of Jiangsu Province (No. BK20171479), the Jiangsu Postdoctoral Science Foundation (No. 1701157B), and the CCF-Tencent Open Research Fund (RAGR20180114). Wanqi Yang and Yinghuan Shi are co-corresponding authors. Ziqi Zhou and Xinna Guo are co-first authors.
© 2019 Springer Nature Switzerland AG
Cite this paper
Zhou, Z. et al. (2019). Cross-Modal Attention-Guided Convolutional Network for Multi-modal Cardiac Segmentation. In: Suk, HI., Liu, M., Yan, P., Lian, C. (eds) Machine Learning in Medical Imaging. MLMI 2019. Lecture Notes in Computer Science(), vol 11861. Springer, Cham. https://doi.org/10.1007/978-3-030-32692-0_69
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-32691-3
Online ISBN: 978-3-030-32692-0
eBook Packages: Computer Science (R0)