Abstract
Speeding up Magnetic Resonance Imaging (MRI) is essential for capturing multi-contrast MR images in medical diagnosis. In MRI, some sequences, e.g., T2-weighted imaging, require long scanning times, while T1-weighted images are captured by short-time sequences. To accelerate MRI, we propose in this paper a model-driven deep attention network, dubbed MD-DAN, that reconstructs a highly under-sampled long-time-sequence MR image under the guidance of a short-time-sequence MR image. MD-DAN is a novel deep architecture inspired by the iterative algorithm that optimizes a novel MRI reconstruction model regularized by a cross-contrast prior derived from a guidance contrast image. The network automatically learns this cross-contrast prior by learning the corresponding proximal operator, whose backbone is designed as a dual-path convolutional network with channel and spatial attention modules. Experimental results on a brain MRI dataset substantiate the superiority of our method, with significantly improved accuracy: for example, MD-DAN achieves a PSNR of up to 35.04 dB at the ultra-fast 1/32 sampling rate.
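The unrolled scheme the abstract describes, alternating a data-consistency step on the under-sampled k-space measurements with a learned proximal step encoding the prior, can be sketched as follows. This is a minimal single-coil Cartesian sketch under illustrative assumptions: the paper's learned dual-path attention network is replaced by a simple soft-thresholding stand-in for the proximal operator, and the mask, step size, and threshold are hypothetical choices, not the paper's settings.

```python
import numpy as np

def fft2c(x):
    # Centered 2D FFT (orthonormal), mapping image space to k-space.
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))

def ifft2c(k):
    # Centered inverse 2D FFT, mapping k-space back to image space.
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k), norm="ortho"))

def soft_threshold(x, tau):
    # Complex-safe soft-thresholding: a stand-in for the learned proximal
    # network that MD-DAN would apply at this point in the iteration.
    mag = np.abs(x)
    scale = np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)
    return x * scale

def unrolled_recon(y, mask, n_iters=10, eta=1.0, tau=0.01):
    """Unrolled proximal-gradient reconstruction from under-sampled k-space y.

    Each iteration is one 'stage' of the unrolled network:
      1) gradient step on the data-fidelity term ||M F x - y||^2,
      2) proximal step realizing the (here: sparsity) prior.
    """
    x = ifft2c(y)  # zero-filled initialization
    for _ in range(n_iters):
        grad = ifft2c(mask * (fft2c(x) - y))  # data-consistency gradient
        x = soft_threshold(x - eta * grad, tau)
    return x
```

In the actual method, `soft_threshold` would be the trained dual-path convolutional network with channel and spatial attention, and the guidance-contrast image would enter as an additional input to that network.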
Y. Yang and N. Wang contributed equally to this work.
Acknowledgement
This work was supported in part by NSFC under Grants 11971373, 61976173, 11690011, 61721002, U1811461, and in part by the National Key Research and Development Program of China under Grant 2018AAA0102201.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Yang, Y., Wang, N., Yang, H., Sun, J., Xu, Z. (2020). Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image. In: Martel, A.L., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. MICCAI 2020. Lecture Notes in Computer Science(), vol 12262. Springer, Cham. https://doi.org/10.1007/978-3-030-59713-9_19
Print ISBN: 978-3-030-59712-2
Online ISBN: 978-3-030-59713-9