
Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12262)

Abstract

Speeding up Magnetic Resonance Imaging (MRI) is essential for capturing multi-contrast MR images in medical diagnosis. In MRI, some sequences, e.g., T2-weighted imaging, require long scanning times, while T1-weighted images are captured by short-time sequences. To accelerate MRI, in this paper we propose a model-driven deep attention network, dubbed MD-DAN, to reconstruct a highly under-sampled long-time-sequence MR image under the guidance of a short-time-sequence MR image. MD-DAN is a novel deep architecture inspired by the iterative algorithm for optimizing a novel MRI reconstruction model regularized by a cross-contrast prior derived from a guidance contrast image. The network automatically learns the cross-contrast prior by learning the corresponding proximal operator. The backbone network modeling the proximal operator is designed as a dual-path convolutional network with channel and spatial attention modules. Experimental results on a brain MRI dataset substantiate the superiority of our method, with significantly improved accuracy. For example, MD-DAN achieves a PSNR of up to 35.04 dB at the ultra-fast 1/32 sampling rate.
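The model-driven design described above unrolls an iterative reconstruction algorithm, with a learned network playing the role of the proximal operator. As a minimal sketch of that idea (not the paper's actual architecture), the following NumPy code runs proximal-gradient iterations for compressed-sensing MRI with masked Fourier sampling; plain soft-thresholding stands in for the learned dual-path attention CNN, and no cross-contrast guidance is modeled:

```python
import numpy as np

def masked_fft(x, mask):
    # Undersampled measurement: A x = M F x (mask in k-space)
    return mask * np.fft.fft2(x, norm="ortho")

def masked_ifft(k, mask):
    # Adjoint: A^T k = F^{-1} M k
    return np.fft.ifft2(mask * k, norm="ortho")

def soft_threshold(x, tau):
    # Stand-in for the learned proximal operator; MD-DAN instead
    # learns this mapping with a dual-path attention network.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_recon(y, mask, n_iters=10, step=1.0, tau=0.01):
    """Iterate x <- prox(x - step * A^T(A x - y)), starting from
    the zero-filled reconstruction."""
    x = masked_ifft(y, mask).real
    for _ in range(n_iters):
        grad = masked_ifft(masked_fft(x, mask) - y, mask).real
        x = soft_threshold(x - step * grad, tau)
    return x

# Toy example: random 32x32 image, ~1/4 random k-space sampling
rng = np.random.default_rng(0)
img = rng.random((32, 32))
mask = rng.random((32, 32)) < 0.25
y = masked_fft(img, mask)
recon = unrolled_recon(y, mask)
print(recon.shape)
```

In a model-driven network, each of these iterations becomes one network stage, with the step size, threshold, and proximal mapping learned from training data rather than fixed by hand.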

Y. Yang, N. Wang—Contributed equally to this work.


Notes

  1. https://ipp.cbica.upenn.edu/.


Acknowledgement

This work was supported in part by NSFC under Grants 11971373, 61976173, 11690011, 61721002, U1811461, and in part by the National Key Research and Development Program of China under Grant 2018AAA0102201.

Author information

Correspondence to Jian Sun.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 383 KB)


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Yang, Y., Wang, N., Yang, H., Sun, J., Xu, Z. (2020). Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol. 12262. Springer, Cham. https://doi.org/10.1007/978-3-030-59713-9_19


  • DOI: https://doi.org/10.1007/978-3-030-59713-9_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59712-2

  • Online ISBN: 978-3-030-59713-9

  • eBook Packages: Computer Science (R0)
