DuDoCAF: Dual-Domain Cross-Attention Fusion with Recurrent Transformer for Fast Multi-contrast MR Imaging

Conference paper

In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (MICCAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13436)

Abstract

Multi-contrast magnetic resonance imaging (MC-MRI) has been widely used for the diagnosis and characterization of tumors and lesions, as multi-contrast MR images provide complementary information for a more comprehensive diagnosis and evaluation. However, acquiring multi-contrast MR images requires long scanning times; moreover, long scans may introduce motion artifacts that degrade image quality. Recently, many studies have proposed to employ the fully-sampled image of one contrast, which has a short acquisition time, to guide the reconstruction of another contrast with a long acquisition time, thereby speeding up the scan. However, these studies still have two shortcomings. First, they simply concatenate the features of the two contrast images without mining and leveraging the inherent, deep correlations between them. Second, as aliasing artifacts are complicated and non-local, image-domain reconstruction alone, with its local dependencies, is insufficient to eliminate these artifacts and achieve faithful reconstruction. We present a novel Dual-Domain Cross-Attention Fusion (DuDoCAF) scheme with a recurrent transformer to comprehensively address these shortcomings. Specifically, the proposed CAF scheme enables deep and effective fusion of features extracted from the two modalities. Dual-domain recurrent learning allows our model to restore signals in both the k-space and image domains, thereby removing artifacts more comprehensively. In addition, we tame recurrent transformers to capture long-range dependencies in the fused feature maps, further enhancing reconstruction performance. Extensive experiments on the public fastMRI dataset and a clinical brain dataset demonstrate that the proposed DuDoCAF outperforms state-of-the-art methods under different under-sampling patterns and acceleration rates.
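Two of the mechanisms the abstract describes lend themselves to a brief illustration: cross-attention fusion, in which the under-sampled target contrast queries features of the fully-sampled reference contrast rather than being concatenated with them, and dual-domain learning, in which each image-domain refinement is followed by a k-space step that re-enforces the acquired samples. The following is a minimal PyTorch sketch of these two ideas; every module name, shape, and hyperparameter here is an illustrative assumption, not the authors' implementation.

```python
# Illustrative sketch only: names, shapes, and hyperparameters are assumptions,
# not the DuDoCAF implementation described in the paper.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse features of two contrasts via cross-attention.

    Queries come from the under-sampled target contrast (e.g. T2), while
    keys/values come from the fully-sampled reference contrast (e.g. T1),
    so the target branch can attend to complementary anatomical detail
    instead of merely concatenating the two feature maps.
    """

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tgt_feat: torch.Tensor, ref_feat: torch.Tensor) -> torch.Tensor:
        # tgt_feat, ref_feat: (B, C, H, W) feature maps from two encoders.
        b, c, h, w = tgt_feat.shape
        q = tgt_feat.flatten(2).transpose(1, 2)   # (B, H*W, C) queries
        kv = ref_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values
        fused, _ = self.attn(q, kv, kv)           # cross-attention over all positions
        fused = self.norm(fused + q)              # residual connection + norm
        return fused.transpose(1, 2).reshape(b, c, h, w)


def kspace_data_consistency(img: torch.Tensor,
                            k_measured: torch.Tensor,
                            mask: torch.Tensor) -> torch.Tensor:
    """One k-space step of a dual-domain loop: replace the network's k-space
    estimate with the actually acquired samples, then return to image space."""
    k_est = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    k_dc = torch.where(mask.bool(), k_measured, k_est)  # keep measured lines
    return torch.fft.ifft2(torch.fft.ifftshift(k_dc, dim=(-2, -1)))


if __name__ == "__main__":
    # Single illustrative pass with arbitrary sizes.
    fuse = CrossAttentionFusion(dim=64, heads=4)
    fused = fuse(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
    img = torch.randn(1, 32, 32, dtype=torch.complex64)
    k = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    mask = torch.zeros(1, 32, 32)
    mask[..., ::4] = 1  # toy 4x Cartesian under-sampling mask
    print(fused.shape, kspace_data_consistency(img, k, mask).shape)
```

In the paper these steps are unrolled recurrently across both domains, and the fused features feed a recurrent transformer that captures long-range dependencies; the sketch above shows only a single pass.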

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants 61902338 and 62001120, the Shanghai Sailing Program (No. 20YF1402400), and The Hong Kong Polytechnic University under a Project of Strategic Importance (No. 1-ZE2Q).

Author information

Corresponding author

Correspondence to Chengyan Wang.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1516 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lyu, J., Sui, B., Wang, C., Tian, Y., Dou, Q., Qin, J. (2022). DuDoCAF: Dual-Domain Cross-Attention Fusion with Recurrent Transformer for Fast Multi-contrast MR Imaging. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13436. Springer, Cham. https://doi.org/10.1007/978-3-031-16446-0_45

  • DOI: https://doi.org/10.1007/978-3-031-16446-0_45

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16445-3

  • Online ISBN: 978-3-031-16446-0

  • eBook Packages: Computer Science, Computer Science (R0)
