Abstract
Multi-contrast magnetic resonance imaging (MC-MRI) is widely used for the diagnosis and characterization of tumors and lesions, as multi-contrast MR images provide complementary information for more comprehensive diagnosis and evaluation. However, acquiring multi-contrast MR images usually requires long scanning times, which may in turn introduce motion artifacts that degrade image quality. Recently, many studies have proposed to use the fully-sampled image of one contrast, acquired with a short scan, to guide the reconstruction of another contrast with a longer acquisition time, thereby accelerating the overall scan. These studies, however, still have two shortcomings. First, they simply concatenate the features of the two contrast images without mining and leveraging the inherent, deep correlation between them. Second, because aliasing artifacts are complicated and non-local, image-domain reconstruction alone, which models only local dependencies, is insufficient to eliminate these artifacts and achieve faithful reconstruction results. We present a novel Dual-Domain Cross-Attention Fusion (DuDoCAF) scheme with a recurrent transformer to comprehensively address these shortcomings. Specifically, the proposed CAF scheme enables deep and effective fusion of features extracted from the two modalities. Dual-domain recurrent learning allows our model to restore signals in both the k-space and image domains, and hence remove artifacts more comprehensively. In addition, we employ recurrent transformers to capture long-range dependencies in the fused feature maps, further enhancing reconstruction performance. Extensive experiments on the public fastMRI and clinical brain datasets demonstrate that DuDoCAF outperforms state-of-the-art methods under different under-sampling patterns and acceleration rates.
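The cross-attention fusion idea described above can be illustrated with a minimal sketch: queries are drawn from the target-contrast features while keys and values come from the reference contrast, so the reference modality guides the target rather than being naively concatenated. This is not the paper's implementation; the function and weight names below are hypothetical, and the real DuDoCAF module operates on learned deep feature maps with multiple heads.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(feat_target, feat_ref, Wq, Wk, Wv):
    """Fuse reference-contrast features into the target contrast via
    scaled dot-product cross-attention: queries from the target,
    keys/values from the reference, with a residual connection."""
    q = feat_target @ Wq                     # (n, d) queries
    k = feat_ref @ Wk                        # (n, d) keys
    v = feat_ref @ Wv                        # (n, d) values
    d = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d))     # (n, n) attention map
    return feat_target + attn @ v            # residual fusion

# Toy example on random "feature tokens" from the two contrasts.
rng = np.random.default_rng(0)
n, d = 16, 8
feat_t1 = rng.standard_normal((n, d))        # fast-acquisition contrast
feat_t2 = rng.standard_normal((n, d))        # slow-acquisition contrast
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
fused = cross_attention_fuse(feat_t2, feat_t1, Wq, Wk, Wv)
print(fused.shape)  # (16, 8)
```

Because each attention row is a convex combination over all reference positions, the fusion is non-local: every target location can draw on every reference location, unlike channel-wise concatenation followed by local convolutions.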
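The dual-domain idea — restoring signals in both k-space and the image domain — typically hinges on a data-consistency step between the two domains: after any image-domain update, frequencies that were actually measured are re-imposed in k-space. The sketch below shows that step under simplifying assumptions (single-coil, Cartesian sampling); the function name and toy data are illustrative, not the paper's code.

```python
import numpy as np

def data_consistency(img, kspace_sampled, mask):
    """After an image-domain update, replace reconstructed k-space
    values with the acquired ones wherever the sampling mask is True,
    then return to the image domain."""
    k = np.fft.fft2(img)                      # image -> k-space
    k = np.where(mask, kspace_sampled, k)     # keep measured frequencies
    return np.fft.ifft2(k)                    # k-space -> image

# Sanity check: with a fully sampled mask, data consistency alone
# recovers the ground-truth image regardless of the network output.
gt = np.arange(16.0).reshape(4, 4)
full_kspace = np.fft.fft2(gt)
mask = np.ones((4, 4), dtype=bool)
recon = data_consistency(np.zeros((4, 4)), full_kspace, mask)
print(np.allclose(recon.real, gt))  # True
```

In a recurrent dual-domain network, a step like this is interleaved with learned restoration blocks in each domain, so the measured data anchors the reconstruction while the network fills in the missing frequencies.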
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Grants 61902338 and 62001120, the Shanghai Sailing Program (No. 20YF1402400), and The Hong Kong Polytechnic University Project of Strategic Importance (No. 1-ZE2Q).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Lyu, J., Sui, B., Wang, C., Tian, Y., Dou, Q., Qin, J. (2022). DuDoCAF: Dual-Domain Cross-Attention Fusion with Recurrent Transformer for Fast Multi-contrast MR Imaging. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. Lecture Notes in Computer Science, vol. 13436. Springer, Cham. https://doi.org/10.1007/978-3-031-16446-0_45
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16445-3
Online ISBN: 978-3-031-16446-0