Abstract
Long COVID is characterised by persistent symptoms, particularly pulmonary impairment, which necessitates advanced imaging for accurate diagnosis. Hyperpolarised Xenon-129 MRI (XeMRI) offers a promising avenue by visualising lung ventilation, perfusion, and gas transfer. Integrating functional data from XeMRI with structural data from Computed Tomography (CT) is crucial for comprehensive analysis and effective treatment strategies in long COVID, and requires precise alignment of these complementary imaging modalities. Because direct alignment of CT and XeMRI poses significant challenges, CT-MRI registration is an essential intermediate step. We therefore propose an end-to-end multimodal deformable image registration method that achieves superior performance in aligning long-COVID lung CT and proton density MRI (pMRI) data. Moreover, our method incorporates a novel Multi-perspective Loss (MPL) function, which makes state-of-the-art deep learning methods for monomodal registration adaptable to multimodal tasks. The registration achieves a Dice coefficient of 0.913, a substantial improvement over state-of-the-art multimodal image registration techniques. Since the XeMRI and pMRI images are acquired in the same session and can be roughly aligned, our results facilitate subsequent registration between XeMRI and CT, thereby potentially enhancing clinical decision-making for long COVID management.
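For reference, the Dice coefficient reported above can be computed from binary lung segmentations of the warped moving image and the fixed image. The following is a minimal sketch, assuming NumPy arrays as masks; the variable names (e.g. warped_ct_lung_mask, pmri_lung_mask) are illustrative assumptions and not taken from the paper's implementation.

import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    # Dice overlap between two binary masks, e.g. lung segmentations of
    # the warped CT and the fixed pMRI. Returns a value in [0, 1].
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Illustrative usage (hypothetical segmentation masks):
# dsc = dice_coefficient(warped_ct_lung_mask, pmri_lung_mask)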
Acknowledgements
This study is funded by the National Institute for Health and Care Research (NIHR) (Long Covid grant, Ref: COV-LT2-0049). The views expressed in this publication are those of the authors and not necessarily those of NIHR or The Department of Health and Social Care.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Li, J., Grist, J.T., Gleeson, F.V., Papież, B.W. (2024). Multimodal Deformable Image Registration for Long-COVID Analysis Based on Progressive Alignment and Multi-perspective Loss. In: Yap, M.H., Kendrick, C., Behera, A., Cootes, T., Zwiggelaar, R. (eds) Medical Image Understanding and Analysis. MIUA 2024. Lecture Notes in Computer Science, vol 14860. Springer, Cham. https://doi.org/10.1007/978-3-031-66958-3_16
DOI: https://doi.org/10.1007/978-3-031-66958-3_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-66957-6
Online ISBN: 978-3-031-66958-3
eBook Packages: Computer Science, Computer Science (R0)