
Neurocomputing

Volume 486, 14 May 2022, Pages 174-188

A survey on mutual information based medical image registration algorithms

https://doi.org/10.1016/j.neucom.2021.11.023

Abstract

The process of aligning one image in the coordinate system of another is called registration. Image registration is vital in the medical domain and is used for diagnosis, therapy planning, and treatment of diseases. Owing to its wide applicability and importance, medical image registration has emerged as a separate domain of research. Beginning with invasive landmark based registration, innumerable algorithms have been proposed to register two images of the human physiology. However, a breakthrough occurred in the literature when an information theory based measure, mutual information, was used to register two images. Since then, a large number of new algorithms have been developed which use mutual information for fully automatic registration of medical images. This paper is a survey of these algorithms. Beginning from its development, it discusses some of the major works on mutual information based image registration. Some comparative studies with other algorithms are also discussed. The paper ends with a discussion of the developments in medical image registration algorithms post mutual information, which primarily include deep neural network (DNN) based algorithms.

Introduction

The term image registration means finding a geometric relationship between two images that have a common object. In the medical field, the common object is invariably a human body organ or a model representing an organ. The task is to modify one image (the moving image) in such a way that the common object aligns with that of the other image (the reference image). With the existing imaging modalities, we can currently obtain 3D images of the internal organs of the body, their anatomy and functioning, with minimal or no invasion. This helps in the diagnosis and assessment of diseases and in the planning of therapy. However, the different modalities usually provide complementary information, and an important requirement in the treatment process is the integration of information obtained from different images of the same or different modalities. A simple example is the integration of CT and MR images of the skull; CT contributes the bony framework of the skull, whereas MR contributes the soft tissues. The combined image gives complete information about the skull, missing in either image alone. Further, images might be acquired during different stages of treatment, for example, to note the progression or regression of a tumor over a period of time, or to compare pre- and post-operative conditions. A point-to-point correspondence of the initial and final images is necessary in these cases. Accurate and automatic mapping of the features of a 3D image from the sequence of 2D slices obtained from an imaging modality is an essential activity in medical imaging. These are a few among the innumerable applications of image registration in the medical domain.

Image registration is not a new concept. An algorithm for minimizing the distance between two 3D point sets by finding the least squares solution of the rotation and translation matrices has been proposed in [1]. This concept forms the basis of landmark based image registration, where fiducial markers are used as the reference points to determine the transformation of one image with respect to the other. Later, a number of other algorithms were developed which efficiently find the transformation matrices that minimize the squared distance between corresponding fiducial marker points of two images. A surface matching algorithm for registering 3D brain images obtained from different modalities has been proposed in [68]. Further, [4] has proposed a method for registration of 3D shapes using the Iterative Closest Point (ICP) algorithm. However, in the same year, [88] aligned two PET images based on the idea that at the correctly aligned position, the voxel intensities of the two images are related by a constant multiplicative factor. For each voxel, the algorithm calculates the ratio of one image with respect to the other and then iteratively moves one of the images so as to minimize the variance of the ratio across voxels. With the introduction of this idea, a number of publications followed which use image intensity as the parameter to determine the transformation. Unlike landmark based registration algorithms, these do not require fiducial markers. They are even faster and more efficient than algorithms which use anatomic landmark information from the images to minimize distances between corresponding points of two images. Multi-modal (MR-PET) image registration using the same algorithm has been reported in [89]. Multi-modal image registration by constructing a feature space from the intensity values of the images of two modalities has been proposed in [42]. At the registered position, the feature space contains specific structures which disperse with mis-registration. The algorithm minimizes a measure of this dispersion. [43] reports further studies on the feature space, commonly known as the joint histogram. Among the several measures of dispersion that have been studied, the third order moment of the joint histogram has been successfully used for automatic registration of MR images. Two other works have been reported: [41] has proposed a modified surface fitting registration algorithm, which utilizes anatomical knowledge to enable non-equivalent structures to be fitted, and [24] has registered CT and MR images using grey value correlation, with suitable pre-processing of the CT image so that grey value correlation could be applied. However, image registration using features derived from the joint histogram gained the maximum success [43]. Entropy, a statistical measure derived from the joint histogram, has been used to register CT and MR images [12]. It has been shown that at the registered position, the joint entropy of the two images is minimum. Entropy has proven to be a more robust registration criterion than the earlier joint histogram dispersion measures. Finally, a breakthrough came when Collignon and his team published a paper [11] and, almost simultaneously, Viola's PhD dissertation appeared [82] (the contribution in the thesis was later published in the form of a paper [83]). Both these publications propose an information theoretic approach for automatic rigid body registration of images.
This approach, known as mutual information (MI), is based on entropy, but provides much better performance in terms of robustness and efficiency. It also allows for fully automatic registration with minimal or no pre-processing or segmentation. Mutual information became so popular that in the next few years it was applied in a huge number of applications with great success.
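For two images A and B, with marginal and joint intensity distributions p_A, p_B and p_{AB} estimated from the joint histogram of their overlapping region (notation introduced here only for convenience), the standard definition of this measure is

I(A;B) = H(A) + H(B) - H(A,B) = \sum_{a,b} p_{AB}(a,b) \, \log \frac{p_{AB}(a,b)}{p_A(a)\, p_B(b)},

where H(A) and H(B) are the marginal entropies and H(A,B) is the joint entropy; registration seeks the transformation of the moving image that maximizes I(A;B).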

This paper is a survey of the mutual information based medical image registration algorithms that have been developed since the measure was introduced in 1995.

Section snippets

Mutual Information

The first attempt to use image intensity to register medical images was made in [88], where it was noted that at the registered position, the intensities of the overlapping parts of the two images are related by a constant multiplicative factor. The next advancement happened a year later, with the introduction of the feature space, also known as the joint histogram, which shows specific structures at the registered position. These structures disperse as the images are moved away from the registered position. The joint histogram
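To make the quantities concrete, the following is a minimal sketch, assuming NumPy and using a hypothetical helper name mutual_information, of estimating mutual information from the joint histogram of two overlapping images; it illustrates the general idea rather than the implementation of any particular surveyed algorithm.

import numpy as np

def mutual_information(img_a, img_b, bins=64):
    # Joint histogram of corresponding voxel intensities (the "feature space").
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint_hist / joint_hist.sum()   # joint probability p(a, b)
    p_a = p_ab.sum(axis=1)                 # marginal p(a)
    p_b = p_ab.sum(axis=0)                 # marginal p(b)
    # Entropies, skipping empty histogram bins to avoid log(0).
    h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
    h_ab = -np.sum(p_ab[p_ab > 0] * np.log(p_ab[p_ab > 0]))
    return h_a + h_b - h_ab                # I(A;B) = H(A) + H(B) - H(A,B)

A rigid or affine registration routine would evaluate this measure for candidate transformations of the moving image and retain the transformation that maximizes it.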

Literature Survey

The papers on mutual information based registration can be broadly divided into two categories. The first comprises comparative studies that experimentally establish the superiority of the algorithms, together with experimental observations and analyses of the algorithms. The second comprises modifications of the baseline algorithm. Modifications can again be of two types, based on the kind of transformation considered. Rigid body transformation takes into account only translation and
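For reference, the two parametric transformation families referred to here can be written, in standard notation that is not specific to any of the surveyed papers, as

\mathbf{x}' = R\,\mathbf{x} + \mathbf{t} \quad (rigid body: R a 3 \times 3 rotation matrix, \mathbf{t} a translation vector; 6 degrees of freedom in 3D),
\mathbf{x}' = A\,\mathbf{x} + \mathbf{t} \quad (affine: A an arbitrary invertible 3 \times 3 matrix, additionally allowing scaling and shear; 12 degrees of freedom in 3D).

Elastic and deformable models go beyond these by allowing a spatially varying displacement field.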

Image Registration based on Deep Neural Network

The deep neural network based image registration algorithms can be broadly divided into two categories: the algorithms developed for mono-modal image registration and the ones developed for multi-modal image registration. We discuss a few mono-modal algorithms, followed by multi-modal ones.
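A common mono-modal, unsupervised formulation, sketched below under the assumption of PyTorch (1.10 or later) and with hypothetical helper names warp and registration_loss, has a network predict a dense displacement field, warps the moving image with it, and trains by minimizing an image dissimilarity term plus a smoothness penalty on the field; this is the general pattern rather than the specific design of any one surveyed method.

import torch
import torch.nn.functional as F

def warp(moving, flow):
    # moving: (N, 1, H, W) image; flow: (N, 2, H, W) displacement field in pixels, (x, y) order.
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).to(moving.dtype).unsqueeze(0)  # (1, 2, H, W) identity grid
    new = base + flow
    # Normalize coordinates to [-1, 1], the convention expected by grid_sample.
    gx = 2.0 * new[:, 0] / (w - 1) - 1.0
    gy = 2.0 * new[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                               # (N, H, W, 2)
    return F.grid_sample(moving, grid, mode="bilinear", align_corners=True)

def registration_loss(warped, fixed, flow, smooth_weight=0.01):
    similarity = F.mse_loss(warped, fixed)  # mono-modal dissimilarity term
    # Smoothness regularizer: finite differences of the displacement field.
    dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
    return similarity + smooth_weight * (dx + dy)

Multi-modal variants typically replace the mean-squared-error term with a learned or information-theoretic similarity measure.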

A number of deformable image registration algorithms are based on the selection of appropriate and discriminative features. Also, with the development of new imaging modalities, it has become essential that the

Conclusion

The introduction of mutual information in medical image registration has brought about a breakthrough in the speed, accuracy, and robustness of image registration algorithms. Fully automatic, multi-modal 3D image registration algorithms have come into existence. In most cases, no pre-processing or segmentation of the images is required prior to registration. Mutual information has been successfully applied to rigid body registration as well as to affine, elastic, and deformable registration algorithms.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.


References (99)

  • J.B.A. Maintz et al., Comparison of edge-based and ridge-based registration of CT and MR brain images, Med. Image Anal. (1996)
  • C.R. Meyer et al., Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations, Med. Image Anal. (1997)
  • C. Studholme et al., Automated 3D registration of MR and CT images of the head, Med. Image Anal. (1996)
  • Zhong-Qiu Zhao et al., A mended hybrid learning algorithm for radial basis function neural networks to improve generalization capability, Appl. Math. Model. (2007)
  • Zhong-Qiu Zhao et al., Palmprint recognition with 2D PCA+PCA based on modular neural networks, Neurocomputing (2007)
  • K.S. Arun et al., Least-squares fitting of two 3D point sets, IEEE Trans. Pattern Anal. Mach. Intell. (1987)
  • Isaac N. Bankman, S. Morcovescu (Eds.), Handbook of Medical Imaging Processing and Analysis, Academic Press,...
  • P.J. Besl et al., A method for registration of 3D shapes, IEEE Trans. Pattern Anal. Mach. Intell. (1992)
  • H.R. Boveiri et al., Medical image registration using deep neural networks: a comprehensive review, Comput. Electr. Eng. (2020)
  • M. Bro-Nielsen, Rigid registration of CT, MR and cryosection images using a GLCM framework, in: CVRMed-MRCAS, Springer,...
  • T. Butz et al., Affine registration with feature space mutual information
  • T.M. Buzug, J. Weese, C. Fassnacht, C. Lorenz, Image registration: convex weighting functions for histogram-based...
  • X. Cheng et al., Deep similarity learning for multimodal medical images, Comput. Methods Biomech. Biomed. Eng. Imag. Visualiz. (2018)
  • G.E. Christensen et al., 3D brain mapping using a deformable neuroanatomy, Phys. Med. Biol. (1994)
  • A. Collignon et al., Automated multi-modality image registration based on information theory, Inf. Process. Med. Imag. (1995)
  • A. Collignon et al., 3D multi-modality medical image registration using feature space clustering
  • A. Collignon et al., Automated multi-modality image registration based on information theory, Inf. Process. Med. Imag. (1995)
  • D.L. Collins, T.M. Peters, W. Dai, A.C. Evans, Model-based segmentation of individual brain structures from MRI data,...
  • A. Dalca et al., Unsupervised learning for fast probabilistic diffeomorphic registration
  • D.S. Huang, Radial basis probabilistic neural networks: model and application, Int. J. Pattern Recogn. Artif. Intell. (1999)
  • D.S. Huang et al., A constructive hybrid structure optimization methodology for radial basis probabilistic neural networks, IEEE Trans. Neural Networks (2008)
  • D.S. Huang et al., Zeroing polynomials using modified constrained neural network approach, IEEE Trans. Neural Networks (2005)
  • D.S. Huang et al., A new constrained independent component analysis method, IEEE Trans. Neural Networks (2007)
  • Ji-Xiang Du, D.S. Huang, Xiao-Feng Wang, Xiao Gu, Shape recognition based on neural networks trained by differential...
  • Ji-Xiang Du et al., A novel full structure optimization algorithm for radial basis probabilistic neural networks, Neurocomputing (2006)
  • P.A. Van Den Elsen, E.J.D. Pol, T.S. Sumanaweera, P.F. Hemler, S. Napel, J.R. Adler, Grey value correlation techniques...
  • K.A. Eppenhof et al., Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks, J. Med. Imag. (2018)
  • J. Fan et al., Adversarial similarity network for evaluating image alignment in deep learning based registration
  • E. Ferrante et al., On the adaptability of unsupervised CNN-based deformable image registration to unseen image domains
  • Y. Fu et al., Deep learning in medical image registration: a review, Phys. Med. Biol. (2020)
  • T. Gaens et al., Non-rigid multimodal image registration using mutual information
  • G. Wu et al., Unsupervised deep feature learning for deformable registration of MR brain images
  • J.V. Hajnal, D.L.G. Hill, D.J. Hawkes (Eds.), Medical Image Registration, The Biomedical Engineering Series,...
  • Fei Han et al., A new constrained learning algorithm for function approximation by encoding a priori information into feedforward neural networks, Neural Comput. Appl. (2008)
  • Fei Han et al., An improved approximation approach incorporating particle swarm optimization and a priori information into neural networks, Neural Comput. Appl. (2010)
  • G. Haskins et al., Learning deep similarity metric for 3D MR–TRUS image registration, Int. J. Comput. Assisted Radiol. Surg. (2019)
  • G. Haskins et al., Deep learning in medical image registration: a survey, Mach. Vis. Appl. (2020)
  • N. Hata et al., Multimodality deformable registration of pre and intraoperative images for MRI-guided brain surgery
  • P. Hellier et al., Multimodal non-rigid warping for correction of distortions in functional MRI


    Debapriya Sengupta received her B.Tech degree from WBUT. She worked in the industry for four years prior to joining IIT Kharagpur, where she received her MS degree. Currently she is pursuing her PhD from IIEST Shibpur. Her research interests include image processing, medical image analysis, speaker recognition and language recognition.

Dr. Phalguni Gupta did his Ph.D. from IIT Kharagpur and started his career in 1983 by joining the Space Applications Centre (ISRO), Ahmedabad, India, as a Scientist. In 1987, he joined the Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, India. Currently he is the Vice-Chancellor of GLA University, Mathura, India. He works in the fields of Data Structures, Sequential Algorithms, Parallel Algorithms, Online Algorithms, Image Analysis, and Biometrics. He has published more than 300 papers in international journals and conferences. He has dealt with several sponsored and consultancy projects funded by the Government of India. Some of these projects are in the areas of Biometrics, System Solver, Grid Computing, Image Processing, Mobile Computing, and Network Flow.

Dr. Arindam Biswas graduated from Jadavpur University, Kolkata, India, and received his master's and doctorate degrees, both from the Indian Statistical Institute, Kolkata, India. He is currently a Professor in the Department of Information Technology, Indian Institute of Engineering Science and Technology, Shibpur, India. His research interests include digital geometry, image processing, approximate shape matching and analysis, medical image analysis, biometrics, and geometric deep learning. He has published over 100 research papers in international journals, edited volumes, and refereed conference proceedings, and holds one US patent. He served as a Board Member of Technical Committee 18 (TC18) on Discrete Geometry and Mathematical Morphology of the International Association for Pattern Recognition (IAPR) from 2016 to 2021. Prior to joining IIEST, Shibpur, he served in the industry for about a decade. He currently holds the position of Dean, International Relations and Alumni Affairs, IIEST, Shibpur.
