Pattern Recognition

Volume 41, Issue 1, January 2008, Pages 285-298
Multimodality image registration by maximization of quantitative–qualitative measure of mutual information

https://doi.org/10.1016/j.patcog.2007.04.002

Abstract

This paper presents a novel image similarity measure, referred to as quantitative–qualitative measure of mutual information (Q-MI), for multimodality image registration. Conventional information measures, e.g., Shannon's entropy and mutual information (MI), reflect quantitative aspects of information because they only consider probabilities of events. In fact, each event has its own utility to the fulfillment of the underlying goal, which can be independent of its probability of occurrence. Thus, it is important to consider both quantitative (i.e., probability) and qualitative (i.e., utility) measures of information in order to fully capture the characteristics of events. Accordingly, in multimodality image registration, Q-MI should be used to integrate the information obtained from both the image intensity distributions and the utilities of voxels in the images. Different voxels can have different utilities, for example, in brain images, two voxels can have the same intensity value, but their utilities can be different, e.g., a white matter (WM) voxel near the cortex can have higher utility than a WM voxel inside a large uniform WM region. In Q-MI, the utility of each voxel in an image can be determined according to the regional saliency value calculated from the scale-space map of this image. Since the voxels with higher utility values (or saliency values) contribute more in measuring Q-MI of the two images, the Q-MI-based registration method is much more robust, compared to conventional MI-based registration methods. Also, the Q-MI-based registration method can provide a smoother registration function with a relatively larger capture range. In this paper, the proposed Q-MI has been validated and applied to the rigid registrations of clinical brain images, such as MR, CT and PET images.

Introduction

Registration of multimodality images of the same subject provides a way of fusing different types of information, and it is very important for medical diagnosis and computer-aided surgery [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14]. For example, by registering MR T1, T2, PD, and FLAIR brain images of the same subject, the white matter lesions (WMLs) in the brain can be identified [1]; by registering pre- and intra-operative images, computer-aided surgery can be performed effectively [2]. Mutual information (MI), which measures the statistical dependency between two images, has been successfully applied to multimodality image registration [6], [7], [8], [9]. Although numerous promising results have been reported, it is worth noting that MI-based registration methods can have limited performance once the initial misalignment of the two images is large or, equivalently, the overlapping region of the two images is small [10].

Various methods have been proposed to improve the robustness of MI-based registration, and they can be classified into three categories [10]. In the first category, instead of Shannon's entropy, alternative entropies such as Jumarie entropy [12], [13] and Rényi entropy [14] have been used. In the second category, normalized MI measures [15], [16], which are less sensitive to changes in the overlap of the two images, have been proposed and applied successfully in many studies [17], [18], [19]. In the third category, spatial information has been integrated into the MI-based registration [20], [21], [22], [23], [24], [25]. For example, both MI and matching of gradient maps between two images have been used for image registration in Ref. [20]; also, the MI of image features, e.g., gradient, wavelet and other features, has been employed for registration [22], [23], [24], [25]. Generally, the use of spatial information increases the robustness of registration algorithms.

However, almost all MI-based registration methods treat the voxels of the images equally when calculating their MI. In fact, different voxels, even those having the same intensity [26], should be treated differently since they have different characteristics and utilities for image registration. Salient voxels should have higher utility values, and hence contribute more to determining the transformation between two images. For example, when measuring the MI of two brain images, the white matter (WM) voxels near the cortex should contribute more than the WM voxels inside large WM regions, since matching WM voxels near the cortex constrains the transformation more effectively than matching voxels in the interior of uniform regions.
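The saliency idea above, that a voxel in a uniform WM region is less informative than one near the cortical boundary, can be illustrated with a simple regional-saliency score. The sketch below rates each pixel by the Shannon entropy of the intensity histogram in its local window; this is only one common saliency measure in the spirit of the scale-space saliency cited as [27], [28], not necessarily the paper's exact definition, and the window radius and bin count are illustrative choices.

```python
import numpy as np

def local_entropy_saliency(img, radius=2, bins=8):
    """Score each pixel by the Shannon entropy of the intensities in its
    (2*radius+1)^2 neighborhood: uniform regions score 0, region
    boundaries score high."""
    img = np.asarray(img, dtype=float)
    span = img.max() - img.min()
    # Quantize intensities into bin indices 0..bins-1.
    q = np.minimum(((img - img.min()) / (span + 1e-12) * bins).astype(int),
                   bins - 1)
    h, w = q.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = q[max(0, i - radius):i + radius + 1,
                    max(0, j - radius):j + radius + 1]
            p = np.bincount(win.ravel(), minlength=bins) / win.size
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log(p))  # window entropy
    return out
```

On a toy image that is half dark and half bright, pixels near the boundary receive strictly higher scores than pixels deep inside either half, mirroring the WM-near-cortex example in the text.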

To incorporate utility information into image registration procedures, we propose a novel image similarity measure, referred to as quantitative–qualitative measure of MI (Q-MI). Q-MI considers not only the probability of image intensity, but also the utility of each voxel, when calculating MI of two images. This is significantly different from the conventional MI measure that only considers the quantitative aspect of information based on the image intensity distribution. It is worth noting that the utility of each voxel in an image can be determined according to the regional saliency value calculated from the scale-space map of this image [27], [28]. Therefore, the salient voxels will have higher utility values, and they will contribute more in measuring the MI of the two images under registration. That is, the voxels with high utilities play major roles in determining the transformation between the images.
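To make the contrast with conventional MI concrete, the sketch below computes a utility-weighted mutual information over a joint intensity histogram, weighting each joint bin's pointwise-MI term by the mean utility of the voxel pairs that fell into it. This follows the general Belis–Guiasu weighted-information form; the paper's exact joint-utility construction (Section 3.3) may differ, and the bin count is arbitrary.

```python
import numpy as np

def quantize(x, bins):
    # Map intensities linearly onto bin indices 0..bins-1.
    x = np.asarray(x, dtype=float).ravel()
    lo, hi = x.min(), x.max()
    return np.minimum(((x - lo) / (hi - lo + 1e-12) * bins).astype(int),
                      bins - 1)

def q_mi(img_a, img_b, utility, bins=32):
    """Utility-weighted ('quantitative-qualitative') mutual information.
    With all utilities equal to 1 this reduces to conventional MI."""
    a, b = quantize(img_a, bins), quantize(img_b, bins)
    u = np.asarray(utility, dtype=float).ravel()
    # Joint histogram, plus the mean utility of the pairs in each joint bin.
    counts = np.zeros((bins, bins))
    u_sum = np.zeros((bins, bins))
    np.add.at(counts, (a, b), 1.0)
    np.add.at(u_sum, (a, b), u)
    u_ab = np.divide(u_sum, counts, out=np.ones_like(u_sum),
                     where=counts > 0)
    p_ab = counts / counts.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of image B
    nz = p_ab > 0
    # Utility-weighted sum of pointwise MI terms.
    return float(np.sum(u_ab[nz] * p_ab[nz]
                        * np.log(p_ab[nz] / (p_a * p_b)[nz])))
```

Two properties worth noting: setting every utility to 1 recovers plain MI, which is exactly the behavior the hierarchical scheme exploits in its final stage, and scaling all utilities by a constant scales Q-MI by that constant, so only the relative utilities of voxels matter.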

Importantly, the utility values of voxels are not fixed, and they will be hierarchically updated during the registration procedure, with all voxels contributing equally in the final stage. In particular, the initial utility of each voxel will be assigned according to its saliency value [27], [28]; with the progress of image registration, this utility will gradually move towards one. Thus, by mainly focusing on the voxels (or the regions) with higher utilities in the initial registration procedure, the robustness of registration can be improved. Also, by changing each joint utility to one in the final stage, the sub-voxel accuracy of registration can be retained as that obtained by the conventional MI-based registration methods, because of using MI in the final registration procedure. This hierarchical framework makes the Q-MI-based image registration not only robust but also accurate as demonstrated in the experiments.
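The hierarchical update described above can be pictured as a schedule that blends the saliency-derived utilities toward one. The linear blend below is only an illustrative choice: the paper specifies the direction of the update (toward one, so the final stage reduces Q-MI to conventional MI), while the particular schedule here is an assumption.

```python
import numpy as np

def anneal_utility(u0, stage, n_stages):
    """Blend the initial (saliency-derived) utilities u0 toward 1 as the
    registration proceeds through its stages."""
    lam = stage / (n_stages - 1)   # 0 at the first stage, 1 at the last
    return (1.0 - lam) * np.asarray(u0, dtype=float) + lam
```

At stage 0 the voxels keep their saliency-based weights, so salient regions dominate the early, robustness-critical matching; at the last stage every utility equals 1, so the registration finishes with plain MI and retains its sub-voxel accuracy.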

The proposed Q-MI has been applied to the rigid registration of clinical brain images, such as MR, CT and PET images, obtained from the Retrospective Registration Evaluation Project (RREP) [29]. Experimental results show that, compared to conventional MI-based registration methods, the Q-MI-based registration method can provide a smoother registration function with a relatively larger capture range. It is also much more robust and achieves much higher success rates in image registration.

The remainder of this paper is organized as follows. The definition of Q-MI is first provided in Section 2, and then the Q-MI-based registration method is described in Section 3. The performance of this Q-MI-based registration method is demonstrated in Section 4. This paper concludes in Section 5.

Section snippets

Quantitative–qualitative measure of MI (Q-MI)

In this section, the basic concepts of information and MI are first briefly introduced (Section 2.1). Then, the quantitative–qualitative measures of information (Section 2.2) and of MI (Section 2.3) are presented.

Q-MI-based multimodality image registration

In this section, we will first describe the Q-MI-based registration algorithm by employing Q-MI as a similarity measure. Then, the method of computing the utility for each voxel in an image will be introduced (Section 3.2). Afterwards, the method of estimating the joint utility of two images will be presented, which will be used for the calculation of Q-MI (Section 3.3). Finally, the optimization method used in this registration algorithm will be briefly described (Section 3.4).

Experimental results

A number of experiments have been carried out to evaluate the performance of the Q-MI-based registration algorithm in aligning multimodal brain images, such as MR PD-weighted, MR T1-weighted, MR T2-weighted, CT, and PET images, obtained from the Vanderbilt Retrospective Registration Evaluation Project [29]. MR images have 256×256 voxels in plane, 16–20 slices, and a voxel size of 1.25 mm×1.25 mm×4.0 mm. CT images have 512×512 voxels in plane, 27–34 slices, and a voxel size of 0.65 mm×0.65 mm×4.0 mm.

Conclusion

A novel image similarity measure, called quantitative–qualitative measure of mutual information (Q-MI), has been presented for robust registration of multimodality brain images. By utilizing the concept of both quantitative and qualitative information measures of events, Q-MI incorporates utility information into the similarity measure of the two images, and hence it allows the registration procedure focusing more on matching the voxels with higher utility values, such as the regions of


References (43)

  • A. Bharatha et al., Evaluation of three-dimensional finite element-based deformable registration of pre- and intraoperative prostate imaging, Med. Phys. (2001)
  • C. Studholme, D.L.G. Hill, D.J. Hawkes, Multiresolution voxel similarity measures for MR-PET registration, in:...
  • F. Maes et al., Medical image registration using mutual information, Proc. IEEE (2003)
  • A. Collignon, F. Maes, D. Delaere, D. Vandermeulen, P. Suetens, G. Marchal, Automated multi-modality image registration...
  • J.P.W. Pluim et al., Mutual-information-based registration of medical images: a survey, IEEE Trans. Med. Imaging (2003)
  • D.J. Pettey, J.C. Gee, Using a linear diagnostic function and non-rigid registration to search for morphological...
  • C.E. Rodriguez-Carranza, M.H. Loew, A weighted and deterministic entropy measure for image registration using mutual...
  • C.E. Rodriguez-Carranza, M.H. Loew, Global optimization of weighted mutual information for multi-modality image...
  • B. Pompe et al., Using mutual information to measure coupling in the cardiorespiratory system, IEEE Eng. Med. Biol. Mag. (1998)
  • C. Studholme, D.L.G. Hill, D.J. Hawkes, A normalized entropy measure for multi-modality image alignment, in:...
  • D.L.G. Hill et al., Measurement of intraoperative brain surface deformation under a craniotomy, Neurosurgery (1998)

    About the AuthorHONGXIA LUAN received her Bachelor degree from Xidian University and Master degree from Hangdian University. Presently, she is a Ph.D. candidate at Shanghai Jiao Tong University. Her research interests include medical image analysis and pattern recognition.

About the Author—FEIHU QI was born in 1938. He is a Professor at Shanghai Jiao Tong University. He has published over 200 articles in various journals and proceedings of international conferences. His research interests include image processing, medical image analysis, pattern recognition, computer vision and artificial neural networks.

About the Author—ZHONG XUE received B.E. and M.E. degrees from Xidian University and a Ph.D. degree from Nanyang Technological University, Singapore. He is currently a Research Associate at the University of Pennsylvania. He has published 33 international journal and conference papers in computer vision, image processing, and medical image analysis.

    About the AuthorLIYA CHEN received all her degrees (B.S., M.S., Ph.D.) from Shanghai Jiao Tong University. She has been an Associate Professor at the Department of Electronic Engineering of Shanghai Jiao Tong University since 2003. She has published over 20 articles in journals and proceedings of international conferences. Her research interests include image processing, medical image analysis, computer vision and artificial neural networks.

About the Author—DINGGANG SHEN received all of his degrees from Shanghai Jiao Tong University. He has been an assistant professor (tenure-track) in the Department of Radiology at the University of Pennsylvania (UPenn) since July 2002. Before moving to UPenn, he was a faculty member at Johns Hopkins University. Dr. Shen is on the Editorial Board of Pattern Recognition, and has served as a reviewer for numerous international journals and conferences. He has published over 160 articles in journals and proceedings of international conferences. His research interests include medical image analysis, pattern recognition, and computer vision.
