Multimodality image registration by maximization of quantitative–qualitative measure of mutual information
Introduction
Registration of multimodality images of the same subject provides a way of fusing different types of information, and it is very important for medical diagnosis and computer-aided surgery [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14]. For example, by registering MR T1, T2, PD, and FLAIR brain images of the same subject, the white matter lesions (WMLs) in the brain can be identified [1]; by registering pre- and intra-operative images, computer-aided surgery can be performed effectively [2]. Mutual information (MI), which measures the statistical dependency between two images, has been successfully applied to multimodality image registration [6], [7], [8], [9]. Although numerous promising results have been reported, it is worth noting that MI-based registration methods may perform poorly when the initial misalignment between the two images is large or, equivalently, when the overlap region of the two images is small [10].
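As background, MI between two images is typically estimated from their joint intensity histogram. The following minimal sketch illustrates that computation; the bin count and the use of the natural logarithm are assumptions for illustration, not the specific choices of any cited method:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI of two equally sized images, estimated from their joint histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()               # joint intensity distribution
    p_a = p_ab.sum(axis=1, keepdims=True)  # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)  # marginal of image B
    nz = p_ab > 0                          # skip empty bins to avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```

An image registered to itself yields the maximal MI (the image's own entropy), while two statistically independent images yield MI near zero, which is why MI serves as a similarity measure to be maximized over candidate transformations.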
Various methods have been proposed to improve the robustness of MI-based registration, and they can be classified into three categories [10]. In the first category, alternative entropies, such as Jumarie entropy [12], [13] and Rényi entropy [14], are used instead of Shannon's entropy. In the second category, normalized MI measures [15], [16], which are less sensitive to changes in the overlap of the two images, have been proposed and applied successfully in many studies [17], [18], [19]. In the third category, spatial information is integrated into MI-based registration [20], [21], [22], [23], [24], [25]. For example, both MI and the matching of gradient maps between two images have been used for image registration in Ref. [20]; also, the MI of image features, e.g., gradient, wavelet, and other features, has been employed for registration [22], [23], [24], [25]. Generally, the use of spatial information increases the robustness of registration algorithms.
However, almost all MI-based registration methods treat the voxels of the images equally when calculating their MI. In fact, different voxels, even those with the same intensity [26], should be treated differently, since they have different characteristics and utilities for image registration. Salient voxels should have higher utility values and hence contribute more to determining the transformation between the two images. For example, when measuring the MI of two brain images, the white matter (WM) voxels near the cortex should contribute more than the WM voxels deep inside large WM regions, since it is more effective to match WM voxels near the cortex than those in the interior.
To incorporate utility information into image registration procedures, we propose a novel image similarity measure, referred to as quantitative–qualitative measure of MI (Q-MI). Q-MI considers not only the probability of image intensity, but also the utility of each voxel, when calculating MI of two images. This is significantly different from the conventional MI measure that only considers the quantitative aspect of information based on the image intensity distribution. It is worth noting that the utility of each voxel in an image can be determined according to the regional saliency value calculated from the scale-space map of this image [27], [28]. Therefore, the salient voxels will have higher utility values, and they will contribute more in measuring the MI of the two images under registration. That is, the voxels with high utilities play major roles in determining the transformation between the images.
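The utility of each voxel is determined from a regional saliency value computed on the image's scale-space map [27], [28]. The exact saliency measure of those references is not reproduced on this page; the sketch below substitutes a simple stand-in (variation of intensity across Gaussian scales, with an assumed mapping of saliency into a (0.5, 1] utility range) purely to illustrate how such a utility map might be built:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def utility_map(image, scales=(1.0, 2.0, 4.0)):
    """Assign each voxel a utility from a simple scale-space saliency.

    Saliency here is the standard deviation of the voxel's value across
    Gaussian smoothing scales: voxels near structure boundaries change with
    scale, flat regions do not. This is an assumed stand-in for the regional
    saliency of refs [27], [28], not the authors' exact measure.
    """
    stack = np.stack([gaussian_filter(image.astype(float), s) for s in scales])
    saliency = stack.std(axis=0)                 # high near edges and fine structure
    saliency = saliency / (saliency.max() + 1e-12)
    return 0.5 + 0.5 * saliency                  # map into (0.5, 1]: no voxel is ignored
```

The lower bound of 0.5 is likewise an illustrative assumption, chosen so that low-saliency voxels are down-weighted rather than discarded.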
Importantly, the utility values of voxels are not fixed; they are hierarchically updated during the registration procedure, with all voxels contributing equally in the final stage. In particular, the initial utility of each voxel is assigned according to its saliency value [27], [28]; as registration progresses, this utility gradually moves towards one. Thus, by focusing mainly on the voxels (or regions) with higher utilities in the initial registration stages, the robustness of registration is improved. Also, by driving each joint utility to one in the final stage, the sub-voxel accuracy obtained by conventional MI-based registration methods is retained, since plain MI is used in the final registration stage. This hierarchical framework makes the Q-MI-based image registration not only robust but also accurate, as demonstrated in the experiments.
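The two ideas above, utility-weighted MI and the hierarchical relaxation of utilities toward one, can be sketched as follows. The weighted joint histogram and the linear relaxation schedule are illustrative assumptions, not the paper's exact joint-utility estimation (which is described in Section 3.3):

```python
import numpy as np

def q_mutual_information(img_a, img_b, utility, bins=32):
    """Quantitative-qualitative MI sketch: each voxel pair contributes its
    utility to the joint histogram instead of a unit count, so salient
    voxels dominate the resulting distribution."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                bins=bins, weights=utility.ravel())
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

def relax_utility(utility, progress):
    """Hierarchical schedule (assumed linear): blend utilities toward 1 as
    registration progresses, with progress in [0, 1]. At progress = 1 every
    utility equals 1 and Q-MI reduces to the standard MI measure."""
    return (1.0 - progress) * utility + progress * np.ones_like(utility)
```

With `progress = 0` the measure is driven by the salient voxels, improving robustness to large initial misalignment; with `progress = 1` it coincides with conventional MI, preserving sub-voxel accuracy in the final stage.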
The proposed Q-MI has been applied to the rigid registration of clinical brain images, such as MR, CT, and PET images, obtained from the Retrospective Registration Evaluation Project (RREP) [29]. Experimental results show that, compared to conventional MI-based registration methods, the Q-MI-based registration method provides a smoother registration function with a relatively larger capture range. It is also much more robust and achieves much higher success rates for image registration.
The remainder of this paper is organized as follows. The definition of Q-MI is first provided in Section 2, and then the Q-MI-based registration method is described in Section 3. The performance of this Q-MI-based registration method is demonstrated in Section 4. This paper concludes in Section 5.
Section snippets
Quantitative–qualitative measure of MI (Q-MI)
In this section, the basic concepts of information and MI are first briefly introduced (Section 2.1). Then, the quantitative–qualitative measures of information (Section 2.2) and MI (Section 2.3) are presented.
Q-MI-based multimodality image registration
In this section, we will first describe the Q-MI-based registration algorithm by employing Q-MI as a similarity measure. Then, the method of computing the utility for each voxel in an image will be introduced (Section 3.2). Afterwards, the method of estimating the joint utility of two images will be presented, which will be used for the calculation of Q-MI (Section 3.3). Finally, the optimization method used in this registration algorithm will be briefly described (Section 3.4).
Experimental results
A number of experiments have been carried out to evaluate the performance of the Q-MI-based registration algorithm in aligning multimodal brain images, such as MR PD-weighted, MR T1-weighted, MR T2-weighted, CT, and PET images, obtained from the Vanderbilt Retrospective Registration Project [29]. The MR images have in-plane voxels, 20–16 slices, and a voxel size of ; the CT images have in-plane voxels, 27–34 slices, and a voxel size of .
Conclusion
A novel image similarity measure, called quantitative–qualitative measure of mutual information (Q-MI), has been presented for robust registration of multimodality brain images. By utilizing the concept of both quantitative and qualitative information measures of events, Q-MI incorporates utility information into the similarity measure of the two images, and hence it allows the registration procedure focusing more on matching the voxels with higher utility values, such as the regions of
References (43)
- et al., Image registration methods: a survey, Image Vision Comput. (2003)
- et al., The role of image registration in brain mapping, Image Vision Comput. (2001)
- et al., Multi-modal volume registration by maximization of mutual information, Med. Image Anal. (1996)
- et al., An overlap invariant entropy measure of 3-D medical image alignment, Pattern Recognition (1999)
- et al., A hierarchical approach to elastic registration based on mutual information, Image Vision Comput. (2001)
- et al., Characterization of a quantitative–qualitative measure of relative information, Inform. Sci. (1984)
- et al., A global optimization method for robust affine registration of brain images, Med. Image Anal. (2001)
- et al., Improved optimization for the robust and accurate linear registration and motion correction of brain images, NeuroImage (2002)
- Z. Lao, D. Shen, A. Jawad, B. Karacali, D. Liu, E. Melhem, N. Bryan, C. Davatzikos, Automated segmentation of white...
- N. Hata, T. Dohi, S.K. Warfield, W.M. Wells, R. Kikinis, F.A. Jolesz, Multimodality deformable registration of pre- and...
- Evaluation of three-dimensional finite element-based deformable registration of pre- and intraoperative prostate imaging, Med. Phys.
- Medical image registration using mutual information, Proc. IEEE
- Mutual-information-based registration of medical images: a survey, IEEE Trans. Med. Imaging
- Using mutual information to measure coupling in the cardiorespiratory system, IEEE Eng. Med. Biol. Mag.
- Measurement of intraoperative brain surface deformation under a craniotomy, Neurosurgery
About the Author—HONGXIA LUAN received her Bachelor degree from Xidian University and Master degree from Hangdian University. Presently, she is a Ph.D. candidate at Shanghai Jiao Tong University. Her research interests include medical image analysis and pattern recognition.
About the Author—FEIHU QI was born in 1938. He is a Professor at Shanghai Jiao Tong University. He has published over 200 articles in various journals and proceedings of international conferences. His research interests include image processing, medical image analysis, pattern recognition, computer vision, and artificial neural networks.
About the Author—ZHONG XUE received his B.E. and M.E. degrees from Xidian University and his Ph.D. degree from Nanyang Technological University, Singapore. He is currently a Research Associate at the University of Pennsylvania. He has published 33 international journal and conference papers in computer vision, image processing, and medical image analysis.
About the Author—LIYA CHEN received all her degrees (B.S., M.S., Ph.D.) from Shanghai Jiao Tong University. She has been an Associate Professor at the Department of Electronic Engineering of Shanghai Jiao Tong University since 2003. She has published over 20 articles in journals and proceedings of international conferences. Her research interests include image processing, medical image analysis, computer vision and artificial neural networks.
About the Author—DINGGANG SHEN received all of his degrees from Shanghai Jiao Tong University. He has been an assistant professor (tenure-track) in the Department of Radiology at the University of Pennsylvania (UPenn) since July 2002. Before moving to UPenn, he was a faculty member at Johns Hopkins University. Dr. Shen is on the Editorial Board of Pattern Recognition and has served as a reviewer for numerous international journals and conferences. He has published over 160 articles in journals and proceedings of international conferences. His research interests include medical image analysis, pattern recognition, and computer vision.