Elsevier

Neurocomputing

Volume 111, 2 July 2013, Pages 34-42

Multimodality image registration using local linear embedding and hybrid entropy

https://doi.org/10.1016/j.neucom.2012.11.032

Abstract

Robust registration of multimodality medical images has become an active research area in medical image processing and its applications. Although mutual information (MI) has been successfully applied to image registration, MI-based measures take only statistical intensity information into account and ignore spatial features. In this paper, we propose a registration method based on local linear embedding (LLE) and hybrid entropy that incorporates spatial information into the registration measure. Because they are robust to the absolute intensity of image pixels and stable in noisy environments, ordinal features (OFs) with different orientations are extracted to represent the spatial information in medical images. For the high-dimensional OFs, the LLE algorithm is used for dimensionality reduction, and the inverse mapping of LLE is used to fuse the complementary information of the OFs. A novel similarity measure based on hybrid entropy, which integrates intensity with the OFs, is then defined to register multimodality images. Experimental results show that the proposed algorithm effectively suppresses the influence of noise in images and, compared with several existing methods, achieves higher precision and better robustness.

Introduction

According to the physical principles of imaging, multimodality medical images are usually divided into two types: structural and functional images [1], [2]. Structural images, such as X-ray computed tomography (CT) and magnetic resonance imaging (MRI), mainly provide high-resolution images with geometric and anatomical information. Functional images, such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT), characterize metabolic or neurochemical changes but at coarser resolution. To exploit the complementary characteristics of the underlying anatomy and tissue microstructure captured by different modalities and to provide comprehensive information for doctors, multimodality image fusion or joint analysis is required. Geometric registration is a preliminary and crucial step, with a paramount influence on the performance of image analysis, cartography, pattern recognition and computer vision.

Registration of multimodality medical images is a very challenging problem [3]. Firstly, medical images are sometimes blurred because the adverse influence of radioactive rays, tracers and high magnetic fields on the human body must be kept low. Moreover, the imaging process itself has physical limitations, such as noise, limited resolution, insufficient contrast and intensity inhomogeneity. These issues affect the quantification of structural and physiological parameters from medical images, and accordingly produce local maxima along the convergence plane and deteriorate the reliability of the registration similarity measure. Secondly, signals from the same position or organ acquired by different imaging mechanisms are represented by different, even inverse, intensity values, which makes it very difficult to align images based only on their intensities. Therefore, multimodality registration methods that can overcome these problems are highly desired.

Existing registration methods for medical images can be broadly classified into two categories: feature-based and intensity-based methods [4], [5]. Feature-based approaches usually utilize points, lines or surfaces and aim to minimize the distance between corresponding features in the images [4], [6]. Feature-based registration is often efficient, but the required feature extraction can be onerous; moreover, any error made during the feature extraction stage adversely affects the registration precision and cannot be recovered at a later stage. Intensity-based methods measure the degree of shared information in the image intensities and offer good robustness and accuracy, but at high computational cost [7], [8]. MI is an automatic intensity-based measure that has been successfully applied to image registration [2]. However, MI assumes that the intensities of neighboring pixels are statistically independent and ignores spatial information that may be available in the form of image features. In addition, the MI measure is often not a smooth convex function, so the optimization may be trapped by local maxima and is sensitive to image noise; misregistrations may therefore occur. Current research has focused on combining feature-based and intensity-based registration to boost performance, for example through hybrid measures that incorporate spatial features into MI [5], [9], [10]. Pluim et al. [11] included spatial information by multiplying the MI measure with an external term based on image gradient. Rueckert et al. [12] redefined MI as first-order MI and developed the concept of second-order MI based on co-occurrence matrices of neighboring voxel intensities. Gan et al. [13] proposed a maximum distance-gradient (MDG) vector field, with a similarity measure combining multi-dimensional MI and an angle measure on the MDG vector field. However, these improvements do not work well in noisy environments because gradient information is sensitive to image noise.
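As background, MI between two images can be estimated from their joint intensity histogram. The following sketch (not the paper's implementation) illustrates the purely statistical, intensity-only nature of the measure:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate MI from the joint intensity histogram of two same-sized images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability p(a, b)
    px = pxy.sum(axis=1)                 # marginal p(a)
    py = pxy.sum(axis=0)                 # marginal p(b)
    nz = pxy > 0                         # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

Note that the estimate depends only on the joint histogram: no pixel location enters the computation, which is exactly why spatial features are discarded by plain MI.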

Besides the image gradient, other feature descriptors represent spatial information, such as the scale-invariant feature transform (SIFT), the gray-level co-occurrence matrix (GLCM) and OFs. SIFT is invariant to uniform scaling and orientation, and partially invariant to affine distortion and illumination changes [14]. However, in the key step that achieves rotation invariance, each keypoint is assigned one or more orientations based on local image gradient directions, and the gradient magnitude and direction are computed for every pixel in a region around the keypoint. Meanwhile, SIFT discards low-contrast keypoints and thus cannot be applied to blurred images. In short, SIFT is computed from gradient information and is not suited to registering medical images of coarser resolution, so it is unsuitable for improving MI. Many researchers have instead applied SIFT directly to higher-dimensional image registration, for which MI is inadequate because of its very high computational cost [1]. GLCM has proved to be a promising method for image texture analysis; however, a GLCM is typically large and sparse, and its parameters can be chosen from a wide range, which requires a large amount of computation. OFs effectively encode the spatial information between neighboring pixels and the specific micro-structural information of images, reflecting the intrinsic nature of the imaged object [15], [16]. They do not depend on the absolute intensity of image pixels and remain stable even in noisy environments. Partio et al. [20] obtained better texture retrieval results with OFs than with traditional GLCMs. Tan and Sun [15] concluded that OFs can play a defining role in complex biometric recognition tasks, and Sadr et al. [18] demonstrated that OFs faithfully encode signal structure, highlighting the robustness and generalization ability of local ordinal encoding for pattern recognition. In this paper, we explore a novel registration measure combining OFs and MI as an answer to the robust registration problem of multimodality medical images.
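The exact ordinal operator used in the paper is not reproduced in this excerpt. As a minimal sketch, an OF map for one orientation can be built by comparing each pixel with its neighbour along that orientation; because only the sign of the intensity difference is kept, the map is invariant to any monotonic intensity transform:

```python
import numpy as np

def ordinal_feature(img, offset):
    """Binary ordinal map for one orientation: 1 where a pixel is brighter than
    its neighbour displaced by `offset` = (dy, dx). Only the sign of the
    intensity difference is kept, so any monotonic remapping of intensities
    (e.g. a different modality's brightness transfer) leaves the map unchanged."""
    dy, dx = offset
    neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return (img > neighbour).astype(np.uint8)

# one OF map per orientation, e.g. 0, 45, 90 and 135 degrees
OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1)]
```

This invariance is what makes OFs robust to the inter-modality brightness differences discussed above, since only relative intensity order matters.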

Multimodality medical images exhibit random textures and rich inter-region intensity variations, which provide a good source for OF extraction. The extracted OFs with different orientations are high dimensional and contain much inessential, redundant information, so dimensionality reduction of the OFs is necessary. Principal component analysis (PCA) is a classical linear dimensionality reduction method that emphasizes the directions of largest variability in the observations [21]. However, because high-dimensional OFs are nonlinearly related in nature, PCA may be inappropriate and cannot provide intrinsic, robust solutions for nonlinearly distributed data. LLE is an unsupervised, nonlinear learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs [21], [22]. LLE maps its inputs into a single global coordinate system of lower dimensionality and is able to recover global nonlinear structure from locally linear fits. Hence, the OFs with different orientations in observation space are mapped by LLE to a collection of corresponding points in a low-dimensional space. The center of these points is usually the most pertinent and representative, so the high-dimensional image corresponding to this center is taken as the fusion of the differently oriented OFs and describes the whole spatial structure of the medical image. Zhang et al. [23] proposed methods to establish the mapping from the low-dimensional embedded space back to the high-dimensional space for LLE, based on the neighborhood relationships that are kept unchanged during dimension reduction, and validated their efficiency on the reconstruction of multi-pose face images. Using this inverse mapping of LLE, the fused OF, i.e., the image corresponding to the center in the embedded space, is constructed as a linear combination of its neighbors in observation space with definite weights. Finally, a novel similarity measure based on hybrid entropy, which integrates intensity with the fused OF, is defined to register multimodality images.
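Assuming each orientation's OF map is flattened into a vector, the fusion step can be sketched as below. The inverse-distance weighting of the embedding centre's neighbours is a simplification of the inverse mapping of Zhang et al. [23], which instead reuses the reconstruction weights preserved during dimension reduction:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def fuse_ordinal_features(of_maps, n_neighbors=4, n_components=2):
    """Embed flattened OF maps with LLE, locate the centre of the embedded
    points, and reconstruct the corresponding high-dimensional 'fused OF' as a
    weighted combination of the centre's nearest samples in observation space."""
    X = np.stack([m.ravel().astype(float) for m in of_maps])  # (n_maps, n_pixels)
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_components)
    Y = lle.fit_transform(X)                                  # low-dimensional embedding
    centre = Y.mean(axis=0)                                   # most representative point
    dist = np.linalg.norm(Y - centre, axis=1)
    idx = np.argsort(dist)[:n_neighbors]                      # centre's neighbours
    w = 1.0 / (dist[idx] + 1e-12)
    w /= w.sum()                                              # simplified weights (assumption)
    return (w[:, None] * X[idx]).sum(axis=0).reshape(of_maps[0].shape)
```

Note that `n_neighbors` must be smaller than the number of OF maps supplied, and the weighting scheme here is an illustrative assumption, not the paper's derivation.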

The remainder of this paper is organized as follows. In Section 2, we describe the concept and extraction of OFs. The LLE algorithm is used to reduce the dimensionality of the OFs of all orientations, and the inverse mapping of LLE is used to fuse their complementary information. A similarity measure based on hybrid entropy is then proposed and the registration method is described in Section 3. Section 4 presents the experimental results and a related discussion. Finally, our conclusions are given in Section 5.

Section snippets

Extraction of ordinal features

OFs are defined by the qualitative relationship between two image regions and are robust against various intra-class variations [17]. For example, they are invariant to monotonic transformations of the image intensities and are flexible enough to represent local structures of different complexity. Sinha first introduced OFs for visual object representation; OFs were subsequently applied to face detection, image database retrieval, stereo correspondence and image reconstruction [15], [16].

OFs are extracted

Registration method

In this section, the fused OF representing the spatial information of medical image is integrated with image intensity to define a novel registration measure based on hybrid entropy. Then the proposed registration method is presented in detail.
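The precise hybrid-entropy definition belongs to the full text of Section 3 and is not reproduced in this excerpt. Purely as an illustration of the idea, a hybrid measure could combine the intensity MI with an MI computed on the fused OF maps, so that both statistical intensity information and spatial structure contribute to the alignment score:

```python
import numpy as np

def hybrid_similarity(fix_img, mov_img, fix_of, mov_of, lam=0.5, bins=32):
    """Illustrative hybrid measure (NOT the paper's exact definition): MI of the
    intensity images plus a weighted MI of their fused OF maps. `lam` is a
    hypothetical trade-off parameter between intensity and spatial terms."""
    def mi(a, b):
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = joint / joint.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        nz = p > 0
        return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))
    return mi(fix_img, mov_img) + lam * mi(fix_of, mov_of)
```

Under such a measure, a candidate transform is scored on the overlap region of the fixed and moving images, and registration maximizes the score over the transform parameters.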

Experiments and results

A series of registration experiments is performed to verify the robustness and accuracy of the proposed method. MI, the combination of gradient information and MI (CGMI) [11], second-order MI [12], [27] and the proposed measure based on LLE and hybrid entropy (LLE–HE) are compared and analyzed.

Conclusions

We present an LLE and hybrid entropy based method for robustly registering multimodality medical images. The LLE algorithm reduces the redundancy of the OFs of all orientations, which are robust to the brightness differences caused by multimodality sensors and stable in noisy environments. The inverse mapping of LLE fuses the complementary information of the OFs to represent the whole spatial structure of the medical images. Moreover, a novel registration measure combining OF and MI is proposed

Qi Li received her B.S. degree in electronic information engineering from Xidian University, Xi’an, China, in 2002, and her M.S. degree in signal and information processing from Xidian University, Xi’an, China, in 2005. She is currently a Ph.D. student and a lecturer in the School of Electronic Engineering, Xidian University, Xi’an, China. Her research interests include medical image registration, object detection and pattern recognition.

References (30)

  • R. Suganya et al., Intensity based image registration by maximization of mutual information, International Journal of Computer Applications (2010)
  • L. Tang, A. Hero, G. Hamarneh, Locally-adaptive similarity metric for deformable medical image registration, in:...
  • J.P.W. Pluim et al., Image registration by maximization of combined mutual information and gradient information, IEEE Trans. Med. Imaging (2000)
  • D. Rueckert, M.J. Clarkson, D.L.G. Hill, et al., Non-rigid registration using higher-order mutual information, in:...
  • D.G. Lowe, Object recognition from local scale-invariant features, in: Proceedings of the International Conference on...

    Hongbing Ji received the B.S. degree in radar engineering from Northern West Telecommunications Engineering College (the predecessor of Xidian University) in 1983, the M.S. degree in Circuit, Signals and Systems in 1989 and the Ph.D. degree in signal and information processing in 1999 from Xidian University. Currently, he is a professor and an advisor for Ph.D. students with the School of Electronic Engineering, Xidian University, Xi’an, China. His research interests include photoelectric signal processing, passive sensor based targets location and tracking, radar targets recognition and classification.
