A new Hausdorff distance for image matching

https://doi.org/10.1016/j.patrec.2004.09.022

Abstract

Object matching in two-dimensional images has been an important topic in computer vision, object recognition, and image analysis, and the Hausdorff distance plays an important role in image matching. To deal with image matching in the presence of random noise, a new Hausdorff distance is proposed in this paper. Unlike other methods, which match two binary images, the proposed method matches gray images that have only a few distinct pixel values. An example of object recognition is used to demonstrate the effectiveness of the proposed method. The results show that, compared with the MHD, the new Hausdorff distance handles noisy image matching more reliably, because the gray information of neighboring pixels is taken into account in the determination of the distance. In addition, the proposed method is easy to implement.

Introduction

A central problem of pattern recognition in computer vision is determining the extent to which one shape is similar to another. Matching techniques can be viewed as measurements of the degree of resemblance between two objects that are superimposed on one another; in other words, matching is usually regarded as the maximization of a measure of similarity. The geometric comparison of shapes is a fundamental tool for model-based image matching and object recognition. To recognize objects reliably, a key problem is to find an efficient approach to image matching for object search and identification; such a method should be easy to implement, computationally inexpensive, and able to produce reasonable results.

The Hausdorff distance (HD) is a non-linear operator that measures the mismatch between two point sets. Although the Hausdorff distance is certainly not the only reasonable way to judge the similarity of spatial point sets, it has proven effective in image matching problems and is used in at least one major commercial implementation of symbol-matching compression, so it is worth examining.

The Hausdorff distance measures the extent to which each point of a ‘model’ set lies near some point of an ‘image’ set and vice versa. Unlike most shape comparison methods, the Hausdorff distance is not based on finding corresponding model and image points. It is therefore more tolerant of perturbations in the locations of points, because it measures proximity rather than exact superposition.
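For reference (the snippet itself does not reproduce it), the classical Hausdorff distance between two finite point sets A and B is usually written as

h(A,B) = \max_{a \in A} \min_{b \in B} \lVert a - b \rVert, \qquad H(A,B) = \max\bigl( h(A,B),\, h(B,A) \bigr),

so H(A,B) is small only when every point of each set lies close to some point of the other.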

However, the Hausdorff distance is extremely sensitive to outliers. Suppose two point sets A and B are similar and all points are perfectly superimposed except for a single point in A that is far from every point in B; the Hausdorff distance is then determined by that one point and becomes large. This sensitivity to outliers is unacceptable: despite its merits, the classical Hausdorff distance is too unstable to be usable in practice.
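The following minimal NumPy sketch (our own illustration, not taken from the paper; the helper names are ours) makes this sensitivity concrete: two nearly coincident point sets have a tiny Hausdorff distance, but adding a single distant point to A makes the distance jump to the outlier's separation.

```python
import numpy as np

def directed_hausdorff(A, B):
    """Directed Hausdorff distance h(A, B) = max_a min_b ||a - b||."""
    # Pairwise Euclidean distances between every point of A and every point of B.
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return dists.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Two nearly identical point sets ...
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
B = A + 0.01                       # every point of B lies ~0.014 from a point of A
print(hausdorff(A, B))             # small (about 0.014)

# ... plus one outlier added to A
A_outlier = np.vstack([A, [100.0, 100.0]])
print(hausdorff(A_outlier, B))     # about 140: the distance is dominated by the outlier
```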

Several modified Hausdorff distances have therefore been proposed for image matching, and we present a new one here. In this paper, we first review the classical Hausdorff distance and some modified Hausdorff distances in Section 2. We then propose the new Hausdorff distance in Section 3 and give experimental results and discussion in Section 4. Section 5 concludes the paper.

Section snippets

Overview of the Hausdorff distance

We review some Hausdorff distances briefly in this section.
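Because this snippet is abridged, it is worth recalling the modified Hausdorff distance (MHD) of Dubuisson et al., which the experiments later use as a baseline. In its usual formulation, the directed maximum over points is replaced by a mean, which damps the influence of a single outlier:

d_{\mathrm{MHD}}(A,B) = \frac{1}{|A|} \sum_{a \in A} \min_{b \in B} \lVert a - b \rVert, \qquad \mathrm{MHD}(A,B) = \max\bigl( d_{\mathrm{MHD}}(A,B),\, d_{\mathrm{MHD}}(B,A) \bigr).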

The proposed method

The Hausdorff distance is typically used to match two binary images; once an image is disturbed by noise, the intensity of the random noise is the same as that of the object in the binary image. Under these conditions the Hausdorff distance cannot match the object exactly.

We propose a new method in which the binary image is first transformed into a gray image, and the Hausdorff distance is then computed on the gray image. After the binary image is transformed, the intensity of the random noise is no longer the same as that of the object: an isolated noise pixel has few foreground neighbours and therefore receives a low gray value, whereas object pixels, surrounded by other object pixels, receive high gray values.
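A minimal sketch of this kind of transformation, assuming (as the 3 × 3 window and the 9 pixel values mentioned in Section 4 suggest) that each pixel's gray value is the count of foreground pixels in its 3 × 3 neighbourhood; the helper name binary_to_gray is ours:

```python
import numpy as np
from scipy.ndimage import convolve

def binary_to_gray(binary_img):
    """Replace each pixel by the number of foreground pixels in its
    3x3 neighbourhood, giving gray values in the range 0..9."""
    kernel = np.ones((3, 3), dtype=np.uint8)
    return convolve(binary_img.astype(np.uint8), kernel, mode='constant', cval=0)

# A small binary image: a 3x3 block of object pixels plus one isolated noise pixel.
img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1   # object
img[0, 6] = 1       # random noise

gray = binary_to_gray(img)
print(gray[3, 3])   # 9 : interior object pixel, fully surrounded by foreground
print(gray[0, 6])   # 1 : isolated noise pixel keeps a low gray value
```

With this kind of mapping, noise and object pixels that were indistinguishable in the binary image end up with clearly different gray values.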

Experimental results and discussion

In our experiment, we use binary images of English letters. The original and noisy images are shown in Figs. 1 and 2, respectively. Compared with the classical Hausdorff distance and the MHD, which is intended to deal with noisy image matching, the proposed method performs better.

First, we transform the binary letter images into gray images using a 3 × 3 window, so the gray images have 9 pixel values. Then, using Eqs. (12), (13), and (14), we compute the proposed Hausdorff distance between the gray images.
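The snippet does not reproduce Eqs. (12)–(14), so the sketch below must not be read as the paper's definition; it is only one plausible, hypothetical way to let neighbourhood gray values down-weight isolated noise pixels in a Hausdorff-type distance (the function names foreground and gray_weighted_hd are ours).

```python
import numpy as np

def foreground(gray_img):
    """Return the coordinates and gray values of all non-zero pixels."""
    coords = np.argwhere(gray_img > 0).astype(float)
    values = gray_img[gray_img > 0].astype(float)
    return coords, values

def gray_weighted_directed(pts_a, g_a, pts_b):
    """Gray-value-weighted mean of the point-to-set distances from A to B.
    Illustrative only -- NOT Eqs. (12)-(14) of the paper."""
    dists = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return np.sum(g_a * dists.min(axis=1)) / np.sum(g_a)

def gray_weighted_hd(gray_a, gray_b):
    """Symmetric version, taking the larger of the two directed distances."""
    pts_a, g_a = foreground(gray_a)
    pts_b, g_b = foreground(gray_b)
    return max(gray_weighted_directed(pts_a, g_a, pts_b),
               gray_weighted_directed(pts_b, g_b, pts_a))
```

Applied to the gray images produced by the 3 × 3 transform, an isolated noise pixel carries weight 1 while an interior object pixel carries weight 9, so the influence of noise on this illustrative distance is correspondingly reduced.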

Conclusions

In theory, many kinds of distance functions can be regarded as measures of similarity. However, because of its tolerance and robustness, the Hausdorff distance has attracted particular attention.

Building on the classical Hausdorff distance and other modified Hausdorff distances, we propose a new Hausdorff distance that reduces the sensitivity to noise and makes the measure more robust. The experiments show that the proposed method can match noisy images effectively.

Acknowledgments

The authors gratefully acknowledge financial support of the National Defense Key Laboratory of Target and Environment Feature (NDKLTEF) under grant 51476040103JW13. The authors greatly appreciate editors’ encouragement and the reviewers’ constructive comments.

References (8)

  • J. Paumard, Robust comparison of binary images, Pattern Recogn. Lett. (1997)
  • B. Takacs, Comparing face images using the modified Hausdorff distance, Pattern Recogn. (1998)
  • M.-P. Dubuisson et al., A modified Hausdorff distance for object matching, Proc. 12th IAPR Internat. Conf. on Pattern Recognition, Conference A: Computer Vision & Image Processing, 9–13 October (1994)
  • D.P. Huttenlocher et al., Comparing images using the Hausdorff distance under translation, Proc. CVPR ’92, IEEE Comput. Soc. Conf. on Computer Vision and Pattern Recognition, 15–18 June (1992)
