
Pattern Recognition

Volume 44, Issue 1, January 2011, Pages 45-54

A new measurement for assessing polygonal approximation of curves

https://doi.org/10.1016/j.patcog.2010.07.029

Abstract

This paper presents a novel method for assessing the accuracy of unsupervised polygonal approximation algorithms. The measurement relies on a polygonal approximation called the "reference approximation", which is obtained from the method of Perez and Vidal [11] by an iterative procedure that optimizes an objective function. The proposed measurement is then calculated by comparing the reference approximation with the approximation to be evaluated, taking into account the similarity between the polygonal approximation and the original contour and penalizing polygonal approximations with an excessive number of points. A comparative experiment using polygonal approximations obtained with commonly used algorithms showed that the proposed measurement is more effective than other proposed measurements at comparing polygonal approximations with different numbers of points.

Introduction

In recent years, many methods have been proposed for constructing polygonal approximations of planar curves. However, techniques for assessing such methods have not developed to the same extent. In early works in the field, many authors provided little analysis of their methods, relying on qualitative demonstrations to justify them. Obviously, this is not sufficient, because it makes it difficult to compare the merits of different methods. Therefore, a more quantitative approach is necessary [6].

Fischler and Wolf [3], Zang et al. [15], Kadonaga and Abe [5], and Ji and Haralick [4] used human observers and ground truth information to quantify the performance of their methods. These approaches use simple synthetic curves to obtain the ground truth (the locations of dominant points), which is a drawback because such curves are not representative of real curves extracted from images.

The main interest in assessing polygonal approximation algorithms is the quantification of the distortion produced in the approximation process. For this purpose, several measurements have been proposed. The most commonly used are the following (a short code sketch computing them follows this list):

  • Compression ratio (CR), defined as $CR = n/n_d$, where $n$ is the number of points in the contour and $n_d$ is the number of points in the polygonal approximation. A small number of dominant points implies a large compression ratio.

  • The sum of square errors (ISE), defined as $ISE = \sum_{i=1}^{n} e_i^2$, where $e_i$ is the distance from point $P_i$ to the approximating line segment.

  • There is a tradeoff between these two measurements: a high CR entails a large distortion, and a low ISE can entail a low CR. Sarkar [14] combined the two as a ratio, producing a normalized figure of merit (FOM), defined as $FOM = CR/ISE$.

  • Marji and Siyal [8] showed that the two terms of the FOM are not balanced, which biases the measurement toward approximations with lower ISE (easily attained by increasing the number of detected dominant points). This drawback becomes more evident for real contours, which usually contain large numbers of points; hence the FOM is not the best measurement for comparing contours with different numbers of dominant points. Therefore, Marji and Siyal [8] used a modified version of the FOM (in this case, the inverse of the FOM), defined as $WE_x = ISE/CR^x$, where the exponent $x$ controls the contribution of the denominator to the overall result, in order to reduce the imbalance between the two terms. These authors used $x = 1, 2, 3$; however, it has not been clarified which value of $x$ is the best choice.

  • Rosin [12], in order to avoid the problem of the FOM measurement, used two components: fidelity and efficiency. The fidelity measures how well the polygon obtained by the tested algorithm fits the curve, relative to the optimum polygon, in terms of the approximation error. The efficiency measures the compactness of the polygon obtained by the tested algorithm, relative to the optimum polygon that incurs the same error. These components are defined as $Fidelity = (E_{opt}/E_{approx}) \times 100$ and $Efficiency = (M_{opt}/M_{approx}) \times 100$, where $E_{approx}$ is the error incurred by the polygonal approximation to be tested and $E_{opt}$ is the error incurred by the polygonal approximation obtained by the Perez and Vidal method [11], with both approximations set to produce the same number of lines. $M_{approx}$ is the number of lines in the polygonal approximation produced by the tested algorithm, and $M_{opt}$ is the minimal number of lines that the Perez and Vidal approximation requires to produce the same error as the tested algorithm.

    Rosin combined these two components into a single measurement (the geometric mean of fidelity and efficiency): $Merit = \sqrt{Fidelity \times Efficiency}$.
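To make the preceding definitions concrete, the following minimal Python sketch computes CR, ISE, FOM, and $WE_x$ for a closed contour stored as an (n, 2) NumPy array, with the approximation given as sorted indices of its dominant points. All function names are illustrative, not from the paper, and the assignment rule (each contour point is measured against the segment spanning it) is an assumption of this sketch.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the segment with endpoints a and b."""
    ab = b - a
    denom = float(np.dot(ab, ab))
    t = 0.0 if denom == 0.0 else float(np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))

def ise(contour, dominant_idx):
    """ISE = sum of e_i^2, where e_i is the distance from contour point P_i
    to the segment of the (closed) approximation that spans it."""
    n, m = len(contour), len(dominant_idx)
    total = 0.0
    for k in range(m):
        i, j = dominant_idx[k], dominant_idx[(k + 1) % m]
        a, b = contour[i], contour[j]
        for s in range((j - i) % n + 1):        # walk i -> j, wrapping around
            total += point_segment_distance(contour[(i + s) % n], a, b) ** 2
    return total

def metrics(contour, dominant_idx, x=2):
    """CR, ISE, Sarkar's FOM, and Marji and Siyal's WE_x."""
    e = ise(contour, dominant_idx)
    cr = len(contour) / len(dominant_idx)       # CR = n / n_d
    return {"CR": cr,
            "ISE": e,
            "FOM": cr / e,                      # undefined when ISE = 0
            "WEx": e / cr ** x}                 # WE_x = ISE / CR^x
```

For example, metrics(contour, idx, x=2) yields the $WE_2$ variant; varying x reproduces the three settings reported by Marji and Siyal.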

The method of Perez and Vidal [11] obtains the optimum polygonal approximation for a prefixed number of points. Therefore, it is not an unsupervised method for obtaining a polygonal approximation, because the number of points is fixed. Rosin's measurement uses this method to analyze the performance of any unsupervised method by comparing the polygonal approximation obtained with the two optimum polygonal approximations mentioned in the previous paragraph, as computed by the Perez and Vidal method.
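As a rough illustration of how such an optimum approximation can be computed, the sketch below implements a plain dynamic program for an open curve. It follows the spirit of the Perez and Vidal formulation rather than their exact algorithm; function names are illustrative, and the inner chord-error call is deliberately unoptimized (precomputing it is advisable for long contours).

```python
import numpy as np

def chord_ise(contour, i, j):
    """Squared error of approximating contour[i..j] by the chord i -> j."""
    a, b = contour[i], contour[j]
    dx, dy = b - a
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0.0:
        return 0.0
    pts = contour[i:j + 1]
    # perpendicular distance of each covered point to the chord line
    d = np.abs(dx * (pts[:, 1] - a[1]) - dy * (pts[:, 0] - a[0])) / norm
    return float(np.sum(d ** 2))

def optimal_polygon(contour, m):
    """Minimum-error open polyline with m segments (m + 1 vertices):
    cost[s][j] = least error approximating contour[0..j] with s segments."""
    n = len(contour)
    INF = float("inf")
    cost = [[INF] * n for _ in range(m + 1)]
    back = [[-1] * n for _ in range(m + 1)]
    cost[0][0] = 0.0
    for s in range(1, m + 1):
        for j in range(s, n):
            for i in range(s - 1, j):
                c = cost[s - 1][i] + chord_ise(contour, i, j)
                if c < cost[s][j]:
                    cost[s][j], back[s][j] = c, i
    verts, j = [n - 1], n - 1                   # walk the back-pointers
    for s in range(m, 0, -1):
        j = back[s][j]
        verts.append(j)
    return cost[m][n - 1], verts[::-1]
```

Fixing m and reading off the returned error gives $E_{opt}$; searching for the smallest m whose error does not exceed the tested algorithm's error gives $M_{opt}$, the two quantities Rosin's measurement needs.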

The advantage of Rosin's [12] measurement over Sarkar's [14] measurement is that it can be used to compare the results of polygonal approximations with different numbers of dominant points.
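Given those optimum quantities, Rosin's components reduce to a few lines of arithmetic; this hedged helper assumes error and line-count values obtained as above (argument names are illustrative):

```python
def rosin_measures(e_opt, e_approx, m_opt, m_approx):
    """Rosin's fidelity, efficiency, and their geometric-mean merit [12].

    e_approx, m_approx: error and line count of the tested approximation;
    e_opt: error of the optimum polygon with the same number of lines;
    m_opt: minimal lines the optimum method needs to reach e_approx.
    """
    fidelity = 100.0 * e_opt / e_approx
    efficiency = 100.0 * m_opt / m_approx
    merit = (fidelity * efficiency) ** 0.5      # geometric mean
    return fidelity, efficiency, merit
```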

However, Rosin's measurement has a significant drawback, illustrated in Fig. 1. Fig. 1(a) shows an original contour. Fig. 1(b) and (c) show the optimum polygonal approximations obtained by the Perez and Vidal method for 6 and 250 points, respectively. Fig. 1(d) shows the polygonal approximation (with 38 points) obtained using the unsupervised method in [1]. If an unsupervised method obtains the same polygonal approximation as in Fig. 1(b), Rosin's measurement for this method will be 100. The same occurs if a method obtains a polygonal approximation similar to Fig. 1(c), while the polygonal approximation in Fig. 1(d) has a Rosin's measurement of 76.

However, according to human perception, the polygonal approximation in Fig. 1(c) has an excessive number of points, the one in Fig. 1(b) has an insufficient number of points and is very different from the original contour (it no longer looks like a dinosaur), and the one in Fig. 1(d) should be the best choice.

This drawback can be summarized as follows:

  • When a polygonal approximation with very few points is obtained that is similar to the optimum polygonal approximation with the same number of points, Rosin's measurement will be close to 100, even though the approximation may be heavily distorted with respect to the original contour.

  • A polygonal approximation with many points, close to the number of breakpoints of the original contour, also yields a Rosin's measurement close to 100, because the errors are very small. In this case, the approximation contains an excessive number of redundant points and is inefficient.

To address this drawback, a novel measurement is proposed in this work. The purpose of this measurement is to extend Rosin's method, which quantifies how well positioned the breakpoints are, to also consider whether an appropriate number of dominant points has been selected by the algorithm. The performance of this new measurement is compared with that of Rosin's measurement using four methods for obtaining polygonal approximations. The results show that the proposed measurement is not subject to the drawback of Rosin's measurement.

In Section 2, the proposed measurement is described. The experimental results are shown in Section 3, and the main conclusions are summarized in Section 4.


Our proposal

Let $C$ be a contour and $C_H$ a polygonal approximation of contour $C$ constructed by a method $H$. To evaluate the method $H$, our measurement relies on a specific polygonal approximation of contour $C$. This approximation will be called "the reference approximation".

Experimental results and discussion

Some real contours have been used to obtain their reference approximations. The contours used and their reference approximations are shown in Figs. 11 and 12. Table 1 presents the number of points ($n$) of each contour used, the number of points of its reference approximation ($n_{opt}$), and the number of breakpoints of the original contour ($n_b$).

Conclusions

A novel measurement for assessing the accuracy of polygonal approximation algorithms has been proposed. The purpose of this measurement is to extend Rosin's method to also consider whether an appropriate number of dominant points has been selected by the algorithm. This measurement relies on a polygonal approximation called the reference approximation, which is obtained from the method of Perez and Vidal by an iterative procedure that optimizes an objective function. The results show…




A. Carmona-Poyato received his degree in Agronomic Engineering and his Ph.D. degree from the University of Cordoba (Spain) in 1986 and 1989, respectively. Since 1990, he has been working with the Department of Computing and Numerical Analysis of the University of Cordoba as a lecturer. His research is focused on image processing and 2-D object recognition.

R. Medina-Carnicer received his Bachelor's degree in Mathematics from the University of Sevilla (Spain) and his Ph.D. in Computer Science from the Polytechnic University of Madrid (Spain) in 1992. Since 1993, he has been a lecturer in Computer Vision at Cordoba University (Spain). His research is focused on edge detection, evaluation of computer vision algorithms, and pattern recognition.

F.J. Madrid-Cuevas received his Bachelor's degree in Computer Science from Malaga University (Spain) and his Ph.D. degree from the Polytechnic University of Madrid (Spain), in 1995 and 2003, respectively. Since 1996, he has been working with the Department of Computing and Numerical Analysis of Cordoba University, where he is currently an assistant professor. His research is focused mainly on image segmentation, 2-D object recognition, and evaluation of computer vision algorithms.

R. Muñoz-Salinas received his Bachelor's degree in Computer Science and his Ph.D. degree from Granada University (Spain), in 2006. Since 2006, he has been working with the Department of Computing and Numerical Analysis of Cordoba University, where he is currently an assistant professor. His research is focused mainly on mobile robotics, human–robot interaction, artificial vision, and soft computing techniques applied to robotics.

N.L. Fernández-García received his Bachelor's degree in Mathematics from the Complutense University of Madrid (Spain) in 1988 and his Ph.D. in Computer Science from the Polytechnic University of Madrid (Spain) in 2002. Since 1990, he has been working with the Department of Computing and Numerical Analysis at Córdoba University, where he is currently a lecturer in Computer Science and Artificial Intelligence. His research is focused on edge detection, 2-D object recognition, and evaluation of computer vision algorithms.
