1 Introduction

Contour detection is essential for many image processing and computer vision tasks such as object recognition, image segmentation and shape approximation. A contour is usually represented as a curve, and contour detection is thus reduced to fitting a curve to noisy data. Since curves are discretized in digital images, discrete curve fitting has been studied for decades for different classes of curves and different discretization models [1,2,3,4,5,6, 9, 11,12,13]. An important advantage of fitting a discrete curve rather than a continuous one is that it requires no empirical error threshold to define an inlier, a threshold that would otherwise affect the output. We note that the underlying threshold a discrete model uses to collect its points is usually designed only to achieve certain properties such as connectivity (see [7, 10] for example), and such a threshold is therefore clearly justified.

This paper deals with the problem of fitting a discrete polynomial curve to 2D data in the presence of outliers, which is formulated as follows [8]: for a given data set \(P = \{ (i_p,j_p) \in \mathbb {Z}^2 ~|~ p=1,\ldots ,n \}\) (\(n < \infty \)) and a degree d, discrete polynomial curve fitting is to find the discrete polynomial curve \(D ({{\varvec{a}}})\) that has the maximum number of inliers, i.e., data points in \(D ({{\varvec{a}}})\). Here \(D \left( {{\varvec{a}}} \right) \) is defined [10], with coefficients \({{\varvec{a}}}=(a_0,\ldots ,a_d) \in \mathbb {R}^{d+1}\), by

\( D ({{\varvec{a}}}) = \left\{ (i,j) \in \mathbb {Z}^2 ~\Big |~ \min _{s \in \{1,\ldots ,4\}} \left( (j+y_s) - \sum _{l=0}^{d} a_l (i+x_s)^l \right) \le 0 \le \max _{s \in \{1,\ldots ,4\}} \left( (j+y_s) - \sum _{l=0}^{d} a_l (i+x_s)^l \right) \right\} , \)    (1)

where \((x_1,y_1) = (-\frac{1}{2},-\frac{1}{2})\), \((x_2,y_2) = (\frac{1}{2},-\frac{1}{2})\), \((x_3,y_3) = (\frac{1}{2},\frac{1}{2})\), \((x_4,y_4) = (-\frac{1}{2},\frac{1}{2})\). Equation (1) collects \((i,j) \in \mathbb {Z}^2\) iff the four points \((i+x_s, j+y_s)\), \(s \in \{1,\ldots ,4\}\), lie on both sides (\(y > \sum _{l=0}^d a_l x^l\) and \(y < \sum _{l=0}^d a_l x^l\)) of the underlying continuous polynomial curve \(y = \sum _{l=0}^d a_l x^l\), or at least one of them lies on \(y = \sum _{l=0}^d a_l x^l\) (Fig. 1). We employ this discretization model because connectivity is guaranteed [7].
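As a concrete illustration, membership in \(D ({{\varvec{a}}})\) can be tested directly from Eq. (1). The following sketch is ours, not the authors' implementation; `f`, `CORNERS`, and `in_discrete_curve` are assumed names.

```python
# Sketch: membership test for the discrete polynomial curve D(a) of
# Eq. (1). An integer point (i, j) is in D(a) iff, among the four
# corners (i + x_s, j + y_s) of its unit pixel, the quantity
# (j + y_s) - f(i + x_s) changes sign or vanishes.

def f(a, x):
    """Evaluate the continuous polynomial y = sum_l a[l] * x**l."""
    return sum(a_l * x**l for l, a_l in enumerate(a))

# The four pixel corners (x_s, y_s), s = 1, ..., 4, from the text.
CORNERS = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]

def in_discrete_curve(a, i, j):
    """True iff the integer point (i, j) belongs to D(a)."""
    vals = [(j + ys) - f(a, i + xs) for xs, ys in CORNERS]
    return min(vals) <= 0 <= max(vals)
```

For instance, with \(d=0\) and \(a_0 = 0\) (the line \(y = 0\)), the point (5, 0) is collected while (5, 1) is not.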

Fig. 1.
figure 1

Integer point in (not in) \(D ({{\varvec{a}}})\). The black curves depict the underlying continuous polynomial curve \(y = f(x) = \sum _{l=0}^d a_l x^l\).

This problem can be discussed in the parameter (coefficient) space. For a data point \((i_p,j_p)\) (\(p \in \{1,\ldots , n\}\)), the feasible region \(R_p \subseteq \mathbb {R}^{d+1}\) is defined by

\( R_p = \left\{ {{\varvec{a}}} \in \mathbb {R}^{d+1} ~\Big |~ \min _{s \in \{1,\ldots ,4\}} h_{(p,s)} ({{\varvec{a}}}) \le 0 \le \max _{s \in \{1,\ldots ,4\}} h_{(p,s)} ({{\varvec{a}}}) \right\} , \)

where \(h_{\left( p,s\right) } \left( {{\varvec{a}}} \right) = \left( j_p+y_s\right) - \sum _{l=0}^{d} \left( i_p+x_s\right) ^l a_l\) (Fig. 2(a)). We remark that \((i_p,j_p) \in D ( {{\varvec{a}}} )\) iff \({{\varvec{a}}} \in R_p\). \(R_p\) is an unbounded concave polytope, namely the union of the two unbounded convex polytopes defined by \(h_{(p,1)} ({{\varvec{a}}}) \le 0 \le h_{(p,3)} ({{\varvec{a}}})\) and \(h_{(p,2)} ({{\varvec{a}}}) \le 0 \le h_{(p,4)} ({{\varvec{a}}})\) (cf. Fig. 3(a)). For \(\varPi \subseteq \{1,\ldots ,n\}\), the feasible region \(R_\varPi \) is defined as the intersection of the feasible regions of the data points \((i_p,j_p)\), \(p \in \varPi \): \(R_\varPi = \bigcap _{p \in \varPi } R_p\) (Fig. 2(b)). Since \(R_\varPi = \emptyset \) iff there exists no \({{\varvec{a}}} \in \mathbb {R}^{d+1}\) satisfying \(\{ (i_p,j_p) ~|~ p \in \varPi \} \subseteq D ( {{\varvec{a}}} )\), we may assume \(R_\varPi \not = \emptyset \) below. Accordingly, our fitting problem is formulated in the parameter space as follows: given P and d, find \(\varPi \subseteq \{1,\ldots ,n\}\) with the maximum \(| \varPi |\) and \({{\varvec{a}}} \in R_\varPi \) for that \(\varPi \).
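The same test, viewed in the parameter space, checks whether a coefficient vector \({{\varvec{a}}}\) lies in \(R_p\). A minimal sketch (ours, with assumed helper names `h` and `in_feasible_region`):

```python
# Sketch: h_{(p,s)}(a) = (j_p + y_s) - sum_l (i_p + x_s)**l * a_l, and
# a is in R_p  iff  min_s h <= 0 <= max_s h, i.e. (i_p, j_p) is in D(a).

CORNERS = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]

def h(p_point, s, a):
    """h_{(p,s)}(a) for the data point p_point = (i_p, j_p), s = 0..3."""
    i_p, j_p = p_point
    x_s, y_s = CORNERS[s]
    return (j_p + y_s) - sum(a_l * (i_p + x_s)**l for l, a_l in enumerate(a))

def in_feasible_region(p_point, a):
    """True iff a lies in R_p, i.e. (i_p, j_p) is an inlier of D(a)."""
    vals = [h(p_point, s, a) for s in range(4)]
    return min(vals) <= 0 <= max(vals)
```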

Fig. 2.
figure 2

Feasible region. (a) shows \(R_p\) for \(d=2\) and \(\left( i_p,j_p\right) = \left( 0,0\right) \). (b) shows intersections among the feasible regions for four data points indexed from 1 to 4. A darker region has a larger number of inliers.

This problem in principle requires evaluating \(R_\varPi \) for all \(\varPi \subseteq \{1,\ldots ,n\}\), i.e., classifying each data point as an inlier or an outlier (\(2^n\) instances). To this end, a heuristic-based incremental approach [8] was proposed that iteratively evaluates whether a data point can be added to the current inlier set, until the inlier set has no feasible superset. In this approach, the feasible region for the current inlier set is tracked through its vertices: when a new data point is added to the current inlier set, a vertex of the new feasible region can be obtained from the intersection points of an edge (or a face) of the current feasible region and a facet (or two facets) of the feasible region for the data point being added. Evaluating all such combinations is, however, computationally expensive. The contribution of this paper is to facilitate this incremental evaluation by introducing an efficient computation of the vertices of the new feasible region. Our computation eliminates combinations producing no vertex of the new feasible region, based on the property that an edge or face of a bounded feasible region lies inside the convex hull of its vertices. Though the computational complexity is not reduced, our computation substantially reduces running time in practice, as shown in the experiments.

2 Brief Review of Incremental Approach

The incremental approach [8] starts with computing the feasible region for an initialized inlier set. It then evaluates each data point one by one to update the feasible region. If the updated feasible region is not empty, the data point is added to the inlier set; it is regarded as an outlier otherwise. How to represent and update the feasible region is a key issue there, which is briefly explained below.

2.1 Representing a Feasible Region Using Its Vertices

A vertex of a feasible region is defined as an intersection point of its facets. For \(p=1,\ldots ,n\) and \(s=1,\ldots ,4\), a facet \(F (p,s)\) of \(R_p\) is defined by

\( F (p,s) = \left\{ {{\varvec{a}}} \in R_p ~\Big |~ h_{(p,s)} ({{\varvec{a}}}) = 0 \right\} . \)

\(F (p,s)\) is a part of the hyperplane \(h_{(p,s)} ({{\varvec{a}}}) = 0\) supporting \(R_p\) (Fig. 3(a)). Similarly, a facet \(F_\varPi (p,s)\) of \(R_\varPi \) is defined by \(F_\varPi \left( p,s\right) = F \left( p,s\right) \cap R_\varPi \) for \(\varPi \subseteq \{ 1,\ldots ,n \}\) and \((p,s) \in \varPi \times \{ 1,\ldots ,4 \}\) (Fig. 3(b)). Note that \(F_\varPi (p,s)\) may be empty for some \((p,s)\).

Fig. 3.
figure 3

Facets of a feasible region. (a): \(h_{(p,s)} ({{\varvec{a}}}) = 0\), \(s=1,\ldots ,4\), are depicted in blue lines. \(\min _{s \in \{1,\ldots ,4\}} h_{(p,s)} ({{\varvec{a}}}) = 0\) is depicted in yellow, while \(\max _{s \in \{1,\ldots ,4\}} h_{(p,s)} ({{\varvec{a}}}) = 0\) is depicted in pink. (Color figure online)

A vertex of \(R_\varPi \) is given as the intersection of \(d+1\) facets. The set \(V_\varPi \) of the vertices of \(R_\varPi \) is defined by

\( V_\varPi = \left\{ {{\varvec{a}}} \in \mathbb {R}^{d+1} ~\Big |~ \exists \, \{ (p_1,s_1), \ldots , (p_{d+1},s_{d+1}) \} \subseteq \varPi \times \{1,\ldots ,4\} \text{ such that } \bigcap _{\lambda =1}^{d+1} F_\varPi (p_\lambda ,s_\lambda ) = \{ {{\varvec{a}}} \} \right\} . \)    (2)

See Fig. 4 for an illustration of \(V_\varPi \). Each combination of \(d+1\) facets determining an element in \(V_\varPi \) is indicated by an element in \(\varPsi _\varPi \), which is defined by

\( \varPsi _\varPi = \left\{ \{ (p_1,s_1), \ldots , (p_{d+1},s_{d+1}) \} \subseteq \varPi \times \{1,\ldots ,4\} ~\Big |~ \bigcap _{\lambda =1}^{d+1} F_\varPi (p_\lambda ,s_\lambda ) \text{ is a single point} \right\} . \)    (3)

In this way, the feasible region of the inlier set \(\varPi \) can be represented by its vertices \(V_\varPi \) with the help of \(\varPsi _\varPi \). Note that different elements in \(\varPsi _\varPi \) may determine the same element of \(V_\varPi \).
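Concretely, since each \(h_{(p,s)} ({{\varvec{a}}}) = 0\) is linear in \({{\varvec{a}}}\), a candidate vertex is the solution of a \((d+1) \times (d+1)\) linear system. A sketch of this computation (ours, with assumed names; the paper does not prescribe a particular solver):

```python
# Sketch: a candidate vertex is the unique solution of the d+1 linear
# equations h_{(p_lam, s_lam)}(a) = 0, i.e. the Vandermonde-like system
#   sum_l (i_p + x_s)**l * a_l = j_p + y_s.
# Plain Gauss-Jordan elimination; None means no unique intersection.

CORNERS = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]

def intersection_point(facet_indices, points):
    """facet_indices: d+1 pairs (p, s) with s = 0..3; points[p] = (i_p, j_p)."""
    n = len(facet_indices)
    rows = []
    for p, s in facet_indices:
        i_p, j_p = points[p]
        x_s, y_s = CORNERS[s]
        # Row of the augmented system [M | b].
        rows.append([(i_p + x_s)**l for l in range(n)] + [j_p + y_s])
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(rows[r][col]))
        if abs(rows[piv][col]) < 1e-12:
            return None  # the hyperplanes do not meet in a single point
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(n):
            if r != col:
                factor = rows[r][col] / rows[col][col]
                rows[r] = [v - factor * w for v, w in zip(rows[r], rows[col])]
    return [rows[r][n] / rows[r][r] for r in range(n)]
```

For example, with \(d=1\), the facets \((p,s) = (0,2)\) of the point (0, 0) and \((1,4)\) of the point (2, 2) intersect at \({{\varvec{a}}} = (-2, 3)\), while two parallel facets of the same data point return no unique point.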

Fig. 4.
figure 4

Vertices \(V_\varPi \) (yellow) of \(R_\varPi \) (\(\varPi = \{1,2,3\}\)). (Color figure online)

2.2 Tracking the Vertices of the Feasible Region

Theorem 1 indicates that \(\varPsi _\varPi \) plays an important role in tracking the vertices of the updated feasible region when a new inlier (represented by \(p^*\)) comes in. Note that \(R_\varPi \) is almost always bounded (see [8]).

Theorem 1

(Sekiya and Sugimoto [8]). For \(\varPi \subsetneq \{1,\ldots ,n\}\) such that \(R_\varPi \) is bounded and \(p^*\in \{1,\ldots ,n\} {\setminus } \varPi \), \(\varPsi _{\varPi \cup \{p^*\}} \subseteq \varPsi _{\varPi } \cup \varPhi _{\varPi , p^*}^1 \cup \varPhi _{\varPi , p^*}^2\), where

\( \varPhi _{\varPi ,p^*}^1 = \left\{ \{ (p_1,s_1), \ldots , (p_d,s_d), (p^*,s^*) \} ~\Big |~ \{ (p_1,s_1), \ldots , (p_d,s_d) \} \subseteq \varPi \times \{1,\ldots ,4\}, \ \bigcap _{\lambda =1}^{d} F_\varPi (p_\lambda ,s_\lambda ) \not = \emptyset , \ s^*\in \{1,\ldots ,4\} \right\} , \)

\( \varPhi _{\varPi ,p^*}^2 = \left\{ \{ (p_1,s_1), \ldots , (p_{d-1},s_{d-1}), (p^*,s_1^*), (p^*,s_2^*) \} ~\Big |~ \{ (p_1,s_1), \ldots , (p_{d-1},s_{d-1}) \} \subseteq \varPi \times \{1,\ldots ,4\}, \ \bigcap _{\lambda =1}^{d-1} F_\varPi (p_\lambda ,s_\lambda ) \not = \emptyset , \ s_1^*, s_2^*\in \{1,\ldots ,4\}, \ s_1^*\not = s_2^*\right\} . \)

A set \(\left\{ \left( p_1,s_1\right) , \ldots , \left( p_\omega ,s_\omega \right) \right\} \) with \(\omega = d\) (resp. \(\omega = d-1\)) corresponds to an edge (resp. a 2-dimensional face) of \(R_\varPi \), namely \(\bigcap _{\lambda =1}^\omega F_\varPi (p_\lambda ,s_\lambda )\) (the intersection of \(\omega \) facets). An element in \(\varPhi _{\varPi ,p^*}^1\) (resp. \(\varPhi _{\varPi ,p^*}^2\)) is thus the combination of an edge (resp. a face) of \(R_\varPi \) and one facet (resp. two facets) of \(R_{p^*}\). Theorem 1 therefore indicates that a vertex of \(R_{\varPi \cup \{p^*\}}\) is either a vertex of \(R_\varPi \) or is obtained as the intersection point of an edge (resp. a 2-dimensional face) of \(R_\varPi \) with one facet (resp. two facets) of \(R_{p^*}\).

3 Efficient Update of Vertices of Feasible Region

Sekiya and Sugimoto [8] evaluate whether each element in \(\varPsi _\varPi \cup \varPhi _{\varPi ,p^*}^1 \cup \varPhi _{\varPi ,p^*}^2\) satisfies the condition in Eq. (3) to extract the elements of \(\varPsi _{\varPi \cup \{p^*\}}\). When \(R_\varPi \) is bounded, any edge or face of \(R_\varPi \) lies inside the convex hull of the vertices of \(R_\varPi \) on that edge or face (Lemma 1). Based on this, we introduce a computation that eliminates elements in \(\varPhi _{\varPi ,p^*}^1 \cup \varPhi _{\varPi ,p^*}^2\) that cannot be in \(\varPsi _{\varPi \cup \{p^*\}}\). This enables us to compute \(\varPsi _{\varPi \cup \{p^*\}}\) efficiently.

We first define the set of edges (resp. faces) of \(R_\varPi \). Namely, for \(\omega = d\) (edge) and \(\omega = d-1\) (face), we define

\( \varPsi _\varPi ^{(\omega )} = \left\{ \psi \subseteq \varPi \times \{1,\ldots ,4\} ~\Big |~ |\psi | = \omega , \ \bigcap _{(p,s) \in \psi } F_\varPi (p,s) \not = \emptyset \right\} . \)

We note that an edge (resp. a face) is determined as the intersection of \(d\) (resp. \(d-1\)) facets of \(R_\varPi \). For an edge or face \(\psi \in \varPsi _\varPi ^{(\omega )}\) (\(\omega = d, d-1\)), we denote by \(V_\varPi (\psi )\) the set of vertices on \(\psi \):

\( V_\varPi (\psi ) = V_\varPi \cap \bigcap _{(p,s) \in \psi } F_\varPi (p,s) . \)

Let \(\mathrm{Conv} ( V_\varPi (\psi ) )\) denote the convex hull of \(V_\varPi (\psi )\). Then we have Lemma 1 whose proof is provided in Appendix A.

Lemma 1

For \(\varPi \subseteq \{1,\ldots ,n\}\) for which \(R_\varPi \) is bounded and \(\psi \in \varPsi _\varPi ^{(\omega )}\) (\(\omega = d,d-1\)), \(\bigcap _{(p,s) \in \psi } F_\varPi (p,s) \subseteq \mathrm{Conv} (V_\varPi (\psi ))\).

Suppose we are adding a new inlier \(p^*\) to the current inlier set \(\varPi \). For each \(\psi \in \varPsi _{\varPi }^{(d)}\), \(\varPhi _{\varPi ,p^*}^1\) has four elements \(\psi \cup \{(p^*,s^*)\}\), \(s^*= 1,\ldots ,4\). Lemma 1 allows us to identify \(\tilde{s}^*\in \{1,\ldots ,4\}\) such that \(\bigcap _{(p,s) \in \psi } F_\varPi (p,s) \cap F (p^*, \tilde{s}^*) = \emptyset \) (the facet of \(R_{p^*}\) corresponding to \(\tilde{s}^*\) does not intersect with the edge \(\psi \)). We can thus define a subset of \(\varPhi _{\varPi ,p^*}^1\) by eliminating such \(\psi \cup \{(p^*,\tilde{s}^*)\}\) and use it instead of \(\varPhi _{\varPi ,p^*}^1\) itself when computing \(\varPsi _{\varPi \cup \{p^*\}}\). The same argument applies to \(\varPhi _{\varPi ,p^*}^2\).

Fig. 5.
figure 5

Illustration of \(A_{\varPi ,p^*} ( \psi )\). For \(\psi \in \varPsi _\varPi ^{(\omega )}\), \(\bigcap _{(p,s) \in \psi } F_\varPi (p,s)\) is depicted in green and \(V_\varPi (\psi )\) in yellow. For \(s^*= 1,\ldots ,4\), \(h_{(p^*,s^*)} ({{\varvec{a}}}) = 0\) is depicted as a partly solid, partly dotted red line, where the solid part depicts \(F (p^*,s^*)\). In this example, \(s^*\) satisfies (i) in Eq. (4) if \(h_{(p^*,s^*)} ({{\varvec{a}}}) = 0\) runs between the two yellow vertices, and (ii) if \(h_{(p^*,s^*)} ({{\varvec{a}}}) = 0\) is solid on either of the dotted black lines, which are parallel to the \(a_0\) axis and pass through the yellow vertices. Accordingly, \(A_{\varPi ,p^*} ( \psi ) = \{3\}\). (Color figure online)

To exclude \(\tilde{s}^*\) above, we define \(A_{\varPi ,p^*} (\psi )\) for \(\psi \in \varPsi _\varPi ^{(\omega )}\) (\(\omega = d, d-1\)), by

\( A_{\varPi ,p^*} (\psi ) = \left\{ s^*\in \{1,\ldots ,4\} ~\Big |~ \text{(i) } \exists \, {{\varvec{a}}}, {{\varvec{a}}}' \in V_\varPi (\psi ) \text{ s.t. } h_{(p^*,s^*)} ({{\varvec{a}}}) \le 0 \le h_{(p^*,s^*)} ({{\varvec{a}}}'), \text{ and (ii) } \exists \, {{\varvec{a}}} \in V_\varPi (\psi ) \text{ s.t. } s^*\in \mathop {\mathrm {arg\,min}}\nolimits _{s' \in \{1,\ldots ,4\}} h_{(p^*,s')} ({{\varvec{a}}}) \cup \mathop {\mathrm {arg\,max}}\nolimits _{s' \in \{1,\ldots ,4\}} h_{(p^*,s')} ({{\varvec{a}}}) \right\} . \)    (4)

With \(A_{\varPi ,p^*} \left( \psi \right) \), we can identify the facets of \(R_{p^*}\) that potentially intersect with the edge/face \(\psi \) (see Fig. 5). Condition (i) in Eq. (4) means that the hyperplane \(h_{(p^*,s^*)} ({{\varvec{a}}}) = 0\) runs between two vertices of \(\psi \), or passes through one of its vertices. Since the edge or face lies inside the convex hull of its vertices (Lemma 1), it follows that the hyperplane intersects with \(\psi \). This, however, does not imply that the facet determined by \(s^*\) intersects with \(\psi \), since the facet is only a part of the hyperplane. We therefore have to evaluate whether the facet indeed intersects with \(\psi \), which is the role of (ii). Condition (ii) means that, for at least one vertex \({{\varvec{a}}} \in V_\varPi (\psi )\), \(s^*\) attains the maximum or minimum of \(h_{(p^*,s')} ({{\varvec{a}}})\), \(s'\in \{1,\ldots ,4\}\); unless (ii) is satisfied, the intersection of the hyperplane with \(\psi \) lies outside the facet.
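A minimal sketch of how \(A_{\varPi ,p^*} (\psi )\) could be computed from the values \(h_{(p^*,s^*)} ({{\varvec{a}}})\) at the vertices of \(\psi \) (assumed data layout and function name; not the authors' code):

```python
# Sketch: the pruning set A_{Pi,p*}(psi) of Eq. (4).
# vertex_values[s][k] holds h_{(p*, s)}(a_k) for the k-th vertex a_k
# in V_Pi(psi); only the facet indices returned here may intersect psi.

def pruning_set(vertex_values):
    """vertex_values: 4 lists (one per s* = 0..3) over the vertices of psi."""
    n_vertices = len(vertex_values[0])
    result = set()
    for s_star in range(4):
        vals = vertex_values[s_star]
        # (i): the hyperplane h_{(p*,s*)} = 0 separates (or touches)
        #      the vertices on psi.
        cond_i = min(vals) <= 0 <= max(vals)
        # (ii): at some vertex, s* attains the min or max over s'.
        cond_ii = any(
            vals[k] == min(vertex_values[s][k] for s in range(4)) or
            vals[k] == max(vertex_values[s][k] for s in range(4))
            for k in range(n_vertices)
        )
        if cond_i and cond_ii:
            result.add(s_star)
    return result
```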

Using only \(s^*\in A_{\varPi ,p^*} \left( \psi \right) \) for each \(\psi \), we define the subsets of \(\varPhi _{\varPi ,p^*}^1\) and \(\varPhi _{\varPi ,p^*}^2\) that suffice for computing \(\varPsi _{\varPi \cup \{p^*\}}\):

\( X_{\varPi ,p^*}^1 = \left\{ \psi \cup \{ (p^*,s^*) \} ~\Big |~ \psi \in \varPsi _\varPi ^{(d)}, \ s^*\in A_{\varPi ,p^*} (\psi ) \right\} , \qquad X_{\varPi ,p^*}^2 = \left\{ \psi \cup \{ (p^*,s_1^*), (p^*,s_2^*) \} ~\Big |~ \psi \in \varPsi _\varPi ^{(d-1)}, \ s_1^*, s_2^*\in A_{\varPi ,p^*} (\psi ), \ s_1^*\not = s_2^*\right\} . \)

Now we formally prove that no element in \(\varPhi _{\varPi ,p^*}^1 {\setminus } X_{\varPi , p^*}^1\) or \(\varPhi _{\varPi ,p^*}^2 {\setminus } X_{\varPi , p^*}^2\) is in \(\varPsi _{\varPi \cup \{p^*\}}\).

Theorem 2

For \(\varPi \subsetneq \{1,\ldots ,n\}\) such that \(R_\varPi \) is bounded and \(p^*\in \{1,\ldots ,n\} {\setminus } \varPi \), \(\varPsi _{\varPi \cup \{p^*\}} \subseteq \varPsi _{\varPi } \cup X_{\varPi , p^*}^1 \cup X_{\varPi , p^*}^2\).

Proof

Consider \(\psi \in \varPsi _\varPi ^{(\omega )}\) (\(\omega = d, d-1\)). We show \(\bigcap _{(p,s) \in \psi } F_\varPi (p,s) \cap F (p^*,s^*) = \emptyset \) for any \(s^*\in \{1,\ldots ,4\} {\setminus } A_{\varPi ,p^*} (\psi )\). Note that this means \(\psi ' \notin \varPsi _{\varPi \cup \{p^*\}}\) for any \(\psi ' \in \varPhi _{\varPi ,p^*}^{d+1-\omega } {\setminus } X_{\varPi ,p^*}^{d+1-\omega }\).

We first assume that \(s^*\) does not satisfy (i) in Eq. (4): \(h_{(p^*,s^*)} ({{\varvec{a}}}) < 0\) for \(\forall {{\varvec{a}}} \in V_\varPi (\psi )\) or \(h_{(p^*,s^*)} ({{\varvec{a}}}) > 0\) for \(\forall {{\varvec{a}}} \in V_\varPi (\psi )\). From Lemma 1 and the linearity of \(h_{(p^*,s^*)}\), we have \(h_{(p^*,s^*)} ({{\varvec{a}}}) < 0\) for \(\forall {{\varvec{a}}} \in \bigcap _{(p,s) \in \psi } F_\varPi (p,s)\) or \(h_{(p^*,s^*)} ({{\varvec{a}}}) > 0\) for \(\forall {{\varvec{a}}} \in \bigcap _{(p,s) \in \psi } F_\varPi (p,s)\). It follows that \(\bigcap _{(p,s) \in \psi } F_\varPi (p,s) \cap F (p^*,s^*) = \emptyset \).

We next assume that \(s^*\) does not satisfy (ii): \(s^*\notin \mathop {\mathrm {arg\,min}}\nolimits _{s' \in \{1,\ldots ,4\}} h_{(p^*,s')} ({{\varvec{a}}})\) \(\cup \) \(\mathop {\mathrm {arg\,max}}\nolimits _{s' \in \{1,\ldots ,4\}} h_{(p^*,s')} ({{\varvec{a}}})\) for \(\forall {{\varvec{a}}} \in V_\varPi (\psi )\). Lemma 1 and the linearity of \(h_{(p^*,s^*)}\) imply \(s^*\notin \mathop {\mathrm {arg\,min}}\nolimits _{s' \in \{1,\ldots ,4\}} h_{(p^*,s')} ({{\varvec{a}}})\) \(\cup \) \(\mathop {\mathrm {arg\,max}}\nolimits _{s' \in \{1,\ldots ,4\}} h_{(p^*,s')} ({{\varvec{a}}})\) for \(\forall {{\varvec{a}}} \in \bigcap _{(p,s) \in \psi } F_\varPi (p,s)\). It follows that \(\bigcap _{(p,s) \in \psi } F_\varPi (p,s) \cap F (p^*,s^*) = \emptyset \).    \(\square \)

Algorithm 1: Discrete polynomial curve fitting [8] (pseudocode figure).
Algorithm 2: Efficient update of the vertices of the feasible region (pseudocode figure).

4 Algorithm

Algorithm 1 [8] is the incremental approach to solving discrete polynomial curve fitting for a given data point set P and a given degree d. It classifies each data index into two classes: \(\varPi \) (inlier) and \(\varPi ^\complement \) (outlier). \(\varPi \) is first initialized to a set of \(d+1\) data indices for which \(V_\varPi \) and \(\varPsi _\varPi \) are computed at low cost (see [8] for a sufficient condition that \(R_\varPi \) is bounded). In the following loop (Steps 4–12), we add new data indices to either \(\varPi \) or \(\varPi ^\complement \) one by one. When \(\varPi \) is updated, \(V_\varPi \) and \(\varPsi _\varPi \) are also updated. Since \(\varPsi _{\varPi \cup \{p^*\}} \ne \emptyset \) if \(R_{\varPi \cup \{p^*\}} \ne \emptyset \) (see [8]), an inlier set obtained by Algorithm 1 is guaranteed to have no feasible superset.

The purpose of Algorithm 2 is to efficiently compute \(V_{\varPi \cup \{p^*\}}\) and \(\varPsi _{\varPi \cup \{p^*\}}\) in Step 6 of Algorithm 1, i.e., to efficiently compute the vertices of the feasible region updated by an additional data point. Some of the vertices are inherited from the current feasible region. The first loop (Steps 2–7) therefore evaluates each vertex of the current feasible region to check whether it serves as a vertex of the updated feasible region; it suffices to verify that the vertex is inside the feasible region for the additional data point (Step 5), since \(F_\varPi (p,s) \cap R_{p^*} = F_{\varPi \cup \{p^*\}} (p,s)\) for any \((p,s) \in \varPi \times \{1,\ldots ,4\}\). The vertices appearing only in the updated feasible region are obtained from \(X_{\varPi ,p^*}^1 \cup X_{\varPi ,p^*}^2\) in the second loop (Steps 9–16). For each element in \(X_{\varPi ,p^*}^1 \cup X_{\varPi ,p^*}^2\), we first check whether the hyperplanes corresponding to the element intersect at a unique point (Step 10), and if so, we then check whether that intersection point serves as a vertex of the updated feasible region (Step 12). Note that the condition in Step 12 is equivalent to \({{\varvec{a}}} \in \bigcap _{(p,s) \in \psi } F_{\varPi \cup \{p^*\}} (p,s)\); \({{\varvec{a}}} \in \bigcap _{(p,s) \in \psi } F (p,s)\) implies \({{\varvec{a}}} \in R_{p^*}\) since any \(\psi \in X_{\varPi ,p^*}^1 \cup X_{\varPi ,p^*}^2\) contains \((p,s)\) such that \(p = p^*\). The key to the efficiency is Step 8, which reduces the number of iterations in the second loop.
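The first loop of Algorithm 2 can be sketched as follows (a simplification with assumed names, not the authors' code): a vertex of \(R_\varPi \) survives in the updated feasible region iff it lies in \(R_{p^*}\).

```python
# Sketch: Steps 2-7 of the update. A vertex a of R_Pi is inherited by
# R_{Pi ∪ {p*}} iff a lies in R_{p*}, since
# F_Pi(p, s) ∩ R_{p*} = F_{Pi ∪ {p*}}(p, s).

CORNERS = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]

def h(point, s, a):
    """h_{(p,s)}(a) for the data point point = (i_p, j_p), s = 0..3."""
    i_p, j_p = point
    x_s, y_s = CORNERS[s]
    return (j_p + y_s) - sum(a_l * (i_p + x_s)**l for l, a_l in enumerate(a))

def inherited(vertices, p_star_point):
    """Keep the vertices of R_Pi that lie in R_{p*}."""
    kept = []
    for a in vertices:
        vals = [h(p_star_point, s, a) for s in range(4)]
        if min(vals) <= 0 <= max(vals):  # a is in R_{p*}
            kept.append(a)
    return kept
```

For instance, with \(d=1\), the vertex \({{\varvec{a}}} = (-2, 3)\) (the line \(y = -2 + 3x\)) is kept when \(p^*= (1,1)\) but discarded when \(p^*= (1,5)\).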

The computational complexity of this method is the same as that of [8]: \(\mathcal {O} (n^{d+2})\) for a variable number n of data points and a fixed degree d, because \(\mathcal {O} (|X_{\varPi ,p^*}^{\alpha }|) = \mathcal {O} (|\varPhi _{\varPi ,p^*}^{\alpha }|) = \mathcal {O} (|\varPsi _\varPi ^{(\omega )}|)\) for \(\alpha = 1,2\), where \(\omega = d+1-\alpha \). The practical efficiency of the method is evaluated in the next section.

5 Experiments

For \(d=2\), we generated input data sets P for \(n = 200, 400, 600, 800, 1000\) as follows: setting \((a_0,a_1,a_2) = (450,-3.2,0.0064)\), we randomly generated n integer points within \([0,499] \times [0,499]\) so that 80% of the points, called ground-truth inliers, are in \(D (a_0,a_1,a_2)\) while the remaining 20%, called ground-truth outliers, are not, where each ground-truth outlier was generated so that its Euclidean distance from its closest point in \(D (a_0,a_1,a_2)\) is in [1, 4]. In the same way, we generated data sets for \(d=3\), where we used \((a_0,a_1,a_2,a_3) = (250,5,-0.03,4.0 \times 10^{-5})\) to generate the ground-truth inliers and outliers. Examples of P for \(n=200,600,1000\) are shown in Fig. 6. In the experiments, we used a PC with an Intel Xeon 3.7 GHz processor and 64 GB of memory.
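The data-generation procedure above can be sketched as follows (assumed helper names and sampling details; the authors' exact generator may differ):

```python
# Sketch of the data generation: ground-truth inliers are drawn from
# D(a) inside the image; ground-truth outliers are integer points whose
# Euclidean distance to the nearest point of D(a) falls in [1, 4].
import random

CORNERS = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]

def in_D(a, i, j):
    """Membership in the discrete polynomial curve D(a) (Eq. (1))."""
    vals = [(j + ys) - sum(a_l * (i + xs)**l for l, a_l in enumerate(a))
            for xs, ys in CORNERS]
    return min(vals) <= 0 <= max(vals)

def generate(a, n, size=500, inlier_ratio=0.8, seed=0):
    """n integer points in [0, size)^2: inliers on D(a), outliers near it."""
    rng = random.Random(seed)
    curve = [(i, j) for i in range(size) for j in range(size) if in_D(a, i, j)]
    inliers = rng.sample(curve, int(n * inlier_ratio))
    outliers = []
    while len(outliers) < n - len(inliers):
        i, j = rng.randrange(size), rng.randrange(size)
        # Distance to the closest point of D(a), as in the text.
        dist = min(((i - u)**2 + (j - v)**2) ** 0.5 for u, v in curve)
        if 1 <= dist <= 4:
            outliers.append((i, j))
    return inliers, outliers
```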

Fig. 6.
figure 6

Examples of input data sets. Ground-truth inliers are depicted in green while ground-truth outliers in red. (Color figure online)

We applied our proposed method (Algorithm 1 together with Algorithm 2) 100 times to each P to evaluate the efficiency of our introduced computation. At each trial, we randomly initialized \(\varPi \) (Step 1) and selected \(p^*\) (Step 5), where the \(d+1\) data indices in the initial \(\varPi \) were chosen only from the ground-truth inliers. For comparison, we also applied Algorithm 1 alone (Sekiya+ [8]) 100 times to each P using the same initialization and data point selection. We then evaluated the recall (the ratio of ground-truth inliers in the output to all ground-truth inliers) and the computational time.

Tables 1 and 2 show the average of recalls over 100 trials for each P and the average of computational times over 100 trials for the two methods. We remark that the outputs by our proposed method are exactly the same as those by Algorithm 1 alone.

Table 1. Recall of ground-truth inliers (average over 100 trials).
Table 2. Computational time (ms) (average over 100 trials).

We see in Table 2 that our proposed method achieves the same results much faster than Algorithm 1 alone. Algorithm 2 indeed updates the feasible region efficiently thanks to \(|X_{\varPi ,p^*}^1 \cup X_{\varPi ,p^*}^2| < |\varPhi _{\varPi ,p^*}^1 \cup \varPhi _{\varPi ,p^*}^2|\). To visualize this efficiency, we compared \(|X_{\varPi ,p^*}^1 \cup X_{\varPi ,p^*}^2|\) and \(|\varPhi _{\varPi ,p^*}^1 \cup \varPhi _{\varPi ,p^*}^2|\) in each iteration of a trial with \(n=400\) (Fig. 7). We observe that (1) \(|X_{\varPi ,p^*}^1 \cup X_{\varPi ,p^*}^2|\) is significantly smaller than \(|\varPhi _{\varPi ,p^*}^1 \cup \varPhi _{\varPi ,p^*}^2|\) and that (2) \(|X_{\varPi ,p^*}^1 \cup X_{\varPi ,p^*}^2|\) is almost constant, independent of the size of the inlier set. We remark that \(X_{\varPi ,p^*}^1 \cup X_{\varPi ,p^*}^2 = \emptyset \) indicates \(A_{\varPi ,p^*} (\psi ) = \emptyset \) for all \(\psi \in \varPsi _\varPi ^{(\omega )}\) (\(\omega = d,d-1\)).

Fig. 7.
figure 7

\(|X_{\varPi ,p^*}^1 \cup X_{\varPi ,p^*}^2|\) (red) and \(|\varPhi _{\varPi ,p^*}^1 \cup \varPhi _{\varPi ,p^*}^2|\) (green) in each iteration. The horizontal axis is the number of data points already evaluated (i.e., \(|\varPi \cup \varPi ^\complement |\)). The results are from a trial of \(n=400\). (Color figure online)

6 Conclusion

We dealt with the problem of fitting a discrete polynomial curve to 2D data in the presence of outliers. We discussed how to efficiently compute the vertices of the feasible region for an incrementally updated inlier set in the parameter space. Based on the property that an edge or face of a bounded feasible region is inside the convex hull of its vertices, we introduced a computation to facilitate updating the vertices of the feasible region when a new data point is added to the current inlier set. The efficiency of our proposed computation was demonstrated by our experimental results.