1 Introduction

Repeated patterns of parallel lines are frequently found in man-made structures, e.g., tiles on a floor, stairs, brick walls, fences, blinds, windows, chessboards, etc. When a family of parallel scene lines is projected onto the image plane, their projections converge at a common point, i.e., the vanishing point. The geometric analysis of parallel scene lines in the image plane has several computer vision applications, e.g., 3D reconstruction, rectification, camera calibration, pattern recognition, segmentation, etc. Hence, a significant amount of research has been conducted in the literature [1–11] on accurately computing parallel lines in the image plane based on vanishing point detection (see [5] for details). Most of the methods in the literature detect a set of parallel lines by calculating a least squares solution for the vanishing point, and then compute a Maximum Likelihood Estimate (MLE) using a non-linear Levenberg-Marquardt optimization algorithm to find an updated estimate of the location of the vanishing point that minimises a geometric error in the image itself [7].

In contrast, there has been much more limited research [3, 4, 8, 9, 12] on accurately computing equidistant parallel lines in the image plane. This involves fitting a model that both predicts the location of the vanishing point and determines the common projective spacing amongst the set of lines. Suboptimal performance can be achieved by treating this problem as a two stage process [8, 12] that first locates the vanishing point (or points in the case of Pilu and Adams [12]) and then recovers the equal spacing in that context. A better approach is to fit a combined model, representing the pencil of equally spaced lines, directly to the image data. The standard method for doing this [4] employs a strategy similar to that used in vanishing point detection: it first finds an approximate least squares solution based on an algebraic distance model and then optimizes that solution locally to obtain an MLE for the group of equidistant parallel lines. Again, the latter involves minimising the geometric error in the image space using non-linear optimization. There are both obvious computational and less obvious robustness issues with this approach; as the MLE is a local rather than global solution, it depends crucially on the initial linear algebraic approximation. Even though the latter issue can be mitigated to a great extent by an initial conditioning of the image data, it is possible that for some line combinations and imaging errors the final MLE solution is far from globally optimal.

In this paper, we present a novel pseudo-geometric formulation for equidistant parallel lines which allows direct linear evaluation against fitted lines in the image space, thus providing improved robustness of fit without the need for non-linear optimization. Such a formulation is useful as it can form the basis of an equidistant parallel line detection scheme. Such a detection process poses a chicken-and-egg problem: we need to label and number a set of parallel lines in order to fit a model to them, and yet until they have been accurately fitted to the model we cannot robustly identify them. In [4, 12], this problem is solved using a RANSAC (RANdom SAmple Consensus; Fischler and Bolles [13]) approach in which large numbers of putative line assignments are fitted and tested against the model and further supporting evidence is sought and evaluated. While the details of the detection process are outside the scope of this paper, and we make no further contribution in this regard, it is clear that a method able to rapidly and robustly determine the projective spacing model from a minimal set of assigned line combinations will have computational and performance advantages.

This paper is organised as follows. Section 2 presents the literature review. Section 3 explains the mathematical fundamentals of the algebraic and pseudo-geometric formulations. Section 4 describes the experimental results in detail. Sections 4.2 and 4.3 demonstrate the robustness of the pseudo-geometric versus the algebraic formulation on the simulated and checkerboard grid datasets respectively. Section 4.4 illustrates an application of our work to real scenes from the York dataset. Section 4.5 presents the computational advantage of the linear pseudo-geometric formulation over a non-linear optimized solution. Section 5 concludes the paper.

2 Prior Art

The most widely used standard method [4] selects equidistant parallel lines from a set of parallel lines by determining an algebraic least squares solution based on a formulation using the line at infinity (described in Sect. 3.1). Next, it applies a non-linear optimization to the least squares solution to minimize the distance between the equidistant model parallel lines and the image lines. Pilu and Adams [12] use a similar formulation, except that they first find the line at infinity directly in the image as the line connecting two sets of vanishing points. Also, in their method the non-linear optimization minimizes a description length cost function rather than the geometric error; hence, the obtained equidistant model parallel lines will not be the best fit for the image lines. Clark and Mirmehdi [8] rectify perspective views of text using vanishing points (similar to [4]). However, in contrast to [4], they employ a two stage process: they initially use paragraph recognition to find line positions, and then map the midpoints of the lines onto a vertical baseline. Subsequently, a least squares fitting refines the mapped midpoints to determine the tilt of the document and the vertical vanishing point. Herout et al. [9], similarly to Pilu and Adams [12], solve the chicken-and-egg problem of finding the projective spacing and order of the lines, whose optimal values are determined by linear regression. Hansard et al. [3] detect checkerboard grids to calibrate Time-of-Flight images. First, oriented clusters of image gradients are used to determine two pencils of lines that represent the two orthogonal sets of equidistant parallel lines of the checkerboard grid. Next, a Hough Transform based analysis is used to select an acceptable solution for the grid based on the fixed characteristics of the checkerboard pattern. A recent work [14] applies the idea of line parallelism to parallel planar curves for automatic single-view calibration and rectification of highway images.

Our Contribution. In what follows, we present a remarkably simple reformulation of the equidistant pencil of parallel lines under perspective projection as a linear interpolation of their homogeneous forms. Thus, if the first (\(0^{th}\) in the formulation below) and \(n^{th}\) lines in a sequence are known, equally spaced intermediaries can be represented by direct linear interpolation. Conversely, the homogeneous descriptions of the first and \(n^{th}\) lines in a sequence form the parameters of a model that fits a set of intermediary equally spaced lines. Armed with this simple reformulation, we can exploit the fact that, in their normalized form, the dot product of a homogeneous line and point gives directly their perpendicular separation, and thereby derive a pseudo-geometric error model. The error is only pseudo-geometric because, in this formulation, the interpolated line representation is not guaranteed to be normalized and, more importantly, its magnitude (deviation from unit normalisation) may vary linearly in space between the descriptions of the first and \(n^{th}\) lines. In other words, the normalisation of the first and \(n^{th}\) lines used to describe the pencil may not be the same. Nevertheless, we show in a sequence of experiments with synthetic and real data that a linear solution using the pseudo-geometric formulation is superior to the algebraic solution and performs close to the non-linear solution of the true geometric error.

3 Mathematical Foundation

We represent a point in homogeneous form as a vector \(\mathbf {p}=(x,y,z)\) and a line in general form as \(\mathbf {l}=(a,b,c)\). \(\mathbf {p}_{i,j}\) denotes the \(j^{th}\) point of the \(i^{th}\) line \(\mathbf {l}_{i}\). A matrix is represented with a bold upper-case letter, e.g., \(\mathbf {M}\). Following the formulation presented in [4], a group of equally spaced parallel lines on the scene plane can be represented as

$$\begin{aligned} ax+by+\lambda =0 \end{aligned}$$
(1)

where \(\lambda =0,1,2,\ldots ,n\) is a scalar line index, restricted here to integer values so as to generate equally spaced lines, and n corresponds to a representative expected number of lines in the image. In matrix notation, the homogeneous form of the line can be represented as

$$\begin{aligned} \left[ \begin{array}{c} a\\ b\\ \lambda \end{array}\right] =\begin{bmatrix}a&0\\ b&0\\ 0&1 \end{bmatrix}\begin{bmatrix}1\\ \lambda \end{bmatrix} \end{aligned}$$
(2)

Under perspective imaging, the line transformation is

$$\begin{aligned} \mathbf {l}_{\lambda }=\mathbf {H}^{-\text {T}}\left[ \begin{array}{c} a\\ b\\ \lambda \end{array}\right] =\mathbf {H}^{-\text {T}}\begin{bmatrix}a&0\\ b&0\\ 0&1 \end{bmatrix}\begin{bmatrix}1\\ \lambda \end{bmatrix}=\mathbf {A}\begin{bmatrix}1\\ \lambda \end{bmatrix} \end{aligned}$$
(3)

where \(\mathbf {H}\) is the homography matrix from the scene plane to the image plane, and \(\mathbf {A}\) is a 3 by 2 matrix whose first column represents the line \(\mathbf {l}_{0}\) and whose second column represents the vanishing line of the plane, \(\mathbf {l}_{\infty }\), also referred to as the line at infinity. Thus, the group (pencil) of equally spaced parallel lines can be written as

$$\begin{aligned} \mathbf {l}_{\lambda }=\mathbf {l}_{0}+\lambda \mathbf {l}_{\infty } \end{aligned}$$
(4)

3.1 Algebraic Formulation

The state-of-the-art algorithm described in [4] does not explicitly disclose the solution for the matrix \(\mathbf {A}\) of Eq. 3. Therefore, we carefully analysed their work and the referenced material in order to present their formulation explicitly. We do this in order to show that a subtly different formulation, given in the next section, involving the dot product against points forming the line gives a better geometric solution. Hence, we refer to the original formulation as algebraic and our alternative as pseudo-geometric. The equality expressed in Eq. 3 leads to the expectation that the vector cross product of the entities involved is zero

$$ \mathbf {l}_{\lambda }\times \mathbf {A}\begin{bmatrix}1\\ \lambda \end{bmatrix}=0 $$

Since \(\mathbf {l}_{\lambda }=(a_{\lambda },b_{\lambda },c_{\lambda })\), and \(\mathbf {A}=\begin{bmatrix}\mathbf {l}_{0}^{\text {T}}&\mathbf {l}_{\infty }^{\text {T}}\end{bmatrix}\), \(\mathbf {l}_{0}=(a_{0},b_{0},c_{0})\) and \(\mathbf {l}_{\infty }=(a_{\infty },b_{\infty },c_{\infty })\) it follows that

$$ \begin{bmatrix}a_{\lambda }\\ b_{\lambda }\\ c_{\lambda } \end{bmatrix}\times \begin{bmatrix}a_{0}&a_{\infty }\\ b_{0}&b_{\infty }\\ c_{0}&c_{\infty } \end{bmatrix}\begin{bmatrix}1\\ \lambda \end{bmatrix}=0 $$
$$ \begin{bmatrix}a_{\lambda }\\ b_{\lambda }\\ c_{\lambda } \end{bmatrix}\times \begin{bmatrix}a_{0}+a_{\infty }\lambda \\ b_{0}+b_{\infty }\lambda \\ c_{0}+c_{\infty }\lambda \end{bmatrix}=0 $$

Solving the cross product using \(\mathbf {u}\times \mathbf {v}=\hat{\mathbf {x}}(u_{y}v_{z}-u_{z}v_{y})+\hat{\mathbf {y}}(u_{z}v_{x}-u_{x}v_{z})+\hat{\mathbf {z}}(u_{x}v_{y}-u_{y}v_{x})\), where \(\hat{\mathbf {x}},\hat{\mathbf {y}},\hat{\mathbf {z}}\) are the unit vectors along the x, y and z axes of the Cartesian coordinate frame, we obtain the three equations

$$\begin{aligned} \begin{bmatrix}b_{\lambda }(c_{0}+c_{\infty }\lambda )-c_{\lambda }(b_{0}+b_{\infty }\lambda )\\ c_{\lambda }(a_{0}+a_{\infty }\lambda )-a_{\lambda }(c_{0}+c_{\infty }\lambda )\\ a_{\lambda }(b_{0}+b_{\infty }\lambda )-b_{\lambda }(a_{0}+a_{\infty }\lambda ) \end{bmatrix}=0 \end{aligned}$$
(5)

however, only two rows are linearly independent; hence, by rearranging and using the first two rows we can build a system of equations \(\mathbf {Z}_{b}\mathbf {x}_{b}=0\), where the subscript b denotes the baseline method, as shown below

$$\begin{aligned} \begin{bmatrix}0&-c_{\lambda _{1}}&b_{\lambda _{1}}&0&-\lambda _{1}c_{\lambda _{1}}&\lambda _{1}b_{\lambda _{1}}\\ c_{\lambda _{1}}&0&-a_{\lambda _{1}}&\lambda _{1}c_{\lambda _{1}}&0&-\lambda _{1}a_{\lambda _{1}}\\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\ 0&-c_{\lambda _{i}}&b_{\lambda _{i}}&0&-\lambda _{i}c_{\lambda _{i}}&\lambda _{i}b_{\lambda _{i}}\\ c_{\lambda _{i}}&0&-a_{\lambda _{i}}&\lambda _{i}c_{\lambda _{i}}&0&-\lambda _{i}a_{\lambda _{i}}\\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\ 0&-c_{\lambda _{N}}&b_{\lambda _{N}}&0&-\lambda _{N}c_{\lambda _{N}}&\lambda _{N}b_{\lambda _{N}}\\ c_{\lambda _{N}}&0&-a_{\lambda _{N}}&\lambda _{N}c_{\lambda _{N}}&0&-\lambda _{N}a_{\lambda _{N}} \end{bmatrix}\begin{bmatrix}a_{0}\\ b_{0}\\ c_{0}\\ a_{\infty }\\ b_{\infty }\\ c_{\infty } \end{bmatrix}=0 \end{aligned}$$
(6)

where \(\lambda _{i}\), \(i=1,2,\ldots ,N\), represents the index of the \(i^{th}\) line and N is the number of image lines used for fitting. Equation 6 is solved using Singular Value Decomposition (SVD) to obtain the least squares solution for \(\mathbf {x}_{b}\), i.e., \(\mathbf {l}_{0}=(a_{0},b_{0},c_{0})\) and \(\mathbf {l}_{\infty }=(a_{\infty },b_{\infty },c_{\infty })\). Writing the SVD of \(\mathbf {Z_{b}}\) as \(\mathbf {USV^{T}}\), where \(\mathbf {U}\) and \(\mathbf {V}\) are orthogonal matrices and \(\mathbf {S}\) is a diagonal matrix of non-negative singular values [7], the solution is given by the column of \(\mathbf {V}\) corresponding to the smallest singular value in \(\mathbf {S}\). This formulation minimizes an algebraic error, as the size of the vector cross product has no geometric interpretation as the solution deviates from the measured lines in the image. The lines \(\mathbf {l}_{0}=(a_{0},b_{0},c_{0})\) and \(\mathbf {l}_{\infty }=(a_{\infty },b_{\infty },c_{\infty })\) are then substituted in Eq. 4 to generate all the required equidistant model lines for integer values of \(\lambda =0,1,2,\ldots ,n\).
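To make this concrete, the following Python sketch assembles \(\mathbf {Z}_{b}\) from a set of labelled homogeneous image lines and recovers \(\mathbf {l}_{0}\) and \(\mathbf {l}_{\infty }\) from the right singular vector associated with the smallest singular value. The function names and the NumPy implementation are our own illustrative choices; the paper does not prescribe a particular implementation.

```python
import numpy as np

def fit_algebraic(lines, lambdas):
    """Least squares fit of (l_0, l_inf) from homogeneous image lines (Eq. 6).

    lines   : (N, 3) array, each row a homogeneous line (a, b, c)
    lambdas : (N,) array of integer line indices
    Returns (l_0, l_inf) as 3-vectors, defined up to a common scale.
    """
    rows = []
    for (a, b, c), lam in zip(lines, lambdas):
        # the two linearly independent rows of the cross-product constraint (Eq. 5)
        rows.append([0, -c,  b, 0, -lam * c,  lam * b])
        rows.append([c,  0, -a, lam * c, 0, -lam * a])
    Zb = np.asarray(rows, dtype=float)

    # SVD: the solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(Zb)
    x = Vt[-1]
    return x[:3], x[3:]   # l_0 = (a0, b0, c0), l_inf = (a_inf, b_inf, c_inf)

def pencil_line(l0, linf, lam):
    """Generate the lambda-th equidistant model line via Eq. 4."""
    return l0 + lam * linf
```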

In practice, to mitigate the shortcomings of an algebraic formulation, it is recommended to condition the image data (line data) prior to applying this linear method in order to improve its accuracy and robustness. This involves finding the minimum and maximum x and y coordinate values and scaling their range to unity; the details of the conditioning are presented in the experimental Sect. 4.1.

3.2 Pseudo-geometric Formulation

We propose a novel formulation of the equidistant projective spacing model that allows direct pseudo-geometric linear evaluation against the fitted edge lines in the image, thus providing improved robustness of fit without the need for a non-linear optimization. The key idea of our formulation is to exploit the dot product between the interpolated lines of the equidistant projective spacing model and points on edge lines in the image. The mathematical foundation of the proposed formulation is as follows. First, we rework the formulation to interpolate between real lines, avoiding the use of the line at infinity (an alternative formulation using the line at infinity and the dot product is also possible and is considered in the experimental section). According to Eq. 4, for line n we get

$$\begin{aligned} \mathbf {l}_{n}=\mathbf {l}_{0}+n\mathbf {l}_{\infty } \end{aligned}$$
(7)

Rearranging Eq. 7 gives

$$\begin{aligned} \mathbf {l}_{\infty }=\frac{\mathbf {l}_{n}-\mathbf {l}_{0}}{n} \end{aligned}$$
(8)

Substituting Eq. 8 into Eq. 4 to eliminate the vanishing line \(\mathbf {l}_{\infty }\) gives

$$\begin{aligned} \mathbf {l}_{\lambda }&=\mathbf {l}_{0}+\lambda \frac{(\mathbf {l}_{n}-\mathbf {l}_{0})}{n}=\frac{n\mathbf {l}_{0}+\lambda (\mathbf {l}_{n}-\mathbf {l}_{0})}{n}=\frac{(n-\lambda )\mathbf {l}_{0}+\lambda \mathbf {l}_{n}}{n} \end{aligned}$$
(9)

which reformulates the problem as an interpolation between a pair of real lines \(\mathbf {l}_{0}\) and \(\mathbf {l}_{n}\), or any other pair of labelled lines separated by integer values of \(\lambda \). We favour this formulation as it bounds the extent of the model to lie within the extent of the image and close to the lines under consideration. The chosen n need only be as large as, or slightly larger than, the number of lines under consideration. The second part of our reformulation is to use the dot product of the interpolated line with a point \(\mathbf {p}=(x,y,1)\) on a line in the image. Ideally, this dot product should be zero, as the line model should pass through all edge points that define the equidistant lines

$$ \mathbf {l}_{\lambda }\cdot \mathbf {p}=0 $$

as for a normalized line and point the dot product represents the minimum orthogonal distance between the line and the point [7]. This expands to:

$$ \frac{(n-\lambda )\mathbf {l}_{0}+\lambda \mathbf {l}_{n}}{n}\cdot (x,y,1)=0 $$
$$ (n-\lambda )\mathbf {l}_{0}\cdot (x,y,1)+\lambda \mathbf {l}_{n}\cdot (x,y,1)=0 $$

where \(\mathbf {l}_{0}=(a_{0},b_{0},c_{0})\) and \(\mathbf {l}_{n}=(a_{n},b_{n},c_{n})\),

$$\begin{aligned} (n-\lambda )a_{0}x+(n-\lambda )b_{0}y+(n-\lambda )c_{0}+\lambda a_{n}x+\lambda b_{n}y+\lambda c_{n}=0 \end{aligned}$$
(10)

Considering multiple points on each line in the image, Eq. 10 can be expanded in matrix form as \(\mathbf {Zx}=0\), as shown below,

$$\begin{aligned} \begin{bmatrix}(n-\lambda _{1})x_{1,1}&(n-\lambda _{1})y_{1,1}&(n-\lambda _{1})&x_{1,1}\lambda _{1}&y_{1,1}\lambda _{1}&\lambda _{1}\\ (n-\lambda _{1})x_{1,2}&(n-\lambda _{1})y_{1,2}&(n-\lambda _{1})&x_{1,2}\lambda _{1}&y_{1,2}\lambda _{1}&\lambda _{1}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots \\ (n-\lambda _{i})x_{i,j}&(n-\lambda _{i})y_{i,j}&(n-\lambda _{i})&x_{i,j}\lambda _{i}&y_{i,j}\lambda _{i}&\lambda _{i}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots \\ (n-\lambda _{N})x_{N,M}&(n-\lambda _{N})y_{N,M}&(n-\lambda _{N})&x_{N,M}\lambda _{N}&y_{N,M}\lambda _{N}&\lambda _{N} \end{bmatrix}\begin{bmatrix}a_{0}\\ b_{0}\\ c_{0}\\ a_{n}\\ b_{n}\\ c_{n} \end{bmatrix}=0 \end{aligned}$$
(11)

where \(\lambda _{i}\), \(i=1,2,\ldots ,N\), is the index of a line represented by M points with coordinates \(x_{i,j}\) and \(y_{i,j}\), \(j=1,2,\ldots ,M\). Hence, \(\mathbf {Z}\) is an \(NM\times 6\) matrix and \(\mathbf {x}\) is a \(6\times 1\) vector. Again, the SVD of the matrix \(\mathbf {Z}\) in Eq. 11 gives the required least squares solution for \(\mathbf {x}\). If we use a pair of endpoints to represent each line then at least three lines are needed. The solution, \(\mathbf {l}_{0}=(a_{0},b_{0},c_{0})\) and \(\mathbf {l}_{n}=(a_{n},b_{n},c_{n})\), is substituted in Eq. 9 to generate all the required equidistant model lines for integer values of \(\lambda =0,1,2,\ldots ,n\), i.e.,

$$\begin{aligned} \begin{array}{ccc} a_{\lambda }=\frac{(n-\lambda )a_{0}+\lambda a_{n}}{n},&b_{\lambda }=\frac{(n-\lambda )b_{0}+\lambda b_{n}}{n},&c_{\lambda }=\frac{(n-\lambda )c_{0}+\lambda c_{n}}{n}\end{array} \end{aligned}$$
(12)

where n is chosen to be representative of the number of lines in the image.
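As an illustration, the sketch below builds the matrix \(\mathbf {Z}\) of Eq. 11 from the endpoints of a set of labelled image lines, solves for \(\mathbf {l}_{0}\) and \(\mathbf {l}_{n}\) by SVD, and interpolates the model lines of the pencil using Eq. 12. The function names are hypothetical and the code is a minimal sketch of the formulation, not the authors' implementation.

```python
import numpy as np

def fit_pseudo_geometric(points, lambdas, n):
    """Linear pseudo-geometric fit of (l_0, l_n) from points on image lines (Eq. 11).

    points  : list of (M_i, 2) arrays, the (x, y) points (e.g. two endpoints) of each line
    lambdas : list of integer indices, one per line
    n       : index of the last line of the pencil (lambda runs 0..n)
    """
    rows = []
    for pts, lam in zip(points, lambdas):
        for x, y in pts:
            # one row of Z per point, per Eq. 11
            rows.append([(n - lam) * x, (n - lam) * y, (n - lam),
                         lam * x, lam * y, lam])
    Z = np.asarray(rows, dtype=float)

    _, _, Vt = np.linalg.svd(Z)
    x = Vt[-1]
    return x[:3], x[3:]            # l_0 and l_n

def interpolate_line(l0, ln, lam, n):
    """Equidistant model line for index lam via Eq. 12 (linear interpolation)."""
    return ((n - lam) * l0 + lam * ln) / n
```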

In practice, the line representations recovered by this approach will not be normalized, and so the dot product between them and the normalized point (the point is already normalized since, by design, its \(3^{rd}\) element is unity) provides a scaled estimate of the geometric error; furthermore, the degree of scaling is itself linearly interpolated according to the pencil spacing model. It is this change of scale that prevents the method from providing a true geometric error, and this is why we refer to it as a pseudo-geometric method. A normalised line equation in homogeneous form \(\hat{\mathbf {l}}_{\lambda }\) is computed as

$$ \hat{\mathbf {l}}_{\lambda }=\left( \frac{a_{\lambda }}{\sqrt{a_{\lambda }^{2}+b_{\lambda }^{2}}},\frac{b_{\lambda }}{\sqrt{a_{\lambda }^{2}+b_{\lambda }^{2}}},\frac{c_{\lambda }}{\sqrt{a_{\lambda }^{2}+b_{\lambda }^{2}}}\right) $$

and thus, the orthogonal distance between the normalised version of the line and a point is \(\hat{\mathbf {l}}_{\lambda }\cdot \mathbf {p}\). This expands to:

$$ \frac{\left( a_{\lambda },b_{\lambda },c_{\lambda }\right) }{\sqrt{a_{\lambda }^{2}+b_{\lambda }^{2}}}\cdot (x,y,1) $$

which shows that the effect of the lack of normalisation on the geometric error is to scale it by \(s_{\lambda }=\sqrt{a_{\lambda }^{2}+b_{\lambda }^{2}}\). This would not be a problem if the scaling itself were constant across the image region covered by the lines, as minimising a constantly scaled geometric error would give the same result as minimising the geometric error itself. As the formulation of the equidistant pencil model is linear, the scaling also varies linearly as

$$ s_{\lambda }=\frac{(n-\lambda )s_{0}+\lambda s_{n}}{n} $$

Thus, in practice we wish the values of \(s_{0}\) and \(s_{n}\) that result from the least squares solution to be similar.
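Since the quality of the pseudo-geometric approximation depends on how similar \(s_{0}\) and \(s_{n}\) are, a simple diagnostic is to compute their ratio from the recovered solution (this is the quantity reported later in Table 3). A minimal sketch, assuming the line vectors are stored as NumPy arrays:

```python
import numpy as np

def scale_ratio(l0, ln):
    """Ratio of normalisation scales s_0 / s_n; values near 1.0 indicate the
    pseudo-geometric error is close to the true geometric error."""
    s0 = np.hypot(l0[0], l0[1])   # sqrt(a0^2 + b0^2)
    sn = np.hypot(ln[0], ln[1])   # sqrt(an^2 + bn^2)
    return s0 / sn
```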

4 Experiments

We have conducted a series of experiments to compare the performance of the algebraic and pseudo-geometric formulations. The main difference between the two approaches is that the former solves a set of equations based on the homogeneous line representations themselves, whereas the latter uses point locations associated with the line. In our comparative experiments, we use the two end points that define the physical extent of a fitted line in the image. In the following sections, we refer to the linear pseudo-geometric formulation as “our work”, the algebraic formulation as “baseline”, and a combined approach that uses the line at infinity of Eq. 4 with the dot product to derive an alternative system of equations similar to Eq. 11 as “our infinity”. The corresponding non-linear optimized versions are referred to as “our work optimized”, “baseline optimized” and “our infinity optimized”. By optimized we mean that we applied the same non-linear least squares geometric error minimisation, based on the orthogonal displacements (in the image) of the endpoints of the lines to the fitted model, starting from the linear solution found by the respective method. Our experiments fall into three groups: comparisons using large quantities of simulated data, where we can explore the range of geometric distortion and simulated imaging/measurement error (ME); comparisons using real images of checkerboard patterns captured under a variety of perspectively distorted poses; and finally a few qualitative examples of our method applied to real images from the York vanishing point dataset [15]. Note that the groups of equidistant parallel lines for the checkerboard and York datasets are found via recursive application of RANSAC using an equidistant line model with a cost function that includes both supported model lines and missing image lines, details of which are beyond the scope of the current paper.
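For reference, the non-linear refinement used for the “optimized” variants can be sketched as follows: the residuals are the normalised orthogonal distances of the line endpoints to the interpolated model lines, minimised starting from a linear solution. The paper does not specify the optimiser used; the choice of scipy.optimize.least_squares below is our own assumption, and the function names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_geometric(l0, ln, points, lambdas, n):
    """Non-linear refinement of a linear (l_0, l_n) solution by minimising the
    true geometric error: orthogonal distances of endpoints to the model lines."""

    def residuals(params):
        l0_, ln_ = params[:3], params[3:]
        res = []
        for pts, lam in zip(points, lambdas):
            l = ((n - lam) * l0_ + lam * ln_) / n      # interpolated line, Eq. 9
            norm = np.hypot(l[0], l[1])                # normalise so residuals are distances
            for x, y in pts:
                res.append((l[0] * x + l[1] * y + l[2]) / norm)
        return np.asarray(res)

    # note: the overall scale of the parameters is a gauge freedom of the model
    x0 = np.concatenate([l0, ln])
    sol = least_squares(residuals, x0)
    return sol.x[:3], sol.x[3:]
```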

4.1 Data Conditioning

As discussed in Sect. 3.1, it is recommended that the data be conditioned for the algebraic method. For homogeneous points, conditioned versions \(\mathbf {p}'=(x',y',1)\) are determined using

$$ \left[ \begin{array}{c} x'\\ y'\\ 1 \end{array}\right] =\left[ \begin{array}{ccc} s_{x} & 0 & t_{x}\\ 0 & s_{y} & t_{y}\\ 0 & 0 & 1 \end{array}\right] \left[ \begin{array}{c} x\\ y\\ 1 \end{array}\right] =\mathbf {C}\mathbf {p} $$

where the scaling is \(s_{x}=1/(x_{max}-x_{min})\) and \(s_{y}=1/(y_{max}-y_{min})\), and the translation is \(t_{x}=-s_{x}x_{min}\) and \(t_{y}=-s_{y}y_{min}\), respectively, so that the conditioned x and y coordinates each span a unit range.

Under this conditioning, lines transform as \(\mathbf {l}^{\prime }=\mathbf {C}^{-T}\mathbf {l}\) [7]. Note that all the results in this paper are based on errors measured in the original unconditioned coordinates, where conditioned points are mapped back using \(\mathbf {p}=\mathbf {C}^{-1}\mathbf {p}^{\prime }\) and lines using \(\mathbf {l}=\mathbf {C}^{\text {T}}\mathbf {l}^{\prime }\).
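A minimal sketch of this conditioning, assuming the endpoints of all lines are collected in a single array (the mapping of the range to [0, 1] follows the description above; the exact normalisation used in the paper may differ slightly):

```python
import numpy as np

def conditioning_matrix(points):
    """Build C mapping the x and y ranges of the data to unit extent (Sect. 4.1).

    points : (K, 2) array of all (x, y) endpoints used in the fit.
    """
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    sx, sy = 1.0 / (x_max - x_min), 1.0 / (y_max - y_min)
    C = np.array([[sx, 0.0, -sx * x_min],
                  [0.0, sy, -sy * y_min],
                  [0.0, 0.0, 1.0]])
    return C

# Points condition as p' = C p and lines as l' = C^{-T} l; results are mapped
# back to the original frame with p = C^{-1} p' and l = C^{T} l'.
```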

4.2 Experiment 1: Simulated Dataset

Simulated sets of projected parallel lines are generated and tested over a range of experimental conditions by varying both the magnitude of the random homography and the size of the ME. The former is generated by randomly displacing the four corners of a \(768\times 1024\) image region uniformly in the x and y directions by \(\pm [50,100,200,400]\) pixels and calculating the resulting homography with respect to the original \(768\times 1024\) region. Similarly, the measurement error is Gaussian distributed with a standard deviation of [2, 4, 8, 16] pixels orthogonal to the simulated lines. For each condition, 100 sets of 7 parallel lines of random nominal length (uniform \([2-1024]\) pixels) are generated with random orientation (uniform \([0-180^{\circ }]\)), total spacing (uniform \([200-768]\) pixels) and end point offset (uniform \([0-50]\) pixels), giving a total of \(4\times 4\times 100=1600\) simulated images for comparing the performance of the algebraic and pseudo-geometric formulations.
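A random homography of the kind described can be generated by perturbing the four corners of the image region and solving for the mapping with a standard Direct Linear Transform. The sketch below is our own illustrative implementation (the assignment of 1024 to width and 768 to height is an assumption), not the paper's code.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: homography mapping src -> dst (4+ point pairs)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

def random_homography(width=1024, height=768, max_shift=200, rng=None):
    """Homography from randomly displacing the four image corners by up to max_shift."""
    rng = np.random.default_rng() if rng is None else rng
    corners = np.array([[0, 0], [width, 0], [width, height], [0, height]], float)
    displaced = corners + rng.uniform(-max_shift, max_shift, size=(4, 2))
    return homography_dlt(corners, displaced)
```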

Figure 1 shows an example of how well the equidistant model parallel lines fit the simulated image lines using our work versus the baseline, and versus our work optimized and baseline optimized, when only the bottom three image lines, with an imaging error of 2 pixels, are used for fitting. It can be seen from Fig. 1a that our work obtains a better fit to the image lines than the baseline in Fig. 1b. Also, our work in Fig. 1a, which is a direct linear evaluation against the image lines, is quite close to our work optimized in Fig. 1c and the baseline optimized in Fig. 1d.

Figure 2 shows a comparison of the average RMS perpendicular distance (d) in pixels (with error bars showing the \(25\,\%\)–\(75\,\%\) percentile range) measured against all image lines using our work, baseline, our work optimized and baseline optimized on the simulated dataset. Figures 2a and b show fitting using all possible sets of three lines and all lines respectively for an ME of 8 pixels. Each figure plots the average RMS against the size of the random homography applied to the simulated line data. It can be seen that our work outperforms the baseline, i.e., it has a lower average RMS and error range in all cases. In fact, our work provides a consistent fit for equidistant parallel lines that is superior to the baseline (red versus black line) and close to the non-linear solutions of the true geometric error (red versus blue and green lines). It is also noticeable that the magnitude of the random homography does not have a significant effect on performance.

In Table 1, we summarize the average RMS and standard deviation of d in pixels across all homographies on the simulated dataset, measured against all lines with ME, when three and all lines are used for fitting, using our work (with real lines, i.e., \(\mathbf {l}_{0}\) and \(\mathbf {l}_{n}\)), baseline and our infinity (with \(\mathbf {l}_{0}\) and the vanishing line at infinity \(\mathbf {l}_{\infty }\)), along with the optimized results for each algorithm. Table 2 is similar except that the average RMS is measured against the ground truth (original lines without added ME). It can be seen from Tables 1 and 2 that our work consistently performs better than the baseline and, importantly, close to the optimized solutions. In addition, notice that our work using real lines performs slightly better than the our infinity version based on the line at infinity.

Fig. 1. Visual comparison of equidistant model parallel (colour dashed) lines overlayed on all the image (bold colour and black) lines: (a) our work, (b) baseline, (c) our work optimized, and (d) baseline optimized, when fitted using three image lines (black lines). (Color figure online)

Fig. 2. Average root mean square (RMS) perpendicular distance d in pixels with (\(25\,\%\)–\(75\,\%\) percentile) error bars. Fitting using three (left column) and all (right column) lines with a measurement error (ME) of 8 pixels. (Color figure online)

Table 1. Average RMS d in pixels and standard deviation across homographies measured against all lines with imaging error (ME), when three and all lines with ME are used for fitting.
Table 2. Average RMS d in pixels and standard deviation across homographies measured against ground truth, when three and all lines with ME are used for fitting.
Table 3. Average normalisation scale ratio of line \(\mathbf {l}_{0}\) and \(\mathbf {l}_{n}\) across homographies at each imaging error (ME), when three and all lines are used for fitting.

In Table 3, we present the average of the normalisation scale ratio of lines \(\mathbf {l}_{0}\) and \(\mathbf {l}_{n}\), i.e., \(\sqrt{a_{0}^{2}+b_{0}^{2}}/\sqrt{a_{n}^{2}+b_{n}^{2}}\), across all homographies at each ME, when three and all lines are used for fitting. Note that in Table 3 we use the same \(n=6\) (i.e., the \(7^{th}\)) line for our work, baseline and our infinity. Scale ratios close to 1.0 are preferred as this implies the method approaches true geometric error minimisation. It can be seen from Table 3 that our work using real lines has a scale ratio closer to 1.0 than the baseline and our infinity, which we believe explains the improved performance.

4.3 Experiment 2: Checkerboard Grid Dataset

We captured 30 images of an 8 by 12 checkerboard grid from a variety of perspectively distorted poses under varying lighting conditions. The checkerboard grid contains two sets of orthogonal equidistant parallel lines. Figure 3 shows equidistant model parallel lines fitted to three perspectively distorted checkerboard grid images. Table 4 shows the average RMS perpendicular distance d in pixels and standard deviations across the 30 images for the first and second sets of equidistant parallel lines respectively, when fitted using either all sets of three lines or all lines, for both conditioned and original image lines (with both sets of errors measured consistently in the original image space). Again, our work using either real lines or the line at infinity consistently outperforms the baseline in terms of mean RMS and standard deviation. These experiments also highlight the importance of conditioning for the algebraic formulation. Notice, in Table 4, that the errors are much larger for the baseline method applied in the original image domain (especially for three line fitting) and that the subsequent non-linear optimization was not able to recover from these initial errors. This was also found to be the case in the simulated experiments presented previously, but was excluded from the results section for brevity. Notice also that there is a small advantage of the formulation based on real lines over the line at infinity, especially for the original unconditioned data.

Fig. 3. Equidistant model parallel (Red and Green colour) lines overlayed on the perspectively distorted checkerboard grid images. (Color figure online)

Table 4. Average RMS d in pixels with one standard deviation across 30 images of checkerboard grid.

4.4 Experiment 3: York Dataset

The York vanishing point dataset [15] has been widely used for detecting parallel lines, but not for detecting equidistant parallel lines. The coloured equidistant lines overlayed on the images of the York dataset in Fig. 4 show that the pseudo-geometric model can be used to detect equidistant parallel lines encountered in real indoor and outdoor scenes.

Fig. 4. Equidistant parallel (coloured) lines detected on the York data set [15]. (Color figure online)

Table 5. Average computational time in seconds with one standard deviation across 1600 images of the simulated dataset.

4.5 Computational Time

Table 5 presents the average computational time in seconds of our work, baseline and our infinity, versus the non-linear optimization of our work, baseline and our infinity, on the simulated dataset. The computational time was measured in MatLab on an HP Z600 workstation with an Intel Xeon 2.66 GHz processor and 6 GB RAM. Notice that the non-linear optimization initialised from our work is much faster, as the linear solution is already close to a minimum.

5 Conclusions

We have presented a linear pseudo-geometric formulation for equidistant parallel line fitting under perspective distortion. The principal improvement over the prior art was to reformulate the problem using the dot product of interpolated model lines against end points in the image, rather than using the line representations themselves. This turned the solution from a purely algebraic form to one that is close to geometric. In a series of experiments, using a large amount of simulated data and a smaller corroborating corpus of real data, we have shown that our improved solution does not require any pre-conditioning of the image data and avoids the need for computationally expensive non-linear optimization as a post-processing step to minimise the true geometric error. A second, but less important, element of our reformulation was to avoid the use of the line at infinity in our equidistant line model and instead interpolate between real lines in the image. This refinement was also shown to give small but useful improvements in performance and robustness.