
Total Least Squares Fitting of k-Spheres in n-D Euclidean Space Using an (n+2)-D Isometric Representation

Journal of Mathematical Imaging and Vision

Abstract

We fit k-spheres optimally to n-D point data, in a geometrically total least squares sense. A specific practical instance is the optimal fitting of 2D-circles to a 3D point set.

Among the optimal fitting methods for 2D-circles based on 2D (!) point data compared in Al-Sharadqah and Chernov (Electron. J. Stat. 3:886–911, 2009), there is one with an algebraic form that permits its extension to optimally fitting k-spheres in n-D. We embed this ‘Pratt 2D circle fit’ into the framework of conformal geometric algebra (CGA), and doing so naturally enables the generalization. The procedure involves a representation of the points in n-D as vectors in an (n+2)-D space with attractive metric properties. The hypersphere fit then becomes an eigenproblem of a specific symmetric linear operator determined by the data. The eigenvectors of this operator form an orthonormal basis representing perpendicular hyperspheres. The intersections of these are the optimal k-spheres; in CGA the intersection is a straightforward outer product of vectors.

The resulting optimal fitting procedure can easily be implemented using a standard linear algebra package; we show this for the 3D case of fitting spheres, circles and point pairs. The fits are optimal (in the sense of achieving the KCR lower bound on the variance).
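To make the "standard linear algebra package" remark concrete, here is a minimal NumPy sketch of a Pratt-style algebraic sphere fit in 3D, phrased as a generalized eigenproblem on the vector of sphere coefficients rather than in the paper's CGA operator form; the function name, constraint normalization and numerical details are our own illustration, not the paper's implementation.

```python
import numpy as np

def pratt_sphere_fit(points):
    """Algebraic sphere fit in 3D via a generalized eigenproblem.

    A sketch of a Pratt-style fit (constraint b^2 + c^2 + d^2 - 4ae = 1),
    not the paper's CGA formulation; numerics are deliberately simple.
    """
    p = np.asarray(points, dtype=float)                      # shape (N, 3)
    # Sphere equation: a*|p|^2 + b*x + c*y + d*z + e = 0.
    z = np.column_stack([(p**2).sum(axis=1), p, np.ones(len(p))])
    M = z.T @ z / len(p)                                     # data scatter matrix
    # Pratt constraint matrix B, with v^T B v = b^2 + c^2 + d^2 - 4ae.
    B = np.zeros((5, 5))
    B[1, 1] = B[2, 2] = B[3, 3] = 1.0
    B[0, 4] = B[4, 0] = -2.0
    # Generalized eigenproblem M v = eta B v (B is invertible).
    eta, V = np.linalg.eig(np.linalg.solve(B, M))
    # Pratt's choice: the eigenvector of the smallest non-negative eigenvalue.
    ok = (np.abs(eta.imag) < 1e-9) & (eta.real > -1e-9)
    v = V[:, ok][:, np.argmin(eta.real[ok])].real
    a, b, c, d, e = v
    center = -np.array([b, c, d]) / (2 * a)
    # Scale-invariant radius formula, so no constraint renormalization needed.
    radius = np.sqrt(b*b + c*c + d*d - 4*a*e) / (2 * abs(a))
    return center, radius
```

On noiseless data the data matrix has the sphere's coefficient vector in its null space, so the fit recovers the sphere exactly; with noise, the smallest non-negative eigenvalue plays the role of the residual.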

We use the framework to show how the hyperaccurate fit hypersphere of Al-Sharadqah and Chernov (Electron. J. Stat. 3:886–911, 2009) is a minor rescaling of the Pratt fit hypersphere.


Notes

  1. Actually, the use of an (n+2)-D space to represent n-D Euclidean space in a homogeneous, isometric manner is not a new trick. Wachter suggested it in 1816 in a letter to his teacher Gauss, and Lobachevsky called it a ‘horosphere’, but it has lain dormant. It was recently revived in the context of studying Euclidean geometry as a special case of conformal geometry by means of geometric algebra [3, 15].

  2. Geometric algebra is a framework to compute metrically with subspaces of a vector space, and with linear transformations on them. Conformal geometric algebra sets up an (n+2)-D vector space that conveniently represents the conformal (i.e., angle preserving) transformations in n-D, by means of its geometric algebra; hence its name. Among those conformal transformations are the Euclidean similarities, which is why CGA is suitable for Euclidean problems of a metric nature.

  3. For readers familiar with alternatives, in all formulas in this paper the notation ‘⋅’ signifies an inner product called the left contraction (see [9]) if used between multivectors with unequal grades; it coincides with the scalar product when used between equal grade multivectors.

  4. In Clifford algebra, this geometric product is called the Clifford product.

  5. Those familiar with GA should note that P′ is not simply an outermorphism extension of P, see Eq. (33).

  6. The simplest situation in which this occurs (barring a trivial P based on fully coincident data points) is two data points in \(\mathbb{R}^{4}\), which gives four zero eigenvalues to exactly fit the dual point pair (0-sphere) \(x_{1} \wedge x_{2} \wedge x_{3} \wedge x_{4}\).

  7. For a 2-blade X, we compute \(X \ast P[X] = \frac{1}{N} \varSigma_{i} X \ast (p_{i} \wedge (p_{i} \cdot X) ) = -\frac{1}{N} \varSigma_{i} X \ast ((p_{i} \cdot X)\wedge p_{i} ) = -\frac{1}{N} \varSigma_{i} ((p_{i} \cdot X) \ast(p_{i} \cdot X) ) = -\frac{1}{N} \varSigma_{i} (p_{i} \cdot X)^{2}\), automatically providing the minus sign of Eq. (25), as promised.

References

  1. Al-Sharadqah, A., Chernov, N.: Error analysis for circle fitting algorithms. Electron. J. Stat. 3, 886–911 (2009)


  2. Al-Sharadqah, A., Chernov, N., Huang, Q.: Errors-in-variables regression and the problem of moments. Braz. J. Probab. Stat. 27(4), 401–415 (2013)


  3. Anglès, P.: Construction de revêtements du groupe conforme d’un espace vectoriel muni d’une “métrique” de type (p,q). Ann. Inst. Henri Poincaré, Section A XXXIII, 33–51 (1980)


  4. Coope, I.D.: Circle fitting by linear and nonlinear least squares. J. Optim. Theory Appl. 76(2), 381–388 (1993)


  5. de Berg, M., van Kreveld, M., Overmars, M., Schwarzkopf, O.: Computational Geometry: Algorithms and Applications, 2nd edn. Springer, Berlin (1998)


  6. Doran, C., Lasenby, A.: Physical applications of geometric algebra (1999). Available at http://www.mrao.cam.ac.uk/~clifford/ptIIIcourse/

  7. Doran, C., Lasenby, A.: Geometric Algebra for Physicists. Cambridge University Press, Cambridge (2003)


  8. Dorst, L., Valkenburg, R.J.: Square root and logarithm of rotors in 3D conformal geometric algebra using polar decomposition. In: Dorst, L., Lasenby, J. (eds.) Guide to Geometric Algebra in Practice, pp. 81–104. Springer, Berlin (2011)


  9. Dorst, L., Fontijne, D., Mann, S.: Geometric Algebra for Computer Science: An Object-Oriented Approach to Geometry. Morgan Kaufmann, San Mateo (2009)


  10. Eastwood, M.G., Michor, P.W.: Some remarks on the Plücker relations. Rend. Circ. Mat. Palermo II-63, 85–88 (2000)


  11. Fontijne, D.: Efficient Implementation of Geometric Algebra. Ph.D. thesis, University of Amsterdam (2007)

  12. Gander, W., Golub, G.H., Strebel, R.: Least-squares fitting of circles and ellipses. BIT Numer. Math. 34, 558–578 (1994)


  13. Hestenes, D., Sobczyk, G.: Clifford Algebra to Geometric Calculus. Reidel, Dordrecht (1984)


  14. Kanatani, K., Al-Sharadqah, A., Chernov, N., Sugaya, Y.: Renormalization returns: hyper-renormalization and its applications. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) Computer Vision—ECCV 2012. LNCS, vol. 7574, pp. 384–397. Springer, Berlin (2012)


  15. Li, H., Hestenes, D., Rockwood, A.: Generalized homogeneous coordinates for computational geometry. In: Sommer, G. (ed.) Geometric Computing with Clifford Algebra, pp. 27–59. Springer, Berlin (1999)


  16. Nievergelt, Y.: Hyperspheres and hyperplanes fitted seamlessly by algebraic constrained total least-squares. Linear Algebra Appl. 331, 43–59 (2001)


  17. Perwass, C., Förstner, W.: Uncertain geometry with circles, spheres and conics. In: Klette, R., Kozera, R., Noakes, L., Weickert, J. (eds.) Geometric Properties from Incomplete Data. Computational Imaging and Vision, vol. 31, pp. 23–41. Springer, Berlin (2006)


  18. Pratt, V.: Direct least-squares fitting of algebraic surfaces. Comput. Graph. 21, 145–152 (1987)


  19. Raynor, G.E.: On n+2 mutually orthogonal hyperspheres in Euclidean n-space. Am. Math. Mon. 41(7), 424–438 (1934)


  20. Rockwood, A., Hildenbrand, D.: Engineering graphics in geometric algebra. In: Bayro-Corrochano, E., Scheuermann, G. (eds.) Geometric Algebra Computing, pp. 53–67. Springer, Berlin (2010)


  21. Taubin, G.: Estimation of planar curves, surfaces and nonplanar space curves defined by implicit equations, with applications to edge and range image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 13, 1115–1138 (1991)


  22. Valkenburg, R.: GAMatlab, a geometric algebra toolbox for Matlab, 2007–2013. (To be made available as public library by IRL, Auckland, NZ.)

  23. Valkenburg, R.J., Dorst, L.: Estimating motors from a variety of geometric data in 3D conformal geometric algebra. In: Dorst, L., Lasenby, J. (eds.) Guide to Geometric Algebra in Practice, pp. 25–46. Springer, Berlin (2011)



Acknowledgement

We thank an anonymous reviewer for encouraging us to investigate how to incorporate the hyperfit into CGA. For hyperspheres, this turned out to be a surprisingly minor modification, which led to our Sect. 7.


Corresponding author

Correspondence to Leo Dorst.

Appendix: Simulations of 3D Sphere Fitting


In the main text, we motivate our initial restriction to the Pratt fit for the extension to k-spheres by its quality relative to the geometric fit and to the hyperaccurate fit (‘hyperfit’) of [1]. To substantiate the statements in [1] in this regard, we performed some simulations comparing the various fits for 3D spheres; some results are given in Figs. 8 and 9. We used data points generated from a unit sphere, with variable radial Gaussian noise \(\sigma_{\mathrm{radial}}\) (a standard deviation ranging from 0.001 to 0.1, along the horizontal axis in each figure). Since one typically wants to use the fit to determine parameters of the sphere, we have characterized the properties of the fits by the average radius of the fit, and the standard deviation of this estimator. Both were determined by 50 repeated trials on 100 or 20 randomly generated data points (within every trial, the same random point set was used for all fitting methods). We plot two cases of directionality: an angular Gaussian distribution of the points with a standard deviation of 1 radian (Fig. 8), and an angular Gaussian distribution with a standard deviation of 0.2 radians (Fig. 9).
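The setup above can be sketched as a small Monte-Carlo harness. The exact sampling scheme (half-Gaussian angular spread about a pole, radial Gaussian noise on a unit sphere) is our assumption about the paper's setup, and the `fit` argument stands in for any of the compared fitting methods, returning a (center, radius) pair.

```python
import numpy as np

def simulate_radius_estimates(fit, n_trials=50, n_points=100,
                              sigma_radial=0.01, sigma_angular=1.0, seed=1):
    """Estimate mean and standard deviation of a sphere fit's radius
    over repeated trials, in the spirit of the appendix simulations.
    The sampling details are an assumption, not the paper's exact code."""
    rng = np.random.default_rng(seed)
    radii = []
    for _ in range(n_trials):
        # Angular Gaussian spread around the north pole (half-normal polar
        # angle), uniform azimuth, radial Gaussian noise about radius 1.
        theta = np.abs(rng.normal(0.0, sigma_angular, n_points))
        phi = rng.uniform(0.0, 2 * np.pi, n_points)
        r = 1.0 + rng.normal(0.0, sigma_radial, n_points)
        pts = np.column_stack([r * np.sin(theta) * np.cos(phi),
                               r * np.sin(theta) * np.sin(phi),
                               r * np.cos(theta)])
        radii.append(fit(pts)[1])      # keep only the estimated radius
    return float(np.mean(radii)), float(np.std(radii))
```

Plugging in the various fitting methods for `fit` and sweeping `sigma_radial` reproduces the kind of average-radius and standard-deviation curves shown in Figs. 8 and 9.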

Fig. 8

Radius determination as a function of the radial noise standard deviation \(\sigma_{\mathrm{radial}}\), for spheres based on 100 data points generated from a unit sphere, with an angular standard deviation of 1 radian. With 50 trials per fit, we show average and standard deviation. Note the scale: all fits perform well (Color figure online)

Fig. 9

Radius determination as a function of the radial noise standard deviation \(\sigma_{\mathrm{radial}}\), for spheres based on 20 data points generated from a unit sphere, with an angular standard deviation of 0.2 radians. With 50 trials per fit, we show average radius and standard deviation (Color figure online)

In the wide angular case of Fig. 8, the various sphere fits all give good and stable estimates for the ground truth radius value of 1. The variances of these estimates are all very similar, confirming their equivalent ‘optimality’ [1] in achieving the KCR lower bound for their variance term. The differences in bias between the Taubin fit [21], the Pratt fit [18], the geometric fit and the hyperfit [1] completely confirm the theoretical results of [1]: in units of \(\sigma_{\mathrm{radial}}^{2}/\rho\), the biases are 2, 1, \(\frac{1}{2}\) and 0, respectively (which is a bit surprising in view of the considerable variance; the explanation is presumably the strong correlation between those fits, which we expose explicitly for the Pratt fit and hyperfit in Sect. 7). For the 100 points used, the standard deviation exceeds the bias; other simulations (not shown) confirm our conclusion from Table 2 of [1] that only from about 1000 points onwards are these contributions to the MSE comparable.

In the narrow angular case of Fig. 9 (fitting to a small spherical cap of points), as the noise increases the data point cloud becomes hard to distinguish from a cloud with a standard deviation of about 0.3, and the sphere fits ultimately all adapt to this. The algebraic fits of Coope [4] and Nievergelt [16], motivated by Frobenius norms of data matrices, do so already at relatively low noise. The geometric fit (computed by Levenberg-Marquardt, with the Pratt fit as a seed) fits very large radii around a noise level of 0.05, whereas the geometric/algebraic fits of Taubin, Pratt and the hyperfit behave somewhat better here.

These simulations confirm the basic results of [1] (and could have been done in that paper): that their hyperfit is best, that the geometrically ‘exact’ fit is biased, and that the bias of the Pratt fit is small, and only four times that of the geometric fit. We use this as a numerical motivation for focusing on the Pratt fit: it is very good as a fit, and it is algebraically preferable to the hyperfit for the desired extension to k-spheres.


Dorst, L. Total Least Squares Fitting of k-Spheres in n-D Euclidean Space Using an (n+2)-D Isometric Representation. J Math Imaging Vis 50, 214–234 (2014). https://doi.org/10.1007/s10851-014-0495-2
