Abstract
We discuss an interpolation scheme (based on optimization) to fit a given ordered sample of reduced data \(Q_m\) in arbitrary Euclidean space. Here the corresponding knots are not given and must first be estimated. This is accomplished by solving an appropriate optimization problem, in which the missing knots minimize a cost function measuring the total squared norm of the acceleration of the interpolant (here a natural spline). The initial infinite-dimensional optimization (minimizing acceleration within the class of admissible curves) is reduced to a finite-dimensional problem, in which the unknown optimal interpolation knots are to be found. The latter is a highly non-linear optimization task, difficult both for theoretical analysis and for the derivation of a computationally feasible optimization scheme (in particular one handling a medium or large number of data points). Experiments comparing the interpolants based either on optimal knots or on the so-called cumulative chords are performed for 2D and 3D data. The problem of interpolating or approximating reduced data is applicable in computer vision (image segmentation), in computer graphics (curve modeling in computer-aided geometric design), and in engineering and physics (trajectory modeling).
1 Introduction
The problem of fitting data points in Euclidean space \(E^m\) is a classical one, for which there exist many different interpolation or approximation techniques (see e.g. [1–6]). Most classical interpolation schemes assume a given sequence of ordered data \({\mathscr {M}}=\{x_0,x_1,\dots ,x_n\}\) (where \(x_i\in E^m\)) together with the corresponding set of ordered interpolation knots \(\{t_i\}_{i=0}^n\) (parametric interpolation on non-reduced data). The problem of data fitting and modeling becomes more complicated when dealing with reduced data, i.e. when only \({\mathscr {M}}\) is available (termed non-parametric interpolation). Here, for a given fitting scheme, different choices of ordered interpolation knots \(\{\hat{t}_i\}_{i=0}^n\) render different curves. An early work on this topic can be found in [7], later extended among others in [8–19], where various quantitative criteria (often for the special cases \(m=2,3\)) are introduced to measure the appropriateness of a particular choice of \(\{\hat{t}_i\}_{i=0}^n\) (e.g. the convergence rate for dense data \({\mathscr {M}}\) sampled from an unknown curve). More recent work discussing different parameterizations of the unknown knots, including the so-called cumulative chord parameterization,
can be found e.g. in [5, 20–26]. The analysis of convergence rates to the unknown curve \(\gamma : [0,T]\rightarrow E^m\) and to its length \(d(\gamma )\) (based on different parameterizations and dense samplings) has also recently been studied, among others, in [27–40].
In this paper we introduce a special criterion for choosing the unknown knots (applicable not only to dense but also to sparse data), minimizing the total squared norm of the second derivative of the interpolating curve. An initial infinite-dimensional optimization problem (see Lemma 1) is reduced to a corresponding finite-dimensional one (set to determine the unknown knots). The latter constitutes a constrained, highly non-linear optimization task (the knots must be ordered), difficult for theoretical analysis and computationally sensitive to an increasing number of interpolation points when standard optimization techniques are invoked. An alternative (not analyzed in this paper) is a computationally feasible optimization scheme called Leap-Frog (see [41–44]), which is here adapted to compute the suboptimal knots for ordered data in arbitrary Euclidean space \(E^m\). The performance of the Leap-Frog Algorithm is illustrated on 2D and 3D reduced data \({\mathscr {M}}\) (i.e. for \(m=2,3\)) and subsequently compared with the multidimensional analogue of the Secant Method (see e.g. [2] or [46]). The initial guess is chosen according to cumulative chords (1).
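Formula (1) defining cumulative chords is not reproduced in this excerpt; the standard definition sets \(\hat{t}_0=0\) and \(\hat{t}_{i+1}=\hat{t}_i+\Vert x_{i+1}-x_i\Vert \). A minimal sketch (the function name `cumulative_chords` is ours):

```python
import numpy as np

def cumulative_chords(points):
    """Cumulative-chord knots: t_0 = 0, t_{i+1} = t_i + ||x_{i+1} - x_i||."""
    pts = np.asarray(points, dtype=float)
    # Euclidean lengths of the consecutive chords x_i -> x_{i+1}
    gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return np.concatenate(([0.0], np.cumsum(gaps)))
```

For the admissible data assumed here (\(x_i\ne x_{i+1}\)) the resulting knots are strictly increasing, as required.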
The proposed scheme for knot selection is applicable in data fitting and curve modeling (e.g. computer graphics and computer vision), in approximation and interpolation (e.g. trajectory planning, image segmentation, data compression), as well as in many other engineering and physics problems (e.g. robotics or particle trajectory estimation). Specific applications of fitting sparse (and dense) reduced data \({\mathscr {M}}\) in \(E^m\) can be found e.g. in [5, 6] or [47].
2 Problem Formulation
Assume that ordered (by indexing) data points \({\mathscr {M}}=\{x_0,x_1,x_2,\dots ,x_n\}\) are given (here \(x_i\in E^m\) and \(x_i\ne x_{i+1}\), for \(i=0,1,\dots ,n-1\) with \(n\ge 2\)). Such \({\mathscr {M}}\) is called admissible data. Define now a class (denoted by \({\mathscr {I}}_T\)) of admissible curves \(\gamma \) as piecewise \(C^{2}\) curves \(\gamma :[0,T]\rightarrow E^m\) (where \(0<T<\infty \) is fixed) interpolating \({\mathscr {M}}\) with the ordered free unknown knots \(\{t_i\}_{i=0}^n\) satisfying \(\gamma (t_i)=x_i\) (here \(t_i<t_{i+1}\), \(t_0=0\) and \(t_n=T\)). More specifically, we assume that any choice of ordered interpolation knots \(\{t_i\}_{i=0}^n\) yields a curve \(\gamma \in C^1([0,T])\) whose restriction to each sub-segment \([t_i,t_{i+1}]\) (for \(i=0,1,\dots ,n-1\)) belongs to \(C^{2}([t_i,t_{i+1}])\) - i.e. \(\gamma \) is \(C^2\) except for being merely \(C^1\) at the interpolation knots \(\{t_i\}_{i=0}^n\). The reason why we do not confine our analysis to the more natural class \(\gamma \in C^2([t_0,t_n])\) is justified by the subsequent choice of computational scheme (called herein Leap-Frog - see [42]), which effectively deals with the optimization problem (3). This scheme is designed to iteratively produce a sequence of curves \(\gamma _{LF}^k\in {\mathscr {I}}_T\) generically positioned outside of the class \(C^2([t_0,t_n])\) (i.e. \(\gamma _{LF}^k\notin C^2([t_0,t_n])\)). However, the optimum computed by Leap-Frog belongs to the tighter class of functions in \(C^2([t_0,t_n])\) - see [41, 48].
We look for an optimal \(\gamma _{opt}\in {\mathscr {I}}_T\) minimizing the following functional:
i.e. satisfying
For future needs define also \({\mathscr {J}}_T^i\) as the i-th segment energy:
obviously satisfying the inequality \({\mathscr {J}}_T^i(\gamma ) \le {\mathscr {J}}_T(\gamma )\). Note that for each function \(\gamma \in {\mathscr {I}}_T\) the corresponding sequence of unknown interpolation knots \(\{t_i\}_{i=0}^n\) (with \(t_0\) and \(t_n=T\) fixed) has \(n-1\) internal components satisfying the following:
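The displayed formulas referenced here as (2)–(5) are not reproduced in this excerpt. From the abstract and the surrounding text they presumably read (a hedged reconstruction, consistent with the set \(\varOmega _{t_0}^{T}\) used later in Sect. 2):

```latex
\mathscr{J}_T(\gamma)=\int_0^T\|\ddot{\gamma}(t)\|^2\,dt,\qquad
\gamma_{opt}=\arg\min_{\gamma\in\mathscr{I}_T}\mathscr{J}_T(\gamma),\qquad
\mathscr{J}_T^i(\gamma)=\int_{t_i}^{t_{i+1}}\|\ddot{\gamma}(t)\|^2\,dt,
\qquad\text{with}\qquad
(t_1,t_2,\dots,t_{n-1})\in\varOmega_{t_0}^{T}
  =\{(t_1,\dots,t_{n-1})\in\mathbb{R}^{n-1}:\ t_0<t_1<t_2<\dots<t_{n-1}<t_n=T\}.
```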
Evidently (3) defines an infinite dimensional optimization task (considered over \({\mathscr {I}}_T\)) not invariant with respect to an arbitrary \(C^2\) class re-parameterization \(\phi :[0,T]\rightarrow [0,\tilde{T}]\) (with \(\tilde{T}>0\)).
Remark 1
Note that if we confine the class of reparameterizations to affine ones, i.e. \(\phi (t)=t{\tilde{T}}/T\) (with \(\phi ^{-1}(s)=sT/{\tilde{T}}\)), then since \(\phi ^{-1}(s)=t\), \(\phi ^{-1'}\equiv T/{\tilde{T}}\) and \(\phi ^{-1''}\equiv 0\), formula (2) reads for \(\tilde{\gamma }(s)=(\gamma \circ \phi ^{-1})(s)\) (upon using integration by substitution) as:
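The display that follows is missing from this excerpt; assuming (2) is the squared-acceleration energy \(\mathscr {J}_T(\gamma )=\int _0^T\Vert \ddot{\gamma }(t)\Vert ^2dt\), the substitution \(t=\phi ^{-1}(s)=sT/\tilde{T}\) works out to:

```latex
\mathscr{J}_{\tilde{T}}(\tilde{\gamma})
  =\int_0^{\tilde{T}}\|\tilde{\gamma}''(s)\|^2\,ds
  =\int_0^{T}\Big(\frac{T}{\tilde{T}}\Big)^{4}\|\ddot{\gamma}(t)\|^2\,\frac{\tilde{T}}{T}\,dt
  =\Big(\frac{T}{\tilde{T}}\Big)^{3}\mathscr{J}_{T}(\gamma),
```

so the two energies differ by a positive constant factor, which is what makes the optima correspond.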
Thus a curve \(\gamma _{opt}\in {\mathscr {I}}_T\) is optimal to \({\mathscr {J}}_{T}\) if and only if a corresponding \({\tilde{\gamma }}_{opt}\in {\mathscr {I}}_{\tilde{T}}\) is optimal for \({\mathscr {J}}_{\tilde{T}}\). Therefore we can effectively assume \(t_n=T\) to be arbitrary. Similar argument can be applied to \(t_0=0\) (we set \(\phi (t)=t-t_0\) if the latter does not hold). \(\quad \square \)
In anticipation of the forthcoming material we briefly recall different families of piecewise cubics interpolating the data points \({\mathscr {M}}\) (see also [10, Chap. IV]), subject to various boundary conditions. In addition, in the remark to follow we also formulate the specific energy (a special case of (2)) for the family of the so-called natural splines.
Remark 2
First recall that a cubic spline interpolant \(\gamma ^{C_i}_{\mathscr {T}}=\gamma ^{C}_{\mathscr {T}}|_{[t_i,t_{i+1}]}\), for given temporarily fixed interpolation knots \({\mathscr {T}}=(t_0,t_1,\dots ,t_{n-1},t_n)\) (here the knots \(\{t_i\}_{i=0}^n\) are admissible), is defined as
to satisfy (for \(i=0,1,2,\dots ,n-1\); \(c_{j,i}\in \mathbb {R}^m\), where \(j=1,2,3,4\))
with the velocities \(v_0,v_1,v_2,\dots ,v_{n-1},v_n\in \mathbb {R}^m\) assumed to be temporarily free parameters (if unknown). The coefficients \(c_{j,i}\) (with \(\varDelta t_i=t_{i+1}-t_i\)) are defined as follows:
The latter comes from (7) and Newton’s formula (see e.g. [4, Chap. 1])
combined with \(c_{1,i}=\gamma _{\mathscr {T}}^{C_i}(t_i)\), \(c_{2,i}=\dot{\gamma }_{\mathscr {T}}^{C_i}(t_i)\), \(c_{3,i}=\ddot{\gamma }_{\mathscr {T}}^{C_i}(t_{i})/2\), and \(c_{4,i}={\gamma _{\mathscr {T}}^{C_i'''}(t_{i})}/6\). Adding \(n-1\) constraints enforcing continuity of second derivatives of \(\gamma _{\mathscr {T}}^C\) at \(x_1,x_2,\dots ,x_{n-1}\) i.e. for \(i=1,2,\dots ,n-1\) \(\ddot{\gamma }_{\mathscr {T}}^{C_{i-1}}(t_{i})=\ddot{\gamma }_{\mathscr {T}}^{C_i}(t_{i})\) leads (upon using (7) and (8)) to the m tridiagonal linear systems (strictly diagonally dominant) of \(n-1\) equations in \(n+1\) vector unknowns representing velocities at \({\mathscr {M}}\) i.e. \(v_0, v_1,v_2,\dots ,v_{n-1},v_n\in \mathbb {R}^m\):
The terminal velocities \(v_0\) and \(v_{n}\) (if unknown) can be calculated from the conditions \(\ddot{\gamma }_{\mathscr {T}}^{C}(0)=\ddot{\gamma }_{\mathscr {T}}^{C}(T_c)={\mathbf {0}}\) combined with (8) (this yields a natural cubic spline interpolant \(\gamma _{\mathscr {T}}^{NS}\) - a special \(\gamma _{\mathscr {T}}^{C}\)) which supplements (9) with two missing linear equations:
The resulting m linear systems, each of size \((n+1)\times (n+1)\) (based on (9) and (10)), being strictly row diagonally dominant, yield a unique vector solution \(v_0,v_1,v_2,\dots ,v_{n-1},v_n\) (solved e.g. by Gaussian elimination without pivoting - see [4, Chap. 4]), which when fed into (8) determines explicitly a natural cubic spline \(\gamma _{\mathscr {T}}^{NS}\) (with fixed \({\mathscr {T}}\)).
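The entries of the linear systems (9)–(10) are not displayed in this excerpt. As a hedged sketch, the standard velocity-form system for a natural cubic spline (tridiagonal, strictly diagonally dominant, with vanishing terminal second derivatives) can be assembled and solved as follows (function name and layout are ours):

```python
import numpy as np

def natural_spline_velocities(t, x):
    """Solve an (n+1)x(n+1) strictly diagonally dominant linear system for
    the velocities v_0,...,v_n of the natural cubic spline with knots t
    interpolating the points x (one point per row)."""
    t = np.asarray(t, float)
    x = np.asarray(x, float)
    n = len(t) - 1
    dt = np.diff(t)                      # Delta t_i = t_{i+1} - t_i
    A = np.zeros((n + 1, n + 1))
    b = np.zeros_like(x)
    # natural end conditions: the second derivative vanishes at t_0 and t_n
    A[0, 0], A[0, 1] = 2.0, 1.0
    b[0] = 3.0 * (x[1] - x[0]) / dt[0]
    A[n, n - 1], A[n, n] = 1.0, 2.0
    b[n] = 3.0 * (x[n] - x[n - 1]) / dt[n - 1]
    # C^2 continuity of the adjacent cubic pieces at each internal knot
    for i in range(1, n):
        A[i, i - 1] = 1.0 / dt[i - 1]
        A[i, i] = 2.0 * (1.0 / dt[i - 1] + 1.0 / dt[i])
        A[i, i + 1] = 1.0 / dt[i]
        b[i] = 3.0 * ((x[i] - x[i - 1]) / dt[i - 1] ** 2
                      + (x[i + 1] - x[i]) / dt[i] ** 2)
    return np.linalg.solve(A, b)
```

A quick sanity check: for collinear, affinely sampled data the solver returns the constant velocity of the underlying line.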
Combining (2) with (7) results in (in fact this also holds for any spline \(\gamma ^C_{\mathscr {T}}\) provided the respective velocities \(\{v_i\}_{i=0}^n\) are somehow prescribed - e.g. also for a Hermite or a complete spline [4, Chap. 4]):
Upon introducing \(c_{5,i}=c_{3,i}+c_{4,i}\varDelta t_i\) the latter reduces into:
By (8) we also have \(c_{5,i}=((x_{i+1}-x_i)/(\varDelta t_i)^2) -(v_i/\varDelta t_i)\) and thus
which when passed to (11) yields finally
Clearly, for given data points \({\mathscr {M}}\) and each strictly increasing sequence of fixed knots \(\{t_i\}_{i=0}^n\), the natural spline \({\gamma _{\mathscr {T}}^{NS}}\) exists and is unique. Note also that (12) (in contrast with each curve \(\gamma _{\mathscr {T}}^{C_i}\) itself) involves only the two vector coefficients \(c_{3,i}\) and \(c_{4,i}\) appearing in (7).
Once we vary the knots \(\{t_i\}_{i=0}^n\) (subject to \(t_0<t_1<t_2<\dots<t_{n-1}<t_n=T\), with \(t_0\) and \(t_n=T\) fixed), the corresponding space of such natural splines \(\gamma ^{NS}\) (denoted here by \({\mathscr {I}}^{NS}\)) evidently satisfies \({\mathscr {I}}^{NS}\subset {\mathscr {I}}_{T}\cap C^2([0,T_c])\) (also \(\gamma ^{NS}\) is \(C^{\infty }([t_i,t_{i+1}])\) for each \(i=0,1,\dots ,n-1\); we omit the subscript \(\mathscr {T}\) to emphasize that the internal knots may vary).
If now we minimize (2) only over a class of natural splines \({\mathscr {I}}^{NS}\subset {\mathscr {I}}_{T}\) (such thinning of (2) and (3) to (12) is justified later in Lemma 1) then
reduces to finding the optimal knots \((t_1^{opt},t_2^{opt},\dots ,t_{n-1}^{opt})\) in (13) (here the terminal knots \(t_0\) and \(t_n=T\) are constant) within the family of natural splines \({\mathscr {I}}^{NS}\), subject to the constraint \(t_0<t_1^{opt}<t_2^{opt}<\dots<t_{n-1}^{opt}<t_n=T\). As mentioned, there is a one-to-one correspondence between a natural spline \(\gamma ^{NS}\) and its interpolation knots. Consequently (13) can be reformulated (see also (5)), upon introducing \(\hat{\mathscr {T}}=(t_1,t_2,\dots ,t_{n-1})\), into the following minimization problem in \(\hat{\mathscr {T}}\):
Thus an implicit formula (12) yields, upon feeding \(v_i\) from (9) and (10), the explicit non-linear expression for \({\mathscr {J}}_{T}^F\) (see (14)) ready for minimization over \({\mathscr {I}}^{NS}\) with respect to free variables \((t_1,t_2,\dots ,t_{n-1})\in \varOmega _{t_0}^{T}\). Note that the value of the energy \({\mathscr {J}}_T^F(\hat{\mathscr {T}})\) is always finite for each \(\hat{\mathscr {T}}\in \varOmega _{t_0}^T\) since \(\gamma _{\mathscr {T}}^{NS}\in C^2\) and therefore \(\Vert \ddot{\gamma }_{\mathscr {T}}^{NS}\Vert \) is continuous over a compact set \([t_0,t_n]\) (as \(t_n=T<+\infty \)).
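A direct way to evaluate the cost \({\mathscr {J}}_T^F(\hat{\mathscr {T}})\) for a trial knot vector is to build the natural spline and integrate \(\Vert \ddot{\gamma }\Vert ^2\) segmentwise; since \(\ddot{\gamma }_{\mathscr {T}}^{C_i}(t)=2c_{3,i}+6c_{4,i}(t-t_i)\) is linear in \(t\), each segment integral is exact. A hedged sketch (the closed form below follows by direct integration and need not match the paper's (12) term by term):

```python
import numpy as np

def spline_energy(t, x):
    """J^F for the natural cubic spline with knots t through points x
    (rows of x): sum over segments of the exact integral of ||gamma''||^2."""
    t = np.asarray(t, float)
    x = np.asarray(x, float)
    n = len(t) - 1
    dt = np.diff(t)                              # Delta t_i > 0 required
    # velocities v_0,...,v_n of the natural spline (cf. Remark 2)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros_like(x)
    A[0, :2] = [2.0, 1.0]
    b[0] = 3.0 * (x[1] - x[0]) / dt[0]
    A[n, n - 1:] = [1.0, 2.0]
    b[n] = 3.0 * (x[n] - x[n - 1]) / dt[n - 1]
    for i in range(1, n):
        A[i, i - 1:i + 2] = [1.0 / dt[i - 1],
                             2.0 * (1.0 / dt[i - 1] + 1.0 / dt[i]),
                             1.0 / dt[i]]
        b[i] = 3.0 * ((x[i] - x[i - 1]) / dt[i - 1] ** 2
                      + (x[i + 1] - x[i]) / dt[i] ** 2)
    v = np.linalg.solve(A, b)
    # exact per-segment energy: int_0^{dt_i} ||2 c3 + 6 c4 u||^2 du
    energy = 0.0
    for i in range(n):
        d = dt[i]
        c3 = 3.0 * (x[i + 1] - x[i]) / d ** 2 - (2.0 * v[i] + v[i + 1]) / d
        c4 = 2.0 * (x[i] - x[i + 1]) / d ** 3 + (v[i] + v[i + 1]) / d ** 2
        energy += (4.0 * (c3 @ c3) * d + 12.0 * (c3 @ c4) * d ** 2
                   + 12.0 * (c4 @ c4) * d ** 3)
    return float(energy)
```

Minimizing `spline_energy` over the internal knots, subject to the ordering constraint of \(\varOmega _{t_0}^{T}\), is then exactly the finite-dimensional problem (14); Leap-Frog and the Secant Method are two schemes for doing so.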
The class of natural splines \({\mathscr {I}}^{NS}\) is invoked here since one can reduce (3) to the same optimization confined merely to the subclass of natural splines \({\mathscr {I}}^{NS}\subset {\mathscr {I}}_{T}\) (see [10, 41, 48]). In fact by [41] the following holds (for arbitrary fixed knots \(t_0<t_n=T\)):
Lemma 1
For a given admissible data points \({\mathscr {M}}\) in arbitrary Euclidean space \(E^m\) the subclass of natural splines \({\mathscr {I}}^{NS}\subset {\mathscr {I}}_{T}\) satisfies
In the next section we test the optimization problem (15) (converted into (12) or (14)) on some 2D and 3D data.
3 Numerical Experiments
Experiments are performed in Mathematica. We now compare the performance of Leap-Frog (see [41]) with the Secant Method applied to (12) (as justified by (15)). The experiments are conducted only for sparse reduced data points \({\mathscr {M}}\) in \(E^{2,3}\), which represent special cases of admissible sparse reduced data in arbitrary \(E^m\). The initial guesses are based on cumulative chords (1).
The first example deals with reduced data \({\mathscr {M}}\) in \(E^2\) (i.e. for \(m=2\)).
Example 1
Consider for \(n=5\) the following 2D reduced data points (see dotted points in Fig. 1):
A blind guess of uniform interpolation knots yields (rescaled to \(T_{c}=\hat{T}_n\) - see (1)):
and the initial guess based on cumulative chord \({\mathscr {T}}_c =(t_0,{\hat{\mathscr {T}}}_c,T_c)\) (i.e. based on the geometry of the layout of the data) renders:
The natural splines \(\gamma _{{\mathscr {T}}_{uni}}^{NS}\) (based on \({\mathscr {T}}_{uni}=(0,{\hat{\mathscr {T}}}_{uni},16.9146)\)) and \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) (based on \({\mathscr {T}}_{c}\)) yield the energies \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{uni})=7.18796<{\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_c)=7.8536\). Both interpolants \( \gamma _{{\mathscr {T}}_{uni}}^{NS}\) and \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) are shown in Fig. 1a and b, respectively. The Secant Method yields (for (12)) the optimal knots (augmented by the terminal times \(t_0=0\) and \(t_5=T_c\))
with the optimal energy \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{SM})=4.25388\). The execution time amounts to \(T^{SM}=7.037922\) sec. The resulting curve \(\gamma _{{\mathscr {T}}_{opt}^{SM}}^{NS}\) is plotted in Fig. 1c. Note that for each free variable the Secant Method uses here two initial numbers \(t_i^c\pm 0.5\). The Leap-Frog Algorithm decreases the initial energy \({\mathscr {J}}^{F}_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_c)\) over 42 iterations to \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (i.e. the same value as for the Secant Method), with the optimal values satisfying \({\mathscr {T}}_{opt}^{LF}={\mathscr {T}}_{opt}^{SM}\) up to the 6th decimal place (this is the iteration stopping bound). The respective execution time is \(T^{LF}=3.333620\) sec \(<T^{SM}\). Over its 0th, 1st, 10th, 20th, 30th and 42nd iterations Leap-Frog decreases the energy \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_c)\) to:
with only the first two iterations contributing a major energy decrease (and hence the main corrections to the initial knot guess \({\mathscr {T}}_{c}\)). The resulting natural spline \(\gamma _{{\mathscr {T}}_{opt}^{LF}}^{NS}\) (clearly the same as \(\gamma _{{\mathscr {T}}_{opt}^{SM}}^{NS}\) yielded by the Secant Method), based on \({\mathscr {T}}_{opt}^{LF}\), is shown in Fig. 1c and also visually compared with \(\gamma _{{\mathscr {T}}_c}^{NS}\) in Fig. 1d. Note that if the Leap-Frog stopping condition is changed, e.g. to require the current Leap-Frog energy to match \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (say up to the 5th decimal place), then only 22 iterations are needed here, with a shorter execution time \(T^{LF}_{E}=2.270440\) sec and with the optimal knots
We now lose a little precision in the estimated optimal knots but speed up the Leap-Frog execution, obtaining almost the same interpolating curve as the optimal one (as \({\mathscr {T}}_{opt}^{LF_E}\approx {\mathscr {T}}_{opt}^{SM}\)). Other a posteriori stopping criteria can also be considered, which accelerate Leap-Frog even further at almost no cost in the difference between the computed and the optimal curve. \(\quad \square \)
We pass now to an example of reduced data in \(E^3\) (i.e. with \(m=3\)).
Example 2
Consider for \(n=5\) the following 3D reduced data points (see dotted points in Fig. 2):
The uniform interpolation knots read (rescaled to \(T_{c}=\hat{t}_n\) - see (1)) as:
and the initial guess based on cumulative chord \({\mathscr {T}}_c\) is here:
The natural splines \(\gamma _{{\mathscr {T}}_{uni}}^{NS}\) (based on \({\mathscr {T}}_{uni}\)) and \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) (based on \({\mathscr {T}}_{c}\)) yield the following energies \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{uni})=10.145>{\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_c)=9.45031\). Again, both interpolants \( \gamma _{{\mathscr {T}}_{uni}}^{NS}\) and \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) are presented in Fig. 2a,b, respectively.
The Secant Method yields (for (12)) the optimal knots (augmented by terminal times \(t_0=0\) and \(t_5=T_c\))
with the optimal energy \({\mathscr {J}}^F_{{\mathscr {T}}_c}({\mathscr {T}}_{opt}^{SM})=4.65476\). The execution time amounts to \(T^{SM}=6.783365\) sec. The resulting curve \(\gamma _{{\mathscr {T}}_{opt}^{SM}}^{NS}\) is plotted in Fig. 2c. Note that for each free variable the Secant Method uses here two initial numbers \(t_i^c\pm 0.1\). Leap-Frog decreases the initial energy to \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{LF})= {\mathscr {J}}^F_{{\mathscr {T}}_c}({\mathscr {T}}_{opt}^{SM})\) (the same value as for the Secant Method), with the iteration stopping condition \({\mathscr {T}}_{opt}^{LF}={\mathscr {T}}_{opt}^{SM}\) (up to the 6th decimal place) met upon 38 iterations. The respective execution time amounts to \(T^{LF}= 3.757498\) sec \(<T^{SM}\). Over its 0th (i.e. \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_c)\)), 1st, 2nd, 10th, 13th, and 36th iterations Leap-Frog decreases the energy to:
with again only the first three iterations contributing a major correction of the initial guess knots \({\mathscr {T}}_{c}\). The resulting natural spline \(\gamma _{{\mathscr {T}}_{opt}^{LF}}^{NS}\) (clearly the same as \(\gamma _{{\mathscr {T}}_{opt}^{SM}}^{NS}\) yielded by the Secant Method), based on \({\mathscr {T}}_{opt}^{LF}\), is shown in Fig. 2c and also visually compared with \(\gamma _{{\mathscr {T}}_c}^{NS}\) in Fig. 2d. Again, if the Leap-Frog stopping condition is changed, e.g. to require the current Leap-Frog energy to match \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (say up to the 5th decimal place), then only 13 iterations are needed here, with a shorter execution time \(T^{LF}_{E}= 1.878057\) sec \(<T^{SM}\) and the optimal knots
As previously, we lose here a little precision in the estimated optimal knots but accelerate the Leap-Frog execution, obtaining almost the same interpolating curve as the optimal one (as \({\mathscr {T}}_{opt}^{LF_E}\approx {\mathscr {T}}_{opt}^{SM}\)). \(\quad \square \)
4 Conclusions
In this paper we discussed a method of fitting reduced data \({\mathscr {M}}\) in arbitrary Euclidean space \(E^m\) with natural splines \(\gamma _{\mathscr {T}}^{NS}\), based on finding the best unknown knots \((t_1^{opt},t_2^{opt},\dots ,t_{n-1}^{opt})\) (and thus the best natural spline) to minimize the total squared norm of the acceleration of the interpolant. The original optimization problem (2), posed over a wider class of piecewise-\(C^2\) interpolants, is reduced to the class of natural splines \(\gamma _{\mathscr {T}}^{NS}\) - see Lemma 1. This in turn is reformulated as the finite-dimensional constrained optimization task (14) in the variables \((t_1,t_2,\dots ,t_{n-1})\), subject to the inequalities \(t_0<t_1<t_2< \dots <t_{n-1}<t_n\). Two computational schemes are deployed to test the quality of the computed interpolants - Leap-Frog and the Secant Method. Neither relies on inverting large matrices during the computation. For sparse reduced data \({\mathscr {M}}\), our optimization set-up together with the applied numerical schemes offers a feasible choice (supplemented with computational tools) for approximating the unknown interpolation knots \(\{t_i\}_{i=0}^n\approx \{\hat{t}_i^{opt}\}_{i=0}^n\). Future work will include the theoretical analysis of the nature of (14) and of the convergence of the tested iterative schemes to its local (global) minima (minimum).
Some recent related work on fitting reduced data \({\mathscr {M}}\) in \(E^{2,3}\) can also be found in [49, 50].
References
Bézier, P.E.: Numerical Control: Mathematics and Applications. John Wiley, New York (1972)
Davis, P.J.: Interpolation and Approximation. Dover Pub. Inc., New York (1975)
Farin, G.: Curves and Surfaces for Computer Aided Geometric Design. Academic Press, San Diego (1993)
de Boor, C.: A Practical Guide to Splines. Springer-Verlag, New York (1985)
Kvasov, B.I.: Methods of Shape-Preserving Spline Approximation. World Scientific, Singapore (2000)
Piegl, L., Tiller, W.: The NURBS Book. Springer-Verlag, Heidelberg (1997)
Epstein, M.P.: On the influence of parameterization in parametric interpolation. SIAM J. Numer. Anal. 13, 261–268 (1976)
Barsky, B.A., DeRose, T.D.: Geometric continuity of parametric curves: three equivalent characterizations. IEEE. Comp. Graph. Appl. 9(6), 60–68 (1989)
Boehm, W., Farin, G., Kahmann, J.: A survey of curve and surface methods in CAGD. Comp. Aided Geom. Design 1(1), 1–60 (1984)
de Boor, C., Höllig, K., Sabin, M.: High accuracy geometric Hermite interpolation. Comp. Aided Geom. Design 4(4), 269–278 (1987)
Hoschek, J.: Intrinsic parametrization for approximation. Comp. Aided Geom. Design 5(1), 27–31 (1988)
Lachance, M.A., Schwartz, A.J.: Four point parabolic interpolation. Comp. Aided Geom. Design 8, 143–149 (1991)
Lee, E.T.Y.: Corners, cusps, and parameterization: variations on a theorem of Epstein. SIAM J. Numer. Anal. 29, 553–565 (1992)
Lee, E.T.Y.: Choosing nodes in parametric curve interpolation. Comput. Aided Design 21(6), 363–370 (1989)
Marin, S.P.: An approach to data parameterization in parametric cubic spline interpolation problems. J. Approx. Theory 41, 64–86 (1984)
Nielson, G.M., Foley, T.A.: A survey of applications of an affine invariant norm. In: Lyche, T., Schumaker, L.L. (eds.) Math. Methods Comp. Aided Geom. Design. pp. 445–467. Academic Press, New York (1989)
Schaback, R.: Interpolation in \({\mathbb{R}}^2\) by piecewise quadratic visually \({C}^2\) Bézier polynomials. Comp. Aided Geom. Design 6, 219–233 (1989)
Sederberg, T.W., Zhao, J., Zundel, A.K.: Approximate parametrization of algebraic curves. In: Strasser, W., Seidel, H.P. (eds.) Theory and Practice in Geometric Modelling, pp. 33–54. Springer-Verlag, Heidelberg (1989)
Taubin, G.: Estimation of planar curves, surfaces, and non-planar space curves defined by implicit equations with applications to edge and range image segmentation. IEEE Trans. Patt. Anal. Mach. Intell. 13(11), 1115–1138 (1991)
Kocić, L.M., Simoncinelli, A.C., Della, V.B.: Blending parameterization of polynomial and spline interpolants. Facta Univ. (NIŠ) Ser. Math. Inf. 5, 95–107 (1990)
Mørken, K., Scherer, K.: A general framework for high-accuracy parametric interpolation. Math. Comput. 66(217), 237–260 (1997)
Noakes, L., Kozera, R.: Cumulative chords piecewise-quadratics and piecewise-cubics. In: Klette, R., Kozera, R., Noakes, L., Weickert, J. (eds.) Geometric Properties from Incomplete Data. Computational Imaging and Vision, vol. 31, pp. 59–75. Springer, The Netherlands (2006)
Kozera, R., Noakes, L.: Piecewise-quadratics and exponential parameterizations for reduced data. Appl. Maths Comput. 221, 620–638 (2013)
Farouki, R.T.: Optimal parameterizations. Comp. Aided Geom. Design 14(2), 153–168 (1997)
Rababah, A.: High order approximation methods for curves. Comp. Aided Geom. Des. 12, 89–102 (1995)
Schaback, R.: Optimal geometric Hermite interpolation of curves. In: Dæhlen, M., Lyche, T., Schumaker, L. (eds.) Mathematical Methods for Curves and Surfaces II, pp. 1–12. Vanderbilt University Press, Nashville (1998)
Floater, M.S.: Point-based methods for estimating the length of a parametric curve. J. Comput. Appl. Maths 196(2), 512–522 (2006)
Floater, M.S.: Chordal cubic spline interpolation is fourth order accurate. IMA J. Numer. Anal. 26, 25–33 (2006)
Kozera, R.: Cumulative chord piecewise-quartics for length and curve estimation. In: Petkov, N., Westenberg, M.A. (eds.) CAIP 2003. LNCS, vol. 2756, pp. 697–705. Springer, Heidelberg (2003)
Kozera, R.: Asymptotics for length and trajectory from cumulative chord piecewise-quartics. Fundam. Inf. 61(3–4), 267–283 (2004)
Kozera, R.: Curve modelling via interpolation based on multidimensional reduced data. Stud. Inf. 25(4B–61), 1–140 (2004)
Kozera, R., Noakes, L.: \(C^1\) interpolation with cumulative chord cubics. Fundam. Inf. 61(3–4), 285–301 (2004)
Kozera, R., Noakes, L., Klette, R.: External versus internal parameterizations for lengths of curves with nonuniform samplings. In: Asano, T., Klette, R., Ronse, C. (eds.) Geometry, Morphology, and Computational Imaging. LNCS, vol. 2616, pp. 403–418. Springer, Heidelberg (2003)
Kozera, R., Noakes, L.: Smooth interpolation with cumulative chord cubics. In: Wojciechowski, B., Smółka, B., Paulus, H., Kozera, R., Skarbek, W., Noakes, L. (eds.) ICCVG 2004. Computational Imaging and Vision, vol. 32, pp. 87–94. Springer, The Netherlands (2006)
Noakes, L., Kozera, R.: More-or-less uniform sampling and lengths of curves. Quar. Appl. Maths 61(3), 475–484 (2003)
Noakes, L., Kozera, R.: Interpolating sporadic data. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002, Part II. LNCS, vol. 2351, pp. 613–625. Springer, Heidelberg (2002)
Noakes, L., Kozera, R., Klette, R.: Length estimation for curves with \(\varepsilon \)-uniform sampling. In: Skarbek, W. (ed.) CAIP 2001. LNCS, vol. 2124, pp. 518–526. Springer, Heidelberg (2001)
Noakes, L., Kozera, R., Klette, R.: Length estimation for curves with different samplings. In: Bertrand, G., Imiya, A., Klette, R. (eds.) Digital and Image Geometry. LNCS, vol. 2243, pp. 339–351. Springer, Heidelberg (2002)
Noakes, L., Kozera, R.: Cumulative chords and piecewise-quadratics. In: Wojciechowski, K. (ed.) ICCVG 2002. Association for Image Processing Poland, vol. II, pp. 589–595. Silesian University of Technology Gliwice Poland, Institute of Theoretical and Applied Informatics, PAS, Gliwice, Poland (2002)
Vincent, S., Forsey, D.: Fast and accurate parametric curve length computation. J. Graphics Tools 6(4), 29–40 (2002)
Kozera, R., Noakes, L.: Fitting data via optimal interpolation knots. To be submitted
Noakes, L.: A global algorithm for geodesics. J. Austral. Math. Soc. Ser. A 64, 37–50 (1999)
Noakes, L., Kozera, R.: Nonlinearities and noise reduction in 3-source photometric stereo. J. Math. Imag. Vision 18(3), 119–127 (2003)
Noakes, L., Kozera, R.: 2D Leap-Frog Algorithm for optimal surface reconstruction. In: Latecki, M.J. (ed.) SPIE 1999. Vision Geometry VIII, vol. 3811, pp. 317–328. SPIE, Bellingham, Washington (1999)
Noakes, L., Kozera, R.: Denoising images: non-linear Leap-Frog for shape and light-source recovery. In: Asano, T., Klette, R., Ronse, C. (eds.) Geometry, Morphology, and Computational Imaging. LNCS, vol. 2616, pp. 419–436. Springer, Heidelberg (2003)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Janik, M., Kozera, R., Kozioł, P.: Reduced data for curve modeling - applications in graphics, computer vision and physics. Adv. Sci. Tech. 7(18), 28–35 (2013)
Noakes, L., Kozera, R.: Optimal natural splines with free knots. To be submitted
Kuznetsov, E.B., Yakimovich, A.Y.: The best parameterization for parametric interpolation. J. Comp. Appl. Maths 191, 239–245 (2006)
Shalashilin, V.I., Kuznetsov, E.B.: Parametric Continuation and Optimal Parameterization in Applied Mathematics and Mechanics. Kluwer Academic Publishers, Boston, Dordrecht, London (2003)
© 2016 Springer International Publishing Switzerland
Kozera, R., Noakes, L. (2016). Optimal Knots Selection for Sparse Reduced Data. In: Huang, F., Sugimoto, A. (eds) Image and Video Technology – PSIVT 2015 Workshops. PSIVT 2015. Lecture Notes in Computer Science(), vol 9555. Springer, Cham. https://doi.org/10.1007/978-3-319-30285-0_1
Print ISBN: 978-3-319-30284-3
Online ISBN: 978-3-319-30285-0