1 Introduction

The problem of fitting data points in Euclidean space \(E^m\) is a classical one for which many interpolation and approximation techniques exist (see e.g. [16]). Most classical interpolation schemes assume a given sequence of ordered data \({\mathscr {M}}=\{x_0,x_1,\dots ,x_n\}\) (where \(x_i\in E^m\)) together with the corresponding set of ordered interpolation knots \(\{t_i\}_{i=0}^n\) (parametric interpolation on non-reduced data). Data fitting and modeling get more complicated when dealing with reduced data, i.e. when only \({\mathscr {M}}\) is available (termed non-parametric interpolation). Here, for a given fitting scheme, different choices of ordered interpolation knots \(\{\hat{t}_i\}_{i=0}^n\) render different curves. An early work on this topic can be found in [7], later extended among others in [8–19], where various quantitative criteria (often for the special cases \(m=2,3\)) are introduced to measure the appropriateness of a particular choice of \(\{\hat{t}_i\}_{i=0}^n\) (e.g. the convergence rate for dense data \({\mathscr {M}}\) derived from the unknown curve). More recent works in which different parameterizations of the unknown knots are discussed, including the so-called cumulative chord parameterization

$$\begin{aligned} \hat{t}_i=0,\quad \quad \hat{t}_{i+1}=\hat{t}_i +\Vert q_{i+1}-q_i\Vert , \end{aligned}$$
(1)

can be found e.g. in [5, 20–26]. The analysis of convergence rates to the unknown curve \(\gamma : [0,T]\rightarrow E^m\) and to its length \(d(\gamma )\) (based on different parameterizations and dense samplings) has also recently been studied, among others, in [27–40].
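For illustration, the cumulative chord knots (1) can be computed in a few lines. The sketch below is a minimal Python illustration of ours (the experiments reported later use Mathematica, and the function name is hypothetical):

```python
import numpy as np

def cumulative_chord_knots(points):
    """Knots of (1): t_0 = 0, t_{i+1} = t_i + ||q_{i+1} - q_i||."""
    points = np.asarray(points, dtype=float)
    chords = np.linalg.norm(np.diff(points, axis=0), axis=1)  # ||q_{i+1} - q_i||
    return np.concatenate(([0.0], np.cumsum(chords)))

# 2D sample: each knot increment equals the chord length between neighbors.
q = [(0.0, 0.0), (3.0, 4.0), (3.0, 9.0)]
print(cumulative_chord_knots(q))  # [ 0.  5. 10.]
```

Note that, unlike uniform knots, this choice reflects the geometry of the layout of the data.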

In this paper we introduce a special criterion for choosing the unknown knots (applicable not only to dense but also to sparse data): minimizing the mean squared norm of the second derivative of the interpolating curve. The initial infinite-dimensional optimization problem (see Lemma 1) is reduced to a corresponding finite-dimensional one (set to determine the unknown knots). The latter constitutes a constrained, highly non-linear optimization task (the knots must be ordered), difficult to analyze theoretically and computationally sensitive to an increasing number of interpolation points when standard optimization techniques are invoked. An alternative (not analyzed in this paper) is a computationally feasible optimization scheme called Leap-Frog (see [41–44]), which is here adapted to compute the suboptimal knots for ordered data in arbitrary Euclidean space \(E^m\). The performance of the Leap-Frog Algorithm is illustrated on 2D and 3D reduced data \({\mathscr {M}}\) (i.e. for \(m=2,3\)) and subsequently compared with the multidimensional analogue of the Secant Method (see e.g. [2] or [46]). The initial guess is chosen according to cumulative chords (1).

The proposed scheme for knot selection is applicable in data fitting and curve modeling (e.g. computer graphics and computer vision), in approximation and interpolation (e.g. trajectory planning, image segmentation, data compression), as well as in many other engineering and physics problems (e.g. robotics or particle trajectory estimation). Specific applications of fitting sparse (and dense) reduced data \({\mathscr {M}}\) in \(E^m\) can be found e.g. in [5, 6] or [47].

2 Problem Formulation

Assume that ordered (by indexing) data points \({\mathscr {M}}=\{x_0,x_1,x_2,\dots ,x_n\}\) are given (here \(x_i\in E^m\) and \(x_i\ne x_{i+1}\), for \(i=0,1,\dots ,n\) with \(n\ge 2\)). Such \({\mathscr {M}}\) is called admissible data. Define now a class (denoted by \({\mathscr {I}}_T\)) of admissible curves \(\gamma \) as piecewise \(C^{2}\) curves \(\gamma :[0,T]\rightarrow E^m\) (where \(0<T<\infty \) is fixed) interpolating \({\mathscr {M}}\) with the ordered free unknown knots \(\{t_i\}_{i=0}^n\) satisfying \(\gamma (t_i)=x_i\) (here \(t_i<t_{i+1}\), \(t_0=0\) and \(t_n=T\)). More specifically, we assume that any choice of ordered interpolation knots \(\{t_i\}_{i=0}^n\) yields a curve \(\gamma \in C^1([0,T])\) which extends over each sub-segment \([t_i,t_{i+1}]\) (for \(i=0,1,\dots ,n-1\)) to \(\gamma \in C^{2}([t_i,t_{i+1}])\) - i.e. \(\gamma \) is \(C^2\) except possibly at the interpolation knots \(\{t_i\}_{i=0}^n\), where it is merely \(C^1\). The reason why we do not confine our analysis to the more natural class \(\gamma \in C^2([t_0,t_n])\) is justified by the subsequent choice of computational scheme (called herein Leap-Frog - see [42]), which effectively deals with the optimization problem (3). This scheme is designed to iteratively produce a sequence of curves \(\gamma _{LF}^k\in {\mathscr {I}}_T\) generically positioned outside of the class \(C^2([t_0,t_n])\) (i.e. \(\gamma _{LF}^k\notin C^2([t_0,t_n])\)). However, the optimum computed by Leap-Frog belongs to the tighter class of functions in \(C^2([t_0,t_n])\) - see [41, 48].

We look for an optimal \(\gamma _{opt}\in {\mathscr {I}}_T\) minimizing the following functional:

$$\begin{aligned} {\mathscr {J}}_T(\gamma )=\sum _{i=0}^{n-1}\int _{t_i}^{t_{i+1}} \Vert {\ddot{\gamma }}(t)\Vert ^2dt\;, \end{aligned}$$
(2)

i.e. satisfying

$$\begin{aligned} {\mathscr {J}}_T(\gamma _{opt})=\min _{\gamma \in {\mathscr {I}}_T}{\mathscr {J}}_T(\gamma )\;. \end{aligned}$$
(3)

For future needs define also \({\mathscr {J}}_T^i\) as the i-th segment energy:

$$\begin{aligned} {\mathscr {J}}_T^i(\gamma )=\int _{t_i}^{t_{i+1}}\Vert {\ddot{\gamma }}(t)\Vert ^2dt\;, \end{aligned}$$
(4)

obviously satisfying the inequality \({\mathscr {J}}_T^i(\gamma ) \le {\mathscr {J}}_T(\gamma )\). Note that for each function \(\gamma \in {\mathscr {I}}_T\) the corresponding sequence of unknown interpolation knots \(\{t_i\}_{i=0}^n\) (with \(t_0\) and \(t_n=T\) fixed) has its \(n-1\) internal components belonging to the set:

$$\begin{aligned} \varOmega _{t_0}^{T}= \{(t_1,t_2,\dots ,t_{n-1})\in \mathbb {R}^{n-1}: t_0<t_1<t_2< ... <t_{n-1}<t_n=T\}. \end{aligned}$$
(5)

Evidently (3) defines an infinite dimensional optimization task (considered over \({\mathscr {I}}_T\)) not invariant with respect to an arbitrary \(C^2\) class re-parameterization \(\phi :[0,T]\rightarrow [0,\tilde{T}]\) (with \(\tilde{T}>0\)).

Remark 1

Note that if we confine the class of reparameterizations to the affine ones, i.e. \(\phi (t)=t{\tilde{T}}/T\) (with \(\phi ^{-1}(s)=sT/{\tilde{T}}\)), then since \(\phi ^{-1}(s)=t\), \(\phi ^{-1'}\equiv T/{\tilde{T}}\) and \(\phi ^{-1''}\equiv 0\), formula (2) reads for \(\tilde{\gamma }(s)=(\gamma \circ \phi ^{-1})(s)\) (upon using integration by substitution) as:

$$\begin{aligned} {\mathscr {J}}_{\tilde{T}}(\tilde{\gamma })= & {} {T^3\over \tilde{T}^3} \sum _{i=0}^{n-1}\int _{\tilde{t}_i}^{\tilde{t}_{i+1}} \phi ^{-1'}(s)\Vert (\ddot{\gamma }\circ \phi ^{-1})(s)\Vert ^2ds= {T^3\over \tilde{T}^3} \sum _{i=0}^{n-1}\int _{t_i}^{t_{i+1}}\Vert \ddot{\gamma }(t)\Vert ^2dt\nonumber \\= & {} {T^3\over \tilde{T}^3}{\mathscr {J}}_{T}(\gamma )\;. \end{aligned}$$
(6)

Thus a curve \(\gamma _{opt}\in {\mathscr {I}}_T\) is optimal for \({\mathscr {J}}_{T}\) if and only if the corresponding \({\tilde{\gamma }}_{opt}\in {\mathscr {I}}_{\tilde{T}}\) is optimal for \({\mathscr {J}}_{\tilde{T}}\). Therefore we can effectively take \(t_n=T\) arbitrary. A similar argument applies to \(t_0=0\) (we set \(\phi (t)=t-t_0\) if the latter does not hold). \(\quad \square \)
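Remark 1 can also be checked numerically. The sketch below (our own illustration, not part of the paper) integrates \(\Vert \ddot{\gamma }\Vert ^2\) for the test cubic \(\gamma (t)=(t^3,t^2)\) on \([0,T]\) and for its affine rescaling on \([0,\tilde{T}]\), confirming the factor \(T^3/\tilde{T}^3\) in (6):

```python
import numpy as np

# Check of (6) for gamma(t) = (t^3, t^2): under phi(t) = t*Ttil/T
# we must get J_{Ttil}(gamma~) = (T/Ttil)^3 * J_T(gamma).
T, Ttil = 2.0, 1.0

def trap(y, x):                          # simple trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def gdd(t):                              # gamma''(t) = (6t, 2)
    return np.stack([6.0 * t, np.full_like(t, 2.0)], axis=-1)

def gtdd(s):                             # chain rule: gamma~''(s) = (T/Ttil)^2 gamma''(sT/Ttil)
    return (T / Ttil) ** 2 * gdd(s * T / Ttil)

t = np.linspace(0.0, T, 200001)
s = np.linspace(0.0, Ttil, 200001)
J = trap(np.sum(gdd(t) ** 2, axis=-1), t)    # exact value: 12*T^3 + 4*T = 104
Jt = trap(np.sum(gtdd(s) ** 2, axis=-1), s)
print(J, Jt, (T / Ttil) ** 3 * J)            # Jt matches the scaled J
```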

In anticipation of the forthcoming material we briefly re-introduce different families of piecewise cubics interpolating the data points \({\mathscr {M}}\) (see also [10, Chap. IV]) subject to various boundary conditions. In addition, in the remark to follow we also formulate the specific energy expression (a special case of (2)) for the family of so-called natural splines.

Remark 2

First recall that a cubic spline interpolant \(\gamma ^{C_i}_{\mathscr {T}}=\gamma ^{C}_{\mathscr {T}}|_{[t_i,t_{i+1}]}\), for given temporarily fixed interpolation knots \({\mathscr {T}}=(t_0,t_1,\dots ,t_{n-1},t_n)\) (here the knots \(\{t_i\}_{i=0}^n\) are admissible), is defined as

$$\begin{aligned} \gamma _{\mathscr {T}}^{C_i}(t)=c_{1,i}+c_{2,i}(t-t_i)+c_{3,i}(t-t_i)^2 +c_{4,i}(t-t_i)^3, \end{aligned}$$
(7)

to satisfy (for \(i=0,1,2,\dots ,n-1\); \(c_{j,i}\in \mathbb {R}^m\), where \(j=1,2,3,4\))

$$\begin{aligned} \gamma _{\mathscr {T}}^{C_i}(t_{i+k})=x_{i+k}\;, \quad \dot{\gamma }_{\mathscr {T}}^{C_i}(t_{i+k})=v_{i+k}\;,\quad k=0,1 \end{aligned}$$

with the velocities \(v_0,v_1,v_2,\dots ,v_{n-1},v_n\in \mathbb {R}^m\) assumed to be temporarily free parameters (if unknown). The coefficients \(c_{j,i}\) (with \(\varDelta t_i=t_{i+1}-t_i\)) are defined as follows:

$$\begin{aligned} c_{1,i}= & {} x_i\;,\nonumber \\ c_{2,i}= & {} v_i\;,\nonumber \\ c_{4,i}= & {} {v_i+v_{i+1}-2{x_{i+1}-x_i\over \varDelta t_i}\over (\varDelta t_i)^2}\;, \nonumber \\ c_{3,i}= & {} {{(x_{i+1}-x_i)\over \varDelta t_i}-v_i\over \varDelta t_i}-c_{4,i}\varDelta t_i\;. \end{aligned}$$
(8)

The latter comes from (7) and Newton’s formula (see e.g. [4, Chap. 1])

$$\begin{aligned} \gamma _{\mathscr {T}}^{C_i}(t)= & {} \gamma _{\mathscr {T}}^{C_i}(t_i)+ \gamma _{\mathscr {T}}^{C_i}[t_i,t_i](t-t_i) + \gamma _{\mathscr {T}}^{C_i}[t_i,t_i,t_{i+1}](t-t_i)^2\\& +\gamma _{\mathscr {T}}^{C_i}[t_i,t_i,t_{i+1},t_{i+1}](t-t_i)^2(t-t_{i+1}) \end{aligned}$$

combined with \(c_{1,i}=\gamma _{\mathscr {T}}^{C_i}(t_i)\), \(c_{2,i}=\dot{\gamma }_{\mathscr {T}}^{C_i}(t_i)\), \(c_{3,i}=\ddot{\gamma }_{\mathscr {T}}^{C_i}(t_{i})/2\), and \(c_{4,i}=\dddot{\gamma }_{\mathscr {T}}^{C_i}(t_{i})/6\). Adding \(n-1\) constraints enforcing the continuity of the second derivative of \(\gamma _{\mathscr {T}}^C\) at \(x_1,x_2,\dots ,x_{n-1}\), i.e. \(\ddot{\gamma }_{\mathscr {T}}^{C_{i-1}}(t_{i})=\ddot{\gamma }_{\mathscr {T}}^{C_i}(t_{i})\) for \(i=1,2,\dots ,n-1\), leads (upon using (7) and (8)) to \(m\) tridiagonal (strictly diagonally dominant) linear systems of \(n-1\) equations in \(n+1\) vector unknowns representing the velocities at \({\mathscr {M}}\), i.e. \(v_0, v_1,v_2,\dots ,v_{n-1},v_n\in \mathbb {R}^m\):

$$\begin{aligned} v_{i-1}\varDelta t_i+ & {} 2v_i(\varDelta t_{i-1}+\varDelta t_i)+v_{i+1}\varDelta t_{i-1}=b_i\;,\nonumber \\ b_i= & {} 3(\varDelta t_i{x_i-x_{i-1}\over \varDelta t_{i-1}} +\varDelta t_{i-1}{x_{i+1}-x_i\over \varDelta t_i})\;. \end{aligned}$$
(9)

The terminal velocities \(v_0\) and \(v_{n}\) (if unknown) can be calculated from the conditions \(\ddot{\gamma }_{\mathscr {T}}^{C}(0)=\ddot{\gamma }_{\mathscr {T}}^{C}(T)={\mathbf {0}}\) combined with (8) (this yields a natural cubic spline interpolant \(\gamma _{\mathscr {T}}^{NS}\) - a special \(\gamma _{\mathscr {T}}^{C}\)), which supplements (9) with the two missing linear equations:

$$\begin{aligned} 2v_0+v_1=3{x_1-x_0\over \varDelta t_0}\;,\quad v_{n-1}+2v_n=3{x_n-x_{n-1}\over \varDelta t_{n-1}}. \end{aligned}$$
(10)

The resulting \(m\) linear systems, each of size \((n+1)\times (n+1)\) (based on (9) and (10)), being strictly row diagonally dominant, yield a unique vector solution \(v_0,v_1,v_2,\dots ,v_{n-1},v_n\) (computed e.g. by Gaussian elimination without pivoting - see [4, Chap. 4]), which when fed into (8) determines explicitly a natural cubic spline \(\gamma _{\mathscr {T}}^{NS}\) (with fixed \({\mathscr {T}}\)).
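The full linear system (9)-(10) can be assembled and solved directly. The sketch below is our own illustration (`natural_spline_velocities` is a hypothetical name, and a dense solver stands in for dedicated tridiagonal elimination); it is sanity-checked on collinear data, for which every \(v_i\) must equal the common slope:

```python
import numpy as np

# Velocities of the natural cubic spline: internal rows from (9),
# end conditions from (10); one system per coordinate, solved jointly.
def natural_spline_velocities(x, t):
    x, t = np.asarray(x, float), np.asarray(t, float)
    n = len(x) - 1
    dt = np.diff(t)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros((n + 1, x.shape[1]))
    A[0, :2] = 2.0, 1.0                              # (10), left end
    b[0] = 3.0 * (x[1] - x[0]) / dt[0]
    for i in range(1, n):                            # (9), internal rows
        A[i, i - 1:i + 2] = dt[i], 2.0 * (dt[i - 1] + dt[i]), dt[i - 1]
        b[i] = 3.0 * (dt[i] * (x[i] - x[i - 1]) / dt[i - 1]
                      + dt[i - 1] * (x[i + 1] - x[i]) / dt[i])
    A[n, n - 1:] = 1.0, 2.0                          # (10), right end
    b[n] = 3.0 * (x[n] - x[n - 1]) / dt[n - 1]
    return np.linalg.solve(A, b)                     # rows v_0, ..., v_n

# Points sampled from a straight line: all velocities equal the slope.
t = np.array([0.0, 1.0, 3.0, 4.0])
x = np.stack([2.0 * t, -t], axis=1)
print(natural_spline_velocities(x, t))   # every row ~ [ 2. -1.]
```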

Combining (2) with (7) results in (in fact the following also holds for any spline \(\gamma ^C_{\mathscr {T}}\) provided the respective velocities \(\{v_i\}_{i=0}^n\) are somehow prescribed - e.g. for a Hermite or a complete spline [4, Chap. 4]):

$$\begin{aligned} {\mathscr {J}}_{T} (\gamma _{\mathscr {T}}^{NS})= & {} \sum _{i=0}^{n-1}\int _{t_i}^{t_{i+1}} \Vert \ddot{\gamma }_{{\mathscr {T}}}^{NS_i}(t)\Vert ^2dt\nonumber \\= & {} \sum _{i=0}^{n-1}\int _{t_i}^{t_{i+1}} \langle 2c_{3,i}+6c_{4,i}(t-t_i)|2c_{3,i}+6c_{4,i}(t-t_i)\rangle dt\nonumber \\= & {} 4 \sum _{i=0}^{n-1}(\Vert c_{3,i}\Vert ^2\varDelta t_i+3\Vert c_{4,i}\Vert ^2(\varDelta t_i)^3 +3\langle c_{3,i}|c_{4,i}\rangle (\varDelta t_i)^2)\;.\nonumber \\ \end{aligned}$$

Upon introducing \(c_{5,i}=c_{3,i}+c_{4,i}\varDelta t_i\) the latter reduces to:

$$\begin{aligned} {\mathscr {J}}_{T}(\gamma _{{\mathscr {T}}}^{NS})= & {} 4 \sum _{i=0}^{n-1}(\varDelta t_i\Vert c_{5,i}\Vert ^2+(\varDelta t_i)^3\Vert c_{4,i}\Vert ^2+\langle c_{5,i}|c_{4,i}\rangle (\varDelta t_i)^2). \end{aligned}$$
(11)

By (8) we also have \(c_{5,i}=((x_{i+1}-x_i)/(\varDelta t_i)^2) -(v_i/\varDelta t_i)\) and thus

$$\begin{aligned} \varDelta t_i\Vert c_{5,i}\Vert ^2= & {} {\Vert x_{i+1}-x_i\Vert ^2\over (\varDelta t_i)^3} +{\Vert v_i\Vert ^2\over \varDelta t_i}-2{\langle x_{i+1}-x_i|v_i\rangle \over (\varDelta t_i)^2 } \;,\\ (\varDelta t_i)^3\Vert c_{4,i}\Vert ^2= & {} {\Vert v_i+v_{i+1}\Vert ^2\over \varDelta t_i}+ {4 \Vert x_{i+1}-x_i\Vert ^2\over (\varDelta t_i)^3} - {4\langle v_{i+1}+v_i|x_{i+1}-x_i\rangle \over (\varDelta t_i)^2}\;,\\ \langle c_{5,i}|c_{4,i}\rangle (\varDelta t_i)^2= & {} {\langle x_{i+1}-x_i| v_i+v_{i+1}\rangle \over (\varDelta t_i)^2} -{2\Vert x_{i+1}-x_i\Vert ^2\over (\varDelta t_i)^3}-{\Vert v_i\Vert ^2\over \varDelta t_i} -{\langle v_i|v_{i+1}\rangle \over \varDelta t_i}\\&+{2\langle v_i|x_{i+1}-x_i\rangle \over (\varDelta t_i)^2} \end{aligned}$$

which when passed to (11) yields finally

$$\begin{aligned} {\mathscr {J}}_{T}(\gamma _{\mathscr {T}}^{NS})= & {} 4\sum _{i=0}^{n-1}\Big ({-1\over (\varDelta t_i)^3}\big (-3\Vert x_{i+1}-x_i\Vert ^2 +3\langle v_i+v_{i+1}|x_{i+1}-x_i\rangle \varDelta t_i\nonumber \\&\quad \quad \quad \quad \quad \quad -(\Vert v_i\Vert ^2+\Vert v_{i+1}\Vert ^2 +\langle v_i|v_{i+1}\rangle )(\varDelta t_i)^2\big )\Big )\;. \end{aligned}$$
(12)

Clearly, for given data points \({\mathscr {M}}\) and each strictly increasing sequence of fixed knots \(\{t_i\}_{i=0}^n\), the natural spline \({\gamma _{\mathscr {T}}^{NS}}\) exists and is unique. Note also that the energy (in contrast with each curve \(\gamma _{\mathscr {T}}^{C_i}\) itself) involves only the two vector coefficients \(c_{3,i}\) and \(c_{4,i}\) appearing in (7), since \(\ddot{\gamma }_{\mathscr {T}}^{C_i}(t)=2c_{3,i}+6c_{4,i}(t-t_i)\).
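As a consistency check of the closed form (12) (our own illustration, not from the paper), one can compare the per-segment term of (12) against a direct numerical integration of \(\Vert 2c_{3,i}+6c_{4,i}(t-t_i)\Vert ^2\) with the coefficients (8):

```python
import numpy as np

# One segment with endpoints x_i, x_{i+1}, velocities v_i, v_{i+1}, spacing dt:
# the closed-form term of (12) must equal the integral of ||gamma''||^2.
rng = np.random.default_rng(0)
xi, xi1, vi, vi1 = rng.standard_normal((4, 3))      # random segment data in E^3
dt = 1.7
dx = xi1 - xi

closed = (4.0 / dt ** 3) * (3.0 * dx @ dx
                            - 3.0 * (vi + vi1) @ dx * dt
                            + (vi @ vi + vi1 @ vi1 + vi @ vi1) * dt ** 2)

# ||gamma''(t_i + s)||^2 = ||2 c3 + 6 c4 s||^2 with c3, c4 from (8).
c4 = (vi + vi1 - 2.0 * dx / dt) / dt ** 2
c3 = (dx / dt - vi) / dt - c4 * dt
s = np.linspace(0.0, dt, 100001)
acc2 = np.sum((2.0 * c3 + 6.0 * np.outer(s, c4)) ** 2, axis=1)
numeric = float(np.sum((acc2[1:] + acc2[:-1]) * np.diff(s)) / 2.0)  # trapezoid
print(closed, numeric)   # the two values agree to high accuracy
```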

Once we vary the knots \(\{t_i\}_{i=0}^n\) (subject to \(t_0<t_1<t_2<\dots<t_{n-1}<t_n=T\), with \(t_0\) and \(t_n=T\) fixed), the corresponding space of such natural splines \(\gamma ^{NS}\) (denoted here by \({\mathscr {I}}^{NS}\)) evidently satisfies \({\mathscr {I}}^{NS}\subset {\mathscr {I}}_{T}\cap C^2([0,T])\) (moreover \(\gamma ^{NS}\in C^{\infty }([t_i,t_{i+1}])\) for each \(i=0,1,\dots ,n-1\); we omit the subscript \(\mathscr {T}\) to emphasize that the internal knots may vary).

If we now minimize (2) only over the class of natural splines \({\mathscr {I}}^{NS}\subset {\mathscr {I}}_{T}\) (such a restriction of (2) and (3) to (12) is justified later by Lemma 1), then

$$\begin{aligned} {\mathscr {J}}_{T}(\gamma ^{NS}_{opt})= \min _{\gamma ^{NS} \in {\mathscr {I}}^{NS}}{\mathscr {J}}_{T} (\gamma ^{NS}) \end{aligned}$$
(13)

reduces to finding the optimal parameters \((t_1^{opt},t_2^{opt},\dots ,t_{n-1}^{opt})\) for (13) (here the terminal knots \(t_0\) and \(t_n=T\) are constant) within the family of natural splines \({\mathscr {I}}^{NS}\), subject to the constraint \(t_0<t_1^{opt}<t_2^{opt}<\dots<t_{n-1}^{opt}<t_n=T\). As mentioned, there is a one-to-one correspondence between a natural spline \(\gamma ^{NS}\) and its interpolation knots. Consequently (13) can be reformulated (see also (5)), upon introducing \(\hat{\mathscr {T}}=(t_1,t_2,\dots ,t_{n-1})\), into the following minimization problem in \(\hat{\mathscr {T}}\):

$$\begin{aligned} {\mathscr {J}}_{T}(\gamma ^{NS}_{opt})= & {} \min _{\hat{\mathscr {T}}\in \varOmega _{t_0}^{T}} {\mathscr {J}}_{T}^F(t_1,t_2,\dots ,t_{n-1})\nonumber \\= & {} \min _{\hat{\mathscr {T}} \in \varOmega _{t_0}^{T}} \sum _{i=0}^{n-1}\int _{t_i}^{t_{i+1}}\Vert \ddot{\gamma }^{NS}(s)\Vert ^2ds\;\nonumber \\= & {} \min _{\hat{\mathscr {T}} \in \varOmega _{t_0}^{T}} 4\sum _{i=0}^{n-1}\Big ({-1\over (\varDelta t_i)^3}\big (-3\Vert x_{i+1}-x_i\Vert ^2 +3\langle v_i+v_{i+1}|x_{i+1}-x_i\rangle \varDelta t_i\nonumber \\&\quad \quad \quad \quad \quad \quad -(\Vert v_i\Vert ^2+\Vert v_{i+1}\Vert ^2 +\langle v_i|v_{i+1}\rangle )(\varDelta t_i)^2\big )\Big )\;. \end{aligned}$$
(14)

Thus the implicit formula (12) yields, upon feeding in \(v_i\) from (9) and (10), an explicit non-linear expression for \({\mathscr {J}}_{T}^F\) (see (14)), ready for minimization over \({\mathscr {I}}^{NS}\) with respect to the free variables \((t_1,t_2,\dots ,t_{n-1})\in \varOmega _{t_0}^{T}\). Note that the value of the energy \({\mathscr {J}}_T^F(\hat{\mathscr {T}})\) is always finite for each \(\hat{\mathscr {T}}\in \varOmega _{t_0}^T\), since \(\gamma _{\mathscr {T}}^{NS}\in C^2\) and therefore \(\Vert \ddot{\gamma }_{\mathscr {T}}^{NS}\Vert \) is continuous over the compact set \([t_0,t_n]\) (as \(t_n=T<+\infty \)).

The class of natural splines \({\mathscr {I}}^{NS}\) is invoked here since (3) reduces to the same optimization confined merely to the subclass \({\mathscr {I}}^{NS}\subset {\mathscr {I}}_{T}\) (see [10, 41, 48]). In fact, by [41] the following holds (for arbitrary fixed knots \(t_0<t_n=T\)):

Lemma 1

For given admissible data points \({\mathscr {M}}\) in arbitrary Euclidean space \(E^m\), the subclass of natural splines \({\mathscr {I}}^{NS}\subset {\mathscr {I}}_{T}\) satisfies

$$\begin{aligned} \min _{\gamma \in {\mathscr {I}}_T}{\mathscr {J}}_T(\gamma )= \min _{\gamma ^{NS}\in {\mathscr {I}}^{NS}} {\mathscr {J}}_T(\gamma ^{NS})\;. \end{aligned}$$
(15)

In the next section we test the optimization problem (15) (converted into (12) or (14)) on some 2D and 3D data.

3 Numerical Experiments

The experiments are performed in Mathematica. We compare the performance of Leap-Frog (see [41]) with the Secant Method applied to (12) (as justified by (15)). The experiments are conducted only for sparse reduced data points \({\mathscr {M}}\) in \(E^{2,3}\); these represent special cases of admissible sparse reduced data in arbitrary \(E^m\). The initial guesses are based on cumulative chords (1).

The first example deals with reduced data \({\mathscr {M}}\) in \(E^2\) (i.e. for \(m=2\)).

Fig. 1.

Natural splines interpolating data points \({\mathscr {M}}_{2D1}\) (a) \(\gamma _{{\mathscr {T}}_{uni}}^{NS}\) with uniform knots \({\mathscr {T}}_{uni}\), (b) \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) with cumulative chords \({\mathscr {T}}_{c}\), (c) \(\gamma _{{\mathscr {T}}_{opt}^{LF}}^{NS}\) with optimal knots \({\mathscr {T}}_{opt}^{LF}={\mathscr {T}}_{opt}^{SM}\) (thus \(\gamma _{{\mathscr {T}}_{opt}^{LF}}^{NS}=\gamma _{{\mathscr {T}}_{opt}^{SM}}^{NS}\)) (d) \(\gamma _{{\mathscr {T}}_{opt}}^{NS}\) and \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) plotted together.

Example 1

Consider for \(n=5\) the following 2D reduced data points (see dotted points in Fig. 1):

$$\begin{aligned} {\mathscr {M}}_{2D1}=\{(-4, 0), (-0.5, -4), (0.5, -3), (-0.5, 4), (0.5, 3), (-1, 3.8)\}\;. \end{aligned}$$

A blind guess of uniform interpolation knots yields (rescaled to \(T_{c}=\hat{t}_n\) - see (1)):

$$\begin{aligned} {\mathscr {T}}_{uni}=\{0,3.38291,6.76583,10.1487,13.5317,16.9146 \} \end{aligned}$$

and the initial guess based on cumulative chord \({\mathscr {T}}_c =(t_0,{\hat{\mathscr {T}}}_c,T_c)\) (i.e. based on the geometry of the layout of the data) renders:

$$\begin{aligned} {\mathscr {T}}_{c}=\{0, 5.31507, 6.72929, 13.8004, 15.2146, 16.9146\}. \end{aligned}$$

The natural splines \(\gamma _{{\mathscr {T}}_{uni}}^{NS}\) (based on \({\mathscr {T}}_{uni}=(0,{\hat{\mathscr {T}}}_{uni},16.9146)\)) and \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) (based on \({\mathscr {T}}_{c}\)) yield the energies \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{uni})=7.18796<{\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_c)=7.8536\). Both interpolants \( \gamma _{{\mathscr {T}}_{uni}}^{NS}\) and \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) are shown in Fig. 1a and b, respectively. The Secant Method yields (for (12)) the optimal knots (augmented by the terminal times \(t_0=0\) and \(t_5=T_c\))

$$\begin{aligned} {\mathscr {T}}_{opt}^{SM}=\{0, 3.67209, 5.62892, 11.435, 14.5491, 16.9146 \} \end{aligned}$$

with the optimal energy \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{SM})=4.25388\). The execution time amounts to \(T^{SM}=7.037922\ sec\). The resulting curve \(\gamma _{{\mathscr {T}}_{opt}^{SM}}^{NS}\) is plotted in Fig. 1c. Note that for each free variable the Secant Method uses here the two initial numbers \(t_i^c\pm 0.5\). The Leap-Frog Algorithm decreases the initial energy \({\mathscr {J}}^{F}_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_c)\) over 42 iterations to \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (i.e. the same value as for the Secant Method), with the optimal knots satisfying \({\mathscr {T}}_{opt}^{LF}={\mathscr {T}}_{opt}^{SM}\) up to the 6th decimal place (this is the iteration stopping bound). The respective execution time is \(T^{LF}=3.333620\ sec<T^{SM}\). Over the 0th, 1st, 10th, 20th, 30th and 42nd iterations Leap-Frog decreases the energy \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_c)\) to:

$$\begin{aligned} \{7.8536,4.93366, 4.25839, 4.25389, 4.25388, 4.25388\} \end{aligned}$$

with only the first two iterations contributing a major energy decrease (and hence the corrections of the initial guess for the knots taken as \({\mathscr {T}}_{c}\)). The resulting natural spline \(\gamma _{{\mathscr {T}}_{opt}^{LF}}^{NS}\) (clearly the same as \(\gamma _{{\mathscr {T}}_{opt}^{SM}}^{NS}\) yielded by the Secant Method), based on \({\mathscr {T}}_{opt}^{LF}\), is shown in Fig. 1c and also visually compared with \(\gamma _{{\mathscr {T}}_c}^{NS}\) in Fig. 1d. Note that if the Leap-Frog stopping condition is changed, e.g. to make the current Leap-Frog energy equal to \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (say up to the 5th decimal place), then only 22 iterations are needed here, with the shorter execution time \(T^{LF}_{E}=2.270440\ sec\) and with the optimal knots

$$\begin{aligned} {\mathscr {T}}_{opt}^{LF_E}=\{0,3.67502,5.63183, 11.4368, 14.5498,16.9146 \}. \end{aligned}$$

We now lose a little precision in the estimation of the optimal knots but speed up the Leap-Frog execution time, while obtaining almost the same interpolating curve as the optimal one (as \({\mathscr {T}}_{opt}^{LF_E}\approx {\mathscr {T}}_{opt}^{SM}\)). Other a posteriori iteration stopping criteria can also be considered which accelerate Leap-Frog's performance even further at almost no cost in the difference between the computed and the optimal curve. \(\quad \square \)
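The knot optimization of Example 1 can be mimicked by a simple coordinate-descent stand-in (our own construction, not the actual Leap-Frog update of [41]): sweep over the internal knots, minimizing the natural-spline energy \({\mathscr {J}}_{T}^F\) of (12) in one knot at a time by golden-section search, starting from the cumulative chords (1):

```python
import numpy as np

def velocities(x, t):                    # solve (9) with end conditions (10)
    n = len(x) - 1
    dt = np.diff(t)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros((n + 1, x.shape[1]))
    A[0, :2] = 2.0, 1.0
    b[0] = 3.0 * (x[1] - x[0]) / dt[0]
    for i in range(1, n):
        A[i, i - 1:i + 2] = dt[i], 2.0 * (dt[i - 1] + dt[i]), dt[i - 1]
        b[i] = 3.0 * (dt[i] * (x[i] - x[i - 1]) / dt[i - 1]
                      + dt[i - 1] * (x[i + 1] - x[i]) / dt[i])
    A[n, n - 1:] = 1.0, 2.0
    b[n] = 3.0 * (x[n] - x[n - 1]) / dt[n - 1]
    return np.linalg.solve(A, b)

def energy(x, t):                        # closed-form energy (12)
    v, dt, dx = velocities(x, t), np.diff(t), np.diff(x, axis=0)
    vs, ve = v[:-1], v[1:]
    return float(np.sum((4.0 / dt ** 3) * (3.0 * np.sum(dx * dx, 1)
                 - 3.0 * np.sum((vs + ve) * dx, 1) * dt
                 + (np.sum(vs * vs, 1) + np.sum(ve * ve, 1)
                    + np.sum(vs * ve, 1)) * dt ** 2)))

def sweep(x, t, passes=20, eps=1e-6):    # golden-section per internal knot
    t, g = t.copy(), (np.sqrt(5.0) - 1.0) / 2.0
    for _ in range(passes):
        for i in range(1, len(t) - 1):   # keep t_{i-1} < t_i < t_{i+1}
            lo, hi = t[i - 1] + eps, t[i + 1] - eps
            for _ in range(40):
                m1, m2 = hi - g * (hi - lo), lo + g * (hi - lo)
                t[i] = m1; e1 = energy(x, t)
                t[i] = m2; e2 = energy(x, t)
                if e1 < e2:
                    hi = m2
                else:
                    lo = m1
            t[i] = 0.5 * (lo + hi)
    return t

x = np.array([[-4.0, 0.0], [-0.5, -4.0], [0.5, -3.0],
              [-0.5, 4.0], [0.5, 3.0], [-1.0, 3.8]])        # M_2D1 of Example 1
t0 = np.concatenate(([0.0],
                     np.cumsum(np.linalg.norm(np.diff(x, axis=0), axis=1))))
t_opt = sweep(x, t0)
print(energy(x, t0), energy(x, t_opt))   # the energy drops from the chord guess
```

Such a coordinate-wise sweep shares Leap-Frog's key trait of updating one knot at a time (and hence avoids large-size matrix inversion in the outer loop), though its per-knot update rule differs from the actual algorithm.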

We pass now to an example of reduced data in \(E^3\) (i.e. with \(m=3\)).

Fig. 2.

Natural splines interpolating data points \({\mathscr {M}}_{3D1}\) (a) \(\gamma _{{\mathscr {T}}_{uni}}^{NS}\) with uniform knots \({\mathscr {T}}_{uni}\), (b) \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) with cumulative chords \({\mathscr {T}}_{c}\), (c) \(\gamma _{{\mathscr {T}}_{opt}^{LF}}^{NS}\) with optimal knots \({\mathscr {T}}_{opt}^{LF}={\mathscr {T}}_{opt}^{SM}\) (thus \(\gamma _{{\mathscr {T}}_{opt}^{LF}}^{NS}=\gamma _{{\mathscr {T}}_{opt}^{SM}}^{NS}\)) (d) \(\gamma _{{\mathscr {T}}_{opt}^{LF}}^{NS}\) and \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) plotted together.

Example 2

Consider for \(n=5\) the following 3D reduced data points (see dotted points in Fig. 2):

$$\begin{aligned} {\mathscr {M}}_{3D1}=\{(0, 0, 0), (-0.5, 0, -4), (0.5, 0, -4), (-0.5, 0, 4), (0.5, 0, 4), (-1, 0, 3.8)\}. \end{aligned}$$

The uniform interpolation knots read (rescaled to \(T_{c}=\hat{t}_n\) - see (1)) as:

$$\begin{aligned} {\mathscr {T}}_{uni}=\{0 , 3.12133, 6.24266, 9.364, 12.4853, 15.6067\} \end{aligned}$$

and the initial guess based on cumulative chord \({\mathscr {T}}_c\) is here:

$$\begin{aligned} {\mathscr {T}}_{c}=\{0, 4.03113, 5.03113, 13.0934, 14.0934, 15.6067\}. \end{aligned}$$

The natural splines \(\gamma _{{\mathscr {T}}_{uni}}^{NS}\) (based on \({\mathscr {T}}_{uni}\)) and \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) (based on \({\mathscr {T}}_{c}\)) yield the energies \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{uni})=10.145>{\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_c)=9.45031\). Again, both interpolants \( \gamma _{{\mathscr {T}}_{uni}}^{NS}\) and \(\gamma _{{\mathscr {T}}_{c}}^{NS}\) are presented in Fig. 2a and b, respectively.

The Secant Method yields (for (12)) the optimal knots (augmented by terminal times \(t_0=0\) and \(t_5=T_c\))

$$\begin{aligned} {\mathscr {T}}_{opt}^{SM}=\{0, 2.91851,5.12399,11.1964,13.507,15.6067\} \end{aligned}$$

with the optimal energy \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{SM})=4.65476\). The execution time amounts to \(T^{SM}=6.783365\ sec\). The resulting curve \(\gamma _{{\mathscr {T}}_{opt}^{SM}}^{NS}\) is plotted in Fig. 2c. Note that for each free variable the Secant Method uses here the two initial numbers \(t_i^c\pm 0.1\). Leap-Frog decreases the initial energy to \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{LF})= {\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (i.e. the same value as for the Secant Method), with the stopping condition \({\mathscr {T}}_{opt}^{LF}={\mathscr {T}}_{opt}^{SM}\) (up to the 6th decimal place) met after 38 iterations. The respective execution time amounts to \(T^{LF}= 3.757498\ sec<T^{SM}\). Over the 0th (i.e. \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_c)\)), 1st, 2nd, 10th, 13th and 36th iterations Leap-Frog decreases the energy to:

$$\begin{aligned} \{9.45031, 5.30697, 4.83704, 4.65485, 4.65476,4.65476\} \end{aligned}$$

with, again, only the first three iterations contributing a major correction of the initial guess knots \({\mathscr {T}}_{c}\). The resulting natural spline \(\gamma _{{\mathscr {T}}_{opt}^{LF}}^{NS}\) (clearly the same as \(\gamma _{{\mathscr {T}}_{opt}^{SM}}^{NS}\) yielded by the Secant Method), based on \({\mathscr {T}}_{opt}^{LF}\), is shown in Fig. 2c and also visually compared with \(\gamma _{{\mathscr {T}}_c}^{NS}\) in Fig. 2d. Again, if the Leap-Frog stopping condition is changed, e.g. to make the current Leap-Frog energy equal to \({\mathscr {J}}^F_{{\mathscr {T}}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (say up to the 5th decimal place), then only 13 iterations are needed here, with the shorter execution time \(T^{LF}_{E}= 1.878057\ sec<T^{SM}\) and the optimal knots

$$\begin{aligned} {\mathscr {T}}_{opt}^{LF_E}=\{0, 2.92093,5.12632,11.1981,13.5079,15.6067\}. \end{aligned}$$

As previously, we lose here a little precision in the estimation of the optimal knots but accelerate the Leap-Frog execution time, obtaining almost the same interpolating curve as the optimal one (as \({\mathscr {T}}_{opt}^{LF_E}\approx {\mathscr {T}}_{opt}^{SM}\)). \(\quad \square \)

4 Conclusions

In this paper we discussed a method of fitting reduced data \({\mathscr {M}}\) in arbitrary Euclidean space \(E^m\) with natural splines \(\gamma _{\mathscr {T}}^{NS}\), based on finding the best unknown knots \((t_1^{opt},t_2^{opt},\dots ,t_{n-1}^{opt})\) (and thus the best natural spline) to minimize the total mean squared norm of the acceleration of the interpolant. The original optimization problem (2), posed in the wider class of piecewise-\(C^2\) interpolants, is reduced to the class of natural splines \(\gamma _{\mathscr {T}}^{NS}\) - see Lemma 1. This in turn is reformulated into the finite-dimensional constrained optimization task (14) in the variables \((t_1,t_2,\dots ,t_{n-1})\), subject to the inequalities \(t_0<t_1<t_2< \dots <t_{n-1}<t_n\). Two computational schemes are deployed to test the quality of the computed interpolants, namely Leap-Frog and the Secant Method. Neither relies on large-size matrix inversion during the computational procedure. For sparse reduced data \({\mathscr {M}}\) our optimization set-up, together with the applied numerical schemes, offers a feasible choice (supplemented with computational tools) for approximating the unknown interpolation knots \(\{t_i\}_{i=0}^n\approx \{\hat{t}_i^{opt}\}_{i=0}^n\). Future work will include the theoretical analysis of the nature of (14) and of the convergence of the tested iterative schemes to its local (global) minima (minimum).

Some recent related work on fitting reduced data \({\mathscr {M}}\) in \(E^{2,3}\) can also be found in [49, 50].