1 Introduction

Assume that \(\gamma :I\rightarrow \mathbb {E}^n\) represents a smooth regular curve (i.e. \(\dot{\gamma }(t) \ne {\vec 0}\)) of class \(C^k\) (usually with \(k=3,4\)) defined over a compact interval \(I=[0,T]\) (with \(0<T<\infty \)). Suppose that \(m+1\) interpolation points \(\{q_i\}_{i=0}^m=\{\gamma (t_i)\}_{i=0}^m\) (forming the so-called reduced data \(Q_m\)) belong to an arbitrary Euclidean space \(\mathbb {E}^n\). The corresponding knots \({\mathcal T}=\{t_i\}^{m}_{i=0}\) (with \(t_i<t_{i+1}\)) are not given. We introduce now (see e.g. [1, 7, 12] or [19]) some preliminary notions (applicable for \(m\rightarrow \infty \)).

Definition 1.1

The interpolation knots \({\mathcal T}\) are admissible if:

$$\begin{aligned} \lim \limits _{m\rightarrow \infty } \delta _m = 0,\ \text {where} \ \delta _m = \max \limits _{1 \le i \le m }(t_i-t_{i-1}). \end{aligned}$$
(1)

Definition 1.2

The interpolation knots \({\mathcal T}\) are more-or-less uniform if there exist constants \(0 < K_l\le K_u\) such that:

$$\begin{aligned} (K_l/m)\le t_i-t_{i-1}\le (K_u/m), \end{aligned}$$
(2)

for all \(i=1,2,\dots ,m\) and any \(m\in \mathbb {N}\). Alternatively, more-or-less uniformity amounts to the existence of some constant \(0<\beta \le 1\) such that \(\beta \delta _m\le t_{i}-t_{i-1}\le \delta _m\) for all \(i=1,2,\dots ,m\) and arbitrary \(m\in \mathbb {N}\). Lastly, the subfamily \({\mathcal T}_{\beta _0}\) of more-or-less uniform samplings represents a set of \(\beta _0\textit{-more-or-less uniform}\) samplings if each of its representatives satisfies \(\beta _0\le \beta \le 1\), for some \(0<\beta _0\le 1\) fixed.
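For illustration only, the quantities from Definitions 1.1 and 1.2 can be computed directly for any given knot sequence; the short Python sketch below (numpy assumed; the helper name mol_uniform_beta is ad hoc) returns \(\delta _m\) and the largest admissible \(\beta \):

```python
import numpy as np

def mol_uniform_beta(t):
    """Return delta_m from (1) and the largest beta with beta*delta_m <= t_i - t_{i-1} <= delta_m."""
    gaps = np.diff(np.asarray(t, dtype=float))
    delta_m = gaps.max()
    return delta_m, gaps.min() / delta_m

# Toy knots: i/m perturbed by +-1/(3m), endpoints pinned to 0 and 1.
m = 12
t = np.arange(m + 1) / m + (-1.0) ** np.arange(1, m + 2) / (3.0 * m)
t[0], t[-1] = 0.0, 1.0
print(mol_uniform_beta(t))   # here delta_m = 5/(3m) and beta = 1/5
```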

Having selected the fitting scheme \(\hat{\gamma }\) for \(Q_m\), the unknown knots \({\mathcal T}\) for the interpolant \(\hat{\gamma }\) must somehow be replaced by estimates \(\hat{\mathcal T}=\{\hat{t}_i\}_{i=0}^m\) subject to \(\hat{\gamma }(\hat{t}_i)=q_i\). We use here the so-called exponential parameterization (see e.g. [17]) which depends on a single parameter \(\lambda \in [0,1]\) according to:

$$\begin{aligned} \hat{t}_{0}=0\ \ \text {and} \ \ \hat{t}_{i} =\hat{t}_{i-1}+\Vert q_i-q_{i-1}\Vert ^{\lambda }, \end{aligned}$$
(3)

for \(i=1,2,\dots ,m\). It is also assumed here that \(q_i\ne q_{i+1}\) so that the condition \(\hat{t}_{i}< \hat{t}_{i+1}\), stipulated generically while fitting reduced data \(Q_m\), is preserved. The case of \(\lambda =0\) in (3) gives uniform knots \(\hat{t}_{i}=i\). Evidently the latter does not reflect the geometry of \(Q_m\). On the other hand, \(\lambda =1\) yields the so-called cumulative chord parameterization, for which the knot increments coincide with the Euclidean distances between consecutive points \(q_i\) and \(q_{i+1}\), and as such it reflects the spread of \(Q_m\). More information on the above topic and related issues can be found e.g. in [3, 5, 16, 17] or [18].
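As a minimal illustration of (3) (the paper lists no code of its own; the helper name exp_knots and the toy data below are ad hoc), the knot estimates \(\hat{\mathcal T}\) may be computed as follows:

```python
import numpy as np

def exp_knots(q, lam):
    """Exponential parameterization (3): hat t_0 = 0, hat t_i = hat t_{i-1} + ||q_i - q_{i-1}||^lam."""
    steps = np.linalg.norm(np.diff(np.asarray(q, dtype=float), axis=0), axis=1) ** lam
    return np.concatenate(([0.0], np.cumsum(steps)))

q = [(0.0, 0.0), (1.0, 0.2), (1.5, 1.0), (1.4, 2.0)]   # toy planar reduced data
print(exp_knots(q, lam=0.0))   # uniform knots 0, 1, 2, 3
print(exp_knots(q, lam=1.0))   # cumulative chord parameterization
```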

The selection of the specific interpolant \(\hat{\gamma }:\hat{I}=[0,\hat{T}]\rightarrow \mathbb {E}^n\) (with \(\hat{T}=\hat{t}_m\)) together with some knots’ estimates \(\hat{\mathcal T}\approx {\mathcal T}\) raises an important question concerning the convergence rate (if any) in approximating \(\gamma \) with \(\hat{\gamma }\) (or its length) once \(m\rightarrow \infty \). Recall first (see [1, 12] or [19]):

Definition 1.3

Consider a family \(\{F_{\delta _m},\delta _m > 0\}\) of functions \(F_{\delta _m}: I \rightarrow \mathbb {E}^n\). We say that \(F_{\delta _m}\) is of order \(O(\delta _m^{\alpha })\) (denoted as \(F_{\delta _m}=O(\delta _m^{\alpha })\)), if there is a constant \(K > 0\) such that, for some \(\bar{\delta }> 0\) the inequality \(\Vert F_{\delta _m}(t)\Vert < K\delta _m^{\alpha } \) holds for all \(\delta _m \in (0,\bar{\delta })\), uniformly over I.

For a given \(\hat{\gamma }\) fitting dense data \(Q_m\) based on \(\hat{\mathcal T}\approx {\mathcal T}\) (and some a priori selected mapping \(\phi :I\rightarrow \hat{I}\)) the natural question arises whether the distance \(\Vert F_{\delta _m}\Vert =\Vert \gamma -\hat{\gamma }\circ \phi \Vert \) tends to 0 (uniformly over I), while \(m\rightarrow \infty \). Of course, by (1), proving \(F_{\delta _m}=\gamma -\hat{\gamma }\circ \phi =O(\delta _m^{\alpha })\) not only guarantees the latter but also establishes a lower bound on the convergence speed (if \(\alpha >0\)). The coefficient \(\alpha > 0\) appearing in Definition 1.3 is called the convergence rate in approximating \(\gamma \) by \(\hat{\gamma }\circ \phi \) uniformly over [0, T]. If additionally such \(\alpha \) cannot be improved (once \(\gamma \) and \({\mathcal T}\) are given) then \(\alpha \) is sharp. The latter analogously extends to the length estimation (with \(n=1\)), for which the scalar expression \(F_{\delta _m}=d(\gamma )-d(\hat{\gamma })=O(\delta _m^{\beta })\) is to be considered.

For certain applications, such as the analysis of the convergence rate in \(d(\gamma )=\int _0^T\Vert \dot{\gamma }(t)\Vert dt \approx d(\hat{\gamma })=\int _0^{\hat{T}}\Vert \hat{\gamma }'(\hat{t})\Vert d\hat{t}\) (see e.g. [2, 5] or [15]), the mapping \(\phi (t)=\hat{t}\) should be a reparameterization of I into \(\hat{I}\) (i.e. \(\dot{\phi }>0\)). In other situations, such as robot and drone path planning, extra trajectory looping of \(\hat{\gamma }\) is sometimes needed (e.g. a drone circling traction line posts during their inspection). Of course, in many other applications robot navigation requires trajectory planning with no loops whatsoever. In that context (as well as for length estimation) one of the conditions excluding the local looping of \(\hat{\gamma }\circ \phi \) is to require \(\phi \) to be an injective function (see e.g. [13]).

From now on it is assumed that \(\hat{\gamma }=\hat{\gamma }^L\) which represents a piecewise-Lagrange cubic \(\hat{\gamma }^L:\hat{I}=[0,\hat{T}]\rightarrow \mathbb {E}^n\) (see e.g. [1]). More precisely, the interpolant \(\hat{\gamma }^L\) is defined as a track-sum of Lagrange cubics \(\{\hat{\gamma }_{i=3k}^L\}_{k=0}^{(m-3)/3}\) with each \(\hat{\gamma }_i^L:\hat{I}_i=[\hat{t}_i,\hat{t}_{i+3}]\rightarrow \mathbb {E}^n\) satisfying \(q_{i+j}=\hat{\gamma }_i^L(\hat{t}_{i+j})\), for \(j=0,1,2,3\). As already pointed out, the unavailable knots \({\mathcal T}\) are estimated with \(\hat{\mathcal T}\) governed by exponential parameterization (3). For simplicity we suppose that \(m=3k\), where \(k\in \{1,2,3,\ldots \}\). In a similar fashion, one selects here \(\phi =\psi ^L\) defined as a track-sum of Lagrange cubics \(\{\psi _{i=3k}^L\}_{k=0}^{(m-3)/3}\) mapping \(\psi _i^L:I_i=[t_i,t_{i+3}]\rightarrow [\hat{t}_i,\hat{t}_{i+3}]\) and fulfilling \(\hat{t}_{i+j}=\psi _i^L(t_{i+j})\), for \(j=0,1,2,3\). Evidently if \(\dot{\psi }_i^L>0\) (as \(\hat{t}_i<\hat{t}_{i+1}\)) then \(\psi ^L_i:I_i\rightarrow \hat{I}_i = Rg(\psi ^L_i)\) (here \(Rg(\psi ^L_i)\) denotes the range of \(\psi _i^L\)). On the other hand, if \(\psi _i^L\) is not injective, we may also have \(\psi ^L_i:I_i\rightarrow \hat{I}_i \subset Rg(\psi ^L_i)\). In order to construct the composition \(\hat{\gamma }_i^L\circ \psi _i^L\) as a well-defined function, the domain of each \(\hat{\gamma }_i^L\) is here understood as naturally extendable from \(\hat{I}_i\) to \(\mathbb {R}\). Such adjusted Lagrange piecewise-cubics denoted as \(\check{\gamma }_i^L\) satisfy \(\check{\gamma }^L_i|_{\hat{I}_i}=\hat{\gamma }^L_i\) (a short code sketch of this construction is given after Theorem 1.4). The following result holds (see e.g. [7, 9] or [19]):

Theorem 1.4

Assume \(\gamma \in C^4([0,T])\) is a regular curve in \(\mathbb {E}^n\) sampled admissibly (see (1)). For \(\hat{\gamma }^L\) and \(\lambda =1\) in (3) each mapping \(\psi ^L_i\) is a \(C^{\infty }\) reparameterization of \(I_i\) into \(\hat{I}_i\) and we have (uniformly over [0, T]):

$$\begin{aligned} \gamma -\hat{\gamma }^L_i\circ \psi ^L_i= O(\delta _m^4). \end{aligned}$$
(4)

In the remaining cases of \(\lambda \in [0,1)\) from (3) let \(\gamma \) be sampled more-or-less uniformly (see (2)). Then for each mapping \(\psi ^L_i\) combined with \(\check{\gamma }_i^L\) the following holds (uniformly over [0, T]):

$$\begin{aligned} \gamma -\check{\gamma }^L_i\circ \psi ^L_i = O(\delta _m). \end{aligned}$$
(5)

Both (4) and (5) are sharp within the class of \(\gamma \in C^4([0,T])\) and within a given family of admitted samplings, assumed here as either (1) or (2), respectively. By the latter we understand the existence of at least one \(\gamma _0\in C^4([0,T])\) and some admissible (or more-or-less uniform) sampling \({\mathcal T}_0\) for which \(\alpha (1)=4\) in (4) (or \(\alpha (\lambda )=1\) for \(\lambda \in [0,1)\) in (5)) is sharp according to Definition 1.3 - see also [9] or [12]. Note that \(\psi ^L\) as a track-sum of \(\{\psi _{i=3k}^L\}_{k=0}^{(m-3)/3}\) defines a piecewise \(C^{\infty }\) mapping of I into \(\mathbb {R}\), at least continuous at \({\mathcal T}\). If \(\psi ^L\) is a reparameterization (which always holds asymptotically for \(\lambda =1\)) then \(\psi ^L:I\rightarrow \hat{I}\). In particular for \(\lambda =1\) we also have \(d(\gamma )-d(\hat{\gamma }^L)=O(\delta _m^4)\) - see [19]. In contrast, the injectivity of \(\psi _i^L\) and the length estimation for \(\lambda \in [0,1)\) have not been examined so far.
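Before proceeding, a minimal Python sketch (numpy assumed, toy data, ad hoc names) of the construction described above: \(\hat{\gamma }^L\) fits, per coordinate, a cubic in \(\hat{t}\) through each quadruple \(q_i,\dots ,q_{i+3}\), while \(\psi ^L\) fits a cubic in t sending \(t_{i+j}\) to \(\hat{t}_{i+j}\); note that the np.poly1d pieces are defined on all of \(\mathbb {R}\), which matches the extension \(\check{\gamma }_i^L\) mentioned above:

```python
import numpy as np

def exp_knots(q, lam):                                   # formula (3), as before
    steps = np.linalg.norm(np.diff(q, axis=0), axis=1) ** lam
    return np.concatenate(([0.0], np.cumsum(steps)))

def piecewise_cubics(nodes, values):
    """One interpolating cubic (np.poly1d) per consecutive quadruple of nodes."""
    return [np.poly1d(np.polyfit(nodes[i:i + 4], values[i:i + 4], 3))
            for i in range(0, len(nodes) - 3, 3)]

m = 12                                                   # m = 3k
t = np.linspace(0.0, 1.0, m + 1)                         # toy (uniform) knots T
q = np.stack([np.cos(t), np.sin(t)], axis=1)             # toy planar samples Q_m
that = exp_knots(q, lam=0.7)
gamma_hat = [piecewise_cubics(that, q[:, c]) for c in range(q.shape[1])]   # hat gamma^L
psi = piecewise_cubics(t, that)                                            # psi^L
# the composite hat gamma_0^L(psi_0^L(t)) on the first segment I_0 = [t_0, t_3]:
s = np.linspace(t[0], t[3], 5)
print(np.column_stack([gamma_hat[c][0](psi[0](s)) for c in range(2)]))
```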

In this paper we introduce two sufficient conditions enforcing each \(\psi _i^L: I_i\rightarrow \hat{I}_i\) to be injective, for \(\lambda \in [0,1)\) governing the exponential parameterization (3). These two conditions are represented by the inequalities (6) and (7). In the next step, Theorem 2.1 is established (the main result of this paper) to formulate several sufficient conditions enforcing (6) and (7) to hold asymptotically. Noticeably, all derived conditions stipulating asymptotically the injectivity of \(\psi ^L\) are independent of \(\gamma \) and apply to any fixed \(\lambda \in [0,1)\) and to any preselected \(\beta _0\)-more-or-less-uniform samplings (i.e. to any \(0<\beta _0< 1\) fixed a priori). Additionally, all re-transformed algebraic constraints established here are visualized with the aid of 3D plots in Mathematica (see [22]). The conditions can also be exploited once only incomplete information about the samplings is available, such as a priori knowledge of the respective upper and lower bounds for each triple \((M_{im},N_{im},P_{im})\) characterizing \({\mathcal T}\) as specified in (8) - see also Remark 3.1. The examples illustrate Theorem 2.1 and the relevance of this work (see Example 1). The conjecture concerning the sharp convergence rate \(\alpha (\lambda )=2\) in length estimation \(d(\gamma )-d(\hat{\gamma }^L)=O(\delta _m^{\alpha (\lambda )})\) (combined with (3) for all \(\lambda \in [0,1)\) yielding \(\dot{\phi }>0\)) is tested numerically (see Example 2 and Remark 3.2).

2 Sufficient Conditions for Injectivity of \(\psi _i^L\)

In this section we establish and discuss the asymptotic character (i.e. applicable for m sufficiently large) of two sufficient conditions enforcing \(\psi _i^L\) to be a genuine reparameterization of \(I_i\) into \(\hat{I}_i\) based on multidimensional reduced data \(Q_m\).

Evidently the positivity of the quadratic \(\dot{\psi }_i^L(t)=a_it^2+b_it+c_i\) over \(I_i\) is guaranteed (for both sparse and dense data \(Q_m\)) provided either (6) or (7) holds:

$$\begin{aligned} a_i&<0 \quad \quad \text {and}\quad \quad \dot{\psi }_i^L(t_i)>0 \quad \qquad \text {and}\quad \quad \dot{\psi }_i^L(t_{i+3})>0,&\end{aligned}$$
(6)
$$\begin{aligned} a_i&>0 \quad \quad \text {and}\quad \quad \dot{\psi }_i^L\Big (-\frac{b_i}{2a_i}\Big )>0. \end{aligned}$$
(7)
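For a single quadratic \(\dot{\psi }_i^L(t)=a_it^2+b_it+c_i\) both tests translate directly into a few lines of code. The Python sketch below (the helper name dpsi_positive is ad hoc and not part of any cited implementation) returns whether (6) or (7) certifies positivity over \([t_i,t_{i+3}]\):

```python
def dpsi_positive(a, b, c, ti, ti3):
    """Sufficient checks (6)/(7) that a*t^2 + b*t + c stays positive over [t_i, t_{i+3}]."""
    q = lambda s: (a * s + b) * s + c
    if a < 0.0:                          # concave parabola: condition (6)
        return q(ti) > 0.0 and q(ti3) > 0.0
    if a > 0.0:                          # convex parabola: condition (7), global minimum at the vertex
        return q(-b / (2.0 * a)) > 0.0
    return min(q(ti), q(ti3)) > 0.0      # degenerate linear case, not covered by (6) or (7)
```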

Noticeably, any admissible sampling (1) can be characterized as follows:

$$\begin{aligned} \begin{array}{cccc} t_{i+1}-t_i=M_{im}\delta _m, \ \ \quad&t_{i+2}-t_{i+1}=N_{im}\delta _m&\text {and}&t_{i+3}-t_{i+2}=P_{im}\delta _m, \end{array} \end{aligned}$$
(8)

where \(0< M_{im},N_{im},P_{im}\le 1\). The main theoretical contribution of this paper reads as:

Theorem 2.1

Let \(\gamma \in C^3([0,T])\) be sampled \(\beta _0\)-more-or-less uniformly (see Definition 1.2) with knots \({\mathcal T}\) represented by (8). For data \(Q_m\) combined with exponential parameterization (3) (with any fixed \(\lambda \in [0,1)\)) the condition (6) yielding each \(\psi _i^L:I_i\rightarrow \hat{I}_i\) as a reparameterization holds asymptotically, if the following three inequalities are satisfied for sufficiently large m:

$$\begin{aligned} {1\over P_{im}+N_{im}+M_{im}}\left( \frac{P_{im}^{\lambda -1}-N_{im}^{\lambda -1}}{P_{im}+N_{im}} -\frac{N_{im}^{\lambda -1}-M_{im}^{\lambda -1}}{N_{im}+M_{im}}\right) \le \rho <0, \end{aligned}$$
(9)
$$\begin{aligned}&M_{im}^{\lambda -1}-\frac{(N_{im}^{\lambda -1}-M_{im}^{\lambda -1})M_{im}}{N_{im}+M_{im}}+\frac{(P_{im}^{\lambda -1} -N_{im}^{\lambda -1})M_{im}(N_{im}+M_{im})}{(P_{im}+N_{im})(P_{im}+N_{im}+M_{im})}\nonumber \\&\qquad -\frac{(N_{im}^{\lambda -1}-M_{im}^{\lambda -1})M_{im}}{P_{im}+N_{im}+M_{im}}\ge \rho _1>0, \end{aligned}$$
(10)
$$\begin{aligned}&P_{im}^{\lambda -1}-\frac{(N_{im}^{\lambda -1}-M_{im}^{\lambda -1})P_{im}(P_{im}+N_{im})}{(N_{im}+M_{im})(P_{im}+N_{im}+M_{im})} +\frac{P_{im}(P_{im}^{\lambda -1}-N_{im}^{\lambda -1})}{P_{im}+N_{im}+M_{im}}\nonumber \\&\qquad +\frac{P_{im}(P_{im}^{\lambda -1}-N_{im}^{\lambda -1})}{P_{im}+N_{im}}\ge \rho _2>0, \end{aligned}$$
(11)

with fixed \(\rho <0\), \(\rho _1>0\) and \(\rho _2>0\) but arbitrarily small. Similarly, the condition (7) enforcing \(\dot{\psi }_i^L>0\) holds asymptotically if the following two inequalities are met for sufficiently large m:

$$\begin{aligned} {1\over P_{im}+N_{im}+M_{im}}\left( \frac{P_{im}^{\lambda -1}-N_{im}^{\lambda -1}}{P_{im}+N_{im}} -\frac{N_{im}^{\lambda -1}-M_{im}^{\lambda -1}}{N_{im}+M_{im}}\right) \ge \rho _3>0, \end{aligned}$$
(12)
$$\begin{aligned}&M_{im}^{\lambda -1}+\frac{(N_{im}^{\lambda -1}-M_{im}^{\lambda -1})(2N_{im}+M_{im})}{3(N_{im}+M_{im})}-\frac{(N_{im}^{\lambda -1}-M_{im}^{\lambda -1})^2}{3(N_{im}+M_{im})}\nonumber \\&\qquad \cdot \frac{(P_{im}+N_{im})(P_{im}+N_{im}+M_{im})}{(P_{im}^{\lambda -1}-N_{im}^{\lambda -1})(N_{im}+M_{im})-(N_{im}^{\lambda -1}-M_{im}^{\lambda -1})(P_{im}+N_{im})}\nonumber \\&\qquad -\Big [\frac{(P_{im}^{\lambda -1}-N_{im}^{\lambda -1})(N_{im}+M_{im})-(N_{im}^{\lambda -1}-M_{im}^{\lambda -1})(P_{im}+N_{im})}{(N_{im}+M_{im})(P_{im}+N_{im})(P_{im}+N_{im}+M_{im})}\Big ]\nonumber \\&\qquad \cdot \frac{(N_{im}^2+N_{im}M_{im}+M_{im}^2)}{3}\ge \rho _4>0,\qquad \end{aligned}$$
(13)

where the constants \(\rho _3>0\) and \(\rho _4>0\) are fixed but arbitrarily small.

Proof

Newton's interpolation formula (see [1]) based on the divided differences of \(\psi _i^L\) yields, over \(I_i\):

$$\begin{aligned} \psi _i^L(t)= & {} \psi _i^L(t_i)+\psi _i^L[t_i,t_{i+1}](t-t_i)+\psi _i^L[t_i,t_{i+1},t_{i+2}](t-t_i)(t-t_{i+1})\\&+\,\psi _i^L[t_i,t_{i+1},t_{i+2},t_{i+3}](t-t_i)(t-t_{i+1})(t-t_{i+2}), \end{aligned}$$

which for each \(t\in I_i\) renders \(\dot{\psi }_i^L(t)=\)

$$\begin{aligned}&\psi _i^L[t_i,t_{i+1}]+\psi _i^L[t_i,t_{i+1}, t_{i+2}](2t-t_i-t_{i+1})+\psi _i^L[t_i,t_{i+1}, t_{i+2}, t_{i+3}]\qquad \\&\qquad \cdot \big ((t-t_{i+1})(t-t_{i+2})+\,(t-t_{i})(t-t_{i+2})+(t-t_{i})(t-t_{i+1})\big ).\nonumber \end{aligned}$$
(14)

We recall now the proof of (18) (see [9] or [12]) since it is vital for further arguments. As \(\gamma \) is regular it can be assumed to be parameterized by arc-length, rendering \(\Vert \dot{\gamma }(t)\Vert =1\), for \(t\in [0,T]\) (see [2]). Since \(1\equiv \Vert \dot{\gamma }(t)\Vert ^2=\langle \dot{\gamma }(t)|\dot{\gamma }(t)\rangle \), differentiation yields \(0\equiv (\Vert \dot{\gamma }(t)\Vert ^2)'=2\langle \dot{\gamma }(t)|\ddot{\gamma }(t)\rangle \) over \(t\in [0,T]\). The orthogonality of \(\dot{\gamma }\) and \(\ddot{\gamma }\) nullifies certain terms in the expression (for \(j=i+k\) with \(k=0,1,2\) and any \(\lambda \in [0,1]\)):

$$\begin{aligned} \hat{t}_{j+1}-\hat{t}_{j}=\Vert q_{j+1}-q_j\Vert ^{\lambda }=\Vert \gamma (t_{j+1})-\gamma (t_{j})\Vert ^{\lambda }= \langle \gamma (t_{j+1})-\gamma (t_{j})|\gamma (t_{j+1})-\gamma (t_{j})\rangle ^{{\lambda \over 2}} \end{aligned}$$
(15)

once Taylor expansion for \(\gamma \in C^3\) is used:

$$\begin{aligned} \gamma (t_{j+1})-\gamma (t_j)=(t_{j+1}-t_j)\dot{\gamma }(t_j)+\frac{(t_{j+1}-t_j)^2}{2} \ddot{\gamma }(t_j)+O((t_{j+1}-t_j)^3). \end{aligned}$$
(16)

Indeed, upon substituting (16) into (15) and exploiting \(\langle \dot{\gamma }(t)|\ddot{\gamma }(t)\rangle =0\) one obtains:

$$\begin{aligned} \hat{t}_{j+1}-\hat{t}_{j}=(t_{j+1}-t_{j})^{\lambda }\left( 1+O((t_{j+1}-t_j)^2)\right) ^{{\lambda \over 2}}. \end{aligned}$$
(17)

For any admissible samplings the constants in the term \(O((t_{j+1}-t_j)^2)\) depend on the third derivative of \(\gamma \), which is bounded over [0, T] as \(\gamma \in C^3\). Again Taylor's Theorem applied to the function \(f(x)=(1+x)^{\frac{\lambda }{2}}\) at \(x_0=0\) yields for all \(x\in [-\varepsilon ,\varepsilon ]=I_{\varepsilon }\) (with some fixed \(\varepsilon >0\)) the existence of some \(\xi _x\) satisfying \(|\xi _x|<|x|\) such that \(f(x)=1+\frac{\lambda }{2}x+\frac{\lambda }{4}(\frac{\lambda }{2}-1)(1+\xi _x)^{\frac{\lambda }{2}-2}x^2\). For \(0<\varepsilon <1\) we exclude the singularity of \(\tau (\xi _x)=(1+\xi _x)^{\frac{\lambda }{2}-2}\) at \(\xi _x=-1\) (with \(\lambda \in [0,1]\)) which forces \(\tau \) to be bounded over \(I_{\varepsilon }\). Thus for \(|\xi _x|<|x|\le \varepsilon <1\) we have \(f(x)=1+\frac{\lambda }{2}x+O(x^2)\) - the constant multiplying \(x^2\) now depends on \(\lambda \) (which is fixed). Take now \(x=O((t_{j+1}-t_j)^2)\) determined in (17), which is asymptotically small (for m large) due to the admissibility condition (1) and thus separated from \(-1\). Hence the second divided differences of \(\psi _i^L\) satisfy (with \(k=0,1,2\)):

$$\begin{aligned} \psi _i^L[t_{i+k},t_{i+k+1}]=\frac{\hat{t}_{i+k+1}-\hat{t}_{i+k}}{t_{i+k+1}-t_{i+k}} =(t_{i+k+1}-t_{i+k})^{\lambda -1}+O((t_{i+k+1}-t_{i+k})^{1+\lambda }). \end{aligned}$$
(18)

Thus, by (8) and (18) one obtains for each \(\lambda \in [0,1]\) and \(k=0,1,2\) the following formula for the second divided differences of \(\psi _i^L\) (needed also in (14)):

$$\begin{aligned} \psi _i^L[t_{i+k},t_{i+k+1}]=R_{imk}^{\lambda -1}\delta _m^{\lambda -1}+O(\delta _m^{1+\lambda }), \end{aligned}$$
(19)

with \(R_{im0}=M_{im}\), \(R_{im1}=N_{im}\) and \(R_{im2}=P_{im}\). Furthermore still by (18) combined with \(0<(t_{i+l+1}-t_{i+l})(t_{i+2}-t_i)^{-1}\le 1\) (for \(l=0,1\)) and telescoped \(t_{i+2}-t_i=(t_{i+2}-t_{i+1})+(t_{i+1}-t_i)\) the third-divided difference of \(\psi _i^L\) is equal to \(\psi _i^L[t_{i},t_{i+1},t_{i+2}]\)

$$\begin{aligned}= & {} \frac{(t_{i+2}-t_{i+1})^{\lambda -1}-(t_{i+1}-t_{i})^{\lambda -1}}{t_{i+2}-t_{i}} +\frac{O((t_{i+2}-t_{i+1})^{1+\lambda })+O((t_{i+1}-t_{i})^{1+\lambda })}{t_{i+2}-t_{i}}\nonumber \\= & {} \frac{N_{im}^{\lambda -1}\delta _m^{\lambda -1}-M_{im}^{\lambda -1}\delta _m^{\lambda -1}}{(N_{im}+M_{im})\delta _m} +O\left( \frac{(t_{i+2}-t_{i+1})^{1+\lambda }}{t_{i+2}-t_i}\right) +O\left( \frac{(t_{i+1}-t_i)^{1+\lambda }}{t_{i+2}-t_i}\right) \nonumber \\= & {} \frac{N_{im}^{\lambda -1}-M_{im}^{\lambda -1}}{N_{im}+M_{im}} \delta _m^{\lambda -2}+O((t_{i+2}-t_{i+1})^{\lambda })+O((t_{i+1}-t_{i})^{\lambda }). \end{aligned}$$
(20)

A similar argument leads to:

$$\begin{aligned} \psi _i^L[t_{i+1},t_{i+2},t_{i+3}]= \frac{P_{im}^{\lambda -1}-N_{im}^{\lambda -1}}{P_{im}+N_{im}}\delta _m^{\lambda -2}+O((t_{i+3}-t_{i+2})^{\lambda })+O((t_{i+2}-t_{i+1})^{\lambda }). \end{aligned}$$
(21)

Hence by (20) and (21) (for \(l=0,1\)) the third divided differences of \(\psi _i^L\) (needed in (14)) read as:

$$\begin{aligned} \psi _i^L[t_{i+l},t_{i+l+1},t_{i+l+2}]=\frac{R_{im(l+1)}^{\lambda -1}-R_{iml}^{\lambda -1}}{R_{im(l+1)}+R_{iml}}\delta _m^{\lambda -2} +O(\delta _m^{\lambda }). \end{aligned}$$
(22)

Coupling again (20) and (21) with telescoped \(t_{i+3}-t_i=(t_{i+3}-t_{i+2})+(t_{i+2}-t_{i+1})+(t_{i+1}-t_i)\) and \(0<(t_{i+l+1}-t_{i+l})(t_{i+3}-t_i)^{-1}<1\) reduces the fourth divided difference of \(\psi _i^L\) into:

$$\begin{aligned} \psi _i^L[t_i,t_{i+1},t_{i+2},t_{i+3}]= & {} \frac{\frac{P_{im}^{\lambda -1}-N_{im}^{\lambda -1}}{P_{im}+N_{im}}-\frac{N_{im}^{\lambda -1}-M_{im}^{\lambda -1}}{N_{im}+M_{im}}}{t_{i+3}-t_i}\delta _m^{\lambda -2}\\&+\sum _{l=0}^2O\left( \frac{(t_{i+l+1}-t_{i+l})^{\lambda }}{t_{i+3}-t_i}\right) , \end{aligned}$$

which ultimately yields \(\psi _i^L[t_i,t_{i+1},t_{i+2},t_{i+3}]\)

$$\begin{aligned} ={1\over P_{im}+N_{im}+M_{im}}\Big (\frac{P_{im}^{\lambda -1}-N_{im}^{\lambda -1}}{P_{im}+N_{im}}-\frac{N_{im}^{\lambda -1}-M_{im}^{\lambda -1}}{N_{im}+M_{im}}\Big )\delta _m^{\lambda -3}+O(\delta _m^{\lambda -1}). \end{aligned}$$
(23)

The proof of (23) relies on \(O\left( \frac{(t_{i+l+1}-t_{i+l})^{\lambda }}{t_{i+3}-t_i}\right) =O((t_{i+l+1}-t_{i+l})^{\lambda -1}) =O(\delta _m^{\lambda -1})\). The second step resorts to more-or-less uniformity (2) of the admitted samplings \({\mathcal T}\) for any \(\lambda \in [0,1)\) (as \(\lambda -1<0\)). However, to keep all constants in \(O(\delta _m^{\lambda -1})\) from (23) independent of each representative of (2), from now on we admit only \(\beta _0\)-more-or-less uniform samplings for some fixed \(0<\beta _0\le 1\) (see Definition 1.2). The latter permits us to exploit the inequality \(|(t_{i+l+1}-t_{i+l})^{\lambda -1}|\le \beta _0^{\lambda -1}\delta _m^{\lambda -1}\) to justify (23) with constants in \(O(\delta _m^{\lambda -1})\) depending on \(\gamma \) and \(\lambda \) (but not on the samplings \({\mathcal T}\)).

Recalling now that \(\dot{\psi }_i^L(t)=a_it^2+b_it+c_i\) over \(I_i\), by (14) we have:

$$\begin{aligned} a_i= & {} 3\psi _i^L[t_i,t_{i+1},t_{i+2},t_{i+3}],\nonumber \\ b_i= & {} 2\psi _i^L[t_i,t_{i+1},t_{i+2}]-2\psi _i^L[t_i,t_{i+1},t_{i+2},t_{i+3}](t_{i+2}+t_{i+1}+t_i),\nonumber \\ c_i= & {} \psi _i^L[t_i,t_{i+1}]-\psi _i^L[t_i,t_{i+1},t_{i+2}](t_i+t_{i+1})\nonumber \\&+\,\psi _i^L[t_i,t_{i+1},t_{i+2},t_{i+3}](t_it_{i+1}+t_{i+1}t_{i+2}+t_it_{i+2}). \end{aligned}$$
(24)
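For illustration, the Python sketch below (numpy assumed; names ad hoc) computes \((a_i,b_i,c_i)\) via the divided differences in (24) for one toy knot quadruple and verifies the result against the derivative of the directly fitted cubic, cf. (14); the coefficients can then be passed to the check of (6) and (7) sketched earlier:

```python
import numpy as np

def dpsi_coeffs(t, that):
    """(a_i, b_i, c_i) via (24) for one quadruple t = (t_i,...,t_{i+3}), that = (hat t_i,...,hat t_{i+3})."""
    d1 = np.diff(that) / np.diff(t)             # psi_i^L[t_j, t_{j+1}]
    d2 = np.diff(d1) / (t[2:] - t[:-2])         # psi_i^L[t_j, t_{j+1}, t_{j+2}]
    d3 = (d2[1] - d2[0]) / (t[3] - t[0])        # psi_i^L[t_i,...,t_{i+3}]
    a = 3.0 * d3
    b = 2.0 * d2[0] - 2.0 * d3 * (t[0] + t[1] + t[2])
    c = d1[0] - d2[0] * (t[0] + t[1]) + d3 * (t[0] * t[1] + t[1] * t[2] + t[0] * t[2])
    return a, b, c

t = np.array([0.0, 0.3, 0.5, 1.0])              # toy knots, illustrative only
that = np.array([0.0, 0.34, 0.52, 0.98])
a, b, c = dpsi_coeffs(t, that)
print(np.allclose([a, b, c], np.polyder(np.polyfit(t, that, 3))))   # consistency with (14)
```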

In the next steps both conditions (6) and (7) enforcing \(\dot{\psi }_i^L>0\) (for arbitrary m) are transformed into their asymptotic analogues applicable for sufficiently large m (i.e. for \(Q_m\) sufficiently dense). This will ultimately complete the proof of Theorem 2.1.

In doing so, both conditions (6) and (7) are reformulated into asymptotic counterparts expressed in terms of \((M_{im}, N_{im},P_{im})\) (see Theorem 2.1). To save space only the first inequality from (6) i.e. \(a_i<0\) is fully addressed here (which automatically covers both (i) and (iv) - see (9) and (12)). The remaining more complicated cases (ii), (iii) and (v) (listed below) are supplemented with the final asymptotic formulas (10), (11) and (13). The proof of the latter shall be given in the full journal version of this paper.

(i) By (24) the first inequality from (6) amounts to \(\psi _i^L[t_i,t_{i+1},t_{i+2},t_{i+3}]<0\) which in turn by (23) holds subject to:

$$\begin{aligned}&\Big (\frac{P_{im}^{\lambda -1}-N_{im}^{\lambda -1}}{(P_{im}+N_{im})(P_{im}+N_{im}+M_{im})}-\frac{N_{im}^{\lambda -1} -M_{im}^{\lambda -1}}{(N_{im}+M_{im})(P_{im}+N_{im}+M_{im})}\Big )\delta _m^{\lambda -3}\nonumber \\&\qquad +\,O(\delta _m^{\lambda -1})<0, \end{aligned}$$
(25)

for \((M_{im},N_{im},P_{im})\in [\beta _0,1]^3\). Asymptotically, for fixed \(\lambda \in [0,1)\) the dominant term determining the sign of (25) is the one accompanying \(\delta _m^{\lambda -3}\) and reads as (for all \(\beta _0\)-more-or-less uniform samplings):

$$\begin{aligned} \theta _1(M_{im}, N_{im}, P_{im})={1\over P_{im}+N_{im}+M_{im}} \left( \frac{P_{im}^{\lambda -1}-N_{im}^{\lambda -1}}{P_{im}+N_{im}} -\frac{N_{im}^{\lambda -1}-M_{im}^{\lambda -1}}{N_{im}+M_{im}}\right) , \end{aligned}$$

provided \(\theta _1\) is not of any order \(\Theta (\delta _m^{2+\varepsilon })\) with \(\varepsilon \ge 0\). A possible sufficient condition guaranteeing the latter is to require:

$$\begin{aligned} \theta _1(M_{im}, N_{im}, P_{im})\le \rho <0, \end{aligned}$$
(26)

to hold for any fixed \(\rho <0\). Evidently (26) amounts to the first inequality (9) assumed to hold in Theorem 2.1 in order to enforce in turn asymptotically the first inequality in (6) (for any fixed \(\lambda \in [0,1)\)).

(ii) A similar but longer argument shows that (upon combining (8), (14), (19), (22) and (23)) the asymptotic fulfillment of the second inequality from (6), i.e. \(\underline{\dot{\psi }_i^L(t_i)>0}\), is met subject to (10) satisfied for any fixed but arbitrarily small \(\rho _1>0\) and sufficiently large m.

(iii) The third inequality \(\underline{\dot{\psi }_i^L(t_{i+3})>0}\) determining (6) maps analogously into its asymptotic counterpart (11) assumed to be fulfilled for an arbitrary but fixed \(\rho _2>0\) and m sufficiently large.

(iv) Clearly the proof of (9) yields a symmetric sufficient condition for \(\underline{a_i>0}\) (representing the first inequality in (7)) to hold asymptotically. The latter coincides with (12), stipulated to be satisfied for any fixed \(\rho _3>0\), subject to m sufficiently large.

(v) The reformulation of \(\kappa _{im}=\underline{\dot{\psi }_i^L(\frac{-b_i}{2a_i})>0}\) from (7) into (13) (assumed to hold for any fixed \(\rho _4>0\) and sufficiently large m) involves a more intricate treatment (it is omitted here).

The asymptotic conditions established in Theorem 2.1 in the form of specific inequalities depend (for each i) exclusively on the triples \((M_{im},N_{im},P_{im})\in [\beta _0,1]^3\) and on fixed \(\lambda \in [0,1)\) (not on the curve \(\gamma \)). Consequently, they can all be visualized geometrically in 3D for each \(i=3k\) and \(\lambda \in [0,1)\) as well as for any regular curve \(\gamma \). Several examples with 3D plots are presented in Sect. 3 with the aid of the Mathematica package [22].
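For illustration of how such algebraic tests may be carried out (the paper's own computations rely on Mathematica [22]; the Python sketch below, with ad hoc names and the \(\rho \)-thresholds used later in Sect. 3, is only a rough counterpart), the left-hand sides of (9)-(13) can be evaluated for a single triple. Note that the shared denominator in (13) vanishes for uniform triples \(M_{im}=N_{im}=P_{im}\), so that case must be handled separately:

```python
import numpy as np

def conditions_6_7_star(M, N, P, lam, rho=-0.001, rho1=0.05, rho2=0.05,
                        rho3=0.001, rho4=0.005):
    """Evaluate (9)-(11) (i.e. (6)*) and (12)-(13) (i.e. (7)*) for one triple (M_im, N_im, P_im)."""
    Ml, Nl, Pl = M ** (lam - 1.0), N ** (lam - 1.0), P ** (lam - 1.0)
    S = M + N + P
    th1 = ((Pl - Nl) / (P + N) - (Nl - Ml) / (N + M)) / S           # lhs of (9) and (12)
    lhs10 = (Ml - (Nl - Ml) * M / (N + M)
             + (Pl - Nl) * M * (N + M) / ((P + N) * S)
             - (Nl - Ml) * M / S)                                   # lhs of (10)
    lhs11 = (Pl - (Nl - Ml) * P * (P + N) / ((N + M) * S)
             + P * (Pl - Nl) / S + P * (Pl - Nl) / (P + N))         # lhs of (11)
    D = (Pl - Nl) * (N + M) - (Nl - Ml) * (P + N)                   # zero for M = N = P
    lhs13 = (Ml + (Nl - Ml) * (2.0 * N + M) / (3.0 * (N + M))
             - (Nl - Ml) ** 2 / (3.0 * (N + M)) * (P + N) * S / D
             - D / ((N + M) * (P + N) * S) * (N * N + N * M + M * M) / 3.0)   # lhs of (13)
    cond6 = th1 <= rho and lhs10 >= rho1 and lhs11 >= rho2
    cond7 = th1 >= rho3 and lhs13 >= rho4
    return cond6, cond7

print(conditions_6_7_star(1.0, 1.0 / 3.0, 1.0 / 3.0, lam=0.9))      # first triple of (28)
```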

We note that all asymptotic conditions from Theorem 2.1 can be extended to their 2D analogues (an extra argument establishing in fact a new theorem), which in turn can be visualized in more appealing 2D plots. Again, this is omitted here as it exceeds the scope of this paper.

Recall that uniform sampling, for which \(M_{im}=N_{im}=P_{im}=1\) (i.e. where \(\beta _0=1\)), combined with \(\lambda \in [0,1)\), or \(\lambda =1\) combined with (1), both yield \(\dot{\psi }_i^L=1+O(\delta _m^2)>0\) (see [9] and [19]). Noticeably, conditions (10), (11) and (13) are met for either \(\lambda =1\) or \({\mathcal T}\) uniform and \(\lambda \in [0,1)\). In contrast, none of (9) or (12) (participating in either (6) or (7)) holds for the above two eventualities. A possible remedy to incorporate these two special cases in adjusted asymptotic representations of either \(a_i>0\) or \(a_i<0\) is to apply the fourth-order Taylor expansion for \(\gamma \in C^4\) - see (16). The analysis (left out here) yields a modified condition for \(a_i>0\) (and thus for \(a_i<0\)), this time hinging not only on the triples \((M_{im},N_{im},P_{im})\in [\beta _0,1]^3\) and \(\lambda \in [0,1)\) but also on the curvature of \(\gamma \) (via \(\Vert \ddot{\gamma }(t_i)\Vert ^2\)) along \({\mathcal T}\) (see [9] and [19]) - here \(\Vert \dot{\gamma }(t)\Vert =1\) as \(\gamma \) is a regular curve and as such can be assumed to be parameterized by arc-length (see [2]). The latter may not always be given in advance. Alternatively, one could rely on a priori imposed restrictions on the curvatures of \(\gamma \) within the prescribed family of admissible curves.

3 Experimentation and Testing

In this section, Theorem 2.1 is first illustrated with some examples based on algebraic tests supported by 3D plots generated in Mathematica (see Subsect. 3.1). Next the convergence rate \(\alpha (\lambda )\) for \(d(\gamma )-d(\hat{\gamma }^L)=O(\delta _m^{\alpha (\lambda )})\) is numerically investigated. Special attention is given to \(\lambda \in [0,1)\) yielding \(\psi ^L\) as a piecewise \(C^{\infty }\) reparameterization of [0, T] into \([0,\hat{T}]\) (see Subsect. 3.2).

In doing so, in a preliminary step, for a given fixed \(\beta _0\) two families of \(\beta _0\)-more-or-less uniform samplings (27) and (29) are introduced. Next the fulfillment of the asymptotic sufficient conditions enforcing the injectivity of \(\psi ^L\) (i.e. \(\dot{\psi }^L>0\) - see Theorem 2.1) is examined for various \(\lambda \in [0,1)\) and both samplings (27) and (29). In particular, the inequalities (9), (10), (11) (denoted in this section by (6)\(^{*}\)) and (12), (13) (marked here with (7)\(^{*}\)), representing asymptotically in 3D both (6) and (7), are tested for different sets of triples \((M_{im},N_{im},P_{im})\in [\beta _0,1]^3\) characterizing either (27) or (29). The algebraic calculations performed herein (assuming m is sufficiently large) are supplemented by geometrical visualizations with 3D plots in Mathematica. At this point, we re-emphasize that the asymptotic conditions from Theorem 2.1 can be extended further into respective 2D counterparts upon some laborious calculations. In return, the latter gives some advantage in visualizing more appealing 2D (versus 3D) plots. To save space, the relevant theory and testing concerning this extra 2D case are left out here.

The second example reports on tests designed to numerically evaluate \(\alpha (\lambda )\) in length estimation \(d(\gamma )-d(\hat{\gamma }^L)=O(\delta _m^{\alpha (\lambda )})\), for any \(\lambda \in [0,1]\) yielding each \(\psi ^L_i\) as an injective function. The conjecture concerning \(\alpha (\lambda )\) is proposed in Remark 3.2 based on our numerical results.

The tests reported here are performed for 2D and 3D curves \(\gamma _{sp}\), \(\gamma _S\) introduced in Example 2 (i.e. for \(n=2,3\)). However, all established results with the accompanying experimentation are equally applicable to arbitrary multidimensional reduced data \(Q_m=\{q_i\}_{i=0}^m\) with \(q_i=\gamma (t_i)\in {\mathbb E}^{ n}\).

3.1 Testing Injectivity of \(\psi ^L\)

Example 1

Consider first the following family \({\mathcal T}_1\) of more-or-less uniform samplings (for the geometrical distribution of \(\{\gamma (t_i)\}_{i=0}^{15}\) with sampling (27) see also Fig. 3(a) and Fig. 4(a)):

$$\begin{aligned} t_i={\left\{ \begin{array}{ll} \frac{i}{m}+\frac{1}{2m}, \quad \text {for}\quad i=4k+1, \\ \frac{i}{m}-\frac{1}{2m},\quad \text {for}\quad i=4k+3, \\ \frac{i}{m},\quad \quad \quad \ \ \text { for }\quad i=2k, \end{array}\right. } \end{aligned}$$
(27)

for which \(K_l=\frac{1}{2}\), \(K_u=\frac{3}{2}\) and \(\beta _1=\frac{1}{3}\) (see Definition 1.2). Here \(0\le i\le m=3k\), where \(k\in \{1,2,\dots \}\), so that \(t_0=0\) and \(t_m=T=1\). Upon resorting to (8) the following 3D compact asymptotic representation \({\mathcal T}_1^{3D}\) of \({\mathcal T}_1\) reads as (for \(m=3k\)):

$$\begin{aligned} {\mathcal T}_1^{3D}=\left\{ (1, \tfrac{1}{3}, \tfrac{1}{3}), (1, 1, \tfrac{1}{3}),(\tfrac{1}{3}, 1, 1), (\tfrac{1}{3}, \tfrac{1}{3}, 1),(1, \tfrac{1}{3}, \tfrac{2}{3}), (\tfrac{1}{3}, 1, \tfrac{2}{3})\right\} . \end{aligned}$$
(28)

The last two points in (28) are generated for \(m=3k\) as \(t_m=1\). We set \(\beta _0=0.16\) and hence as \(\beta _0\le \beta _1\) the sampling (27) is also \(\beta _0\textit{-more-or-less uniform}\).

We also admit another \(\beta _0\textit{-more-or-less uniform sampling}\ {\mathcal T}_2\) defined according to (for geometrical spread of \(\{\gamma (t_i)\}_{i=0}^{15}\) with sampling (29) see also Fig. 3(b) and Fig. 4(b)):

$$\begin{aligned} t_i= \frac{i}{m} +\frac{(-1)^{i+1}}{3m}, \end{aligned}$$
(29)

with \(K_l=\frac{1}{3}\), \(K_u=\frac{5}{3}\) and \(\beta _2=\frac{1}{5}\ge \beta _0\) (see Definition 1.2). Again we set \(t_0=0\) and \(t_m=T=1\) with \(0\le i\le m=3k\), for \(k\in \{1,2,\dots \}\). By (8) the 3D asymptotic form \({\mathcal T}^{3D}_2\) of (29) reads as:

$$\begin{aligned} {\mathcal T}_2^{3D}=\left\{ (\tfrac{4}{5},\tfrac{1}{5}, 1), (\tfrac{1}{5}, 1, \tfrac{1}{5}),(1,\tfrac{1}{5}, 1), (1,\tfrac{1}{5}, \tfrac{4}{5}), (\tfrac{1}{5}, 1, \tfrac{2}{5}) \right\} . \end{aligned}$$
(30)

The last two points in (30) come for \(m=3k\) as \(t_m=1\) and the first point is due to \(t_0=0\).
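For illustration, the samplings (27) and (29) and their asymptotic triples (8) can be reproduced by the Python sketch below (numpy assumed; names ad hoc; the endpoints are pinned to \(t_0=0\) and \(t_m=1\) as stipulated above):

```python
import numpy as np

def sampling_27(m):                      # family (27), m = 3k
    i = np.arange(m + 1)
    t = i / m + np.where(i % 4 == 1, 0.5 / m, np.where(i % 4 == 3, -0.5 / m, 0.0))
    t[0], t[-1] = 0.0, 1.0
    return t

def sampling_29(m):                      # family (29)
    i = np.arange(m + 1)
    t = i / m + (-1.0) ** (i + 1) / (3.0 * m)
    t[0], t[-1] = 0.0, 1.0
    return t

def triples(t):
    """All triples (M_im, N_im, P_im) from (8), one per quadruple index i = 3k."""
    gaps = np.diff(t)
    return {tuple(np.round(gaps[i:i + 3] / gaps.max(), 4)) for i in range(0, len(gaps), 3)}

for build, label in [(sampling_27, "(27)"), (sampling_29, "(29)")]:
    pts = set()
    for m in (120, 123, 126, 129):       # a few values m = 3k to collect all triples
        pts |= triples(build(m))
    print(label, sorted(pts))            # cf. (28) and (30)
```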

Table 1. Testing conditions (6) and (7) (implied asymptotically by (6)\(^{*}\) and (7)\(^{*}\)) for sampling (27) (represented by (28)) and for \(\lambda =0.3\) and \(\lambda =0.9\) with \(\rho =-0.001\), \(\rho _1=0.05\), \(\rho _2=0.05\), \(\rho _3=0.001\) and \(\rho _4=0.005\). Here \(\mathbf{T}\) stands for true and \(\mathbf{F}\) for false, respectively.
Table 2. Testing conditions (6) and (7) (implied asymptotically by (6)\(^{*}\) and (7)\(^{*}\)) for sampling (29) (represented by (30)) and for \(\lambda =0.3\) and \(\lambda =0.9\) with \(\rho =-0.001\), \(\rho _1=0.05\), \(\rho _2=0.05\), \(\rho _3=0.001\) and \(\rho _4=0.005\). Here \(\mathbf{T}\) stands for true and \(\mathbf{F}\) for false, respectively.

The inequalities (9), (10), (11) marked as (6)\(^{*}\) (or (12) and (13) denoted by (7)\(^{*}\)) enforcing asymptotically (6) (or (7)) to hold are tested over \([\beta _0,1]^3\) for both samplings (27) and (29). The fixed parameter \(\lambda \) is set either to \(\lambda =0.3\) or to \(\lambda =0.9\) with \(\rho =-0.001\), \(\rho _1=\rho _2=0.05\), \(\rho _3=0.001\) and \(\rho _4=0.005\) - see Table 1 and Table 2. The corresponding sets of triples \((M_{im},N_{im},P_{im})\in [\beta _0,1]^3\) satisfying either (6)\(^{*}\) or (7)\(^{*}\) represent the respective solids \(D_{\beta _0}^{\lambda }\subset [\beta _0,1]^3\) plotted in 3D by Mathematica as shown in Fig. 1 and Fig. 2.

Fig. 1.

Condition (6) enforced asymptotically by (6)\(^{*}\) visualized in 3D plots as two solids \(D_{\beta _0}^{\lambda }{\subset }[\beta _0,1]^3\), for \(\lambda = 0.3\) or \(\lambda =0.9\), respectively. Here \(\beta _0=0.16\) with dotted points representing samplings: a) (27) mapped into (28) or b) (29) mapped into (30) both for \(\lambda =0.3\) and samplings: c) (27) mapped into (28) or d) (29) mapped into (30) both for \(\lambda =0.9\).

Noticeably, different points from \({\mathcal T}_k^{3D}\), for \(k=1,2\), may satisfy different ones among the sufficient conditions enforcing either (6) or (7) to hold asymptotically. The latter is demonstrated in Table 1 and Table 2. Indeed, for \(\lambda =0.3\) none of the conditions from (6)\(^{*}\) is satisfied by either \({\mathcal T}_k^{3D}\) (for \(k=1,2\)), as we have \(\mathbf{F}\) in the respective columns of both Table 1 and Table 2. Moreover, the conditions from (7)\(^{*}\) are only fulfilled by some points (not all) from \({\mathcal T}_k^{3D}\). Consequently the injectivity of \(\psi _i^L\) for either \({\mathcal T}_1^{3D}\) or \({\mathcal T}_2^{3D}\) is not guaranteed. Geometrically, neither of the two sets \({\mathcal T}_k^{3D}\) (for \(k=1,2\)) is contained in the respective injectivity zones \(D_{\beta _0}^{\lambda =0.3}\) (for either (6)\(^*\) or (7)\(^*\)). In contrast, for \(\lambda =0.9\) a simple inspection of Table 1 and Table 2 reveals that all points from \({\mathcal T}_k^{3D}\) (for \(k=1,2\)) can be split into two subsets, each contained in the injectivity zone \(D_{\beta _0}^{\lambda =0.9}\) determined by either (6)\(^{*}\) or by (7)\(^{*}\), respectively. Algebraically, the latter yields at least one \(\mathbf{T}\) in the last two columns of all rows of both Table 1 and Table 2. \(\quad \square \)

Remark 3.1

Note that if for a given family of \(\beta _0\)-more-or-less uniform samplings \({\mathcal T}_{\beta _0}\) the subfamily \({\mathcal T}_{\beta _0}^{\nu }\subset {\mathcal T}_{\beta _0}\) with extra constraints \(\nu _1\le M_{im}\le \nu _2\), \(\nu _3\le N_{im}\le \nu _4\) and \(\nu _5\le P_{im}\le \nu _6\) (here \(\nu =(\nu _1,\nu _2,\nu _3,\nu _4,\nu _5,\nu _6)\)) is chosen, one can also examine (for a fixed \(\lambda \in [0,1)\)) whether \(I_{\nu }^{3D}\subset D_{\beta _0}^{\lambda }\), where \(I_{\nu }^{3D}=(\nu _1,\nu _2)\times (\nu _3,\nu _4)\times (\nu _5,\nu _6)\). By Theorem 2.1, should the latter hold, the entire subfamily \({\mathcal T}_{\beta _0}^{\nu }\) yields asymptotically \(\psi _i^L\) as injective functions. Such incomplete information on the input samplings \({\mathcal T}\) carried by \({\mathcal T}_{\beta _0}^{\nu }\) can in certain situations accompany \(Q_m\).

\(\quad \square \)
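A rough Python sketch of the box test from Remark 3.1 (ad hoc names; for brevity only the common left-hand side \(\theta _1\) of (9) and (12) is examined on a grid of \(I_{\nu }^{3D}\), whereas the complete test also requires (10), (11) and (13)):

```python
import numpy as np
from itertools import product

def theta1(M, N, P, lam):
    """Common left-hand side of (9) and (12)."""
    Ml, Nl, Pl = M ** (lam - 1.0), N ** (lam - 1.0), P ** (lam - 1.0)
    return ((Pl - Nl) / (P + N) - (Nl - Ml) / (N + M)) / (M + N + P)

def box_in_zone(nu, lam, rho=-0.001, rho3=0.001, grid=25):
    """Necessary part of the test I_nu^3D subset D_beta0^lambda, checked on a grid of the box nu."""
    axes = [np.linspace(nu[2 * j], nu[2 * j + 1], grid) for j in range(3)]
    return all(theta1(M, N, P, lam) <= rho or theta1(M, N, P, lam) >= rho3
               for M, N, P in product(*axes))

# hypothetical bounds nu = (nu1, nu2, nu3, nu4, nu5, nu6) with beta_0 = 0.16:
print(box_in_zone((0.8, 1.0, 0.16, 0.4, 0.16, 0.4), lam=0.9))
```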

3.2 Numerical Testing for Length Estimation

We pass now to the experiments designed to investigate the convergence rate \(\alpha (\lambda )\) in length approximation by examining \(d(\gamma )-d(\hat{\gamma })=O(\delta _m^{\alpha (\lambda )})\) - see Definition 1.3. The coefficient \(\alpha (\lambda )\) is estimated numerically by \(\tilde{\alpha }(\lambda )\), which in turn is computed using a linear regression on the pairs \(\{(\log (m),-\log (E_m))\}_{m=m_{min}}^{m=m_{max}}\), where \(E_m=|d(\gamma )-d(\hat{\gamma }^L)|\), for a given m. The slope a of the regression line \(y(x)=ax+b\) found in Mathematica with the aid of Normal[LinearModelFit[data]] yields \(a=\tilde{\alpha }(\lambda )\) forming a numerical estimate of \(\alpha (\lambda )\).
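A minimal counterpart of this regression step (the paper relies on Mathematica's LinearModelFit; numpy.polyfit is used below instead, with ad hoc names) reads:

```python
import numpy as np

def estimate_alpha(ms, errors):
    """Slope of the least-squares line fitted to (log m, -log E_m), i.e. tilde alpha(lambda).

    ms:     the sample counts m_min,...,m_max
    errors: the corresponding length errors E_m = |d(gamma) - d(hat gamma^L)|
    """
    slope, _intercept = np.polyfit(np.log(ms), -np.log(errors), 1)
    return slope
```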

Example 2

Consider a 2D spiral \(\gamma _{sp}:[0,1]\rightarrow \mathbb {E}^2\) (a regular curve with \(\gamma _{sp}(0)=(-0.2,0)\) and \(\gamma _{sp}(1)=(1.2,0)\)):

$$\begin{aligned} \gamma _{sp}(t)= \big ((t + 0.2)\cos (\pi (1 - t)), (t + 0.2)\sin (\pi (1 - t))\big ), \end{aligned}$$
(31)

and the so-called 3D Steinmetz curve \(\gamma _S:[0,1]\rightarrow \mathbb {E}^3\) (a regular closed curve with \(\gamma _S(0)=\gamma _S(1)=(1,0,1.2)\) - see a dotted gray point in Fig. 4):

$$\begin{aligned} \gamma _S(t)= \left( \cos (2\pi t), \sin (2\pi t), \sqrt{1.2^2 - 1.0^2\sin ^2(2\pi t)}\right) . \end{aligned}$$
(32)

Both curves \(\gamma _{sp}\), \(\gamma _S\) (from (31) and (32)) sampled according to either (27) or (29) are plotted in Fig. 3 and Fig. 4, respectively. The numerical results assessing the estimate \(\tilde{\alpha }(\lambda )\) of \(\alpha (\lambda )\) (for \(d(\gamma )-d(\hat{\gamma }^L)=O(\delta _m^{\alpha (\lambda )})\)) are presented in Table 3. Recall that here a linear regression to compute \(\tilde{\alpha }(\lambda )\) is applied to the collections of points \(\{(\log (m),-\log (E_m))\}_{m_{min}=120}^{m_{max}=201}\), with \(E_m=|d(\gamma )-d(\hat{\gamma }^L)|\) and for various \(\lambda \in \{0.3,0.7,0.9\}\). The results from Table 3 suggest that for all \(\lambda \in \{0.3,0.7,0.9\}\) rendering \(\dot{\psi }^L>0\) (e.g. the latter is guaranteed if Theorem 2.1 holds) one may expect \(\lim _{m\rightarrow \infty }E_m=0\) with the quadratic convergence rate \(\alpha (\lambda )=2\approx \tilde{\alpha }(\lambda )\). \(\quad \square \)
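For illustration, a self-contained Python sketch of this experiment for the spiral (31) sampled according to (29) is given below (numpy only; all names are ad hoc, the integrals are plain trapezoidal sums and \(\lambda =0.9\) is picked arbitrarily, so the printed slope is merely a rough counterpart of the \(\tilde{\alpha }(\lambda )\) values reported in Table 3):

```python
import numpy as np

def trapz(y, x):                                    # simple trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def sampling_29(m):                                 # family (29), endpoints pinned
    i = np.arange(m + 1)
    t = i / m + (-1.0) ** (i + 1) / (3.0 * m)
    t[0], t[-1] = 0.0, 1.0
    return t

def gamma_sp(t):                                    # spiral (31)
    return np.stack([(t + 0.2) * np.cos(np.pi * (1.0 - t)),
                     (t + 0.2) * np.sin(np.pi * (1.0 - t))], axis=-1)

def length_error(m, lam):
    t = sampling_29(m)
    q = gamma_sp(t)
    that = np.zeros(m + 1)                          # exponential parameterization (3)
    that[1:] = np.cumsum(np.linalg.norm(np.diff(q, axis=0), axis=1) ** lam)
    d_hat = 0.0
    for i in range(0, m, 3):                        # one Lagrange cubic per knot quadruple
        ts, qs = that[i:i + 4], q[i:i + 4]
        ders = [np.polyder(np.polyfit(ts, qs[:, c], 3)) for c in range(2)]
        s = np.linspace(ts[0], ts[3], 2001)
        d_hat += trapz(np.sqrt(sum(np.polyval(d, s) ** 2 for d in ders)), s)
    u = np.linspace(0.0, 1.0, 200001)               # |gamma_sp'(t)| = sqrt(1 + pi^2 (t+0.2)^2)
    return abs(trapz(np.sqrt(1.0 + (np.pi * (u + 0.2)) ** 2), u) - d_hat)

lam = 0.9
ms = np.arange(120, 202, 3)
errors = [length_error(int(m), lam) for m in ms]
print(np.polyfit(np.log(ms), -np.log(errors), 1)[0])   # numerical estimate of alpha(lambda)
```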

Fig. 2.

Condition (7) enforced asymptotically by (7)\(^{*}\) visualized in 3D plots as two solids \(D_{\beta _0}^{\lambda }{\subset }[\beta _0,1]^3\), for \(\lambda =0.3\) or \(\lambda =0.9\), respectively. Here \(\beta _0=0.16\) with dotted points representing samplings: a) (27) mapped into (28) or b) (29) mapped into (30) both for \(\lambda =0.3\) and samplings: c) (27) mapped into (28) or d) (29) mapped into (30) both for \(\lambda =0.9\).

Fig. 3.

A spiral curve \(\gamma _{sp}\) from (31) sampled according to: a) (27) or b) (29), for \(m=15\).

Fig. 4.

A Steinmetz curve \(\gamma _S\) from (32) sampled according to: a) (27) or b) (29), for \(m=15\) (with dotted gray point \(\gamma _S(0)=\gamma _S(1)=(1,0,1.2)\)).

In fact the numerical results from Example 2, combined with (5) and with the argument used to prove \(d(\gamma )-d(\hat{\gamma }^L)=O(\delta _m^4)\) for \(\lambda =1\) (see [7] or [19]), lead one to expect \(\alpha (\lambda )=2\) in \(d(\gamma )-d(\hat{\gamma }^L)=O(\delta _m^{\alpha (\lambda )})\), for all \(\lambda \in [0,1)\) yielding \(\psi ^L\) as a piecewise \(C^{\infty }\) reparameterization. The latter forms an open problem which can be stated as:

Remark 3.2

Assume \(\gamma \in C^4([0,T])\) is a regular curve in \(\mathbb {E}^n\) sampled more-or-less uniformly (see Definition 1.2). For the interpolant \(\hat{\gamma }^L\) and any \(\lambda \in [0,1)\) in (3) yielding each \(\psi ^L_i: I_i\rightarrow \hat{I}_i\) as a \(C^{\infty }\) genuine reparameterization, Example 2 suggests a sharp quadratic convergence rate in:

$$\begin{aligned} d(\gamma )-d(\hat{\gamma }^L_i\circ \psi ^L_i)= O(\delta _m^2). \end{aligned}$$
(33)

In particular, if Theorem 2.1 holds (and \(\beta _0\)-more-or-less uniform samplings are used) the mapping \(\psi ^L\) is asymptotically a reparameterization, which in turn suggests (33). Recall that by sharpness of (33) we understand the existence of at least one regular curve of class \(C^4\) and of at least one sampling from \({\mathcal T}_{\beta _0}\) such that in (33) the convergence rate \(\alpha (\lambda )\) has exactly order 2 (i.e. is not faster than quadratic). \(\quad \square \)

Table 3. The numerical estimates of \(\alpha (\lambda )\approx \tilde{\alpha }(\lambda )\) for a spiral \(\gamma _{sp}\) from (31) and a Steinmetz curve \(\gamma _S\) from (32) computed for \(m_{min}=120\le m \le m_{max}=201\) and \(\lambda \in \{0.3,0.7,0.9\}\). Here \(\mathbf{T}\) stands for true and \(\mathbf{F}\) for false, respectively.

4 Conclusions

Fitting reduced data (see e.g. [3] or [16]) constitutes an important task in computer vision and graphics, engineering, microbiology, physics and other applications like medical image processing (e.g. for area, length and boundary estimation or trajectory planning) - see e.g. [4, 6, 8, 11, 14, 15, 17, 20] or [21].

Two sufficient conditions (6) and (7) are first formulated to ensure that the Lagrange piecewise-cubic \(\psi ^L:[0,T]\rightarrow [0,\hat{T}]\) (introduced in Sect. 1) is a genuine reparameterization. The latter applies to both sparse and dense reduced data \(Q_m\). Here the unknown interpolation knots \({\mathcal T}\) are replaced by \(\hat{\mathcal T}\) which in turn is determined by exponential parameterization (3) controlled by a single parameter \(\lambda \in [0,1]\) and \(Q_m\). The main contribution established in Theorem 2.1 (see Sect. 2) reformulates (6) and (7) into respective asymptotic representatives valid for sufficiently large m (i.e. for \(Q_m\) getting denser). These new transformed conditions (specified in Theorem 2.1) depend exclusively on \(\lambda \in [0,1)\) and \({\mathcal T}\) characterized by (8) within the admitted class of \(\beta _0\)-more-or-less uniform samplings (see Definition 1.2) and apply to any regular curve \(\gamma \in C^3([0,T])\) (with \(0<T<\infty \)). Lastly, in Sect. 3 two illustrative examples are presented. The attached 3D plots generated in Mathematica [22] illustrate the algebraic character of the asymptotic conditions justified in Theorem 2.1 (see Example 1). In addition, the numerical examination of the convergence rate in length estimation of interpolated \(\gamma \) for \(\lambda \in \{0.3,0.7,0.9\}\) is performed. Consequently, based on the latter the conjecture suggesting the quadratic convergence rate for \(d(\gamma )-d(\hat{\gamma }^L)=O(\delta _m^2)\) is posed (see Example 2 and Remark 3.2), subject to the injectivity of \(\psi ^L\). At this point we remark that all asymptotic formulas from Theorem 2.1 are extendable to the corresponding inequalities expressed in \((x,y)\)-variables. This can be achieved by converting first (with the aid of a special homogeneous mapping) each triple \((M_{im},N_{im},P_{im})\) from (8) into a pair \((x(M_{im},N_{im},P_{im}),y(M_{im},N_{im},P_{im}))\) and then by reformulating all conditions from Theorem 2.1 accordingly in terms of \((x,y)\). The satisfaction of such new conditions enforces (9), (10) and (11) or (12) and (13) asymptotically (and thus (6) or (7)). Reducing the illustrations from 3D to more appealing 2D analogues is a clear advantage. We omit here the theoretical discussion and the geometrical insight of this 2D extension of Theorem 2.1. Similarly, recall that only items (i) and (iv) (see Sect. 2) are given a full proof here. In contrast, the final steps of proving (ii), (iii) and (v) are left out as treated later exhaustively in a journal version of this work (together with the above-mentioned 2D extension of Theorem 2.1).

Future work may include various interpolation schemes \(\hat{\gamma }\) or \(\phi \) based on \(Q_m\) combined with either (3) or with other knot estimates \(\hat{\mathcal T}\) compensating for the unknown knots \({\mathcal T}\) (see e.g. [3, 10, 13] or [16]). Searching for alternative sufficient conditions enforcing \(\psi _i^L\) to be injective forms an interesting topic. Lastly, the theoretical justification of (33) poses another open problem.