
On Time-Optimal Trajectories in Non-Uniform Mediums

Journal of Optimization Theory and Applications

Abstract

This paper addresses the problem of finding time-optimal trajectories in two-dimensional space, where the mover’s speed monotonically decreases or increases in one of the space’s coordinates. We address such problems in different settings for the velocity function and in the presence of obstacles. First, we consider the problem without any obstacles. We show that the problem with linear speed decrease is reducible to de l’Hôpital’s problem, and that, for this case, the time-optimal trajectory is a circular segment. Next, we show that the problem with linear speed decrease and rectilinear obstacles can be solved in polynomial time by a dynamic program. Finally, we consider the case without obstacles, where the medium is non-uniform and the mover’s velocity is a piecewise linear function. We reduce this problem to that of solving a system of polynomial equations of fixed degree, for which algebraic elimination theory allows us to solve the problem to optimality.




References

  1. Agarwal, P.K., Har-Peled, S., Sharir, M., Varadarajan, K.R.: Approximating shortest paths on a convex polytope in three dimensions. J. Assoc. Comput. Mach. 44, 567–584 (1997)


  2. Agarwal, P.K., Raghavan, P., Tamaki, H.: Motion planning for a steering-constrained robot through moderate obstacles. In: 27th Annual ACM Symposium on Theory of Computing, pp. 343–352. ACM, New York, NY, USA (1995)

  3. Daescu, O., Mitchell, J.S.B., Ntafos, S.C., Palmer, J.D., Yap, C.K.: k-Link shortest paths in weighted subdivisions. In: Dehne, F.K.H.A., López-Ortiz, A., Sack, J. (eds.) Algorithms and Data Structures, 9th International Workshop, WADS 2005. Lecture Notes in Computer Science, vol. 3608, pp. 325–337. Springer, Berlin, Heidelberg (2005)

  4. Daescu, O., Mitchell, J.S.B., Ntafos, S., Palmer, J.D., Yap, C.K.: Approximating minimum-cost polygonal paths of bounded number of links in weighted subdivisions. In: 22nd Annual Symposium on Computational Geometry, pp. 483–484. ACM, New York, NY, USA (2006)

  5. Gelfand, I.M., Fomin, S.V.: Calculus of Variations. Revised English edition, translated and edited by R.A. Silverman. Prentice-Hall, Englewood Cliffs, NJ (1963)

  6. Basic Helicopter Handbook. Advisory Circular AC 61-13A. US Department of Transportation, Federal Aviation Administration (1973)

  7. Chitsaz, H., LaValle, S.: Minimum wheel-rotation paths for differential drive mobile robots among piecewise smooth obstacles. In: IEEE International Conference on Robotics and Automation, ICRA 2007, pp. 2718–2723. IEEE (2007)

  8. Chitsaz, H., LaValle, S., Balkcom, D., Mason, M.: Minimum wheel-rotation paths for differential-drive mobile robots. Int. J. Robot. Res. 28(1), 66–80 (2009)


  9. Chitsaz, H., LaValle, S.M., Balkcom, D.J., Mason, M.T.: An explicit characterization of minimum wheel-rotation paths for differential-drives. In: IEEE International Conference on Methods and Models in Automation and Robotics. IEEE (2006)

  10. Bertsekas, D.: Dynamic Programming and Optimal Control. Athena Scientific, Nashua (2005)


  11. Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Wiley, New York (1962)


  12. Betts, J.: Practical Methods for Optimal Control and Estimation Using Nonlinear Programming. SIAM Press, Philadelphia (2010)


  13. Betts, J.: Survey of numerical methods for trajectory optimization. J. Guid. Control Dyn. 21, 193–207 (1998)


  14. Mitchell, J., Papadimitriou, C.: Planning shortest paths. In: SIAM Conference on Geometric Modeling and Robotics, pp. 1–21. SIAM, New York (1985)

  15. Canny, J., Reif, J.: New lower bound techniques for robot motion planning problems. In: 28th Annual Symposium on Foundations of Computer Science, pp. 49–60. IEEE Computer Society Press (1987)

  16. Rockafellar, R.: Convex Analysis. Princeton University Press, Princeton (1970)


  17. Lang, S.: Introduction to Differentiable Manifolds, Universitext. Springer, New York (2002)


  18. Kline, M.: Calculus: An Intuitive and Physical Approach. Dover Publications Inc, New York (1998)


  19. Papadimitriou, C.H., Steiglitz, K.: Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall Inc, Upper Saddle River (1982)


  20. Cox, D., Little, J., O’Shea, D.: Ideals, Varieties and Algorithms. Springer Verlag, New York (1992)


  21. Press, W., Teukolsky, S., Vetterling, W., Flannery, B.: Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge (1992)


  22. Manocha, D.: Solving systems of polynomial equations. IEEE Comput. Graph. Appl. 14(2), 46–55 (1994)


  23. Wilkinson, J.: The evaluation of the zeros of ill-conditioned polynomials. Part I. Numer. Math. 1(1), 150–166 (1959)


  24. Wilkinson, J.: The evaluation of the zeros of ill-conditioned polynomials. Part II. Numer. Math. 1(1), 167–180 (1959)


  25. Spivak, M.: A Comprehensive Introduction to Differential Geometry. Publish or Perish Inc, Houston (1999)


  26. Berger, A., Grigoriev, A., Usotskaya, N.: The time-optimal helicopter trajectory is a circle segment. In: Proceedings of the 26th European Workshop on Computational Geometry, EuroCG 2010, pp. 89–92 (2010)


Acknowledgments

We would like to thank Erik Balder for his very useful comments and links. Part of this work has been presented at the 26th European Workshop on Computational Geometry [26].

Author information


Corresponding author

Correspondence to André Berger.

Additional information

Communicated by Jason L. Speyer.

Appendices

Appendix 1: Proof of Theorem 2.1

To prove Theorem 2.1 rigorously, we proceed in a number of steps.

Step (a).  Any strong minimum of \(J[y]\) is also a weak minimum of \(J[y]\), because \({\mathcal D}_1(a,b) \subseteq {\mathcal C}(a,b)\) and closeness in the norm \(\Vert \cdot \Vert _1\) implies closeness in the norm \(\Vert \cdot \Vert _0\). (Cf. Sect. 3 of [5].)

Step (b).  We first assume that \(v(y)\) is of class \(C^3\) everywhere, and not piecewise \(C^3\). (We relax this assumption in Step (d).) This assumption implies that the integrand \(F\) of the time-optimal trajectory problem has continuous partial derivatives up to order three with respect to all its arguments, which is needed for the following results to apply, cf. Sect. 25 of [5]. According to the Theorem in Sect. 28 of [5], it then holds that a function \(y=y(x)\) is a weak minimum of \(J[y]\) if it satisfies the following three sufficient conditions: (i) it is a solution of Euler’s equation, (ii) it satisfies the strengthened Legendre condition, (iii) it satisfies the strengthened Jacobi condition.

Condition (i) is also a necessary condition for a weak minimum, see Theorem 1 in Sect. 4 of [5]. Thus, a weak minimum \(y=y(x)\) needs to satisfy

$$\begin{aligned} F_y - \frac{\mathrm{d}}{\mathrm{d}x} F_{y^{\prime }} = 0 \end{aligned}$$

with the boundary conditions \(y(a)=A\) and \(y(b)=B\). [In this notation, the subscripts denote partial derivatives, as is common practice in the calculus of variations, and the quantities \(x\), \(y(x)\), and \(y^{\prime }(x)\) are substituted into the function arguments, as for instance: \(F_{y^{\prime }} = \frac{\partial F}{\partial z}(x,y(x),y^{\prime }(x))\). The total derivative \(\frac{\mathrm{d}}{\mathrm{d}x}\) indicates that the expression into which \(x\), \(y(x)\), and \(y^{\prime }(x)\) are entered is differentiated with respect to \(x\).] For the integrand \(F\) of the time-optimal trajectory problem, Euler’s equation gives a second-order differential equation (see also Case 4 in Sect. 4 of [5]):

$$\begin{aligned} y^{\prime \prime }(x) + \frac{v^{\prime }(y(x))}{v(y(x))} (1+y^{\prime }(x)^2) = 0. \end{aligned}$$
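As a numerical sanity check (not part of the original argument), one can integrate this equation for an illustrative linear profile \(v(y)=v_0-ky\) and verify that the signed curvature stays constant along the solution, i.e., that the extremal is a circular arc; all parameter values below are our own choices.

```python
import math

# Illustrative linear velocity profile v(y) = v0 - k*y (an assumption for
# this check); then v'(y) = -k.
v0, k = 10.0, 1.0
v  = lambda y: v0 - k * y
dv = lambda y: -k

def rhs(y, yp):
    # Euler's equation: y'' = -(v'(y)/v(y)) * (1 + y'^2)
    return -dv(y) / v(y) * (1.0 + yp * yp)

def integrate(y, yp, h, n):
    # RK4 integration of the second-order ODE as a first-order system in (y, y').
    pts = [(y, yp)]
    for _ in range(n):
        k1y, k1p = yp, rhs(y, yp)
        k2y, k2p = yp + h/2*k1p, rhs(y + h/2*k1y, yp + h/2*k1p)
        k3y, k3p = yp + h/2*k2p, rhs(y + h/2*k2y, yp + h/2*k2p)
        k4y, k4p = yp + h*k3p,  rhs(y + h*k3y,  yp + h*k3p)
        y  += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        yp += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        pts.append((y, yp))
    return pts

pts = integrate(0.0, 0.5, 1e-3, 2000)   # y(0) = 0, y'(0) = 0.5, x in [0, 2]
# Signed curvature kappa = y'' / (1 + y'^2)^{3/2}; for linear v it should be
# constant along the solution (a circular arc).
kappas = [rhs(y, yp) / (1 + yp*yp)**1.5 for y, yp in pts]
spread = max(kappas) - min(kappas)
print(spread < 1e-6)
```

For a non-linear profile the curvature \(-v^{\prime }(y)C\) varies with altitude, and the same check would exhibit a non-constant \(\kappa \).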

The strengthened Legendre condition (ii) requires that at all points along \(y=y(x)\) (satisfying Euler’s equation) it should hold that

$$\begin{aligned} P(x) = \frac{1}{2} F_{y^{\prime }y^{\prime }}\big [x,y(x),y^{\prime }(x)\big ] > 0. \end{aligned}$$

For the time-optimal trajectory problem it is obtained that

$$\begin{aligned} P(x) = \frac{1}{2 v(y(x))(1+y^{\prime }(x)^2)^{3/2}} \end{aligned}$$

which obviously is strictly positive for any differentiable function \(y(x)\); hence this condition holds a fortiori.

The strengthened Jacobi condition (iii) requires that there are no points conjugate to the point \(a\) in the closed interval \([a,b]\) for \(y=y(x)\). According to Theorem 3 in Sect. 26 of [5], under the condition (ii) (which has just been verified), this is equivalent to positive definiteness of the following quadratic functional (i.e., the second variation of the functional \(J[y]\)):

$$\begin{aligned} \delta ^2 J[h] = \int _a^b (P(x) h^{\prime }(x)^2 + Q(x) h(x)^2) \mathrm{d}x \end{aligned}$$

for all admissible \(h(x)\) in \({\mathcal D}_1(a,b)\) with \(h(a)=h(b)=0\), where \(P(x)\) is as given above and \(Q(x)\) (evaluated along \(y=y(x)\) which satisfies Euler’s equation) is given by:

$$\begin{aligned} Q(x) = \frac{1}{2} \left( F_{yy}[x,y(x),y^{\prime }(x)] - \frac{\mathrm{d}}{\mathrm{d}x} F_{yy^{\prime }}[x,y(x),y^{\prime }(x)] \right) . \end{aligned}$$

For the time-optimal trajectory problem, suppressing the argument \(x\) for readability, the quantity \(Q\) is computed as:

$$\begin{aligned} Q = \frac{1}{2} \left( \frac{v^{\prime }(y)^2 - v(y) v^{\prime \prime }(y)}{v(y)^2 \sqrt{1+y^{\prime 2}}} + \frac{v^{\prime }(y) y^{\prime \prime }}{v(y)(1+y^{\prime 2})^{3/2}} \right) . \end{aligned}$$

We substitute \(y^{\prime \prime } = - \frac{v^{\prime }(y)}{v(y)} (1+y^{\prime 2})\) which holds for any solution \(y\) of Euler’s equation. This gives:

$$\begin{aligned} Q = \frac{-v^{\prime \prime }(y)}{2 v(y) \sqrt{1+y^{\prime 2}}}. \end{aligned}$$

Because of concavity of the velocity function, the numerator expression \(-v^{\prime \prime }(y)\) is non-negative for all \(y<y_{\max }\), hence \(Q(x) \ge 0\). Because \(P(x)>0\) and \(Q(x) \ge 0\), it follows that \(\delta ^2 J[h]\) is zero if and only if \(h^{\prime }(x) \equiv 0\). Together with \(h(a)=h(b)=0\) this implies that \(h(x) \equiv 0\). Hence \(\delta ^2 J[h]\) is positive definite, which verifies condition (iii).
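The collapse of the long expression for \(Q\) to \(-v^{\prime \prime }(y)/(2v(y)\sqrt{1+y^{\prime 2}})\) after substituting Euler’s equation can be spot-checked numerically; the concave, strictly decreasing profile \(v(y)=\sqrt{100-y}\) and the sample point below are illustrative choices of ours, not from the paper.

```python
import math

# Illustrative concave, decreasing profile and its derivatives.
v   = lambda y: math.sqrt(100.0 - y)
dv  = lambda y: -0.5 * (100.0 - y) ** -0.5
ddv = lambda y: -0.25 * (100.0 - y) ** -1.5

y, yp = 19.0, 0.7
ypp = -dv(y) / v(y) * (1 + yp * yp)   # y'' from Euler's equation

# Long expression for Q before substitution of Euler's equation:
long_Q = 0.5 * ((dv(y)**2 - v(y) * ddv(y)) / (v(y)**2 * math.sqrt(1 + yp*yp))
                + dv(y) * ypp / (v(y) * (1 + yp*yp)**1.5))
# Collapsed expression after the substitution:
short_Q = -ddv(y) / (2 * v(y) * math.sqrt(1 + yp*yp))
print(abs(long_Q - short_Q) < 1e-12)
```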

To sum up, it holds that a function \(y=y(x)\) is a weak minimum of \(J[y]\) if and only if it satisfies Euler’s equation ().

Step (c).  According to Theorem 1 in Sect. 34 of [5], it holds that, if \(y=y(x)\) satisfies Euler’s equation, then a sufficient condition for being a strong minimum of \(J[y]\) is that there is a field \(y^{\prime }=\psi (x,y)\) of this functional embedding \(y=y(x)\) and which is defined on an open region containing \(y=y(x)\), such that for all \((x,y)\) in this region and for all \(w \in {\mathbb R}\), the Weierstrass E-function satisfies \(E(x,y,\psi (x,y),w) \ge 0\). Here, the existence of such a field \(\psi (x,y)\) which embeds \(y=y(x)\), is guaranteed by the sufficient conditions (ii) and (iii) for a weak minimum under (b) above, see Sect. 34 of [5]. The Weierstrass E-function is defined by

$$\begin{aligned} E(x,y,z,w) := F(x,y,w) - F(x,y,z) - (w-z)F_{y^{\prime }}(x,y,z), \end{aligned}$$

which gives:

$$\begin{aligned} E(x,y,z,w) = \frac{\sqrt{1+w^2}\sqrt{1+z^2} - (1+wz)}{v(y)\sqrt{1+z^2}}. \end{aligned}$$

Upon squaring each term of the numerator in the above expression, it can be seen that \(E(x,y,z,w)\) is non-negative for all \(x, y, z, w\), and therefore, this holds a fortiori also for \(z=\psi (x,y)\) with \((x,y)\) in the associated open region. This verifies the sufficient condition, which—in view of step (b)—proves that every weak minimum of \(J[y]\) is also a strong minimum, while—according to step (a)—also the converse holds. Summarizing, we have shown that for the time-optimal trajectory problem with a velocity function of class \(C^3\), a function \(y=y(x)\) is a strong minimum of \(J[y]\) if and only if it is a weak minimum of \(J[y]\) if and only if it is a solution of Euler’s equation (satisfying the boundary conditions).

Step (d).  We now admit velocity functions which are piecewise \(C^3\) rather than \(C^3\). Clearly, on each of the intervals for \(y\) on which \(v(y)\) is \(C^3\), the above considerations apply. This partitions the half-plane into horizontal layers, and the question arises how two time-optimal trajectories in adjacent layers of the half-plane can be pieced together at an altitude, say \(y=y_d\), where \(v(y)\) is not three times continuously differentiable, in such a way that the joined curve becomes time-optimal across layers. To this end, one may analyze what happens to the first variation \(\delta J\) when the junction point is horizontally varied at the altitude \(y_d\). This can be done entirely analogously to the analysis in Sect. 15 of [5] on broken extremals, which leads to the Weierstrass–Erdmann conditions. These conditions state that at such a junction point it must hold for optimality that the so-called canonical variables are all continuous. These canonical variables are \(x\), \(y\), \(p\), and \(H\), where the momentum \(p\) and the Hamiltonian \(H\) are defined as:

$$\begin{aligned} p := F_{y^{\prime }}, \quad H := - F + y^{\prime } F_{y^{\prime }}. \end{aligned}$$

Because \(F\) does not depend explicitly on \(x\), it holds that along any solution \(y=y(x)\) of Euler’s equation

$$\begin{aligned} \frac{\mathrm{d}H}{\mathrm{d}x}=0, \end{aligned}$$

so that \(H\) remains constant along time-optimal trajectories within each layer, and moreover, according to the Weierstrass–Erdmann conditions, also across layers. Within each layer, Euler’s equation \(F_y - \frac{\mathrm{d}}{\mathrm{d}x} F_{y^{\prime }} = 0\) can be integrated to give a family of first-order differential equations (parameterized by an integration constant) because \(F_x \equiv 0\). See also Case 2 in Sect. 4 of [5] where it is shown that:

$$\begin{aligned} F - y^{\prime } F_{y^{\prime }} = C, \end{aligned}$$

where \(C\) is a constant. This equation is also known as the Beltrami identity. Upon varying \(C\), this equation generates precisely the same solutions as Euler’s equation (as is confirmed by differentiation with respect to \(x\)), except that it additionally introduces the spurious constant solutions \(y(x) \equiv \gamma \) with \(F(x,\gamma ,0)=C\). For the helicopter problem, this equation takes the form:

$$\begin{aligned} \frac{1}{v(y(x))\sqrt{1+y^{\prime }(x)^2}} = C, \end{aligned}$$

of which the constant solutions \(y(x) \equiv v^{-1}(\frac{1}{C})\) do not satisfy Euler’s equation and must be rejected.

The Weierstrass–Erdmann conditions make clear that this differential equation continues to apply across layers with an unchanged constant \(C\) (which equals the value of \(-H\) along the corresponding solution \(y=y(x)\)). For the momentum \(p\) we have the expression

$$\begin{aligned} p = \frac{y^{\prime }}{v(y)\sqrt{1+y^{\prime 2}}}, \end{aligned}$$

showing that the Weierstrass–Erdmann conditions also imply that \(y^{\prime }\) remains continuous along time-optimal curves. From Euler’s Eq. () it then is apparent that \(y^{\prime \prime }\) exists and is continuous within layers, but not across layers at junction points where \(v^{\prime }(y_d)\) does not exist. Therefore, the time-optimal functions \(y(x)\) are in general only piecewise \(C^2\). For the signed curvature \(\kappa (x)\) along a solution of Euler’s equation at a point \((x,y(x))\) we have:

$$\begin{aligned} \kappa (x) = \frac{y^{\prime \prime }(x)}{(1+y^{\prime }(x)^2)^{3/2}} = -\frac{v^{\prime }(y(x))}{v(y(x))\sqrt{1+y^{\prime }(x)^2}} = -v^{\prime }(y(x)) C, \end{aligned}$$

where the last step employs the Beltrami identity (). To sum up, so far we have proved part (3) of Theorem 2.1, and we have shown the equivalence of parts (1) and (2). We also have proved that the roles played by Euler’s equation and the Beltrami identity regarding weak and strong solutions of the helicopter problem are as claimed. It remains to show that between any two points there always exists a time-optimal trajectory (i.e., a non-constant solution of either of the differential equations connecting them) and that it is unique.

Step (e).  To address this issue, we focus attention on the set of non-constant solutions of the Beltrami identity which pass through a given point \((0,A)\), for varying values of the constant \(C\). Note that this causes no loss of generality, because \(F_x \equiv 0\) and horizontal translations \(x \mapsto x-a\) leave the problem invariant for all \(a \in {\mathbb R}\).

At the point \((0,A)\) the derivative \(T=y^{\prime }(0)\) relates to the constant \(C\) by

$$\begin{aligned} \frac{1}{v(A)\sqrt{1+T^2}} = C, \quad T = \pm \frac{\sqrt{1-C^2 v(A)^2}}{C v(A)}. \end{aligned}$$

If \(T \ne 0\), we have according to the Inverse Mapping Theorem (see Theorem 5.2 in Chapter I, Sect. 5 of [17]) that the \(C^1\) function \(y=y(x)\) can locally be represented as a \(C^1\) function \(x=x(y)\). To obtain the differential equation satisfied by \(x(y)\) we can rewrite the Beltrami identity in terms of \(x^{\prime } = \frac{\mathrm{d}x}{\mathrm{d}y}\) instead of \(y^{\prime } = \frac{\mathrm{d}y}{\mathrm{d}x}\). Alternatively, we can redo the computations starting from the rewritten functional, now given by \(\tilde{J}[x] = \int _A^B G(y,x,x^{\prime }) \mathrm{d}y\) with \(G(y,x,x^{\prime }) = \frac{\sqrt{1+x^{\prime 2}}}{v(y)}\), which again does not depend explicitly on \(x\). Euler’s equation then takes the form \(-\frac{\mathrm{d}}{\mathrm{d}y} G_{x^{\prime }} = 0\). It can be worked out to give the second-order differential equation

$$\begin{aligned} x^{\prime \prime }(y) - \frac{v^{\prime }(y)}{v(y)} x^{\prime }(y) (1+x^{\prime }(y)^2) = 0, \end{aligned}$$

with the corresponding initial conditions \(x(A)=0\) and \(x^{\prime }(A)=\frac{1}{T}\). However, its form also allows for direct integration, producing a family of first-order differential equations \(G_{x^{\prime }} = D\) (with \(D\) an integration constant), which constitutes the analog of the Beltrami identity:

$$\begin{aligned} \frac{x^{\prime }(y)}{v(y) \sqrt{1+x^{\prime }(y)^2}} = D. \end{aligned}$$

(Here we have not introduced any spurious solutions.) The constant \(D\) satisfies \(|D|=C\). The sign of \(D\) equals the sign of \(x^{\prime }(y)\), which at \(y=A\) equals the sign of \(1/T\). Hence we can write

$$\begin{aligned} T = \frac{\sqrt{1-D^2 v(A)^2}}{D v(A)}. \end{aligned}$$

The value of \(D\) changes sign when a time-optimal curve passes through a point where \(y^{\prime }(x) = 0\), after which a new local description as \(x=x(y)\) is required. (The value \(D=0\) corresponds to the vertical solution \(x(y) \equiv 0\), previously excluded. The solution \(y=y(x)\) with \(y(0)=A\) and \(T=y^{\prime }(0)=0\) does not admit a representation as \(x=x(y)\) in an open neighborhood of \(y=A\).)

The first-order differential equation () can be rewritten into explicit form as

$$\begin{aligned} x^{\prime }(y) = \frac{D v(y)}{\sqrt{1-D^2 v(y)^2}}. \end{aligned}$$

Integration gives the explicit solution:

$$\begin{aligned} x(y) = \int _A^y \frac{D v(\eta )}{\sqrt{1-D^2 v(\eta )^2}} \mathrm{d}\eta . \end{aligned}$$

This solution extends over the entire interval \((y_D,y_{\max })\), with \(y_D\) defined by

$$\begin{aligned} y_D := \inf \{ y | v(y)<\frac{1}{|D|} \}. \end{aligned}$$

This follows by observing that, for any finite \(y\) in this interval, the range of integration is bounded, \(v(y)\) is bounded (it decreases strictly monotonically, so an upper bound is given by \(v(\hat{y})\) for any selected \(\hat{y} < y\)), and \(\sqrt{1-D^2 v(y)^2}\) is bounded away from zero by definition of \(y_D\) and the strictly decreasing property of \(v(y)\). (Cf. also Cor. 1.8 in Chapter IV, Sect. 1 of [17]. The first-order equation for \(x(y)\) involves \(v(y)\), which is Lipschitz, so that this result applies to arbitrary velocity functions.) Note also that \(\lim _{y \uparrow y_{\max }} x(y)\) exists and remains finite for the same reasons.
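As a sketch (with an illustrative concave profile \(v(y)=\sqrt{100-y}\) and a constant \(D\) of our choosing), one can evaluate this integral by quadrature and confirm by finite differences that the explicit form \(x^{\prime }(y)=Dv(y)/\sqrt{1-D^2v(y)^2}\) indeed satisfies Euler’s equation in the \(x(y)\) form derived above.

```python
import math

# Illustrative concave, strictly decreasing velocity profile (our assumption):
v  = lambda y: math.sqrt(100.0 - y)
dv = lambda y: -0.5 / math.sqrt(100.0 - y)

A, D = 0.0, 0.05   # start altitude and Beltrami-type constant (chosen freely)

def xprime(y):
    # Explicit first-order form of the time-optimal curve x = x(y).
    return D * v(y) / math.sqrt(1.0 - D * D * v(y) ** 2)

def x_of_y(y, n=2000):
    # x(y) via composite Simpson quadrature of the explicit solution formula.
    h = (y - A) / n
    s = xprime(A) + xprime(y)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * xprime(A + i * h)
    return h * s / 3.0

# Finite-difference check of Euler's equation in the x(y) form:
#   x'' - (v'/v) x' (1 + x'^2) = 0
y, eps = 20.0, 1e-4
xpp = (xprime(y + eps) - xprime(y - eps)) / (2 * eps)
residual = xpp - dv(y) / v(y) * xprime(y) * (1 + xprime(y) ** 2)
print(abs(residual) < 1e-6)
```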

If we compare two such solutions \(x_1(y)\) and \(x_2(y)\) obtained for different values \(D=D_1\) and \(D=D_2\), respectively, then the derivative of \(x_1(y)-x_2(y)\) equals

$$\begin{aligned} x_1^{\prime }(y)-x_2^{\prime }(y) = \frac{D_1 v(y)}{\sqrt{1-D_1^2 v(y)^2}} - \frac{D_2 v(y)}{\sqrt{1-D_2^2 v(y)^2}}, \end{aligned}$$

which exists on the interval \((\max \{y_{D_1},y_{D_2}\},y_{\max })\). The function \(\displaystyle f(z) = \frac{z}{\sqrt{1-z^2}}\) is monotonically increasing and gives a bijection between \((-1,1)\) and \({\mathbb R}\). Therefore \(x_1^{\prime }(y)-x_2^{\prime }(y) = 0\) if and only if \(D_1 v(y) = D_2 v(y)\), which does not happen because \(v(y)>0\) and \(D_1 \ne D_2\). This shows that the difference \(x_1(y)-x_2(y)\) is either strictly monotonically increasing or strictly monotonically decreasing. Hence the functions \(x_1(y)\) and \(x_2(y)\) have \(y=A\) as their unique intersection point in the given interval. The value of \(y_D\) can either be finite or equal to \(-\infty \). If \(y_D=-\infty \), then the representation \(x=x(y)\) extends to the entire curve \(y=y(x)\). In this case the function \(y=y(x)\), being convex, has no minimum. This occurs when \(v_{\max } := \sup \{v(y) | y<y_{\max }\}\) is finite and \(|D| \le \frac{1}{v_{\max }}\). If \(y_D\) is finite, then the local representation of the curve extends only to the point \((x_D,y_D)\) at which \(|x^{\prime }(y_D)|=\infty \) (equivalently: \(y^{\prime }(x_D)=0\)) and the representation as a function \(x=x(y)\) ceases to apply. (To continue the curve we may switch to the original representation as a function \(y=y(x)\) and once this point is passed we may return to a new local description of the form \(x=x(y)\), now with \(D\) of opposite sign.)

In order to show that the value \(x_D\) is indeed finite, we introduce a variable transformation \(\zeta = \arcsin (|D| v(\eta ))\), for which we obtain:

$$\begin{aligned} x_D = x(y_D) = \text{ sign }(D) \int _{\arcsin (|D| v(A))}^{\pi /2} \frac{\sin (\zeta )}{|D| v^{\prime }(v^{-1}(\sin (\zeta )/|D|))} \mathrm{d}\zeta . \end{aligned}$$

Then clearly the range of integration is bounded and the numerator of the integrand is bounded. The denominator is bounded away from zero, because \(v^{-1}\) is only applied to values in the range \([v(A),v(y_D)]\), hence \(v^{\prime }\) is applied only to values in the range \([y_D,A]\), on which it is bounded away from zero because it is strictly negative everywhere on \(y<y_{\max }\). (Moreover, any possible discontinuities in \(v^{\prime }(y)\) are harmless, since they can occur only at finitely many points.) Hence the integral has a well-defined finite value.
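For a linear profile \(v(y)=q(y_{\max }-y)\) the transformed integral admits a closed form, \(|x_D| = \sqrt{1-D^2v(A)^2}/(|D|q)\), which gives a convenient check of the variable transformation. The sketch below (parameter values are our own choices) compares a quadrature of the transformed integrand against this closed form.

```python
import math

# Illustrative linear profile v(y) = q*(ym - y); v' is the constant -q.
q, ym, A = 1.0, 10.0, 5.0
v    = lambda y: q * (ym - y)
vinv = lambda w: ym - w / q           # inverse of v
dv   = lambda y: -q
D = -0.15                             # Beltrami-type constant, |D|*v(A) < 1

lo, hi = math.asin(abs(D) * v(A)), math.pi / 2

def integrand(z):
    # Transformed integrand: sin(zeta) / (|D| * |v'(v^{-1}(sin(zeta)/|D|))|).
    return math.sin(z) / (abs(D) * abs(dv(vinv(math.sin(z) / abs(D)))))

# Composite Simpson rule on the regular (singularity-free) transformed integral.
n = 2000
h = (hi - lo) / n
s = integrand(lo) + integrand(hi)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(lo + i * h)
x_D_numeric = h * s / 3.0

x_D_closed = math.sqrt(1.0 - D * D * v(A) ** 2) / (abs(D) * q)
print(abs(x_D_numeric - x_D_closed) < 1e-10)
```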

Note also that, if \(|T|\) increases monotonically, then \(C=|D|\) decreases monotonically, whence \(y_D=v^{-1}(\frac{1}{|D|})\) decreases monotonically. In fact,

$$\begin{aligned} \lim _{|D| \downarrow \frac{1}{v_{\max }}} y_D = -\infty . \end{aligned}$$

We now show that \(|x_D|\) increases monotonically with decreasing \(|D|\) too. Upon writing \(|x_D|\) as:

$$\begin{aligned} |x_D| = \int _{\arcsin (|D| v(A))}^{\pi /2} \frac{\sin (\zeta )/|D|}{-v^{\prime }(v^{-1}(\sin (\zeta )/|D|))} \mathrm{d}\zeta , \end{aligned}$$

we find that, if \(|D|\) decreases, then \(\sin (\zeta )/|D|\) increases (for all \(\zeta \) in the range of integration, using \(\sin (\zeta ) \ge 0\)), so the numerator of the integrand increases; in the denominator, \(v^{-1}(\sin (\zeta )/|D|)\) decreases, so the term \(-v^{\prime }(v^{-1}(\sin (\zeta )/|D|)) = |v^{\prime }(v^{-1}(\sin (\zeta )/|D|))|\) decreases. Thus, the integrand increases. Furthermore, the integrand is strictly positive and the range of integration increases (the lower end point decreases). Consequently \(|x_D|\) increases (strictly) monotonically with decreasing \(|D|\).

If \(|T| \uparrow \frac{\sqrt{v_{\max }^2-v(A)^2}}{v(A)}\), then equivalently \(|D| \downarrow \frac{1}{v_{\max }}\) and \(y_D \rightarrow -\infty \). For \(|x_D|\) we have the following lower bound:

$$\begin{aligned} |x_D| = \int _{y_D}^A \frac{|D| v(\eta )}{\sqrt{1-|D|^2 v(\eta )^2}} \mathrm{d}\eta \ge \int _{y_D}^A \frac{|D| q_D (\tilde{y}_{\max }-\eta )}{\sqrt{1-|D|^2 q_D^2 (\tilde{y}_{\max }-\eta )^2}} \mathrm{d}\eta \end{aligned}$$

with

$$\begin{aligned} q_D = -\frac{v(A)-v(y_D)}{A-y_D}, \quad \tilde{y}_{\max } = A + \frac{v(A)}{q_D}, \end{aligned}$$

because the line segment \(\tilde{v}(y) = q_D (\tilde{y}_{\max }-y)\) connects the two points \((y_D,v(y_D))\) and \((A,v(A))\) and is everywhere below \(v(y)\) due to concavity. Using \(v(y_D) = \frac{1}{|D|}\), the rightmost integral can be computed as:

$$\begin{aligned}&\left[ \frac{\sqrt{1-|D|^2 q_D^2 (\tilde{y}_{\max }-\eta )^2}}{|D| q_D} \right] _{\eta =y_D}^{\eta =A} =\frac{\sqrt{1-|D|^2 q_D^2 (\tilde{y}_{\max }-A)^2}}{|D| q_D} \nonumber \\&= \frac{\sqrt{v(y_D)+v(A)}}{\sqrt{v(y_D)-v(A)}} (A-y_D) \ge A-y_D. \end{aligned}$$

Clearly, for \(y_D \rightarrow -\infty \) it now follows that \(|x_D| \rightarrow \infty \). For \(\frac{1}{v_{\max }} < |D| < \frac{1}{v(A)}\) we have shown that the corresponding solution curve \(x=x(y)\) passing through \((0,A)\) is convex as a function \(y=y(x)\), with a (unique) minimum attained at \((x_D,y_D)\). The value of \(y_D\) equals \(v^{-1}(\frac{1}{|D|}) < A\), and for \(|D| \uparrow \frac{1}{v(A)}\) we have \((|x_D|,y_D) \rightarrow (0,A)\), strictly monotonically. For \(|D| \downarrow \frac{1}{v_{\max }}\) we have \(y_D \rightarrow -\infty \) in a strictly monotonic way. Then we also have \(|x_D| \ge A-y_D\) and also \(|x_D| \rightarrow \infty \) in a strictly monotonic way. Note that \(x_D > 0\) corresponds to \(T<0\) (equivalently \(D<0\)); likewise, \(x_D < 0\) corresponds to \(T>0\) (and \(D>0\)).

Step (f).  Uniqueness of solutions passing through two given points. Now suppose two different solutions of Euler’s equation both pass through the same two distinct points \((a,A)\) and \((b,B)\). Without any loss of generality, we assume that \(a=0\) and \(b>0\). (This employs horizontal translation invariance as before. We will now also employ symmetry with respect to vertical lines: the general solution set of Euler’s equation is invariant with respect to the mapping \(x \mapsto -x\).) Let \(D_1\) and \(D_2\) be the two constants associated with the local representations \(x=x_1(y)\) and \(x=x_2(y)\), respectively, of the solutions near \((0,A)\).

  1. (i)

If \(|D_1|\) and \(|D_2|\) are both \(\le \frac{1}{v_{\max }}\), then the two representations \(x_1(y)\) and \(x_2(y)\) extend for all \(y<y_{\max }\) and are strictly monotonic. If \(D_1\) and \(D_2\) have opposite signs, clearly \(x_1(y)\) and \(x_2(y)\) have opposite signs everywhere except at the intersection point \((0,A)\). If \(D_1\) and \(D_2\) have the same sign, the difference \(|x_1(y)-x_2(y)|\) grows monotonically, so \(x_1(y)\) and \(x_2(y)\) have only one point in common, which gives a contradiction.

  2. (ii)

    If \(|D_1| \le \frac{1}{v_{\max }}\) and \(|D_2| \in (\frac{1}{v_{\max }},\frac{1}{v(A)})\), it follows that the curve \(x_2(y)\) has a minimum. If \(D_1\) and \(D_2\) have the same sign, the curves \(x_1(y)\) and \(x_2(y)\) do not meet on the intersection of their original ranges of definition, while the continuation of the curve \(x_2(y)\) after the minimum point has the opposite sign compared to \(x_1(y)\), so they still do not intersect. If \(D_1\) and \(D_2\) have opposite sign, then the curves could potentially meet in a second point \((b,B)\) with \(B \ge A\). Then both curves are increasing at \((b,B)\). Mirroring this situation in the vertical line \(x=b/2\), mapping the point \((b,B)\) to \((0,B)\) and mapping the two solution curves to two other solution curves of Euler’s equation, shows the two new curves both to be decreasing at \((0,B)\), but also that the curve with a minimum initially is below the curve without a minimum. This contradicts our observations above.

  3. (iii)

If \(|D_1|, |D_2| \in (\frac{1}{v_{\max }},\frac{1}{v(A)})\), then both curves have a minimum. If \(D_1\) and \(D_2\) have opposite signs, then a contradiction occurs as under (ii). If \(D_1\) and \(D_2\) have the same sign, then observe that \(|D_1|<|D_2|\) is equivalent to \(|x_{D_1}|>|x_{D_2}|\) because \(|x_D|\) has been shown to increase monotonically for monotonically decreasing \(|D|\). If the minima on the two curves are ordered accordingly, then their ordering should at the same time be opposite by mirroring the situation in the vertical line \(x=b/2\). Again, this gives a contradiction.

  4. (iv)

    Finally, the curve \(y=y(x)\) through \((0,A)\) with \(y^{\prime }(0)=T=0\), is handled by observing that it has its minimum at \((0,A)\). It is convex, and therefore, this minimum is unique, so it has no minimum at \((b,B)\). If the other curve has no minimum at \((b,B)\) either, the two points \((0,A)\) and \((b,B)\) could switch roles and the situation is handled by one of the cases above. If the other curve has a minimum at \((b,B)\), then with \(A \le B\), it cannot pass through the point \((0,A)\) (or the other way around if \(B \le A\)).

This shows that no two solutions of Euler’s equation with \(v(y)\) piecewise of class \(C^3\) can pass through the same two distinct points, i.e., time-optimal paths in this setting are unique.

Step (g).  Existence of solutions passing through two given points. Without loss of generality we assume that \(a=0\), \(b>0\), and \(A \le B\).

  1. (i)

    If \(A=B\) then we can solve \(x_D = b/2\) for \(D\). Note that this yields a unique value \(D=\widehat{D}_1<0\) since \(x_D\) is a monotonic function of \(D\) as discussed above. Then by symmetry with respect to the vertical line \(x=x_{\widehat{D}_1}\) we see that the solution curve through \((0,A)\) and \((x_{\widehat{D}_1},y_{\widehat{D}_1})\) also passes through \((b,A)\).

  2. (ii)

    If \(B>A\), note that the previous solution curve now passes strictly below the point \((b,B)\). Then we compute the solution curve which has \(|D|=\frac{1}{v(A)}\), i.e., which has its minimum at \((0,A)\). Now:

  3. (ii–a)

    If \((b,B)\) is located on this curve, then we are done.

  4. (ii–b)

    If \((b,B)\) is located to the right of this curve, then we vary the parameter \(D\) continuously in the range \((\widehat{D}_1,-\frac{1}{v(A)})\). It then holds that the solution curves \(y=y(x,D)\) can all be extended to \(y \uparrow y_{\max }\) and they depend continuously on the parameter \(D\). (See Chapter IV, Sect. 1 of [17], see also Theorem 4 in Vol. 1, Chapter 5 of [25] for this property to hold locally; here we use also the fact that the solution curves can all be extended to \(y \uparrow y_{\max }\).) Hence a value for \(D\) exists for which the solution curve passes through \((b,B)\).

  5. (ii–c)

    If \((b,B)\) is located above this curve, then we can determine a value \(\widehat{D}_2>0\) for which it holds that \(\lim _{y \uparrow y_{\max }} x_{\widehat{D}_2}(y) = b\). This follows from the observation that

$$\begin{aligned} x(y_{\max },D) := \int _A^{y_{\max }} \frac{D v(\eta )}{\sqrt{1-D^2 v(\eta )^2}} \mathrm{d}\eta , \end{aligned}$$

has a well-defined finite value, because the integrand is positive, the numerator is bounded, and the denominator is bounded away from zero, while the range of integration is bounded. Moreover, the integrand monotonically increases with positive \(D\), and therefore \(x(y_{\max },D)\) monotonically increases as well. Now we vary the parameter \(D\) continuously in the range \((\widehat{D}_2,\frac{1}{v(A)})\). By continuous dependence of the solution curves on the parameter \(D\), the result again follows.

This completes the proof of Theorem 2.1.

\(\square \)
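The existence argument in Step (g) is effectively a shooting method: \(x(y,D)\) increases monotonically in \(D\), so the parameter of the curve through \((b,B)\) can be located by bisection. Below is a minimal numerical sketch of this idea, assuming an illustrative linear speed profile \(v(y)=2-y\) (so \(y_{\max }=2\)) and hypothetical boundary data \(A\), \(B\), \(b\); none of these values come from the paper.

```python
import math

def x_of(y_end, D, v, A, n=2000):
    # x(y_end, D) = \int_A^{y_end} D v(eta) / sqrt(1 - D^2 v(eta)^2) d eta,
    # approximated by the midpoint rule.
    h = (y_end - A) / n
    total = 0.0
    for i in range(n):
        eta = A + (i + 0.5) * h
        s = D * v(eta)
        total += s / math.sqrt(1.0 - s * s)
    return total * h

v = lambda y: 2.0 - y      # illustrative linear profile; v(y_max) = 0 at y_max = 2
A, B, b = 0.0, 1.5, 1.0    # start height, target height, target abscissa (assumed)

# x(B, D) grows monotonically with D on (0, 1/v(A)), so bisect for x(B, D) = b.
lo, hi = 0.0, 1.0 / v(A) - 1e-9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if x_of(B, mid, v, A) < b:
        lo = mid
    else:
        hi = mid
D = 0.5 * (lo + hi)
print(D)  # parameter of the solution curve through (0, A) and (b, B)
```

Monotonicity of the integrand in \(D\) guarantees that the bisection converges to a unique parameter, mirroring the uniqueness established in the proof.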

Remark 7.1 The exposition in the proof above also makes clear why there are no vertical segments in time-optimal paths with \(a < b\), as indicated after the proof of Proposition 2.1. The reason is that time-optimality requires any vertical segment to join with a non-vertical segment while satisfying the Weierstrass–Erdmann conditions. These conditions require \(x^{\prime }(y)=0\). At the same time, we have local uniqueness of solutions of a second-order equation satisfying two initial conditions, provided that a local Lipschitz condition is satisfied. For Euler’s equation in terms of \(x(y)\), see Eqn. (), this is easily seen to hold because \(v(y)\) is Lipschitz and no unboundedness of derivatives occurs. Therefore, vertical segments can only be extended time-optimally by vertical continuation.

Appendix 2: Non-Convexity of the Time Function

In this appendix, we show that the time function \(t(t_1, \ldots , t_n)\) is not necessarily convex, even in the case \(n=1\), i.e., even if the velocity function has a single breakpoint.

Consider the following function:

$$\begin{aligned} t(t_1)=\frac{1}{q_1}\int _0^{\gamma _1(t_1)} \frac{\mathrm{d}\alpha }{\sin (\alpha +\beta _1(t_1))}+\frac{1}{q_2}\int _0^{\gamma _2(t_1)} \frac{\mathrm{d}\alpha }{\sin (\alpha +\beta _2(t_1))}. \end{aligned}$$

The first derivative is equal to (see Lemma 4.1):

$$\begin{aligned} \frac{\mathrm{d}t(t_1)}{\mathrm{d}t_1}=\frac{1}{q_2R_2(t_1)}-\frac{1}{q_1R_1(t_1)}, \end{aligned}$$

hence, the second derivative is calculated in the following way:

$$\begin{aligned} \frac{\mathrm{d}^2t(t_1)}{\mathrm{d}t_1^2}=\frac{1}{q_2}\frac{-\frac{\mathrm{d}R_2(t_1)}{\mathrm{d}t_1}}{R_2^2(t_1)}-\frac{1}{q_1}\frac{-\frac{\mathrm{d}R_1(t_1)}{\mathrm{d}t_1}}{R_1^2(t_1)} \end{aligned}$$
$$\begin{aligned} =\frac{q_2R_2^2(t_1)\frac{a_1^4-(x_B-x_A-t_1)^4}{4(x_B-x_A-t_1)^3R_1(t_1)}+q_1R_1^2(t_1)\frac{a_2^4-t_1^4}{4t_1^3R_2(t_1)}}{q_1q_2R_1^2(t_1)R_2^2(t_1)}. \end{aligned}$$

Let \(x_A=0\) and \(x_B=2(a_1+a_2)\); then the second-order derivative is negative for \(t_1=a_2\). The function \(\frac{\mathrm{d}^2t}{\mathrm{d}t_1^2}(t_1)\) is continuous, so there exists an interval \([a_2-\varepsilon , a_2]\) on which the second-order derivative is negative, for sufficiently small \(\varepsilon \). We conclude that the time function is non-convex. Now we show that the stationary points in this non-convex area are local minima, and therefore we cannot apply Newton-type methods to find the global minimum.

Our example uses the value of the second-order derivative obtained above, together with two further ingredients: the necessary conditions for the point to be stationary:

  • \(q_1R_1(t_1)=q_2R_2(t_1)\) by Equation (11),

  • \(q_2(a_2^2-t_1^2)(x_B-x_A-t_1)=q_1(a_1^2+(x_B-x_A-t_1)^2)t_1\) by Equation (12), thus

    $$\begin{aligned}q_1(a_1^2+(x_B-x_A-t_1)^2)=\frac{q_2(a_2^2-t_1^2)(x_B-x_A-t_1)}{t_1}.\end{aligned}$$

We multiply both fractions in the numerator of the second-order derivative by \(\frac{q_1^2}{q_1^2}\) and \(\frac{q_2^2}{q_2^2}\), respectively; using the first equation, we can then rewrite the expression in the following way:

$$\begin{aligned} \frac{\mathrm{d}^2t(t_1)}{\mathrm{d}t_1^2}&= \frac{q_1R_1(t_1)}{4(q_1R_1(t_1))^4}\left[ \frac{q_1^2(a_1^4-(x_B-x_A-t_1)^4)}{(x_B-x_A-t_1)^3}+\frac{q_2^2(a_2^4-t_1^4)}{t_1^3}\right] \\&= \frac{1}{4(q_1R_1(t_1))^3}\frac{q_1^2(a_1^2-(x_B-x_A-t_1)^2)(a_1^2+(x_B-x_A-t_1)^2)}{(x_B-x_A-t_1)^3}\\&\quad +\frac{1}{4(q_1R_1(t_1))^3}\frac{q_2^2(a_2^2-t_1^2)(a_2^2+t_1^2)}{t_1^3}. \end{aligned}$$

Now we apply the second equation:

$$\begin{aligned} \frac{\mathrm{d}^2t(t_1)}{\mathrm{d}t_1^2}&= \frac{1}{4(q_1R_1(t_1))^3}\frac{q_1(a_1^2-(x_B-x_A-t_1)^2)q_2(a_2^2-t_1^2)(x_B-x_A-t_1)}{t_1(x_B-x_A-t_1)^3}\\&\quad +\frac{1}{4(q_1R_1(t_1))^3}\frac{q_2^2(a_2^2-t_1^2)(a_2^2+t_1^2)}{t_1^3}\\&= \frac{q_2(a_2^2-t_1^2)}{4(q_1R_1(t_1))^3t_1}\left[ \frac{q_1(a_1^2-(x_B-x_A-t_1)^2)}{(x_B-x_A-t_1)^2}+\frac{q_2(a_2^2+t_1^2)}{t_1^2}\right] \\&= \frac{q_2(a_2^2-t_1^2)}{4(q_1R_1(t_1))^3t_1}\left[ \frac{q_1a_1^2}{(x_B-x_A-t_1)^2}-q_1+\frac{q_2a_2^2}{t_1^2}+q_2\right] . \end{aligned}$$

So, \(\frac{\mathrm{d}^2t(t_1)}{\mathrm{d}t_1^2}\ge 0\) for \(t_1<a_2\) and \(q_2>q_1\): the factor in front of the brackets is positive, and the bracketed term is the sum of two positive terms and \(q_2-q_1>0\). We conclude that the stationary points in the non-convex area are local minima.
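The sign argument for the bracketed term can be checked numerically. The sketch below samples illustrative values that satisfy only \(0<t_1<a_2\) and \(q_2\ge q_1\) (they are not solutions of Equations (11) and (12)) and confirms that the bracketed term is always positive.

```python
import random

random.seed(0)

def bracket(q1, q2, a1, a2, t1, X):
    # Bracketed term of the factored second derivative:
    # q1 a1^2 / X^2 - q1 + q2 a2^2 / t1^2 + q2, with X = x_B - x_A - t1.
    return q1 * a1 ** 2 / X ** 2 - q1 + q2 * a2 ** 2 / t1 ** 2 + q2

for _ in range(1000):
    q1 = random.uniform(0.1, 5.0)
    q2 = q1 + random.uniform(0.0, 5.0)       # q2 >= q1
    a1 = random.uniform(0.1, 5.0)
    a2 = random.uniform(0.1, 5.0)
    t1 = random.uniform(0.01, 0.99) * a2     # 0 < t1 < a2
    X = random.uniform(0.1, 10.0)            # X > 0
    assert bracket(q1, q2, a1, a2, t1, X) > 0.0
print("bracket positive on all samples")
```

This reflects the decomposition used above: the first and third summands are positive squares over squares, and the remaining contribution is \(q_2-q_1\ge 0\).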

Appendix 3: Proof of Lemma 4.2

The main ingredients to prove Lemma 4.2 are the following observations:

Observation 9.1

For three points \(A=(x_1, y_1), B=(x_3, y_3) \in S\) and \(M=(x_2, y_2)\), such that \(x_1 <x_2<x_3\) and \(y_2<y_1,\; y_2<y_3\), \(T_{AB}\) lies above \(T_{AM}\) and \(T_{MB}\) on the interval \([x_1, x_3]\).

Proof

Denote the \(x\)-coordinates of the centers of the circle segments \(T_{AM}\), \(T_{MB}\), and \(T_{AB}\) by \(x_0^{AM}\), \(x_0^{MB}\), and \(x_0^{AB}\), respectively. Then \(x_0^{MB} \le x_0^{AB}\le x_0^{AM}\), as \(x_0^{AB}=\lambda x_0^{MB}+(1-\lambda )x_0^{AM}\) for \(\lambda =\frac{x_3-x_2}{x_3-x_1} \in [0,1]\). The circle segment \(T_{AB}\) is then given by the equation

$$\begin{aligned}(x-x_0^{AB})^2+(y-y_0)^2=R^2_{AB},\end{aligned}$$

where \(R^2_{AB}=(x_0^{AB}-x_1)^2+(y_0-y_1)^2\), so \(y=y_0-\sqrt{R^2_{AB}-(x-x_0^{AB})^2}\). Similarly, the circle segment \(T_{AM}\) is given by the equation \(y=y_0-\sqrt{R^2_{AM}-(x-x_0^{AM})^2}\), where we have \(R^2_{AM}=(x_0^{AM}-x_1)^2+(y_0-y_1)^2\).

The circle segment \(T_{AB}\) lies above the circle segment \(T_{AM}\) if

$$\begin{aligned} y_0-\sqrt{R^2_{AB}-(x-x_0^{AB})^2} \ge y_0-\sqrt{R^2_{AM}-(x-x_0^{AM})^2} \end{aligned}$$

for any \(x\in [x_1, x_3]\). It is sufficient to compare the expressions under the radical to verify this fact:

$$\begin{aligned}&R^2_{AB}-(x-x_0^{AB})^2-(R^2_{AM}-(x-x_0^{AM})^2)\\&=(x_0^{AB}-x_1)^2+(y_0-y_1)^2-(x-x_0^{AB})^2-(x_0^{AM}-x_1)^2-(y_0-y_1)^2+(x-x_0^{AM})^2\\&=2(x_0^{AB}-x_0^{AM})(x-x_1)\le 0. \end{aligned}$$

Thus, \(\sqrt{R^2_{AB}-(x-x_0^{AB})^2}\le \sqrt{R^2_{AM}-(x-x_0^{AM})^2}\) for any \(x\in [x_1, x_3]\). Hence,

$$\begin{aligned} y_0&- \sqrt{R^2_{AB}-(x-x_0^{AB})^2} \ge y_0-\sqrt{R^2_{AM}-(x-x_0^{AM})^2}. \end{aligned}$$

Therefore, \(T_{AB}\) lies above \(T_{AM}\). The proof that \(T_{AB}\) lies above \(T_{MB}\) is similar. \(\square \)
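Observation 9.1 can be sanity-checked numerically for circle segments whose centers all lie on a common line \(y=y_0\). The sketch below uses an assumed center line \(y_0=5\) and illustrative points \(A\), \(M\), \(B\), none taken from the paper.

```python
import math

Y0 = 5.0  # assumed common ordinate of the circle centers

def center_x(P, Q):
    # x-coordinate of the center (on y = Y0) of the circle through P and Q
    (x1, y1), (x2, y2) = P, Q
    return (x2 ** 2 - x1 ** 2 + (Y0 - y2) ** 2 - (Y0 - y1) ** 2) / (2.0 * (x2 - x1))

def arc_y(P, Q, x):
    # lower arc y(x) of that circle: y = Y0 - sqrt(R^2 - (x - x0)^2)
    x0 = center_x(P, Q)
    r2 = (x0 - P[0]) ** 2 + (Y0 - P[1]) ** 2
    return Y0 - math.sqrt(r2 - (x - x0) ** 2)

A, M, B = (0.0, 2.0), (1.0, 0.5), (3.0, 1.5)  # illustrative; y_M < y_A, y_M < y_B

# Center ordering x0^{MB} <= x0^{AB} <= x0^{AM} from the convex combination.
assert center_x(M, B) <= center_x(A, B) <= center_x(A, M)

# T_AB lies above T_AM on [x_A, x_M] and above T_MB on [x_M, x_B].
for i in range(101):
    x = A[0] + (M[0] - A[0]) * i / 100.0
    assert arc_y(A, B, x) >= arc_y(A, M, x) - 1e-12
    x = M[0] + (B[0] - M[0]) * i / 100.0
    assert arc_y(M, B, x) <= arc_y(A, B, x) + 1e-12
print("T_AB lies above both sub-arcs")
```

The comparison of the expressions under the radicals in the proof is exactly what makes both loop assertions hold.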

Observation 9.2

For any three consecutive points \(A=(x_1, y_1), C=(x_2, y_1)\) and \(B=(x_2, y_2)\) from \(S\), such that \(x_1<x_2\) and \(T_{AB}\) hits \(L_{AC}\), the time-optimal trajectory lies above \(T_{AB}\) on the interval \([x_1, x_2]\).

Proof

Without loss of generality, we consider \(x_1=0\) and \(D=(\Delta x,y_1)\), where \(\Delta x\) is determined by Lemma 4.1b: in this case, \(L_{AD}\) followed by \(T_{DB}\) is the time-optimal trajectory with one step obstacle. Furthermore, \(\Delta x=x_0^{DB}\). Then by Equation (9)

$$\begin{aligned} x_0^{AB}&= \frac{-2(y_2-y_1)y_0-y^2_1+x^2_2+y^2_2}{2x_2}, \text { and}\\ x_0^{DB}&= \Delta x= \frac{-2(y_2-y_1)y_0-y^2_1+x^2_2+y^2_2}{2(x_2-\Delta x)}+\frac{(\Delta x)^2}{2(\Delta x-x_2)}. \end{aligned}$$

Therefore, \(2x_0^{DB}(x_2-\Delta x)+(\Delta x)^2=2x_2x_0^{AB}\), and hence \(x_0^{DB}-x_0^{AB}=\Delta x-x_0^{AB}=\frac{(\Delta x)^2}{2x_2}\ge 0\). Therefore, \(x_0^{DB}\ge x_0^{AB}\), and the circle segment \(T_{DB}\) lies above \(T_{AB}\) for any \(x \in [x_1, x_2]\); the proof is similar to that of Observation 9.1. \(\square \)
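The center-shift identity \(x_0^{DB}-x_0^{AB}=\frac{(\Delta x)^2}{2x_2}\) is a purely algebraic consequence of the two expressions above and can be verified exactly with rational arithmetic. In the sketch below, `num` abbreviates the shared numerator \(-2(y_2-y_1)y_0-y^2_1+x^2_2+y^2_2\); its internal structure is irrelevant to the identity.

```python
from fractions import Fraction
import random

random.seed(1)
for _ in range(200):
    x2 = Fraction(random.randint(2, 50))
    dx = Fraction(random.randint(1, int(x2) - 1))   # 0 < dx < x2
    # Imposing x0^{DB} = dx fixes the shared numerator `num`:
    num = 2 * dx * (x2 - dx) + dx ** 2
    x0_AB = num / (2 * x2)
    x0_DB = num / (2 * (x2 - dx)) + dx ** 2 / (2 * (dx - x2))
    assert x0_DB == dx                               # consistency of the constraint
    assert x0_DB - x0_AB == dx ** 2 / (2 * x2)       # the identity, exactly
print("identity holds exactly on all samples")
```

Using `Fraction` avoids floating-point round-off, so the equalities are checked exactly rather than to a tolerance.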

Next, we consider the “ground-level case,” in which the unique obstacle between \(A\) and \(B\) is a horizontal line.

Observation 9.3

Let \(A=(x_1, y_1)\), \(C=(x_1, y_2)\), \(D=(x_2, y_2)\), and \(B=(x_2, y_3)\) be four consecutive points from \(S\) such that \(x_1<x_2, \;y_2<y_1,\; y_2<y_3\) and such that \(T_{AB}\) hits \(L_{CD}\). Then the time-optimal trajectory with obstacles is the combination of the circle segment \(T_{AK}\), the straight line segment \(L_{KL}\), and the circle segment \(T_{LB}\), where \(K\) and \(L\) are the points of intersection of \(T_{AD}\) and \(T_{CB}\) with \(L_{CD}\), respectively.

Proof

Consider two triplets \(A,\;C,\;D\) and \(C, \;D,\;B\) as the input to Lemma 4.1, see Fig. 9. \(T_{AK}\) does not intersect \(T_{LB}\) by Observation 9.1, as \(T_{AB}\) hits \(L_{CD}\). \(\square \)

Finally, we present the proof of Lemma 4.2.

Proof

Assume the statement is not correct: the trajectory \(T_{AB}\) hits some obstacle, yet there exists a time-optimal trajectory \(T\) with obstacles from \(A\) to \(B\) that does not hit any obstacle and does not pass through any potential breakpoint.

Fig. 9

The time-optimal trajectory for the unique horizontal obstacle between \(A\) and \(B\)

We split the proof into five parts depending on the type of the “terrain” between \(A\) and \(B\).

  1. (1)

    The obstacles between \(A\) and \(B\) are monotonically increasing (or decreasing), see Fig. 10. First, we note that \(T\) is convex; otherwise, we could always locally shortcut it with a straight line in such a way that it still does not pass any potential breakpoint, and since the straight line is always better than any trajectory above it by Lemma 3.1, \(T\) would not be time-optimal. Without any loss of generality, we assume that \(y_1<y_3\). Let us consider the last obstacle \(C=(x_2,y_2)\) on the way along \(T\). Note that \(T\) does not pass \(C\) by assumption. By Lemma 4.1, the time-optimal trajectory from \(C\) to \(B\) has one of two shapes:

    • The time-optimal trajectory is the combination of the straight line \(L_{CD}\) and \(T_{DB}\), see Fig. 10a. Let \(M\) be the intersection of the line \(y=y_3\) and the trajectory \(T\). The time-optimal trajectory from \(M\) to \(B\) is the combination of the straight line \(L_{MD}\) and the circle segment \(T_{DB}\) by Lemma 4.1b. Then the following combined trajectory is better than \(T\): \(T\) before the point \(M\), the straight line \(L_{MD}\), and \(T_{DB}\); it passes the potential breakpoint \(C\), a contradiction.

    • The time-optimal trajectory is \(T_{CB}\), see Fig. 10b. Let \(M\) be the point of intersection of \(T\) and \(T_{CB}\). If \(T_{CB}=T_{MB}\) does not hit any obstacle before crossing \(T\), or it crosses an obstacle only in the potential breakpoint \(K\), then the trajectory \(T\) between \(A\) and \(M\) plus \(T_{MB}\) is better than \(T\) and passes the potential breakpoint \(C\), a contradiction. If \(T_{MB}\) hits the obstacle determined by the potential breakpoint \(K\), then we consider \(T_{KB}\) (or \(L_{KD_K}\) and \(T_{D_{K}B}\)) by Lemma 4.1a or b. There are only finitely many obstacles, so we can find the “last” obstacle \(L\) such that the time-optimal trajectory in Lemma 4.1 does not hit it. Any new circle segment, or combination of line and circle segment, lies higher than the previous one by induction, so it does not hit any obstacle to the right of the considered potential breakpoint. We use the time-optimal trajectory determined by the potential breakpoint \(L\) and \(B\) to shortcut \(T\) as in the previous case, a contradiction.

  2. (2)

    There is an obstacle between \(A\) and \(B\) which is higher than \(A\) and \(B\), see Fig. 11. By Lemma 3.1, we can always shortcut \(T\) with the line \(L_{KM}\) which passes two potential breakpoints, a contradiction.

  3. (3)

    The obstacles between \(A\) and \(B\) are a “convex” rectilinear function, see Fig. 12. Similar to case (1), the trajectory \(T\) is convex. Denote the lowest point of the trajectory \(T\) by \(M=(x_M, y_M)\). We consider the following two cases:

    • All the obstacles which are hit by \(T_{AB}\) are lower than \(M\), see Fig. 12a. We introduce a new instance: the horizontal line \(y=y'\) at the altitude of the highest obstacle \(K=(x', y')\) hit by \(T_{AB}\). \(T\) is a valid trajectory for the new instance. By Observation 9.3, the time-optimal trajectory \(T'\) for the new instance is the circle segment \(T_{AK_1}\), the line segment \(L_{K_1K_2}\), and the circle segment \(T_{K_2B}\), where \(K_1\) and \(K_2\) belong to the line \(y=y'\). By Observation 9.2, this trajectory does not hit any obstacle of the original instance, so it is a valid trajectory for the original instance. \(T'\) is time-optimal for the new instance, hence it is better than \(T\). So \(T'\) is also better than \(T\) for the original instance, a contradiction.

    • There is at least one obstacle \(K=(x', y')\) hit by \(T_{AB}\) which is higher than \(M\), see Fig. 12b. We introduce a new instance: the horizontal line \(y=y_M\). If either \(T_{AM}\) or \(T_{MB}\) hits the obstacles, then we are in the setting of case (1) for one of the segments \(T_{AM}\) or \(T_{MB}\); hence there is a trajectory \(T'\) which is better than \(T\) and passes one of the potential breakpoints. So we only consider the case when neither \(T_{AM}\) nor \(T_{MB}\) hits any obstacle. This means that \(T\) is just the combination of \(T_{AM}\) and \(T_{MB}\). By Observation 9.1, \(T_{AB}\) lies above \(T\), so \(T_{AB}\) also does not hit any obstacle, a contradiction.

  4. (4)

    There is an obstacle between \(A\) and \(B\) which is higher than \(A\), lower than \(B\), and is a point of non-monotonicity, see Fig. 13. If the trajectory \(T\) “goes down” after this obstacle, see Fig. 13a, we can shortcut it as in case (2), a contradiction. If it stays higher, see Fig. 13b, then we can introduce a new equivalent monotonic instance as in case (1).

  5. (5)

    The obstacles between \(A\) and \(B\) do not form a “convex” rectilinear function, and all the obstacles are lower than \(A\) and \(B\), see Fig. 14. The obstacle function is not “convex,” so there exists an obstacle which is “concave.” If \(T\) “goes down” on both sides of this obstacle, we can shortcut it as in case (2), see Fig. 14a, a contradiction. If \(T\) stays higher on at least one side, see Fig. 14b, then we can introduce a new “convex” instance as in case (4), so we are in the setting of case (3).

Fig. 10

Monotonic obstacle case

Fig. 11

Concave obstacle case

Fig. 12

Convex obstacle case

Fig. 13

Non-monotonic obstacle case, the obstacle is higher than \(A\)

Fig. 14

Non-monotonic obstacle case, the obstacle is lower than \(A\) and \(B\)

Finally, we conclude that if the time-optimal trajectory between two potential breakpoints \(A\) and \(B\) hits an obstacle then there is a potential breakpoint \(C\) between \(A\) and \(B\) such that the time-optimal trajectory with obstacles from \(A\) to \(B\) passes \(C\). \(\square \)


Cite this article

Berger, A., Grigoriev, A., Peeters, R. et al. On Time-Optimal Trajectories in Non-Uniform Mediums. J Optim Theory Appl 165, 586–626 (2015). https://doi.org/10.1007/s10957-014-0590-y
