
On feedback strengthening of the maximum principle for measure differential equations


Abstract

For a class of nonlinear nonconvex impulsive control problems with states of bounded variation driven by Borel measures, we derive a new type of non-local necessary optimality condition, named the impulsive feedback maximum principle. This optimality condition is expressed entirely in terms of the objects of the impulsive maximum principle (IMP), while it employs certain “feedback variations” of the impulsive control. The obtained optimality condition is shown to potentially discard non-optimal IMP-extrema, and can be viewed as a deterministic non-local iterative algorithm for optimal impulsive control.


Notes

  1. For simplicity, we confine ourselves to a linear cost function; minimization of a functional \(I=l(x(T))\) with a \(C^2\) cost function l reduces to the addressed case [19]. Problems with \(C^1\) and some particular classes of C (nonsmooth) cost functions can be treated as in [17, 18].

  2. The direct transformation \((P)\rightarrow (RP)\) works as a “magnifier” for the singular dynamics of MDE (1), associated with the discrete and singular continuous components of the measure \(\mu \). It regularizes the time-line of “fast processes”—observed as instantaneous switches of the state—by putting them into a common time scale together with the regular, absolutely continuous motion; this is done by extending the instants of jumps into intervals proportional to the intensities of the respective impulses, while the support of the singular continuous component of the measure \(\mu \)—typically having the topological structure of a Cantor discontinuum—is turned into a sort of “fat” Cantor set (Smith–Volterra–Cantor set).

  3. In its definition we employ an obvious description of the controllability set of the \({\mathbf {t}}\)-component of system (14), (15) to the point \((s, {\mathbf {t}})=(S, T)\).

References

  1. Ancona, F., Bressan, A.: Patchy vector fields and asymptotic stabilization. ESAIM-COCV 4, 445–472 (1999)


  2. Arutyunov, A.V., Karamzin, D.Yu.: Non-degenerate necessary optimality conditions for the optimal control problem with equality-type state constraints. J. Glob. Optim. 64(4), 623–647 (2016)

  3. Arutyunov, A.V., Karamzin, D.Yu., Pereira, F.L.: On constrained impulsive control problems. J. Math. Sci. 165(6), 654–688 (2010)

  4. Aubin, J.P., Cellina, A.: Differential Inclusions. Springer, Berlin (1994)


  5. Barron, E.N., Jensen, R., Menaldi, J.L.: Optimal control and differential games with measures. Nonlinear Anal. 21(4), 241–268 (1993)


  6. Bressan, A.: Hyperimpulsive motions and controllizable coordinates for Lagrangean systems. Atti Acc. Lincei Rend. Fis. 8(XIX), 197–246 (1989)

  7. Bressan, A., Colombo, G.: Existence and continuous dependence for discontinuous O.D.E.’s. Bollettino dell’Unione Matematica Italiana IV–B, 295–311 (1990)


  8. Bressan, A., Rampazzo, F.: Impulsive control systems without commutativity assumptions. J. Optim. Theory Appl. 81(3), 435–457 (1994)


  9. Bressan, A., Rampazzo, F.: On systems with quadratic impulses and their application to Lagrangean mechanics. SIAM J. Control Optim. 31, 1205–1220 (1993)


  10. Ceragioli, F.: Discontinuous Ordinary Differential Equations and Stabilization. Ph.D. Thesis, University of Florence (2000)

  11. Clarke, F.H., Hiriart-Urruty, J.-B., Ledyaev, Yu.S.: On global optimality conditions for nonlinear optimal control problems. J. Glob. Optim. 13(2), 109–122 (1998)

  12. Clarke, F.H., Ledyaev, Yu.S., Stern, R.J., Wolenski, P.R.: Nonsmooth Analysis and Control Theory. Springer, New York (1998)

  13. Daryin, A.N., Kurzhanskii, A.B.: Closed-loop impulse control of oscillating systems. In: Proceedings of IFAC Workshop on Periodic Control Systems (PSYCO–07), Saint-Petersburg (2007)

  14. Daryin, A.N., Kurzhanskii, A.B.: Control synthesis in a class of higher-order distributions. Differ. Equ. 43(11), 1479–1489 (2007)


  15. Daryin, A.N., Kurzhanskii, A.B., Seleznev, A.V.: A dynamic programming approach to the impulse control synthesis problem. In: Proceedings of Joint 44th IEEE CDC-ECC 2005, Seville (2005)

  16. Dykhta, V.A.: Nonstandard duality and nonlocal necessary optimality conditions in nonconvex optimal control problems. Autom. Remote Control 75(11), 1906–1921 (2014)


  17. Dykhta, V.A.: Positional strengthenings of the maximum principle and sufficient optimality conditions. Proc. Steklov Inst. Math. 293(1), S43–S57 (2016)


  18. Dykhta, V.A.: Variational necessary optimality conditions with feedback descent controls for optimal control problems. Dokl. Math. 91(3), 394–396 (2015)


  19. Dykhta, V.A.: Weakly monotone solutions of the Hamilton-Jacobi inequality and optimality conditions with positional controls. Autom. Remote Control 75(5), 829–844 (2014)


  20. Dykhta, V., Samsonyuk, O.: A maximum principle for smooth optimal impulsive control problems with multipoint state constraints. Comput. Math. Math. Phys. 49(6), 942–957 (2009)


  21. Dykhta, V., Samsonyuk, O.: Optimal Impulsive Control with Applications. Fizmathlit, Moscow (2000). (in Russian)


  22. Filippov, A.F.: Differential equations with discontinuous right-hand sides. Trans. AMS 42, 199–231 (1964)


  23. Filippov, A.F.: Differential Equations with Discontinuous Right-hand Sides. Kluwer Academic Publishers, Norwell (1988)


  24. Finogenko, I.A., Ponomarev, D.V.: About differential inclusions with positional explosive and impulsive controls. Proc. Inst. Math. Mech. UB RAS 19(1), 284–299 (2012)


  25. Fraga, S.L., Pereira, F.L.: On the feedback control of impulsive dynamic systems. In: Proceedings of the 47th IEEE Conference on Decision and Control, pp. 2135–2140 (2008)

  26. Fraga, S.L., Pereira, F.L.: Hamilton–Jacobi–Bellman equation and feedback synthesis for impulsive control. IEEE Trans. Autom. Control 57(1), 244–249 (2012)


  27. Goncharova, E.V., Staritsyn, M.V.: Control improvement method for impulsive systems. J. Comput. Syst. Sci. Int. 49(6), 883–890 (2010)


  28. Goncharova, E.V., Staritsyn, M.V.: Gradient refinement methods for optimal impulse control problems. Autom. Remote Control 72(10), 2188–2195 (2011)

  29. Goncharova, E., Staritsyn, M.: Optimal impulsive control problem with phase and mixed constraints. Dokl. Math. 84(3), 882–885 (2011)


  30. Goncharova, E., Staritsyn, M.: Optimization of measure-driven hybrid systems. J. Optim. Theory Appl. 153(1), 139–156 (2012)


  31. Goncharova, E., Staritsyn, M.: Optimal control of dynamical systems with polynomial impulses. Discrete Cont. Dyn. Syst. Ser. A 35, 4367–4384 (2015)


  32. Gurman, V.: On optimal processes with unbounded derivatives. Autom. Remote Control 17, 14–21 (1972)


  33. Hájek, O.: Discontinuous differential equations I, II. J. Differ. Equ. 32(149–170), 171–185 (1979)


  34. Ioffe, A.D., Tikhomirov, V.M.: Theory of Extremal Problems. North-Holland, Amsterdam (1979)


  35. Karamzin, D.: Necessary conditions of the minimum in an impulse optimal control problem. J. Math. Sci. 139(6), 7087–7150 (2006)


  36. Kostousov, V.B.: On the structure of impulsive sliding modes under disturbances of measure type. Differ. Uravn. (Differ. Equ.), Part I: 20(3), 382–391; Part II: 20(5), 645–753 (1984) (in Russian)

  37. Krasovskii, N.N., Subbotin, A.I.: Game-Theoretical Control Problems. Springer, New York (1988)


  38. Krotov, V.F.: Global Methods in Optimal Control Theory. Monographs and Textbooks in Pure and Applied Mathematics, vol. 195. Marcel Dekker, New York (1996)


  39. Kurzhanskii, A.B.: Impulse control synthesis, fast controls and hybrid system modeling. Plenary talk at ALCOSP- 07 (2007)

  40. Kurzhanskii, A.B.: On synthesis of systems with impulse controls. Mechatron. Autom. Control 4, 2–12 (2006). (in Russian)


  41. Kurzhanskii, A., Tochilin, P.: Impulse controls in models of hybrid systems. Differ. Equ. 45(5), 731–742 (2009)


  42. Kurzhanskii, A.B., Varaiya, P.: Impulsive inputs for feedback control and hybrid system modeling. In: Sivasundaram, S., Devi, J.V., Udwadia, F.E., Lasiecka, I. (eds.) Advances in Dynamics and Control: Theory Methods and Applications, pp. 305–326. Cambridge Scientific Publishers, Cottenham (2011)


  43. Miller, B.: The generalized solutions of nonlinear optimization problems with impulse control. SIAM J. Control Optim. 34, 1420–1440 (1996)


  44. Miller, B., Rubinovich, E.: Impulsive Control in Continuous and Discrete-Continuous Systems. Kluwer Academic/Plenum Publishers, New York (2001)


  45. Motta, M., Rampazzo, F.: Space-time trajectories of nonlinear systems driven by ordinary and impulsive controls. Differ. Integral Equ. 8, 269–288 (1995)


  46. Motta, M., Rampazzo, F.: Dynamic programming for nonlinear systems driven by ordinary and impulsive control. SIAM J. Control Optim. 34, 199–225 (1996)


  47. Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Wiley, New York (1962)


  48. Rishel, R.: An extended Pontryagin principle for control systems whose control laws contain measures. J. Soc. Ind. Appl. Math. Ser. A Control 3, 191–205 (1965)


  49. Sesekin, A.N., Nepp, A.N.: Impulse position control algorithms for nonlinear systems. In: AIP Conference Proceedings 1690 (2015). https://doi.org/10.1063/1.4936709

  50. Sorokin, S.P.: Necessary feedback optimality conditions and nonstandard duality in problems of discrete system optimization. Autom. Remote Control 75(9), 1556–1564 (2014)


  51. Sorokin, S., Staritsyn, M.: Feedback necessary optimality conditions for a class of terminally constrained state-linear variational problems inspired by impulsive control. Numer. Algebra Control Optim. 7(2), 201–210 (2017)


  52. Sorokin, S.P., Staritsyn, M.V.: Necessary optimality condition with feedback controls for nonsmooth optimal impulsive control problems. In: Proceedings of the VIII International Conference Optimization and Applications (OPTIMA-2017), Petrovac, Montenegro, 2–7 October 2017, pp. 531–538 (2017)

  53. Sorokin, S.P., Staritsyn, M.V.: Numeric algorithm for optimal impulsive control based on feedback maximum principle. Optim. Lett. (2018). https://doi.org/10.1007/s11590-018-1344-9


  54. Strekalovsky, A.S., Yanulevich, M.V.: On global search in nonconvex optimal control problems. J. Glob. Optim. 65(1), 119–135 (2016)


  55. Vinter, R., Pereira, F.: A maximum principle for optimal processes with discontinuous trajectories. SIAM J. Control Optim. 26, 205–229 (1988)


  56. Warga, J.: Variational problems with unbounded controls. J. SIAM Control Ser. A 3(3), 424–438 (1987)


  57. Wolenski, P.R., Zabič, S.: A sampling method and approximation results for impulsive systems. SIAM J. Control Optim. 46(3), 983–998 (2007)


  58. Zavalischin, S., Sesekin, A.: Dynamic Impulse Systems: Theory and Applications. Kluwer Academic Publishers, Dordrecht (1997)

  59. Zavalishchin, S.T., Sesekin, A.N.: Impulse-sliding regimes of nonlinear dynamic systems. Differ. Equ. 19(5), 562–571 (1983)


  60. Zavalishchin, S.T., Sesekin, A.N.: On the question of the synthesis of impulse control in the problem of optimization of dynamic systems with quadratic functional. Some methods of analytical construction of impulse regulators. Sverdlovsk. Urals Research Center USSR Academy of Sciences, pp. 3–8 (1979)


Acknowledgements

The authors are grateful to V.A. Dykhta for inspiring this study and for his valuable advice.


Corresponding author

Correspondence to Stepan Sorokin.


The study is partially supported by the Russian Foundation for Basic Research, Grants Nos. 16-31-60030, 16-08-00272, 17-01-00733.

Appendices

Appendix A: Feedback control and control synthesis of ordinary dynamical systems

The goal of this appendix is to provide a motivation for the chosen concept of impulsive feedback control. To clarify the idea, we appeal to ordinary control systems.

On a finite time interval \({\mathcal {T}}\), consider a control dynamical system

$$\begin{aligned} \dot{x}(t) = f\big (t,x(t),u(t)\big ), \quad x(0)=x^0, \quad u(t)\in U, \end{aligned}$$
(41)

with given \(x^0\in {\mathbb {R}}^n\) and a compact \(U\subset {\mathbb {R}}^m\). Control inputs are \(u=u(\cdot )\in {\mathcal {U}}\doteq L_\infty ({\mathcal {T}}, U)\), and, under standard regularity assumptions (Lipschitz continuity and sub-linear growth of f w.r.t. x), a trajectory is the unique Carathéodory solution \(x=x[u](\cdot )\in {\mathcal {X}}\doteq AC({\mathcal {T}}, {\mathbb {R}}^n)\) of system (41) corresponding to a control \(u\in {\mathcal {U}}\).

Given an arbitrary function \( w: \ {\mathcal {T}}\times {\mathbb {R}}^n\mapsto U,\) which is said to be a feedback control (or closed-loop control, or simply a feedback) of system (41), we focus on consistent definitions of a solution to the closed-loop system

$$\begin{aligned} \dot{x}(t) = f\big (t,x(t),w(t, x(t))\big ), \quad x(0)=x^0. \end{aligned}$$

The Carathéodory solution concept \(({\mathcal {C}})\) is still a cornerstone here. A solution of this type is uniquely defined for feedbacks w that produce vector fields \(F(t, x)\doteq f\big (t, x, w(t, x)\big )\) enjoying the well-known Carathéodory properties, and it can exist even for functions w(t, x) that are discontinuous in x. However, under weaker regularity assumptions, the existence of a Carathéodory feedback solution is no longer guaranteed. On the other hand, there is a series of alternative relevant solution concepts [1, 7, 10, 22, 23, 33, 37] that make sense in much more general cases, even for an arbitrary feedback control.

In this list, we mark out three concepts of a “constructive” (iterative) nature:

  • Euler feedback solution (\({\mathcal {E}}\)),

  • Krasovskii-Subbotin (generalized sampling) feedback solution (\(\mathcal {KS}\)), and

  • Model-predictive feedback solution (\(\mathcal {MP}\)).

The mentioned solutions are defined as outcomes of certain step-by-step procedures—as uniform limits of sequences of Carathéodory solutions produced by control inputs of a relatively simple structure. The first two types of feedback solution are rather conventional, so we focus on \(\mathcal {MP}\) solutions, which are a novel concept.

Definition 3

A control synthesis of system (41) is an arbitrary everywhere defined single-valued mapping \(\omega : \, {\mathbb {R}}^n \mapsto {\mathcal {U}}.\) In other words, it is a family of control functions \(u_x(\cdot )\in {\mathcal {U}}\), parameterized by state \(x\in {\mathbb {R}}^n\).

A feedback control w that is measurable in t trivially produces a control synthesis by setting \(u_x(t)\doteq w(t,x)\), with a further factorization in \(L_\infty \).

Given a partition \(\pi \) of \({\mathcal {T}}\), and a synthesis \(\omega : \, x\mapsto u_x(\cdot )\), a respective \(\pi \)-polygonal arc is the function \(\varsigma _\pi [ \omega ]^S : {\mathcal {T}}\mapsto {\mathbb {R}}^n\) obtained through the following iterations with \(x_0\doteq x^0\) and \(x_i \doteq \varsigma (t_i)\):

$$\begin{aligned} \varsigma (t)=\varsigma (t_i)+\int _{t_i}^{t}f\big (s, \varsigma (s), u_{x_i}(s)\big )ds, \quad t\in [t_i, t_{i+1}], \quad i={0,\ldots , N-1}. \end{aligned}$$
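As a minimal sketch of how a \(\pi \)-polygonal arc is assembled (our illustration, not code from the paper; the dynamics, the explicit Euler integrator, and the synthesis are all hypothetical): on each cell the open-loop control \(u_{x_i}(\cdot )\), chosen by the synthesis at the node state \(x_i\), is held over the whole interval \([t_i, t_{i+1}]\).

```python
import numpy as np

def mp_polygonal_arc(f, synthesis, x0, t_grid, substeps=100):
    """Assemble a pi-polygonal arc for the partition t_grid = [t_0, ..., t_N]:
    on each cell [t_i, t_{i+1}], integrate the dynamics under the open-loop
    control u_{x_i}(.) that the synthesis assigns to the node state x_i."""
    xs = [np.atleast_1d(np.asarray(x0, dtype=float))]
    for i in range(len(t_grid) - 1):
        u = synthesis(xs[-1])                  # open-loop control for this cell
        x, t = xs[-1].copy(), t_grid[i]
        dt = (t_grid[i + 1] - t_grid[i]) / substeps
        for _ in range(substeps):              # explicit Euler inside the cell
            x = x + dt * f(t, x, u(t))
            t += dt
        xs.append(x)
    return np.array(xs)

# toy check: dx/dt = u with the (hypothetical) synthesis u_x(t) = -x, frozen
# at each node, approximates the exponential decay x(t) = exp(-t)
arc = mp_polygonal_arc(lambda t, x, u: u,
                       lambda x_node: (lambda t: -x_node),
                       1.0, [i / 50 for i in range(51)])
```

As \(\mathrm{diam}(\pi )\rightarrow 0\), such arcs converge uniformly (here, to \(e^{-t}\)), in line with Definition 4.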

Definition 4

A function \(\varsigma : \ {\mathcal {T}}\mapsto {\mathbb {R}}^n\), which is a uniform limit of a subsequence of \(\pi \)-polygonal arcs \(\varsigma _\pi [ \omega ]^S \) as \(\mathrm{diam}(\pi ) \rightarrow 0\), is said to be a model-predictive feedback solution associated with the synthesis \(\omega \).

Proposition 6

Assume that f satisfies the standard Lipschitz continuity and sublinear growth assumptions. Then, for any synthesis \(\omega : \, x \mapsto u_x\), there exists at least one \(\mathcal {MP}\) solution of the closed-loop system (41), and this solution is absolutely continuous.

Proof

Let \(\pi _k=\{t_i^k\}\), \(k =1, 2, \ldots \), be partitions of the interval \({\mathcal {T}}\) such that \(\mathrm{diam}(\pi _k)\le \varepsilon _k \rightarrow 0\) as \(k\rightarrow \infty \), let \(\varsigma _k=\varsigma _{\pi _k} [ \omega ]^S \) be the respective \(\pi _k\)-polygonal arcs, and let \(u_k\) be the concatenation of the functions \(u_{\varsigma _{k}(t_i^k)}\) defined through the above step-by-step procedure (note that \(u_k\) is an admissible open-loop control and \(\varsigma _k=\varsigma [u_k]\)).

Standard arguments based on Gronwall's inequality ensure that the tube of Carathéodory solutions to system (41) is bounded, which implies that the family \(\{\varsigma _k\}\) is equi-bounded. Furthermore, for any \(\varsigma _k\), the compositions \(f\big (t, \varsigma _k(t), u_k(t)\big )\) are also uniformly bounded, which implies that the solutions \(\varsigma _k\) are uniformly Lipschitz continuous on \({\mathcal {T}}\); hence, the family \(\{\varsigma _k\}\) is equicontinuous. By the Arzelà–Ascoli selection principle, there is a uniformly converging subsequence of \(\{\varsigma _k\}\), whose limit \(\varsigma \) is an \(\mathcal {MP}\) solution, by definition. The absolute continuity of \(\varsigma \) follows from [4, Theorem 4]. \(\square \)

It is important to note that the solution concepts introduced above are pairwise distinct, i.e., the respective sets of solutions are not, in general, subsets of one another. For the distinctions between \({\mathcal {C}}\), \({\mathcal {E}}\), and \(\mathcal {KS}\), we refer to [10] and the bibliography therein, where relevant counter-examples are collected; here we compare the concepts \(\mathcal {KS}\) and \(\mathcal {MP}\). The following two simple examples show that, in fact, \(\mathcal {KS} \nsubseteq \mathcal {MP}\) and \(\mathcal {MP} \nsubseteq \mathcal {KS}\).

Example A.1

Consider the dynamic system:

$$\begin{aligned} \dot{x} =u, \quad x(0)=0, \quad u\in [0,1], \quad t\in [0,1]. \end{aligned}$$

The feedback control, defined formally as \(w(t, x)=v(t)\), \(t\in [0,1]\), \(x\in {\mathbb {R}}\), with

$$\begin{aligned} v(t)=\left\{ \!\begin{array}{ll} 0, &{}\quad \text{ if } t \text{ is } \text{ irrational },\\ 1, &{}\quad \text{ otherwise, }\end{array}\right. \end{aligned}$$

produces a unique natural Carathéodory solution \(x\equiv 0\), and a continuum of \(\mathcal {KS}\) solutions, in particular, \(x=t\) (corresponding to partitions with rational nodes). On the other hand, the respective control synthesis \(x\mapsto u_x\) gives a single \(\mathcal {MP}\) solution \(x\equiv 0\), which coincides with the Carathéodory one. Thus, \(\mathcal {KS} \nsubseteq \mathcal {MP}\).

Note that the addressed “feedback” is, obviously, a representative of the measurable control \(u\equiv 0\). At the same time, sampling of the respective solution \(x\equiv 0\) by the \(\mathcal {KS}\) or \({\mathcal {E}}\) algorithms leads to very different solutions, while the \(\mathcal {MP}\) scheme naturally weeds out such pathological cases.
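Example A.1 can be sketched in exact rational arithmetic (our illustration; the a.e. value of the integrand in the model-predictive step is hard-coded from the fact that the rationals form a Lebesgue-null set):

```python
from fractions import Fraction

def v(t):
    """Feedback of Example A.1: 1 at rational instants, 0 elsewhere.
    Fraction inputs model rational time instants exactly."""
    return 1 if isinstance(t, Fraction) else 0

def ks_solution_at_1(N):
    """Krasovskii-Subbotin sampling over a rational partition of [0, 1]:
    the value v(t_i), measured at the node, is held over [t_i, t_{i+1}]."""
    x, h = Fraction(0), Fraction(1, N)
    for i in range(N):
        x += v(i * h) * h      # every node is rational, so v(t_i) = 1
    return x

def mp_solution_at_1(N):
    """Model-predictive step: integrate the open-loop control v(.) over each
    cell; v = 0 Lebesgue-a.e., so its Caratheodory integral over any cell
    is 0 (hard-coded measure-theoretic fact)."""
    cell_integral_of_v = Fraction(0)
    x = Fraction(0)
    for _ in range(N):
        x += cell_integral_of_v
    return x
```

Every rational sampling thus yields the \(\mathcal {KS}\) solution \(x=t\) at \(t=1\), while the \(\mathcal {MP}\) scheme reproduces the Carathéodory solution \(x\equiv 0\).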

Example A.2

Consider the system

$$\begin{aligned} \dot{x} =1+u, \quad x(0)=0, \quad u\in [0,1], \quad t\in [0,1]. \end{aligned}$$

The feedback control

$$\begin{aligned} w(t,x)=\left\{ \!\begin{array}{ll} 1, &{}\quad \text{ if } t \text{ is } \text{ irrational } \text{ and } x \text{ is } \text{ rational },\\ 0, &{}\quad \text{ otherwise, }\end{array}\right. \end{aligned}$$

generates a single \(\mathcal {KS}\)-solution \(x=t\), while the respective control synthesis \(\omega : \ u_x(t)=w(t,x)\) produces a continuum of \(\mathcal {MP}\) solutions, in particular, \(x=2t\). Thus, \(\mathcal {MP} \nsubseteq \mathcal {KS}\).

Appendix B: Feedback necessary optimality conditions: how feedbacks serve to discard non-optimal extrema

This appendix is intended to give the reader some intuition about the paradigm of feedback necessary optimality conditions and, in particular, to answer the question: “Why can one expect that the feedback controls used in the feedback maximum principle serve to ‘filter out’ non-optimal processes, in particular, non-optimal extrema?”

The idea is, in fact, quite simple; it goes back to the technique of the modified Lagrangian with a weakly monotone function that is linear in the state variable [16,17,18,19], and refers to the principle of extremal aiming [12, 37]: given a reference (examined) process \({\bar{\gamma }}=({\bar{y}},{\bar{v}})\), \({\bar{y}}\doteq (\bar{{\mathbf {t}}},\bar{{\mathbf {x}}})\), we try to construct another process \({\hat{\gamma }}\) of a lower cost (this would mean that \({\bar{\gamma }}\) is not optimal and, hence, is discarded by \({\hat{\gamma }}\)). Having fixed \(\xi \) and \({\bar{\phi }}\), introduce the function

$$\begin{aligned} \eta (s, y)\doteq \eta (s, {\mathbf {t}}, {\mathbf {x}})= \xi \big ({\mathbf {t}}-\bar{{\mathbf {t}}}(s)\big )+\langle {\bar{\phi }}(s), {\mathbf {x}}- \bar{{{\mathbf {x}}}}(s) \rangle \end{aligned}$$

and define

$$\begin{aligned} {\bar{\eta }} (s,y)=\eta (s,y)+\int \limits _s^S \inf \limits _{y\in O_r} \left[ \max \limits _{|v|\le 1} \frac{d\eta }{d\tau }(\tau ,y,v) \right] d\tau , \end{aligned}$$

where

$$\begin{aligned} \displaystyle \frac{d\eta }{ds}(s,y, v)\doteq \eta _s+H\big ({\mathbf {x}}, \eta _{{\mathbf {x}}}(s),\eta _{{\mathbf {t}}},v\big ) \end{aligned}$$
(42)

is the total derivative of the function \(\eta \) along system (15), and \(O_r\) is an outer estimate of the trajectory tube (the existence of such an outer estimate follows from the uniform boundedness of trajectories, guaranteed by the sublinear growth assumption in (H)).

The function \({\bar{\eta }}\) enjoys the following properties:

  1. \({\bar{\eta }}\) is weakly increasing w.r.t. the dynamical system [12], i.e., for any initial data \((s_*,y_*)\), there is at least one trajectory y of (15), (14) with \(y(s_*)=y_*\) such that the composition \({\bar{\eta }}\big (s,y(s)\big )\) does not decrease on \([s_*, S]\). This property is equivalent to the following Hamilton–Jacobi inequality:

    $$\begin{aligned} \max \limits _{|v|\le 1} \frac{d{\bar{\eta }}}{ds}(s,y,v)\ge 0,\quad \forall \, \big (s,y\big )\in {\mathcal {S}}\times {\mathcal {T}}\times {\mathbb {R}}^n. \end{aligned}$$
  2. Given an admissible process \(\gamma =(y, v)\), \(y=({\mathbf {t}},{\mathbf {x}})\), one has:

    $$\begin{aligned} \begin{aligned} {\mathcal {I}}(\gamma )-{\mathcal {I}}({\bar{\gamma }})&=\langle c, {{\mathbf {x}}}(S)\rangle - \langle c,\bar{{\mathbf {x}}}(S)\rangle \\&= -{\bar{\eta }} \big (S,y(S)\big )= -\displaystyle \int \limits _0^S \frac{d{\bar{\eta }}}{ds}\big (s,y(s),v(s)\big )ds-{\bar{\eta }}(0,y_0). \end{aligned} \end{aligned}$$
    (43)

From the exact increment formula (43) it is clear that, in order to minimize the cost \({\mathcal {I}}\), one should search for a state trajectory \(s \mapsto y(s)\) which ensures the “fastest increase” of \({\bar{\eta }} (s, y)\). In view of (42), this leads to the following finite-dimensional optimization problem:

$$\begin{aligned} H\big ({\mathbf {x}}, {\bar{\phi }}(s),\xi ,v\big )\rightarrow \max \limits _{|v|\le 1},\quad \big (s,y=({\mathbf {t}}, {\mathbf {x}})\big ) \in {{\mathcal {S}}} \times {\mathcal {T}}\times {\mathbb {R}}^n, \end{aligned}$$

whose formal solutions are feedback controls of the structure

$$\begin{aligned} w=w_\xi (s,{\mathbf {x}})\in W_\xi \big ({\mathbf {x}}, {\bar{\phi }}(s)\big ), \end{aligned}$$

arising in Sect. 4. The associated feedback polygonal arcs and piecewise open-loop (piecewise constant) controls are, thus, the desired candidates for the role of the discarding process \({\hat{\gamma }}\).
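To make the mechanics concrete, here is a toy numerical check (our illustration for an ordinary scalar analogue, not the paper's measure-driven setting): for \(\dot{x}=u\), \(u\in [0,1]\), \(x(0)=0\), cost \(I=x(S)\), the feedback maximizing the Pontryagin function along the reference co-state produces a process of strictly lower cost, and a specialization of the increment formula reproduces the cost difference exactly.

```python
import numpy as np

# Toy ordinary analogue: dx/ds = u, u in [0, 1], x(0) = 0, minimize I = x(S).
S, N = 1.0, 2000
s = np.linspace(0.0, S, N + 1)
ds = S / N
integral = lambda g: float(np.sum(g[:-1]) * ds)   # left-Riemann quadrature

u_ref = np.ones(N + 1)        # reference control, cost I = 1
# Adjoint for the linear cost c = 1: phi_bar(s) = -c = -1, so the Pontryagin
# function H = phi_bar * u = -u is maximized by the feedback w = 0:
u_hat = np.zeros(N + 1)       # feedback-generated control, cost I = 0

# Specialization of the increment formula to this system:
#   d(eta)/ds(s, x, u) = phi_bar * (u - u_ref(s)) = u_ref(s) - u,
#   whose max over u in [0, 1] is u_ref(s); hence
#   d(eta_bar)/ds along (x_hat, u_hat) = -u_hat(s), and
#   eta_bar(0, x0) = integral of u_ref over [0, S].
lhs = integral(u_hat) - integral(u_ref)   # I(gamma_hat) - I(gamma_ref)
rhs = -integral(-u_hat) - integral(u_ref) # increment-formula right-hand side
```

The negative increment certifies that the reference process is discarded by the feedback-generated one.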


Cite this article

Staritsyn, M., Sorokin, S. On feedback strengthening of the maximum principle for measure differential equations. J Glob Optim 76, 587–612 (2020). https://doi.org/10.1007/s10898-018-00732-3
