
A limited-memory BFGS-based differential evolution algorithm for optimal control of nonlinear systems with mixed control variables and probability constraints

  • Original Paper
  • Published in Numerical Algorithms

Abstract

In this paper, we consider an optimal control problem of nonlinear systems with mixed control variables and probability constraints. To obtain a numerical solution, our aim is to formulate this problem as a constrained nonlinear parameter optimization problem (CNPOP), which can be solved by any gradient-based numerical method. Firstly, binary functions are introduced for each value of the discrete-valued control variable (DCV). These binary functions are then relaxed, and a penalty term is imposed on the relaxation so that the solution of the resulting relaxed problem (RP) converges to the solution of the original problem as the penalty parameter increases. Secondly, we introduce a simple initial transformation for the probability constraints. An adaptive sample approximation method (ASAM) and a novel smooth approximation technique (NSAT) are then adopted to formulate the probability constraints as deterministic constraints. Thirdly, a control parameterization approach (CPA) is used to transform the deterministic problem (an infinite-dimensional problem) into a finite-dimensional CNPOP. Fourthly, to combine the advantages of limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithms and differential evolution (DE) algorithms, an L-BFGS-based DE (L-BFGS-DE) algorithm is proposed for solving the resulting approximate problem, based on an improved L-BFGS (IL-BFGS) method and an improved DE (IDE) algorithm. We then establish the convergence of the L-BFGS-DE algorithm. The algorithm consists of two stages: the first stage locates a probable position of the global solution, and the second accelerates the convergence rate. In the IL-BFGS method, we propose a novel updating rule (NUR) that uses not only the gradient of the objective function but also its value, which improves the performance of the IL-BFGS method. In the IDE algorithm, a novel adaptive parameter adjustment (NAPA) method, a novel population size decrease (NPSD) strategy, and an improved mutation (IM) scheme are proposed to improve its performance. Finally, an anti-cancer drug therapy problem (ADTP) is extended to illustrate the effectiveness of the L-BFGS-DE algorithm by taking into account some probability constraints. Numerical results show that the L-BFGS-DE algorithm performs well and remains stable and robust under small noise perturbations in the initial state.
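The two-stage structure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' L-BFGS-DE: it pairs a classic DE/rand/1/bin stage with a plain L-BFGS refinement (two-loop recursion plus Armijo backtracking) on a user-supplied smooth objective; the NUR, NAPA, NPSD, and IM components of the paper are omitted, and all function names are ours.

```python
import math
import random

def de_stage(f, bounds, pop_size=20, gens=60, F=0.6, CR=0.9, seed=0):
    """Stage 1 (sketch): classic DE/rand/1/bin global search over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)  # force at least one mutated component
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]), bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jr) else pop[i][j]
                for j in range(dim)
            ]
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return min(zip(fit, pop))[1]

def lbfgs_stage(f, grad, x0, m=5, iters=100, tol=1e-10):
    """Stage 2 (sketch): limited-memory BFGS with the standard two-loop
    recursion and an Armijo backtracking line search."""
    x, g = list(x0), grad(x0)
    s_hist, y_hist = [], []
    for _ in range(iters):
        # two-loop recursion: d = -H_k * g using the last m (s, y) pairs
        q, stack = list(g), []
        for s, y in zip(reversed(s_hist), reversed(y_hist)):
            rho = 1.0 / sum(yi * si for yi, si in zip(y, s))
            a = rho * sum(si * qi for si, qi in zip(s, q))
            stack.append((a, rho, s, y))
            q = [qi - a * yi for qi, yi in zip(q, y)]
        if s_hist:  # initial Hessian scaling gamma = s.y / y.y
            s, y = s_hist[-1], y_hist[-1]
            gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
            q = [gamma * qi for qi in q]
        for a, rho, s, y in reversed(stack):
            beta = rho * sum(yi * qi for yi, qi in zip(y, q))
            q = [qi + (a - beta) * si for si, qi in zip(s, q)]
        d = [-qi for qi in q]
        # Armijo backtracking line search
        fx = f(x)
        gTd = sum(gi * di for gi, di in zip(g, d))
        t = 1.0
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gTd and t > 1e-12:
            t *= 0.5
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        s = [u - v for u, v in zip(x_new, x)]
        y = [u - v for u, v in zip(g_new, g)]
        if sum(si * yi for si, yi in zip(s, y)) > 1e-12:  # keep curvature pairs only
            s_hist.append(s); y_hist.append(y)
            if len(s_hist) > m:
                s_hist.pop(0); y_hist.pop(0)
        x, g = x_new, g_new
        if sum(gi * gi for gi in g) < tol:
            break
    return x
```

For example, minimizing the toy objective f(x) = (x₁ − 1)² + 10(x₂ + 2)² over [−5, 5]² with `de_stage` followed by `lbfgs_stage` recovers the minimizer (1, −2) to high accuracy.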


Data availability

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

References

  1. Sun, T., Sun, X., Wang, X., Wang, L.: A novel multidimensional penalty-free approach for constrained optimal control of switched control systems. Int. J. Robust Nonlinear Control 31, 582–608 (2021)
  2. Xiao, M., Li, Y., Tong, S.: Adaptive fuzzy output feedback inverse optimal control for vehicle active suspension systems. Neurocomputing 403, 257–267 (2020)
  3. Zhang, Q., Zhao, T., Zhang, Z.: Unfitted finite element for optimal control problem of the temperature in composite media with contact resistance. Numer. Algorithms 84, 165–180 (2020)
  4. Li, R.X., Zhang, G.F., Liang, Z.Z.: Fast solver of optimal control problems constrained by Ohta-Kawasaki equations. Numer. Algorithms 85, 787–809 (2020)
  5. Lin, X., Chen, Y., Huang, Y.: A posteriori error estimates of hp spectral element methods for optimal control problems with L2-norm state constraint. Numer. Algorithms 83, 1145–1169 (2020)
  6. Tauchnitz, N.: The Pontryagin maximum principle for nonlinear optimal control problems with infinite horizon. J. Optim. Theory Appl. 167, 27–48 (2015)
  7. Wang, G.: Pontryagin maximum principle of optimal control governed by fluid dynamic systems with two point boundary state constraint. Nonlinear Anal. Theory Methods Appl. 51, 509–536 (2002)
  8. Sun, T., Sun, X.M.: An adaptive dynamic programming scheme for nonlinear optimal control with unknown dynamics and its application to turbofan engines. IEEE Trans. Ind. Inform. 17, 367–376 (2021)
  9. Mu, C., Wang, D., He, H.: Data-driven finite-horizon approximate optimal control for discrete-time nonlinear systems using iterative HDP approach. IEEE Trans. Syst. Man Cybern. 48, 2948–2961 (2018)
  10. Xiao, L., Liu, X., He, S.: An adaptive pseudospectral method for constrained dynamic optimization problems in chemical engineering. Chem. Eng. Technol. 39, 1884–1894 (2016)
  11. Wu, X., Hou, Y., Zhang, K., Cheng, M.: Dynamic optimization of 1,3-propanediol fermentation process: a switched dynamical system approach. Chin. J. Chem. Eng. (2021). https://doi.org/10.1016/J.CJCHE.2021.03.041
  12. Mu, C., Wang, D., He, H.: Novel iterative neural dynamic programming for data-based approximate optimal control design. Automatica 81, 240–252 (2017)
  13. Hartl, R.F., Sethi, S.P., Vickson, R.G.: A survey of the maximum principles for optimal control problems with state constraints. SIAM Rev. 37, 181–218 (1995)
  14. Liu, P., Liu, X., Wang, P., Li, G., Xiao, L., Yan, J., Ren, Z.: Control variable parameterisation with penalty approach for hypersonic vehicle reentry optimisation. Int. J. Control 92, 2015–2024 (2019)
  15. Wu, X., Zhang, K., Xin, X., Cheng, M.: Fuel-optimal control for soft lunar landing based on a quadratic regularization approach. Eur. J. Control 49, 84–93 (2019)
  16. Liu, P., Li, X., Liu, X., Hu, Y.: An improved smoothing technique-based control vector parameterization method for optimal control problems with inequality path constraints. Optim. Control Appl. Methods 38, 586–600 (2017)
  17. Wu, X., Zhang, K., Cheng, M., Xin, X.: A switched dynamical system approach towards the economic dispatch of renewable hybrid power systems. Int. J. Electr. Power Energy Syst. 103, 440–457 (2018)
  18. Howlett, P.: Optimal strategies for the control of a train. Automatica 32, 519–532 (1996)
  19. Wu, X., Zhang, K., Cheng, M.: Adaptive numerical approach for optimal control of a single train. J. Syst. Sci. Complex. 32, 1053–1071 (2019)
  20. Chen, T., Ren, Z., Lin, G., Wu, Z., Ye, B.: Real-time computational optimal control of an MHD flow system with parameter uncertainty quantification. J. Franklin Inst. 357, 2830–2850 (2020)
  21. Wu, X., Zhang, K., Cheng, M.: Computational method for optimal machine scheduling problem with maintenance and production. Int. J. Prod. Res. 55, 1791–1814 (2017)
  22. Byrd, R.H., Nocedal, J., Schnabel, R.B.: Representations of quasi-Newton matrices and their use in limited memory methods. Math. Program. 63, 129–156 (1994)
  23. Zheng, W., Bo, P., Liu, Y., Wang, W.: Fast B-spline curve fitting by L-BFGS. Comput. Aided Geom. Design 29, 448–462 (2012)
  24. Berkani, M.S., Giurgea, S., Espanet, C., Coulomb, J.L., Kieffer, C.: Study on optimal design based on direct coupling between a FEM simulation model and L-BFGS-B algorithm. IEEE Trans. Magn. 49, 2149–2152 (2013)
  25. Nocedal, J.: Updating quasi-Newton matrices with limited storage. Math. Comput. 35, 773–782 (1980)
  26. Lu, L., Wang, K., Tan, H., Li, Q.: Three-dimensional magnetotelluric inversion using L-BFGS. Acta Geophys. 68, 1049–1066 (2020)
  27. Badem, H., Basturk, A., Caliskan, A., Yuksel, M.E.: A new hybrid optimization method combining artificial bee colony and limited-memory BFGS algorithms for efficient numerical optimization. Appl. Soft Comput. 70, 826–844 (2018)
  28. Badem, H., Basturk, A., Caliskan, A., Yuksel, M.E.: A new efficient training strategy for deep neural networks by hybridization of artificial bee colony and limited-memory BFGS optimization algorithms. Neurocomputing 266, 506–526 (2017)
  29. Lin, H., Gao, Y., Wang, Y.: A continuously differentiable filled function method for global optimization. Numer. Algorithms 66, 511–523 (2014)
  30. Aslimani, N., Ellaia, R.: A new chaos optimization algorithm based on symmetrization and levelling approaches for global optimization. Numer. Algorithms 79, 1021–1047 (2018)
  31. Tsai, J.T., Liu, T.K., Chou, J.H.: Hybrid Taguchi-genetic algorithm for global numerical optimization. IEEE Trans. Evolut. Comput. 8, 365–377 (2004)
  32. Jones, A.E.W., Forbes, G.W.: An adaptive simulated annealing algorithm for global optimization over continuous variables. J. Global Optim. 6, 1–37 (1995)
  33. Socha, K., Dorigo, M.: Ant colony optimization for continuous domains. Eur. J. Oper. Res. 185, 1155–1173 (2008)
  34. Bala, I., Yadav, A.: Comprehensive learning gravitational search algorithm for global optimization of multimodal functions. Neural Comput. Appl. 32, 7347–7382 (2020)
  35. Gupta, S., Deep, K.: Hybrid sine cosine artificial bee colony algorithm for global optimization and image segmentation. Neural Comput. Appl. 32, 9521–9543 (2020)
  36. Heidari, A.A., Aljarah, I., Faris, H., Chen, H., Luo, J., Mirjalili, S.: An enhanced associative learning-based exploratory whale optimizer for global optimization. Neural Comput. Appl. 32, 5185–5211 (2020)
  37. Mohammed, H.M., Rashid, T.A.: A novel hybrid GWO with WOA for global numerical optimization and solving pressure vessel design. Neural Comput. Appl. 32, 14701–14718 (2020)
  38. Li, P.Y.: Sample average approximation method for a class of stochastic generalized Nash equilibrium problems. J. Comput. Appl. Math. 261, 387–393 (2014)
  39. Teo, K.L., Goh, C.J., Wong, K.H.: A Unified Computational Approach to Optimal Control Problems. Longman Scientific and Technical, Essex (1991)
  40. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (2006)
  41. Wu, X., Zhang, K., Sun, C.: Numerical algorithm for a class of constrained optimal control problems of switched systems. Numer. Algorithms 67, 771–792 (2014)
  42. Baker, J.E.: Reducing bias and inefficiency in the selection algorithm. In: Proceedings of the Second International Conference on Genetic Algorithms, pp. 14–21 (1987)
  43. Storn, R., Price, K.: Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11, 341–359 (1997)
  44. Martin, R.B.: Optimal control drug scheduling of cancer chemotherapy. Automatica 28, 1113–1123 (1992)
  45. Bellman, R.E.: Mathematical Methods in Medicine. World Scientific, Singapore (1983)
  46. Brunton, G.F., Wheldon, T.E.: The Gompertz equation and the construction of tumour growth curves. Cell Prolif. 13, 455–460 (1980)
  47. Hellman, S., DeVita, V.T., Rosenberg, S.A.: Cancer: Principles and Practice of Oncology. Lippincott-Raven, Philadelphia (2001)
  48. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45, 503–528 (1989)
  49. Storn, R., Price, K.: Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11, 341–359 (1997)
  50. Arellano-Garcia, H., Wozny, G.: Chance constrained optimization of process systems under uncertainty: I. Strict monotonicity. Comput. Chem. Eng. 33, 1568–1583 (2009)
  51. Caillau, J.B., Cerf, M., Sassi, A., Trélat, E., Zidani, H.: Solving chance constrained optimal control problems in aerospace via kernel density estimation. Optim. Control Appl. Methods 39, 1833–1858 (2018)
  52. Paulson, J.A., Mesbah, A.: An efficient method for stochastic optimal control with joint chance constraints for nonlinear systems. Int. J. Robust Nonlinear Control 29, 5017–5037 (2019)
  53. Kleywegt, A.J., Shapiro, A., Homem-de-Mello, T.: The sample average approximation method for stochastic discrete optimization. SIAM J. Optim. 12, 479–502 (2002)
  54. Pagnoncelli, B.K., Ahmed, S., Shapiro, A.: Sample average approximation method for chance constrained programming: theory and applications. J. Optim. Theory Appl. 142, 399–416 (2009)
  55. Ahmed, S.: Convex relaxations of chance constrained optimization problems. Optim. Lett. 8, 1–12 (2014)
  56. Calfa, B.A., Grossmann, I.E., Agarwal, A., Bury, S.J., Wassick, J.M.: Data-driven individual and joint chance-constrained optimization via kernel smoothing. Comput. Chem. Eng. 78, 51–69 (2015)
  57. Kawai, R.: Acceleration on adaptive importance sampling with sample average approximation. SIAM J. Sci. Comput. 39, A1586–A1615 (2017)
  58. Bollapragada, R., Byrd, R., Nocedal, J.: Adaptive sampling strategies for stochastic optimization. SIAM J. Optim. 28, 3312–3343 (2018)
  59. Pasupathy, R., Song, Y.: Adaptive sequential sample average approximation for solving two-stage stochastic linear programs. SIAM J. Optim. 31, 1017–1048 (2021)
  60. Campi, M.C., Garatti, S.: A sampling-and-discarding approach to chance-constrained optimization: feasibility and optimality. J. Optim. Theory Appl. 148, 257–280 (2011)


Acknowledgements

The authors express their sincere gratitude to Professor Claude Brezinski, the editor, and the anonymous reviewer for their constructive comments in improving the presentation and quality of this manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant Nos. 61963010 and 61563011, and the Special Project for Cultivation of New Academic Talent and Innovation Exploration of Guizhou Normal University in 2019 under Grant No. 11904-0520077.

Author information

Correspondence to Xiang Wu.

Ethics declarations

Conflict of interest

The authors declare no competing interests.


Appendix A: Theorems for Section 3.5

For simplicity of notation, let \(\tilde u\left (t \right ) = \left [ {\left (u\left (t \right )\right )^{\mathrm {T}}, \left (z\left (t \right )\right )^{\mathrm {T}}} \right ]^{\mathrm {T}}\). Then, Problem (3.16) can be written equivalently as follows:

$$\begin{array}{@{}rcl@{}} && \min\limits_{\tilde u\left(t \right)} \quad J_{\text{relax}} = \hat \varphi_{0} \left({x\left({t_{f} } \right)} \right) + {\int}_{t_{0} }^{t_{f} } {\hat L_{0} \left({x\left(t \right),\tilde u\left(t \right)} \right)dt} \end{array}$$
(A.1a)
$$\begin{array}{@{}rcl@{}} && s.t.\quad \dot x\left(t \right) = f\left({x\left(t \right),\tilde u\left(t \right)} \right) \end{array}$$
(A.1b)
$$\begin{array}{@{}rcl@{}} && \quad \quad x\left({t_{0} } \right) = x_{0} \end{array}$$
(A.1c)
$$\begin{array}{@{}rcl@{}} && \quad \quad \bar H_{j} \left({\tilde u\left(t \right)} \right) = \hat \varphi_{j} \left({x\left({t_{f} } \right)} \right) + {\int}_{t_{0} }^{t_{f} } {\hat L_{j} \left({x\left(t \right),\tilde u\left(t \right)} \right)dt} = 0,\quad j = 1, {\cdots} ,M_{2} + 1 \end{array}$$
(A.1d)

where \(\tilde u\left (t \right ) \in {\mathscr{A}}\); \({\mathscr{A}} = U \times \left [ {0,1} \right ]^{r}\); \(\hat \varphi _{0} \left ({x\left ({t_{f} } \right )} \right ) = \varphi \left ({x\left ({t_{f} } \right )} \right )\); \(\hat L_{0} \left ({x\left (t \right ),\tilde u\left (t \right )} \right ) = \bar L\left ({x\left (t \right ),u\left (t \right ), v\left (t \right )} \right )\); \(\hat \varphi _{j} \left ({x\left ({t_{f} } \right )} \right ) = 0\), j = 1,⋯,M2 + 1; \(\hat L_{j} \left ({x\left (t \right ),\tilde u\left (t \right )} \right ) = \left ({{\min \limits } \left \{ {\hat p_{j} \left ({u\left (t \right )} \right ) - 1 + \epsilon _{j},0} \right \}} \right )^{2}\), j = 1,⋯,M2; and \(\hat L_{M_{2} + 1} \left ({x\left (t \right ),\tilde u\left (t \right )} \right ) = \left ({\sum \limits _{q = 1}^{r} {z_{q} \left (t \right )} - 1} \right )^{2}\).
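Pointwise, the augmented running costs defined above are straightforward to evaluate. A minimal sketch (the function names are ours; `p_hat_j` stands for the approximated probability \(\hat p_j(u(t))\) and `eps_j` for \(\epsilon_j\)):

```python
def L_hat_simplex_penalty(z):
    """Sketch of L_hat_{M2+1}: penalizes violation of sum_q z_q(t) = 1
    for the relaxed binary functions z_q."""
    return (sum(z) - 1.0) ** 2

def L_hat_chance(p_hat_j, eps_j):
    """Sketch of L_hat_j, j = 1..M2: (min{p_hat_j - 1 + eps_j, 0})^2,
    which vanishes exactly when the approximated chance constraint holds."""
    return min(p_hat_j - 1.0 + eps_j, 0.0) ** 2
```

Both terms are zero on feasible points and strictly positive otherwise, which is what allows them to enter Problem (A.1) as equality constraints \(\bar H_j = 0\).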

Note that the approximation (3.17) and \(z_{q} \left (t \right ) = \sum \limits _{i = 1}^{M_{3} } {\theta _{qi} \chi _{i} \left (t \right )}\), \(q \in I_{3}\), can be combined as follows:

$$\tilde u \left(t\right) \approx \tilde u^{M_{3} } \left(t \right) = \sum\limits_{i = 1}^{M_{3} } {\tilde \theta_{i} \chi_{i} \left(t \right)},$$
(A.2)

where \(\tilde \theta _{i} = \left [ {\left (b_{i}\right )^{\mathrm {T}} , \left (\theta _{i}\right )^{\mathrm {T}} } \right ]^{\mathrm {T}}\), i = 1,⋯,M3. Let \(\tilde \theta ^{M_{3} } = \left [ {\left (\tilde \theta _{1}\right )^{\mathrm {T}} , {\cdots } , \left (\tilde \theta _{M_{3} }\right )^{\mathrm {T}} } \right ]^{\mathrm {T}}\). Then, Problem (3.18) can be written equivalently as follows:

$$\begin{array}{@{}rcl@{}} && \min\limits_{\tilde \theta^{M_{3} } } \quad \tilde J = \hat \varphi_{0} \left({x\left({t_{f} } \right)} \right) + {\int}_{t_{0} }^{t_{f} } {\mathscr{L}_{0} \left({x\left(t \right),\tilde \theta^{M_{3} } } \right)dt} \end{array}$$
(A.3a)
$$\begin{array}{@{}rcl@{}} && s.t.\quad \dot x\left(t \right) = \tilde f\left({x\left(t \right),\tilde \theta^{M_{3} } } \right) \end{array}$$
(A.3b)
$$\begin{array}{@{}rcl@{}} && \quad \quad x\left({t_{0} } \right) = x_{0} \end{array}$$
(A.3c)
$$\begin{array}{@{}rcl@{}} && \quad \quad \tilde H_{j} \left({\tilde \theta^{M_{3} } } \right) = \hat \varphi_{j} \left({x\left({t_{f} } \right)} \right) + {\int}_{t_{0} }^{t_{f} } {\mathscr{L}_{j} \left({x\left(t \right),\tilde \theta^{M_{3} } } \right)dt} = 0,\quad j = 1, {\cdots} ,M_{2} + 1 \end{array}$$
(A.3d)

where \(\tilde \theta ^{M_{3} } \in \hat U\), \(\hat U = \underbrace {\tilde U \times {\cdots } \times \tilde U}_{M_{3} } \hfill\), \(\tilde U = U \times \left [ {0,1} \right ]^{r}\); \(\tilde f\left ({x\left (t \right ),\tilde \theta ^{M_{3} } } \right ) = f\left ({x\left (t \right ),\sum \limits _{i = 1}^{M_{3} } {\tilde \theta _{i} \chi _{i} \left (t \right )} } \right )\); and \({\mathscr{L}}_{j} \left ({x\left (t \right ),\tilde \theta ^{M_{3} } } \right ) = \hat L_{j} \left ({x\left (t \right ),\sum \limits _{i = 1}^{M_{3} } {\tilde \theta _{i} \chi _{i} \left (t \right )} } \right )\), j = 0, 1,⋯ ,M2 + 1.
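The parameterization (A.2) replaces the control with a piecewise-constant function determined by the parameter vector \(\tilde \theta^{M_3}\). A minimal evaluation sketch, assuming a uniform partition and a scalar control (in the paper the partition \(\tau_0 < \cdots < \tau_{M_3}\) need not be uniform, and each \(\tilde \theta_i\) is the stacked vector \([(b_i)^{\mathrm T}, (\theta_i)^{\mathrm T}]^{\mathrm T}\)):

```python
def piecewise_control(theta, t0, tf):
    """Return u^{M3}(t) = sum_i theta_i * chi_i(t), where chi_i is the
    indicator of the i-th subinterval of a uniform partition of [t0, tf]
    into M3 = len(theta) pieces."""
    M3 = len(theta)
    h = (tf - t0) / M3
    def u(t):
        # locate the subinterval containing t; clamp t = tf into the last one
        i = min(int((t - t0) / h), M3 - 1)
        return theta[i]
    return u
```

For instance, `piecewise_control([1.0, 3.0, 5.0], 0.0, 3.0)` yields a control equal to 1 on [0, 1), 3 on [1, 2), and 5 on [2, 3].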

Note that Problem (A.1) is a canonical optimal control problem of the form described in Chapter 6 of [39]. We can therefore establish the following four lemmas:

Lemma A.1

For any \(\tilde u\left (t\right ) \in {\mathscr{B}}\), let \(\tilde u^{M_{3} } \left (t \right )\) be defined by

$$\tilde u^{M_{3} } \left(t \right) = \sum\limits_{i = 1}^{M_{3} } {\tilde \theta_{i} \chi_{i} \left(t \right)}$$
(A.4)

where

$$\tilde \theta_{i} = \frac{1}{{\rvert {\tau_{i} - \tau_{i - 1} } \rvert}}{\int}_{\tau_{i - 1} }^{\tau_{i} } {\tilde u\left(t \right)dt}.$$

Then, \(\tilde u^{M_{3} } \left (t \right )\) converges to \(\tilde u\left (t \right )\) a.e. on \(\left [t_{0}, t_{f}\right ]\) and \(\lim \limits _{M_{3} \to + \infty } {\int \limits }_{t_{0} }^{t_{f} } {\left \| {\tilde u^{M_{3} } \left (t \right ) - \tilde u\left (t \right )} \right \|dt} = 0\), where \({\mathscr{B}}\) is the set of all \(\tilde u\left (t\right ) \in {\mathscr{A}}\).

Proof

The proof is similar to that given for Lemma 6.4.1 in [39].
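Lemma A.1's construction (replace \(\tilde u\) on each subinterval by its average \(\tilde \theta_i\)) can be illustrated numerically. A sketch assuming a uniform partition and a scalar control, with the integrals approximated by midpoint-rule quadrature; the function name is ours:

```python
import math

def l1_error(u, M3, t0=0.0, tf=math.pi, n_quad=200):
    """Approximate the L1 distance between u and its interval-average
    approximation u^{M3} from Lemma A.1, on a uniform partition of
    [t0, tf] into M3 subintervals, via midpoint Riemann sums."""
    h = (tf - t0) / M3
    err = 0.0
    for i in range(M3):
        a = t0 + i * h
        dt = h / n_quad
        # theta_i = (1/h) * integral over [a, a+h] of u(t) dt
        theta_i = sum(u(a + (k + 0.5) * dt) for k in range(n_quad)) * dt / h
        # accumulate integral of |u(t) - theta_i| over the subinterval
        err += sum(abs(u(a + (k + 0.5) * dt) - theta_i) for k in range(n_quad)) * dt
    return err
```

Refining the partition shrinks the error, e.g. `l1_error(math.sin, 40)` is smaller than `l1_error(math.sin, 10)`, consistent with the convergence stated in the lemma.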

Lemma A.2

Suppose that \(\left \{ {\tilde u^{M_{3} } \left (t \right )} \right \}_{M_{3} = 1}^{+ \infty }\) is a bounded function sequence in \(L_{\infty }\). Then, the corresponding solution sequence \(\left \{ {x\left ({ \cdot \rvert \tilde u^{M_{3} } \left (t \right )} \right )} \right \}_{M_{3} = 1}^{+ \infty }\) of the ODE (A.3b) with the initial condition (A.3c) is also bounded in \(L_{\infty }\).

Proof

The proof is similar to that given for Lemma 6.4.2 in [39].

Lemma A.3

Suppose that \(\left \{ {\tilde u^{M_{3} } \left (t \right )} \right \}_{M_{3} = 1}^{+ \infty }\) is a bounded function sequence in \(L_{\infty }\) and \(\tilde u^{M_{3} } \left (t \right )\) converges to \(\tilde u\left (t\right )\) a.e. on \(\left [t_{0}, t_{f}\right ]\). Then, for any \(t \in \left [t_{0}, t_{f}\right ]\), we have

$$\lim\limits_{M_{3} \to + \infty } \left\| {x\left({ t \rvert\tilde u^{M_{3} } \left(t \right)} \right) - x\left({ t \rvert\tilde u\left(t \right)} \right)} \right\| = 0,$$

where \({x\left ({t \rvert \tilde u^{M_{3} } \left (t \right )} \right )}\) is the solution of the ODE (A.3b) with the initial condition (A.3c) and \({x\left ({t \rvert \tilde u\left (t \right )} \right )}\) is the solution of the ODE (A.1b) with the initial condition (A.1c).

Proof

The proof is similar to that given for Lemma 6.4.3 in [39].

Lemma A.4

Suppose that \(\left \{ {\tilde u^{M_{3} } \left (t \right )} \right \}_{M_{3} = 1}^{+ \infty }\) is a bounded function sequence in \(L_{\infty }\) and \(\tilde u^{M_{3} } \left (t \right )\) converges to \(\tilde u\left (t\right )\) a.e. on \(\left [t_{0}, t_{f}\right ]\). Then, we have

$$\lim\limits_{M_{3} \to + \infty } J_{\text{relax}} \left({\tilde u^{M_{3} } \left(t \right)} \right) = J_{\text{relax}} \left({\tilde u\left(t \right)} \right).$$

Proof

The proof is similar to that given for Lemma 6.4.4 in [39].

Suppose that the following conditions are satisfied:

Assumption A.1

The functions f and \(\hat L_{j}\), j = 0, 1,⋯ ,M2 + 1, and their partial derivatives with respect to each component of \(x\left (t\right )\) and \(\tilde u\left (t\right )\) are piecewise continuous on \(\left [t_{0}, t_{f}\right ]\) for each \(\left (x\left (t\right ), \tilde u\left (t\right )\right ) \in \mathbb {R}^{n} \times {\mathscr{R}}^{r}\) and continuous on \(\mathbb {R}^{n} \times {\mathscr{R}}^{r}\) for each \(t \in \left [t_{0}, t_{f}\right ]\), where \({\mathscr{R}}^{r} = \mathbb {R}^{r} \times \mathbb {R}^{r}\).

Assumption A.2

The functions \(\hat \varphi _{j}\), j = 0, 1,⋯ ,M2 + 1 are continuously differentiable with respect to \(x\left (t\right )\).

Let \({\mathscr{F}}^{M_{3}}\) be the set of all \(\tilde \theta ^{M_{3} } \in \hat U\). Then, we introduce the following definition:

Definition A.1

\(\tilde \theta ^{M_{3} } \in {\mathscr{F}}^{M_{3}}\) is said to be \(\tilde \varepsilon\)-tolerated feasible if it satisfies the following conditions:

$$- \tilde \varepsilon \leqslant \tilde H_{j} \left({\tilde \theta^{M_{3} } } \right) \leqslant \tilde \varepsilon ,\quad j = 1, {\cdots} ,M_{2} + 1.$$
(A.5)

Let \({\mathscr{B}}^{M_{3}}\) be the subset of \({\mathscr{F}}^{M_{3}}\) such that the equalities described by (A.3d) are satisfied. Furthermore, let \({\mathscr{B}}^{M_{3}, \tilde \varepsilon }\) be the subset of \({\mathscr{F}}^{M_{3}}\) such that the inequalities described by (A.5) are satisfied. Clearly, \({\mathscr{B}}^{M_{3}} \subset {\mathscr{B}}^{M_{3}, \tilde \varepsilon }\).
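Definition A.1 reduces to a componentwise interval check on the equality residuals \(\tilde H_j\). A trivial sketch (function name ours):

```python
def is_tolerated_feasible(H_residuals, eps):
    """Definition A.1 (sketch): theta^{M3} is eps-tolerated feasible iff
    every equality residual H_j lies in the interval [-eps, eps]."""
    return all(-eps <= h <= eps for h in H_residuals)
```

With `eps = 0` this recovers exact feasibility, i.e. membership in \({\mathscr{B}}^{M_3}\); with `eps > 0` it describes the enlarged set \({\mathscr{B}}^{M_3, \tilde\varepsilon}\).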

Then, the \(\tilde \varepsilon\)-tolerated version of Problem (A.3) can be stated as follows:

Choose a \(\tilde \theta ^{M_{3} } \in {\mathscr{B}}^{M_{3}, \tilde \varepsilon }\) such that the objective function (A.3a) is minimized subject to the ODE (A.3b) with the initial condition (A.3c).

For convenience, this problem is referred to as Problem (\(P_{\tilde \varepsilon }\)). Furthermore, the following assumption can be introduced:

Assumption A.3

There exists an integer \({\mathscr{M}}_{0}\) such that

$$\lim\limits_{\tilde \varepsilon \to 0 } \tilde J \left({\tilde u^{M_{3} ,\tilde \varepsilon , * } } \right) = \tilde J \left({\tilde u^{M_{3} , * } } \right),$$

uniformly with respect to \(M_{3} \ge {\mathscr{M}}_{0}\), where \({\tilde u^{M_{3} ,\tilde \varepsilon , * } }\) and \({\tilde u^{M_{3} , * } }\) are the optimal solutions of Problems (\(P_{\tilde \varepsilon }\)) and (A.3), respectively.

Now, the following two theorems establish the relationship between Problems (A.1) and (A.3):

Theorem A.1

Let \({\tilde u^{M_{3} , * } }\) and \(\tilde u^{*}\) be the optimal solutions of Problems (A.3) and (A.1), respectively. Then, we have

$$\lim\limits_{M_{3} \to + \infty } J_{\text{relax}} \left({\tilde u^{M_{3} , * } } \right) = J_{\text{relax}} \left({\tilde u^ * } \right).$$

Proof

Let \({\tilde u^{M_{3} ,\tilde \varepsilon , * } }\) be the optimal solution of Problem (\(P_{\tilde \varepsilon }\)). Then, by using Assumption A.3, it follows that for any \(\mathcal {N} > 0\), there exists an \(\varepsilon_{0} > 0\) such that

$$\tilde J_{\text{relax}} \left({\tilde u^{M_{3} ,\tilde \varepsilon , * } } \right) > \tilde J_{\text{relax}} \left({\tilde u^{M_{3} , * } } \right) - \mathcal{N}$$
(A.6)

for any \(\tilde \varepsilon \in \left (0, \varepsilon _{0}\right )\) and \(M_{3} \in \left ({\mathscr{M}}_{0}, +\infty \right )\).

Let \({\tilde u^{*,M_{3}} }\) be defined from \(\tilde u^{*}\) by using Equality (A.4). Then, for any \(\tilde \varepsilon \in \left (0, \varepsilon _{0}\right )\), by using Lemmas A.1 and A.3 and Assumptions A.1 and A.2, it follows that there exists an integer \({\mathscr{M}}_{1} > {\mathscr{M}}_{0}\) such that \({\tilde u^{*,M_{3}} } \in {\mathscr{B}}^{M_{3},\tilde \varepsilon }\) for all \(M_{3} > {\mathscr{M}}_{1}\). This implies that

$$\tilde J_{\text{relax}} \left({\tilde u^{M_{3} ,\tilde \varepsilon , * } } \right) \le \tilde J_{\text{relax}} \left({\tilde u^{*,M_{3} } } \right),$$
(A.7)

for all \(M_{3} > {\mathscr{M}}_{1}\).

By using Inequalities (A.6) and (A.7), we have

$$\tilde J_{\text{relax}} \left({\tilde u^{*,M_{3} } } \right) > \tilde J_{\text{relax}} \left({\tilde u^{M_{3} , * } } \right) - \mathcal{N},$$
(A.8)

for all \(M_{3} > {\mathscr{M}}_{1}\).

In addition, by using Lemmas A.1 and A.4, we obtain

$$\lim\limits_{M_{3} \to + \infty } \tilde J_{\text{relax}} \left({\tilde u^{*,M_{3} } } \right) = \tilde J_{\text{relax}} \left({\tilde u^ * } \right).$$
(A.9)

Then, from Inequality (A.8) and Equality (A.9), we have

$$\mathcal{N} + \tilde J_{\text{relax}} \left({\tilde u^{*} } \right) \ge \lim\limits_{M_{3} \to + \infty } \tilde J_{\text{relax}} \left({\tilde u^{M_{3} , * } } \right).$$
(A.10)

Thus, from Inequality (A.10), it follows that

$$\lim\limits_{M_{3} \to + \infty } \tilde J_{\text{relax}} \left({\tilde u^{M_{3} , * } } \right) = \tilde J_{\text{relax}} \left({\tilde u^{*} } \right),$$

because \(\mathcal {N} > 0\) is arbitrary and \(\tilde u^{*}\) is the optimal solution of Problem (A.1).

Theorem A.2

Let \({\tilde u^{M_{3} , * } }\) and \(\tilde u^{*}\) be the optimal solutions of Problems (A.3) and (A.1), respectively. Suppose that \({\tilde u^{M_{3} , * } }\) converges to \(\hat u\) a.e. on \(\left [t_{0}, t_{f}\right ]\). Then, \(\hat u\) is also the optimal solution of Problem (A.1).

Proof

Note that \({\tilde u^{M_{3} , * } }\) converges to \(\hat u\) a.e. on \(\left [t_{0}, t_{f}\right ]\). Then, by using Lemma A.4, we obtain

$$\lim\limits_{M_{3} \to + \infty } \tilde J_{\text{relax}} \left({\tilde u^{M_{3} , * } } \right) = \tilde J_{\text{relax}} \left({\hat u} \right).$$
(A.11)

Furthermore, by using Lemma A.3 and Assumptions A.1 and A.2, it follows that \(\hat u\) is feasible for Problem (A.1). In addition, from Theorem A.1, we obtain

$$\lim\limits_{M_{3} \to + \infty } \tilde J_{\text{relax}} \left({\tilde u^{M_{3} , * } } \right) = \tilde J_{\text{relax}} \left({\tilde u^ * } \right).$$
(A.12)

Thus, by using Equalities (A.11) and (A.12) together with the feasibility of \(\hat u\), it follows that \(\hat u\) is also an optimal solution of Problem (A.1).


Cite this article

Wu, X., Zhang, K. A limited-memory BFGS-based differential evolution algorithm for optimal control of nonlinear systems with mixed control variables and probability constraints. Numer Algor 93, 493–542 (2023). https://doi.org/10.1007/s11075-022-01425-5
