Skip to main content
Log in

Fractional-order optimal control model for the equipment management optimization problem with preventive maintenance

  • Original Article

Abstract

The current quality state of most machinery and equipment depends on its accumulated history, yet the influence of past quality states on the current state of equipment is often overlooked in optimization management. This paper uses a Caputo-type fractional derivative to characterize this property. By refining the nature and characteristics of the equipment maintenance effect function and accounting for the memory characteristics of equipment quality, the existing model is improved, and a fractional-order optimal control model for equipment maintenance and replacement is constructed. Theoretical analyses verify the effectiveness of the fractional-order equipment maintenance management model. Furthermore, the results of numerical experiments reflect the difference between the integer-order and fractional-order equipment maintenance management models. The results show that as the order \(\alpha\) increases and the memory effect correspondingly weakens, the optimal objective value of the equipment maintenance management problem also increases.
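
As a rough, self-contained illustration of the memory property that motivates the model (this sketch is ours and not part of the paper; the step size, the test trajectory \(y(t)=t^2\), and the order \(\alpha =0.7\) are illustrative assumptions), the Caputo derivative of order \(\alpha \in (0,1)\) can be approximated with the standard L1 scheme, in which every past increment of the state enters every later value of the derivative:

```python
# Illustrative sketch (not from the paper): L1 discretization of a Caputo
# derivative of order alpha in (0, 1), showing how the whole history of the
# trajectory is weighted into each new value (the "memory" effect).
import math

def caputo_l1(y, dt, alpha):
    """Approximate the Caputo derivative of the samples y[0..n] on a uniform grid."""
    c = dt ** (-alpha) / math.gamma(2.0 - alpha)
    out = [0.0] * len(y)
    for k in range(1, len(y)):
        acc = 0.0
        for j in range(k):
            w = (j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha)   # memory weights
            acc += w * (y[k - j] - y[k - j - 1])
        out[k] = c * acc
    return out

dt, alpha = 0.01, 0.7
t = [i * dt for i in range(101)]
y = [ti ** 2 for ti in t]                         # illustrative quality trajectory y(t) = t^2
d = caputo_l1(y, dt, alpha)
exact = 2.0 * t[-1] ** (2.0 - alpha) / math.gamma(3.0 - alpha)
print(d[-1], exact)                               # L1 value at t = 1 vs closed form
```

As \(\alpha \rightarrow 1\) the weights concentrate on the most recent increment, so the memory effect weakens and the integer-order description is recovered as a limiting case.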


References

  1. Sethi SP, Thompson GL (2000) Optimal control theory: applications to management science and economics. Kluwer Academic Publishers, Boston

  2. Kamien MI, Schwartz NL (2012) Dynamic optimization: the calculus of variations and optimal control in economics and management. Courier Corporation, USA

  3. Wang Z, Wang Q, Zhang Z et al (2021) A new configuration of autonomous CHP system based on improved version of marine predators algorithm: a case study. Int Trans Electr Energy Syst 31(4):e12806

  4. Tian MW, Yan SR, Han SZ et al (2020) New optimal design for a hybrid solar chimney, solid oxide electrolysis and fuel cell based on improved deer hunting optimization algorithm. J Clean Prod 249:119414

  5. Dya B, Yong W, Hla B et al (2019) System identification of PEM fuel cells using an improved Elman neural network and a new hybrid optimization algorithm. Energy Rep 5:1365–1374

  6. Almeida R, Brito da Cruz AMC, Martins N et al (2019) An epidemiological MSEIR model described by the Caputo fractional derivative. Int J Dyn Control 7(2):76–84

  7. Binshi X (2001) History and development of equipment management. Equipment Manage 1:50–51 (in Chinese)

  8. Degbotse AT, Nachlas JA (2003) Use of nested renewals to model availability under opportunistic maintenance policies. In: Annual Reliability and Maintainability Symposium. IEEE

  9. Hui W, Xiumin F, Junqi Y (2003) Optimizing equal risk preventive maintenance strategy considering opportunity maintenance. Mach Design Res 19(3):51–56 (in Chinese)

  10. Duffuaa SO, Ben-Daya M, Al-Sultan KS et al (2001) A generic conceptual simulation model for maintenance systems. J Qual Maint Eng 7(3):207–219

  11. Thompson GL (1968) Optimal maintenance policy and sale date of a machine. Manage Sci 14:543–550

  12. Kamien MI, Schwartz NL (1971) Optimal maintenance and sale age for a machine subject to failure. Manage Sci 17:427–449

  13. Sethi SP, Morton TE (1972) A mixed optimization technique for the generalized machine replacement problem. Naval Res Logist Q 19:471–481

  14. Tapiero CS (1973) Optimal maintenance and replacement of a sequence of machines and technical obsolescence. Opsearch 19:1–13

  15. Sethi SP, Thompson GL (1977) Christmas toy manufacturers problem: an application of the stochastic maximum principle. Opsearch 14:161–173

  16. Sethi SP, Chand S (1979) Planning horizon procedures in machine replacement models. Manage Sci 25:140–151

  17. Chand S, Sethi SP (1982) Planning horizon procedures for machine replacement models with several possible replacement alternatives. Naval Res Logist Q 29(3):483–493

  18. Mehrez A, Berman N (1994) Maintenance optimal control, three machine replacement model under technological breakthrough expectations. J Optim Theory Appl 81:591–618

  19. Mehrez A, Rabinowitz G, Shemesh E (2000) A discrete maintenance and replacement model under technological breakthrough expectations. Ann Oper Res 99:351–372

  20. Dogramaci A, Fraiman NM (2004) Replacement decisions with maintenance under uncertainty: an imbedded optimal control model. Oper Res 52:785–794

  21. Dogramaci A (2005) Hibernation durations for chain of machines with maintenance under uncertainty. In: Deissenberg C, Hartl RF (eds) Optimal control and dynamic games. Springer, New York, pp 231–238

  22. Zhang R (2007) A limit property of the optimal control strategy for equipment maintenance and update under uncertain conditions. In: Proceedings of the 26th Chinese Control Conference. Technical Committee on Control Theory, Chinese Association of Automation, pp 1376–1380 (in Chinese)

  23. Love CE, Guo R (1996) Utilizing Weibull failure rates in repair limit analysis for equipment replacement/preventive maintenance decisions. J Oper Res Soc 47(11):1366–1376

  24. Marquez AC, Heguedas AS (2002) Models for maintenance optimization: a study for repairable systems and finite time periods. Reliab Eng Syst Saf 75(3):367–377

  25. Monahan GE (1982) Survey of partially observable Markov decision processes: theory, models and algorithms. Manage Sci 28(1):1–16

  26. Duffuaa S, Ben-Daya M, Al-Sultan KS (2001) A generic conceptual simulation model for maintenance systems. J Qual Maint Eng 7(3):207–219

  27. Charles AS, Floru IR, Azzaro-Pantel C et al (2003) Optimization of preventive maintenance strategies in a multipurpose batch plant: application to semiconductor manufacturing. Comput Chem Eng 27(4):449–467

  28. Li GQ, Li JJ (2002) A semi-analytical simulation method for reliability assessments of structural systems. Reliab Eng Syst Saf 78(3):275–281

  29. Ozekici S (1995) Optimal maintenance policies in random environments. Eur J Oper Res 283–294

  30. Levitin G (2004) Reliability and performance analysis for fault-tolerant programs consisting of versions with different characteristics. Reliab Eng Syst Saf 86(1):75–81

  31. Khan FI, Haddara MM (2003) Risk-based maintenance (RBM): a quantitative approach for maintenance/inspection scheduling and planning. J Loss Prev Process Ind 16(6):561–573

  32. Bangjun H, Xiumin F, Dengzhe M (2004) Simulation and optimization of preventive maintenance control strategy for production system equipment. Comput Int Manuf Syst 7:15–19 (in Chinese)

  33. Karamasoukis CC, Kyriakidis EG (2010) Optimal maintenance of two stochastically deteriorating machines with an intermediate buffer. Eur J Oper Res 207(1):297–308

  34. Honggen C (2011) Implementation decision model for equipment maintenance improvement. Syst Eng Theory Practice 31(05):954–960 (in Chinese)

  35. Caesarendra W, Widodo A, Thom PH (2011) Combined probability approach and indirect data-driven method for bearing degradation prognostics. IEEE Trans Reliab 60(1):14–20

  36. Yuzhong Z, Yang S (2013) Research on equipment maintenance cost model of machinery manufacturing enterprise. China Collect Econ 10:150–152 (in Chinese)

  37. Wen Y (2015) Logistics equipment state maintenance model and its robust optimization. Dissertation, Jilin University (in Chinese)

  38. Youtang L, Houjun L (2017) Dynamic preventive maintenance model for deteriorating equipment based on reliability constraints. J Lanzhou Univ Technol 43(05):35–39 (in Chinese)

  39. Zhibin Z (2019) Research on joint decision of equipment maintenance and equipment replacement based on degradation system. Dissertation, South China University of Technology (in Chinese)

  40. Podlubny I (1999) Fractional differential equations. Academic Press, San Diego

  41. Kilbas AA, Srivastava HM, Trujillo JJ (2006) Theory and applications of fractional differential equations. Elsevier B.V., Amsterdam

  42. Miller KS, Ross B (1993) An introduction to the fractional calculus and fractional differential equations. Wiley, New York

  43. Bai Z, Lü H (2005) Positive solutions for boundary value problem of nonlinear fractional differential equation. J Math Anal Appl 311:495–505

  44. Li CP, Deng WH (2007) Remarks on fractional derivatives. Appl Math Comput 187:777–784

  45. Su X, Zhang S (2011) Unbounded solutions to a boundary value problem of fractional order on the half-line. Comput Math Appl 61:1079–1087

  46. Bressan A, Piccoli B (2007) Introduction to the mathematical theory of control. American Institute of Mathematical Sciences Press, New York

  47. Deng H, Wei W (2015) Existence and stability analysis for nonlinear optimal control problems with 1-mean equicontinuous control. J Ind Manage Optim 11:1409–1422

  48. Kamocki R (2014) Pontryagin maximum principle for fractional ordinary optimal control problems. Math Methods Appl Sci 37:1668–1686

  49. Yusun T (2004) Functional analysis course. Fudan University Press, Shanghai (in Chinese)

  50. Wei G (2002) A generalization and application of the Ascoli–Arzela theorem. Syst Sci Math 22:115–122 (in Chinese)

  51. Gongqing Z, Yuanqu L (2005) Lecture notes on functional analysis. Peking University Press, Beijing (in Chinese)

  52. Dixon J, McKee S (1986) Weakly singular discrete Gronwall inequalities. Z Angew Math Mech 66:535–544

  53. Li CP, Wu YJ, Ye RS (2013) Recent advances in applied nonlinear dynamics with numerical analysis: fractional dynamics, network dynamics, classical dynamics and fractal dynamics with their numerical simulations. World Scientific, Singapore

  54. Wang JR, Zhou Y, Feckan M (2012) Nonlinear impulsive problems for fractional differential equations and Ulam stability. Comput Math Appl 64:3389–3405


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 71672195, 72072185, and 71872184) and the Jiangsu Province Project of Doctor of Entrepreneurship and Innovation (Grant No. JSSCBS20211279).

Author information


Corresponding author

Correspondence to Zhanmei Lv.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix

Important properties of fractional calculus

Lemma 6

[43] Let \(h(t) \in C(0, 1)\bigcap L(0, 1)\) have a Riemann–Liouville fractional derivative of order \(\nu\) \((\nu >0)\) that also belongs to \(C(0, 1)\bigcap L(0, 1)\); then,

$$\begin{aligned} I^{\nu }_{0}D_t^{\nu }h(t)=h(t)+C_1t^{\nu -1}+C_2t^{\nu -2}+ \cdots +C_Nt^{\nu -N}, \end{aligned}$$
(39)

where \(C_i \in {\mathbb {R}}, i=1, 2,\ldots ,N\) and N is the smallest positive integer that satisfies \(N \geqslant \nu\).

Lemma 7

[40,41,42] If \(\nu _1, \nu _2,\nu >0, t \in [0,1]\) and \(h(t) \in L [0,1],\) we have

$$\begin{aligned} I^{\nu _1}I^{\nu _2}h(t)=I^{\nu _1+\nu _2}h(t), \quad {}_{0}D_t^{\nu }I^{\nu }h(t)=h(t). \end{aligned}$$
(40)
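
As a quick numerical check of Lemma 7 (our own sketch, not from the paper; the quadrature routine, the test function \(h(t)\equiv 1\), and the orders \(\nu _1=1.3\), \(\nu _2=0.7\) are illustrative assumptions), the semigroup property \(I^{\nu _1}I^{\nu _2}h=I^{\nu _1+\nu _2}h\) can be verified against the closed form \(I^{\nu }1=t^{\nu }/{\varGamma }(\nu +1)\):

```python
# Illustrative check (not from the paper) of the semigroup property of
# Riemann-Liouville fractional integrals, I^{v1} I^{v2} h = I^{v1+v2} h, for h = 1.
import math

def frac_integral(h, t, nu, n=5000):
    """I^nu h(t) = (1/Gamma(nu)) * integral_0^t (t-s)^(nu-1) h(s) ds, midpoint rule."""
    ds = t / n
    total = sum((t - (i + 0.5) * ds) ** (nu - 1.0) * h((i + 0.5) * ds) for i in range(n))
    return total * ds / math.gamma(nu)

t, v1, v2 = 1.0, 1.3, 0.7
inner = lambda s: s ** v2 / math.gamma(v2 + 1.0)     # closed form of (I^{v2} 1)(s)
lhs = frac_integral(inner, t, v1)                    # numerical I^{v1}(I^{v2} 1)(t)
rhs = t ** (v1 + v2) / math.gamma(v1 + v2 + 1.0)     # closed form of (I^{v1+v2} 1)(t)
print(lhs, rhs)                                      # both are close to 0.5 here
```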

Lemma 8

[40, 44] If \(h(t) \in C [0,1]\) and \(\nu >0\), we have

$$\begin{aligned} \left[ I^{\nu }h(t) \right] _{t=0} = 0, \text { or } \lim _{t\rightarrow 0}\frac{1}{{\varGamma }(\nu )}\int ^t_0(t-s)^{\nu -1}h(s) {\mathrm{d}} s=0. \end{aligned}$$
(41)

Lemma 9

[40] Let \(\nu >0\); then, for the fractional differential equation

$$\begin{aligned} {}^C_{0}D_t^{\nu }h(t)=0, \end{aligned}$$
(42)

the solution has the following form:

$$\begin{aligned} h(t)&=c_0+c_1t+c_2t^2+\cdots +c_{n-1}t^{n-1}, c_i \in {\mathbb {R}},\\ i&=0,1,2,\cdots ,n-1, n=[\nu ]+1. \end{aligned}$$

Lemma 10

[40] Let \(\nu >0\); then, we have

$$\begin{aligned} I^{\nu } {}^C_{0}D_t^{\nu }h(t)=h(t)+c_0+c_1t+c_2t^2+\cdots +c_{n-1}t^{n-1}, \end{aligned}$$

where \(c_i \in {\mathbb {R}}, i=0,1, 2,\cdots ,n-1, n=[\nu ]+1.\)
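
As a hedged worked instance of Lemma 10 (not taken from the paper), take \(h(t)=t^2\) and \(0<\nu <1\), so that \(n=1\):

$$\begin{aligned} {}^C_{0}D_t^{\nu } t^{2}=\frac{{\varGamma }(3)}{{\varGamma }(3-\nu )}\,t^{2-\nu }, \qquad I^{\nu }\left( \frac{2\,t^{2-\nu }}{{\varGamma }(3-\nu )}\right) =\frac{2}{{\varGamma }(3-\nu )}\cdot \frac{{\varGamma }(3-\nu )}{{\varGamma }(3)}\,t^{2}=t^{2}, \end{aligned}$$

so \(I^{\nu }\,{}^C_{0}D_t^{\nu }h(t)=h(t)+c_0\) with \(c_0=0\), since by Lemma 8 the left-hand side vanishes at \(t=0\) while \(h(0)=0\).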

Lemma 11

[41, 42] If \(\nu _1, \nu _2,\nu >0, t \in [0,1]\) and function \(h(t) \in L [0,1]\), then we have

$$\begin{aligned} {}^C_{0}D_t^{\nu }I^{\nu }h(t)=h(t). \end{aligned}$$
(43)

Lemma 12

[41, 45] Let \(h(t) \in L^1(0,+\infty )\), \(\nu _1, \nu _2,\nu >0\); then, we have

$$\begin{aligned} I^{\nu _1}I^{\nu _2}h(t)=I^{\nu _1+\nu _2}h(t), \quad {}^C_{0}D_t^{\nu }I^{\nu }h(t)=h(t). \end{aligned}$$
(44)

Lemma 13

[45] Let \(_{0}^C D_t^{\nu }h(t) \in L^1(0,+\infty ), \nu >0\); then, we have

$$\begin{aligned}&I^{\nu } {}_{0}^C D_t^{\nu }h(t)=h(t)+C_1t^{\nu -1}+C_2t^{\nu -2}+ \cdots +C_Nt^{\nu -N},\, t>0, \end{aligned}$$
(45)

where \(C_i \in {\mathbb {R}}, i=1, 2,\ldots ,N,\) and N is the smallest positive integer greater than or equal to \(\nu\).

Remark 2

If the function h in the above definitions and lemmas takes its values in a Banach space E, then the integrals involved are understood in the Bochner sense. An abstract function g is Bochner integrable if it is measurable and its norm is Lebesgue integrable.

Important tools

Here, we mainly introduce the concept of compact sets and related theorems, some commonly used conclusions and theorems in functional analysis, several fixed point theorems, and the Gronwall inequality. These concepts are important tools for proving the main results of this article.

First, a series of concepts and theorems of compact sets are given. The relevant details can be found in [49, 51].

Definition 1

[49] Let X be a nonempty set, let \(\{A_{\alpha }\}\) be a family of subsets of X, and let \(A\subset X\). If \(A\subset \bigcup _{\alpha }A_{\alpha }\), then the family \(\{A_{\alpha }\}\) is said to cover A. If the intersection of any finitely many sets in \(\{A_{\alpha }\}\) is nonempty, then \(\{A_{\alpha }\}\) is said to have the finite intersection property.

Definition 2

[49] Let X be a topological space and \(A\subset X\); if from every family of open sets covering A a finite subfamily that still covers A can be extracted, then A is said to be a compact set in X.

Remark 3

[49] Two basic conclusions regarding compact sets are given as follows:

  1. A closed subset of a compact set is a compact set.

  2. A compact set in a Hausdorff space X must be a closed set.

Theorem 5

[49] Let f be a continuous mapping from a topological space \(X_1\) to a topological space \(X_2\). If A is a compact set in \(X_1\), then its image f(A) is a compact set in \(X_2\).

Definition 3

[49] Assume X is a metric space and \(M\subset X\). If every sequence \(\{x_n\}\) of points in M has a subsequence \(\{x_{n_k}\}\) that converges in X, then M is a sequentially compact set in X. If for every \(\epsilon >0\) there is a finite subset A of M such that \(M\subset \bigcup _{x\in A}O(x,\epsilon )\), then M is said to be totally bounded, and A is called a finite \(\epsilon\)-net of M.

Remark 4

[49] Three basic conclusions about sequentially compact sets are given as follows:

  1. The closure of a sequentially compact set is still a sequentially compact set.

  2. A sequentially compact set in a metric space must be totally bounded, and a totally bounded set must be bounded.

  3. Assume X is a metric space and \(M\subset X\); then M is a compact set if and only if M is a sequentially compact closed set.

Let C(X) denote the space of all continuous functions on the compact space X. For \(f\in C(X)\), define

$$\begin{aligned} \Vert f\Vert =\max _{x}|f(x)|. \end{aligned}$$

Equipped with the norm \(\Vert \cdot \Vert\), C(X) is a Banach space.

Definition 4

[49] Let \(M\subset C(X)\); suppose that for every \(\epsilon >0\) there exists \(\delta >0\) such that whenever \(\rho (x,y)<\delta\), we have

$$\begin{aligned} |f(x)-f(y)|<\epsilon , \forall f\in M. \end{aligned}$$

Then M is said to be an equicontinuous family of functions.

The Arzela–Ascoli theorem is given below.

Theorem 6

[49] Let E be a real Banach space, \(J_0=[a,b]\), and let

$$\begin{aligned} C[J_0,E]=\{x|x:J_0\rightarrow E \text { is continuous}\}. \end{aligned}$$

Its norm is

$$\begin{aligned} \Vert x\Vert =\sup _{t\in J_0}\Vert x(t)\Vert . \end{aligned}$$

Then, \(C[J_0,E]\) is a Banach space.

The necessary and sufficient condition for \(H\subset C[J_0,E]\) to be a relatively compact set is that H is an equicontinuous family of functions and that for arbitrary \(t\in J_0\), \(H(t)=\{x(t)|x\in H\}\) is a relatively compact set in E.

The generalized Arzela–Ascoli theorem is given below.

Theorem 7

[50] Let X be a compact metric space and \(M\subset C(X)\); then the necessary and sufficient condition for M to be relatively compact is that M is a bounded and equicontinuous family of functions.

Definition 5

[51] Let \((X,\rho )\) be a metric space. A mapping \(T:(X,\rho )\rightarrow (X,\rho )\) is called a contraction mapping if there exists \(0<\alpha <1\) such that \(\rho (Tx,Ty)\leqslant \alpha \rho (x,y)\) for all \(x,y \in X\).

A commonly used fixed point theorem, the Banach fixed point theorem (also known as the contraction mapping principle), is given below.

Theorem 8

[51] Let \((X,\rho )\) be a complete metric space, and let T be a contraction mapping from \((X,\rho )\) to itself; then T has a unique fixed point in X. That is, there is exactly one \(x\in X\) such that \(Tx=x\).
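
A minimal computational illustration of Theorem 8 (our own sketch, not from the paper; the map \(T(x)=x/2+1\) and the tolerance are illustrative assumptions): iterating a contraction from any starting point yields its unique fixed point.

```python
# Illustrative sketch of the contraction mapping principle: Picard iteration
# x_{k+1} = T(x_k) converges to the unique fixed point of a contraction T.
def picard(T, x0, tol=1e-12, max_iter=200):
    """Iterate x <- T(x) until successive values differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# T(x) = x/2 + 1 is a contraction on the real line with constant 1/2;
# its unique fixed point is x = 2, reached from any starting value.
print(picard(lambda x: 0.5 * x + 1.0, 10.0))   # -> 2.0 (up to tolerance)
```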

For details on weakly singular Gronwall inequalities, see [52,53,54].

Theorem 9

[52, 53] Let u(t) be a continuous function that is nonnegative on [0, T]. If

$$\begin{aligned} u(t)\leqslant \varphi (t)+M\int _0^t\frac{u(s)}{(t-s)^{\alpha }} {\mathrm{d}} s, 0\leqslant t\leqslant T, \end{aligned}$$

where \(0\leqslant \alpha <1\), \(\varphi (t)\) is a nonnegative monotonically increasing continuous function on [0, T], and M is a positive constant, then

$$\begin{aligned} u(t)\leqslant \varphi (t)E_{1-\alpha } (M{\varGamma }(1-\alpha )t^{1-\alpha }), 0\leqslant t\leqslant T, \end{aligned}$$

where \(E_{1-\alpha }(z)\) is the Mittag-Leffler function, defined for \(0\leqslant \alpha <1\) by

$$\begin{aligned} E_{1-\alpha }(z):=\sum _{n=0}^{\infty }\frac{z^n}{{\varGamma } (n(1-\alpha )+1)}. \end{aligned}$$
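
The Mittag-Leffler factor appearing in this bound can be evaluated directly from its series; the following sketch (ours, not from the paper; the truncation length is an illustrative assumption) checks the familiar special case \(E_1(z)={\mathrm{e}}^z\):

```python
# Illustrative truncated-series evaluation of the one-parameter Mittag-Leffler
# function E_beta(z) = sum_{n>=0} z^n / Gamma(n*beta + 1), with beta = 1 - alpha.
import math

def mittag_leffler(z, beta, terms=100):
    """Partial sum of the Mittag-Leffler series with `terms` terms."""
    return sum(z ** n / math.gamma(n * beta + 1.0) for n in range(terms))

print(mittag_leffler(1.0, 1.0), math.exp(1.0))   # E_1(1) equals e
print(mittag_leffler(1.0, 0.5))                  # E_{1/2}(1), the case alpha = 1/2
```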

Theorem 10

[54] Let \(u(t)\in PC(J,{\mathbb {R}})\) satisfy the following inequality:

$$\begin{aligned} |u(t)|\leqslant c_1(t)&+c_2\int _0^t(t-s)^{q-1}|u(s)|{\mathrm{d}} s +\sum _{0<t_k<t}\theta _k|u(t_k^-)|, 0\leqslant t\leqslant T, \end{aligned}$$

where \(0< q <1\), \(c_1\) is a nonnegative monotonically increasing continuous function on [0, T], and \(c_2\) and \(\theta _k\) \((0<t_k<t)\) are positive constants, then

$$\begin{aligned}&|u(t)|\leqslant c_1(t)\left( 1+\theta E_{q} (c_2{\varGamma }(q)t^{q})\right) ^k E_q (c_2{\varGamma }(q)t^{q}),\, t_k< t\leqslant t_{k+1}, \end{aligned}$$

where

$$\begin{aligned} \theta =\max _{0<t_k<t}\theta _k. \end{aligned}$$

Proof of the partial theorem lemma

Proof of Lemma 1

“Necessity”. Let \(y\in PC_m[0,T]\) be the solution of equation (18). By Lemma 10, when \(t\in [0,t_1]\), we have

$$\begin{aligned} y(t)=c_0+\frac{1}{{\varGamma }{(\alpha )}}\int _{a_0}^t(t-s)^{\alpha -1}c(s) {\mathrm{d}} s, c_0\in {\mathbb {R}}^m. \end{aligned}$$

By \(y(0)=y^0\), we have \(c_0=y^0\); thus,

$$\begin{aligned} y(t)=y^0+\frac{1}{{\varGamma }{(\alpha )}}\int _{a_0}^t(t-s)^{\alpha -1}c(s) {\mathrm{d}} s, t\in [0,t_1]. \end{aligned}$$

When \(t\in (t_1,t_2]\), we have

$$\begin{aligned} _{a_1}^CD_t^{\alpha }y(t)=c(t), {\varDelta } y(t_1)=y(t_1^+)-y(t_1^-)=\psi _1(y(t_1^-)). \end{aligned}$$

By Lemma 10, when \(t\in (t_1,t_2]\), we have

$$\begin{aligned} y(t)=c_1+\frac{1}{{\varGamma }{(\alpha )}}\int _{a_1}^t(t-s)^{\alpha -1}c(s) {\mathrm{d}} s, c_1\in {\mathbb {R}}^m. \end{aligned}$$

By

$$\begin{aligned} y(t_1^+)-y(t_1^-)&=c_1+\frac{1}{{\varGamma }{(\alpha )}}\int _{a_1}^{t_1}(t_1-s)^{\alpha -1}c(s) {\mathrm{d}} s-y^0 \\&-\quad \,\frac{1}{{\varGamma }{(\alpha )}}\int _{a_0}^{t_1}(t_1-s)^{\alpha -1}c(s) {\mathrm{d}} s=c_1-y^0 \\&\quad -\frac{1}{{\varGamma }{(\alpha )}}\int _{a_0}^{a_1}(t_1-s)^{\alpha -1}c(s) {\mathrm{d}} s=\psi _1(y(t_1^-))=\psi _1(y(t_1)), \end{aligned}$$

we have

$$\begin{aligned} c_1=y^0+\psi _1(y(t_1))+\frac{1}{{\varGamma }{(\alpha )}}\int _{a_0}^{a_1}(t_1-s)^{\alpha -1}c(s) {\mathrm{d}} s. \end{aligned}$$

Thus, for \(t\in (t_1,t_2]\), we have

$$\begin{aligned} y(t)&=y^0+\psi _1(y(t_1))+\frac{1}{{\varGamma }{(\alpha )}}\int _{a_0}^{a_1}(t_1-s)^{\alpha -1}c(s) {\mathrm{d}} s \\&\quad +\frac{1}{{\varGamma }{(\alpha )}}\int _{a_1}^t(t-s)^{\alpha -1}c(s) {\mathrm{d}} s. \end{aligned}$$

By analogy, for \(t\in (t_k,t_{k+1}]\), we have

$$\begin{aligned} y(t)&=y^0+\sum _{i=1}^{k}\psi _i(y(t_i))\\&\quad +\sum _{j=1}^{k}\frac{1}{{\varGamma }{(\alpha )}}\int _{a_{j-1}}^{a_{j}}(t_j-s)^{\alpha -1}c(s) {\mathrm{d}} s \\&\quad \,+\frac{1}{{\varGamma }{(\alpha )}}\int _{a_k}^t(t-s)^{\alpha -1}c(s) {\mathrm{d}} s. \end{aligned}$$

Therefore,

$$\begin{aligned} y(t) = {\left\{ \begin{array}{ll} y^{0} + \frac{1}{{\varGamma }(\alpha )}\int _{a_{0}}^{t}(t - s)^{\alpha - 1} c(s)\,{\mathrm{d}} s, &{} t \in [0,t_{1}],\\ y^{0} + \psi _{1}(y(t_{1})) + \frac{1}{{\varGamma }(\alpha )}\int _{a_{0}}^{a_{1}}(t_{1} - s)^{\alpha - 1} c(s)\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{1}}^{t}(t - s)^{\alpha - 1} c(s)\,{\mathrm{d}} s, &{} t \in (t_{1},t_{2}],\\ \qquad \qquad \vdots &{} \\ y^{0} + \sum _{i=1}^{k}\psi _{i}(y(t_{i})) + \sum _{j=1}^{k}\frac{1}{{\varGamma }(\alpha )}\int _{a_{j-1}}^{a_{j}}(t_{j} - s)^{\alpha - 1} c(s)\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{k}}^{t}(t - s)^{\alpha - 1} c(s)\,{\mathrm{d}} s, &{} t \in (t_{k},t_{k+1}],\\ \qquad \qquad \vdots &{} \\ y^{0} + \sum _{i=1}^{p}\psi _{i}(y(t_{i})) + \sum _{j=1}^{p}\frac{1}{{\varGamma }(\alpha )}\int _{a_{j-1}}^{a_{j}}(t_{j} - s)^{\alpha - 1} c(s)\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{p}}^{t}(t - s)^{\alpha - 1} c(s)\,{\mathrm{d}} s, &{} t \in (t_{p},T]. \end{array}\right. } \end{aligned}$$
(19)

“Sufficiency”. Assume y satisfies equation (19). By Lemma 11, \(y\in PC_m[0,T]\) is a solution of equation (18). The proof is complete. \(\square\)
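
The representation just derived is easy to evaluate numerically; the following sketch (ours, not from the paper; the order, the terminals \(a_0, a_1\), the impulse time \(t_1\), the forcing \(c\equiv 1\), and the jump map \(\psi _1(y)=0.1y\) are illustrative assumptions) computes y on the second interval from the first two branches of (19).

```python
# Illustrative evaluation of the piecewise representation: one impulse at t1,
# Caputo lower terminals a0 and a1, forcing c, and jump map psi1.
import math

def kernel_int(c, lo, hi, point, alpha, n=2000):
    """(1/Gamma(alpha)) * integral_lo^hi (point - s)^(alpha - 1) c(s) ds, midpoint rule."""
    ds = (hi - lo) / n
    total = sum((point - (lo + (i + 0.5) * ds)) ** (alpha - 1.0) * c(lo + (i + 0.5) * ds)
                for i in range(n))
    return total * ds / math.gamma(alpha)

alpha, y0 = 0.7, 1.0
a0, a1, t1 = 0.0, 0.3, 0.5              # illustrative terminals and impulse time (a1 <= t1)
c = lambda s: 1.0                       # illustrative forcing term
psi1 = lambda y: 0.1 * y                # illustrative jump map at t1

y_t1 = y0 + kernel_int(c, a0, t1, t1, alpha)        # y(t1) from the first branch

def y_second_interval(t):               # second branch, valid for t in (t1, t2]
    return (y0 + psi1(y_t1)
            + kernel_int(c, a0, a1, t1, alpha)
            + kernel_int(c, a1, t, t, alpha))

print(y_second_interval(0.9))
```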

Proof of Lemma 2

For convenience, let \(a_0=0\). For any \(u\in U\) and \(f \in Y_K\), by Lemma 1, \(y\in PC_m[0,T]\) is a solution of equation (20) if and only if it is a solution of the following integral equation:

$$\begin{aligned} y(t) = {\left\{ \begin{array}{ll} y^{0} + \frac{1}{{\varGamma }(\alpha )}\int _{a_{0}}^{t}(t - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s, &{} t \in [0,t_{1}],\\ y^{0} + \psi _{1}(y(t_{1})) + \frac{1}{{\varGamma }(\alpha )}\int _{a_{0}}^{a_{1}}(t_{1} - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{1}}^{t}(t - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s, &{} t \in (t_{1},t_{2}],\\ \qquad \qquad \vdots &{} \\ y^{0} + \sum _{i=1}^{k}\psi _{i}(y(t_{i})) + \sum _{j=1}^{k}\frac{1}{{\varGamma }(\alpha )}\int _{a_{j-1}}^{a_{j}}(t_{j} - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{k}}^{t}(t - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s, &{} t \in (t_{k},t_{k+1}],\\ \qquad \qquad \vdots &{} \\ y^{0} + \sum _{i=1}^{p}\psi _{i}(y(t_{i})) + \sum _{j=1}^{p}\frac{1}{{\varGamma }(\alpha )}\int _{a_{j-1}}^{a_{j}}(t_{j} - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{p}}^{t}(t - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s, &{} t \in (t_{p},T]. \end{array}\right. } \end{aligned}$$

For a given \(u\in U\), \(f \in Y_K\), consider the following operator:

$$\begin{aligned} (Ty)(t) = {\left\{ \begin{array}{ll} y^{0} + \frac{1}{{\varGamma }(\alpha )}\int _{a_{0}}^{t}(t - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s, &{} t \in [0,t_{1}],\\ y^{0} + \psi _{1}(y(t_{1})) + \frac{1}{{\varGamma }(\alpha )}\int _{a_{0}}^{a_{1}}(t_{1} - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{1}}^{t}(t - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s, &{} t \in (t_{1},t_{2}],\\ \qquad \qquad \vdots &{} \\ y^{0} + \sum _{i=1}^{k}\psi _{i}(y(t_{i})) + \sum _{j=1}^{k}\frac{1}{{\varGamma }(\alpha )}\int _{a_{j-1}}^{a_{j}}(t_{j} - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{k}}^{t}(t - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s, &{} t \in (t_{k},t_{k+1}],\\ \qquad \qquad \vdots &{} \\ y^{0} + \sum _{i=1}^{p}\psi _{i}(y(t_{i})) + \sum _{j=1}^{p}\frac{1}{{\varGamma }(\alpha )}\int _{a_{j-1}}^{a_{j}}(t_{j} - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{p}}^{t}(t - s)^{\alpha - 1} f(s,y(s),u(s))\,{\mathrm{d}} s, &{} t \in (t_{p},T]. \end{array}\right. } \end{aligned}$$

Under the condition \((F_K)\), we have \(T:PC_m[0,T]\rightarrow PC_m[0,T]\).

Clearly, \(y\in PC_m[0,T]\) is the solution of (20) if and only if \(y\in PC_m[0,T]\) is the fixed point of operator T on \(PC_m[0,T]\).

Next, we use the Banach fixed point theorem (Theorem 8) to prove that the operator T has a unique fixed point in the Banach space \(PC_m[0,T]\).

First, since the condition \((H_{\phi })\) is established, the following equivalent norm can be defined in the Banach space \(PC_m[0,T]\):

$$\begin{aligned} \Vert x \Vert _*=\max _{0\leqslant t \leqslant T} {\mathrm{e}}^{-\chi _Kt} \Vert x(t) \Vert , \end{aligned}$$

where \(\chi _K>0\) and

$$\begin{aligned} \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}+\frac{D_K}{\chi _K^{\alpha }[{\varGamma }(\alpha )]^2} +{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }(\alpha +1)} [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]<1. \end{aligned}$$

In fact,

$$\begin{aligned} {\mathrm{e}}^{-\chi _KT} \Vert x \Vert \leqslant \Vert x \Vert _*=\max _{0\leqslant t \leqslant T} {\mathrm{e}}^{-\chi _Kt} \Vert x(t) \Vert \leqslant \Vert x \Vert . \end{aligned}$$

It can be seen that \(\Vert \cdot \Vert _*\) and \(\Vert \cdot \Vert\) are equivalent norms. Next, we use the norm \(\Vert \cdot \Vert _*\) in the related discussion.

For a given positive constant \(K>0\), we know that there is a constant \(D_K>0\) such that \(\forall (t,y_1,u), (t,y_2,u)\in I_K\), we have

$$\begin{aligned} \Vert f(t,y_1,u)-f(t,y_2,u) \Vert \leqslant D_K \Vert y_1-y_2 \Vert , \end{aligned}$$
(46)

where

$$\begin{aligned} I_K=[0,T]\times {\mathbb {R}}^m \times B_K \end{aligned}$$

and

$$\begin{aligned} B_K= \{u\in {\mathbb {R}}^n:\Vert u\Vert \leqslant K \}. \end{aligned}$$

For arbitrary \(\vartheta _1, \vartheta _2\in PC_m[0,T]\), let \(\vartheta _1\ne \vartheta _2\), and let \(\Vert \vartheta _1-\vartheta _2\Vert _*=\xi _0>0\); by the definition of \(\Vert \cdot \Vert _*\), \(\forall t\in [0,T]\), we have

$$\begin{aligned} {\mathrm{e}}^{-\chi _Kt} \Vert \vartheta _1(t)-\vartheta _2(t) \Vert&\leqslant \max _{0\leqslant t \leqslant T}{\mathrm{e}}^{-\chi _Kt} \Vert \vartheta _1(t)-\vartheta _2(t) \Vert = \Vert \vartheta _1-\vartheta _2 \Vert _*=\xi _0, \end{aligned}$$

and thus, \(\forall t\in [0,T]\),

$$\begin{aligned} \Vert \vartheta _1(t)-\vartheta _2(t) \Vert \leqslant {\mathrm{e}}^{\chi _Kt}\xi _0. \end{aligned}$$

When \(t\in [0,t_1]\), we have

$$\begin{aligned}&\Vert {\mathrm{e}}^{-\chi _Kt} [(T\vartheta _1)(t)-(T\vartheta _2)(t)] \Vert \\&\quad ={\mathrm{e}}^{-\chi _Kt}\frac{1}{{\varGamma }{(\alpha )}}\left\| \int _0^t(t-s)^{\alpha -1}f(s,\vartheta _1(s),u(s)) {\mathrm{d}} s \right. \\&\qquad \,\left. -\int _0^t(t-s)^{\alpha -1}f(s,\vartheta _2(s),u(s)) {\mathrm{d}} s\right\| \\&\quad \leqslant {\mathrm{e}}^{-\chi _Kt}\frac{1}{{\varGamma }{(\alpha )}}\int _0^t\left\| (t-s)^{\alpha -1} (f(s,\vartheta _1(s),u(s)) \right. \\&\qquad \left. -f(s,\vartheta _2(s),u(s)))\right\| {\mathrm{d}} s\\&\quad \leqslant {\mathrm{e}}^{-\chi _Kt}\frac{D_K}{{\varGamma }{(\alpha )}}\int _0^t(t-s)^{\alpha -1} \Vert \vartheta _1(s)-\vartheta _2(s) \Vert {\mathrm{d}} s\\&\quad \leqslant {\mathrm{e}}^{-\chi _Kt}\frac{\xi _0 D_K}{{\varGamma }{(\alpha )}}\int _0^t(t-s)^{\alpha -1}{\mathrm{e}}^{\chi _Ks} {\mathrm{d}} s\\&\quad =\frac{\xi _0 D_K}{{\varGamma }{(\alpha )}}\int _0^t(t-s)^{\alpha -1}{\mathrm{e}}^{-\chi _K(t-s)} {\mathrm{d}} s =\frac{\xi _0 D_K}{{\varGamma }{(\alpha )}}\int _0^t\tau ^{\alpha -1}{\mathrm{e}}^{-\chi _K\tau } {\mathrm{d}} \tau \\&\quad =\frac{\xi _0 D_K}{\chi _K^{\alpha }{\varGamma }{(\alpha )}}\int _0^t(\chi _K\tau )^{\alpha -1}{\mathrm{e}}^{-\chi _K\tau } {\mathrm{d}}(\chi _K\tau ) =\frac{\xi _0 D_K}{\chi _K^{\alpha }{\varGamma }{(\alpha )}}\int _0^{\chi _Kt}s^{\alpha -1}{\mathrm{e}}^{-s} {\mathrm{d}} s\\&\quad \leqslant \frac{\xi _0 D_K}{\chi _K^{\alpha }{\varGamma }{(\alpha )}}\int _0^{+\infty }s^{\alpha -1}{\mathrm{e}}^{-s} {\mathrm{d}} s = \frac{D_K}{\chi _K^{\alpha }[{\varGamma }{(\alpha )}]^2}\xi _0 \\&\quad \leqslant \left( \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}+\frac{D_K}{\chi _K^{\alpha }[{\varGamma }{(\alpha )}]^2}\right) \xi _0, \end{aligned}$$

and thus, for \(t\in [0,t_1]\),

$$\begin{aligned}&\Vert {\mathrm{e}}^{-\chi _Kt} [(T\vartheta _1)(t)-(T\vartheta _2)(t)] \Vert \\&\quad \leqslant \left( \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}+\frac{D_K}{\chi _K^{\alpha }[{\varGamma }{(\alpha )}]^2}\right) \xi _0. \end{aligned}$$

Next, consider that for \(t\in (t_k,t_{k+1}]\), \(\forall y\in PC_m[0,T]\), we have

$$\begin{aligned} \begin{aligned} (Ty)(t)&= y^0+\sum _{i=1}^{k} \psi _i(y(t_i)) +\sum _{j=1}^{k}\frac{1}{{\varGamma }{(\alpha )}}\int _{a_{j-1}}^{a_{j}}(t_j-s)^{\alpha -1}f(s,y(s),u(s)) {\mathrm{d}} s\\&\quad \, +\frac{1}{{\varGamma }{(\alpha )}}\int _{a_k}^t(t-s)^{\alpha -1}f(s,y(s),u(s)) {\mathrm{d}} s, \end{aligned} \end{aligned}$$

and we have

$$\begin{aligned}&\Vert {\mathrm{e}}^{-\chi _Kt} [(T\vartheta _1)(t)-(T\vartheta _2)(t)] \Vert \leqslant {\mathrm{e}}^{-\chi _Kt}\left\| \sum _{i=1}^{k} \psi _i(\vartheta _1(t_i))-\sum _{i=1}^{k} \psi _i(\vartheta _2(t_i))\right\| \\&\qquad \, + {\mathrm{e}}^{-\chi _Kt}\frac{1}{{\varGamma }{(\alpha )}}\left\| \int _{a_k}^t(t-s)^{\alpha -1}f(s,\vartheta _1(s),u(s)) {\mathrm{d}} s \right. \\&\qquad \, \left. -\int _{a_k}^t(t-s)^{\alpha -1}f(s,\vartheta _2(s),u(s)) {\mathrm{d}} s\right\| \\&\qquad \, +{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{1}{{\varGamma }{(\alpha )}}\left\| \int _{a_{j-1}}^{a_{j}}(t_j-s)^{\alpha -1}(f(s,\vartheta _1(s),u(s)) -f(s,\vartheta _2(s),u(s))) {\mathrm{d}} s\right\| \\&\quad \leqslant {\mathrm{e}}^{-\chi _Kt}\sum _{i=1}^{k} {\varTheta }_i\left\| \vartheta _1(t_i)-\vartheta _2(t_i)\right\| \\&\qquad \, + {\mathrm{e}}^{-\chi _Kt}\frac{1}{{\varGamma }{(\alpha )}}\int _{a_k}^t\left\| (t-s)^{\alpha -1} (f(s,\vartheta _1(s),u(s)) -f(s,\vartheta _2(s),u(s)))\right\| {\mathrm{d}} s\\&\qquad \, +{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{1}{{\varGamma }{(\alpha )}}\int _{a_{j-1}}^{a_{j}}(t_j-s)^{\alpha -1} \left\| f(s,\vartheta _1(s),u(s))-f(s,\vartheta _2(s),u(s))\right\| {\mathrm{d}} s \\&\quad \leqslant {\mathrm{e}}^{-\chi _Kt}\sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{\chi _Kt_i}\xi _0 + {\mathrm{e}}^{-\chi _Kt}\frac{D_K}{{\varGamma }{(\alpha )}}\int _{a_k}^t(t-s)^{\alpha -1} \Vert \vartheta _1(s)-\vartheta _2(s) \Vert {\mathrm{d}} s\\&\qquad \, +{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }{(\alpha )}}\int _{a_{j-1}}^{a_{j}}(t_j- s)^{\alpha -1} \Vert \vartheta _1(s) - \vartheta _2(s) \Vert {\mathrm{d}} s\\&\quad \leqslant \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t-t_i)}\xi _0+{\mathrm{e}}^{-\chi _Kt}D_K\frac{\xi _0}{{\varGamma }{(\alpha )}} \int _{a_k}^t(t-s)^{\alpha -1}{\mathrm{e}}^{L_Ks} {\mathrm{d}} s\\&\qquad \, +\xi _0{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }{(\alpha +1)}} [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]\\&\quad \leqslant \left( \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}\right) \xi _0+D_K\frac{\xi _0}{{\varGamma }{(\alpha )}} \int _{a_k}^t(t-s)^{\alpha -1}{\mathrm{e}}^{-\chi _K(t-s)} {\mathrm{d}} s\\&\qquad \, +\xi _0{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }{(\alpha +1)}} [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]\\&\quad =\left( \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}\right) \xi _0+D_K\frac{\xi _0}{{\varGamma }{(\alpha )}} \int _{a_k}^t\tau ^{\alpha -1}{\mathrm{e}}^{-\chi _K\tau } d\tau \\&\qquad \, +\xi _0{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }{(\alpha +1)}} [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]\\&\quad =\left( \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}\right) \xi _0 +D_K\frac{\xi _0}{\chi _K^{\alpha }{\varGamma }{(\alpha )}}\int _0^t(\chi _K\tau )^{\alpha -1}{\mathrm{e}}^{-\chi _K\tau } {\mathrm{d}}(\chi _K\tau )\\&\qquad \, +\xi _0{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }{(\alpha +1)}} [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]\\&\quad =\left( \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}\right) \xi _0+D_K\frac{\xi _0}{\chi _K^{\alpha }{\varGamma }{(\alpha )}}\int _0^{\chi _Kt}s^{\alpha -1}{\mathrm{e}}^{-s} {\mathrm{d}} s\\&\qquad \, +\xi _0{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }{(\alpha +1)}} [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]\\&\leqslant \left( \sum _{i=1}^{k} {\varTheta 
}_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}\right) \xi _0+D_K\frac{\xi _0}{\chi _K^{\alpha }{\varGamma }{(\alpha )}}\int _0^{+\infty }s^{\alpha -1}{\mathrm{e}}^{-s} {\mathrm{d}} s\\&\qquad \, +\xi _0{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }{(\alpha +1)}} [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]\\&= \left( \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}\right) \xi _0+\frac{D_K}{\chi _K^{\alpha }[{\varGamma }{(\alpha )}]^2}\xi _0 \\&\qquad \, +\xi _0{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]}{{\varGamma }{(\alpha +1)}}\\&\quad \leqslant \left( \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}+\frac{D_K}{\chi _K^{\alpha }[{\varGamma }{(\alpha )}]^2}\right. \\&\qquad \, \left. +{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }{(\alpha +1)}} [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]\right) \xi _0. \end{aligned}$$

In summary, we have

$$\begin{aligned}&\Vert T\vartheta _1-T\vartheta _2 \Vert _*\leqslant \left( \sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}+\frac{D_K}{\chi _K^{\alpha }[{\varGamma }{(\alpha )}]^2} \right. \\&\quad \left. +{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }{(\alpha +1)}} [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]\right) \Vert \vartheta _1-\vartheta _2 \Vert _*. \end{aligned}$$

Because we have

$$\begin{aligned}&\sum _{i=1}^{k} {\varTheta }_i{\mathrm{e}}^{-\chi _K(t_k-t_i)}+\frac{D_K}{\chi _K^{\alpha }[{\varGamma }{(\alpha )}]^2}\\&\quad +{\mathrm{e}}^{-\chi _Kt}\sum _{j=1}^{k}\frac{D_K}{{\varGamma }{(\alpha +1)}} [(t_j-a_{j-1})^{\alpha }-(t_j-a_{j})^{\alpha }]<1, \end{aligned}$$

T is a contraction mapping on \(PC_m[0,T]\); by the contraction mapping principle (Theorem 8), the operator T has a unique fixed point in \(PC_m[0,T]\). Therefore, Equation (20) has a unique solution in \(PC_m[0,T]\). The proof is complete. \(\square\)
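
The contraction argument also suggests a simple computational scheme: starting from any initial guess, the Picard iterates \(y_{m+1}=Ty_m\) converge to the unique solution. The following sketch (ours, not from the paper; the right-hand side \(f(s,y)=-0.5y\), the grid, and the iteration count are illustrative assumptions) carries this out on the first interval \([0,t_1]\), where T reduces to \(y^0\) plus a fractional integral of \(f(\cdot ,y(\cdot ))\).

```python
# Illustrative successive-approximation (Picard) scheme for the operator T on [0, t1]:
# (T y)(t) = y0 + (1/Gamma(alpha)) * integral_0^t (t - s)^(alpha-1) f(s, y(s)) ds.
import math

def apply_T(y_vals, grid, f, y0, alpha):
    """Apply T once on a uniform grid, using midpoint-in-cell quadrature."""
    out = []
    for k, t in enumerate(grid):
        acc = 0.0
        for i in range(k):
            s = 0.5 * (grid[i] + grid[i + 1])
            ys = 0.5 * (y_vals[i] + y_vals[i + 1])
            acc += (t - s) ** (alpha - 1.0) * f(s, ys) * (grid[i + 1] - grid[i])
        out.append(y0 + acc / math.gamma(alpha))
    return out

alpha, y0, t1, N = 0.8, 1.0, 1.0, 400
grid = [i * t1 / N for i in range(N + 1)]
f = lambda s, y: -0.5 * y               # Lipschitz right-hand side (illustrative)
y = [y0] * (N + 1)                      # initial guess
for _ in range(30):                     # Picard iterations y <- T y
    y = apply_T(y, grid, f, y0, alpha)
print(y[-1])                            # approximate y(t1); successive iterates stabilize
```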

Proof of Theorem 4

According to the definition of the performance index function J, we have

$$\begin{aligned} J_{f^j}(u^j)=k(y^j(0),y^j(T))+\int _0^T h(t,y^j(t),u^j(t)) {\mathrm{d}} t \end{aligned}$$

and

$$\begin{aligned} J_{f^*}(u^*)=k(y^*(0),y^*(T))+\int _0^T h(t,y^*(t),u^*(t)) {\mathrm{d}} t, \end{aligned}$$

where

$$\begin{aligned} y^j(t) = S_{f^j}(u^j)(t) = {\left\{ \begin{array}{ll} y^{0} + \frac{1}{{\varGamma }(\alpha )}\int _{0}^{t}(t - s)^{\alpha - 1} f^j(s,y^j(s),u^j(s))\,{\mathrm{d}} s, &{} t \in [0,t_{1}],\\ y^{0} + \psi _{1}(y^j(t_{1})) + \frac{1}{{\varGamma }(\alpha )}\int _{a_{0}}^{a_{1}}(t_{1} - s)^{\alpha - 1} f^j(s,y^j(s),u^j(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{1}}^{t}(t - s)^{\alpha - 1} f^j(s,y^j(s),u^j(s))\,{\mathrm{d}} s, &{} t \in (t_{1},t_{2}],\\ \qquad \qquad \vdots &{} \\ y^{0} + \sum _{i=1}^{k}\psi _{i}(y^j(t_{i})) + \sum _{l=1}^{k}\frac{1}{{\varGamma }(\alpha )}\int _{a_{l-1}}^{a_{l}}(t_{l} - s)^{\alpha - 1} f^j(s,y^j(s),u^j(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{k}}^{t}(t - s)^{\alpha - 1} f^j(s,y^j(s),u^j(s))\,{\mathrm{d}} s, &{} t \in (t_{k},t_{k+1}],\\ \qquad \qquad \vdots &{} \\ y^{0} + \sum _{i=1}^{p}\psi _{i}(y^j(t_{i})) + \sum _{l=1}^{p}\frac{1}{{\varGamma }(\alpha )}\int _{a_{l-1}}^{a_{l}}(t_{l} - s)^{\alpha - 1} f^j(s,y^j(s),u^j(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{p}}^{t}(t - s)^{\alpha - 1} f^j(s,y^j(s),u^j(s))\,{\mathrm{d}} s, &{} t \in (t_{p},T] \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} y^*(t) = S_{f^*}(u^*)(t) = {\left\{ \begin{array}{ll} y^{0} + \frac{1}{{\varGamma }(\alpha )}\int _{0}^{t}(t - s)^{\alpha - 1} f^*(s,y^*(s),u^*(s))\,{\mathrm{d}} s, &{} t \in [0,t_{1}],\\ y^{0} + \psi _{1}(y^*(t_{1})) + \frac{1}{{\varGamma }(\alpha )}\int _{a_{0}}^{a_{1}}(t_{1} - s)^{\alpha - 1} f^*(s,y^*(s),u^*(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{1}}^{t}(t - s)^{\alpha - 1} f^*(s,y^*(s),u^*(s))\,{\mathrm{d}} s, &{} t \in (t_{1},t_{2}],\\ \qquad \qquad \vdots &{} \\ y^{0} + \sum _{i=1}^{k}\psi _{i}(y^*(t_{i})) + \sum _{l=1}^{k}\frac{1}{{\varGamma }(\alpha )}\int _{a_{l-1}}^{a_{l}}(t_{l} - s)^{\alpha - 1} f^*(s,y^*(s),u^*(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{k}}^{t}(t - s)^{\alpha - 1} f^*(s,y^*(s),u^*(s))\,{\mathrm{d}} s, &{} t \in (t_{k},t_{k+1}],\\ \qquad \qquad \vdots &{} \\ y^{0} + \sum _{i=1}^{p}\psi _{i}(y^*(t_{i})) + \sum _{l=1}^{p}\frac{1}{{\varGamma }(\alpha )}\int _{a_{l-1}}^{a_{l}}(t_{l} - s)^{\alpha - 1} f^*(s,y^*(s),u^*(s))\,{\mathrm{d}} s + \frac{1}{{\varGamma }(\alpha )}\int _{a_{p}}^{t}(t - s)^{\alpha - 1} f^*(s,y^*(s),u^*(s))\,{\mathrm{d}} s, &{} t \in (t_{p},T]. \end{array}\right. } \end{aligned}$$

From \(f^j\rightarrow f^*(j\rightarrow +\infty )\) and \(u^j\rightarrow u^*(j\rightarrow +\infty )\), by Lemma 4, we have \(y^j\rightarrow y^*(j\rightarrow +\infty )\).

The functions k and h satisfy the conditions \((H_k)\) and \((H_h)\), respectively. k and h are continuous functions, and we have

$$\begin{aligned} k(y^j(0),y^j(T))\rightarrow k(y^*(0),y^*(T)), j\rightarrow +\infty \end{aligned}$$

and

$$\begin{aligned}&\int _0^T h(t,y^j(t),u^j(t)) {\mathrm{d}} t \rightarrow \int _0^T h(t,y^*(t),u^*(t)) {\mathrm{d}} t, j\rightarrow +\infty . \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{aligned}&k(y^j(0),y^j(T))+\int _0^T h(t,y^j(t),u^j(t)) {\mathrm{d}} t\\&\rightarrow k(y^*(0),y^*(T))+\int _0^T h(t,y^*(t),u^*(t)) {\mathrm{d}} t, j\rightarrow +\infty , \end{aligned} \end{aligned}$$

that is,

$$\begin{aligned} J_{f^j}(u^j)\rightarrow J_{f^*}(u^*), j\rightarrow +\infty . \end{aligned}$$

The proof is complete. \(\square\)


Cite this article

Gong, Y., Zha, M. & Lv, Z. Fractional-order optimal control model for the equipment management optimization problem with preventive maintenance. Neural Comput & Applic 34, 4693–4714 (2022). https://doi.org/10.1007/s00521-021-06624-0
