
Adaptive Control System with a Variable Adjustment Law Gain Based on the Recursive Least Squares Method

  • LINEAR SYSTEMS
  • Published in Automation and Remote Control

Abstract

The aim of the present paper is to synthesize an adaptive control system with a variable adaptation-loop gain to compensate for the plant parametric uncertainty. In contrast to existing systems, the proposed one simultaneously (1) includes an algorithm for the automatic calculation of the parameter adjustment law gain in the controller, which operates in proportion to the current regressor value and thus permits one to obtain an adjustable upper bound on the rate of convergence of the plant output and controller parameter errors to zero (subject to the condition of persistent excitation of the regressor); and (2) does not require knowledge of the signs or values of the entries of the plant gain matrix. The second Lyapunov method and the recursive least squares method are used to synthesize such a control system. For this system, the stability and the boundedness of the above-mentioned errors are proved, and estimates for the rate of their convergence to zero are obtained. The efficiency of the approach is demonstrated by mathematical modeling of an example plant corresponding to the statement of the research problem.



Funding

This work was supported by the Russian Foundation for Basic Research, project no. 18-47-310003 r_a.

Author information


Correspondence to A. I. Glushchenko, V. A. Petrov or K. A. Lastochkin.

Additional information

Translated by V. Potapchouck

APPENDIX

Proof of Theorem 1. To prove the theorem, we substitute the expression (3.3) into Eq. (3.13). Then, under the condition \(\theta = \mathrm{const}\), we have the equation

$$ \dot {\tilde \theta } = - \Gamma \omega {\omega ^\mathrm {T}}\tilde \theta .$$
(A.1)

We select a candidate for the Lyapunov function as the quadratic form

$$ \begin {gathered} V = {{\tilde \theta }^\mathrm {T}}{\Gamma ^{ - 1}}\tilde \theta ,\\ {\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}} \right ){\left \| {\tilde \theta } \right \|^2} \leqslant V \leqslant {\lambda _{\max }}\left ( {{\Gamma ^{ - 1}}} \right ){\left \| {\tilde \theta } \right \|^2}, \end {gathered}$$
(A.2)
where \(\lambda _{\min }(.) \) and \(\lambda _{\max }(.) \) are the minimum and maximum eigenvalues of a matrix.

In view of Eqs. (3.9) and (3.13), the derivative of the quadratic form (A.2) along the trajectories of Eq. (A.1) is as follows:

$$ \begin {aligned} \dot V &= 2{{\tilde \theta }^{^\mathrm {T}}}{\Gamma ^{ - 1}}\dot {\tilde \theta } + {{\tilde \theta }^\mathrm {T}}{{\dot \Gamma }^{ - 1}}\tilde \theta = - 2{{\tilde \theta }^{^\mathrm {T}}}{\Gamma ^{ - 1}}\left [ {\Gamma \omega {\omega ^\mathrm {T}}\tilde \theta } \right ] + {{\tilde \theta }^\mathrm {T}}\left [ {\omega {\omega ^\mathrm {T}} - \lambda {\Gamma ^{ - 1}}} \right ]\tilde \theta \\ &= - {{\tilde \theta }^\mathrm {T}}\omega {\omega ^\mathrm {T}}\tilde \theta - \lambda {{\tilde \theta }^\mathrm {T}}{\Gamma ^{ - 1}}\tilde \theta = - \left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right ){\left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right )^\mathrm {T}} - \lambda {{\tilde \theta }^\mathrm {T}}{\Gamma ^{ - 1}}\tilde \theta \\ &\leqslant - {\left \| {B_\mathrm {ref\,}^\dag } \right \|^2}{\left \| \varepsilon \right \|^2} - \lambda {\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}} \right ){\left \| {\tilde \theta } \right \|^2}. \end {aligned}$$
(A.3)

The derivative (A.3) of the positive definite quadratic form (A.2) is a negative semidefinite function, and therefore, the parameter error is \(\tilde \theta \in {L_\infty } \), the generalized error is \(\varepsilon \in {L_\infty } \), and Eq. (A.2) is a Lyapunov function for system (A.1). At the same time, the Lyapunov function (A.2) has a finite limit as \(t\to \infty \),

$$ \begin {gathered} V\left ( {\tilde \theta \left ( {t \to \infty } \right )} \right ) = V\left ( {\tilde \theta \left ( {{t_0}} \right )} \right ) + \displaystyle \int \limits _{{t_0}}^\infty {\dot Vd} t = V\left ( {\tilde \theta \left ( {{t_0}} \right )} \right ) - \displaystyle \int \limits _{{t_0}}^\infty {\left [ {\left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right ){{\left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right )}^\mathrm {T}} + \lambda \left ( {{{\tilde \theta }^\mathrm {T}}{\Gamma ^{ - 1}}\tilde \theta } \right )} \right ]d} t\\ \Rightarrow \displaystyle \int \limits _{{t_0}}^\infty {\left [ {{{\left \| {B_\mathrm {ref\,}^\dag } \right \|}^2}{{\left \| \varepsilon \right \|}^2} + \lambda {\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}} \right ){{\left \| {\tilde \theta } \right \|}^2}} \right ]d} t {}= V\left ( {\tilde \theta \left ( {{t_0}} \right )} \right ) - V\left ( {\tilde \theta \left ( {t \to \infty } \right )} \right ) < \infty , \end {gathered}$$
and then \(\tilde \theta \in {L_2} \cap {L_\infty }\) and \(\omega \in {L_\infty }\) (as a result of the fact that \(\varepsilon \in {L_2} \cap {L_\infty }\)).
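The argument above can be checked numerically in the scalar case: integrating the error equation (A.1) together with the gain adjustment law (3.9) by the Euler method, the Lyapunov function (A.2) is nonincreasing along the trajectory and the parameter error tends to zero. A minimal sketch; the regressor, \(\lambda\), the step size, and the initial values are illustrative assumptions, not taken from the paper:

```python
import math

# Scalar sketch of (A.1) with the RLS gain update (3.9); all numbers are
# illustrative assumptions for the sketch, not values from the paper.
dt, lam = 1e-3, 0.5
g_inv = 1.0                  # Gamma^{-1}(0)
theta = 2.0                  # parameter error tilde-theta(0)
V_prev = theta**2 * g_inv    # Lyapunov function (A.2), scalar case
monotone = True
t = 0.0
for _ in range(20000):                       # integrate to t = 20
    w = 1.0 + 0.5 * math.sin(t)              # persistently exciting regressor
    g = 1.0 / g_inv                          # Gamma(t)
    theta += dt * (-g * w * w * theta)       # error equation (A.1)
    g_inv += dt * (w * w - lam * g_inv)      # gain adjustment law (3.9)
    t += dt
    V = theta**2 * g_inv
    if V > V_prev + 1e-9:                    # check that V is nonincreasing
        monotone = False
    V_prev = V
print(abs(theta), monotone)
```

As expected from the proof, `V` never increases and `theta` decays to zero under the persistently exciting regressor.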

We have thus proved the first part of Theorem 1. To prove the second part of Theorem 1, we find the second derivative of the Lyapunov function (A.2),

$$ \begin {aligned} \ddot V &= - 2\left ( {B_\mathrm {ref\,}^\dag \dot \varepsilon } \right ){\left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right )^\mathrm {T}} - \lambda \left ( {2{{\tilde \theta }^\mathrm {T}}{\Gamma ^{ - 1}}\dot {\tilde \theta } + {{\tilde \theta }^\mathrm {T}}{{\dot \Gamma }^{ - 1}}\tilde \theta } \right ) \\ &= - 2\left ( {B_\mathrm {ref\,}^\dag \dot \varepsilon } \right ){\left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right )^\mathrm {T}} - \lambda \left ( {2{{\tilde \theta }^\mathrm {T}}\left [ { - \omega {\omega ^\mathrm {T}}\tilde \theta } \right ] + {{\tilde \theta }^\mathrm {T}}\left [ {\omega {\omega ^\mathrm {T}} - \lambda {\Gamma ^{ - 1}}} \right ]\tilde \theta } \right ) \\ &= - 2\left ( {B_\mathrm {ref\,}^\dag \dot \varepsilon } \right ){\left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right )^\mathrm {T}} + \lambda \left ( {2{{\tilde \theta }^\mathrm {T}}\omega {\omega ^\mathrm {T}}\tilde \theta - {{\tilde \theta }^\mathrm {T}}\left [ {\omega {\omega ^\mathrm {T}} - \lambda {\Gamma ^{ - 1}}} \right ]\tilde \theta } \right ). \end {aligned}$$
(A.4)

Based on the expression (A.4), it is difficult to draw a conclusion on the boundedness of the second derivative of the function (A.2); therefore, in view of the relation \(\dot {\tilde \theta } = \dot {\hat \theta }\), we find the derivative of the generalized error (3.2),

$$ \dot \varepsilon = B_\mathrm {ref\,}\left [ \dot {\tilde \theta }^\mathrm {T}\omega + {\tilde \theta }^\mathrm {T}\dot \omega \right ] = B_\mathrm {ref\,}\left [ - \Gamma \omega \omega ^\mathrm {T}\tilde \theta \omega + {\tilde \theta }^\mathrm {T}\dot \omega \right ].$$
(A.5)

Taking into account the expression (A.5), we rewrite Eq. (A.4) as

$$ \ddot V = - 2\left [ { - \Gamma \omega {\omega ^\mathrm {T}}\tilde \theta \omega + {{\tilde \theta }^\mathrm {T}}\dot \omega } \right ]{\left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right )^\mathrm {T}} + \lambda \left ( {2{{\tilde \theta }^\mathrm {T}}\omega {\omega ^\mathrm {T}}\tilde \theta - {{\tilde \theta }^\mathrm {T}}\left [ {\omega {\omega ^\mathrm {T}} - \lambda {\Gamma ^{ - 1}}} \right ]\tilde \theta } \right ). $$

According to what has been proved, we have \(\tilde \theta \in {L_2} \cap {L_\infty }\), \(\varepsilon \in {L_2} \cap {L_\infty }\), and \(\omega \in {L_\infty } \), and by the statement of Theorem 1, \(\,\dot \omega {\,\in \,}{L_\infty } \). Then, to conclude that \(\ddot V {\,\in \,}{L_\infty } \), it remains to prove the \(L_\infty \)-boundedness of the matrices \(\Gamma \) and \(\Gamma ^{-1} \). To this end, we obtain a solution of the differential equation (3.9),

$$ {\Gamma ^{ - 1}}\left ( t \right ) = {\Gamma ^{ - 1}}\left ( 0 \right ){e^{ - \lambda t}} + \displaystyle \int \limits _0^{t} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )d} \tau . $$

Under the persistent excitation condition (2.13), it can readily be shown that for all \(t \geqslant T \) the value of \(\Gamma ^{-1} \) is bounded below by the expression

$$ \begin {aligned} {\Gamma ^{ - 1}}\left ( t \right ) &\geqslant \displaystyle \int \limits _0^{t} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )d} \tau \\ &= \displaystyle \int \limits _{t - T}^{t} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )d} \tau + \displaystyle \int \limits _0^{t - T} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )d} \tau . \end {aligned}$$
(A.6)

Now we obtain lower bounds for each of the two integrals on the right-hand side in (A.6) by estimating the exponential factor on the corresponding interval of integration. To this end, we rewrite the persistent excitation condition (2.13) in the equivalent form

$$ \displaystyle \int \limits _{t - T}^{t} {\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )d} \tau \geqslant \alpha I. $$
(A.7)

Then, in view of the expression (A.7), the lower bound for the first integral has the form

$$ \displaystyle \int \limits _{t - T}^{t} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )d} \tau \geqslant {e^{ - \lambda T}}\displaystyle \int \limits _{t - T}^{t} {\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )d} \tau \geqslant {e^{ - \lambda T}}\alpha I. $$
(A.8)

In a similar manner, we produce a lower bound for the second integral,

$$ \displaystyle \int \limits _0^{t - T} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )d} \tau \geqslant {e^{ - \lambda T}}\displaystyle \int \limits _0^{t - T} {\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )d} \tau \geqslant 0.$$
(A.9)

Adding (A.8) and (A.9), we obtain a lower bound for the entire matrix \(\Gamma ^{-1} \),

$$ {\Gamma ^{ - 1}}\left ( t \right ) \geqslant {e^{ - \lambda T}}\alpha I.$$
(A.10)

Now we obtain a lower bound for the matrix \(\Gamma ^{-1} \) for all \(t\leqslant T \),

$$ {\Gamma ^{ - 1}}\left ( t \right ) \geqslant {\Gamma ^{ - 1}}\left ( 0 \right ){e^{ - \lambda T}} \geqslant {\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}\left ( 0 \right )} \right ){e^{ - \lambda T}}I. $$
(A.11)

Then, in view of the estimates (A.10) and (A.11), the lower bound for the matrix \(\Gamma ^{-1} \) for all \(t \geqslant 0 \) has the form

$$ {\Gamma ^{ - 1}}\left ( t \right ) \geqslant {\min }\left \{ {{\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}\left ( 0 \right )} \right ),\alpha } \right \}{e^{ - \lambda T}}I.$$
(A.12)

Since \(\omega \in {L_\infty } \) by what has been proved, it follows that the expression \( \omega \omega ^\mathrm {T}\) satisfies the inequality

$$ {\lambda _{\min }}\left ( {\omega {\omega ^\mathrm {T}}} \right ) I \leqslant \omega {\omega ^\mathrm {T}} \leqslant {\lambda _{\max }}\left ( {\omega {\omega ^\mathrm {T}}} \right ) I.$$
(A.13)

Taking into account inequality (A.13), we obtain an upper bound for the matrix \(\Gamma ^{-1} \),

$$ \Gamma ^{-1} \left (t\right ) \leqslant \Gamma ^{-1} \left (0\right )+\lambda _{\max } \left (\omega \omega ^\mathrm {T}\right ) \displaystyle \int \limits _0^t e^{-\lambda \left (t-\tau \right )}\,d\tau I \leqslant \lambda _{\max }\left (\Gamma ^{-1}\left (0\right )\right )I + \dfrac {\lambda _{\max }\left (\omega \omega ^\mathrm {T}\right )}{\lambda } I. $$
(A.14)

By combining the expressions (A.12) and (A.14), we obtain inequalities for \(\Gamma \) and \(\Gamma ^{-1} \),

$$ \begin {aligned} {\min }\Big \{\lambda _{\min }\left (\Gamma ^{-1}\left (0\right )\right ),\alpha \Big \} e^{-\lambda T}I &\leqslant \Gamma ^{-1}\left ( t \right )\leqslant \lambda _{\max }\left (\Gamma ^{-1}\left (0\right )\right )I + \dfrac {\lambda _{\max }\left (\omega \omega ^\mathrm {T}\right )}{\lambda }I, \\[.3em] \left (\lambda _{\max }\left (\Gamma ^{-1}\left (0\right )\right ) + \dfrac {\lambda _{\max }\left (\omega \omega ^\mathrm {T}\right )}{\lambda }\right )^{-1}I &\leqslant \Gamma \left (t\right ) \leqslant \max \Big \{\lambda _{\min }^{-1}\left (\Gamma ^{-1}\left (0\right )\right ),\alpha ^{-1} \Big \}e^{\lambda T}I. \end {aligned} $$
(A.15)

It clearly follows from the expressions (A.15) that \(\Gamma \in L_\infty \), \(\Gamma ^{-1}\in L_\infty \), and hence \(\ddot V \in {L_\infty } \). Then the derivative (A.3) of the Lyapunov function (A.2) is uniformly continuous, and \(\dot V \to 0 \) by Barbalat’s lemma. Accordingly, we achieve the convergence \( \tilde \theta \to 0\) as \(t \to \infty \).
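The two-sided bound (A.15) on \(\Gamma^{-1}\) can be checked directly in the scalar case. In the sketch below, the regressor \(\omega(t) = 1 + 0.5\sin t\) and all constants are illustrative assumptions; for this regressor the persistent excitation integral over a period \(T = 2\pi\) equals \(2\pi + \pi/4 \approx 7.07\), so \(\alpha = 7\) is admissible, and \(\sup\omega^2 = 2.25\) plays the role of \(\lambda_{\max}(\omega\omega^{\mathrm T})\):

```python
import math

# Scalar check of the bounds (A.12) and (A.14) that make up (A.15);
# regressor, lambda, and PE constants are illustrative assumptions.
dt, lam = 1e-3, 0.5
T = 2 * math.pi                  # PE window for w(t) = 1 + 0.5 sin t
alpha = 7.0                      # int_{t-T}^{t} w^2 dtau >= alpha (about 7.07)
g_inv0 = 1.0                     # Gamma^{-1}(0)
lower = min(g_inv0, alpha) * math.exp(-lam * T)   # lower bound (A.12)
upper = g_inv0 + 2.25 / lam                       # upper bound (A.14)
g_inv, t = g_inv0, 0.0
ok = True
for _ in range(40000):           # integrate (3.9) to t = 40
    w = 1.0 + 0.5 * math.sin(t)
    g_inv += dt * (w * w - lam * g_inv)
    t += dt
    if not (lower - 1e-6 <= g_inv <= upper + 1e-6):
        ok = False
print(lower, upper, ok)
```

Along the whole trajectory \(\Gamma^{-1}(t)\) stays between the two constants, as (A.15) asserts.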

To find an estimate for the convergence rate of the error \(\tilde \theta \) to zero, we obtain an upper bound for the derivative (A.3) with allowance for inequality (A.13),

$$ \dot V = - {\tilde \theta ^\mathrm {T}}\omega {\omega ^\mathrm {T}}\tilde \theta - \lambda {\tilde \theta ^\mathrm {T}}{\Gamma ^{ - 1}}\tilde \theta \leqslant - {\lambda _{\min }}\left ( {\omega {\omega ^\mathrm {T}}} \right ){\left \| {\tilde \theta } \right \|^2} - \lambda {\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}} \right ){\left \| {\tilde \theta } \right \|^2}. $$
(A.16)

Further, to determine the minimum convergence rate, we proceed from the lower and upper bounds (A.15) for the matrix \(\Gamma ^{-1} \) to an expression for the lower and upper bounds for its norm,

$$ \begin {aligned} \left \| {{\Gamma ^{ - 1}}} \right \| &\geqslant \underbrace {\sqrt {n + 1} \left [ {{\min }\left \{ {{\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}\left ( 0 \right )} \right ),\alpha } \right \}{e^{ - \lambda T}}} \right ]}_{{\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}} \right )} ,\\ \left \| {{\Gamma ^{ - 1}}} \right \| &\leqslant \underbrace {\sqrt {n + 1} \left [ {{\lambda _{\max }}\left ( {{\Gamma ^{ - 1}}\left ( 0 \right )} \right ) + \dfrac {{{\lambda _{\max }}\left ( {\omega {\omega ^\mathrm {T}}} \right )}}{\lambda }} \right ]}_{{\lambda _{\max }}\left ( {{\Gamma ^{ - 1}}} \right )}. \end {aligned}$$
(A.17)

Taking into account the expression (A.17), we rewrite the upper bound (A.16) as

$$ \begin {aligned} \dot V &\leqslant - {\lambda _{\min }}\left ( {\omega {\omega ^\mathrm {T}}} \right ){\left \| {\tilde \theta } \right \|^2} - \lambda \sqrt {n + 1} \left [ {{\min }\left \{ {{\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}\left ( 0 \right )} \right ),\alpha } \right \}{e^{ - \lambda T}}} \right ]{\left \| {\tilde \theta } \right \|^2} \\ &\leqslant - \left [ {\dfrac {{\lambda {\lambda _{\min }}\left ( {\omega {\omega ^\mathrm {T}}} \right ) + {\lambda ^2}\sqrt {n + 1} \left [ {{\min }\left \{ {{\lambda _{\min }}( {{\Gamma ^{ - 1}}( 0 )} ),\alpha } \right \}{e^{ - \lambda T}}} \right ]}}{{\sqrt {n + 1} \left [ {\lambda {\lambda _{\max }}\left ( {{\Gamma ^{ - 1}}\left ( 0 \right )} \right ) + {\lambda _{\max }}\left ( {\omega {\omega ^\mathrm {T}}} \right )} \right ]}}} \right ]{\lambda _{\max }}( {{\Gamma ^{ - 1}}} ) {\left \| {\tilde \theta } \right \|^2} \leqslant - \kappa V. \end {aligned} $$

Solving the resulting differential inequality and substituting the lower bound (A.2) for the Lyapunov function into its left-hand side, we obtain

$$ \left \| {\tilde \theta } \right \| \leqslant \sqrt {\lambda _{\min }^{ - 1}\left ( {{\Gamma ^{ - 1}}} \right ){e^{ - \kappa t}}V\left ( 0 \right )}.$$
(A.18)

It follows from the expression (A.18) that the error \(\tilde \theta \) decays exponentially at a rate faster than \(\kappa \), which is exactly what is claimed in the second part of Theorem 1. \(\quad \blacksquare \)
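In the scalar case the bound (A.16) already implies \(\dot V \leqslant -\lambda V\) (since \(\lambda_{\min}(\Gamma^{-1}) = \Gamma^{-1}\) for a scalar), so the envelope behind (A.18) reduces to \(V(t) \leqslant V(0)e^{-\lambda t}\). The sketch below checks this envelope along a simulated trajectory; all numerical values are illustrative assumptions:

```python
import math

# Scalar check of the exponential envelope V(t) <= V(0) e^{-lambda t}
# implied by (A.16); the constants are illustrative assumptions.
dt, lam = 1e-3, 0.5
g_inv, theta, t = 1.0, 2.0, 0.0
V0 = theta**2 * g_inv
bound_holds = True
for _ in range(20000):                           # integrate to t = 20
    w = 1.0 + 0.5 * math.sin(t)
    theta += dt * (-(1.0 / g_inv) * w * w * theta)   # error equation (A.1)
    g_inv += dt * (w * w - lam * g_inv)              # gain adjustment law (3.9)
    t += dt
    if theta**2 * g_inv > V0 * math.exp(-lam * t) * (1 + 1e-6):
        bound_holds = False
print(bound_holds)
```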

Proof of Theorem 2. A candidate for the Lyapunov function in the study of the stability of the closed-loop system (2.12) can be selected in the form of the sum of two quadratic forms,

$$ \begin {gathered} V = {\xi ^\mathrm {T}}H\xi = e_\mathrm {ref\,}^\mathrm {T}P{e_\mathrm {ref\,}} + {{\tilde \theta }^\mathrm {T}}{\Gamma ^{ - 1}}\tilde \theta ,\\[.03em] H = \mathrm {blockdiag} \left \{ {\begin {array}{*{20}{c}} P&{{\Gamma ^{ - 1}}} \end {array}} \right \},\\[.03em] {\lambda _{\min }}\left ( H \right ){\left \| \xi \right \|^2} \leqslant V \leqslant {\lambda _{\max }}\left ( H \right ){\left \| \xi \right \|^2}. \end {gathered} $$
(A.19)

In view of the relation \(\dot {\tilde \theta } = \dot {\hat \theta } \) and Eq. (3.3), the derivative of the quadratic form (A.19) along the trajectories of the deviation equation (2.12) and the adaptation loop equations (4.1) acquires the form

$$ \begin {aligned} \dot V &= \dot e_\mathrm {ref\,}^\mathrm {T}P{e_\mathrm {ref\,}} + e_\mathrm {ref\,}^\mathrm {T}P{{\dot e}_\mathrm {ref\,}} + 2{{\tilde \theta }^\mathrm {T}}{\Gamma ^{ - 1}}\dot {\tilde \theta } + {{\tilde \theta }^\mathrm {T}}{{\dot \Gamma }^{ - 1}}\tilde \theta \\ &= e_\mathrm {ref\,}^\mathrm {T}\left [ {A_\mathrm {ref\,}^\mathrm {T}P + P{A_\mathrm {ref\,}}} \right ]{e_\mathrm {ref\,}} + 2e_\mathrm {ref\,}^\mathrm {T}P{B_\mathrm {ref\,}}{{\tilde \theta }^\mathrm {T}}\omega - 2{{\tilde \theta }^\mathrm {T}}\omega {\left [ {B_\mathrm {ref\,}^\dag \varepsilon + B_\mathrm {ref\,}^\mathrm {T}P{e_\mathrm {ref\,}}} \right ]^\mathrm {T}} + {{\tilde \theta }^\mathrm {T}}{{\dot \Gamma }^{ - 1}}\tilde \theta \\ &= - e_\mathrm {ref\,}^\mathrm {T}Q{e_\mathrm {ref\,}} - 2{{\tilde \theta }^\mathrm {T}}\omega {\omega ^\mathrm {T}}\tilde \theta + {{\tilde \theta }^\mathrm {T}}\left [ {2\omega {\omega ^\mathrm {T}} - \lambda {\Gamma ^{ - 1}}} \right ]\tilde \theta \\ &= - e_\mathrm {ref\,}^\mathrm {T}Q{e_\mathrm {ref\,}} - \lambda {{\tilde \theta }^\mathrm {T}}{\Gamma ^{ - 1}}\tilde \theta \leqslant - {\lambda _{\min }}\left ( Q \right ){\left \| {{e_\mathrm {ref\,}}} \right \|^2} - \lambda {\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}} \right ){\left \| {\tilde \theta } \right \|^2}. \end {aligned}$$
(A.20)

The derivative (A.20) of the positive definite quadratic form (A.19) is a negative semidefinite function; therefore, the error is \( \xi {\, \in \,} {L_\infty } \), and Eq. (A.19) is a Lyapunov function for system (2.12). At the same time, the Lyapunov function (A.19) has a finite limit as \(t \to \infty \),

$$ \begin {aligned} &V\left ( {\tilde \theta \left ( {t \to \infty } \right )} \right ) = V\left ( {\tilde \theta \left ( {{t_0}} \right )} \right ) + \displaystyle \int \limits _{{t_0}}^\infty {\dot Vd} t = V\left ( {\tilde \theta \left ( {{t_0}} \right )} \right ) - \displaystyle \int \limits _{{t_0}}^\infty {\left [ {e_\mathrm {ref\,}^\mathrm {T}Q{e_\mathrm {ref\,}} + \lambda \left ( {{{\tilde \theta }^\mathrm {T}}{\Gamma ^{ - 1}}\tilde \theta } \right )} \right ]d} t\\ &\qquad {}\Rightarrow \displaystyle \int \limits _{{t_0}}^\infty {\left [ {{\lambda _{\min }}\left ( Q \right ){{\left \| {{e_\mathrm {ref\,}}} \right \|}^2} + \lambda {\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}} \right ){{\left \| {\tilde \theta } \right \|}^2}} \right ]d} t = V\left ( {\tilde \theta \left ( {{t_0}} \right )} \right ) - V\left ( {\tilde \theta \left ( {t \to \infty } \right )} \right ) < \infty , \end {aligned} $$
and then \(\xi \in {L_2} \cap {L_\infty } \) and \(\omega \in {L_\infty } \) (because \({e_\mathrm {ref\,}} \in {L_2} \cap {L_\infty }\)).

We have thus proved the first part of Theorem 2. To prove the second part of Theorem 2, we find the second derivative of the Lyapunov function (A.19) taking into account Eq. (3.3),

$$ \begin {aligned} \ddot V &= - \dot e_\mathrm {ref\,}^\mathrm {T}Qe_\mathrm {ref\,} - e_\mathrm {ref\,}^\mathrm {T}Q{\dot e}_\mathrm {ref\,} - \lambda \left ( {\tilde \theta }^\mathrm {T}{\dot \Gamma }^{ - 1}\tilde \theta + 2{\tilde \theta }^\mathrm {T}\Gamma ^{ - 1}\dot {\tilde \theta } \right ) \\ &= - 2e_\mathrm {ref\,}^\mathrm {T}Q\left [ A_\mathrm {ref\,}e_\mathrm {ref\,} + B_\mathrm {ref\,}{\tilde \theta }^\mathrm {T}\omega \right ] + 2\lambda {\tilde \theta }^\mathrm {T}\omega \left [ B_\mathrm {ref\,}^\dag \varepsilon + B_\mathrm {ref\,}^\mathrm {T}Pe_\mathrm {ref\,} \right ]^\mathrm {T} \\ &\qquad {}- \lambda \left ( {\tilde \theta }^\mathrm {T}\left [ 2\omega \omega ^\mathrm {T} - \lambda \Gamma ^{ - 1} \right ]\tilde \theta \right ) = - 2e_\mathrm {ref\,}^\mathrm {T}Q\left [ A_\mathrm {ref\,}e_\mathrm {ref\,}+ B_\mathrm {ref\,}{\tilde \theta }^\mathrm {T}\omega \right ] \\ &\qquad {} + 2\lambda {\tilde \theta }^\mathrm {T}\omega e_\mathrm {ref\,}^\mathrm {T}PB_\mathrm {ref\,} + \lambda ^2\left ( {\tilde \theta }^\mathrm {T}\Gamma ^{ - 1}\tilde \theta \right ). \end {aligned}$$

Since it has been proved that \(\tilde \theta \in {L_2} \cap {L_\infty } \), \({e_\mathrm {ref\,}} \in {L_2} \cap {L_\infty }\), and \(\omega \in {L_\infty } \), it follows under the persistent excitation condition that \(\Gamma \in L_{\infty }\) and \(\Gamma ^{-1} \in L_{\infty } \) (the proof is similar to (A.6)–(A.15) in the proof of Theorem 1) and hence \(\ddot V \in {L_\infty }\) as well. In this case, the derivative (A.20) of the Lyapunov function (A.19) is uniformly continuous, and \(\dot V \to 0 \) by Barbalat’s lemma; accordingly, the convergence \(\xi \to 0 \) as \(t \to \infty \) is achieved.
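The closed loop of Theorem 2 can also be sketched numerically in the scalar case. With the assumed illustrative values \(A_\mathrm{ref} = -1\), \(B_\mathrm{ref} = 1\), and \(Q = 2\), the Lyapunov equation gives \(P = 1\); the simulation checks that the Lyapunov function (A.19) is nonincreasing and that both components of \(\xi = (e_\mathrm{ref}, \tilde\theta)\) converge:

```python
import math

# Scalar sketch of the closed loop of Theorem 2; A_ref = -1, B_ref = 1,
# Q = 2 (hence P = 1), regressor and lambda are illustrative assumptions.
dt, lam, P = 1e-3, 0.5, 1.0
e, theta, g_inv, t = 1.0, 2.0, 1.0, 0.0
V_prev = P * e**2 + g_inv * theta**2     # Lyapunov function (A.19), scalar
monotone = True
for _ in range(30000):                   # integrate to t = 30
    w = 1.0 + 0.5 * math.sin(t)
    g = 1.0 / g_inv
    de = -e + theta * w                      # deviation equation, scalar
    dtheta = -g * w * (theta * w + P * e)    # adaptation loop, scalar
    dg_inv = 2 * w * w - lam * g_inv         # gain adjustment in (4.1)
    e += dt * de
    theta += dt * dtheta
    g_inv += dt * dg_inv
    t += dt
    V = P * e**2 + g_inv * theta**2
    if V > V_prev + 1e-9:                    # check that V is nonincreasing
        monotone = False
    V_prev = V
print(abs(e), abs(theta), monotone)
```

Here \(\dot V = -2e^2 - \lambda\Gamma^{-1}\tilde\theta^2 \leqslant -\min\{Q/P,\lambda\}V\) in the scalar case, which is what the monotonicity check confirms.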

To determine an estimate for the rate of convergence of the error \(\xi \) to zero, we rewrite the upper bound for the derivative (A.20) as

$$ \dot V\leqslant - \dfrac {\lambda _{\min }\left ( Q \right )}{\lambda _{\max }\left ( P \right )}\lambda _{\max }\left ( P \right ) \left \|e_\mathrm {ref\,}\right \|^2- \dfrac {\lambda {}\lambda _{\min }\left (\Gamma ^{-1}\right )} {\lambda _{\max }\left (\Gamma ^{-1}\right )} \lambda _{\max }\left (\Gamma ^{-1}\right )\left \|\tilde \theta \right \|^2. $$
(A.21)

Further, to determine the minimum convergence rate using the results obtained when proving Theorem 1, we write the lower and upper bounds for the norm of \(\Gamma ^{-1}\) corresponding to the adjustment law for \( \Gamma \) in the adaptation loop (4.1),

$$ \begin {aligned} \left \| {{\Gamma ^{ - 1}}} \right \| &\geqslant \underbrace {\sqrt {n + 1} \Big [ {{\min }\left \{ {{\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}\left ( 0 \right )} \right ),2\alpha } \right \}{e^{ - \lambda T}}} \Big ]}_{{\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}} \right )} ,\\ \left \| {{\Gamma ^{ - 1}}} \right \| &\leqslant \underbrace {\sqrt {n + 1} \left [ {{\lambda _{\max }}\left ( {{\Gamma ^{ - 1}}\left ( 0 \right )} \right ) + \dfrac {{2{\lambda _{\max }}\left ( {\omega {\omega ^\mathrm {T}}} \right )}}{\lambda }} \right ]}_{{\lambda _{\max }}\left ( {{\Gamma ^{ - 1}}} \right )}. \end {aligned}$$
(A.22)

Taking into account (A.22), we rewrite the upper bound for the derivative (A.21) as

$$ \begin {aligned} \dot V &\leqslant -\frac {\lambda _{\min }(Q)}{\lambda _{\max }(P)} \lambda _{\max }(P)\left \|e_\mathrm {ref\,}\right \|^2 \\ &\qquad {}- \frac {\lambda ^2\min \left \{\lambda _{\min }\left (\Gamma ^{-1}(0)\right ),2\alpha \right \}e^{-\lambda T}} {\lambda {}\lambda _{\max }\left (\Gamma ^{-1}(0)\right )+2\lambda _{\max }\left (\omega {}\omega ^\mathrm {T} \right )} \lambda _{\max }\left (\Gamma ^{-1}\right )\left \|\tilde \theta \right \|^2 \leqslant -\eta _{\min }V, \\ {\eta _{\min }} &= {\min }\left \{ {\frac {{{\lambda _{\min }}\left ( Q \right )}}{{{\lambda _{\max }}\left ( P \right )}}{\rm {; }}\dfrac {{{\lambda ^2}{\min }\left \{ {{\lambda _{\min }}\left ( {{\Gamma ^{ - 1}}\left ( 0 \right )} \right ),2\alpha } \right \}{e^{ - \lambda T}}}}{{\lambda {}\lambda _{\max }\left ( {{\Gamma ^{ - 1}}\left ( 0 \right )} \right ) + 2{\lambda _{\max }}\left ( {\omega {\omega ^\mathrm {T}}} \right )}}} \right \}. \end {aligned}$$
(A.23)

Let us solve the resulting differential inequality while substituting the lower bound for the Lyapunov function into the left-hand side of the solution,

$$ \left \| \xi \right \| \leqslant \sqrt {\lambda _{\min }^{ - 1}\left ( H \right ){e^{ - {\eta _{\min }} \cdot t}}V\left ( 0 \right )}. $$
(A.24)

It follows from the majorant (A.24) that the error \(\xi \) decays exponentially at a rate faster than \(\eta _{\min } \); this is exactly what is claimed in the second part of Theorem 2.

To prove the third part of Theorem 2, we write a lower bound for the derivative (A.20),

$$ \begin {gathered} \dot V \geqslant - \dfrac {\lambda _{\max }\left ( Q \right )} {\lambda _{\max }\left (P\right )}{\lambda _{\max }\left (P\right )\left \| e_\mathrm {ref\,} \right \|^2} - \lambda {\lambda _{\max }}\left ( {{\Gamma ^{ - 1}}} \right ){\left \| {\tilde \theta } \right \|^2} \geqslant - {\eta _{\max }}V,\\ {\eta _{\max }} = {\max }\left \{\dfrac {\lambda _{\max }\left ( Q \right )}{\lambda _{\max }\left ( P \right )};\lambda \right \}. \end {gathered}$$
(A.25)

We solve the differential inequality (A.25) while substituting the upper bound for the Lyapunov function into the left-hand side of the solution,

$$ \left \| \xi \right \| \geqslant \sqrt {\lambda _{\max }^{ - 1}\left ( H \right ){e^{ - {\eta _{\max }} \cdot t}}V\left ( 0 \right )}. $$
(A.26)

It follows from the definition of \(\eta _{\max }\) in (A.25) and the minorant (A.26) that by increasing the parameter \(\lambda \), one can make the maximum rate of convergence of the error \(\xi \) arbitrarily large; this is what is claimed in the third part of Theorem 2. \(\quad \blacksquare \)

Remark.

As \(\lambda \to \infty \), the maximum convergence rate satisfies \(\eta _{\max }\to \infty \), but the minimum convergence rate satisfies \(\eta _{\min }\to 0\), because the factor \(e^{-\lambda T}\) in (A.23) vanishes as \(\lambda T\to \infty \). This considerably increases the gap between the majorant (A.24) and the minorant (A.26), which in turn leads to oscillations of \(\xi \). Therefore, in practice, it makes little sense to use values of \(\lambda \) exceeding \(\lambda _{\max }=T^{-1}\).
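The Remark can be illustrated numerically. Freezing the other constants in the definition of \(\eta_{\min}\) from (A.23) at assumed illustrative values, its second argument behaves as \(\lambda^{2}e^{-\lambda T}\) over a denominator growing in \(\lambda\), and hence vanishes for large \(\lambda\):

```python
import math

# Illustration of the Remark: eta_min from (A.23) with all constants other
# than lambda fixed at assumed values; eta_min -> 0 as lambda grows.
q_over_p = 1.0                 # lambda_min(Q) / lambda_max(P), assumed
g0, alpha, T, w2 = 1.0, 1.0, 1.0, 1.0   # Gamma^{-1}(0), alpha, T, sup of w w^T

def eta_min(lam):
    # second argument of the minimum in (A.23)
    second = lam**2 * min(g0, 2 * alpha) * math.exp(-lam * T) \
             / (lam * g0 + 2 * w2)
    return min(q_over_p, second)

vals = [eta_min(l) for l in (1.0, 2.0, 10.0, 50.0)]
print(vals)
```

The guaranteed rate is moderate for small \(\lambda\) and collapses toward zero for large \(\lambda\), matching the recommendation \(\lambda \leqslant T^{-1}\).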

Cite this article

Glushchenko, A.I., Petrov, V.A. & Lastochkin, K.A. Adaptive Control System with a Variable Adjustment Law Gain Based on the Recursive Least Squares Method. Autom Remote Control 82, 619–633 (2021). https://doi.org/10.1134/S0005117921040020
