Static Controller Synthesis for Peak-to-Peak Gain Minimization as an Optimization Problem

Abstract

An optimization approach to linear control systems has recently become very popular. For example, the linear feedback matrix in the classical linear-quadratic regulator problem can be viewed as a variable, and the problem reduces to the minimization of the performance index with respect to this variable. The gradient method can then be applied, and its convergence can be justified. This approach has been successfully applied to a number of problems, including output feedback optimization. The present paper is the first to apply it to the peak-to-peak gain minimization problem. A gradient method for finding a static state or output feedback is written out and justified. Several examples are considered, including the single and double pendulums.

Notes

  1. Understood in the sense of the second directional derivative.

REFERENCES

  1. Boyd, S., El Ghaoui, L., Feron, E., et al., Linear Matrix Inequalities in System and Control Theory, Philadelphia: SIAM, 1994.

  2. Abedor, J., Nagpal, K., and Poolla, K., A linear matrix inequality approach to peak-to-peak gain minimization, Int. J. Robust Nonlinear Control, 1996, vol. 6, no. 9–10, pp. 899–927.

  3. Nazin, S.A., Polyak, B.T., and Topunov, M.V., Rejection of bounded exogenous disturbances by the method of invariant ellipsoids, Autom. Remote Control, 2007, vol. 68, no. 3, pp. 467–486.

  4. Khlebnikov, M.V., Polyak, B.T., and Kuntsevich, V.M., Optimization of linear systems subject to bounded exogenous disturbances: the invariant ellipsoid technique, Autom. Remote Control, 2011, vol. 72, no. 11, pp. 2227–2275.

  5. Polyak, B.T., Khlebnikov, M.V., and Shcherbakov, P.S., Upravlenie lineinymi sistemami pri vneshnikh vozmushcheniyakh: tekhnika lineinykh matrichnykh neravenstv (Control of Linear Systems Subject to Exogenous Disturbances: Technique of Linear Matrix Inequalities), Moscow: LENAND, 2014.

  6. Grant, M. and Boyd, S., CVX: Matlab Software for Disciplined Convex Programming, version 2.1. http://cvxr.com/cvx.

  7. Balandin, D.V. and Kogan, M.M., Sintez zakonov upravleniya na osnove lineinykh matrichnykh neravenstv (Synthesis of Control Laws Based on Linear Matrix Inequalities), Moscow: Fizmatlit, 2007.

  8. Kalman, R.E., Contributions to the theory of optimal control, Boletin Soc. Mat. Mexicana, 1960, vol. 5, no. 1, pp. 102–119.

  9. Levine, W. and Athans, M., On the determination of the optimal constant output feedback gains for linear multivariable systems, IEEE Trans. Autom. Control, 1970, vol. 15, no. 1, pp. 44–48.

  10. Mäkilä, P.M. and Toivonen, H.T., Computational methods for parametric LQ problems—a survey, IEEE Trans. Autom. Control, 1987, vol. 32, no. 8, pp. 658–671.

  11. Fazel, M., Ge, R., Kakade, S., and Mesbahi, M., Global convergence of policy gradient methods for the linear quadratic regulator, Proc. 35th Int. Conf. Mach. Learn. (Stockholm, Sweden, July 10–15, 2018), vol. 80, pp. 1467–1476.

  12. Mohammadi, H., Zare, A., Soltanolkotabi, M., and Jovanović, M.R., Global exponential convergence of gradient methods over the nonconvex landscape of the linear quadratic regulator, Proc. 2019 IEEE 58th Conf. Decis. Control (Nice, France, December 11–13, 2019), pp. 7474–7479.

  13. Zhang, K., Hu, B., and Başar, T., Policy optimization for \(\mathcal H_2 \) linear control with \(\mathcal H_{\infty } \) robustness guarantee: implicit regularization and global convergence, 2020. https://arxiv.org/abs/1910.09496.

  14. Bu, J., Mesbahi, A., Fazel, M., and Mesbahi, M., LQR through the lens of first order methods: discrete-time case, 2019. https://arxiv.org/abs/1907.08921.

  15. Fatkhullin, I. and Polyak, B., Optimizing static linear feedback: gradient method, SIAM J. Control Optim., 2021 (in press). https://arxiv.org/abs/2004.09875.

  16. Polyak, B.T., Khlebnikov, M.V., and Shcherbakov, P.S., Linear matrix inequalities in control systems with uncertainty, Autom. Remote Control, 2021, vol. 82, no. 1, pp. 1–40.

  17. Nesterov, Y. and Protasov, V.Y., Computing closest stable non-negative matrices, SIAM J. Matrix Anal. Appl., 2020, vol. 41, no. 1, pp. 1–28.

  18. Polyak, B.T., Introduction to Optimization, New York: Optimization Software, 1987.

  19. Horn, R.A. and Johnson, C.R., Matrix Analysis, New York: Cambridge Univ. Press, 1985. Translated under the title: Matrichnyi analiz, Moscow: Mir, 1989.

  20. Lee, C.-H., New results for the bounds of the solution for the continuous Riccati and Lyapunov equations, IEEE Trans. Autom. Control, 1997, vol. 42, no. 1, pp. 118–123.

ACKNOWLEDGMENTS

It is the authors' pleasant duty to thank A.A. Tremba and an anonymous referee for their interest in the paper and for their critical remarks and suggestions.

Funding

This work was supported by the Russian Science Foundation, project no. 21-71-30005.

Author information

Correspondence to B. T. Polyak or M. V. Khlebnikov.

Additional information

Translated by V. Potapchouck

APPENDIX

Lemma A.1.

Let \(X \) and \(Y \) be solutions to the dual Lyapunov equations with the Hurwitz matrix \(A \) ,

$$ {A}^{\top }X+XA+W=0$$
and
$$ AY+Y{A}^{\top }+V=0.$$
Then
$$ \mathrm{tr}\thinspace (XV)=\mathrm{tr}\thinspace (YW).$$

Proof of Lemma A.1. Indeed, by direct computation we have

$$ \begin {aligned} \mathrm{tr}\thinspace (XV)&=\mathrm{tr}\thinspace \left (X(-AY-Y{A}^{\top })\right )=-\mathrm{tr}\thinspace (XAY)-\mathrm{tr}\thinspace (XY{A}^{\top }) \\ &=-\mathrm{tr}\thinspace {(XAY)}^{\top }-\mathrm{tr}\thinspace {({A}^{\top }XY)}^{\top }=\mathrm{tr}\thinspace \left (Y(-{A}^{\top }X-XA)\right )=\mathrm{tr}\thinspace (YW). \end {aligned} $$
The proof of Lemma A.1 is complete. \(\quad \blacksquare \)
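
This identity is easy to verify numerically. The following sketch is illustrative only (the random data and SciPy's Lyapunov solver are our assumptions, not part of the paper); it prints two coinciding traces.

```python
# Numerical check of Lemma A.1: tr(XV) = tr(YW) for dual Lyapunov equations.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # make A Hurwitz

W = np.eye(n)
V = rng.standard_normal((n, n)); V = V @ V.T       # V is positive semidefinite
X = solve_continuous_lyapunov(A.T, -W)             # A'X + XA + W = 0
Y = solve_continuous_lyapunov(A, -V)               # AY + YA' + V = 0
print(np.trace(X @ V), np.trace(Y @ W))            # the two traces coincide
```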

The next lemma collects some well-known results (see, e.g., [19]) needed in the exposition to follow.

Lemma A.2.

  1.

    For matrices \(A \) and \(B \) of appropriate dimensions one has the relations

    $$ \begin {gathered} \begin {aligned} \|AB\|_F&\leqslant \|A\|_F\|B\|, \\[.3em] |\mathrm{tr}\thinspace AB|&\leqslant \|A\|_F\|B\|_F, \\[.3em] \|A\|&\leqslant \|A\|_F, \end {aligned} \\[.3em] AB+{B}^{\top }{A}^{\top }\leqslant \varepsilon A{A}^{\top }+\frac {1}{\varepsilon }{B}^{\top }B\quad \text {for any}\quad \varepsilon >0. \end {gathered}$$
  2.

    For positive semidefinite matrices \(A \) and \(B \) one has the relations

    $$ 0\leqslant \lambda _{\min }(A)\lambda _{\max }(B)\leqslant \lambda _{\min }(A)\mathrm{tr}\thinspace B\leqslant \mathrm{tr}\thinspace AB\leqslant \lambda _{\max }(A)\mathrm{tr}\thinspace B\leqslant \mathrm{tr}\thinspace A\mathrm{tr}\thinspace B. $$
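
These relations admit a quick numerical sanity check; the sketch below (again illustrative, on random matrices) verifies each of them.

```python
# Sanity check of the norm, trace, and semidefiniteness relations of Lemma A.2.
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)); B = rng.standard_normal((n, n))
fro = np.linalg.norm                           # Frobenius norm (NumPy default)
spec = lambda M: np.linalg.norm(M, 2)          # spectral norm

assert fro(A @ B) <= fro(A) * spec(B)
assert abs(np.trace(A @ B)) <= fro(A) * fro(B)
assert spec(A) <= fro(A)
eps = 0.7                                      # any eps > 0 works
M = eps * A @ A.T + B.T @ B / eps - (A @ B + B.T @ A.T)
assert np.linalg.eigvalsh(M)[0] >= -1e-10      # M is positive semidefinite

P = A @ A.T; Q = B @ B.T                       # a positive semidefinite pair
lam = np.linalg.eigvalsh(P)                    # eigenvalues in ascending order
assert lam[0] * np.trace(Q) <= np.trace(P @ Q) <= lam[-1] * np.trace(Q)
assert np.trace(P @ Q) <= np.trace(P) * np.trace(Q)
print("all relations of Lemma A.2 hold on this sample")
```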

Lemma A.3.

For a solution \(P \) of the Lyapunov equation

$$ AP+P{A}^{\top }+Q=0$$
with a Hurwitz matrix \(A\) and \(Q\succ 0\) one has the estimates
$$ \lambda _{\max }(P)\geqslant \frac {\lambda _{\min }(Q)}{2\sigma }, \quad \lambda _{\min }(P)\geqslant \frac {\lambda _{\min }(Q)}{2\|A\|},$$
(A.1)
where \( \sigma =-\max \limits _i{\mathrm {Re}\thinspace }\lambda _i(A) \) .

If, however, \( Q=D{D}^{\top }\) and the pair \((A,D)\) is controllable, then

$$ \lambda _{\max }(P)\geqslant \frac {\|u^*D\|^2}{2\sigma }>0,$$
(A.2)
where
$$ u^*A=\lambda u^*,\quad {\mathrm {Re}\thinspace }\lambda =-\sigma ,\quad \|u\|=1; $$
i.e., \(u \) is the left eigenvector of \(A \) corresponding to the eigenvalue \(\lambda \) of \(A\) with the greatest real part. The vector \(u \) and the number \( \lambda \) can be complex; here \(u^*\) designates complex conjugation and transposition.

Proof of Lemma A.3. The estimates (A.1) are well known; see, e.g., [20]. Let us prove the estimate (A.2). The explicit solution of the Lyapunov equation for a Hurwitz matrix has the form

$$ P=\int \limits _0^{+\infty }e^{At}D{D}^{\top }e^{{A}^{\top } t}dt.$$
Post- and pre-multiplying this relation by \(u \) and by \(u^* \), respectively, and taking into account the fact that \( u^*e^{At}=e^{\lambda t}u^*\) and \(e^{{A}^{\top }t}u=e^{\lambda ^*t}u\), we obtain
$$ \lambda _{\max }(P)\geqslant u^*Pu=\int \limits _0^{+\infty }u^*e^{At}D{D}^{\top }e^{{A}^{\top }t}udt =\int \limits _0^{+\infty }e^{(\lambda +\lambda ^*)t} u^*D{D}^{\top }u dt=\frac {\|u^*D\|^2}{2\sigma },$$
with \( \|u^*D\|{\thinspace >\thinspace }0\) by virtue of the controllability of the pair \((A,D)\); see, e.g., [5, Theorem D.1.5]. The proof of Lemma A.3 is complete. \(\quad \blacksquare \)
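
The bound (A.2) can likewise be observed numerically. In the sketch below (illustrative; the random Hurwitz matrix \(A \) and the matrix \(D \) are our own choices), the conjugate of an eigenvector of \({A}^{\top } \) serves as the left eigenvector \(u \).

```python
# Numerical illustration of (A.2): lambda_max(P) >= ||u*D||^2 / (2*sigma).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n, m = 5, 2
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 0.5) * np.eye(n)   # make A Hurwitz
D = rng.standard_normal((n, m))

P = solve_continuous_lyapunov(A, -D @ D.T)       # AP + PA' + DD' = 0
sigma = -np.max(np.linalg.eigvals(A).real)

w, V = np.linalg.eig(A.T)       # conj(u) is a right eigenvector of A'
i = np.argmax(w.real)           # eigenvalue of A with the greatest real part
uD = V[:, i].T @ D              # equals u*D, since u = conj(V[:, i])
print(np.max(np.linalg.eigvalsh(P)), ">=", np.linalg.norm(uD)**2 / (2 * sigma))
```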

Proof of Lemma 1.

(a) Equation (6) can be represented in the form

$$ \left (A+\frac {\alpha }{2}I\right )P+P{\left (A+\frac {\alpha }{2}I\right )}^{\top }=-\frac {1}{\alpha }D{D}^{\top }$$
and, according to [5, Lemma 1.2.3], has a unique solution if and only if the matrix \(A+\frac {\alpha }{2}I\) is Hurwitz, i.e., \({\mathrm {Re}\thinspace }\lambda _i(A+\frac {\alpha }{2}I)<0 \), which holds precisely for \(0<\alpha <2\sigma \).

Let us estimate the quantity \(f(\alpha )=\mathrm{tr}\thinspace CP(\alpha ){C}^{\top } \) using Lemma A.3 with the obvious replacements

$$ f(\alpha )=\mathrm{tr}\thinspace CP(\alpha ){C}^{\top }\geqslant \lambda _{\min }({C}^{\top }C)\lambda _{\max }\left (P(\alpha )\right )\geqslant \frac {\|u^*D\|^2\lambda _{\min }({C}^{\top }C)}{\alpha (2\sigma -\alpha )},$$
where \(u \) has the same meaning as in Lemma A.3 and the quantity \(\|u^*D\|^2 \) is positive by virtue of the assumed controllability of the pair \((A,D)\) (and hence also of the pair \( (A+\frac {\alpha }{2}I,D)\)).

Now let us show that the function \(f(\alpha )=\mathrm{tr}\thinspace CP(\alpha ){C}^{\top }\) is strictly convex on the interval \((0,2\sigma ) \). In accordance with [5, Lemma 1.2.3], the solution of Eq. (6) is representable in closed form as

$$ P(\alpha )=\int \limits _0^{+\infty }e^{(A+\frac {\alpha }{2}I)t}\frac {1}{\alpha }D{D}^{\top }e^{{(A+\frac {\alpha }{2}I)}^{\top }t}dt =\int \limits _0^{+\infty } \underbrace {\frac {e^{\alpha t}}{\alpha }}_{g(\alpha ,t)}\underbrace {e^{At}D{D}^{\top }e^{{A}^{\top }t}}_{h(t)}dt.$$
Here \(g(\alpha ,t)>0 \) and \(h(t)\succcurlyeq 0 \) for \(\alpha >0 \); moreover, \(P(\alpha )\succ 0 \) by the controllability of the pair \((A,D) \). Therefore, on the interval \((0,2\sigma ) \) we have
$$ P(\alpha )=\int \limits _0^{+\infty }g(\alpha ,t)h(t)dt\succ 0, \quad f(\alpha )=\mathrm{tr}\thinspace P(\alpha ){C}^{\top }C>0.$$
By a straightforward computation, we obtain
$$ g^{\prime {}\prime }(\alpha ,t)=\left ((\alpha t-1)^2+1\right )\frac {e^{\alpha t}}{\alpha ^3}\geqslant \frac {e^{\alpha t}}{\alpha ^3}=\frac {1}{\alpha ^2}g(\alpha ,t) $$
(here differentiation is with respect to \(\alpha \)); thus,
$$ f^{\prime {}\prime }(\alpha )=\int \limits _0^{+\infty }g^{\prime {}\prime }(\alpha ,t)h(t)dt\geqslant \frac {1}{\alpha ^2}f(\alpha )\geqslant \frac {1}{4\sigma ^2}f(\alpha ^*)>0, $$
where \(\alpha ^* \) is the minimum point of \(f \) on \((0,2\sigma ) \): indeed, \(f(\alpha )\geqslant f(\alpha ^*) \), and \(1/\alpha ^2>1/(4\sigma ^2) \) because \(\alpha <2\sigma \).

Thus, the second derivative of the function \(f(\alpha ) \) is positive and tends to infinity at the endpoints of the interval \( (0,2\sigma )\).

In a similar way, by a straightforward calculation of the fourth derivative, we obtain

$$ g^{(IV)}(\alpha ,t)=\left ((\alpha t-2)^2\alpha ^2 t^2 +2(2\alpha t-3)^2+6\right )\frac {e^{\alpha t}}{\alpha ^5}\geqslant \frac {6}{\alpha ^5}e^{\alpha t}=\frac {6}{\alpha ^4}g(\alpha ,t);$$
thus,
$$ f^{(IV)}(\alpha )\geqslant \frac {6}{\alpha ^4}f(\alpha )>0;$$
i.e., the second derivative \( f^{\prime {}\prime }(\alpha )\) is itself convex and grows unboundedly at the endpoints of the interval.
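
These properties are easy to observe numerically. The sketch below is purely illustrative (the matrices \(A \), \(D \), \(C \) are randomly generated; nothing here comes from the paper's examples): it samples \(f \) on a uniform grid in \((0,2\sigma ) \) and checks that all second divided differences are positive while the values near the endpoints are large.

```python
# Illustration for Lemma 1(a): f(alpha) = tr(C P(alpha) C') is convex on
# (0, 2*sigma) and blows up at both endpoints of the interval.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # make A Hurwitz
D = rng.standard_normal((n, 2)); C = rng.standard_normal((2, n))
sigma = -np.max(np.linalg.eigvals(A).real)

def f(alpha):
    Aa = A + alpha / 2 * np.eye(n)           # A + (alpha/2) I, Hurwitz here
    P = solve_continuous_lyapunov(Aa, -D @ D.T / alpha)       # Eq. (6)
    return np.trace(C @ P @ C.T)

grid = np.linspace(0.01, 1.99, 40) * sigma   # uniform grid inside (0, 2*sigma)
vals = np.array([f(a) for a in grid])
print(np.all(np.diff(vals, 2) > 0))          # convexity: 2nd differences > 0
print(vals[0], vals.min(), vals[-1])         # large values at both endpoints
```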

(b) Now let us derive a formula for the derivative of the function \(f(\alpha )\). In Eq. (6), the solution \(P \) is a function of \(\alpha \). Let us differentiate this equation; by \(P^{\prime } \) we mean the derivative with respect to \(\alpha \),

$$ AP^{\prime }+P^{\prime }{A}^{\top }+\alpha P^{\prime }+ P-\frac {1}{\alpha ^2}D{D}^{\top }=0.$$
Comparing the equations for \( P^{\prime }\) and \(Y \) and applying Lemma A.1, we obtain the desired formula
$$ f^{\prime }(\alpha )=\mathrm{tr}\thinspace CP^{\prime }{C}^{\top }=\mathrm{tr}\thinspace Y\left (P-\frac {1}{\alpha ^2}D{D}^{\top }\right ). $$

(c) In a similar manner, let us obtain an expression for the second derivative \(f^{\prime {}\prime }(\alpha ) \). Differentiating the equation for \(P^{\prime }\) with respect to \(\alpha \), we obtain

$$ AP^{\prime {}\prime }+P^{\prime {}\prime }{A}^{\top }+\alpha P^{\prime {}\prime }+ 2P^{\prime }+\frac {2}{\alpha ^3}D{D}^{\top }=0.$$
Applying again Lemma A.1 to this equation and to Eq. (10) (and bearing in mind that \(X=P^{\prime } \)), we obtain
$$ f^{\prime {}\prime }(\alpha )=\mathrm{tr}\thinspace CP^{\prime {}\prime }{C}^{\top }=2\mathrm{tr}\thinspace Y\left (X+\frac {1}{\alpha ^3}D{D}^{\top }\right ).$$
The proof of Lemma 1 is complete. \(\quad \blacksquare \)
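
The derivative formulas of parts (b) and (c) are easy to test against finite differences. The sketch below is illustrative (random data; we assume the dual variable \(Y \) solves \({(A+\frac {\alpha }{2}I)}^{\top }Y+Y(A+\frac {\alpha }{2}I)+{C}^{\top }C=0 \), consistently with the trace identities above); it prints matching values for \(f^{\prime } \) and \(f^{\prime {}\prime } \).

```python
# Finite-difference check of the formulas for f'(alpha) and f''(alpha).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # make A Hurwitz
D = rng.standard_normal((n, 2)); C = rng.standard_normal((2, n))
sigma = -np.max(np.linalg.eigvals(A).real)

def f(alpha):
    Aa = A + alpha / 2 * np.eye(n)
    P = solve_continuous_lyapunov(Aa, -D @ D.T / alpha)
    return np.trace(C @ P @ C.T)

alpha, h = 0.7 * sigma, 1e-5
Aa = A + alpha / 2 * np.eye(n)
P = solve_continuous_lyapunov(Aa, -D @ D.T / alpha)
Y = solve_continuous_lyapunov(Aa.T, -C.T @ C)                # dual equation
X = solve_continuous_lyapunov(Aa, -(P - D @ D.T / alpha**2)) # equation for P'
print(np.trace(Y @ (P - D @ D.T / alpha**2)),                # formula for f'
      (f(alpha + h) - f(alpha - h)) / (2 * h))               # finite difference
print(2 * np.trace(Y @ (X + D @ D.T / alpha**3)),            # formula for f''
      (f(alpha + h) - 2 * f(alpha) + f(alpha - h)) / h**2)
```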

Proof of Lemma 3. Consider a sequence of stabilizing controllers \(\{K_j\}\subseteq \mathcal S \) such that \(K_j\to K\in \partial \mathcal S\); i.e., \(\sigma (A+BKC_1)=0 \). This means that for each \(\varepsilon >0 \) there exists a number \(N=N(\varepsilon ) \) such that the inequality

$$ |\sigma (A+BK_jC_1)-\sigma (A+BKC_1)|=\sigma (A+BK_jC_1)<\varepsilon$$
holds for all \(j\geqslant N(\varepsilon )\).

Let \(P_j\) be the solution of the Lyapunov equation (15) associated with the controller \(K_j \),

$$ \left (A_{K_j}+\frac {\alpha _j}{2}I\right )P_j+P_j{\left (A_{K_j}+\frac {\alpha _j}{2}I\right )}^{\top }+\frac {1}{\alpha _j}D{D}^{\top }=0,$$
and let \(Y_j \) be a solution of the dual Lyapunov equation
$$ {\left (A_{K_j}+\frac {\alpha _j}{2}I\right )}^{\top }Y_j+Y_j\left (A_{K_j}+\frac {\alpha _j}{2}I\right )+{C}^{\top }_2C_2=0.$$
Then
$$ \begin {aligned} f(K_j)&=\mathrm{tr}\thinspace \left (C_2P_j{C}^{\top }_2\right )+\rho \|K_j\|^2_F\geqslant \mathrm{tr}\thinspace \left (P_j{C}^{\top }_2C_2\right ) =\mathrm{tr}\thinspace \left (Y_j\frac {1}{\alpha _j}D{D}^{\top }\right ) \\ &{}\geqslant \frac {1}{\alpha _j}\lambda _{\min }(Y_j)\|D\|_F^2 \geqslant \frac {1}{\alpha _j}\frac {\lambda _{\min }({C}^{\top }_2C_2)}{2\|A+BK_jC_1+\frac {\alpha _j}{2}I\|}\|D\|_F^2 \\ &{}\geqslant \frac {\lambda _{\min }({C}^{\top }_2C_2)}{4\sigma (A+BK_jC_1)\|A+BK_jC_1+\frac {\alpha _j}{2}I\|}\|D\|_F^2\geqslant \frac {\lambda _{\min }({C}^{\top }_2C_2)}{4\varepsilon \big (\|A+BK_jC_1\|+\varepsilon \big )}\|D\|_F^2\xrightarrow [\varepsilon \to 0]{}+\infty , \end {aligned}$$
because
$$ 0<\alpha _j<2\sigma (A+BK_jC_1) \quad \text {and}\quad \sigma \left (A+BK_jC_1+\frac {\alpha _j}{2}I\right )\leqslant \sigma (A+BK_jC_1).$$

On the other hand,

$$ f(K_j)=\mathrm{tr}\thinspace (C_2P_j{C}^{\top }_2)+\rho \|K_j\|^2_F\geqslant \rho \|K_j\|^2_F\geqslant \rho \|K_j\|^2\xrightarrow [\|K_j\|\to +\infty ]{}+\infty .$$
The proof of Lemma 3 is complete. \(\quad \blacksquare \)
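
The mechanism of the proof, the blow-up of \(f \) both near the stability boundary and for large \(\|K\| \), can be seen already on a hypothetical scalar example (all data below are our own choices, not taken from the paper).

```python
# Scalar illustration of the coercivity in Lemma 3: f(k) grows without bound
# as k approaches the stability boundary and as |k| -> infinity.
import numpy as np

a, b, c2, d, rho = -1.0, 1.0, 1.0, 1.0, 0.1    # hypothetical scalar data

def f(k):
    ak = a + b * k                             # closed-loop coefficient
    if ak >= 0:
        return np.inf                          # k is not stabilizing
    sigma = -ak
    alpha = sigma                              # any alpha in (0, 2*sigma)
    p = (d**2 / alpha) / (2 * sigma - alpha)   # scalar version of Eq. (15)
    return c2**2 * p + rho * k**2

for k in (0.999, 0.9, 0.0, -10.0, -100.0):     # stability boundary at k = 1
    print(k, f(k))                             # blows up at both extremes
```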

Proof of Lemma 4. Closing system (12) with the feedback (13), we obtain the closed-loop system

$$ \begin {aligned} \dot x &=(A+BKC_1)x+Dw,\\ z &=C_2x. \end {aligned}$$
(A.3)
Applying Theorem 1 to system (A.3), we arrive at the problem
$$ \min f(K,\alpha ),\quad f(K,\alpha )=\mathrm{tr}\thinspace C_2P{C}^{\top }_2+\rho \|K\|^2_F$$
under a constraint in the form of the Lyapunov equation for the matrix \(P \) of the invariant ellipsoid,
$$ (A+BKC_1)P+P{(A+BKC_1)}^{\top }+\alpha P+\frac {1}{\alpha }D{D}^{\top }=0. $$
Differentiation with respect to \(\alpha \) is performed just as above with the replacement of \(A \) by \(A_K=A+BKC_1 \). To differentiate with respect to \(K \), we give it an increment \(\Delta K \) and denote the corresponding increment of \(P \) by \(\Delta P \),
$$ \big (A+B(K+\Delta K)C_1\big )(P+\Delta P)+(P+\Delta P){\big (A+B(K+\Delta K)C_1\big )}^{\top }+\alpha (P+\Delta P)+\frac {1}{\alpha }D{D}^{\top }=0,$$
or, after linearizing and subtracting the preceding equation,
$$ \left (A+BKC_1+\frac {\alpha }{2}I\right )\Delta P+\Delta P{\left (A+BKC_1+\frac {\alpha }{2}I\right )}^{\top }+B\Delta KC_1P+P{(B\Delta KC_1)}^{\top }=0.$$
(A.4)

Let us calculate the increment in the functional \(f(K) \) by linearizing the corresponding quantities,

$$ \Delta f(K) =\mathrm{tr}\thinspace C_2\Delta P{C}^{\top }_2+\rho \mathrm{tr}\thinspace {K}^{\top }\Delta K+\rho \mathrm{tr}\thinspace {(\Delta K)}^{\top }K=\mathrm{tr}\thinspace {C}^{\top }_2C_2\Delta P+2\rho \mathrm{tr}\thinspace {K}^{\top }\Delta K. $$

Consider the Lyapunov equation (20) dual to (A.4). By Lemma A.1, from Eqs. (A.4) and (20) we have

$$ \Delta f(K)=\mathrm{tr}\thinspace 2C_1PYB\Delta K+2\rho \mathrm{tr}\thinspace {K}^{\top }\Delta K=\langle 2(\rho K+{B}^{\top }YP{C}^{\top }_1),\Delta K\rangle .$$
Thus,
$$ \nabla _Kf(K,\alpha )=2(\rho K+{B}^{\top }YP{C}^{\top }_1).$$
The proof of Lemma 4 is complete. \(\quad \blacksquare \)
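
The gradient formula is straightforward to validate against finite differences. The sketch below is illustrative (random data, a fixed admissible \(\alpha \), and Lyapunov equations of the form (15) and (20) as used in the proof); it prints a near-zero discrepancy.

```python
# Finite-difference check of the gradient formula of Lemma 4.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(5)
n, p, l, m = 4, 2, 3, 2
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # open-loop stable
B = rng.standard_normal((n, p)); C1 = rng.standard_normal((l, n))
C2 = rng.standard_normal((m, n)); D = rng.standard_normal((n, 2))
K = np.zeros((p, l)); rho, alpha = 0.1, 0.3    # alpha < 2*sigma(A + B K C1)

def f(K):
    Aa = A + B @ K @ C1 + alpha / 2 * np.eye(n)
    P = solve_continuous_lyapunov(Aa, -D @ D.T / alpha)       # Eq. (15)
    return np.trace(C2 @ P @ C2.T) + rho * np.linalg.norm(K) ** 2

Aa = A + B @ K @ C1 + alpha / 2 * np.eye(n)
P = solve_continuous_lyapunov(Aa, -D @ D.T / alpha)
Y = solve_continuous_lyapunov(Aa.T, -C2.T @ C2)               # Eq. (20)
G = 2 * (rho * K + B.T @ Y @ P @ C1.T)                        # Lemma 4

h, G_fd = 1e-6, np.zeros((p, l))
for i in range(p):
    for j in range(l):
        E = np.zeros((p, l)); E[i, j] = h
        G_fd[i, j] = (f(K + E) - f(K - E)) / (2 * h)
print(np.max(np.abs(G - G_fd)))                               # nearly zero
```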

Proof of Lemma 5. Let us calculate

$$ \nabla ^2_Kf(K)[E,E]=\langle \nabla ^2_Kf(K)[E],E\rangle$$
by taking the derivative in the direction \(E\in \mathbb R^{p\times l} \) of \(\nabla _Kf(K)[E]=\langle \nabla _Kf(K),E\rangle \).

Linearizing the relevant quantities, we calculate the increment in the functional \(\nabla _Kf(K)[E]\) in direction \(E \),

$$ \begin {aligned} \Delta \nabla _Kf(K)[E]&=2\big (\rho K+\rho \delta E+{B}^{\top }(Y+\Delta Y)(P+\Delta P){C}^{\top }_1\big )-2\left (\rho K+{B}^{\top }YP{C}^{\top }_1\right ) \\ &{}=2\Big (\rho K+\rho \delta E+{B}^{\top }\big (Y+\delta Y^{\prime }(K)[E]\big )\big (P+\delta P^{\prime }(K)[E]\big ){C}^{\top }_1\Big )-2\left (\rho K+{B}^{\top }YP{C}^{\top }_1\right ) \\ &{}=2\delta \Big (\rho E+{B}^{\top }\big (YP^{\prime }(K)[E]+Y^{\prime }(K)[E]P\big ){C}^{\top }_1\Big ), \end {aligned}$$
where
$$ \begin {aligned} \Delta P&=P(K+\delta E)-P(K)=\delta P^{\prime }(K)[E], \\ \Delta Y&=Y(K+\delta E)-Y(K)=\delta Y^{\prime }(K)[E]. \end {aligned} $$

Thus, denoting \(P^{\prime }=P^{\prime }(K)[E] \) and \(Y^{\prime }=Y^{\prime }(K)[E] \), we have

$$ \frac 12\nabla ^2_Kf(K)[E,E]= \big \langle \rho E+{B}^{\top }(YP^{\prime }+Y^{\prime }P){C}^{\top }_1,E\big \rangle .$$

Further, \(P=P(K)\) is a solution to Eq. (15); let us write this equation in increments in the direction \(E \),

$$ \left (A+B(K+\delta E)C_1\right )(P+\delta P^{\prime })+(P+\delta P^{\prime }){\left (A+B(K+\delta E)C_1\right )}^{\top }+\alpha (P+\delta P^{\prime })+\frac {1}{\alpha }D{D}^{\top }=0, $$
or
$$ \begin {aligned} (A+BKC_1)(P+\delta P^{\prime })&{}+(P+\delta P^{\prime }){(A+BKC_1)}^{\top }\\ &{}+\alpha (P+\delta P^{\prime })+\delta \left (BEC_1P+P{(BEC_1)}^{\top }\right )+\frac {1}{\alpha }D{D}^{\top }=0. \end {aligned}$$
Subtracting Eq. (6) from the resulting relation, we arrive at Eq. (22).

Further, \(Y=Y(K)\) is a solution of the Lyapunov equation (20); let us write this equation in increments in the direction \(E\),

$$ {\left (A+B(K+\delta E)C_1\right )}^{\top }(Y+\delta Y^{\prime })+(Y+\delta Y^{\prime })\left (A+B(K+\delta E)C_1\right )+\alpha (Y+\delta Y^{\prime })+{C}^{\top }_2C_2=0, $$
or
$$ \begin {aligned} {(A+BKC_1)}^{\top }(Y+\delta Y^{\prime })&{}+(Y+\delta Y^{\prime })(A+BKC_1)+\\ &{}+\alpha (Y+\delta Y^{\prime })+\delta \left ({(BEC_1)}^{\top }Y+YBEC_1\right )+{C}^{\top }_2C_2=0. \end {aligned} $$
Subtracting Eq. (8) from the resulting relation, we have
$$ {\left (A+BKC_1+\frac {\alpha }{2}I\right )}^{\top }Y^{\prime }+Y^{\prime }\left (A+BKC_1+\frac {\alpha }{2}I\right )+{(BEC_1)}^{\top }Y+YBEC_1=0.$$
(A.5)

From (22) and (A.5) we have the relation

$$ \mathrm{tr}\thinspace P^{\prime }YBEC_1=\mathrm{tr}\thinspace Y^{\prime }BEC_1P;$$
thus,
$$ \frac 12\nabla ^2_Kf(K)[E,E]= \rho \langle E,E\rangle +\big \langle {B}^{\top }(YP^{\prime }+Y^{\prime }P){C}^{\top }_1,E\big \rangle = \rho \langle E,E\rangle +2\langle {B}^{\top }YP^{\prime }{C}^{\top }_1,E\rangle .$$
The proof of Lemma 5 is complete. \(\quad \blacksquare \)
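
The same finite-difference methodology applies to the Hessian action (21). The following sketch (illustrative: random data, \(K=0 \), a fixed admissible \(\alpha \)) compares the formula with a second divided difference of \(f \) along a random direction \(E \).

```python
# Finite-difference check of the Hessian action formula of Lemma 5.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(6)
n, p, l, m = 4, 2, 3, 2
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # make A Hurwitz
B = rng.standard_normal((n, p)); C1 = rng.standard_normal((l, n))
C2 = rng.standard_normal((m, n)); D = rng.standard_normal((n, 2))
K = np.zeros((p, l)); rho, alpha = 0.1, 0.3
E = rng.standard_normal((p, l)); E /= np.linalg.norm(E)       # ||E||_F = 1

def f(K):
    Aa = A + B @ K @ C1 + alpha / 2 * np.eye(n)
    P = solve_continuous_lyapunov(Aa, -D @ D.T / alpha)
    return np.trace(C2 @ P @ C2.T) + rho * np.linalg.norm(K) ** 2

Aa = A + B @ K @ C1 + alpha / 2 * np.eye(n)
P = solve_continuous_lyapunov(Aa, -D @ D.T / alpha)
Y = solve_continuous_lyapunov(Aa.T, -C2.T @ C2)
M = B @ E @ C1 @ P
Pp = solve_continuous_lyapunov(Aa, -(M + M.T))                # Eq. (22) for P'
hess = 2 * (rho * np.sum(E * E) + 2 * np.sum((B.T @ Y @ Pp @ C1.T) * E))
h = 1e-4
print(hess, (f(K + h * E) - 2 * f(K) + f(K - h * E)) / h**2)  # nearly equal
```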

Corollary A.1.

The action of the Hessian of the function \( f(K)\) on a matrix \( E\in \mathbb R^{p\times l}\) such that \(\|E\|_F=1\) satisfies the estimate

$$ \frac 12\sup _{\|E\|_F=1}|\nabla ^2_Kf(K)[E,E]|\leqslant \rho +2\|P^{\prime }\|_F\|Y\|\|B\|_F\|C_1\|.$$

Proof of Corollary A.1. According to (21),

$$ \begin {aligned} \frac 12\sup _{\|E\|_F=1}\big |\nabla ^2_Kf(K)[E,E]\big |&\leqslant \sup _{\|E\|_F=1}\rho \langle E,E\rangle +2\sup _{\|E\|_F=1}\big |\langle {B}^{\top }YP^{\prime }{C}^{\top }_1,E\rangle \big | \\ &{}=\rho \sup _{\|E\|_F=1}\|E\|^2_F+2\sup _{\|E\|_F=1}\big |\langle P^{\prime },YBEC_1\rangle \big |\\ &{}\leqslant \rho +2\|P^{\prime }\|_F\sup _{\|E\|_F=1}\|YBEC_1\|_F \leqslant \rho +2\|P^{\prime }\|_F\|Y\|\|B\|_F\|C_1\|, \end {aligned} $$
because, considering Lemma A.2, we have
$$ \|YBEC_1\|_F\leqslant \|Y\|\|B\|_F\|E\|_F\|C_1\|. $$
The proof of Corollary A.1 is complete. \(\quad \blacksquare \)

Proof of Lemma 6. According to Corollary A.1, it suffices to estimate from above the quantity

$$ \rho +2\|P^{\prime }\|_F\|Y\|\|B\|_F\|C_1\|.$$
We have the following estimate for \(\|Y\|\):
$$ \begin {aligned} \frac {1}{\alpha }\lambda _{\min }(D{D}^{\top })\|Y\|&\leqslant \frac {1}{\alpha }\lambda _{\min }(D{D}^{\top })\mathrm{tr}\thinspace Y\leqslant \frac {1}{\alpha }\mathrm{tr}\thinspace YD{D}^{\top }=\mathrm{tr}\thinspace Y\frac {1}{\alpha }D{D}^{\top } \\ &{}=\mathrm{tr}\thinspace P{C}^{\top }_2C_2 =\mathrm{tr}\thinspace C_2P{C}^{\top }_2=f(K)-\rho \|K\|_F^2\leqslant f(K)\leqslant f(K_0); \end {aligned}$$
hence
$$ \|Y\|\leqslant \frac {\alpha }{\lambda _{\min }(D{D}^{\top })}f(K_0).$$
(A.6)

The estimate for \(\alpha \) is established as follows (recall that \(\rho \|K\|^2_F\leqslant f(K) \), whence \(\|K\|_F\leqslant \sqrt {f(K)/\rho } \)):

$$ \begin {aligned} \alpha &<2\sigma (A+BKC_1)\leqslant 2\|A+BKC_1\| \\ &\leqslant 2\big (\|A\|+\|B\|\|K\|\|C_1\|\big )\leqslant 2\big (\|A\|+\|B\|\|K\|_F\|C_1\|\big ) \\ &\leqslant 2\big (\|A\|+\|B\|\sqrt {f(K)/\rho }\|C_1\|\big )\leqslant 2\big (\|A\|+\sqrt {f(K_0)/\rho }\|B\|\|C_1\|\big ); \end {aligned}$$
thus,
$$ \|Y\|\leqslant 2\frac {\|A\|+\sqrt {f(K_0)/\rho }\|B\|\|C_1\|}{\lambda _{\min }(D{D}^{\top })}f(K_0). $$

Now let us estimate \(\|P\|\) from above,

$$ \lambda _{\min }({C}^{\top }_2C_2)\|P\|\leqslant \mathrm{tr}\thinspace (C_2P{C}^{\top }_2)=f(K)-\rho \|K\|^2_F\leqslant f(K)\leqslant f(K_0);$$
hence
$$ \|P\|\leqslant \frac {f(K_0)}{\lambda _{\min }({C}^{\top }_2C_2)}. $$

Finally, let us estimate \(\|P^{\prime }\|_F\) from above. Considering Lemma A.2, note that

$$ \begin {aligned} &{}\lambda _{\max }\left (BEC_1P+P{(BEC_1)}^{\top }\right )=\big \|BEC_1P+P{(BEC_1)}^{\top }\big \|\leqslant \big \|P^2+BEC_1{(BEC_1)}^{\top }\big \| \\ &\qquad \qquad {}\leqslant \|P\|^2+\|B\|^2\|C_1\|^2\|E\|_F^2\leqslant \frac {f^2(K_0)}{\lambda ^2_{\min }({C}^{\top }_2C_2)}+\|B\|^2\|C_1\|^2=\xi \frac {1}{\alpha }\lambda _{\min }(D{D}^{\top }) \end {aligned} $$
for
$$ \xi =\frac {\alpha }{\lambda _{\min }(D{D}^{\top })}\left (\frac {f^2(K_0)}{\lambda ^2_{\min }({C}^{\top }_2C_2)}+\|B\|^2\|C_1\|^2\right ). $$
Therefore, for the solution \(P^{\prime } \) of the Lyapunov equation (22) one has the estimate
$$ \begin {aligned} P^{\prime }&{}\preccurlyeq \xi P\preccurlyeq \frac {\alpha }{\lambda _{\min }(D{D}^{\top })}\left (\frac {f^2(K_0)}{\lambda ^2_{\min }({C}^{\top }_2C_2)}+\|B\|^2\|C_1\|^2\right )\frac {f(K_0)}{\lambda _{\min }({C}^{\top }_2C_2)}I \\ &{}\preccurlyeq 2f(K_0)\frac {\|A\|+\sqrt {f(K_0)/\rho }\|B\|\|C_1\|}{\lambda _{\min }(D{D}^{\top })\lambda _{\min }({C}^{\top }_2C_2)}\left (\frac {f^2(K_0)}{\lambda ^2_{\min }({C}^{\top }_2C_2)}+\|B\|^2\|C_1\|^2\right )I, \end {aligned} $$
and hence
$$ \|P^{\prime }\|_F\leqslant 2\sqrt {n} f(K_0) \frac {\|A\| + \sqrt {f(K_0)/\rho }\|B\|\|C_1\|} {\lambda _{\min }(D{D}^{\top })\lambda _{\min }({C}^{\top }_2C_2) } \left ( \frac {f^2(K_0)}{\lambda ^2_{\min }({C}^{\top }_2C_2) } + \|B\|^2\|C_1\|^2 \right ) .$$
(A.7)
Considering estimates (A.6) and (A.7), we obtain the quantity (23). The proof of Lemma 6 is complete. \(\quad \blacksquare \)
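
As a sanity check, the intermediate bounds (A.6) and (A.7) can be evaluated numerically. The sketch below is illustrative (square nonsingular \(C_2 \) and \(D \) are assumed so that the minimal eigenvalues involved are positive; \(K=K_0=0 \), \(\alpha \) admissible); it prints each bound next to the corresponding computed norm.

```python
# Numerical check of the bounds (A.6) and (A.7) from the proof of Lemma 6.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(7)
n, p, l = 4, 2, 3
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)    # make A Hurwitz
B = rng.standard_normal((n, p)); C1 = rng.standard_normal((l, n))
C2 = rng.standard_normal((n, n)); D = rng.standard_normal((n, n))
K = np.zeros((p, l)); rho, alpha = 0.1, 0.3

Aa = A + B @ K @ C1 + alpha / 2 * np.eye(n)
P = solve_continuous_lyapunov(Aa, -D @ D.T / alpha)           # Eq. (15)
Y = solve_continuous_lyapunov(Aa.T, -C2.T @ C2)               # Eq. (20)
f0 = np.trace(C2 @ P @ C2.T) + rho * np.linalg.norm(K) ** 2   # f(K_0)
lD = np.linalg.eigvalsh(D @ D.T)[0]                           # min eig of DD'
lC = np.linalg.eigvalsh(C2.T @ C2)[0]                         # min eig of C2'C2
print(np.linalg.norm(Y, 2), "<=", alpha * f0 / lD)            # bound (A.6)

E = rng.standard_normal((p, l)); E /= np.linalg.norm(E)
M = B @ E @ C1 @ P
Pp = solve_continuous_lyapunov(Aa, -(M + M.T))                # Eq. (22)
bB, bC1 = np.linalg.norm(B, 2), np.linalg.norm(C1, 2)
bound = (2 * np.sqrt(n) * f0
         * (np.linalg.norm(A, 2) + np.sqrt(f0 / rho) * bB * bC1)
         / (lD * lC) * (f0**2 / lC**2 + bB**2 * bC1**2))
print(np.linalg.norm(Pp), "<=", bound)                        # bound (A.7)
```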

Proof of Theorem 3. First of all, Algorithm 1 is well defined at the starting point, since \(K_0 \) is a stabilizing controller by assumption. Further, for sufficiently small \(\gamma _j \) the algorithm decreases \(f(K) \) monotonically (the motion is along the antigradient); hence the iterates \(K_j\) remain in the domain \(\mathcal S_0\), and we can apply the results of Lemma 6 on the Lipschitz property of the gradient.

Thus, the results of [18] on the convergence of the gradient method for unconstrained minimization are applicable. In particular, condition (b) at step 3 of Algorithm 1 will be satisfied after a finite number of step subdivisions, and the gradient method converges at a linear rate. The proof of Theorem 3 is complete. \(\quad \blacksquare \)
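
For completeness, here is a minimal sketch of a gradient iteration of the kind analyzed in Theorem 3. The backtracking rule and the inner bounded search over \(\alpha \) are our assumptions and may differ in details from Algorithm 1 in the main text.

```python
# Sketch of a gradient method for min f(K) = tr(C2 P C2') + rho ||K||_F^2.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize_scalar

def f_and_grad(K, A, B, C1, C2, D, rho):
    n = A.shape[0]
    AK = A + B @ K @ C1
    s = -np.max(np.linalg.eigvals(AK).real)          # stability margin sigma
    if s <= 0:
        return np.inf, None                          # K is not stabilizing
    def tr_part(alpha):                              # tr(C2 P(alpha) C2')
        Aa = AK + alpha / 2 * np.eye(n)
        P = solve_continuous_lyapunov(Aa, -D @ D.T / alpha)
        return np.trace(C2 @ P @ C2.T)
    res = minimize_scalar(tr_part, bounds=(1e-8, 2 * s * (1 - 1e-8)),
                          method="bounded")          # inner problem, cf. Lemma 1
    alpha = res.x
    Aa = AK + alpha / 2 * np.eye(n)
    P = solve_continuous_lyapunov(Aa, -D @ D.T / alpha)
    Y = solve_continuous_lyapunov(Aa.T, -C2.T @ C2)
    grad = 2 * (rho * K + B.T @ Y @ P @ C1.T)        # gradient from Lemma 4
    return res.fun + rho * np.linalg.norm(K) ** 2, grad

def gradient_method(K0, A, B, C1, C2, D, rho=0.1, iters=50):
    K = K0                                           # must be stabilizing
    fK, g = f_and_grad(K, A, B, C1, C2, D, rho)
    for _ in range(iters):
        gamma = 1.0
        for _ in range(60):                          # backtracking on the step
            fN, gN = f_and_grad(K - gamma * g, A, B, C1, C2, D, rho)
            if fN < fK:                              # descent step found
                break
            gamma /= 2
        else:
            return K, fK                             # no descent: stop
        K, fK, g = K - gamma * g, fN, gN
    return K, fK
```

On random stabilizable data such as that of the preceding sketches, this iteration decreases \(f(K) \) monotonically, in agreement with the argument above.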

Cite this article

Polyak, B.T., Khlebnikov, M.V. Static Controller Synthesis for Peak-to-Peak Gain Minimization as an Optimization Problem. Autom Remote Control 82, 1530–1553 (2021). https://doi.org/10.1134/S0005117921090034
