
A Comparison of Guaranteeing and Kalman Filters

  • STOCHASTIC SYSTEMS

Abstract

We propose a new approach to filtering under arbitrary bounded exogenous disturbances based on reducing this problem to an optimization problem. The approach has low computational complexity since only Lyapunov equations are solved at each iteration. At the same time, it offers advantages that are essential from a practical engineering standpoint, namely, the possibility of bounding the filter matrix and of constructing optimal filter matrices separately for each coordinate of the system’s state vector. A gradient method for finding the filter matrix is presented. As the examples demonstrate, the proposed recurrence procedure is rather effective and yields quite satisfactory results. This paper continues a series of research works devoted to feedback control design from an optimization perspective.


Notes

  1. It actually takes 3–4 iterations to obtain a solution with high accuracy if the starting point is not too close to the boundaries of the interval \(({{\rho }^{2}}(A - {{L}_{{j + 1}}}C),\;1)\).

REFERENCES

  1. Kalman, R.E., A New Approach to Linear Filtering and Prediction Problems, J. Basic Engineer., 1960, vol. 82, no. 1, pp. 35–45.


  2. Kailath, T., Sayed, A.H., and Hassibi, B., Linear Estimation, New Jersey: Prentice Hall, 2000.


  3. Matasov, A.I., Osnovy teorii fil’tra Kalmana (Foundations of Kalman Filter Theory), Moscow: Mosk. Gos. Univ., 2021.

  4. Schweppe, F.C., Uncertain Dynamic Systems, New Jersey: Prentice Hall, 1973.


  5. Kurzhanskii, A.B., Upravlenie i nablyudenie v usloviyakh neopredelennosti (Control and Observation under Uncertainty), Moscow: Nauka, 1977.

  6. Chernous’ko, F.L., State Estimation for Dynamic Systems, Boca Raton: CRC Press, 1994.


  7. Polyak, B.T. and Topunov, M.V., Filtering under Nonrandom Disturbances: The Method of Invariant Ellipsoids, Dokl. Math., 2008, vol. 77, no. 1, pp. 158–162.


  8. Khlebnikov, M.V. and Polyak, B.T., Filtering under Arbitrary Bounded Exogenous Disturbances: The Technique of Linear Matrix Inequalities, The 13th Multiconference on Control Problems (MCCP 2020), Proceedings of the 32nd Conference in Memory of Nikolay Ostryakov, St. Petersburg, October 6–8, 2020, Concern CSRI Elektropribor, pp. 291–294.

  9. Boyd, S., El Ghaoui, L., Feron, E., and Balakrishnan, V., Linear Matrix Inequalities in System and Control Theory, Philadelphia: SIAM, 1994.


  10. Polyak, B.T., Khlebnikov, M.V., and Shcherbakov, P.S., Upravlenie lineinymi sistemami pri vneshnikh vozmu-shcheniyakh: tekhnika lineinykh matrichnykh neravenstv (Control of Linear Systems under Exogenous Disturbances: The Technique of Linear Matrix Inequalities), Moscow: LENAND, 2014.

  11. Fazel, M., Ge, R., Kakade, S., and Mesbahi, M., Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator, Proc. 35th Int. Conf. Machine Learning, Stockholm, July 10–15, 2018, vol. 80, pp. 1467–1476.

  12. Mohammadi, H., Zare, A., Soltanolkotabi, M., and Jovanović, M.R., Global Exponential Convergence of Gradient Methods over the Nonconvex Landscape of the Linear Quadratic Regulator, Proc. 2019 IEEE 58th Conf. Decision Control, Nice, December 11–13, 2019, pp. 7474–7479.

  13. Zhang, K., Hu, B., and Başar, T., Policy Optimization for \({{\mathcal{H}}_{2}}\) Linear Control with \({{\mathcal{H}}_{\infty }}\) Robustness Guarantee: Implicit Regularization and Global Convergence, Proc. 2nd Conference on Learning for Dynamics and Control (2nd L4DC), Zürich, June 11–12, 2020, pp. 179–190.

  14. Bu, J., Mesbahi, A., Fazel, M., and Mesbahi, M., LQR through the Lens of First Order Methods: Discrete-Time Case, arXiv:1907.08921, 2019.

  15. Fatkhullin, I. and Polyak, B., Optimizing Static Linear Feedback: Gradient Method, SIAM J. Control Optim., 2021, vol. 59, no. 5, pp. 3887–3911.


  16. Polyak, B.T. and Khlebnikov, M.V., Static Controller Synthesis for Peak-to-Peak Gain Minimization as an Optimization Problem, Autom. Remote Control, 2021, vol. 82, no. 9, pp. 1530–1553.


  17. Polyak, B.T. and Khlebnikov, M.V., Observer-Aided Output Feedback Synthesis as an Optimization Problem, Autom. Remote Control, 2022, vol. 83, no. 3, pp. 303–324.


  18. Polyak, B.T. and Khlebnikov, M.V., New Criteria for Tuning PID Controllers, Autom. Remote Control, 2022, vol. 83, no. 11, pp. 1724–1741.


  19. Luenberger, D.G., Observing the State of a Linear System, IEEE Transactions on Military Electronics, 1964, vol. 8, pp. 74–80.


  20. Luenberger, D.G., An Introduction to Observers, IEEE Trans. Autom. Control, 1971, vol. 16, no. 6, pp. 596–602.


  21. Polyak, B.T., Khlebnikov, M.V., and Shcherbakov, P.S., Linear Matrix Inequalities in Control Systems with Uncertainty, Autom. Remote Control, 2021, vol. 82, no. 1, pp. 1–40.


  22. Nazin, S.A., Polyak, B.T., and Topunov, M.V., Rejection of Bounded Exogenous Disturbances by the Method of Invariant Ellipsoids, Autom. Remote Control, 2007, vol. 68, no. 3, pp. 467–486.


  23. https://en.wikipedia.org/wiki/Kalman_filter

  24. Humpherys, J., Redd, P., and West, J., A Fresh Look at the Kalman Filter, SIAM Rev., 2012, vol. 54, no. 4, pp. 801–823.


  25. Tang, W., Zhang, Q., Wang, Z., and Shen, Y., Ellipsoid Bundle and Its Application to Set-Membership Estimation, IFAC-PapersOnLine, 2020, vol. 53, no. 2, pp. 13688–13693.


  26. Tang, W., Zhang, Q., Wang, Z., and Shen, Y., Set-Membership Filtering with Incomplete Observations, Inform. Sci., 2020, vol. 517, pp. 37–51.


  27. Polyak, B.T., Nazin, S.A., Durieu, C., and Walter, E., Ellipsoidal Parameter or State Estimation under Model Uncertainty, Automatica, 2004, vol. 40, no. 7, pp. 1171–1179.


  28. Durieu, C., Walter, E., and Polyak, B., Multi-Input Multi-Output Ellipsoidal State Bounding, J. Optim. Theory Appl., 2001, vol. 111, no. 2, pp. 273–303.


  29. Kwon, W.H., Moon, Y.S., and Ahn, S.C., Bounds in Algebraic Riccati and Lyapunov Equations: A Survey and Some New Results, Int. J. Control, 1996, vol. 64, pp. 377–389.



Funding

This work was partially financially supported by the Russian Science Foundation, project no. 21-71-30005, https://rscf.ru/en/project/21-71-30005/.

Author information


Corresponding author

Correspondence to M. V. Khlebnikov.

Additional information

This paper is dedicated to the blessed memory of Boris Polyak, the author’s teacher and friend

This paper was recommended for publication by E.Ya. Rubinovich, a member of the Editorial Board

Appendices

APPENDIX A

Lemma A.1. Let X and Y be the solutions of the dual discrete Lyapunov equations with a Schur matrix A:

$${{A}^{{\text{T}}}}XA - X + W = 0\quad and\quad AY{{A}^{{\text{T}}}} - Y + V = 0.$$

Then tr(XV) = tr(YW).

Proof of Lemma A.1. Indeed, direct calculations give

$$\begin{gathered} \operatorname{tr} (XV) = \operatorname{tr} \left( {X(Y - AY{{A}^{{\text{T}}}})} \right) = \operatorname{tr} (XY) - \operatorname{tr} \left( {XAY{{A}^{{\text{T}}}}} \right) \\ = \operatorname{tr} (YX) - \operatorname{tr} \left( {Y{{A}^{{\text{T}}}}XA} \right) = \operatorname{tr} \left( {Y(X - {{A}^{{\text{T}}}}XA)} \right) = \operatorname{tr} (YW). \\ \end{gathered} $$

The proof of Lemma A.1 is complete.
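
The identity of Lemma A.1 is easy to check numerically. The sketch below is an illustration, not part of the original exposition; the matrix size and random seed are arbitrary, and SciPy's discrete Lyapunov solver is used:

```python
# Numerical sanity check of Lemma A.1 (a sketch with arbitrary test data).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A *= 0.9 / abs(np.linalg.eigvals(A)).max()    # rescale so that A is Schur
W = rng.standard_normal((n, n)); W = W @ W.T  # symmetric right-hand sides
V = rng.standard_normal((n, n)); V = V @ V.T

# solve_discrete_lyapunov(a, q) solves a X a^T - X + q = 0
X = solve_discrete_lyapunov(A.T, W)   # A^T X A - X + W = 0
Y = solve_discrete_lyapunov(A, V)     # A Y A^T - Y + V = 0

print(np.trace(X @ V), np.trace(Y @ W))  # the two traces coincide
```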

Lemma A.2. The solution P of the discrete Lyapunov equation

$$AP{{A}^{{\text{T}}}} - P + Q = 0$$

with a Schur matrix A and Q \( \succ \) 0 satisfies the lower bounds

$${{\lambda }_{{\max }}}(P)\; \geqslant \;\frac{{{{\lambda }_{{\min }}}(Q)}}{{1 - {{\rho }^{2}}}},\quad {{\lambda }_{{\min }}}(P)\; \geqslant \;\frac{{{{\lambda }_{{\min }}}(Q)}}{{1 - \sigma _{{\min }}^{2}(A)}},$$
(A.1)

where ρ = \(\mathop {\max }\limits_i \left| {{{\lambda }_{i}}(A)} \right|\) and \(\sigma_{\min }(A)\) is the smallest singular value of the matrix A.

If Q = DDT and the pair (A, D) is controllable, then

$${{\lambda }_{{\max }}}(P)\; \geqslant \;\frac{{{{{\left\| {u\text{*}{\kern 1pt} D} \right\|}}^{2}}}}{{1 - {{\rho }^{2}}}} > 0,$$
(A.2)

where

$$u\text{*}{\kern 1pt} A = \lambda u\text{*},\quad \left| \lambda \right| = \rho ,\quad \left\| u \right\| = 1,$$

i.e., u is the left eigenvector of the matrix A corresponding to the eigenvalue λ of the matrix A with the greatest magnitude. The vector u and the value λ can be complex; here, u* denotes the conjugate transpose of u.

Proof of Lemma A.2. The lower bounds (A.1) are well known; for example, see [29]. Let us prove (A.2). The explicit solution of the discrete Lyapunov equation with a Schur matrix A has the form

$$P = \sum\limits_{k = 0}^\infty {{{A}^{k}}D{{D}^{{\text{T}}}}{{{({{A}^{{\text{T}}}})}}^{k}}.} $$

Multiplying this equality by u on the right and by u* on the left, due to u*Ak = λku* and (AT)ku = (λ*)ku, we obtain

$${{\lambda }_{{\max }}}(P)\; \geqslant \;u\text{*}{\kern 1pt} Pu = \sum\limits_{k = 0}^\infty {u\text{*}{\kern 1pt} {{A}^{k}}D{{D}^{{\text{T}}}}{{{({{A}^{{\text{T}}}})}}^{k}}u} = \sum\limits_{k = 0}^\infty {{{{(\lambda \lambda \text{*})}}^{k}}u\text{*}{\kern 1pt} D{{D}^{{\text{T}}}}u} = \frac{{{{{\left\| {u\text{*}{\kern 1pt} D} \right\|}}^{2}}}}{{1 - {{\rho }^{2}}}},$$

where ||u*D|| > 0 by the controllability of the pair (A, D); for example, see [10, Theorem D.1.5]. The proof of Lemma A.2 is complete.
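
The bound (A.2) can be illustrated in the same spirit. In the sketch below (arbitrary test data, not from the paper), the left eigenvector u is recovered from the eigendecomposition of AT:

```python
# Numerical illustration of the bound (A.2) (a sketch with arbitrary data).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(1)
n, m = 5, 2
A = rng.standard_normal((n, n))
A *= 0.8 / abs(np.linalg.eigvals(A)).max()   # Schur matrix with rho = 0.8
D = rng.standard_normal((n, m))

P = solve_discrete_lyapunov(A, D @ D.T)      # A P A^T - P + D D^T = 0

vals, vecs = np.linalg.eig(A.T)              # A^T w = lambda w, so w^T A = lambda w^T
k = np.argmax(abs(vals))
rho, w = abs(vals[k]), vecs[:, k]            # u = conj(w) satisfies u* A = lambda u*

lower = np.linalg.norm(w @ D) ** 2 / (1 - rho ** 2)   # ||u* D||^2 / (1 - rho^2)
print(np.linalg.eigvalsh(P).max(), ">=", lower)
```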

Now, we optimize the function  f(α) and consider the problem

$$\min f(\alpha ),\quad f(\alpha ) = \operatorname{tr} CP{{C}^{{\text{T}}}}$$

subject to the constraint

$$\frac{1}{\alpha }AP{{A}^{{\text{T}}}} - P + \frac{1}{{1 - \alpha }}D{{D}^{{\text{T}}}} = 0$$

for the matrix variable \(P = {{P}^{{\text{T}}}} \in {{\mathbb{R}}^{{n \times n}}}\) and the scalar parameter 0 < α < 1.

Here we impose a more stringent requirement on the problem statement: the output matrix C of the system is assumed to be square and nonsingular. This assumption could be relaxed, but the current goal is to establish the simplest and most transparent results.

Lemma A.3. Assume that A is a Schur matrix, ρ is the spectral radius of A, \({{\rho }^{2}} < \alpha < 1\), the pair (A, D) is controllable, and the matrix C is such that \({{C}^{{\text{T}}}}C\) \( \succ \) 0. Then the function f(α) = tr CP(α)CT possesses the following properties:

(a) The function f(α) is well-defined, positive, and strongly convex on the interval \({{\rho }^{2}} < \alpha < 1\), and its values tend to infinity at the interval endpoints. Moreover, there exists a constant c > 0 such that

$$f(\alpha )\; \geqslant \;\frac{\alpha }{{(1 - \alpha )(\alpha - {{\rho }^{2}})}}c,\quad {{\rho }^{2}} < \alpha < 1;$$
(A.3)

(b) The function f(α) has the derivative

$$f{\kern 1pt} '(\alpha ) = \operatorname{tr} Y\left( {\frac{1}{{{{{(1 - \alpha )}}^{2}}}}D{{D}^{{\text{T}}}} - \frac{1}{{{{\alpha }^{2}}}}AP{{A}^{{\text{T}}}}} \right),$$

where P and Y are the solutions of the discrete Lyapunov equations

$$\frac{1}{\alpha }AP{{A}^{{\text{T}}}} - P + \frac{1}{{1 - \alpha }}D{{D}^{{\text{T}}}} = 0$$
(A.4)

and

$$\frac{1}{\alpha }{{A}^{{\text{T}}}}YA - Y + {{C}^{{\text{T}}}}C = 0,$$
(A.5)

respectively.

(c) The second derivative of the function f(α) is given by

$$f{\kern 1pt} ''(\alpha ) = 2\operatorname{tr} Y\left( {\frac{1}{{{{{(1 - \alpha )}}^{3}}}}D{{D}^{{\text{T}}}} + \frac{1}{{{{\alpha }^{3}}}}A(P - \alpha X){{A}^{{\text{T}}}}} \right),$$

where P, Y, and X satisfy the discrete Lyapunov equations (A.4), (A.5), and

$$\frac{1}{\alpha }AX{{A}^{{\text{T}}}} - X + \frac{1}{{{{{(1 - \alpha )}}^{2}}}}D{{D}^{{\text{T}}}} - \frac{1}{{{{\alpha }^{2}}}}AP{{A}^{{\text{T}}}} = 0,$$

respectively. Moreover, f ''(α*) > 0, and f ''(α) is monotonically increasing on both sides of α*.

Proof of Lemma A.3. (a) Equation (6) can be written as

$$\left( {\frac{1}{{\sqrt \alpha }}A} \right)P{{\left( {\frac{1}{{\sqrt \alpha }}A} \right)}^{{\text{T}}}} - P = - \frac{1}{{1 - \alpha }}D{{D}^{{\text{T}}}};$$

according to [10, Lemma 1.2.6], there exists a unique solution if and only if \(\frac{1}{{\sqrt \alpha }}A\) is a Schur matrix: \(\left| {{{\lambda }_{i}}\left( {\frac{1}{{\sqrt \alpha }}A} \right)} \right|\) < 1, i.e., under the condition \({{\rho }^{2}} < \alpha < 1\).

We estimate the value \(f(\alpha ) = \operatorname{tr} CP(\alpha ){{C}^{{\text{T}}}}\) using Lemma A.2 with obvious changes:

$$\begin{gathered} f(\alpha ) = \operatorname{tr} CP(\alpha ){{C}^{{\text{T}}}}\; \geqslant \;{{\lambda }_{{\min }}}({{C}^{{\text{T}}}}C){{\lambda }_{{\max }}}(P(\alpha )) \\ \geqslant \;\frac{{{{{\left\| {u\text{*}{\kern 1pt} D} \right\|}}^{2}}{{\lambda }_{{\min }}}({{C}^{{\text{T}}}}C)}}{{(1 - \alpha )\left( {1 - {{\rho }^{2}}\left( {A{\text{/}}\sqrt \alpha } \right)} \right)}} = \frac{\alpha }{{(1 - \alpha )(\alpha - {{\rho }^{2}})}}{{\left\| {u\text{*}{\kern 1pt} D} \right\|}^{2}}{{\lambda }_{{\min }}}({{C}^{{\text{T}}}}C). \\ \end{gathered} $$

Here, u has the same meaning as in Lemma A.2 and the value ||u*D||2 is positive by the controllability of the pair (A/\(\sqrt \alpha \), D). (It follows from the controllability of (A, D).)

Now we show that the function \(f(\alpha ) = \operatorname{tr} CP(\alpha ){{C}^{{\text{T}}}}\) is strictly convex on the interval \(({{\rho }^{2}},1)\). According to [10, Lemma 1.2.6], the solution of Eq. (A.4) can be explicitly represented as

$$P(\alpha ) = \sum\limits_{k = 0}^\infty {{{{\left( {\frac{1}{{\sqrt \alpha }}A} \right)}}^{k}}\frac{1}{{1 - \alpha }}D{{D}^{{\text{T}}}}{{{\left( {\frac{1}{{\sqrt \alpha }}{{A}^{{\text{T}}}}} \right)}}^{k}}} = \sum\limits_{k = 0}^\infty {\underbrace {\frac{1}{{(1 - \alpha ){{\alpha }^{k}}}}}_{g(\alpha ,k)}\underbrace {{{A}^{k}}D{{D}^{{\text{T}}}}{{{({{A}^{{\text{T}}}})}}^{k}}}_{{{H}_{k}}}.} $$

But \({{H}_{k}} \succcurlyeq 0\) and g(α, k) > 0 for 0 < α < 1; therefore, by the controllability of the pair (A, D), on the interval \(({{\rho }^{2}},1)\) we have

$$P(\alpha ) = \sum\limits_{k = 0}^\infty {g(\alpha ,k){{H}_{k}}} \succ 0$$

and

$$f(\alpha ) = \operatorname{tr} P(\alpha ){{C}^{{\text{T}}}}C > 0.$$

Direct calculations give

$$g{\kern 1pt} '(\alpha ,k) = \left( {\frac{1}{{1 - \alpha }} - \frac{k}{\alpha }} \right)g(\alpha ,k),$$
$$g{\kern 1pt} ''(\alpha ,k) = \left( {{{{\left( {\frac{1}{{1 - \alpha }} - \frac{k}{\alpha }} \right)}}^{2}} + \frac{1}{{{{{(1 - \alpha )}}^{2}}}} + \frac{k}{{{{\alpha }^{2}}}}} \right)g(\alpha ,k)\; \geqslant \;\frac{1}{{{{{(1 - \alpha )}}^{2}}}}g(\alpha ,k).$$

(Here, differentiation is performed with respect to α.) As a result,

$$f{\kern 1pt} ''(\alpha ) = \sum\limits_{k = 0}^\infty {g{\kern 1pt} ''(\alpha ,k)\operatorname{tr} C{{H}_{k}}{{C}^{{\text{T}}}}} \; \geqslant \;\frac{1}{{{{{(1 - \alpha )}}^{2}}}}f(\alpha )\; \geqslant \;\frac{1}{{{{{(1 - {{\rho }^{2}})}}^{2}}}}f(\alpha \text{*}) > 0.$$

Thus, the second derivative of the function f(α) is positive and tends to infinity at the endpoints of the interval \(({{\rho }^{2}},1)\).

Next, using the partial fraction expansion \(g(\alpha ,k) = \sum\nolimits_{j = 1}^k {{{\alpha }^{{ - j}}}} + {{(1 - \alpha )}^{{ - 1}}}\) and directly calculating the fourth derivative, we obtain

$${{g}^{{(IV)}}}(\alpha ,k) = \sum\limits_{j = 1}^{k} {\frac{{j(j + 1)(j + 2)(j + 3)}}{{{{\alpha }^{{j + 4}}}}}} + \frac{{24}}{{{{{(1 - \alpha )}}^{5}}}}\; \geqslant \;\frac{{24}}{{{{{(1 - \alpha )}}^{5}}}},$$

so

$$\begin{gathered} {{f}^{{(IV)}}}(\alpha ) = \sum\limits_{k = 0}^\infty {{{g}^{{(IV)}}}(\alpha ,k)\operatorname{tr} C{{H}_{k}}{{C}^{{\text{T}}}}} \\ \geqslant \;\frac{{24}}{{{{{(1 - \alpha )}}^{5}}}}\sum\limits_{k = 0}^\infty {\operatorname{tr} C{{H}_{k}}{{C}^{{\text{T}}}}} > \frac{{24}}{{{{{(1 - {{\rho }^{2}})}}^{5}}}}\sum\limits_{k = 0}^\infty {\operatorname{tr} C{{H}_{k}}{{C}^{{\text{T}}}}} > 0, \\ \end{gathered} $$

i.e., the second derivative f ''(α) is convex and increases without bound toward the interval endpoints.

(b) Let us derive the formula for the derivative of  f(α). In Eq. (A.4), the solution P is a function of α. We differentiate this equation, interpreting P ' as the derivative with respect to α:

$$\frac{1}{\alpha }AP{\kern 1pt} '{\kern 1pt} {{A}^{{\text{T}}}} - P{\kern 1pt} '\; + \frac{1}{{{{{(1 - \alpha )}}^{2}}}}D{{D}^{{\text{T}}}} - \frac{1}{{{{\alpha }^{2}}}}AP{{A}^{{\text{T}}}} = 0.$$
(A.6)

Applying Lemma A.1 to the dual Eqs. (A.6) and (A.5) finally yields

$$f{\kern 1pt} '(\alpha ) = \operatorname{tr} CP{\kern 1pt} '{\kern 1pt} {{C}^{{\text{T}}}} = \operatorname{tr} P{\kern 1pt} '{\kern 1pt} {{C}^{{\text{T}}}}C = \operatorname{tr} Y\left( {\frac{1}{{{{{(1 - \alpha )}}^{2}}}}D{{D}^{{\text{T}}}} - \frac{1}{{{{\alpha }^{2}}}}AP{{A}^{{\text{T}}}}} \right).$$

(c) The desired expression for the second derivative of  f(α) can be established by analogy. Differentiating Eq. (A.6) with respect to α, we have

$$\frac{1}{\alpha }AP{\kern 1pt} ''{\kern 1pt} {{A}^{{\text{T}}}} - P{\kern 1pt} ''\; + \frac{2}{{{{{(1 - \alpha )}}^{3}}}}D{{D}^{{\text{T}}}} + \frac{2}{{{{\alpha }^{3}}}}AP{{A}^{{\text{T}}}} - \frac{2}{{{{\alpha }^{2}}}}AP{\kern 1pt} '{\kern 1pt} {{A}^{{\text{T}}}} = 0.$$

Applying Lemma A.1 to this equation and Eq. (A.5) with X = P', we arrive at

$$f{\kern 1pt} ''(\alpha ) = \operatorname{tr} CP{\kern 1pt} ''{\kern 1pt} {{C}^{{\text{T}}}} = \operatorname{tr} P{\kern 1pt} ''{\kern 1pt} {{C}^{{\text{T}}}}C = 2\operatorname{tr} Y\left( {\frac{1}{{{{{(1 - \alpha )}}^{3}}}}D{{D}^{{\text{T}}}} + \frac{1}{{{{\alpha }^{3}}}}A(P - \alpha X){{A}^{{\text{T}}}}} \right).$$

The proof of Lemma A.3 is complete.

Note that the function f(α) and its two derivatives are calculated by solving three discrete Lyapunov equations.
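
For illustration, a minimal computational sketch of this step is given below; it assumes A, D, and C are available as NumPy arrays, with A Schur and \({{\rho }^{2}} < \alpha < 1\):

```python
# f(alpha), f'(alpha), f''(alpha) via three discrete Lyapunov solves
# (an illustrative sketch, not the paper's own implementation).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def f_derivatives(alpha, A, D, C):
    """Return f(alpha), f'(alpha), f''(alpha) for the problem above."""
    As = A / np.sqrt(alpha)              # (1/sqrt(alpha)) A must be Schur
    DD = D @ D.T
    P = solve_discrete_lyapunov(As, DD / (1 - alpha))             # Eq. (A.4)
    Y = solve_discrete_lyapunov(As.T, C.T @ C)                    # Eq. (A.5)
    X = solve_discrete_lyapunov(                                  # Eq. (A.6), X = P'
        As, DD / (1 - alpha) ** 2 - A @ P @ A.T / alpha ** 2)
    f = np.trace(C @ P @ C.T)
    f1 = np.trace(Y @ (DD / (1 - alpha) ** 2 - A @ P @ A.T / alpha ** 2))
    f2 = 2 * np.trace(Y @ (DD / (1 - alpha) ** 3
                           + A @ (P - alpha * X) @ A.T / alpha ** 3))
    return f, f1, f2
```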

Due to the above properties, this function can be minimized using Newton’s method. We specify an initial approximation \({{\rho }^{2}}(A) < {{\alpha }_{0}} < 1\), e.g., \({{\alpha }_{0}} = (1 + {{\rho }^{2}}(A))/2\), and apply the iterative process

$${{\alpha }_{{j + 1}}} = {{\alpha }_{j}} - \frac{{f{\kern 1pt} '({{\alpha }_{j}})}}{{f{\kern 1pt} ''({{\alpha }_{j}})}}.$$
(A.7)

The next theorem ensures the global convergence of the algorithm; it can be proved by analogy with a similar result in [16].

Theorem A.1 [16]. In the method (A.7), we have the upper bounds

$$\left| {{{\alpha }_{j}} - \alpha \text{*}} \right|\;\leqslant \;\frac{{f{\kern 1pt} ''({{\alpha }_{0}})}}{{{{2}^{j}}f{\kern 1pt} ''(\alpha \text{*})}}\left| {{{\alpha }_{0}} - \alpha \text{*}} \right|,\quad \left| {{{\alpha }_{{j + 1}}} - \alpha \text{*}} \right|\;\leqslant \;c{{\left| {{{\alpha }_{j}} - \alpha \text{*}} \right|}^{2}},$$

where c > 0 is some constant (which can be written out explicitly).

The first bound ensures the global convergence of the method (faster than a geometric progression with ratio 1/2); the second bound ensures quadratic convergence in a neighborhood of the solution. In practice, it takes 3–4 iterations to obtain a solution with high accuracy (unless the starting point is too close to the interval endpoints).
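
Building on the function f_derivatives above, the iterative process (A.7) admits the following sketch (the tolerance and iteration cap are arbitrary choices):

```python
# Newton iteration (A.7) on the interval (rho^2, 1) (an illustrative sketch).
import numpy as np

def minimize_alpha(A, D, C, tol=1e-10, max_iter=20):
    rho2 = abs(np.linalg.eigvals(A)).max() ** 2
    alpha = (1 + rho2) / 2               # starting point suggested in the text
    for _ in range(max_iter):
        _, f1, f2 = f_derivatives(alpha, A, D, C)
        step = f1 / f2                   # f''(alpha) > 0 on the whole interval
        alpha -= step
        if abs(step) < tol:              # typically 3-4 iterations suffice
            break
    return alpha
```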

Returning to the optimization problem (4)–(5), we minimize the function

$$f(L) = \mathop {\min }\limits_\alpha f(L,\alpha )$$

after a preliminary study of its properties.

Lemma A.4. The function f(L) is well-defined and positive on the set \(\mathcal{S}\) of admissible filter matrices.

Indeed, if (A − LC) is a Schur matrix, then ρ(A − LC) < 1, and for \({{\rho }^{2}}(A - LC) < \alpha < 1\) there exists a solution P \( \succcurlyeq \) 0 of the discrete Lyapunov equation (5). Thus, a strictly positive function f(L, α) is well-defined and f(L) > 0 due to (A.3). As in the continuous-time case, its domain \(\mathcal{S}\) may be nonconvex and disconnected and its boundaries nonsmooth.

Lemma A.5. On the set \(\mathcal{S}\) the function f(L) is coercive (i.e., it tends to infinity on the boundary of the domain). Moreover, the following lower bounds are valid:

$$\begin{gathered} f(L)\; \geqslant \;\frac{1}{{1 - {{\rho }^{2}}(A - LC)}}\frac{{{{\lambda }_{{\min }}}(C_{1}^{{\text{T}}}{{C}_{1}})}}{{1 - \sigma _{{\min }}^{2}(A - LC)}}\left\| {{{D}_{1}} - L{{D}_{2}}} \right\|_{F}^{2}, \\ f(L)\; \geqslant \;\rho {{\left\| L \right\|}^{2}}. \\ \end{gathered} $$
(A.8)

Proof of Lemma A.5. We consider a sequence {Lj} ⊆ \(\mathcal{S}\) of admissible matrices such that \({{L}_{j}} \to L \in \partial \mathcal{S}\), i.e., ρ(A − LC) = 1. In other words, for any ε > 0 there exists a number N = N(ε) such that

$$\left| {\rho (A - {{L}_{j}}C) - \rho (A - LC)} \right| = 1 - \rho (A - {{L}_{j}}C) < \varepsilon $$

for all j \( \geqslant \) N(ε).

Let Pj be the solution of Eq. (5) associated with the filter matrix Lj:

$$\frac{1}{{{{\alpha }_{j}}}}(A - {{L}_{j}}C){{P}_{j}}{{(A - {{L}_{j}}C)}^{{\text{T}}}} - {{P}_{j}} + \frac{1}{{1 - {{\alpha }_{j}}}}({{D}_{1}} - {{L}_{j}}{{D}_{2}}){{({{D}_{1}} - {{L}_{j}}{{D}_{2}})}^{{\text{T}}}} = 0;$$

let Yj be the solution of its dual discrete Lyapunov equation

$$\frac{1}{{{{\alpha }_{j}}}}{{(A - {{L}_{j}}C)}^{{\text{T}}}}{{Y}_{j}}(A - {{L}_{j}}C) - {{Y}_{j}} + C_{1}^{{\text{T}}}{{C}_{1}} = 0.$$

In view of Lemma A.2, we have

$$f({{L}_{j}}) = \operatorname{tr} ({{C}_{1}}{{P}_{j}}C_{1}^{{\text{T}}}) + \rho \left\| {{{L}_{j}}} \right\|_{F}^{2}\; \geqslant \;\operatorname{tr} \left( {{{P}_{j}}C_{1}^{{\text{T}}}{{C}_{1}}} \right)$$
$$ = \operatorname{tr} \left( {{{Y}_{j}}\frac{1}{{1 - {{\alpha }_{j}}}}({{D}_{1}} - {{L}_{j}}{{D}_{2}}){{{({{D}_{1}} - {{L}_{j}}{{D}_{2}})}}^{{\text{T}}}}} \right)$$
$$ \geqslant \;\frac{1}{{1 - {{\alpha }_{j}}}}{{\lambda }_{{\min }}}({{Y}_{j}})\left\| {{{D}_{1}} - {{L}_{j}}{{D}_{2}}} \right\|_{F}^{2}\; \geqslant \;\frac{1}{{1 - {{\alpha }_{j}}}}\frac{{{{\lambda }_{{\min }}}(C_{1}^{{\text{T}}}{{C}_{1}})}}{{1 - \sigma _{{\min }}^{2}(A - {{L}_{j}}C)}}\left\| {{{D}_{1}} - {{L}_{j}}{{D}_{2}}} \right\|_{F}^{2}$$
$$ \geqslant \;\frac{1}{{1 - {{\rho }^{2}}(A - {{L}_{j}}C)}}\frac{{{{\lambda }_{{\min }}}(C_{1}^{{\text{T}}}{{C}_{1}})}}{{1 - \sigma _{{\min }}^{2}(A - {{L}_{j}}C)}}\left\| {{{D}_{1}} - {{L}_{j}}{{D}_{2}}} \right\|_{F}^{2}$$
$$ \geqslant \;\frac{1}{\varepsilon }\frac{{{{\lambda }_{{\min }}}(C_{1}^{{\text{T}}}{{C}_{1}})}}{{1 - \sigma _{{\min }}^{2}(A - {{L}_{j}}C)}}\left\| {{{D}_{1}} - {{L}_{j}}{{D}_{2}}} \right\|_{F}^{2}\;\xrightarrow[{\varepsilon \to 0}]{}\; + {\kern 1pt} \infty $$

since \({{\rho }^{2}}(A - {{L}_{j}}C) < {{\alpha }_{j}} < 1\).

On the other hand,

$$f({{L}_{j}}) = \operatorname{tr} \left( {{{C}_{1}}{{P}_{j}}C_{1}^{{\text{T}}}} \right) + \rho \left\| {{{L}_{j}}} \right\|_{F}^{2}\; \geqslant \;\rho \left\| {{{L}_{j}}} \right\|_{F}^{2}\; \geqslant \;\rho {{\left\| {{{L}_{j}}} \right\|}^{2}}\;\xrightarrow[{\left\| {{{L}_{j}}} \right\| \to + \infty }]{}\; + {\kern 1pt} \infty .$$

The proof of Lemma A.5 is complete.

We introduce the level set

$${{\mathcal{S}}_{0}} = \left\{ {L \in \mathcal{S}{\kern 1pt} :\;\;f(L)\;\leqslant \;f({{L}_{0}})} \right\}.$$

Obviously, Lemma A.5 implies the following result.

Corollary A.1. For any L0\(\mathcal{S}\), the set \({{\mathcal{S}}_{0}}\) is bounded.

On the other hand, the function f(L) attains its minimum on the set \({{\mathcal{S}}_{0}}\). (This function is continuous by the properties of the solution of the discrete Lyapunov equation and is considered on a compact set.) Moreover, the set \({{\mathcal{S}}_{0}}\) has no common points with the boundary of \(\mathcal{S}\) due to (A.8). The function f(L) is differentiable on \({{\mathcal{S}}_{0}}\); see below. Consequently, we arrive at the following result.

Corollary A.2. There exists a minimum point \({{L}_{*}}\) on the set  \(\mathcal{S}\), and the gradient of the function f(L) vanishes at this point.

Let us analyze the properties of the gradient of the function f(L, α).

Lemma A.6. The function f(L, α) is defined on the set of stabilizing L for \({{\rho }^{2}}(A - LC) < \alpha < 1\).

On this admissible set, the function is differentiable, and its gradient is given by

$${{\nabla }_{\alpha }}f(L,\alpha ) = \operatorname{tr} Y\left( {\frac{1}{{{{{(1 - \alpha )}}^{2}}}}({{D}_{1}} - L{{D}_{2}}){{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}} - \frac{1}{{{{\alpha }^{2}}}}(A - LC)P{{{(A - LC)}}^{{\text{T}}}}} \right),$$
(A.9)
$${{\nabla }_{L}}f(L,\alpha ) = 2\left( {\rho L - \frac{1}{\alpha }Y(A - LC)P{{C}^{{\text{T}}}} - \frac{1}{{1 - \alpha }}Y({{D}_{1}} - L{{D}_{2}})D_{2}^{{\text{T}}}} \right),$$
(A.10)

where the matrices P and Y are the solutions of the discrete Lyapunov equations (5) and (8), respectively.

The minimum of f(L, α) is achieved at an inner point of the admissible set and is determined by the conditions

$${{\nabla }_{L}}f(L,\alpha ) = 0,\quad {{\nabla }_{\alpha }}f(L,\alpha ) = 0.$$

In addition, f(L, α) as a function of α is strictly convex on \({{\rho }^{2}}(A - LC) < \alpha < 1\) and achieves its minimum at an inner point of this interval.

Proof of Lemma A.6. We have the constrained optimization problem

$$\min f(L,\alpha ),\quad f(L,\alpha ) = \operatorname{tr} {{C}_{1}}PC_{1}^{{\text{T}}} + \rho \left\| L \right\|_{F}^{2}$$

subject to the discrete Lyapunov equation (5) for the matrix P of the invariant ellipsoid.

Following Lemma A.3, differentiation with respect to α is performed using the relations (A.9), (5), and (8). To differentiate with respect to L, we add an increment ΔL and denote by ΔP the corresponding increment of P. As a result, the relation (5) takes the form

$$\begin{gathered} \frac{1}{\alpha }(A - (L + \Delta L)C)(P + \Delta P){{(A - (L + \Delta L)C)}^{{\text{T}}}} - (P + \Delta P) \\ + \;\frac{1}{{1 - \alpha }}({{D}_{1}} - (L + \Delta L){{D}_{2}}){{({{D}_{1}} - (L + \Delta L){{D}_{2}})}^{{\text{T}}}} = 0. \\ \end{gathered} $$

Leaving the notation ΔP for the principal terms of the increment, we obtain

$$\begin{gathered} \frac{1}{\alpha }\left( {(A - LC)P{{{(A - LC)}}^{{\text{T}}}} - \Delta LCP{{{(A - LC)}}^{{\text{T}}}} - (A - LC)P{{{(\Delta LC)}}^{{\text{T}}}} + (A - LC)\Delta P{{{(A - LC)}}^{{\text{T}}}}} \right) \\ - \;(P + \Delta P) + \frac{1}{{1 - \alpha }}\left( {({{D}_{1}} - L{{D}_{2}}){{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}} - \Delta L{{D}_{2}}{{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}} - ({{D}_{1}} - L{{D}_{2}}){{{(\Delta L{{D}_{2}})}}^{{\text{T}}}}} \right) = 0. \\ \end{gathered} $$

Subtracting Eq. (5) from this equation yields

$$\frac{1}{\alpha }(A - LC)\Delta P{{(A - LC)}^{{\text{T}}}}$$
$$ - \;\Delta P - \frac{1}{\alpha }\left( {\Delta LCP{{{(A - LC)}}^{{\text{T}}}} + (A - LC)P{{{(\Delta LC)}}^{{\text{T}}}}} \right)$$
(A.11)
$$ - \;\frac{1}{{1 - \alpha }}\left( {\Delta L{{D}_{2}}{{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}} + ({{D}_{1}} - L{{D}_{2}}){{{(\Delta L{{D}_{2}})}}^{{\text{T}}}}} \right) = 0.$$

We calculate the increment of the functional f(L) by linearizing the corresponding values:

$$\Delta f(L) = \operatorname{tr} {{C}_{1}}\Delta PC_{1}^{{\text{T}}} + \rho \operatorname{tr} {{L}^{{\text{T}}}}\Delta L + \rho \operatorname{tr} {{(\Delta L)}^{{\text{T}}}}L = \operatorname{tr} \Delta PC_{1}^{{\text{T}}}{{C}_{1}} + 2\rho \operatorname{tr} {{L}^{{\text{T}}}}\Delta L.$$

By Lemma A.1, from the dual Eqs. (A.11) and (8) we have

$$\Delta f(L) = - 2\operatorname{tr} Y\left( {\frac{1}{\alpha }\Delta LCP{{{(A - LC)}}^{{\text{T}}}} + \frac{1}{{1 - \alpha }}\Delta L{{D}_{2}}{{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}}} \right) + 2\rho \operatorname{tr} {{L}^{{\text{T}}}}\Delta L$$
$$ = 2\operatorname{tr} \left( {\rho {{L}^{{\text{T}}}} - \frac{1}{\alpha }CP{{{(A - LC)}}^{{\text{T}}}}Y - \frac{1}{{1 - \alpha }}{{D}_{2}}{{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}}Y} \right)\Delta L$$
$$ = \left\langle {2\left( {\rho L - \frac{1}{\alpha }Y(A - LC)P{{C}^{{\text{T}}}} - \frac{1}{{1 - \alpha }}Y({{D}_{1}} - L{{D}_{2}})D_{2}^{{\text{T}}}} \right),\Delta L} \right\rangle .$$

Thus, the relation (A.10) is derived and the proof of Lemma A.6 is complete.
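
The gradient formulas (A.9) and (A.10) translate into the following computational sketch (the name rho_w stands for the weight ρ in the functional, to avoid a clash with the spectral radius; the forms of Eqs. (5) and (8) are taken from the proofs above):

```python
# Gradient of f(L, alpha) by (A.9)-(A.10) via two discrete Lyapunov solves
# (an illustrative sketch; A, C, C1, D1, D2 are assumed given as arrays).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def gradients(L, alpha, A, C, C1, D1, D2, rho_w):
    AL = A - L @ C
    DL = D1 - L @ D2
    As = AL / np.sqrt(alpha)             # requires rho^2(A - LC) < alpha < 1
    P = solve_discrete_lyapunov(As, DL @ DL.T / (1 - alpha))     # Eq. (5)
    Y = solve_discrete_lyapunov(As.T, C1.T @ C1)                 # Eq. (8)
    grad_alpha = np.trace(Y @ (DL @ DL.T / (1 - alpha) ** 2
                               - AL @ P @ AL.T / alpha ** 2))
    grad_L = 2 * (rho_w * L - Y @ AL @ P @ C.T / alpha
                  - Y @ DL @ D2.T / (1 - alpha))
    return grad_L, grad_alpha
```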

The gradient of the function  f(L) is not Lipschitz on the set \(\mathcal{S}\). However, it can be shown to possess this property on the subset \({{\mathcal{S}}_{0}}\) (similarly to [16]).

These properties of the minimized function and its derivatives justify the minimization method implemented as Algorithm 1.
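
Algorithm 1 itself is given in the main text; the sketch below conveys only its general spirit, using a plain gradient step in L with backtracking at a fixed α (whereas the paper re-optimizes α by Newton’s method at each iteration). It builds on the function gradients above:

```python
# A minimal gradient-method sketch in the spirit of Algorithm 1
# (illustrative only; step-size rule and iteration count are arbitrary).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def f_value(L, alpha, A, C, C1, D1, D2, rho_w):
    AL = A - L @ C
    if abs(np.linalg.eigvals(AL)).max() ** 2 >= alpha:
        return np.inf                       # outside the admissible set S
    DL = D1 - L @ D2
    P = solve_discrete_lyapunov(AL / np.sqrt(alpha), DL @ DL.T / (1 - alpha))
    return np.trace(C1 @ P @ C1.T) + rho_w * np.linalg.norm(L, 'fro') ** 2

def minimize_L(L0, alpha, A, C, C1, D1, D2, rho_w, iters=100):
    L = L0.copy()
    for _ in range(iters):
        G, _ = gradients(L, alpha, A, C, C1, D1, D2, rho_w)
        t = 1.0
        while (f_value(L - t * G, alpha, A, C, C1, D1, D2, rho_w)
               > f_value(L, alpha, A, C, C1, D1, D2, rho_w)) and t > 1e-12:
            t /= 2                          # backtracking keeps the iterate in S_0
        L = L - t * G
    return L
```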

APPENDIX B

Lemma B.1 [16]. Let X and Y be the solutions of the dual Lyapunov equations with a Hurwitz matrix A:

$${{A}^{{\text{T}}}}X + XA + W = 0\quad and\quad AY + Y{{A}^{{\text{T}}}} + V = 0.$$

Then tr(XV) = tr(YW).
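
This identity can be checked numerically just like its discrete-time counterpart, now with SciPy's continuous Lyapunov solver (a sketch with arbitrary test data):

```python
# Numerical sanity check of Lemma B.1 (a sketch with arbitrary data).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
A -= (np.linalg.eigvals(A).real.max() + 0.5) * np.eye(n)  # shift: A is Hurwitz
W = rng.standard_normal((n, n)); W = W @ W.T
V = rng.standard_normal((n, n)); V = V @ V.T

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q
X = solve_continuous_lyapunov(A.T, -W)   # A^T X + X A + W = 0
Y = solve_continuous_lyapunov(A, -V)     # A Y + Y A^T + V = 0

print(np.trace(X @ V), np.trace(Y @ W))  # the two traces coincide
```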

The properties of the function f(α) established in [16] fully apply to the case under consideration. In particular, the function f(α) is well-defined, positive, and strongly convex on the interval \(0 < \alpha < 2\sigma (A - LC)\), and its values tend to infinity at the interval endpoints. Moreover, there exists a constant c > 0 such that

$$f(\alpha )\; \geqslant \;\frac{c}{{\alpha (2\sigma - \alpha )}},\quad 0 < \alpha < 2\sigma (A - LC).$$
(B.1)

The function  f(α) can be effectively minimized using Newton’s method. We specify an initial approximation \(0 < {{\alpha }_{0}} < 2\sigma (A - LC)\), e.g., \({{\alpha }_{0}} = \sigma (A - LC)\), and apply the iterative process

$${{\alpha }_{{j + 1}}} = {{\alpha }_{j}} - \frac{{f{\kern 1pt} '({{\alpha }_{j}})}}{{f{\kern 1pt} ''({{\alpha }_{j}})}},$$

where, according to [16],

$$\begin{gathered} f{\kern 1pt} '(\alpha ) = \operatorname{tr} Y\left( {P - \frac{1}{{{{\alpha }^{2}}}}({{D}_{1}} - L{{D}_{2}}){{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}}} \right), \\ f{\kern 1pt} ''(\alpha ) = 2\operatorname{tr} Y\left( {X + \frac{1}{{{{\alpha }^{3}}}}({{D}_{1}} - L{{D}_{2}}){{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}}} \right), \\ \end{gathered} $$
(B.2)

and P, Y, and X are the solutions of the Lyapunov equations (12), (13), and (14), respectively. Theorem A.1 remains valid as well.
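
A computational sketch of this Newton iteration follows. Since Eqs. (12)–(14) belong to the main text, their forms here are reconstructed from the proofs in this appendix and should be treated as assumptions; σ(A − LC) denotes the stability degree of A − LC (the negative of the largest real part of its eigenvalues):

```python
# Newton's method for the continuous-time f(alpha) on (0, 2*sigma)
# (an illustrative sketch; Eqs. (12)-(14) are reconstructed, not quoted).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_alpha_ct(L, A, C, C1, D1, D2, iters=10):
    AL = A - L @ C
    DL = D1 - L @ D2
    sigma = -np.linalg.eigvals(AL).real.max()   # stability degree sigma(A - LC)
    alpha = sigma                               # starting point from the text
    I = np.eye(AL.shape[0])
    for _ in range(iters):
        Aa = AL + alpha / 2 * I
        P = solve_continuous_lyapunov(Aa, -DL @ DL.T / alpha)             # (12)
        Y = solve_continuous_lyapunov(Aa.T, -C1.T @ C1)                   # (13)
        X = solve_continuous_lyapunov(Aa, -(P - DL @ DL.T / alpha ** 2))  # (14)
        f1 = np.trace(Y @ (P - DL @ DL.T / alpha ** 2))                   # (B.2)
        f2 = 2 * np.trace(Y @ (X + DL @ DL.T / alpha ** 3))
        alpha -= f1 / f2
    return alpha
```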

The following lemma is a continuous-time analog of Lemma A.4.

Lemma B.2. The function f(L) is well-defined and positive on the set \(\mathcal{S}\) of admissible filter matrices.

Indeed, if (A − LC) is a Hurwitz matrix, then σ(A − LC) > 0, and for \(0 < \alpha < 2\sigma (A - LC)\) there exists a solution P \( \succcurlyeq \) 0 of the Lyapunov equation (12). Thus, a strictly positive function f(L, α) is well-defined and  f(L) > 0 due to (B.1). Its domain \(\mathcal{S}\) may be nonconvex and disconnected and its boundaries nonsmooth; see [16].

Lemma B.3. On the set \(\mathcal{S}\) of admissible matrices the function f(L) is coercive (i.e., it tends to infinity on the boundary of the domain). Moreover, the following lower bounds are valid:

$$\begin{gathered} f(L)\; \geqslant \;\frac{{{{\lambda }_{{\min }}}(C_{1}^{{\text{T}}}{{C}_{1}})\left\| {{{D}_{1}} - L{{D}_{2}}} \right\|_{F}^{2}}}{{4\sigma (A - LC)\left( {\left\| {A - LC} \right\| + \sigma (A - LC)} \right)}}, \\ f(L)\; \geqslant \;\rho {{\left\| L \right\|}^{2}}. \\ \end{gathered} $$
(B.3)

Proof of Lemma B.3. We consider a sequence {Lj} ⊆ \(\mathcal{S}\) of admissible matrices such that \({{L}_{j}} \to L \in \partial \mathcal{S}\), i.e., σ(A − LC) = 0. In other words, for any ε > 0 there exists a number N = N(ε) such that

$$\left| {\sigma (A - {{L}_{j}}C) - \sigma (A - LC)} \right| = \sigma (A - {{L}_{j}}C) < \varepsilon $$

for all j \( \geqslant \) N(ε).

Let Pj be the solution of equation (12) associated with the filter matrix Lj:

$$\left( {A - {{L}_{j}}C + \frac{{{{\alpha }_{j}}}}{2}I} \right){{P}_{j}} + {{P}_{j}}{{\left( {A - {{L}_{j}}C + \frac{{{{\alpha }_{j}}}}{2}I} \right)}^{{\text{T}}}} + \frac{1}{{{{\alpha }_{j}}}}({{D}_{1}} - {{L}_{j}}{{D}_{2}}){{({{D}_{1}} - {{L}_{j}}{{D}_{2}})}^{{\text{T}}}} = 0;$$

let Yj be the solution of its dual Lyapunov equation

$${{\left( {A - {{L}_{j}}C + \frac{{{{\alpha }_{j}}}}{2}I} \right)}^{{\text{T}}}}{{Y}_{j}} + {{Y}_{j}}\left( {A - {{L}_{j}}C + \frac{{{{\alpha }_{j}}}}{2}I} \right) + C_{1}^{{\text{T}}}{{C}_{1}} = 0.$$

In view of [16, Lemma A.3], we have

$$f({{L}_{j}}) = \operatorname{tr} ({{C}_{1}}{{P}_{j}}C_{1}^{{\text{T}}}) + \rho \left\| {{{L}_{j}}} \right\|_{F}^{2}$$
$$ \geqslant \;\operatorname{tr} ({{P}_{j}}C_{1}^{{\text{T}}}{{C}_{1}}) = \operatorname{tr} \left( {{{Y}_{j}}\frac{1}{{{{\alpha }_{j}}}}({{D}_{1}} - {{L}_{j}}{{D}_{2}}){{{({{D}_{1}} - {{L}_{j}}{{D}_{2}})}}^{{\text{T}}}}} \right)$$
$$ \geqslant \;\frac{1}{{{{\alpha }_{j}}}}{{\lambda }_{{\min }}}({{Y}_{j}})\left\| {{{D}_{1}} - {{L}_{j}}{{D}_{2}}} \right\|_{F}^{2}\; \geqslant \;\frac{1}{{{{\alpha }_{j}}}}\frac{{{{\lambda }_{{\min }}}(C_{1}^{{\text{T}}}{{C}_{1}})}}{{2\left\| {A - {{L}_{j}}C + \frac{{{{\alpha }_{j}}}}{2}I} \right\|}}\left\| {{{D}_{1}} - {{L}_{j}}{{D}_{2}}} \right\|_{F}^{2}$$
$$ \geqslant \;\frac{1}{{4\sigma (A - {{L}_{j}}C)}}\frac{{{{\lambda }_{{\min }}}(C_{1}^{{\text{T}}}{{C}_{1}})}}{{\left\| {A - {{L}_{j}}C + \frac{{{{\alpha }_{j}}}}{2}I} \right\|}}\left\| {{{D}_{1}} - {{L}_{j}}{{D}_{2}}} \right\|_{F}^{2}$$
$$ \geqslant \;\frac{{{{\lambda }_{{\min }}}(C_{1}^{{\text{T}}}{{C}_{1}})}}{{4\varepsilon \left( {\left\| {A - {{L}_{j}}C} \right\| + \varepsilon } \right)}}\left\| {{{D}_{1}} - {{L}_{j}}{{D}_{2}}} \right\|_{F}^{2}\;\xrightarrow[{\varepsilon \to 0}]{}\; + {\kern 1pt} \infty ,$$

since \(0 < {{\alpha }_{j}} < 2\sigma (A - {{L}_{j}}C)\) and

$$\left\| {A - {{L}_{j}}C + \frac{{{{\alpha }_{j}}}}{2}I} \right\|\;\leqslant \;\left\| {A - {{L}_{j}}C} \right\| + \frac{{{{\alpha }_{j}}}}{2}.$$

On the other hand,

$$f({{L}_{j}}) = \operatorname{tr} ({{C}_{1}}{{P}_{j}}C_{1}^{{\text{T}}}) + \rho \left\| {{{L}_{j}}} \right\|_{F}^{2}\; \geqslant \;\rho \left\| {{{L}_{j}}} \right\|_{F}^{2}\; \geqslant \;\rho {{\left\| {{{L}_{j}}} \right\|}^{2}}\;\xrightarrow[{{\kern 1pt} \left\| {{{L}_{j}}} \right\|{\kern 1pt} \to + \infty }]{}\; + {\kern 1pt} \infty .$$

The proof of Lemma B.3 is complete.

We introduce the level set

$${{\mathcal{S}}_{0}} = \left\{ {L \in \mathcal{S}{\kern 1pt} :\;\;f(L)\;\leqslant \;f({{L}_{0}})} \right\}.$$

Obviously, Lemma B.3 implies the following result.

Corollary B.3. For any L0\(\mathcal{S}\), the set \({{\mathcal{S}}_{0}}\) is bounded.

On the other hand, the function f(L) attains its minimum on the set \({{\mathcal{S}}_{0}}\). (This function is continuous by the properties of the solution of the Lyapunov equation and is considered on a compact set.) Moreover, the set \({{\mathcal{S}}_{0}}\) has no common points with the boundary of \(\mathcal{S}\) due to (B.3). The function f(L) is differentiable on \({{\mathcal{S}}_{0}}\); see below. Consequently, we arrive at the following result.

Corollary B.4. There exists a minimum point \({{L}_{*}}\) on the set \(\mathcal{S}\), and the gradient of the function f(L) vanishes at this point.

Let us analyze the properties of the gradient of the function f(L, α).

Lemma B.4. The function f(L, α) is defined on the set of stabilizing L for \(0 < \alpha < 2\sigma (A - LC)\). On this admissible set, the function is differentiable, and its gradient is given by

$$\begin{gathered} {{\nabla }_{\alpha }}f(L,\alpha ) = \operatorname{tr} Y\left( {P - \frac{1}{{{{\alpha }^{2}}}}({{D}_{1}} - L{{D}_{2}}){{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}}} \right), \\ {{\nabla }_{L}}f(L,\alpha ) = 2\left( {\rho L - YP{{C}^{{\text{T}}}} - \frac{1}{\alpha }Y({{D}_{1}} - L{{D}_{2}})D_{2}^{{\text{T}}}} \right), \\ \end{gathered} $$
(B.4)

where the matrices P and Y are the solutions of the Lyapunov equations (12) and (13), respectively.

The minimum of f(L, α) is achieved at an inner point of the admissible set and is determined by the conditions

$${{\nabla }_{L}}f(L,\alpha ) = 0,\quad {{\nabla }_{\alpha }}f(L,\alpha ) = 0.$$

In addition, f(L, α) as a function of α is strictly convex on \(0 < \alpha < 2\sigma (A - LC)\) and achieves its minimum at an inner point of this interval.

Proof of Lemma B.4. We have the constrained optimization problem

$$\min f(L,\alpha ),\quad f(L,\alpha ) = \operatorname{tr} {{C}_{1}}PC_{1}^{{\text{T}}} + \rho \left\| L \right\|_{F}^{2}$$

subject to the Lyapunov equation (12) for the matrix P of the invariant ellipsoid.

Differentiation with respect to α is performed using the relations (B.2), (12), and (13). To differentiate with respect to L, we add an increment ΔL and denote by ΔP the corresponding increment of P. As a result, the relation (12) takes the form

$$\left( {A - (L + \Delta L)C + \frac{\alpha }{2}I} \right)(P + \Delta P) + (P + \Delta P){{\left( {A - (L + \Delta L)C + \frac{\alpha }{2}I} \right)}^{{\text{T}}}}$$
$$ + \;\frac{1}{\alpha }({{D}_{1}} - (L + \Delta L){{D}_{2}}){{({{D}_{1}} - (L + \Delta L){{D}_{2}})}^{{\text{T}}}} = 0.$$

Leaving the notation ΔP for the principal terms of the increment, we obtain

$$\left( {A - (L + \Delta L)C + \frac{\alpha }{2}I} \right)P + P{{\left( {A - (L + \Delta L)C + \frac{\alpha }{2}I} \right)}^{{\text{T}}}}$$
$$ + \;\left( {A - LC + \frac{\alpha }{2}I} \right)\Delta P + \Delta P{{\left( {A - LC + \frac{\alpha }{2}I} \right)}^{{\text{T}}}}$$
$$ + \;\frac{1}{\alpha }\left( {({{D}_{1}} - L{{D}_{2}}){{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}} - \Delta L{{D}_{2}}{{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}} - ({{D}_{1}} - L{{D}_{2}}){{{(\Delta L{{D}_{2}})}}^{{\text{T}}}}} \right) = 0.$$

Subtracting Eq. (12) from this equation yields

$$\left( {A - LC + \frac{\alpha }{2}I} \right)\Delta P + \Delta P{{\left( {A - LC + \frac{\alpha }{2}I} \right)}^{{\text{T}}}} - \Delta LCP - P{{(\Delta LC)}^{{\text{T}}}}$$
$$ - \,\,\frac{1}{\alpha }\left( {\Delta L{{D}_{2}}{{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}} + ({{D}_{1}} - L{{D}_{2}}){{{(\Delta L{{D}_{2}})}}^{{\text{T}}}}} \right) = 0.$$
(B.6)

We calculate the increment of the functional f(L) by linearizing the corresponding values:

$$\Delta f(L) = \operatorname{tr} {{C}_{1}}\Delta PC_{1}^{{\text{T}}} + \rho \operatorname{tr} {{L}^{{\text{T}}}}\Delta L + \rho \operatorname{tr} {{(\Delta L)}^{{\text{T}}}}L = \operatorname{tr} \Delta PC_{1}^{{\text{T}}}{{C}_{1}} + 2\rho \operatorname{tr} {{L}^{{\text{T}}}}\Delta L.$$

By Lemma B.1, from the dual Eqs. (B.6) and (13) we have

$$\Delta f(L) = - 2\operatorname{tr} Y\left( {\Delta LCP + \frac{1}{\alpha }\Delta L{{D}_{2}}{{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}}} \right) + 2\rho \operatorname{tr} {{L}^{{\text{T}}}}\Delta L$$
$$ = 2\operatorname{tr} \left( {\rho {{L}^{{\text{T}}}} - CPY - \frac{1}{\alpha }{{D}_{2}}{{{({{D}_{1}} - L{{D}_{2}})}}^{{\text{T}}}}Y} \right)\Delta L$$
$$ = \left\langle {2\left( {\rho L - YP{{C}^{{\text{T}}}} - \frac{1}{\alpha }Y({{D}_{1}} - L{{D}_{2}})D_{2}^{{\text{T}}}} \right),\Delta L} \right\rangle .$$

Thus, the relation (B.4) is derived and the proof of Lemma B.4 is complete.
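
In parallel with the discrete-time case, the gradient formulas (B.4) admit the following sketch (rho_w again denotes the weight ρ; the forms of Eqs. (12) and (13) follow the proofs above):

```python
# Gradient of f(L, alpha) by (B.4) via two continuous Lyapunov solves
# (an illustrative sketch, not the paper's own implementation).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def gradients_ct(L, alpha, A, C, C1, D1, D2, rho_w):
    AL = A - L @ C
    DL = D1 - L @ D2
    Aa = AL + alpha / 2 * np.eye(AL.shape[0])   # requires 0 < alpha < 2*sigma
    P = solve_continuous_lyapunov(Aa, -DL @ DL.T / alpha)    # Eq. (12)
    Y = solve_continuous_lyapunov(Aa.T, -C1.T @ C1)          # Eq. (13)
    grad_alpha = np.trace(Y @ (P - DL @ DL.T / alpha ** 2))
    grad_L = 2 * (rho_w * L - Y @ P @ C.T - Y @ DL @ D2.T / alpha)
    return grad_L, grad_alpha
```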


About this article


Cite this article

Khlebnikov, M.V. A Comparison of Guaranteeing and Kalman Filters. Autom Remote Control 84, 389–411 (2023). https://doi.org/10.1134/S0005117923040094

