
A Sparse State Kalman Filter Algorithm Based on Kalman Gain


Abstract

To improve the tracking accuracy for time-varying sparse signals, this paper proposes a sparse state Kalman filter algorithm based on the Kalman gain matrix. Under a sparse state constraint, the mean square error is minimized and the resulting optimization problem is solved with the symmetric alternating direction method of multipliers (symmetric ADMM); with the update expression of the state estimate kept unchanged, this yields a Kalman gain matrix that makes the state estimate sparse. Two different sparsity constraints are discussed: an L1-norm constraint and a cardinality-function constraint. The proposed algorithm can be implemented within the framework of the conventional Kalman filter, without introducing any additional framework. Two groups of dynamic signal models are simulated: a slowly changing signal and a random walk of the nonzero elements. The simulation results show that the proposed algorithm improves the tracking accuracy of the conventional Kalman filter for time-varying sparse signals.
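As context for the appendix reproduced below, the following sketch shows schematically where such a sparsity-inducing gain slots into the conventional Kalman filter recursion: the prediction and update expressions are the standard ones, and only the gain computation changes. This is a minimal sketch, not the paper's implementation; the `sparse_gain` hook stands in for the symmetric-ADMM solve of the gain subproblem (given in the paper body, not on this page), all names and shapes are illustrative assumptions, and the Joseph-form covariance update is used here only because it remains valid for a gain that is not the standard optimal one.

```python
import numpy as np

def sparse_kf_step(x, P, y, A, C, Q, R, sparse_gain):
    """One filtering step with an externally supplied sparsity-inducing
    gain.  Schematic sketch; shapes: x (n,), P (n, n), y (m,)."""
    # prediction (conventional Kalman filter)
    x_pred = A @ x                        # \hat{x}_{t|t-1}
    P_pred = A @ P @ A.T + Q
    e = y - C @ x_pred                    # innovation e_t
    # gain: placeholder for the symmetric-ADMM solve of the G_t subproblem
    G = sparse_gain(x_pred, e)
    # update: same expression as the conventional filter
    x_new = x_pred + G @ e                # \hat{x}_t = \hat{x}_{t|t-1} + G_t e_t
    # Joseph form, valid for an arbitrary (not necessarily optimal) gain
    I = np.eye(len(x))
    P_new = (I - G @ C) @ P_pred @ (I - G @ C).T + G @ R @ G.T
    return x_new, P_new
```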


Data Availability

The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

This work was supported by the China Academy of Railway Sciences Locomotive Running Department condition monitoring system project (Grant 9151524108).

Author information

Corresponding author

Correspondence to Tiankuo Shao.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

The solution of the \(G_{t}\) subproblem:

1. When \(g\left( G_{t} \right) = \left\| \hat{x}_{t|t-1} + G_{t} \cdot e_{t} \right\|_{1}\):

$$ \varphi \left( G_{t,i} \right) = \beta \left| \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \right| + \frac{\rho}{2} \left\| G_{t,i} - V_{i}^{k} \right\|_{2}^{2} = \begin{cases} \frac{\rho}{2} \left\| G_{t,i} - V_{i}^{k} \right\|_{2}^{2} + \beta \left( \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \right), & \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \ge 0 \quad \text{(a)} \\ \frac{\rho}{2} \left\| G_{t,i} - V_{i}^{k} \right\|_{2}^{2} - \beta \left( \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \right), & \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} < 0 \quad \text{(b)} \end{cases} \qquad 1 \le i \le n $$
(25)

The optimal solution of this piecewise function can be obtained by discussing each region separately. Writing case (a) as an optimization problem:

$$ \begin{aligned} & \min_{G_{t,i}} f_{1}\left( G_{t,i} \right) \triangleq \frac{\rho}{2} \left\| G_{t,i} - V_{i}^{k} \right\|_{2}^{2} + \beta \left( \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \right) \\ & \text{s.t.} \quad \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \ge 0 \end{aligned} $$
(26)

First, solve the unconstrained problem; the optimality condition gives:

$$ \nabla f_{1}\left( G_{t,i} \right) = \rho \left( G_{t,i} - V_{i}^{k} \right) + \beta e_{t}^{T} = 0 \; \Rightarrow \; G_{t,i} = V_{i}^{k} - \frac{\beta}{\rho} e_{t}^{T} $$
(27)

This solution is feasible when \(\hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} = V_{i}^{k} \cdot e_{t} + \hat{x}_{t|t-1,i} - \frac{\beta}{\rho} e_{t}^{T} e_{t} \ge 0\), that is, when \(V_{i}^{k} \cdot e_{t} + \hat{x}_{t|t-1,i} \ge \frac{\beta}{\rho} e_{t}^{T} e_{t}\):

$$ G_{t,i} = V_{i}^{k} - \frac{\beta}{\rho} e_{t}^{T} $$
(28)

When \(V_{i}^{k} \cdot e_{t} + \hat{x}_{t|t-1,i} < \frac{\beta}{\rho} e_{t}^{T} e_{t}\), the optimal solution lies on the boundary \(\hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} = 0\), which is equivalent to solving an equality-constrained optimization problem. Its Lagrange function is:

$$ L_{1}\left( G_{t,i}, \lambda_{1} \right) = f_{1}\left( G_{t,i} \right) + \lambda_{1} \left( \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \right) $$
(29)

According to the optimality conditions:

$$ \begin{cases} \rho \left( G_{t,i} - V_{i}^{k} \right) + \left( \beta + \lambda_{1} \right) e_{t}^{T} = 0 \\ \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} = 0 \end{cases} $$
(30)

Hence:

$$ G_{t,i} = V_{i}^{k} - \frac{\left( \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \right) e_{t}^{T}}{e_{t}^{T} e_{t}} $$
(31)

Hence, for case (a):

$$ G_{t,i} = \begin{cases} V_{i}^{k} - \frac{\beta}{\rho} e_{t}^{T}, & V_{i}^{k} \cdot e_{t} + \hat{x}_{t|t-1,i} \ge \frac{\beta}{\rho} e_{t}^{T} e_{t} \\ V_{i}^{k} - \frac{\left( \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \right) e_{t}^{T}}{e_{t}^{T} e_{t}}, & V_{i}^{k} \cdot e_{t} + \hat{x}_{t|t-1,i} < \frac{\beta}{\rho} e_{t}^{T} e_{t} \end{cases} $$
(32)

Next, writing case (b) as an optimization problem:

$$ \begin{aligned} & \min_{G_{t,i}} f_{2}\left( G_{t,i} \right) \triangleq \frac{\rho}{2} \left\| G_{t,i} - V_{i}^{k} \right\|_{2}^{2} - \beta \left( \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \right) \\ & \text{s.t.} \quad \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} < 0 \end{aligned} $$
(33)

Similarly:

$$ G_{t,i} = \begin{cases} V_{i}^{k} + \frac{\beta}{\rho} e_{t}^{T}, & V_{i}^{k} \cdot e_{t} + \hat{x}_{t|t-1,i} < -\frac{\beta}{\rho} e_{t}^{T} e_{t} \\ V_{i}^{k} - \frac{\left( \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \right) e_{t}^{T}}{e_{t}^{T} e_{t}}, & V_{i}^{k} \cdot e_{t} + \hat{x}_{t|t-1,i} \ge -\frac{\beta}{\rho} e_{t}^{T} e_{t} \end{cases} $$
(34)

Combining cases (a) and (b):

$$ G_{t,i}^{k+1} = \begin{cases} V_{i}^{k} + \frac{\beta}{\rho} e_{t}^{T}, & \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \in \left( -\infty, -\frac{\beta}{\rho} e_{t}^{T} e_{t} \right) \\ V_{i}^{k} - \frac{\left( \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \right) e_{t}^{T}}{e_{t}^{T} e_{t}}, & \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \in \left[ -\frac{\beta}{\rho} e_{t}^{T} e_{t}, \frac{\beta}{\rho} e_{t}^{T} e_{t} \right] \\ V_{i}^{k} - \frac{\beta}{\rho} e_{t}^{T}, & \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \in \left( \frac{\beta}{\rho} e_{t}^{T} e_{t}, +\infty \right) \end{cases} $$
(35)
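Equation (35) is a soft-thresholding-style rule acting row by row on the gain matrix: rows whose value \(\hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t}\) falls inside the band \(\left[ -\frac{\beta}{\rho} e_{t}^{T} e_{t}, \frac{\beta}{\rho} e_{t}^{T} e_{t} \right]\) are projected onto the hyperplane that zeroes the corresponding entry of the state estimate, which is what induces sparsity. A minimal NumPy sketch, assuming \(V^{k}\) is stored as an \(n \times m\) array whose \(i\)-th row is \(V_{i}^{k}\) and \(e_{t}\) as a length-\(m\) vector (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def l1_gain_update(V, x_pred, e, beta, rho):
    """Row-wise closed-form G_t update under the L1-norm penalty, Eq. (35).
    Shapes (illustrative): V (n, m), x_pred (n,), e (m,)."""
    ete = e @ e                           # e_t^T e_t (scalar)
    tau = (beta / rho) * ete              # band half-width (beta/rho) e_t^T e_t
    s = x_pred + V @ e                    # s_i = x_hat_{t|t-1,i} + V_i^k e_t
    G = np.empty_like(V)
    for i in range(V.shape[0]):
        if s[i] < -tau:                   # first branch of (35)
            G[i] = V[i] + (beta / rho) * e
        elif s[i] > tau:                  # third branch of (35)
            G[i] = V[i] - (beta / rho) * e
        else:                             # middle branch: project row i onto
            G[i] = V[i] - (s[i] / ete) * e  # the hyperplane x_hat_i + G_i e_t = 0
    return G
```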

2. When \(g\left( G_{t} \right) = \text{card}\left( \hat{x}_{t|t-1} + G_{t} \cdot e_{t} \right)\):

$$ \varphi \left( G_{t,i} \right) = \beta \, \text{card}\left( \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \right) + \frac{\rho}{2} \left\| G_{t,i} - V_{i}^{k} \right\|_{2}^{2} = \begin{cases} \frac{\rho}{2} \left\| G_{t,i} - V_{i}^{k} \right\|_{2}^{2}, & \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} = 0 \\ \frac{\rho}{2} \left\| G_{t,i} - V_{i}^{k} \right\|_{2}^{2} + \beta, & \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \ne 0 \end{cases} \qquad 1 \le i \le n $$
(36)

Note that when \(\hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \ne 0\), \(G_{t,i} = V_{i}^{k}\) minimizes the objective function, with minimum value \(\beta\). When \(\hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} = 0\), an equality-constrained optimization problem results:

$$ \begin{aligned} & \min_{G_{t,i}} \frac{\rho}{2} \left\| G_{t,i} - V_{i}^{k} \right\|_{2}^{2} \\ & \text{s.t.} \quad \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} = 0 \end{aligned} $$
(37)

Its Lagrange function is:

$$ L_{2}\left( G_{t,i}, \lambda_{2} \right) = \frac{\rho}{2} \left\| G_{t,i} - V_{i}^{k} \right\|_{2}^{2} + \lambda_{2} \left( \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} \right) $$
(38)

By the optimality conditions:

$$ \begin{cases} \rho \left( G_{t,i} - V_{i}^{k} \right) + \lambda_{2} e_{t}^{T} = 0 \\ \hat{x}_{t|t-1,i} + G_{t,i} \cdot e_{t} = 0 \end{cases} $$
(39)

Hence:

$$ G_{t,i} = V_{i}^{k} - \frac{\left( \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \right) e_{t}^{T}}{e_{t}^{T} e_{t}} $$
(40)

Comparing the minimum objective values of the two branches:

$$ G_{t,i}^{k+1} = \begin{cases} V_{i}^{k} - \frac{\left( \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \right) e_{t}^{T}}{e_{t}^{T} e_{t}}, & \frac{\rho}{2} \left\| \frac{\left( \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \right) e_{t}^{T}}{e_{t}^{T} e_{t}} \right\|_{2}^{2} \le \beta \\ V_{i}^{k}, & \frac{\rho}{2} \left\| \frac{\left( \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \right) e_{t}^{T}}{e_{t}^{T} e_{t}} \right\|_{2}^{2} > \beta \end{cases} $$
(41)

Notice that when \(\hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} = 0\), \(G_{t,i}^{k+1} = V_{i}^{k}\).
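
Equation (41) behaves as a hard-thresholding-style counterpart: since \(\left\| \frac{\left( \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \right) e_{t}^{T}}{e_{t}^{T} e_{t}} \right\|_{2}^{2} = \frac{\left( \hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} \right)^{2}}{e_{t}^{T} e_{t}}\), row \(i\) is projected onto the zeroing hyperplane exactly when the quadratic projection cost is at most the fixed penalty \(\beta\). A minimal NumPy sketch under the same illustrative shape assumptions as before; it also reproduces the boundary behavior just noted, since rows with \(\hat{x}_{t|t-1,i} + V_{i}^{k} \cdot e_{t} = 0\) incur zero projection cost and are left at \(V_{i}^{k}\):

```python
import numpy as np

def card_gain_update(V, x_pred, e, beta, rho):
    """Row-wise closed-form G_t update under the cardinality penalty,
    Eq. (41).  Shapes (illustrative): V (n, m), x_pred (n,), e (m,)."""
    ete = e @ e                           # e_t^T e_t (scalar)
    s = x_pred + V @ e                    # s_i = x_hat_{t|t-1,i} + V_i^k e_t
    # projection cost (rho/2) * ||s_i e_t^T / (e_t^T e_t)||^2
    #                = (rho/2) * s_i^2 / (e_t^T e_t)
    cost = 0.5 * rho * s**2 / ete
    G = V.copy()
    zero = cost <= beta                   # zeroing the entry beats paying beta
    G[zero] -= np.outer(s[zero] / ete, e) # project onto x_hat_i + G_i e_t = 0
    return G
```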

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shao, T., Luo, Q. A Sparse State Kalman Filter Algorithm Based on Kalman Gain. Circuits Syst Signal Process 42, 2305–2320 (2023). https://doi.org/10.1007/s00034-022-02215-z

