
A New Heavy-Tailed Robust Kalman Filter with Time-Varying Process Bias

  • Short Paper
  • Published:
Circuits, Systems, and Signal Processing

Abstract

A new heavy-tailed robust Kalman filter is presented to address linear stochastic state-space models with heavy-tailed noise and a time-varying process bias. The one-step predicted probability density function (PDF) is modeled as a Student’s-t-inverse-Wishart distribution, and the likelihood PDF is modeled as a Student’s-t distribution. To obtain an approximate joint posterior PDF, the conjugate prior distributions of the state vector and the auxiliary variables are chosen as Gaussian, inverse-Wishart, Gaussian-Gamma, and Gamma distributions, respectively. By introducing the auxiliary variables, a new Gaussian hierarchical state-space model is constructed, and the parameters of the proposed heavy-tailed robust filter are jointly inferred from this model using the variational Bayesian approach. Simulations illustrate that the proposed filter adaptively estimates the time-varying process bias in real time and achieves higher accuracy than existing cutting-edge filters.



Data Availability Statement

The datasets generated during the current study are available from the corresponding author on reasonable request.

Abbreviations

KF:

Kalman filter

SSM:

State-space model

HPMN:

Heavy-tailed process and heavy-tailed measurement noises

PDF:

Probability density function

HKF:

Huber KF

MCCKF:

Maximum Correntropy Criterion KF

RSTKF:

Robust Student’s-t KF

PRKF:

Proposed robust KF

VB:

Variational Bayesian

KLD:

Kullback–Leibler divergence

RMSE:

Root mean square error

ARMSE:

Averaged root mean square error

IW:

Inverse-Wishart

STIW:

Student’s-t-inverse-Wishart

dof:

Degrees of freedom

\(\mathbf {y}_{i:j} \triangleq \{\mathbf {y}_k|i \le k \le j\}\) :

Measurements from time i to j

\(N(\cdot ;\mu , \varSigma )\) :

Gaussian PDF with mean vector \(\mu \) and covariance matrix \(\varSigma \)

\({\text {IW}}(\cdot ;\nu , \varPsi )\) :

IW PDF with dof \(\nu \) and inverse scale matrix \(\varPsi \)

\({\text {STIW}}(\cdot ;\mu ,\varPsi ,t,T,\tau )\) :

STIW PDF with location vector \(\mu \), scale matrices \(\varPsi \) and T, and dof parameters t and \(\tau \), respectively

\({\text {St}}(\cdot ;\mu ,\varSigma ,\tau )\) :

Student’s-t PDF with the mean vector \(\mu \), the scale matrix \(\varSigma \) and the dof parameter \(\tau \)

\(G(\cdot ;\alpha ,\beta )\) :

Gamma PDF with shape parameter \(\alpha \) and rate parameter \(\beta \)

log:

Natural logarithm

exp:

Natural exponential

\(E_{x}{[}\cdot {]}\) :

Expectation with respect to x

\(E^{(i)}{[}\cdot {]}\) :

Expectation at the ith iteration

\(q^{(i)}(\cdot )\) :

Approximation PDF \(q(\cdot )\) at the ith iteration

\({\text {trace}}(\cdot )\) :

Trace of a matrix

\(\mathbf{I} _n\) :

\(n \times n\) identity matrix

\(\mathbf {A}^{-1} \) :

Inverse of \(\mathbf {A}\)

\(\mathbf {A}^{\mathrm{T}}\) or \(\mathbf {x}^\mathrm{T} \) :

Transpose of \(\mathbf {A}\) or \(\mathbf {x}\)

References

  1. A. Almagbile, J. Wang, W. Ding, Evaluating the performances of adaptive Kalman filter methods in GPS/INS integration. J. Glob. Position. Syst. 9(1), 33–40 (2010)

  2. C.M. Bishop, Pattern Recognition and Machine Learning (Springer, 2006)

  3. B. Chen, X. Liu, H. Zhao, J.C. Principe, Maximum correntropy Kalman filter. Automatica 76, 70–77 (2017)

  4. J.J. Deyst, J.C. Deckert, RCS jet failure identification for the space shuttle. IFAC Proc. Volumes 8(1), 428–435 (1975)

  5. B. Feng, M. Fu, H. Ma, Y. Xia, B. Wang, Kalman filter with recursive covariance estimation: sequentially estimating process noise covariance. IEEE Trans. Ind. Electron. 61(11), 6253–6263 (2014)

  6. Y. Huang, G. Jia, B. Chen, Y. Zhang, A new robust Kalman filter with adaptive estimate of time-varying measurement bias. IEEE Signal Process. Lett. 27, 700–704 (2020)

  7. Y. Huang, Y. Zhang, N. Li, Z. Wu, J.A. Chambers, A novel robust Student’s t-based Kalman filter. IEEE Trans. Aerosp. Electron. Syst. 53(3), 1545–1554 (2017)

  8. Y. Huang, Y. Zhang, Z. Wu, N. Li, J. Chambers, A novel adaptive Kalman filter with inaccurate process and measurement noise covariance matrices. IEEE Trans. Autom. Control 63(2), 594–601 (2017)

  9. Y. Huang, Y. Zhang, Y. Zhao, J.A. Chambers, A novel robust Gaussian-Student’s t mixture distribution based Kalman filter. IEEE Trans. Signal Process. 67(13), 3606–3620 (2019)

  10. R. Izanloo, S.A. Fakoorian, H.S. Yazdi, D. Simon, Kalman filtering based on the maximum correntropy criterion in the presence of non-Gaussian noise (IEEE, 2016), pp. 500–505

  11. G. Jia, Y. Huang, Y. Zhang, J. Chambers, A novel adaptive Kalman filter with unknown probability of measurement loss. IEEE Signal Process. Lett. 26(12), 1862–1866 (2019)

  12. G. Jia, Y. Zhang, M. Bai, N. Li, J. Qian, A novel robust Student’s t-based Gaussian approximate filter with one-step randomly delayed measurements. Signal Process. 171, 107496 (2020)

  13. C.D. Karlgaard, H. Schaub, Huber-based divided difference filtering. J. Guid. Control Dyn. 30(3), 885–891 (2007)

  14. N. Li, M.M. Bai, Y.G. Zhang, et al., A novel Student’s t-based Kalman filter with colored measurement noise. Circuits Syst. Signal Process., 1–18 (2020)

  15. L. Luo, Y. Zhang, T. Fang, N. Li, A new robust Kalman filter for SINS/DVL integrated navigation system. IEEE Access 7, 51386–51395 (2019)

  16. M. Roth, E. Özkan, F. Gustafsson, A Student’s t filter for heavy tailed process and measurement noise (IEEE, 2013), pp. 5770–5774

  17. S. Särkkä, A. Nummenmaa, Recursive noise adaptive Kalman filtering by variational Bayesian approximations. IEEE Trans. Autom. Control 54(3), 596–600 (2009)

  18. C. Shan, W. Zhou, Y. Yang, Z. Jiang, Multi-fading factor and updated monitoring strategy adaptive Kalman filter-based variational Bayesian. Sensors 21(1), 198 (2021)

  19. D. Simon, Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches (Wiley, 2006)

  20. F.M. Sobolic, D.S. Bernstein, Kalman-filter-based time-varying parameter estimation via retrospective optimization of the process noise covariance (IEEE, 2016), pp. 4545–4550

  21. D.G. Tzikas, A.C. Likas, N.P. Galatsanos, The variational approximation for Bayesian inference. IEEE Signal Process. Mag. 25(6), 131–146 (2008)

  22. S.Y. Wang, C. Yin, S.K. Duan, L.D. Wang, A modified variational Bayesian noise adaptive Kalman filter. Circuits Syst. Signal Process. 36(10), 4260–4277 (2017)

  23. Z. Wang, W. Zhou, Robust linear filter with parameter estimation under Student-t measurement distribution. Circuits Syst. Signal Process. 38(6), 2445–2470 (2019)

  24. A.S. Willsky, A survey of design methods for failure detection in dynamic systems. Automatica 12(6), 601–611 (1976)

  25. D. Xu, Z. Wu, Y. Huang, A new adaptive Kalman filter with inaccurate noise statistics. Circuits Syst. Signal Process. 38(9), 4380–4404 (2019)

  26. S. Zhao, B. Huang, F. Liu, Linear optimal unbiased filter for time-variant systems without a priori information on initial conditions. IEEE Trans. Autom. Control 62(2), 882–887 (2016)

  27. S. Zhao, B. Huang, Y.S. Shmaliy, Bayesian state estimation on finite horizons: the case of linear state-space model. Automatica 85, 91–99 (2017)

  28. S. Zhao, Y.S. Shmaliy, F. Liu, Fast Kalman-like optimal unbiased FIR filtering with applications. IEEE Trans. Signal Process. 64(9), 2284–2297 (2016)

  29. B. Zhu, L. Chang, J. Xu, F. Zha, J. Li, Huber-based adaptive unscented Kalman filter with non-Gaussian measurement noise. Circuits Syst. Signal Process. 37(9), 3842–3861 (2018)


Acknowledgements

The authors sincerely thank the editor and reviewers for their help in improving the quality of this manuscript; their valuable comments will also guide our future work.

Author information


Corresponding author

Correspondence to Wei-dong Zhou.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported in part by National Natural Science Foundation of China (Grant No. 61573113), and in part by the Ph.D. Student Research and Innovation Fund of the Fundamental Research Funds for the Central Universities under Grant 3072020GIP0409.

Appendices

Appendix A: Detailed Derivation of the Log Joint Posterior PDF

Substituting (16)–(18) and (20)–(21) into (25), the joint posterior PDF \( p(\varTheta ,{\mathbf {y}}_{1:k})\) can be reformulated as follows

$$\begin{aligned} \begin{aligned} p(\varTheta ,{\mathbf {y}}_{1:k})&=N({\mathbf {y}}_k;{\mathbf {H}}_k{\mathbf {x}}_k,{\mathbf {R}}_k/\lambda _k)G(\lambda _k;\nu /2,\nu /2) N({\mathbf {x}}_k;{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}+{\varvec{\psi }}_k,P_{k|k-1}/\xi _k)\\&\times IW({\mathbf {P}}_{k|k-1};{\hat{t}}_{k|k-1},{\mathbf {T}}_{k|k-1}) N({\varvec{\psi }}_k;\hat{{\mathbf {q}}}_{k|k-1},{\varvec{{\Psi }}}_{k|k-1}/\xi _k)G(\xi _k;\omega /2,\omega /2) \end{aligned}\nonumber \\ \end{aligned}$$
(57)

where the Gaussian PDF of a d-dimensional random vector is given by [2]

$$\begin{aligned} N(x;\mu ,\varSigma )=\frac{1}{(2\pi )^{d/2}|\varSigma |^{1/2}}\exp \left( -\frac{1}{2}(x-\mu )^{\mathrm {T}}\varSigma ^{-1}(x-\mu )\right) \end{aligned}$$
(58)

and the inverse-Wishart PDF of a \(d \times d\) symmetric positive-definite random matrix \(\mathbf {B}\) is given by [2]

$$\begin{aligned} IW({\mathbf {B}};\varsigma , \varDelta )=\frac{|\varDelta |^{\varsigma /2}|{\mathbf {B}}|^{-(\varsigma +d+1)/2}\exp (-0.5{\text {trace}}(\varDelta {\mathbf {B}}^{-1}))}{2^{d\varsigma /2}\varGamma _d(\varsigma /2)} \end{aligned}$$
(59)

where \(\varGamma (\cdot )\) and \(\varGamma _d(\cdot )\) denote the Gamma function and the multivariate Gamma function, respectively, and the Gamma PDF is given by [2]

$$\begin{aligned} G(\sigma ;\zeta ,\eta )=\frac{\eta ^{\zeta }}{\varGamma (\zeta )}\sigma ^{\zeta -1}\exp (-\sigma \eta ) \end{aligned}$$
(60)
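As a sanity check on the densities in (58)–(60), the Gaussian and Gamma log-PDFs can be evaluated numerically and cross-checked against standard library implementations. The following is a minimal sketch in Python; the function names are illustrative and not from the paper.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import multivariate_normal, gamma as gamma_dist

def gaussian_logpdf(x, mu, Sigma):
    """Log of the d-dimensional Gaussian PDF N(x; mu, Sigma) in (58)."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)  # stable log-determinant of Sigma
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet
                   + diff @ np.linalg.solve(Sigma, diff))

def gamma_logpdf(sigma, zeta, eta):
    """Log of the Gamma PDF G(sigma; zeta, eta) in (60), rate parameterization."""
    return (zeta * np.log(eta) - gammaln(zeta)
            + (zeta - 1.0) * np.log(sigma) - eta * sigma)
```

Note that SciPy's `gamma` distribution uses a scale parameter, so the rate \(\eta \) in (60) enters as `scale=1/eta` when cross-checking.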

Exploiting (58)–(60) and taking the logarithm of (57), the log joint posterior PDF is formulated as follows

$$\begin{aligned} \begin{aligned} {\text {log}} p(\varTheta ,{\mathbf {y}}_{1:k})=&-0.5\lambda _k({\mathbf {y}}_k-{\mathbf {H}}_k{\mathbf {x}}_k)^T{\mathbf {R}}_k^{-1}({\mathbf {y}}_k-{\mathbf {H}}_k{\mathbf {x}}_k)\\&-0.5\xi _k({\mathbf {x}}_k-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)^T{\mathbf {P}}_{k|k-1}^{-1} ({\mathbf {x}}_k-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)\\&-0.5\xi _k({\varvec{\psi }}_k-\hat{{\mathbf {q}}}_{k|k-1})^T{\varvec{{\Psi }}}_{k|k-1}^{-1}({\varvec{\psi }}_k-\hat{{\mathbf {q}}}_{k|k-1})\\&-0.5{\text {trace}}({\mathbf {T}}_{k|k-1}{\mathbf {P}}_{k|k-1}^{-1}) -0.5(n+{\hat{t}}_{k|k-1}+2){\text {log}}|{\mathbf {P}}_{k|k-1}|\\&+(0.5m+0.5\nu -1){\text {log}}\lambda _k-0.5\nu \lambda _k\\&+(0.5n+0.5\omega -1){\text {log}}\xi _k-0.5\omega \xi _k+c_\theta \end{aligned} \end{aligned}$$
(61)

Appendix B: Proof of the Proposition 2

Define the modified one-step predicted PDF \(p^{(i+1)}({\mathbf {x}}_k|{\mathbf {y}}_{1:k-1})\) as follows

$$\begin{aligned} p^{(i+1)}({\mathbf {x}}_k|{\mathbf {y}}_{1:k-1}) =N({\mathbf {x}}_k;\hat{{\mathbf {x}}}_{k|k-1},\overline{{\mathbf {P}}}_{k|k-1}^{(i+1)}) \end{aligned}$$
(62)

Similar to the modified one-step predicted PDF, define the modified likelihood PDF \(p^{(i+1)}({\mathbf {y}}_k|{\mathbf {x}}_k)\) as follows

$$\begin{aligned} p^{(i+1)}({\mathbf {y}}_k|{\mathbf {x}}_k) =N({\mathbf {y}}_k;{\mathbf {H}}_k{\mathbf {x}}_k,\overline{{\mathbf {R}}}_k^{(i+1)}) \end{aligned}$$
(63)

where \(\overline{{\mathbf {P}}}_{k|k-1}^{(i+1)}\) and \(\overline{{\mathbf {R}}}_k^{(i+1)}\) are the modified one-step predicted error covariance matrix and the modified measurement noise covariance matrix, respectively, given as follows

$$\begin{aligned} \overline{{\mathbf {P}}}_{k|k-1}^{(i+1)}=&\frac{\{E^{(i)}[{\mathbf {P}}_{k|k-1}^{-1}]\}^{-1}}{E^{(i)}[\xi _k]} \end{aligned}$$
(64)
$$\begin{aligned} \overline{{\mathbf {R}}}_k^{(i+1)}=&\frac{{\mathbf {R}}_k}{E^{(i)}[\lambda _k]} \end{aligned}$$
(65)

Using (62)–(65) in (31), we obtain

$$\begin{aligned} q^{(i+1)}({\mathbf {x}}_k)=\frac{1}{c_k^{(i+1)}}p^{(i+1)}({\mathbf {y}}_k|{\mathbf {x}}_k)p^{(i+1)}({\mathbf {x}}_k|{\mathbf {y}}_{1:k-1}) \end{aligned}$$
(66)

where \(c_k^{(i+1)}\) is the normalizing constant given by

$$\begin{aligned} c_k^{(i+1)}=\int p^{(i+1)}({\mathbf {y}}_k|{\mathbf {x}}_k)p^{(i+1)}({\mathbf {x}}_k|{\mathbf {y}}_{1:k-1})d{\mathbf {x}}_k \end{aligned}$$
(67)
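In practice, (62)–(67) reduce to a standard Kalman measurement update carried out with the modified covariances from (64)–(65). A minimal sketch of one such iteration, taking the modified covariances as precomputed inputs; the function and variable names are illustrative:

```python
import numpy as np

def vb_state_update(x_pred, P_bar, H, R_bar, y):
    """One VB iteration of the state update: a standard KF measurement
    update yielding q^(i+1)(x_k) in (66)-(67), using the modified
    predicted covariance P_bar (64) and measurement covariance R_bar (65)."""
    S = H @ P_bar @ H.T + R_bar          # innovation covariance
    K = P_bar @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_upd = x_pred + K @ (y - H @ x_pred)
    P_upd = P_bar - K @ H @ P_bar
    return x_upd, P_upd
```

Because the auxiliary-variable expectations only rescale the covariances, the familiar closed-form Gaussian update can be reused unchanged at every iteration.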

Appendix C: Proof of the Proposition 3

Using (37), the \({\text {log}}q^{(i+1)}({\varvec{\psi }}_k,\xi _k)\) can be rewritten as follows

$$\begin{aligned} {\text {log}}q^{(i+1)}({\varvec{\psi }}_k,\xi _k)&={\text {log}}N({\varvec{\psi }}_k;\hat{{\mathbf {q}}}_{k|k-1},{\varvec{{\Psi }}}_{k|k-1}/\xi _k)\nonumber \\&\quad +\,(0.5n+0.5\omega -1){\text {log}}\xi _k-0.5\omega \xi _k\nonumber \\&\quad -0.5\xi _k {\text {trace}}({\mathbf {P}}_{k|k}^{(i)}E^{(i)}[{\mathbf {P}}_{k|k-1}^{-1}])\nonumber \\&\quad -0.5\xi _k(\hat{{\mathbf {x}}}_{k|k}^{(i)}-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)^TE^{(i)}[{\mathbf {P}}_{k|k-1}^{-1}]\nonumber \\&\quad \times \,(\hat{{\mathbf {x}}}_{k|k}^{(i)}-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)+c_{\{{\varvec{\psi }}_k,\xi _k\}} \end{aligned}$$
(68)

To derive the approximate posterior PDF of the time-varying process bias, define the modified time-varying process bias and the corresponding error covariance as \(\widetilde{{\varvec{\psi }}}_k^{(i)}\) and \(\widetilde{{\varvec{{\Psi }}}}_k^{(i)}\), respectively, given as follows

$$\begin{aligned} \widetilde{{\varvec{\psi }}}_k^{(i)}&=\hat{{\mathbf {x}}}_{k|k}^{(i)}-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1} \end{aligned}$$
(69)
$$\begin{aligned} \widetilde{{\varvec{{\Psi }}}}_k^{(i)}&=\{E^{(i)}[{\mathbf {P}}_{k|k-1}^{-1}]\}^{-1} \end{aligned}$$
(70)

By inserting (69) and (70) into (68), Eq. (68) is reformulated as follows

$$\begin{aligned} \begin{aligned} {\text {log}}q^{(i+1)}({\varvec{\psi }}_k,\xi _k)&= {\text {log}}N({\varvec{\psi }}_k;\hat{{\mathbf {q}}}_{k|k-1},{\varvec{{\Psi }}}_{k|k-1}/\xi _k)\\&+(0.5n+0.5\omega -1){\text {log}}\xi _k-0.5\omega \xi _k\\&-0.5\xi _k {\text {trace}}({\mathbf {P}}_{k|k}^{(i)}(\widetilde{{\varvec{{\Psi }}}}_k^{(i)})^{-1})\\&-0.5\xi _k(\widetilde{{\varvec{\psi }}}_k^{(i)}-{\varvec{\psi }}_k)^T(\widetilde{{\varvec{{\Psi }}}}_k^{(i)})^{-1} (\widetilde{{\varvec{\psi }}}_k^{(i)}-{\varvec{\psi }}_k)+c_{\{{\varvec{\psi }}_k,\xi _k\}} \end{aligned} \end{aligned}$$
(71)

Define the prior and likelihood PDFs of the modified time-varying process bias as

$$\begin{aligned}&p({\varvec{\psi }}_k|\xi _k,{\mathbf {y}}_{1:k-1})=N({\varvec{\psi }}_k;\hat{{\mathbf {q}}}_{k|k-1},{\varvec{{\Psi }}}_{k|k-1}/\xi _k) \end{aligned}$$
(72)
$$\begin{aligned}&p(\widetilde{{\varvec{\psi }}}_k^{(i)}|{\varvec{\psi }}_k,\xi _k,{\mathbf {y}}_{1:k-1})=N(\widetilde{{\varvec{\psi }}}_k^{(i)}; {\varvec{\psi }}_k,\frac{\widetilde{{\varvec{{\Psi }}}}_k^{(i)}}{\xi _k}) \end{aligned}$$
(73)

Using Bayes’ rule and the standard KF measurement-update stage, we have

$$\begin{aligned} \begin{aligned} p({\varvec{\psi }}_k|\xi _k,{\mathbf {y}}_{1:k-1})p(\widetilde{{\varvec{\psi }}}_k^{(i)}|{\varvec{\psi }}_k,\xi _k,{\mathbf {y}}_{1:k-1})= p({\varvec{\psi }}_k|\xi _k,{\mathbf {y}}_{1:k-1},\widetilde{{\varvec{\psi }}}_k^{(i)})p(\widetilde{{\varvec{\psi }}}_k^{(i)}|\xi _k,{\mathbf {y}}_{1:k-1}) \end{aligned}\nonumber \\ \end{aligned}$$
(74)

where the marginal likelihood PDF and the posterior PDF of the modified time-varying process bias are given by

$$\begin{aligned}&p(\widetilde{{\varvec{\psi }}}_k^{(i)}|\xi _k,{\mathbf {y}}_{1:k-1})= N(\widetilde{{\varvec{\psi }}}_k^{(i)};\hat{{\mathbf {q}}}_{k|k-1},\frac{{\varvec{{\Psi }}}_{k|k-1}+\widetilde{{\varvec{{\Psi }}}}_k^{(i)}}{\xi _k}) \end{aligned}$$
(75)
$$\begin{aligned}&p({\varvec{\psi }}_k|\xi _k,{\mathbf {y}}_{1:k-1},\widetilde{{\varvec{\psi }}}_k^{(i)})= N({\varvec{\psi }}_k;\hat{{\mathbf {q}}}_{k|k}^{(i)},\frac{{\varvec{{\Psi }}}_{k|k-1}}{\xi _k}) \end{aligned}$$
(76)

Using (72)–(76), (71) can be reformulated as follows

$$\begin{aligned} \begin{aligned} {\text {log}}q^{(i+1)}({\varvec{\psi }}_k,\xi _k)=&{\text {log}}N(\widetilde{{\varvec{\psi }}}_k^{(i)};\hat{{\mathbf {q}}}_{k|k-1},\frac{{\varvec{{\Psi }}}_{k|k-1}+ \widetilde{{\varvec{{\Psi }}}}_k^{(i)}}{\xi _k})\\&+{\text {log}}N({\varvec{\psi }}_k;\hat{{\mathbf {q}}}_{k|k}^{(i)},\frac{{\varvec{{\Psi }}}_{k|k-1}}{\xi _k})\\&+(0.5n+0.5\omega -1){\text {log}}\xi _k-0.5\omega \xi _k\\&-0.5\xi _k {\text {trace}}[{\mathbf {P}}_{k|k}^{(i)}( \widetilde{{\varvec{{\Psi }}}}_k^{(i)})^{-1}]+c_{\{{\varvec{\psi }}_k,\xi _k\}} \end{aligned} \end{aligned}$$
(77)
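The posterior moments appearing in (74)–(76) follow from a KF-style update that treats \(\widetilde{{\varvec{\psi }}}_k^{(i)}\) as a pseudo-measurement of the bias with covariance factor \(\widetilde{{\varvec{{\Psi }}}}_k^{(i)}\). A minimal sketch under that reading; the function and variable names are illustrative:

```python
import numpy as np

def bias_update(q_pred, Psi_pred, psi_tilde, Psi_tilde):
    """Update the bias estimate given the pseudo-measurement psi_tilde,
    following the Bayes-rule factorization in (74)-(76); the common
    factor 1/xi_k cancels out of the gain."""
    K = Psi_pred @ np.linalg.inv(Psi_pred + Psi_tilde)  # gain
    q_upd = q_pred + K @ (psi_tilde - q_pred)           # posterior mean in (76)
    Psi_upd = Psi_pred - K @ Psi_pred                   # posterior scale factor
    return q_upd, Psi_upd
```

Since both the prior (72) and the pseudo-measurement (73) share the scaling \(1/\xi _k\), the gain is independent of \(\xi _k\) and the bias mean update can be computed before \(q^{(i+1)}(\xi _k)\) is known.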

Appendix D: Calculation of the Complex Necessary Expectation

The expectation \({\mathbf {A}}_k^{(i)}\) in (27) can be calculated as follows

$$\begin{aligned} \begin{aligned} {\mathbf {A}}_k^{(i)}=&E_{{\mathbf {P}}_{k|k-1}}^{(i)}[({\mathbf {x}}_k-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)({\mathbf {x}}_k-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)^T]\\ =&E^{(i)}[({\mathbf {x}}_k-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-\hat{{\mathbf {q}}}_{k|k}^{(i)}+ \hat{{\mathbf {q}}}_{k|k}^{(i)}-{\varvec{\psi }}_k)\\&\times ({\mathbf {x}}_k-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-\hat{{\mathbf {q}}}_{k|k}^{(i)}+\hat{{\mathbf {q}}}_{k|k}^{(i)}-{\varvec{\psi }}_k)^T]\\ =&E^{(i)}[({\mathbf {x}}_k-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-\hat{{\mathbf {q}}}_{k|k}^{(i)}) ({\mathbf {x}}_k-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-\hat{{\mathbf {q}}}_{k|k}^{(i)})^T]+{\mathbf {P}}_{{\varvec{\psi }}_k}^{(i)}\\ =&E^{(i)}[({\mathbf {x}}_k-\hat{{\mathbf {x}}}_{k|k}^{(i)}+\hat{{\mathbf {x}}}_{k|k}^{(i)}- {\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-\hat{{\mathbf {q}}}_{k|k}^{(i)})\\&\times ({\mathbf {x}}_k-\hat{{\mathbf {x}}}_{k|k}^{(i)}+\hat{{\mathbf {x}}}_{k|k}^{(i)}-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}- \hat{{\mathbf {q}}}_{k|k}^{(i)})^T]+{\mathbf {P}}_{{\varvec{\psi }}_k}^{(i)}\\ =&{\mathbf {P}}_{k|k}^{(i)}+(\hat{{\mathbf {x}}}_{k|k}^{(i)}-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-\hat{{\mathbf {q}}}_{k|k}^{(i)})\\&\times (\hat{{\mathbf {x}}}_{k|k}^{(i)}-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}- \hat{{\mathbf {q}}}_{k|k}^{(i)})^T+{\mathbf {P}}_{{\varvec{\psi }}_k}^{(i)} \end{aligned} \end{aligned}$$
(78)

where \({\mathbf {P}}_{{\varvec{\psi }}_k}^{(i+1)}\) is obtained by

$$\begin{aligned} {\mathbf {P}}_{{\varvec{\psi }}_k}^{(i+1)}=\frac{{\varvec{{\Psi }}}_{k|k}^{(i+1)}}{E^{(i+1)}[\xi _k]} \end{aligned}$$
(79)

The expectation \({\mathbf {C}}_k^{(i)}\) in (37) can be calculated as follows

$$\begin{aligned} \begin{aligned} {\mathbf {C}}_k^{(i)}=&E_{({\varvec{\psi }}_k,\xi _k)}^{(i)}[({\mathbf {x}}_k-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)({\mathbf {x}}_k-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)^T]\\ =&E^{(i)}[({\mathbf {x}}_k-\hat{{\mathbf {x}}}_{k|k}^{(i)}+\hat{{\mathbf {x}}}_{k|k}^{(i)} -{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)\\&\times ({\mathbf {x}}_k-\hat{{\mathbf {x}}}_{k|k}^{(i)}+\hat{{\mathbf {x}}}_{k|k}^{(i)} -{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)^T]\\ =&{\mathbf {P}}_{k|k}^{(i)}+[(\hat{{\mathbf {x}}}_{k|k}^{(i)}-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k) (\hat{{\mathbf {x}}}_{k|k}^{(i)}-{\mathbf {F}}_{k-1}\hat{{\mathbf {x}}}_{k-1|k-1}-{\varvec{\psi }}_k)^T] \end{aligned} \end{aligned}$$
(80)

The expectation \({\mathbf {D}}_k^{(i)}\) in (45) can be calculated as follows

$$\begin{aligned} \begin{aligned} {\mathbf {D}}_k^{(i)}=&E_{\lambda _k}^{(i)}[({\mathbf {y}}_k-{\mathbf {H}}_k{\mathbf {x}}_k)({\mathbf {y}}_k-{\mathbf {H}}_k{\mathbf {x}}_k)^T]\\ =&E^{(i)}[({\mathbf {y}}_k-{\mathbf {H}}_k\hat{{\mathbf {x}}}_{k|k}^{(i)}+{\mathbf {H}}_k\hat{{\mathbf {x}}}_{k|k}^{(i)}-{\mathbf {H}}_k{\mathbf {x}}_k)\\&\times ({\mathbf {y}}_k-{\mathbf {H}}_k\hat{{\mathbf {x}}}_{k|k}^{(i)}+{\mathbf {H}}_k\hat{{\mathbf {x}}}_{k|k}^{(i)}-{\mathbf {H}}_k{\mathbf {x}}_k)^T]\\ =&({\mathbf {y}}_k-{\mathbf {H}}_k\hat{{\mathbf {x}}}_{k|k}^{(i)})({\mathbf {y}}_k-{\mathbf {H}}_k\hat{{\mathbf {x}}}_{k|k}^{(i)})^T+{\mathbf {H}}_kE^{(i)}[({\mathbf {x}}_k-\hat{{\mathbf {x}}}_{k|k}^{(i)})({\mathbf {x}}_k-\hat{{\mathbf {x}}}_{k|k}^{(i)})^T]{\mathbf {H}}_k^T\\ =&({\mathbf {y}}_k-{\mathbf {H}}_k\hat{{\mathbf {x}}}_{k|k}^{(i)})({\mathbf {y}}_k-{\mathbf {H}}_k\hat{{\mathbf {x}}}_{k|k}^{(i)})^T+ {\mathbf {H}}_k{\mathbf {P}}_{k|k}^{(i)}{\mathbf {H}}_k^T \end{aligned} \end{aligned}$$
(81)
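The closed forms (78) and (81) are straightforward to implement once the filtered moments are available. A minimal sketch; the function and variable names are illustrative:

```python
import numpy as np

def expectation_A(P_upd, x_upd, F, x_prev, q_upd, P_psi):
    """A_k in (78): filtered covariance plus the outer product of the
    state-prediction residual plus the bias error covariance P_psi (79)."""
    d = x_upd - F @ x_prev - q_upd
    return P_upd + np.outer(d, d) + P_psi

def expectation_D(y, H, x_upd, P_upd):
    """D_k in (81): outer product of the measurement residual plus the
    filtered covariance projected into measurement space."""
    r = y - H @ x_upd
    return np.outer(r, r) + H @ P_upd @ H.T
```

Both expectations are symmetric positive semi-definite by construction, which keeps the downstream covariance updates well defined.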


About this article


Cite this article

Jiang, Zh., Zhou, Wd., Jia, Gl. et al. A New Heavy-Tailed Robust Kalman Filter with Time-Varying Process Bias. Circuits Syst Signal Process 41, 2358–2378 (2022). https://doi.org/10.1007/s00034-021-01866-8


Keywords

Navigation