Image modeling based on a 2-D stochastic subspace system identification algorithm

  • Published in: Multidimensional Systems and Signal Processing

Abstract

Fitting a causal dynamic model to an image is a fundamental problem in image processing, pattern recognition, and computer vision. In image restoration, for instance, the goal is to recover an estimate of the true image, preferably in the form of a parametric model, given an image that has been degraded by a combination of blur and additive white Gaussian noise. In texture analysis, on the other hand, a model of a particular texture image can serve as a tool for simulating texture patterns. Finally, in image enhancement one computes a model of the true image, and the residuals between the image and the modeled image can be interpreted as the result of applying a de-noising filter. Numerous other applications within the field of image processing require a causal dynamic model; scene analysis, machined-parts inspection, and biometric analysis are but a few. Among the many causal dynamic models proposed in the literature, autoregressive moving average and state-space models (i.e., the Kalman filter) are the most commonly used. In this paper we introduce a 2-D stochastic state-space system identification algorithm for fitting a quarter-plane causal dynamic Roesser model to an image. The algorithm constructs a causal, recursive, and separable-in-denominator 2-D Kalman filter model. The algorithm is tested with three real images, and the quality of the estimated images is assessed.


Notes

  1. It is well known that images are nonstationary processes; nevertheless, even under mild violations of the stationarity assumption, the 2-D 4SID algorithm performed very well with real images, as seen in Sect. 7.

  2. In order to avoid an abuse of notation when no time index is involved, and for lack of a better term, we still use the terms past and future as is common in time-domain subspace system identification.

  3. For zero mean processes, \({\varvec{A}}|{\varvec{B}}={\varvec{A}}{\varvec{D}}{\varvec{B}}^{\top }({\varvec{B}}{\varvec{D}}{\varvec{B}}^{\top })^{-1}{\varvec{B}}\) denotes the orthogonal projection of \({\varvec{A}}\) onto \({\varvec{B}}\), where \({\varvec{D}}\) is a diagonal weight matrix.

  4. 4SID stands for State Space Subspace System IDentification.

  5. There is no need to compute the Q part, just the triangular factor L will suffice, thus achieving a significant reduction in the number of computations.

  6. M(1:m, :) denotes the first m rows of M and all of its columns.

  7. MATLAB is a registered trademark of The Math Works, Inc.
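Two of the numerical devices mentioned in the notes above can be illustrated concretely: the weighted orthogonal projection of note 3 and the "L-only" LQ factorization of note 5. The following is a hedged NumPy sketch with made-up data (the variable names are illustrative, not the authors' code); the LQ factorization is obtained from the QR factorization of the transpose, and only the triangular factor is retained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Note 3: weighted orthogonal projection A|B = A D B^T (B D B^T)^{-1} B.
A = rng.standard_normal((2, 10))
B = rng.standard_normal((3, 10))
D = np.diag(rng.uniform(0.5, 2.0, size=10))   # diagonal weight matrix
proj = A @ D @ B.T @ np.linalg.inv(B @ D @ B.T) @ B
# The residual A - A|B is D-orthogonal to the rows of B:
assert np.allclose((A - proj) @ D @ B.T, 0.0)

# Note 5: LQ factorization via QR of the transpose; only the triangular
# factor L is ever needed, so the orthogonal part can be discarded.
X = rng.standard_normal((4, 50))
Q, Rf = np.linalg.qr(X.T)    # economy-size QR of X^T
L = Rf.T                     # lower-triangular factor with X = L Q^T
assert np.allclose(np.tril(L), L)
assert np.allclose(L @ Q.T, X)
```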

References

  • Abramatic, J., Attasi, S., Chieze, J., & Curien, N. (1975). Modèles statistiques bidimensionnels et application au traitement numérique des images [Two-dimensional statistical models and application to digital image processing]. In Colloque national sur le traitement du signal et ses applications, Nice, France.

  • Aggarwal, G., Chowdhury, A., & Chellappa, A. (2004). A system identification approach for video-based face recognition. In: Proceedings of the international conference on pattern recognition, Washington, D.C.

  • Alata, O., & Olivier, C. (2003). Choice of a 2-D causal autoregressive texture model using information criteria. Pattern Recognition Letters, 24, 1191–1201.

  • Ali, M., Chughtai, S. S., & Werner, H. (2009). Identification of spatially interconnected systems. In Proceedings of the 48th IEEE conference on decision and control, held jointly with the 2009 28th Chinese control conference (pp. 7163–7168).

  • Attasi, S. (1976). Modelling and recursive estimation for double indexed sequences. In R. Mehra & D. Lainiotis (Eds.), System identification: Advances and case studies (pp. 289–348). New York: Academic Press.

  • Azimi-Sadjadi, M., & Wong, P. (1987). Two-dimensional block Kalman filtering for image restoration. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35, 1736–1749.

  • Barry, P., Gran, R., & Waters, C. (1976). Two-dimensional filtering: A state space approach. In Proceedings of the IEEE conference on decision and control, Clearwater, FL.

  • Bose, N. (1977). Problems and progress in multidimensional system theory. Proceedings of the IEEE, 65, 824–840.

  • Camcioglu, E., & Kayran, A. H. (2013). Spectrum estimation using new efficient 2-d AR lattice modeling of random fields. In Africon, 2013 (pp. 1–6).

  • Chen, C., Tsai, J., & Shieh, L. (2003). Modeling of variable coefficient Roesser’s model for systems described by second order partial differential equation. Circuits, Systems, and Signal Processing, 22, 423–463.

  • Delp, E., Kashyap, L., & Mitchell, O. (1979). Image data compression using autoregressive time series models. Pattern Recognition, 11, 313–323.

  • Dillabough, M. (2007). Discrete state-space models for distributed parameter systems (Unpublished master’s thesis). Sudbury, ON: School of Graduate Studies Laurentian University.

  • Ding, T., Sznaier, M., & Camps, O. (2006). Robust identification of 2-D periodic systems with applications to texture synthesis and classification. In Proceedings of the IEEE conference on decision and control, San Diego, CA.

  • Fornasini, E., & Marchesini, G. (1978). Doubly-indexed dynamical systems: State-space models and structural properties. Mathematical Systems Theory, 12, 59–72.

  • Fraanje, R., & Verhaegen, M. (2005). A spatial canonical approach to multidimensional state-space identification for distributed parameter systems. In Proceedings of the international workshop on multidimensional systems, Wuppertal.

  • Galkowski, K. (2000). Higher order discretization of 2-d systems. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 47(5), 713–722. doi:10.1109/81.847876.

  • Givone, D., & Roesser, R. (1972). Multidimensional linear iterative circuits—General properties. IEEE Transactions on Computers, 21, 1067–1073.

  • Golub, G., & Van Loan, C. (1996). Matrix computations (3rd ed.). Baltimore, MD: Johns Hopkins University Press.

  • Habibi, A. (1972). Two-dimensional Bayesian estimate of images. Proceedings of the IEEE, 60, 878–883.

  • Hunger, R. (2007). Floating point operations in matrix-vector calculus (Technical Report No. Version 1.3). Technische Universität München Associate Institute for Signal Processing. Retrieved from https://mediatum.ub.tum.de/doc/625604/625604.pdf.

  • Jain, A. (1981). Advances in mathematical models for image processing. Proceedings of the IEEE, 69, 502–528.

  • Kaczorek, T. (1985). Two dimensional linear systems. Berlin: Springer.

  • Katayama, T. (2005). Subspace methods for system identification. Berlin: Springer.

  • Katayama, T., & Kosaka, M. (1979). Recursive filtering algorithm for a two-dimensional system. IEEE Transactions of Automatic Control, 24, 130–132.

  • Katayama, T., & Picci, G. (1999). Realization of stochastic systems with exogenous inputs and subspace identification methods. Automatica, 35, 1635–1652.

  • Kayran, A. H., & Camcioglu, E. (2014). New efficient 2-d lattice structures for general autoregressive modeling of random fields. IEEE Transactions on Signal Processing, 62(6), 1590–1602.

  • Kizilkaya, A., & Kayran, A. (2005). Estimation of 2-D ARMA model parameters by using equivalent AR approach. Journal of the Franklin Institute, 342, 39–67.

  • Kuduvalli, G., & Rangayyan, R. (1993). An algorithm for direct computation of 2-D linear prediction coefficients. IEEE Transactions on Signal Processing, 41, 996–1000.

  • Kung, S. Y., Levy, B. C., Morf, M., & Kailath, T. (1977). New results in 2-D system theory, part II: 2-D state-space models-realization and the notions of controllability, observability, and minimality. Proceedings of the IEEE, 65, 945–961.

  • Kurek, J. E. (1985). The general state-space model for a two-dimensional linear digital system. IEEE Transactions on Automatic Control, 30(6), 600–602.

  • Kurek, J. E., & Zaremba, M. B. (1993). Iterative learning control synthesis based on 2-D system theory. IEEE Transactions on Automatic Control, 38, 121–125.

  • Lashgari, B. (1981). Two-dimensional approximation, model reduction and minimum variance error estimation. Unpublished doctoral dissertation. University of Southern California, Los Angeles, CA.

  • Lashgari, B., & Silverman, L. (1986). Recursive estimation of two-dimensional processes with application to image processing. Circuits Systems and Signal Processing, 5, 212–226.

  • Lashgari, B., Silverman, L., & Abramatic, J. (1983). Approximation of 2-D separable in denominator filters. IEEE Transactions on Circuits and Systems, 30, 107–121.

  • Lele, S., & Mendel, J. (1987). Modeling and recursive state estimation for two-dimensional noncausal filters with applications in image restoration. IEEE Transactions on Circuits and Systems, 34, 1507–1517.

  • Lin, T., Kawamata, M., & Higuchi, T. (1987). Decomposition of 2-D separable-denominator systems: Existence, uniqueness, and applications. IEEE Transactions on Circuits and Systems, 34, 292–296.

  • Magnus, J., & Neudecker, H. (2001). Matrix differential calculus with applications in statistics and econometrics. New York: Wiley.

  • Mardia, K. V., Kent, J. T., & Bibby, J. M. (1979). Multivariate analysis. New York: Academic Press.

  • Martin, C. (2005). Higher-order Kronecker products and tensor decompositions. Unpublished doctoral dissertation, Cornell University, Ithaca, NY.

  • Moonen, M., De Moor, B., Vandenberghe, L., & Vandewalle, J. (1989). On and off line identification of linear state space models. International Journal of Control, 49, 219–232.

  • Owens, D., Amann, N., Rogers, E., & French, M. (2000). Analysis of linear iterative learning control schemes: A 2-D systems/repetitive processes approach. Multidimensional Systems and Signal Processing, 11, 125–177.

  • Ramos, J. (1994). A subspace algorithm for identifying 2-D separable-in-denominator filters. IEEE Transactions on Circuits and Systems, 41, 63–67.

  • Ramos, J., Alenany, A., Shang, H., & Lopes dos Santos, P. (2011). Subspace algorithms for identifying separable in denominator 2-D systems with deterministic inputs. IET Control Theory and Applications, 5, 1748–1768.

  • Ramos, J., & dos Santos, P. (2011). Subspace system identification of separable-in-denominator 2-D stochastic systems. In Proceedings of the IEEE conference on decision and control and European control conference, Orlando, FL.

  • Rao, C. R. (1964). The use and interpretation of principal component analysis in applied research. Sankhyā, Series A, 26, 329–359.

  • Roesser, R. (1975). A discrete-state-space model for linear image processing. IEEE Transactions on Automatic Control, 20, 1–10.

  • Strintzis, M. (1976). Comments on “two-dimensional Bayesian estimate of images”. Proceedings of the IEEE, 64, 1255–1257.

  • Suresh, B., & Shenoi, B. (1981). New results in two-dimensional Kalman filtering with applications to image restoration. IEEE Transactions on Circuits and Systems, 28, 307–319.

  • USC-SIPI Image Database (n.d.). http://sipi.usc.edu/database.

  • Van Overschee, P., & De Moor, B. (1996). Subspace identification for linear systems: Theory, implementation, and applications. New York: Kluwer.

  • Verhaegen, M. (1994). Identification of the deterministic part of mimo state space model given in innovations form from input–output data. Automatica, 30(1), 61–74.

  • Verhaegen, M., & Verdult, V. (2007). Filtering and system identification: A least squares approach. Cambridge: Cambridge University Press.

  • Wang, Z., Bovik, A., Sheikh, H., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13, 600–612.

  • Woods, J., & Ingle, V. (1981). Kalman filtering in two dimensions: Further results. IEEE Transactions on Acoustic, Speech, and Signal Processing, 29, 188–197.

  • Wu, Z. (1985). Multidimensional state-space model Kalman filtering with application to image restoration. IEEE Transactions on Acoustic, Speech, and Signal Processing, 33, 1576–1592.

  • Xiao, C., Sreeram, V., Liu, W., & Venetsanopolos, A. (1998). Identification and model reduction of 2-D systems via the extended impulse response gramians. Automatica, 34, 93–101.

  • Yang, R., Xie, L., & Zhang, C. (2006). H2 and mixed H2/H∞ control of two-dimensional systems in Roesser model. Automatica, 42, 1507–1514.

  • Zhang, J. Y., & Steenaart, W. (1990). A very fast Kalman filter for image restoration. In Proceedings of the IEEE international symposium on circuits and systems, New Orleans, LA.

Corresponding author

Correspondence to José A. Ramos.

Appendices

Appendix 1: Kronecker product results

While implementing the 2-D stochastic 4SID algorithm, we need some Kronecker product identities. In this section we borrow results from Magnus and Neudecker (2001) and Martin (2005). Let \({\varvec{A}}\in {\mathbb {R}}^{m \times n}\) and \({\varvec{B}}\in {\mathbb {R}}^{p \times q}\) be two arbitrary matrices. We then have the following results:

$$\begin{aligned} \text {vec}\{{\varvec{A}}^{\top }\}= & {} {\varvec{K}}_{mn} \text {vec}\{{\varvec{A}}\}, \end{aligned}$$
(71)

where \({\varvec{K}}_{mn}\) is called “the commutator matrix” in Magnus and Neudecker (2001) and “the perfect shuffle operator” in Martin (2005), and is a square matrix of dimensions \(mn \times mn\), given by

$$\begin{aligned} {\varvec{K}}_{mn}= & {} \left[ \begin{array}{c} I_{mn}(1:m:mn,1:mn) \\ I_{mn}(2:m:mn,1:mn) \\ \vdots \\ I_{mn}(m:m:mn,1:mn) \end{array} \right] . \end{aligned}$$
(72)

\({\varvec{K}}_{mn}\) is the matrix obtained from the rows of the \(mn \times mn\) identity matrix, \(I_{mn}\), by taking every mth row starting with the first row, then every mth row starting with the second row, and so on, until the last block, obtained by taking every mth row starting with the mth row. Here the MATLAB notation M(a:b:c, d:e) indicates every bth row of M, starting with row a and ending at row c, together with columns d through e.
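The construction (72) and the identity (71) are easy to check numerically. The following is an illustrative NumPy sketch (function names are our own); note that vec must be column-major, as in the paper, which in NumPy means Fortran ordering.

```python
import numpy as np

def commutation_matrix(m, n):
    """K_mn of (72): stack the rows of I_mn taken every m-th row,
    first starting at row 1, then at row 2, ..., then at row m (1-based)."""
    I = np.eye(m * n)
    return np.vstack([I[i::m, :] for i in range(m)])

def vec(A):
    return A.flatten('F')   # column-major stacking, as in the paper

m, n = 2, 3
A = np.arange(1.0, m * n + 1).reshape(m, n)
K = commutation_matrix(m, n)
assert np.allclose(K @ vec(A), vec(A.T))   # identity (71)
```

Since \({\varvec{K}}_{mn}\) is a permutation of the identity rows, it is orthogonal, so its inverse is its transpose.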

Appendix 2: The operators “vec” and “reshape”

The vec operator converts a matrix into a column vector and the reshape operator, in general, converts a vector into a matrix.

Definition 1

Let \({\varvec{A}}\in {\mathbb {R}}^{m \times n}\). Then \(\text {vec}\{{\varvec{A}}\}\) is defined as

$$\begin{aligned} {\varvec{b}}= & {} \text {vec}\{{\varvec{A}}\} \nonumber \\= & {} \left[ \begin{array}{c} {\varvec{A}}(:,1) \\ {\varvec{A}}(:,2) \\ \vdots \\ {\varvec{A}}(:,n) \end{array} \right] \in {\mathbb {R}}^{mn}. \end{aligned}$$
(73)

Definition 2

Let \({\varvec{b}}\in {\mathbb {R}}^{mn}\). Then \(\text {reshape}\{{\varvec{b}},m,n\}\) is defined as

$$\begin{aligned} {\varvec{A}}= & {} \text {reshape}\{{\varvec{b}},m,n\} \;=\; \left[ \begin{array}{c|c|c|c} {\varvec{b}}(1:m)&{\varvec{b}}(m+1:2m)&\ldots&{\varvec{b}}((n-1)m+1:mn) \end{array} \right] . \end{aligned}$$
(74)

Note that the definition of \(\text {reshape}\{\cdot \}\) in MATLAB does not require \({\varvec{b}}\) to be a vector, but it operates on \({\varvec{b}}\) columnwise.
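In NumPy the same operators correspond to flatten and reshape with Fortran (column-major) ordering, since NumPy defaults to row-major order while MATLAB and the paper stack columnwise. A small illustrative sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # 2 x 3

b = A.flatten('F')                      # vec{A} of (73): stack the columns
A_back = b.reshape(2, 3, order='F')     # reshape{b, 2, 3} of (74)

assert np.allclose(b, [1, 4, 2, 5, 3, 6])
assert np.allclose(A_back, A)
```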

Appendix 3: Solving the coupled Lyapunov equations

Since (8) is not a Lyapunov equation per se, we need to break it up into its individual components in order to solve it. Adding the symmetry constraints, we obtain a system of equations of the form

$$\begin{aligned} \varPi _h= & {} A_1\varPi _hA_1^{\top } + A_2\varPi _vA_2^{\top } + Q_{hh} \end{aligned}$$
(75a)
$$\begin{aligned} \varPi _v= & {} A_4\varPi _vA_4^{\top } + Q_{vv} \end{aligned}$$
(75b)
$$\begin{aligned} \varPi _{hv}= & {} A_2\varPi _{v}A_4^{\top } + Q_{hv} \end{aligned}$$
(75c)
$$\begin{aligned} \varPi _{h}= & {} \varPi _{h}^{\top } \end{aligned}$$
(75d)
$$\begin{aligned} \varPi _{v}= & {} \varPi _{v}^{\top } \end{aligned}$$
(75e)
$$\begin{aligned} \varPi _{vh}= & {} \varPi ^{\top }_{hv}. \end{aligned}$$
(75f)

Now vectorizing (75a)–(75f), one can find \(\text {vec}\{\varPi _{h}\}\), \(\text {vec}\{\varPi _{v}\}\), and \(\text {vec}\{\varPi _{hv}\}\), by solving the following overdetermined system of equations

$$\begin{aligned} \left[ \begin{array}{c|c|c} I_{n_h^2}-A_1 \otimes A_1 &{} -A_2 \otimes A_2 &{} 0_{n_h^2 \times n_hn_v} \\ \hline 0_{n_v^2 \times n_h^2} &{} I_{n_v^2}-A_4 \otimes A_4 &{} 0_{n_v^2 \times n_hn_v} \\ \hline 0_{n_hn_v \times n_h^2} &{} - A_4 \otimes A_2 &{} I_{n_hn_v} \\ \hline I_{n_h^2}-{\varvec{K}}_{n_hn_h} &{} 0_{n_h^2 \times n_v^2} &{} 0_{n_h^2 \times n_hn_v} \\ \hline 0_{n_v^2 \times n_h^2} &{} I_{n_v^2}-{\varvec{K}}_{n_vn_v} &{} 0_{n_v^2 \times n_hn_v} \\ \hline 0_{n_hn_v \times n_h^2} &{} 0_{n_hn_v \times n_v^2} &{} I_{n_hn_v} - {\varvec{K}}_{n_hn_v} \end{array} \right] \left[ \begin{array}{c} \text {vec}\{\varPi _{h}\} \\ \text {vec}\{\varPi _{v}\} \\ \text {vec}\{\varPi _{hv}\} \end{array} \right]= & {} \left[ \begin{array}{c} \text {vec}\{Q_{hh}\} \\ \hline \text {vec}\{Q_{vv}\} \\ \hline \text {vec}\{Q_{hv}\} \\ \hline 0_{n_h^2 \times 1} \\ \hline 0_{n_v^2 \times 1} \\ \hline 0_{n_hn_v \times 1} \end{array} \right] ,\nonumber \\ \end{aligned}$$
(76)

where \({\varvec{K}}_{n_hn_h}\in {\mathbb {R}}^{n_h^2 \times n_h^2}\), \({\varvec{K}}_{n_vn_v}\in {\mathbb {R}}^{n_v^2 \times n_v^2}\), and \({\varvec{K}}_{n_hn_v}\in {\mathbb {R}}^{n_hn_v \times n_hn_v}\) are commutator matrices such that \(\text {vec}\{\varPi _{h}^{\top }\}={\varvec{K}}_{n_hn_h}\text {vec}\{\varPi _{h}\}\), \(\text {vec}\{\varPi _{v}^{\top }\}={\varvec{K}}_{n_vn_v}\text {vec}\{\varPi _{v}\}\), and \(\text {vec}\{\varPi _{hv}^{\top }\}={\varvec{K}}_{n_hn_v}\text {vec}\{\varPi _{hv}\}\).
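To make the vectorization concrete, here is a minimal NumPy sketch (illustrative function and variable names, not the authors' code) that solves the three defining equations (75a)–(75c) directly; the symmetry constraints are omitted because, for symmetric \(Q_{hh}\) and \(Q_{vv}\), symmetry of the solution follows automatically. Vectorizing (75a) with \(\text {vec}\{AXB^{\top }\}=(B \otimes A)\text {vec}\{X\}\) places \(-A_2 \otimes A_2\) in the first block row.

```python
import numpy as np

def solve_coupled_lyapunov(A1, A2, A4, Qhh, Qvv, Qhv):
    """Solve (75a)-(75c) via vec{A X B^T} = (B kron A) vec{X},
    using column-major (Fortran-order) vec throughout."""
    nh, nv = A1.shape[0], A4.shape[0]
    M = np.block([
        [np.eye(nh * nh) - np.kron(A1, A1), -np.kron(A2, A2), np.zeros((nh * nh, nh * nv))],
        [np.zeros((nv * nv, nh * nh)), np.eye(nv * nv) - np.kron(A4, A4), np.zeros((nv * nv, nh * nv))],
        [np.zeros((nh * nv, nh * nh)), -np.kron(A4, A2), np.eye(nh * nv)],
    ])
    rhs = np.concatenate([Qhh.flatten('F'), Qvv.flatten('F'), Qhv.flatten('F')])
    x = np.linalg.solve(M, rhs)
    Pih = x[:nh * nh].reshape(nh, nh, order='F')
    Piv = x[nh * nh:nh * nh + nv * nv].reshape(nv, nv, order='F')
    Pihv = x[nh * nh + nv * nv:].reshape(nh, nv, order='F')
    return Pih, Piv, Pihv
```

For stable \(A_1\) and \(A_4\) (spectral radius below one) the block matrix is nonsingular, so a direct solve suffices.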

An alternative solution can be obtained by using a standard discrete Lyapunov equation solver such as “dlyap” from MATLAB. First one would solve

$$\begin{aligned} \varPi _v= & {} \text {dlyap}(A_4,Q_{vv}), \end{aligned}$$
(77)

then solve (75c) for \(\varPi _{hv}\), and finally solve

$$\begin{aligned} \varPi _h= & {} \text {dlyap}(A_1,A_2\varPi _vA_2^{\top } + Q_{hh}). \end{aligned}$$
(78)

Once \(\{\varPi _h,\varPi _v,\varPi _{hv},S_h,S_v,R\}\) are known, one can compute G and \(\varLambda _{0,0}\) from

$$\begin{aligned} G_1= & {} A_1\varPi _h C_1^{\top }+A_2\varPi _v C_2^{\top }+S_h \end{aligned}$$
(79a)
$$\begin{aligned} G_2= & {} A_4\varPi _v C_2^{\top } + S_v \end{aligned}$$
(79b)
$$\begin{aligned} \varLambda _{0,0}= & {} C_1\varPi _h C_1^{\top } + C_2\varPi _v C_2^{\top } + R. \end{aligned}$$
(79c)
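The sequential route (77)–(78), followed by (79a)–(79c), can be sketched in NumPy. Since NumPy has no dlyap, a minimal Kronecker-based stand-in is used; this is an illustrative sketch with hypothetical function names, not the authors' implementation.

```python
import numpy as np

def dlyap(A, Q):
    """Minimal NumPy stand-in for MATLAB's dlyap: solves X = A X A^T + Q."""
    n = A.shape[0]
    x = np.linalg.solve(np.eye(n * n) - np.kron(A, A), Q.flatten('F'))
    return x.reshape(n, n, order='F')

def covariance_sequence(A1, A2, A4, C1, C2, Qhh, Qvv, Qhv, Sh, Sv, R):
    """Sequential solution (77)-(78), then G and Lambda_{0,0} from (79a)-(79c)."""
    Piv = dlyap(A4, Qvv)                            # (77)
    Pihv = A2 @ Piv @ A4.T + Qhv                    # (75c)
    Pih = dlyap(A1, A2 @ Piv @ A2.T + Qhh)          # (78)
    G1 = A1 @ Pih @ C1.T + A2 @ Piv @ C2.T + Sh     # (79a)
    G2 = A4 @ Piv @ C2.T + Sv                       # (79b)
    Lam00 = C1 @ Pih @ C1.T + C2 @ Piv @ C2.T + R   # (79c)
    return Pih, Piv, Pihv, G1, G2, Lam00
```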

Appendix 4: Solving the coupled Riccati equations for \(P_h\) and \(P_v\)

In the 2-D stochastic 4SID algorithm, the state estimate covariance matrices \(P_h\) and \(P_v\) must be found by solving the coupled Riccati equations (17a)–(17b). To the authors’ knowledge, there is no known algorithm to solve such coupled Riccati equations. An alternative is to solve the two Riccati equations recursively. First, let us compute the state estimation error covariance matrices \(\varDelta P_h=\varPi _h-P_h\) and \(\varDelta P_v=\varPi _v-P_v\) by direct substitution of the Lyapunov and Riccati equations, i.e.,

$$\begin{aligned} \varDelta P_h= & {} \varPi _h - P_h \;=\; A_1\varPi _hA_1^{\top } + A_2\varPi _vA_2^{\top } + Q_{hh} - A_1P_hA_1^{\top } - A_2P_vA_2^{\top } \nonumber \\&- (G-A_1P_hC_1^{\top }-A_2P_vC_2^{\top })(\varLambda _{00} - C_1P_hC_1^{\top } \nonumber \\&-C_2P_vC_2^{\top })^{-1} (G-A_1P_hC_1^{\top }-A_2P_vC_2^{\top })^{\top } \nonumber \\= & {} A_1\varDelta P_hA_1^{\top } + A_2\varDelta P_vA_2^{\top } + Q_{hh} - (A_1\varPi _hC_1^{\top }+A_2\varPi _vC_2^{\top } + S_h-A_1P_hC_1^{\top }\nonumber \\&-A_2P_vC_2^{\top }) \nonumber \\&\times (C_1\varPi _hC_1^{\top }+C_2\varPi _vC_2^{\top } + R - C_1P_hC_1^{\top } -C_2P_vC_2^{\top })^{-1} (A_1\varPi _hC_1^{\top }+A_2\varPi _vC_2^{\top } \nonumber \\&+ S_h-A_1P_hC_1^{\top }-A_2P_vC_2^{\top })^{\top } \nonumber \\= & {} A_1\varDelta P_hA_1^{\top } + A_2\varDelta P_vA_2^{\top } + Q_{hh} - (A_1\varDelta P_hC_1^{\top } + A_2\varDelta P_vC_2^{\top } + S_h) \nonumber \\&\times (C_1\varDelta P_hC_1^{\top } + C_2\varDelta P_vC_2^{\top } + R)^{-1}(A_1\varDelta P_hC_1^{\top } + A_2\varDelta P_vC_2^{\top } + S_h)^{\top } \nonumber \\ \varDelta P_h= & {} A_1\varDelta P_hA_1^{\top } + \varDelta Q_{hh} - (A_1\varDelta P_hC_1^{\top } + \varDelta S_h)(C_1\varDelta P_hC_1^{\top } + \varDelta R_h)^{-1} (A_1\varDelta P_hC_1^{\top } \nonumber \\&+ \varDelta S_h)^{\top }, \end{aligned}$$
(80)

where

$$\begin{aligned} \varDelta Q_{hh}= & {} A_2\varDelta P_vA_2^{\top } + Q_{hh} \end{aligned}$$
(81a)
$$\begin{aligned} \varDelta S_{h}= & {} A_2\varDelta P_vC_2^{\top } + S_h \end{aligned}$$
(81b)
$$\begin{aligned} \varDelta R_{h}= & {} C_2\varDelta P_vC_2^{\top } + R. \end{aligned}$$
(81c)

Doing the same for \(\varDelta P_v\), we obtain

$$\begin{aligned} \varDelta P_v= & {} A_4\varDelta P_vA_4^{\top } + Q_{vv} - (A_4\varDelta P_vC_2^{\top } + S_v) (C_2\varDelta P_vC_2^{\top } + \varDelta R_v)^{-1}(A_4\varDelta P_vC_2^{\top } + S_v)^{\top },\nonumber \\ \end{aligned}$$
(82)

where

$$\begin{aligned} \varDelta R_v= & {} C_1\varDelta P_hC_1^{\top } + R. \end{aligned}$$
(83)

Note that (80)–(82) are now standard algebraic Riccati equations and can be solved with the MATLAB function “dare”. One way to solve (80)–(82) is by first setting \(\varDelta P_h=0_{n_h \times n_h}\) and solving (82) to obtain an initial \(\varDelta P_v\). Then iterate between (80) and (82) until convergence. The algorithm would be as follows:

Algorithm Riccati

  1. Initialization: Set \(k=0\) and \(\varDelta P_h(0)=0_{n_h \times n_h}\).

  2. Compute \(\varDelta P_v(0)\) from (82) with \(\varDelta P_h(0)\) as in Step 1.

  3. Compute

    $$\begin{aligned} \varDelta Q_{hh}(k)= & {} A_2\varDelta P_v(k)A_2^{\top } + Q_{hh} \\ \varDelta S_h(k)= & {} A_2\varDelta P_v(k)C_2^{\top } + S_h \\ \varDelta R_h(k)= & {} C_2\varDelta P_v(k)C_2^{\top } + R \\ \varDelta P_h(k+1)= & {} A_1\varDelta P_h(k)A_1^{\top } + \varDelta Q_{hh}(k) - (A_1\varDelta P_h(k)C_1^{\top } + \varDelta S_h(k)) \\&\times (C_1\varDelta P_h(k)C_1^{\top } + \varDelta R_h(k))^{-1}(A_1\varDelta P_h(k)C_1^{\top } + \varDelta S_h(k))^{\top } \\ \varDelta R_v(k)= & {} C_1\varDelta P_h(k)C_1^{\top } + R \\ \varDelta P_v(k+1)= & {} A_4\varDelta P_v(k)A_4^{\top } + Q_{vv} - (A_4\varDelta P_v(k)C_2^{\top } + S_v) (C_2\varDelta P_v(k)C_2^{\top } \\&+ \varDelta R_v(k))^{-1} (A_4\varDelta P_v(k)C_2^{\top } + S_v)^{\top }. \end{aligned}$$
  4. Set \(k = k + 1\) and return to Step 3 until convergence, i.e., \(k \rightarrow k^{\star }\).

  5. Upon convergence set \(\varDelta P_h=\varDelta P_h(k^{\star })\) and \(\varDelta P_v=\varDelta P_v(k^{\star })\), where \(k^{\star }\) is the convergence iteration index. Compute \(P_h\) and \(P_v\) from

    $$\begin{aligned} P_h= & {} \varPi _h - \varDelta P_h \end{aligned}$$
    (84a)
    $$\begin{aligned} P_v= & {} \varPi _v - \varDelta P_v. \end{aligned}$$
    (84b)
  6. Compute the Kalman gain matrices from

    $$\begin{aligned} K_1= & {} (A_1\varDelta P_hC_1^{\top } + A_2\varDelta P_vC_2^{\top } + S_h)(C_1\varDelta P_hC_1^{\top } + C_2\varDelta P_vC_2^{\top } + R)^{-1} \end{aligned}$$
    (85a)
    $$\begin{aligned} K_2= & {} (A_4\varDelta P_vC_2^{\top } + S_v)(C_1\varDelta P_hC_1^{\top } + C_2\varDelta P_vC_2^{\top } + R)^{-1}. \end{aligned}$$
    (85b)

Note that one could use MATLAB’s “dare” function iteratively to find

$$\begin{aligned} \varDelta P_v= & {} \text {dare}(A_4^{\top }, C_2^{\top }, Q_{vv}, \varDelta R_v, S_v) \\ \varDelta P_h= & {} \text {dare}(A_1^{\top }, C_1^{\top }, \varDelta Q_{hh}, \varDelta R_h, \varDelta S_h) \end{aligned}$$

in Step 3.
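Algorithm Riccati can be sketched as a plain fixed-point iteration in NumPy. This is an illustrative sketch, not the authors' implementation; it uses the standard discrete-time Riccati sign convention in which the quadratic correction is subtracted, matching MATLAB's dare.

```python
import numpy as np

def riccati_gains(A1, A2, A4, C1, C2, Qhh, Qvv, Sh, Sv, R, tol=1e-12, iters=1000):
    """Iterate the coupled Riccati updates of Step 3 until convergence,
    then form the Kalman gains K1, K2 of (85a)-(85b)."""
    nh, nv = A1.shape[0], A4.shape[0]
    dPh, dPv = np.zeros((nh, nh)), np.zeros((nv, nv))
    for _ in range(iters):
        dQhh = A2 @ dPv @ A2.T + Qhh                      # (81a)
        dSh = A2 @ dPv @ C2.T + Sh                        # (81b)
        dRh = C2 @ dPv @ C2.T + R                         # (81c)
        Mh = A1 @ dPh @ C1.T + dSh
        dPh_new = (A1 @ dPh @ A1.T + dQhh
                   - Mh @ np.linalg.solve(C1 @ dPh @ C1.T + dRh, Mh.T))
        dRv = C1 @ dPh @ C1.T + R                         # (83)
        Mv = A4 @ dPv @ C2.T + Sv
        dPv_new = (A4 @ dPv @ A4.T + Qvv
                   - Mv @ np.linalg.solve(C2 @ dPv @ C2.T + dRv, Mv.T))
        done = max(np.abs(dPh_new - dPh).max(), np.abs(dPv_new - dPv).max()) < tol
        dPh, dPv = dPh_new, dPv_new
        if done:
            break
    Lam = C1 @ dPh @ C1.T + C2 @ dPv @ C2.T + R
    K1 = np.linalg.solve(Lam.T, (A1 @ dPh @ C1.T + A2 @ dPv @ C2.T + Sh).T).T  # (85a)
    K2 = np.linalg.solve(Lam.T, (A4 @ dPv @ C2.T + Sv).T).T                    # (85b)
    return dPh, dPv, K1, K2
```

At convergence, \(\varDelta P_h\) and \(\varDelta P_v\) satisfy the coupled Riccati equations simultaneously, and \(P_h\), \(P_v\) follow from (84a)–(84b).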

Cite this article

Ramos, J.A., Mercère, G. Image modeling based on a 2-D stochastic subspace system identification algorithm. Multidim Syst Sign Process 28, 1133–1165 (2017). https://doi.org/10.1007/s11045-016-0385-4

