Analysis of the asymptotic properties of the MOESP type of subspace algorithms☆
Introduction
Subspace algorithms are used for the estimation of linear, time-invariant, finite-dimensional, discrete-time, state-space systems. They are an alternative to the more classical maximum likelihood and prediction error methods. The main advantages of subspace algorithms are their conceptual simplicity and their numerical properties. The central observation underlying these algorithms is that the predictions of a time series from the whole past of the outputs and possibly the whole series of observed exogenous inputs, for different time horizons, are a function of the state vector and the future of the exogenous inputs: under appropriate assumptions on the noise and the data generating process, every optimal (in the least-squares sense) predictor of the future of the process based on the entire past of the output process and the whole input process is a linear function of the state and the future of the exogenous inputs. This fact can be used either for the estimation of the state (cf. Larimore, 1983; Peternell, Scherrer, & Deistler, 1996) or for the estimation of the linear mapping attaching the predictions to the state vectors and the future of the exogenous inputs (cf. Van Overschee & De Moor, 1994, 1996; Verhaegen, 1994). The statistical properties of the first type of algorithm have been clarified to a large extent by Deistler, Peternell and Scherrer (1995), Peternell et al. (1996), Bauer, Deistler and Scherrer (1999) and Bauer (1998). Within the second type of algorithms, the MOESP class is very popular. MOESP has been developed by Verhaegen and coworkers in a series of papers (Verhaegen & Dewilde, 1992a, 1992b, 1993; Verhaegen, 1994), in which the numerical properties of these algorithms have also been investigated thoroughly. The consistency of this approach has been investigated by Jansson and Wahlberg (1997, 1998).
The main conclusion from these papers is that, in general, it is not enough to impose persistence-of-excitation-type conditions on the exogenous inputs in order to guarantee consistency. However, there are some special cases in which such conditions suffice (see Jansson & Wahlberg, 1998). Asymptotic normality of the estimates of the poles of the transfer function has been established in Viberg, Ottersten, Wahlberg and Ljung (1993). In the present paper the asymptotic properties of the subspace estimates are considered under various conditions on the exogenous inputs. The analysis centers on conditions ensuring consistency of the approach in generic situations, and on asymptotic normality of the system matrix estimates.
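To make the persistence-of-excitation condition concrete, the following sketch checks empirically whether an input sequence is persistently exciting of a given order, i.e. whether the sample covariance matrix of the stacked input vector is nonsingular. This is an illustrative check, not an algorithm from the paper; the function name, tolerance and test signals are assumptions.

```python
import numpy as np

def is_persistently_exciting(u, order, tol=1e-8):
    """Empirical persistence-of-excitation check of a given order:
    the sample covariance of the stacked input vector must have full rank."""
    T, m = u.shape
    N = T - order + 1
    # Block Hankel matrix of the inputs, shape (order*m, N).
    U = np.vstack([u[i:i + N].T for i in range(order)])
    cov = U @ U.T / N
    return np.linalg.matrix_rank(cov, tol=tol) == order * m

rng = np.random.default_rng(2)
white = rng.standard_normal((1000, 1))            # white noise: PE of any order
sine = np.sin(0.3 * np.arange(1000))[:, None]     # single sinusoid: PE of order 2 only
print(is_persistently_exciting(white, 6))  # True
print(is_persistently_exciting(sine, 6))   # False
```

The single sinusoid illustrates why such rank conditions matter: it excites only a two-dimensional subspace, so identification methods requiring richer excitation fail on it.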
The paper is organized as follows: Section 2 introduces the model class used for identification and presents some standard assumptions. Section 3 presents the class of algorithms considered. Section 4 then contains the main results of this paper, namely consistency and asymptotic normality of the system matrix estimates. Section 5 presents some numerical examples and finally Section 6 concludes the paper.
Throughout the paper the following notation will be used: bold face symbols are used for matrices and vectors, and lower case Latin and Greek symbols are used for scalars. As usual, → denotes convergence of deterministic quantities, → a.s. stands for almost sure convergence of stochastic quantities, and →_d denotes convergence in distribution. Also a notation for sample covariances, where T denotes the sample size, is introduced; here the initial conditions are such that the corresponding equality holds for the relevant time indices, where α and β are integers to be specified in the following section. Finally, f_n = o(g_n) means f_n/g_n → 0.
Model set
In this paper the model class is restricted to linear, finite-dimensional, discrete-time, time-invariant, state-space systems of the form x_{t+1} = Ax_t + Bu_t + Kε_t, y_t = Cx_t + Du_t + ε_t, where y_t is the s-dimensional observed output and ε_t denotes the s-dimensional white noise with zero mean and covariance matrix equal to the identity. u_t denotes the m-dimensional exogenous input series, which is assumed to be independent of the noise in an appropriate sense to be defined below. Finally, denotes
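As an illustration, a system of this model class can be simulated as follows. The particular matrices, dimensions and the innovations-form structure used here are illustrative choices consistent with the description above, not an example taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative system in innovations form (n = 2 states, m = 1 input, s = 1 output):
#   x_{t+1} = A x_t + B u_t + K e_t,   y_t = C x_t + D u_t + e_t
A = np.array([[0.7, 0.3], [0.0, 0.5]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
K = np.array([[0.5], [0.3]])

def simulate(T):
    """Simulate T samples; e_t is white noise with unit covariance,
    u_t is a white exogenous input independent of the noise."""
    u = rng.standard_normal((T, 1))
    e = rng.standard_normal((T, 1))
    x = np.zeros(2)
    y = np.zeros((T, 1))
    for t in range(T):
        y[t] = C @ x + D @ u[t] + e[t]
        x = A @ x + B @ u[t] + K @ e[t]
    return y, u

y, u = simulate(1000)
print(y.shape, u.shape)   # (1000, 1) (1000, 1)
```

Note that A has all eigenvalues inside the unit circle, so the simulated system is stable, in line with the standard assumptions of the model set.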
The algorithms
In this section a brief presentation of the algorithms considered in this paper will be given. The main fact that is used by subspace algorithms can be formulated as follows: Let be the vector of the stacked (finite) past of the process and let be the vector of the stacked (finite) future of the output process. Define and analogously from , and let . In what follows, it is assumed that α>n and β≥n.
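The projection and SVD steps common to MOESP-type algorithms can be sketched as follows. This is a simplified, hypothetical implementation: the function names, the noise-free test system and all dimensions are illustrative, and the instrumental-variable (PO-MOESP) variants discussed in the paper are needed for consistency under general noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_hankel(z, alpha):
    """Stack alpha block rows of the series z (T x dim) into a block Hankel matrix."""
    T, dim = z.shape
    N = T - alpha + 1
    return np.vstack([z[i:i + N].T for i in range(alpha)])   # (alpha*dim, N)

def moesp_ac(y, u, alpha, n):
    """Core projection/SVD step of a basic MOESP-type algorithm (sketch):
    estimate A and C, up to a similarity transform, from input/output data.
    alpha > n is the number of block rows, n the assumed state dimension."""
    s = y.shape[1]
    U = block_hankel(u, alpha)
    Y = block_hankel(y, alpha)
    # LQ decomposition of the stacked data matrix [U; Y] = L Q',
    # computed via the QR decomposition of its transpose.
    _, r = np.linalg.qr(np.vstack([U, Y]).T)
    L = r.T
    L22 = L[U.shape[0]:, U.shape[0]:]        # output part orthogonal to the inputs
    # The dominant left singular vectors of L22 estimate the extended
    # observability matrix Gamma = [C; CA; ...; CA^(alpha-1)].
    Uc, _, _ = np.linalg.svd(L22)
    Gamma = Uc[:, :n]
    C_hat = Gamma[:s, :]
    # Shift invariance of Gamma yields A.
    A_hat = np.linalg.pinv(Gamma[:-s, :]) @ Gamma[s:, :]
    return A_hat, C_hat

# Noise-free illustrative system with poles 0.7 and 0.5.
A = np.array([[0.7, 0.3], [0.0, 0.5]]); B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]]); D = np.array([[0.0]])
T = 500
u = rng.standard_normal((T, 1))
x = np.zeros(2); y = np.zeros((T, 1))
for t in range(T):
    y[t] = C @ x + D @ u[t]
    x = A @ x + B @ u[t]
A_hat, C_hat = moesp_ac(y, u, alpha=6, n=2)
print(np.sort(np.linalg.eigvals(A_hat).real))   # approximately [0.5, 0.7]
```

On noise-free data the estimated A is similar to the true A, so the poles are recovered exactly up to numerical error; the estimates are only defined up to a change of state-space basis, which is why the comparison is made through the eigenvalues.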
Asymptotic properties
The first part of this section will focus on the question of consistency of the estimates. There will be two different concepts concerning the consistency, depending on whether the estimate of the transfer function is concerned, or whether the convergence of the system matrix estimates is investigated. From the description of the algorithm it can be seen that the system matrix estimates are a nonlinear function of the sample covariances of the joint process up to lag α+β−1. Up to
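The dependence on sample covariances can be made concrete: the system matrix estimates are functions of the sample covariances of the joint input/output process up to lag α+β−1. A minimal sketch (the dimensions, names and the white test process are illustrative):

```python
import numpy as np

def sample_covariances(z, max_lag):
    """Sample covariances R_hat(j) = (1/T) * sum_t z_{t+j} z_t' for j = 0..max_lag."""
    T = z.shape[0]
    return [(z[j:].T @ z[:T - j]) / T for j in range(max_lag + 1)]

rng = np.random.default_rng(1)
z = rng.standard_normal((1000, 3))    # stand-in for the joint process (u_t', y_t')'
alpha, beta = 4, 4
R = sample_covariances(z, alpha + beta - 1)
print(len(R), R[0].shape)   # 8 (3, 3)
```

Since the estimates depend on the data only through finitely many such covariances, their asymptotic behaviour can be deduced from limit theorems for these covariance estimates.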
Numerical examples
In the previous section, the asymptotic normality of the MOESP algorithm has been derived. In Theorem 13 the variance of the limiting normal distribution has been denoted with . As has been stated already, depends on the covariance sequence of the inputs, the choice of the weighting matrices and the choice of the indices . The theorem also shows that can be calculated from the knowledge of the covariances of the covariance estimates of the joint process . This merely
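The role of the covariances of the covariance estimates can be illustrated with a small Monte Carlo experiment. The AR(1) process and all constants below are illustrative assumptions, not the paper's examples: the variance of the root-T-scaled covariance estimation error settles near a finite limiting value as T grows, which is what a normal limit theorem for covariance estimates predicts.

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1(T, a=0.7):
    """Simulate a scalar AR(1) process driven by unit-variance white noise."""
    e = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = a * y[t - 1] + e[t]
    return y

# Monte Carlo: the variance of sqrt(T) * (R_hat(0) - R(0)) stabilizes as T
# grows, in line with asymptotic normality of the covariance estimates.
a = 0.7
R0 = 1.0 / (1.0 - a ** 2)          # true variance of the stationary AR(1)
results = {}
for T in (500, 2000):
    est = [np.sqrt(T) * (np.mean(ar1(T) ** 2) - R0) for _ in range(300)]
    results[T] = np.var(est)
    print(T, round(float(results[T]), 1))
```

For a Gaussian AR(1) the limiting variance of this quantity is 2·R(0)²·(1+a²)/(1−a²), so the printed values should hover around that constant for both sample sizes.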
Conclusions
In this paper the asymptotic performance of a special class of subspace algorithms has been investigated. The estimate of the transfer function from the exogenous inputs to the outputs has been shown to be a.s. consistent for a generic set of linear systems. The results in Jansson and Wahlberg (1997) show that this actually is the best result that can be expected. Furthermore, for a smaller generic set also the consistency for the system matrices has been shown, as well as asymptotic normality
Acknowledgements
Support by the Austrian ‘Fonds zur Förderung der wissenschaftlichen Forschung’ Projekt P11213-MAT, the foundation BLANCEFLOR Boncompagni-Ludovisi, née Bildt, and the Swedish Foundation for International Cooperation in Research and Higher Education is gratefully acknowledged.
References
- Bauer, D., Deistler, M., & Scherrer, W. (1999). Consistency and asymptotic normality of some subspace algorithms for systems without observed inputs. Automatica.
- Deistler, M., Peternell, K., & Scherrer, W. (1995). Consistency and relative efficiency of subspace methods. Automatica.
- Jansson, M., & Wahlberg, B. (1996). A linear regression approach to state-space subspace system identification. Signal Processing.
- Jansson, M., & Wahlberg, B. (1998). On consistency of subspace methods for system identification. Automatica.
- Peternell, K., Scherrer, W., & Deistler, M. (1996). Statistical analysis of novel subspace identification methods. Signal Processing.
- Van Overschee, P., & De Moor, B. (1994). N4SID: Subspace algorithms for the identification of combined deterministic-stochastic systems. Automatica.
- Verhaegen, M. (1994). Identification of the deterministic part of MIMO state space models given in innovations form from input–output data. Automatica.
- Viberg, M. (1995). Subspace-based methods for the identification of linear time-invariant systems. Automatica.
- Viberg, M., Wahlberg, B., & Ottersten, B. (1997). Analysis of state space system identification methods based on instrumental variables and subspace fitting. Automatica.
- Anderson, T. W. (1971). The statistical analysis of time series. New York:...
- Anderson, T. W. (1963). Asymptotic theory for principal component analysis. Annals of Mathematical Statistics.
Dietmar Bauer was born in St. Pölten, Austria, in 1972. He received his masters and Ph.D. degrees in Applied Mathematics from the Technical University of Vienna in 1995 and 1998, respectively. From 1995 until 1998 he was with the Institute for Econometrics, Operations Research and System Theory, Technical University of Vienna. Currently he is visiting the Department of Electrical and Computer Engineering, University of Newcastle, Australia. His research interests include system identification, in particular subspace algorithms and the parametrisation of linear systems, and economic applications of time series analysis. For a recent photograph of Dietmar Bauer please refer to Automatica 35(7) 1243–1254.
Magnus Jansson was born in Enköping, Sweden, in 1968. He received the Master of Science, Technical Licentiate, and Ph.D. degrees in electrical engineering from the Royal Institute of Technology (KTH), Stockholm, Sweden, in 1992, 1995 and 1997, respectively. Beginning in September 1998, he spent one year at the Department of Electrical and Computer Engineering, University of Minnesota, USA. He is currently a Research Associate at the Department of Signals, Sensors and Systems, Royal Institute of Technology.
His research interests include sensor array signal processing, time series analysis, and system identification.
For a recent photograph of Magnus Jansson please refer to Automatica 34(12) 1507–1519.
☆ This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor B. Ninness under the direction of Editor T. Söderström.
1 On leave from S3-Automatic Control, Royal Institute of Technology (KTH), Stockholm, Sweden.