Automatica

Volume 48, Issue 7, July 2012, Pages 1423-1431

Brief paper
Lyapunov-based adaptive state estimation for a class of nonlinear stochastic systems

https://doi.org/10.1016/j.automatica.2012.05.002

Abstract

This paper is concerned with an adaptive state estimation problem for a class of nonlinear stochastic systems with unknown constant parameters. These nonlinear systems have a linear-in-parameter structure, and the nonlinearity is assumed to be bounded in a Lipschitz-like manner. Using stochastic counterparts of Lyapunov stability theory, we present adaptive state and parameter estimators with ultimately exponentially bounded estimator errors in the sense of mean square for both continuous-time and discrete-time nonlinear stochastic systems. Sufficient conditions are given in terms of the solvability of LMIs. Moreover, we also introduce a suboptimal design approach to optimizing the upper bound of the mean-square error of parameter estimation. This suboptimal design procedure is also realized by LMI computations. By a martingale method, we also show that the related Lyapunov function has a non-negative Lyapunov exponent.

Introduction

In this paper, we consider an adaptive state estimation problem for a class of nonlinear stochastic systems with a special linear-in-parameter (affine) structure. For continuous-time systems, we are concerned with the following nonlinear system:

dx(t) = f(x(t))dt + g(x(t))θ dt + B(θ)dW(t)    (1)
dy(t) = h(x(t))dt + D dV(t)    (2)

where θ is a vector of unknown constant parameters and W(t) and V(t) are independent standard Brownian motions. The problem is to design an adaptive state estimator that estimates the state x(t) and the parameter θ based on the continuous observation y(t). Our motivation for investigating this adaptive state estimation problem with this model structure comes from our ongoing work on the control of neurostimulation systems for epileptic seizures, where we deal with electroencephalogram (EEG) signals. In Wendling, Bartolomei, Bellanger, and Chauvel (2002), neural mass models of the form (1)–(2) are used to generate EEG-like signals by adjusting θ as slowly time-varying parameters. Hence a key technical issue is to identify and track θ.

For the state estimation problem of nonlinear stochastic systems without unknown parameters, an observer with exponential ultimate boundedness in mean square was derived in Tarn and Rasis (1976), based on the boundedness results of Zakai (1967) for solutions to stochastic differential equations and of Agniel and Jury (1971) for stochastic difference equations, via a Lyapunov-like method. Observers with the special structure

dx̂(t) = f(x̂(t))dt + L(dy(t) − h(x̂(t))dt)    (3)

for continuous-time systems and

x̂(k+1) = f(x̂(k)) + K(y(k) − h(x̂(k)))    (4)

for discrete-time systems were employed, where f(·) is the deterministic term in the differential or difference equation and does not involve the unknown parameter θ. Here L and K are observer gains. The nonlinearity is assumed to be globally Lipschitz. This method has the advantages of simplicity and robustness; for example, the observer only requires that the noise covariances be bounded. For the general case, a sufficient condition for the existence of the observer was given in terms of the negative definiteness of a matrix involving function derivatives. The optimization issue was also addressed by Rasis (1974). In Yaz and Azemi (1993), nonlinearities were described in terms of Lipschitz-like constraints and a sufficient condition was presented in terms of a Lyapunov equation. For nonlinear deterministic continuous-time systems with linear-in-parameter structures, an adaptive observer was presented in Cho and Rajamani (1997); the sufficient condition involved a linear matrix inequality (LMI), and convergence analysis of the observer was also provided.

In this paper, we employ methods as in Tarn and Rasis (1976), Yaz and Azemi (1993) and Cho and Rajamani (1997) to solve the adaptive state estimation problem for both continuous-time and discrete-time nonlinear stochastic systems. In particular, we adopt the Lipschitz-like approach of Yaz and Azemi (1993) to describe the nonlinearity. This allows us to give sufficient conditions for the boundedness of estimation errors in terms of linear matrix inequalities (LMIs). The state and parameter estimators have the same structure as in (3), (4) given by Tarn and Rasis (1976). Exponentially ultimately bounded adaptive state estimators in mean square are presented for both continuous-time and discrete-time nonlinear systems with linear-in-parameter structures. The adaptive control problem was considered in Deng and Krstić (2001) for nonlinear stochastic systems in a parametric strict-feedback form.


Continuous-time adaptive state estimation

The system under consideration is described by (1)–(2), where x(t) ∈ R^{n×1}, θ ∈ R^{p×1}, y(t) ∈ R^{m×1}, and all matrices involved have compatible dimensions. The problem is to identify the unknown constant parameter θ based on the continuous observation y(t). We assume that the required nonlinear estimator has the form

dx̂(t) = f(x̂(t))dt + g(x̂(t))θ̂(t)dt + L[dy(t) − h(x̂(t))dt]
dθ̂(t) = Γη(x̂(t), θ̂(t))dt + K[dy(t) − h(x̂(t))dt].

Here the term dy(t) − h(x̂(t))dt plays a role similar to the innovation process in Kalman filtering. …

Discrete-time adaptive state and parameter estimation

Consider the discrete-time nonlinear stochastic system

x_{k+1} = f(x_k) + g(x_k)θ + B(θ)w_k
y_k = h(x_k) + D v_k

where x ∈ R^{n×1}, θ ∈ R^{p×1}, y ∈ R^{m×1}. The Gaussian noises w_k, v_k and the initial state x_0 are mutually independent, and w_k, v_k have zero mean and bounded covariances n_w, n_v. The problem is to identify the unknown constant parameter θ based on the observation y_k.
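As a rough illustration of such a recursion, the sketch below runs a discrete-time estimator of the observer-plus-parameter-update form on a hypothetical scalar system; the functions f, g, h and the gains K, Γ are placeholders chosen for illustration, not values obtained from the LMI conditions of this paper:

```python
import numpy as np

# Hypothetical scalar instance: x_{k+1} = f(x_k) + g(x_k)*theta + w_k,
# y_k = h(x_k) + v_k.  f, g, h and the gains are illustrative placeholders.
def f(x): return 0.8 * x
def g(x): return 0.2 * np.cos(x)
def h(x): return x

rng = np.random.default_rng(1)
theta = 0.5                # true unknown constant parameter
K, Gamma = 0.4, 0.1        # state / parameter estimator gains (not LMI-designed)

x, xh, th = 0.3, 0.0, 0.0  # true state, state estimate, parameter estimate
for _ in range(500):
    y = h(x) + 0.05 * rng.standard_normal()              # noisy observation
    e = y - h(xh)                                        # output error (innovation)
    xh, th = f(xh) + g(xh) * th + K * e, th + Gamma * e  # estimator update
    x = f(x) + g(x) * theta + 0.05 * rng.standard_normal()
```

In this toy run the estimates remain bounded; for the general system class, it is the paper's LMI conditions that certify an exponential ultimate bound on the estimation errors in mean square.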

For discrete-time nonlinear systems, we use a quadratic difference-like function defined in Lemma 8 by (A.13) to design estimators. It is known that the …

An illustrative example

Example 4.1

We consider the continuous-time system

dx1(t) = (−0.7x1(t) + x2(t) + (1 + 0.1 sin(x1(t)))θ)dt + 0.1 dW1(t)
dx2(t) = (−0.5x1(t) − x2(t) + 0.2 cos(x2(t))θ)dt + 0.1 dW2(t)
dy(t) = (4x1(t) + 0.5 sin(4x1(t)) + 2x2(t))dt + 0.1 dV(t)

where θ = 0.8. Hence we have B(θ) = 0.1 I_{2×2} and g(x(t)) = [1 + 0.1 sin(x1(t)), 0.2 cos(x2(t))]ᵀ.

In simulations, all parameters in Assumption 2.1 are set as θ = 0.8, γ = ‖B(θ)‖_F = 0.1414, γ0 = 1, γ1 = 0.2, γ2 = γ3 = γ4 = γ5 = 0, γ6 = 1, γ7 = 1.118, Γ = 1, α = 0.01, x0 = 0, θ̂0 = 0,

A = [−0.7  1; −0.5  −1],  C = [4  2].

Note that we use |sin x| ≤ 1 and |sin x − sin y| ≤ |x − y|. …
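The example can be cross-checked numerically with an Euler–Maruyama discretization of the system together with an innovation-driven adaptive estimator of the kind described earlier. In the sketch below, the drift matrix A = [[-0.7, 1], [-0.5, -1]] is an assumption consistent with the A given in the simulation settings, the gains L_gain and K_gain are ad-hoc placeholders rather than the paper's LMI-designed values, and the η-term in the parameter update is taken as zero:

```python
import numpy as np

# Euler-Maruyama simulation of Example 4.1 with a continuous-time adaptive
# estimator of the innovation-driven form.  The gains L_gain, K_gain and the
# drift matrix A below are assumptions for illustration, not LMI-designed values.
rng = np.random.default_rng(0)
theta = 0.8                          # true unknown parameter
dt, steps = 1e-3, 20000              # 20 s of simulated time

A = np.array([[-0.7, 1.0], [-0.5, -1.0]])
def g(x): return np.array([1 + 0.1 * np.sin(x[0]), 0.2 * np.cos(x[1])])
def h(x): return 4 * x[0] + 0.5 * np.sin(4 * x[0]) + 2 * x[1]

L_gain = np.array([0.5, 0.2])        # state-estimator gain (placeholder)
K_gain = 0.2                         # parameter-estimator gain (placeholder)

x, xh, th = np.zeros(2), np.zeros(2), 0.0   # x0 = 0, theta_hat_0 = 0
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal(2)
    dV = np.sqrt(dt) * rng.standard_normal()
    dy = h(x) * dt + 0.1 * dV                  # observation increment
    innov = dy - h(xh) * dt                    # innovation-like term
    x = x + (A @ x + g(x) * theta) * dt + 0.1 * dW
    xh = xh + (A @ xh + g(xh) * th) * dt + L_gain * innov
    th = th + K_gain * innov                   # eta-term taken as zero
```

With these placeholder gains the sketch only demonstrates boundedness of the estimates; the quality of the parameter estimate depends on the gain design and on the excitation provided by the nonlinearity g.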

Conclusion

In this paper, we have considered adaptive state estimation problems for a class of nonlinear stochastic systems with a linear-in-parameter structure based on stochastic counterparts of Lyapunov theory. Ultimately exponentially bounded state and parameter estimators in the sense of mean square have been obtained for both continuous-time and discrete-time nonlinear stochastic systems. The sufficient conditions are described in terms of LMIs. An improvement is that we present a …

Acknowledgments

We thank the anonymous referees and editors for valuable comments that helped to improve the presentation.


References (21)

  • A. Budhiraja, Ergodic properties of the nonlinear filter, Stochastic Processes and Their Applications (2001).
  • A. Poznyak et al., Observer matrix gain optimization for stochastic continuous time nonlinear systems, Systems & Control Letters (2004).
  • X.J. Xie et al., Adaptive state-feedback stabilization of high-order stochastic systems with nonlinear parameterization, Automatica (2009).
  • R.G. Agniel et al., Almost sure boundedness of randomly sampled systems, SIAM Journal on Control (1971).
  • L. Arnold, Stochastic differential equations: theory and applications (1974).
  • Y.M. Cho et al., A systematic approach to adaptive observer synthesis for nonlinear systems, IEEE Transactions on Automatic Control (1997).
  • H. Deng et al., Stabilization of stochastic nonlinear systems driven by noise of unknown covariance, IEEE Transactions on Automatic Control (2001).
  • R.Z. Has’minskii, Stochastic stability of differential equations (1980).
  • I. Kanellakopoulos, A discrete-time adaptive nonlinear system, IEEE Transactions on Automatic Control (1994).
  • F. Kozin, On almost sure asymptotic sample properties of diffusion processes defined by stochastic differential equations, Journal of Mathematics of Kyoto University (1965).
There are more references available in the full text version of this article.


Li Xie was born in 1965, and received the Bachelor degree in electronic engineering (specializing in automatic control), the Master degree in control theory and application, and the Ph.D. degree in control, guidance, and simulation of flight vehicles from Xidian University in 1986, Harbin Engineering University in 1992, and Harbin Institute of Technology in 1996, respectively, all in PR China. He received the Ph.D. degree in electrical engineering from the University of New South Wales, Australia, in 2004. He is currently a professor in the School of Control and Computer Engineering, North China Electric Power University, Beijing. His research interests include estimation and control of stochastic systems, networked control systems, and power systems.

Pramod P. Khargonekar received his university education from the Indian Institute of Technology, Bombay and the University of Florida, Gainesville, FL. After holding faculty positions in Electrical Engineering at the University of Florida and University of Minnesota, he joined the University of Michigan in 1989 as Professor of Electrical Engineering and Computer Science. He became Chairman of the Department of Electrical Engineering and Computer Science in 1997 and also held the position of Claude E. Shannon Professor of Engineering Science. In July 2001, he rejoined the University of Florida and served as Dean of the College of Engineering till July 2009. He is currently Eckis Professor of Electrical and Computer Engineering at the University of Florida. Dr. Khargonekar’s research and teaching interests are centered on theory and applications of systems and control. He is a recipient of the NSF Presidential Young Investigator Award, the American Automatic Control Council’s Donald Eckman Award, the IEEE W. R. G. Baker Prize Award, the George Axelby Best Paper Award, the Hugo Schuck ACC Best Paper Award, the Japan Society for Promotion of Science Fellowship, Distinguished Alumnus Award from the Indian Institute of Technology, Bombay, and Springer Professorship at the University of California, Berkeley. He is a Fellow of IEEE. He is on the list of Highly Cited Researchers from the ISI Web of Science. At the University of Michigan, he received a teaching excellence award from the EECS department, a research excellence award from the College of Engineering, and the Arthur F. Thurnau Professorship. At the University of Minnesota, he received the George Taylor Distinguished Research Award from the Institute of Technology.

The material in this paper was partially presented at the 2010 American Control Conference (ACC 2010), June 30–July 2, 2010, Baltimore, Maryland, USA. This paper was recommended for publication in revised form by Associate Editor Shuzhi Sam Ge under the direction of Editor Miroslav Krstic. The work was completed while the first author was with the Department of ECE, University of Florida, in 2009.
