
Signal Processing

Volume 91, Issue 5, May 2011, Pages 1182-1189

Least squares based and gradient based iterative identification for Wiener nonlinear systems

https://doi.org/10.1016/j.sigpro.2010.11.004

Abstract

This paper derives a least squares-based and a gradient-based iterative identification algorithm for Wiener nonlinear systems. These methods decompose one bilinear cost function into two linear cost functions and estimate the parameters of the Wiener system directly, without the re-parameterization that generates redundant estimates. The simulation results confirm that both proposed algorithms are valid and that the least squares-based iterative algorithm converges faster than the gradient-based iterative algorithm.

Introduction

Nonlinear block-oriented models, such as the Wiener and Hammerstein models, can be used to approximate many nonlinear dynamic processes. A Wiener model consists of a linear dynamic block followed by a static nonlinear function, whereas a Hammerstein model places the nonlinear part before the linear dynamic one [1], [2], [3], [4], [5].

The identification of Wiener systems has attracted great attention. Most existing contributions assume that the nonlinear part of the Wiener model is a linear combination or a piecewise-linear function [6], [7], or has an inverse function over the operating range of interest [8]. Hu and Chen [9] studied Wiener models with a non-invertible nonlinear part; Kozek and Sinanović [10] used optimal local linear models for Wiener model identification; Figueroa et al. [11] proposed a simultaneous approach for Wiener model identification; Hagenblad et al. [12] derived a maximum likelihood method to identify Wiener models.

In the field of system modeling and control, iterative identification methods are commonly used to estimate the parameters of linear and nonlinear systems whose information vectors contain unknown variables (unmeasured variables or unknown noise terms) [6], [7], [13], [14], [15]. Kapetanios [16] gave a simple iterative idea for ARMA and VARMA models. Vörös [6], [7] used iterative approaches to identify the parameters of Wiener models. Bai [17] proposed an iterative solution of a bilinear equation system, a method based on the hierarchical identification principle [18], [19], [20]. Ding and Chen [1] developed an iterative and a recursive least squares algorithm for Hammerstein nonlinear ARMAX systems. Their approaches require estimating more parameters than the Hammerstein system contains, since re-parameterization increases the number of parameters to be identified, leading to many redundant estimates.

On the basis of the work in [21], this paper derives a least squares-based and a gradient-based iterative algorithm by introducing two cost functions and by using the hierarchical identification principle in [18]. These iterative methods avoid re-parameterizing the linear and nonlinear parts of the Wiener system, which would generate redundant estimates; they estimate the parameters directly, increasing computational efficiency. The iterative algorithms use all the measured input–output data at each iteration, and thus can produce highly accurate parameter estimates. Ding et al. [22], [23], [24], [25], [26], [27], [28], [29] presented several novel multi-innovation identification methods which can be applied to the Wiener nonlinear systems in this paper.

Briefly, the paper is organized as follows. Section 2 formulates the identification problem for the Wiener nonlinear systems. Sections 3 and 4 derive the least squares-based and the gradient-based iterative algorithms for the Wiener systems, respectively. Section 5 provides an illustrative example to show the effectiveness of the proposed algorithms. Finally, we offer some concluding remarks in Section 6.


Problem description

Let us introduce some notation first. The symbol $I_n$ stands for an identity matrix of order $n$, and $I$ is an identity matrix of appropriate size; the superscript $T$ denotes the matrix transpose; $\mathbf{1}_n$ represents an $n$-dimensional column vector whose elements are all 1; the norm of a matrix $X$ is defined by $\|X\|^2 := \mathrm{tr}[XX^T]$; $\lambda_{\max}[X]$ represents the maximum eigenvalue of the square matrix $X$.
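As a small aside (not part of the paper), the matrix norm defined above is the squared Frobenius norm, i.e. the sum of squared entries; a minimal NumPy check of $\|X\|^2 = \mathrm{tr}[XX^T]$:

```python
import numpy as np

# Check the identity tr[X X^T] = sum of squared entries (squared Frobenius norm).
X = np.array([[1.0, 2.0], [3.0, 4.0]])
norm_sq = np.trace(X @ X.T)
print(norm_sq)          # 30.0 = 1 + 4 + 9 + 16
print(np.sum(X ** 2))   # 30.0, the same value
```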

Referring to the Wiener models in [8], [11] and the Hammerstein–Wiener models in [30], [31], consider the following Wiener system

The least squares-based iterative algorithm

Introducing two cost functions:
$$J_1(\theta) = J(a, b, f, \hat d_{k-1}) = \sum_{t=1}^{L}\bigl[y(t) - a^T G(t)\hat d_{k-1} - \varphi^T(t)b - \psi^T(t)f\bigr]^2,$$
$$J_2(\vartheta) = J(\hat a_k, \hat b_k, \hat f_k, d) = \sum_{t=1}^{L}\bigl[y(t) - \hat a_k^T G(t)d - \varphi^T(t)\hat b_k - \psi^T(t)\hat f_k\bigr]^2.$$
Define the stacked output vector $Y(L)$, input information matrix $\Phi(L)$, noise information matrix $\Psi(L)$, and information matrices $\Upsilon(\hat d_{k-1}, L)$ and $\Omega(\hat a_k, L)$ as
$$Y(L) := \begin{bmatrix} y(1) \\ y(2) \\ \vdots \\ y(L) \end{bmatrix} \in \mathbb{R}^{L}, \quad \Phi(L) := \begin{bmatrix} \varphi^T(1) \\ \varphi^T(2) \\ \vdots \\ \varphi^T(L) \end{bmatrix} \in \mathbb{R}^{L\times n}, \quad \Psi(L) := \begin{bmatrix} \psi^T(1) \\ \psi^T(2) \\ \vdots \\ \psi^T(L) \end{bmatrix} \in \mathbb{R}^{L\times m},$$
$$\Upsilon(\hat d_{k-1}, L) := \begin{bmatrix} \hat d_{k-1}^T G^T(1) \\ \hat d_{k-1}^T G^T(2) \\ \vdots \\ \hat d_{k-1}^T G^T(L) \end{bmatrix} \in \mathbb{R}^{L\times p}, \quad \Omega(\hat a_k, L) := \begin{bmatrix} \hat a_k^T G(1) \\ \hat a_k^T G(2) \\ \vdots \\ \hat a_k^T G(L) \end{bmatrix} \in \mathbb{R}^{L\times q}.$$
Hence, $J_1$ and $J_2$ can be
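The alternating idea behind the two cost functions — fix $\hat d_{k-1}$ and solve the linear problem $J_1$ for $(a, b, f)$, then fix $(\hat a_k, \hat b_k, \hat f_k)$ and solve $J_2$ for $d$ — can be sketched in NumPy. This is our own illustrative sketch, not the paper's algorithm verbatim: the function name `ls_iterative`, the `(L, p, q)` array layout for the stack of $G(t)$, and the unit-norm/sign normalization of $\hat d$ (one way to resolve the bilinear scale ambiguity in $a^T G(t) d$) are assumptions.

```python
import numpy as np

def ls_iterative(Y, G, Phi, Psi, iters=50):
    """Alternating least squares sketch for y(t) = a^T G(t) d + phi^T(t) b + psi^T(t) f.

    Y: (L,) stacked outputs; G: (L, p, q) stack of G(t); Phi: (L, n); Psi: (L, m).
    Layout and normalization of d_hat are illustrative assumptions.
    """
    L, p, q = G.shape
    n, m = Phi.shape[1], Psi.shape[1]
    d_hat = np.ones(q) / np.sqrt(q)              # initial guess with unit norm
    for _ in range(iters):
        # Rows of Upsilon(d_hat, L) are d_hat^T G^T(t) = (G(t) d_hat)^T.
        Ups = G @ d_hat                          # (L, p)
        Xi = np.hstack([Ups, Phi, Psi])          # Xi(d_hat, L) = [Upsilon, Phi, Psi]
        theta, *_ = np.linalg.lstsq(Xi, Y, rcond=None)
        a_hat, b_hat, f_hat = theta[:p], theta[p:p + n], theta[p + n:]
        # Rows of Omega(a_hat, L) are a_hat^T G(t).
        Om = np.einsum('tpq,p->tq', G, a_hat)    # (L, q)
        d_hat, *_ = np.linalg.lstsq(Om, Y - Phi @ b_hat - Psi @ f_hat, rcond=None)
        # Resolve the bilinear scale/sign ambiguity in (a, d) by normalizing d_hat.
        s = np.linalg.norm(d_hat) * (1.0 if d_hat[0] >= 0 else -1.0)
        d_hat, a_hat = d_hat / s, a_hat * s
    return a_hat, b_hat, f_hat, d_hat
```

On noiseless synthetic data this alternation typically recovers $(b, f)$ exactly and $(a, d)$ up to the normalization convention, since each half-step solves an exact linear least squares problem.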

The gradient-based iterative algorithm

This section derives the gradient-based iterative algorithm for the Wiener models. For the optimization problems in (11), (12), minimizing $J_1(\theta)$ and $J_2(\vartheta)$ by the negative gradient search leads to the iterative algorithm for computing $\hat\theta_k$ and $\hat\vartheta_k$ as follows:
$$\hat\theta_k = \hat\theta_{k-1} - \frac{\mu_1(k)}{2}\,\mathrm{grad}[J_1(\hat\theta_{k-1})] = \hat\theta_{k-1} + \mu_1(k)\,[\Upsilon(\hat d_{k-1},L), \Phi(L), \Psi(L)]^T\bigl\{Y(L) - [\Upsilon(\hat d_{k-1},L), \Phi(L), \Psi(L)]\,\hat\theta_{k-1}\bigr\} = \hat\theta_{k-1} + \mu_1(k)\,\Xi^T(\hat d_{k-1},L)\bigl[Y(L) - \Xi(\hat d_{k-1},L)\,\hat\theta_{k-1}\bigr],$$
$$\hat\vartheta_k = \hat\vartheta_{k-1} - \frac{\mu_2(k)}{2}\,\mathrm{grad}[J_2(\hat\vartheta_{k-1})] = \hat\vartheta_{k-1} + \mu_2(k)\,\Omega^T(\hat a_k,L)\bigl[Y(L) - \Omega(\hat a_k,L)\,\hat d_{k-1} - \Phi(L)\hat b_k - \Psi(L)\hat f_k\bigr],$$
where $\Xi(\hat d_{k-1},L) := [\Upsilon(\hat d_{k-1},L), \Phi(L), \Psi(L)]$.
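A hedged NumPy sketch of these two gradient updates, assuming the same stacked-data layout as in the least squares sketch; the step sizes $\mu_i(k) = 1/\lambda_{\max}[\cdot]$ below are one conventional choice satisfying the usual convergence condition $0 < \mu < 2/\lambda_{\max}$, and the function name `gradient_iterative` and its initial values are our own illustrative assumptions:

```python
import numpy as np

def gradient_iterative(Y, G, Phi, Psi, iters=3000):
    """Negative-gradient sketch of the two updates above.

    Illustrative layout: Y: (L,), G: (L, p, q), Phi: (L, n), Psi: (L, m).
    """
    L, p, q = G.shape
    n, m = Phi.shape[1], Psi.shape[1]
    theta = np.full(p + n + m, 0.1)              # [a; b; f], arbitrary small start
    d_hat = np.ones(q) / np.sqrt(q)
    for _ in range(iters):
        Ups = G @ d_hat                          # rows: (G(t) d_hat)^T
        Xi = np.hstack([Ups, Phi, Psi])          # Xi(d_hat, L)
        # Step size 1/lambda_max satisfies the condition 0 < mu < 2/lambda_max.
        mu1 = 1.0 / np.linalg.eigvalsh(Xi.T @ Xi)[-1]
        theta = theta + mu1 * Xi.T @ (Y - Xi @ theta)
        a_hat, b_hat, f_hat = theta[:p], theta[p:p + n], theta[p + n:]
        Om = np.einsum('tpq,p->tq', G, a_hat)    # rows: a_hat^T G(t)
        mu2 = 1.0 / np.linalg.eigvalsh(Om.T @ Om)[-1]
        d_hat = d_hat + mu2 * Om.T @ (Y - Om @ d_hat - Phi @ b_hat - Psi @ f_hat)
    return theta, d_hat
```

Each half-step is one gradient descent step on its own quadratic cost, so the overall squared residual is non-increasing; in line with the paper's conclusion, this version typically needs far more iterations than the least squares-based one.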

Example

Consider the following nonlinear system with colored noise:
$$y(t) = \sum_{i=1}^{2} a_i\bigl[d_1 g_1(y(t-i)) + d_2 g_2(y(t-i)) + d_3 g_3(y(t-i))\bigr] + \sum_{j=1}^{2} b_j u(t-j) + f v(t-1) + v(t),$$
$$g_1(y(t-i)) = y(t-i), \quad g_2(y(t-i)) = y^2(t-i), \quad g_3(y(t-i)) = y^3(t-i),$$
$$\theta = [a_1, a_2, b_1, b_2, f]^T = [0.25, 0.28, 0.30, 1.00, 0.05]^T,$$
$$\vartheta = [d_1, d_2, d_3]^T = [0.80, 0.50, 0.3317]^T,$$
$$\Theta = [\theta^T, \vartheta^T]^T = [0.25, 0.28, 0.30, 1.00, 0.05, 0.80, 0.50, 0.3317]^T.$$
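For illustration only, this example system can be simulated recursively from the difference equation above. The signal amplitudes `u_std` and `v_std` below are our own small illustrative choices (keeping the cubic term tame for numerical stability), not the paper's simulation settings:

```python
import numpy as np

def simulate_wiener_example(T=500, u_std=0.1, v_std=0.01, seed=0):
    """Simulate the example system; u_std/v_std are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    a = [0.25, 0.28]
    b = [0.30, 1.00]
    f = 0.05
    d = [0.80, 0.50, 0.3317]
    u = u_std * rng.standard_normal(T)
    v = v_std * rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(2, T):
        # sum_{i=1}^{2} a_i [d1 y(t-i) + d2 y(t-i)^2 + d3 y(t-i)^3]
        nl = sum(a[i] * (d[0] * y[t - 1 - i]
                         + d[1] * y[t - 1 - i] ** 2
                         + d[2] * y[t - 1 - i] ** 3) for i in range(2))
        y[t] = nl + b[0] * u[t - 1] + b[1] * u[t - 2] + f * v[t - 1] + v[t]
    return u, v, y
```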

In simulation, the input {u(t)} is taken as a persistent excitation signal sequence with zero mean and unit variance, and {v(t)} as a white noise sequence with

Conclusions

A least squares-based and a gradient-based iterative algorithm are developed for Wiener nonlinear systems using the hierarchical identification principle. Both proposed iterative algorithms give satisfactory identification accuracy; the least squares-based iterative algorithm has faster convergence rates than the gradient-based iterative algorithm, but requires computing a matrix inversion. Although the algorithms are presented for the Wiener models, the basic idea can also be



This research was supported by the Shandong Province Higher Educational Science and Technology Program (J10LG12), the Shandong Provincial Natural Science Foundation (ZR2010FM024) and the China Postdoctoral Science Foundation (20100471493).
