Fast communication
Adaptive sigmoidal plant identification using reduced-sensitivity recursive least squares☆
Introduction
Structures comprising a linear filter followed by a nonlinear memoryless sigmoidal function are useful models in many application fields (biology, probability modelling, risk prediction, forecasting, signal processing, electronics and communications). In some cases, the input data do not follow a stationary distribution and adaptive algorithms are needed to adjust the model weights. The most common family of such algorithms is based on gradient descent: the weights are moved in the direction opposite to the gradient of the error surface to find a suitable solution (e.g., nonlinear counterparts of the Least Mean Squares (LMS) algorithm). However, such methods are slow to converge and very sensitive to highly correlated inputs. Other families of algorithms, such as those based on minimizing a Least Squares cost function (e.g., Recursive Least Squares, RLS), show improved characteristics but cannot be used directly due to the nonlinearity in the model. Some solutions have been proposed using piecewise approximations of the sigmoid based on Taylor expansions, and hence they are suboptimal [1], [3], [4], [5], [7], [8]. An improved approach, named Non-Linear RLS (NL-RLS), has been proposed in [6]; since it does not rely on any approximation, it outperforms the aforementioned methods. We propose here a modified Recursive Least Squares algorithm that provides better performance than competing state-of-the-art methods in an adaptive sigmoidal plant identification scenario, as will be shown in the experimental section.
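To make the gradient-descent baseline concrete, the following is a minimal sketch (our own, not from the paper) of an LMS-style update for a filter followed by a tanh sigmoid; the chain rule pushes the gradient through the nonlinearity via the factor (1 − o²), the derivative of tanh:

```python
import numpy as np

def sigmoidal_lms_step(w, x, d, mu=0.1):
    """One gradient-descent (LMS-style) update for a linear filter
    followed by tanh. Model: o = tanh(w^T x), error e = d - o.
    d(e^2)/dw = -2*e*(1 - o^2)*x, so the update scales the usual
    LMS step by the sigmoid derivative. Hypothetical helper names.
    """
    o = np.tanh(w @ x)
    e = d - o
    return w + mu * e * (1.0 - o**2) * x, e

# quick identification demo: learn a 4-tap plant from noiseless data
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])   # unknown plant weights
w = np.zeros(4)
for _ in range(5000):
    x = rng.standard_normal(4)
    d = np.tanh(h @ x)                 # plant output, no noise
    w, e = sigmoidal_lms_step(w, x, d)
```

With white noiseless input the estimate converges toward h, but as the text notes, convergence degrades markedly for correlated inputs, which motivates the RLS-type approach.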
Section snippets
The proposed Reduced-Sensitivity RLS algorithm
The task of plant identification with a sigmoidal function at the output of the filter is depicted in Fig. 1(a). The top branch represents the plant output generation given an input signal x[n] and a nonstationary impulse response h[n]. The signal at the output of the filter is y[n]=hT[n]x[n], where x[n]=[x[n−N+1],…,x[n]]T (N is the number of weights in the filter), the output of the sigmoidal function is o[n]=f(y[n])=(ey[n]−e−y[n])/(ey[n]+e−y[n]), and o[n] is corrupted by additive white Gaussian noise.
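The plant model above can be sketched as follows; note that f(y)=(e^y − e^−y)/(e^y + e^−y) is exactly tanh(y). Function and variable names here are ours, for illustration only:

```python
import numpy as np

def plant_output(h, x_buf, noise_std=0.0, rng=None):
    """Generate one sample of the sigmoidal plant in Fig. 1(a):
    y[n] = h^T x[n], o[n] = tanh(y[n]), plus optional white Gaussian noise.
    x_buf holds the last N input samples [x[n-N+1], ..., x[n]].
    """
    y = h @ x_buf                      # linear filter output y[n]
    o = np.tanh(y)                     # memoryless sigmoid o[n]
    if noise_std > 0.0:
        rng = rng or np.random.default_rng()
        o = o + rng.normal(0.0, noise_std)
    return o

# noiseless example: y = 0.4*1.0 + (-0.2)*0.5 = 0.3, so output is tanh(0.3)
d = plant_output(np.array([0.4, -0.2]), np.array([1.0, 0.5]))
```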
Experiments
We illustrate the performance of the proposed algorithm in a nonlinear and adaptive plant identification task. The plant has been defined as d[n]=f(hT[n]x[n])+v[n], where h[n] is the time-variant impulse response, x[n] is the input vector, v[n] is the plant noise (additive white Gaussian noise with variance σv2), and f(y)=(ey−e−y)/(ey+e−y) is the sigmoidal function. Three situations have been simulated. The first one, representing a stationary situation, uses h[n]=h[0] for n=1,…,N,
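A minimal simulation of the stationary scenario (h[n]=h[0] for all n) could look like the sketch below; the noise variance and seed are assumed values for illustration, since the paper's settings are not given in this snippet:

```python
import numpy as np

def simulate_plant(h0, n_samples, noise_var=1e-3, seed=0):
    """Generate (x, d) pairs for the stationary scenario:
    d[n] = tanh(h0^T x[n]) + v[n], with v[n] white Gaussian of
    variance noise_var (an assumed value, not the paper's).
    """
    rng = np.random.default_rng(seed)
    N = len(h0)
    x = rng.standard_normal(n_samples + N - 1)   # white input signal
    d = np.empty(n_samples)
    for n in range(n_samples):
        x_buf = x[n:n + N]                       # last N input samples
        d[n] = np.tanh(h0 @ x_buf) + rng.normal(0.0, np.sqrt(noise_var))
    return x, d

x, d = simulate_plant(np.array([0.5, 0.25, -0.1]), 200)
```

The nonstationary scenarios would instead let h[n] drift over time before each output sample is generated.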
Conclusions
We have proposed a new version of the RLS algorithm that can be applied when a sigmoidal nonlinearity is present at the output of the linear filter; the new algorithm is named Reduced-Sensitivity RLS (RS-RLS). The complexity of the algorithm is not much larger than that of RLS (same order of magnitude), since it only requires an extra function evaluation and the computation of a weighting value, but the gain in convergence speed and final performance compensates for the extra computation. We have
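For reference, the baseline that RS-RLS extends is the standard exponentially weighted RLS recursion, sketched below. Per the text, RS-RLS adds only one extra sigmoid evaluation and one weighting computation per sample on top of this; that weighting is defined in the paper and is deliberately not reproduced here:

```python
import numpy as np

class RLS:
    """Standard exponentially weighted RLS (linear case).
    lam is the forgetting factor; P approximates the inverse
    input correlation matrix, initialized as delta * I.
    """
    def __init__(self, n_taps, lam=0.99, delta=100.0):
        self.w = np.zeros(n_taps)
        self.P = delta * np.eye(n_taps)
        self.lam = lam

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)        # gain vector
        e = d - self.w @ x                  # a priori error
        self.w += k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

# sanity check on a noiseless linear plant (no sigmoid)
rls = RLS(3)
rng = np.random.default_rng(1)
h = np.array([0.3, -0.2, 0.1])
for _ in range(300):
    x = rng.standard_normal(3)
    rls.update(x, h @ x)
```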
References (9)
- et al., Non-linear RLS-based algorithm for pattern classification, Signal Process. (2006)
- S.C. Douglas, T.H.Y. Meng, Linearized least-squares training of multilayer feedforward neural networks, in: Proceedings...
- et al., Iterative reweighted least-squares design of FIR filters, IEEE Trans. Signal Process. (1994)
- et al., A rapid supervised learning neural network for function interpolation and approximation, IEEE Trans. Neural Networks (1996)
- ☆ This work has been supported by the Spanish Government (Grant TEC2008-02473).