Elsevier

Signal Processing

Volume 91, Issue 4, April 2011, Pages 1066-1070

Fast communication
Adaptive sigmoidal plant identification using reduced sensitivity recursive least squares

https://doi.org/10.1016/j.sigpro.2010.09.017

Abstract

Logistic models, comprising a linear filter followed by a memoryless sigmoidal nonlinearity, are often found in practice in many fields (e.g., biology, probability modelling, risk prediction, forecasting, signal processing, electronics and communications), and in many situations a real-time response is needed. The online algorithms used to update the filter coefficients usually rely on gradient descent (e.g., nonlinear counterparts of the Least Mean Squares algorithm). Other algorithms, such as Recursive Least Squares, although they promise improved characteristics, cannot be used directly because of the nonlinearity in the model. We propose here a modified Recursive Least Squares algorithm that provides better performance than competing state-of-the-art methods in an adaptive sigmoidal plant identification scenario.

Introduction

Structures comprising a linear filter followed by a memoryless sigmoidal nonlinearity are useful models in many application fields (biology, probability modelling, risk prediction, forecasting, signal processing, electronics and communications). In some cases the input data do not follow a stationary distribution, and adaptive algorithms are needed to adjust the model weights. The most common family of such algorithms is based on gradient descent: the weights are moved in the direction opposite to the gradient of the error surface to find a suitable solution (e.g., nonlinear counterparts of the Least Mean Squares (LMS) algorithm). However, such methods are slow to converge and very sensitive to highly correlated inputs. Other families of algorithms, such as those minimizing a Least Squares cost function (e.g., Recursive Least Squares, RLS), although showing improved characteristics, cannot be used directly because of the nonlinearity in the model. Some solutions have been proposed using piecewise approximations of the sigmoid based on Taylor expansions, and hence they are suboptimal [1], [3], [4], [5], [7], [8]. An improved approach, named Non-Linear RLS (NL-RLS), has been proposed in [6]; since it does not rely on any approximation, it outperforms the aforementioned methods. We propose here a modified Recursive Least Squares algorithm that provides better performance than competing state-of-the-art methods in an adaptive sigmoidal plant identification scenario, as will be shown in the experimental section.
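The gradient-descent family mentioned above can be sketched as follows for a tanh output nonlinearity. This is an illustrative baseline of the LMS-counterpart type, not the RS-RLS algorithm of this paper; the function name, step size mu and interface are our own choices:

```python
import numpy as np

def nl_lms_step(w, x, d, mu=0.05):
    """One gradient-descent (nonlinear LMS-style) update for a linear
    filter followed by a tanh nonlinearity.

    w: current weight vector; x: input vector; d: desired output.
    Returns the updated weights and the instantaneous error.
    """
    y = w @ x                 # linear filter output y[n]
    o = np.tanh(y)            # sigmoidal output o[n]
    e = d - o                 # output error e[n]
    # Gradient step on 0.5*e^2, using d tanh(y)/dy = 1 - tanh(y)^2
    w_new = w + mu * e * (1.0 - o ** 2) * x
    return w_new, e
```

The derivative factor (1 − o²) vanishes when the sigmoid saturates, which is one reason this family converges slowly; the paper's reduced-sensitivity idea addresses the same nonlinearity from the least-squares side.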

Section snippets

The proposed Reduced-Sensitivity RLS algorithm

The task of plant identification with a sigmoidal function at the output of the filter is depicted in Fig. 1(a). The top branch represents the plant output generation given an input signal x[n] and a nonstationary impulse response h[n]. The signal at the output of the filter is y[n] = h^T[n]x[n], where x[n] = [x(n−N+1), …, x(n)]^T (N is the number of weights in the filter); the output of the sigmoidal function is o[n] = f(y[n]) = (e^{y[n]} − e^{−y[n]})/(e^{y[n]} + e^{−y[n]}), i.e., the hyperbolic tangent, and o[n] is corrupted by additive white…
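The plant branch just described can be sketched directly from the definitions above (windowed input vector, tanh sigmoid, additive white Gaussian noise); the function name and interface are illustrative, not from the paper:

```python
import numpy as np

def sigmoidal_plant(h, x_sig, noise_var=2.5e-3, rng=None):
    """Generate d[n] = f(h^T x[n]) + v[n] for the tanh sigmoid f,
    with x[n] = [x(n-N+1), ..., x(n)]^T taken from the input signal.
    (Illustrative sketch; names and interface are our own.)
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(h)
    d = np.empty(len(x_sig) - N + 1)
    for k, n in enumerate(range(N - 1, len(x_sig))):
        x_n = x_sig[n - N + 1:n + 1]      # input window x[n]
        y = h @ x_n                       # linear filter output y[n]
        v = np.sqrt(noise_var) * rng.standard_normal()  # AWGN v[n]
        d[k] = np.tanh(y) + v             # sigmoid output plus noise
    return d
```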

Experiments

We illustrate the performance of the proposed algorithm in a nonlinear and adaptive plant identification task. The plant has been defined as d[n] = f(h^T[n]x[n]) + v[n], where h[n] is the time-variant impulse response, x[n] is the input vector, v[n] is the plant noise (additive white Gaussian noise with variance σ_v^2 = 2.5×10^−3), and f(y) = (e^y − e^−y)/(e^y + e^−y) is the sigmoidal function. Three situations have been simulated. The first one, representing a stationary situation, uses h[n] = h[0] for n = 1, …, N,…

Conclusions

We have proposed a new version of the RLS algorithm that can be applied when a sigmoidal nonlinearity is present at the output of the linear filter; the new algorithm is called Reduced-Sensitivity RLS (RS-RLS). Its complexity is not much larger than that of RLS (same order of magnitude), since it only requires an extra function evaluation and the computation of a weighting value, but the gain in convergence speed and final performance compensates for the extra computation. We have…

References (9)

  • E. Soria-Olivas et al., Non-linear RLS-based algorithm for pattern classification, Signal Process. (2006)
  • S.C. Douglas, T.H.Y. Meng, Linearized least-squares training of multilayer feedforward neural networks, in: Proceedings...
  • C.S. Burrus et al., Iterative reweighted least-squares design of FIR filters, IEEE Trans. Signal Process. (1994)
  • C.P. Chen, A rapid supervised learning neural network for function interpolation and approximation, IEEE Trans. Neural Networks (1996)


This work has been supported by Spain Government (Grant TEC2008-02473).
