Clipped LMS/RLS Adaptive Algorithms: Analytical Evaluation and Performance Comparison with Low-Complexity Counterparts

Published in Circuits, Systems, and Signal Processing

Abstract

The high computational load of conventional adaptive FIR filters applied to long system identification problems, together with their weak tracking ability, has encouraged researchers to seek efficient adaptive algorithms for such applications. One efficient solution is the three-level clipped LMS/RLS adaptive algorithm. This paper presents an analytical insight into the performance of these algorithms by evaluating their steady-state misalignment when they are employed for the identification of time-invariant and time-varying systems. Using this analysis, we theoretically compare their misalignment performance with that of their low-complexity counterparts. In addition, we derive the optimal step size/forgetting factor explicitly, obtain a relation between the optimal clipping level and the step size/forgetting factor that achieves the lowest steady-state misalignment, and explain how the performance of these algorithms can be improved by adjusting the clipping threshold according to the noise level. Finally, the different adaptive algorithms are coded in VHDL in order to evaluate their speed and hardware resource requirements.
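As context for the analysis that follows, the three-level clipped LMS recursion replaces the tap-input vector in the weight update with its clipped version, whose entries are \(+1\), \(0\), or \(-1\) according to a threshold \(\delta _x\). The following Python sketch of this update for system identification is illustrative only; the filter length, step size, clipping threshold, and noise level are assumptions, not the paper's settings.

```python
import numpy as np

# Illustrative sketch of three-level clipped LMS (C-LMS) for system
# identification. The filter length, step size, clipping threshold, and
# noise level below are assumptions for demonstration, not the paper's
# settings.
rng = np.random.default_rng(0)

L = 16          # adaptive filter length
mu = 0.01       # step size
delta_x = 0.5   # clipping threshold
N = 20000       # number of iterations

def clip3(x, delta):
    """Three-level clipper: maps each sample to +1, 0, or -1."""
    return np.sign(x) * (np.abs(x) > delta)

h = rng.standard_normal(L)   # unknown system impulse response
h_hat = np.zeros(L)          # adaptive filter weights
x_buf = np.zeros(L)          # tap-input vector x(n)

misalignment = []
for n in range(N):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()                # white Gaussian input
    d = h @ x_buf + 1e-3 * rng.standard_normal()    # desired signal plus noise
    e = d - h_hat @ x_buf                           # a priori error
    h_hat = h_hat + mu * e * clip3(x_buf, delta_x)  # clipped-regressor update
    misalignment.append(np.sum((h_hat - h) ** 2) / np.sum(h ** 2))

print(misalignment[0], misalignment[-1])
```

Note that only the regressor is clipped; the error is kept at full precision, which is what removes the general multiplications from the weight update and makes the algorithm attractive for hardware implementation.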



References

  1. T. Aboulnasr, K. Mayyas, MSE analysis of the M-max NLMS adaptive algorithm, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3, pp. 1669–1672 (1998)

  2. M. Abramowitz, I. Stegun, Handbook of Mathematical Functions (Dover Publications, New York, 1972)

  3. J.B. Allen, D.A. Berkley, Image method for efficiently simulating small-room acoustics. J. Acoust. Soc. Am. 65(4), 943–950 (1979)

  4. M. Bekrani, A.W.H. Khong, Performance analysis and insights into the clipped input adaptive filters applied to system identification, in Proceedings of the International Conference on Signal Processing Systems, pp. 232–236 (2011)

  5. M. Bekrani, M. Lotfizad, A hybrid clipped/unclipped input adaptive filtering scheme for stereophonic acoustic echo cancellation, in Proceedings of the 17th Iranian Conference on Electrical Engineering, pp. 559–563 (2009)

  6. J. Benesty, T. Gänsler, D. Morgan, M. Sondhi, S. Gay, Advances in Network and Acoustic Echo Cancellation (Springer, New York, 2001)

  7. J. Benesty, D.R. Morgan, M.M. Sondhi, A better understanding and an improved solution to the specific problems of stereophonic acoustic echo cancellation. IEEE Trans. Speech Audio Process. 6(2), 156–165 (1998)

  8. N.J. Bershad, S. McLaughlin, C.F.N. Cowan, Performance comparison of RLS and LMS algorithms for tracking a first-order Markov communications channel. Proc. IEEE Int. Symp. Circuits Syst. 1, 266–270 (1990)

  9. S.H. Dandach, F. Baris, S. Dasgupta, B.D.O. Anderson, Adaptive source localization by mobile agents, in Proceedings of the IEEE Conference on Decision & Control, pp. 2045–2050 (2006)

  10. S.C. Douglas, Adaptive filters employing partial updates. IEEE Trans. Circuits Syst. II 44(3), 209–216 (1997)

  11. D.L. Duttweiler, Adaptive filter performance with nonlinearities in the correlation multiplier. IEEE Trans. Acoust. Speech Signal Process. ASSP-30(4), 578–586 (1982)

  12. E. Eweda, Analysis and design of a signed regressor LMS algorithm for stationary and nonstationary adaptive filtering with correlated Gaussian data. IEEE Trans. Circuits Syst. 37(11), 1367–1374 (1990)

  13. M. Godavarti, A.O. Hero, Partial update LMS algorithms. IEEE Trans. Signal Process. 53(7), 2382–2399 (2005)

  14. M. Guilin, F. Gran, F. Jacobsen, F. Agerkvist, Adaptive feedback cancellation with band-limited LPC vocoder in digital hearing aids. IEEE Trans. Audio Speech Lang. Process. 19(4), 677–687 (2011)

  15. S. Haykin, Adaptive Filter Theory (Prentice Hall, Englewood Cliffs, 2001)

  16. A.W.H. Khong, P.A. Naylor, Selective-tap adaptive filtering with performance analysis for identification of time-varying systems. IEEE Trans. Audio Speech Lang. Process. 15(5), 1681–1695 (2007)

  17. S.M. Kuo, D.R. Morgan, Active Noise Control Systems: Algorithms and DSP Implementations (John Wiley, New York, 1996)

  18. M. Lotfizad, H.S. Yazdi, Clipped input RLS applied to vehicle tracking. EURASIP J. Appl. Signal Process. 2005(8), 1221–1228 (2005)

  19. M. Lotfizad, H.S. Yazdi, Modified clipped LMS algorithm. EURASIP J. Appl. Signal Process. 2005(8), 1229–1234 (2005)

  20. P.A. Naylor, A.W.H. Khong, Affine projection and recursive least squares adaptive filters employing partial updates, in Proceedings of the IEEE Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 950–954 (2004)

  21. W.A. Sethares, I.M.Y. Mareels, B.D.O. Anderson, C.R. Johnson, R.R. Bitmead, Excitation conditions for signed regressor least mean squares adaptation. IEEE Trans. Circuits Syst. 35(6), 613–624 (1988)

  22. B. Widrow, S. Stearns, Adaptive Signal Processing (Prentice-Hall, Englewood Cliffs, 1985)

  23. H.S. Yazdi, M. Lotfizad, M. Fathy, Car tracking by quantised input LMS, QX-LMS algorithm in traffic scenes. IEE Proc. Vis. Image Signal Process. 153(1), 37–45 (2006)

  24. V. Zarzoso, A.K. Nandi, Adaptive blind source separation for virtually any source probability density function. IEEE Trans. Signal Process. 48(2), 477–488 (2000)


Author information

Correspondence to Mehdi Bekrani.

Appendices

Appendix 1

To derive \(\eta \), we utilize (3) and (5), which give

$$\begin{aligned} \mathbf {v}(n+1)&=\mathbf {\widehat{h}}(n+1)-\mathbf {h}\nonumber \\&=\Big \{\mathbf {\widehat{h}}(n)+\varvec{\Psi }(n) e(n)\mathbf {\widetilde{x}}(n)\Big \}-\mathbf {h}\nonumber \\&=\Big \{\mathbf {\widehat{h}}(n)+\varvec{\Psi }(n) \mathbf {\widetilde{x}}(n)[d(n)-\mathbf {x}^\mathrm{{T}}(n)\mathbf {\widehat{h}}(n)]\Big \}-\mathbf {h}. \end{aligned}$$
(32)

Making \(w(n)\) the subject of (1) and substituting the result into the above, we obtain

$$\begin{aligned} \mathbf {v}(n+1)&=\mathbf {\widehat{h}}(n)-\mathbf {h}+\varvec{\Psi }(n)\mathbf {\widetilde{x}}(n) \big \{w(n)-\mathbf {x}^\mathrm{{T}}(n)\big [\mathbf {\widehat{h}}(n)-\mathbf {h}\big ]\big \}\nonumber \\&=\mathbf {v}(n)+\varvec{\Psi }(n)\mathbf {\widetilde{x}}(n)\big \{w(n)-\mathbf {x}^\mathrm{{T}}(n)\mathbf {v}(n)\big \}\nonumber \\&=\mathbf {v}(n)+\varvec{\Psi }(n)\mathbf {\widetilde{x}}(n)w(n)-\varvec{\Psi }(n)\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\mathbf {v}(n) \end{aligned}$$
(33)

and hence,

$$\begin{aligned} \mathbf {Q}(n+1)&= \mathbf {Q}(n)+ E\left\{ \varvec{\Psi }(n)\mathbf {\widetilde{x}}(n)w^2(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\varvec{\Psi }^\mathrm{{T}}(n)\right\} \nonumber \\&\quad - E\left\{ \mathbf {v}(n)\big [\varvec{\Psi }(n)\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\mathbf {v}(n)\big ]^\mathrm{{T}}\right\} \nonumber \\&\quad - E\left\{ \big [\varvec{\Psi }(n)\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\mathbf {v}(n)\big ]\mathbf {v}^\mathrm{{T}}(n)\right\} \nonumber \\&\quad + E\left\{ \varvec{\Psi }(n)\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\mathbf {v}(n)\mathbf {v}^\mathrm{{T}}(n)\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\varvec{\Psi }^\mathrm{{T}}(n)\right\} . \end{aligned}$$
(34)

To simplify the above expression, we assume that \(w(n)\) is uncorrelated with \(\mathbf {x}(n)\). Denoting \(\mathbf {C}(n)=E\{\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\mathbf {v}(n)\mathbf {v}^\mathrm{{T}}(n)\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\}\), we can express (34) as

$$\begin{aligned} \mathbf {Q}(n+1)&= \mathbf {Q}(n)+ \sigma _w^2\varvec{\Psi }(n)E\{\mathbf {\widetilde{x}}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\}\varvec{\Psi }^\mathrm{{T}}(n)\nonumber \\&\quad - E\{\mathbf {v}(n)\mathbf {v}^\mathrm{{T}}(n)\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\}\varvec{\Psi }^\mathrm{{T}}(n)\nonumber \\&\quad - \varvec{\Psi }(n)E\{\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\mathbf {v}(n)\mathbf {v}^\mathrm{{T}}(n)\}\nonumber \\&\quad + \varvec{\Psi }(n)\mathbf {C}(n)\varvec{\Psi }^\mathrm{{T}}(n). \end{aligned}$$
(35)

Here, we show that \(\mathbf {C}(n)=\sigma _{\tilde{x}}^{2} \sigma ^2_x \mathrm {tr}\big \{\mathbf {Q}(n)\big \}\mathbf {I}\). The matrix \(\mathbf {C}(n)\) can be rewritten as

$$\begin{aligned} \mathbf {C}(n)&= E\big \{\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\mathbf {v}(n)\mathbf {v}^\mathrm{{T}}(n)\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\big \}\nonumber \\&= E\big \{\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)E\{\mathbf {v}(n)\mathbf {v}^\mathrm{{T}}(n)\}\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\big \}\nonumber \\&= E\big \{\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\mathbf {Q}(n)\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\big \}. \end{aligned}$$
(36)

Therefore, if \(c_{pq}(n)\) is the element of \((p+1)\)th row and \((q+1)\)th column of \(\mathbf {C}(n)\) and \(r_{ij}(n)\) is the element of \((i+1)\)th row and \((j+1)\)th column of \(\mathbf {Q}(n)\), we have

$$\begin{aligned} c_{pq}(n)&= \sum _{i=0}^{L-1}\sum _{j=0}^{L-1}E\left\{ \widetilde{x}_p(n)\widetilde{x}_q(n)x_i(n)x_j(n)\right\} r_{ij}(n)\nonumber \\&\approx E\left\{ \widetilde{x}_p(n)\widetilde{x}_q(n)\right\} \sum _{i=0}^{L-1}\sum _{j=0}^{L-1}E\left\{ x_i(n)x_j(n)\right\} r_{ij}(n). \end{aligned}$$
(37)

The approximation in the above equation is made under the assumption that the \(i\)th element of \(\mathbf {\widetilde{x}}(n)\) is independent of the \(j\)th element of \(\mathbf {x}(n)\) for \(0\le i,j \le L-1\). This assumption is reasonable, since the input signal was assumed to be i.i.d. Gaussian. A simple calculation shows that \(\sum _{i=0}^{L-1}\sum _{j=0}^{L-1}E\left\{ x_i(n)x_j(n)\right\} r_{ij}(n)\) equals the scalar \(E\{\mathbf {x}^\mathrm{{T}}(n)\mathbf {Q}(n)\mathbf {x}(n)\}\), while \(E\left\{ \widetilde{x}_p(n)\widetilde{x}_q(n)\right\} \) is the \((p+1,q+1)\)th element of \(\widetilde{\mathbf {R}}(n)\); thus we approximately have

$$\begin{aligned} \mathbf {C}(n)=\widetilde{\mathbf {R}}(n) E\{\mathbf {x}^\mathrm{{T}}(n)\mathbf {Q}(n)\mathbf {x}(n)\}. \end{aligned}$$
(38)

In addition, the scalar expression \(E\{\mathbf {x}^\mathrm{{T}}(n)\mathbf {Q}(n)\mathbf {x}(n)\}\) equals \(\mathrm {tr}\big \{\mathbf {R}_x(n)\mathbf {Q}(n)\big \}\). Hence,

$$\begin{aligned} \mathbf {C}(n)=\widetilde{\mathbf {R}}(n) \mathrm {tr}\big \{\mathbf {R}_x(n)\mathbf {Q}(n)\big \}. \end{aligned}$$
(39)

Since the input signal is assumed to be i.i.d. Gaussian, \(\mathbf {R}_x(n)=\sigma ^2_x\mathbf {I}\). In this case, the \(i\)th element of \(\mathbf {x}(n)\) is independent of the \(j\)th element of \(\mathbf {x}(n)\) for \(i\ne j\); consequently, the \(i\)th element of \(\mathbf {\widetilde{x}}(n)\) is also independent of the \(j\)th element of \(\mathbf {\widetilde{x}}(n)\) for \(i\ne j\). As a result, \(\mathbf {\widetilde{R}}(n)=\sigma _{\tilde{x}}^{2}\mathbf {I}\), in which \(\sigma _{\tilde{x}}^{2}\) is the variance of the clipped input signal, and (39) simplifies to

$$\begin{aligned} \mathbf {C}(n)=\sigma _{\tilde{x}}^{2} \sigma ^2_x \mathrm {tr}\big \{\mathbf {Q}(n)\big \}\mathbf {I}. \end{aligned}$$
(40)
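Equation (40) can be checked numerically. The sketch below estimates \(\mathbf {C}(n)\) by Monte Carlo simulation for an i.i.d. Gaussian input and an arbitrary fixed positive semi-definite \(\mathbf {Q}\); the dimension, clipping threshold, choice of \(\mathbf {Q}\), and sample count are illustrative assumptions.

```python
import math
import numpy as np

# Monte Carlo check of the approximation (40):
#   C = E{x~ x^T Q x x~^T}  ~  sigma_xt^2 * sigma_x^2 * tr{Q} * I.
# The dimension L, threshold delta_x, choice of Q, and sample count are
# illustrative assumptions.
rng = np.random.default_rng(1)
L, sigma_x, delta_x = 32, 1.0, 0.5

M = np.triu(rng.standard_normal((L, L)))
Q = M @ M.T / L            # an arbitrary fixed positive semi-definite Q

num = 200000
X = sigma_x * rng.standard_normal((num, L))   # i.i.d. Gaussian input vectors
Xt = np.sign(X) * (np.abs(X) > delta_x)       # three-level clipped versions
s = ((X @ Q) * X).sum(axis=1)                 # scalar x^T Q x per sample
C_emp = (Xt * s[:, None]).T @ Xt / num        # empirical C

sigma_xt2 = math.erfc(delta_x / (sigma_x * math.sqrt(2)))
C_theory = sigma_xt2 * sigma_x**2 * np.trace(Q)   # predicted diagonal value
print(np.mean(np.diag(C_emp)), C_theory)
```

The empirical matrix is close to diagonal, with diagonal entries near the predicted value; the small residual reflects the independence approximation made in (37).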

Employing (40), (35) can be written as

$$\begin{aligned} \mathbf {Q}(n+1)&= \mathbf {Q}(n) + \sigma _w^2\varvec{\Psi }(n)\widetilde{\mathbf {R}}(n)\varvec{\Psi }^\mathrm{{T}}(n)\nonumber \\&\quad - E\{\mathbf {v}(n)\mathbf {v}^\mathrm{{T}}(n)\}E\{\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\}\varvec{\Psi }^\mathrm{{T}}(n)\nonumber \\&\quad - \varvec{\Psi }(n)E\{\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\}E\{\mathbf {v}(n)\mathbf {v}^\mathrm{{T}}(n)\}\nonumber \\&\quad + \varvec{\Psi }(n)\sigma _{\tilde{x}}^{2} \sigma ^2_x \mathrm {tr}\big \{\mathbf {Q}(n)\big \}\varvec{\Psi }^\mathrm{{T}}(n) \end{aligned}$$
(41)

in which, similar to [22], we have assumed that \(\mathbf {x}(n)\) and \(\mathbf {v}(n)\) are independent from each other.

Under the steady-state condition, we assume that the adaptive algorithm has converged, so that \(\mathbf {Q}(n)\) approaches a constant expected value. Therefore,

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbf {Q}(n+1) = \lim _{n \rightarrow \infty } \mathbf {Q}(n)=\mathbf {Q}. \end{aligned}$$
(42)

Equation (41) can then be rewritten for the steady-state case as

$$\begin{aligned} \mathbf {Q}&= \mathbf {Q} + \sigma _w^2\varvec{\Psi }(n)\widetilde{\mathbf {R}}(n)\varvec{\Psi }^\mathrm{{T}}(n)\nonumber \\&\quad - \mathbf {Q}E\{\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\}\varvec{\Psi }^\mathrm{{T}}(n)\nonumber \\&\quad - \varvec{\Psi }(n)E\{\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\}\mathbf {Q}\nonumber \\&\quad + \varvec{\Psi }(n)\sigma _{\tilde{x}}^{2} \sigma ^2_x \mathrm {tr}\big \{\mathbf {Q}\big \}\varvec{\Psi }^\mathrm{{T}}(n) \end{aligned}$$
(43)

resulting in

$$\begin{aligned}&\sigma _w^2\varvec{\Psi }(n)\widetilde{\mathbf {R}}(n)\varvec{\Psi }^\mathrm{{T}}(n) - \mathbf {Q}E\{\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\}\varvec{\Psi }^\mathrm{{T}}(n)\nonumber \\&\quad - \varvec{\Psi }(n)E\{\mathbf {\widetilde{x}}(n)\mathbf {x}^\mathrm{{T}}(n)\}\mathbf {Q}+ \varvec{\Psi }(n)\sigma _{\tilde{x}}^{2} \sigma ^2_x \mathrm {tr}\big \{\mathbf {Q}\big \}\varvec{\Psi }^\mathrm{{T}}(n)\nonumber \\&=\mathbf {0}. \end{aligned}$$
(44)

It is shown in [19] that for Gaussian tap-input signal vectors,

$$\begin{aligned} E\big \{\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\big \}=\frac{\alpha }{\sigma _x} E\big \{\mathbf {x}(n)\mathbf {x}^\mathrm{{T}}(n)\big \}, \end{aligned}$$
(45)

where \(\alpha =\sqrt{2/\pi }\exp (-\delta ^2/2)\). As a result,

$$\begin{aligned} E\big \{\mathbf {x}(n)\mathbf {\widetilde{x}}^\mathrm{{T}}(n)\big \}={\alpha }{\sigma _x} \mathbf {I}. \end{aligned}$$
(46)

Substituting (45) and (46) into (44), we obtain

$$\begin{aligned}&\sigma _w^2\sigma _{\tilde{x}}^{2}\varvec{\Psi }(n)\varvec{\Psi }^\mathrm{{T}}(n) - {\alpha }{\sigma _x}\mathbf {Q}\varvec{\Psi }^\mathrm{{T}}(n)\nonumber \\&\quad -\, \alpha \sigma _x\varvec{\Psi }(n)\mathbf {Q} + \sigma _{\tilde{x}}^{2} \sigma _x^2 \mathrm {tr}\big \{\mathbf {Q}\big \}\varvec{\Psi }(n)\varvec{\Psi }^\mathrm{{T}}(n)=\mathbf {0}. \end{aligned}$$
(47)

In C-LMS, \(\varvec{\Psi }(n)=\mu \mathbf {I}\), while in C-RLS, assuming an i.i.d. input and letting \(n\) approach infinity, \(\varvec{\Psi }(n)=(1-\lambda )\widetilde{\mathbf {R}}^{-1}(n)=(1-\lambda )\sigma _{\tilde{x}}^{-2}\mathbf {I}\); in both cases, \(\mathbf {Q}\varvec{\Psi }^\mathrm{{T}}(n)= \varvec{\Psi }(n)\mathbf {Q}\). As a result, we can rewrite (47) as

$$\begin{aligned}&\sigma _w^2\sigma _{\tilde{x}}^{2}\varvec{\Psi }(n)\varvec{\Psi }^\mathrm{{T}}(n)- 2{\alpha }{\sigma _x}\mathbf {Q}\varvec{\Psi }^\mathrm{{T}}(n)\nonumber \\&\quad +\, \sigma _{\tilde{x}}^{2} \sigma _x^2 \mathrm {tr}\big \{\mathbf {Q}\big \}\varvec{\Psi }(n)\varvec{\Psi }^\mathrm{{T}}(n)=\mathbf {0} \end{aligned}$$
(48)

which simplifies to

$$\begin{aligned} \sigma _w^2\sigma _{\tilde{x}}^{2}\varvec{\Psi }(n) - 2{\alpha }{\sigma _x}\mathbf {Q} + \sigma _{\tilde{x}}^{2} \sigma _x^2 \mathrm {tr}\big \{\mathbf {Q}\big \}\varvec{\Psi }(n)=\mathbf {0}. \end{aligned}$$
(49)

To evaluate the steady-state misalignment, from (10) and (42) we obtain \(\eta =\mathrm {tr}\{\mathbf {Q}\}\). Thus, taking the trace of (49) results in

$$\begin{aligned} \sigma _w^2\sigma _{\tilde{x}}^{2}\mathrm {tr}\{\varvec{\Psi }(n)\}-2\alpha \sigma _x\eta +\sigma _{\tilde{x}}^{2} \sigma _x^2\eta \mathrm {tr}\{\varvec{\Psi }(n)\}=0, \end{aligned}$$
(50)

from which we arrive at

$$\begin{aligned} \eta =\frac{\sigma _w^2\sigma _{\tilde{x}}^{2}\mathrm {tr}\{\varvec{\Psi }(n)\}}{2\alpha \sigma _x-\sigma ^2_x\sigma _{\tilde{x}}^{2}\mathrm {tr}\{\varvec{\Psi }(n)\}}. \end{aligned}$$
(51)
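With (51) in hand, the steady-state misalignment can be evaluated numerically once \(\mathrm {tr}\{\varvec{\Psi }(n)\}\) is specified: \(L\mu \) for C-LMS and \(L(1-\lambda )\sigma _{\tilde{x}}^{-2}\) for C-RLS, as noted above. The numerical values in the sketch below are illustrative assumptions, not results from the paper.

```python
import math

# Evaluation of the steady-state misalignment (51), using
# tr{Psi} = L*mu for C-LMS and tr{Psi} = L*(1 - lam)/sigma_xt^2 for C-RLS.
# All numerical values below are illustrative assumptions.

def sigma_xt2(delta):
    """Variance of the three-level clipped signal: erfc(delta / sqrt(2))."""
    return math.erfc(delta / math.sqrt(2))

def eta(sigma_w2, sigma_x, delta, tr_psi):
    """Steady-state misalignment from (51) for a given tr{Psi(n)}."""
    s2 = sigma_xt2(delta)
    alpha = math.sqrt(2 / math.pi) * math.exp(-delta**2 / 2)
    return (sigma_w2 * s2 * tr_psi) / (2 * alpha * sigma_x - sigma_x**2 * s2 * tr_psi)

L, sigma_x, delta, sigma_w2 = 64, 1.0, 0.5, 1e-4
mu, lam = 0.005, 0.995

eta_clms = eta(sigma_w2, sigma_x, delta, L * mu)
eta_crls = eta(sigma_w2, sigma_x, delta, L * (1 - lam) / sigma_xt2(delta))
print(eta_clms, eta_crls)
```

As (51) suggests, reducing \(\mathrm {tr}\{\varvec{\Psi }(n)\}\) (a smaller step size or a forgetting factor closer to one) reduces the steady-state misalignment, provided the denominator remains positive.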

Appendix 2

Proof of (12)

According to the definition of the three-level clipping, we have

$$\begin{aligned} {\widetilde{x}^2(n)} =\left\{ \begin{array}{ll} 1, &{} \quad |x(n)|>\delta _x \\ 0, &{} \quad |x(n)|\le \delta _x \end{array} \right. .~ \end{aligned}$$
(52)

The probability that the sample \(x(n)\) at instant \(n\) lies within the interval \([-\delta _x , \delta _x]\) is

$$\begin{aligned} P(-\delta _x<x(n)<\delta _x)=\int _{-\delta _x}^{\delta _x}f_x(\tau )\mathrm{{d}}\tau \end{aligned}$$
(53)

where \(f_x(\tau )\) is the pdf of \(x(n)\). Therefore,

$$\begin{aligned} P(|x(n)|>\delta _x)=1-\int _{-\delta _x}^{\delta _x}f_x(\tau )\mathrm{{d}}\tau =2\int _{\delta _x}^{\infty }f_x(\tau )\mathrm{{d}}\tau , \end{aligned}$$
(54)

which is equal to \(P\left( \widetilde{x}(n)=\pm 1\right) \), which in turn equals \(P\left( \widetilde{x}^2(n)=1\right) \), i.e.,

$$\begin{aligned} P\left( \widetilde{x}^2(n)=1\right) =P(|x(n)|>\delta _x)=2\int _{\delta _x}^{\infty }f_x(\tau )\mathrm{{d}}\tau . \end{aligned}$$
(55)

On the other hand, from the definition of the expectation \(E[\cdot ]\), we have

$$\begin{aligned} \sigma _{\tilde{x}}^{2}=E[\widetilde{x}^2(n)]&= 1\times P\left( \widetilde{x}^2(n)=1\right) +0\times P\left( \widetilde{x}^2(n)=0\right) \nonumber \\&= 2\int _{\delta _x}^{\infty }f_x(\tau )\mathrm{{d}}\tau . \end{aligned}$$
(56)

Under the assumption of a zero-mean Gaussian input signal with variance \(\sigma _x^2\), and with \(\delta =\delta _x/\sigma _x\),

$$\begin{aligned} \sigma _{\tilde{x}}^{2}=1-\mathrm {erf}\left\{ \frac{\delta _x}{\sigma _x \sqrt{2}}\right\} =\mathrm {erfc}\left\{ \frac{\delta _x}{\sigma _x \sqrt{2}}\right\} =\mathrm {erfc}\left\{ \frac{\delta }{\sqrt{2}}\right\} . \end{aligned}$$
(57)
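Result (57) is easy to verify by simulation: clip zero-mean Gaussian samples at threshold \(\delta _x\) and compare the empirical second moment with \(\mathrm {erfc}(\delta /\sqrt{2})\). The parameter values below are illustrative.

```python
import math
import numpy as np

# Simulation check of (57): the variance of the three-level clipped signal
# equals erfc(delta / sqrt(2)), with delta = delta_x / sigma_x. Parameter
# values are illustrative.
rng = np.random.default_rng(3)
sigma_x, delta_x = 1.5, 0.75
delta = delta_x / sigma_x

x = sigma_x * rng.standard_normal(1_000_000)
xt = np.sign(x) * (np.abs(x) > delta_x)   # three-level clipper output
empirical = np.mean(xt**2)                # E{x~^2}, the clipped-signal variance
theory = math.erfc(delta / math.sqrt(2))
print(empirical, theory)
```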

Thus (12) is proved.

Cite this article

Bekrani, M., Lotfizad, M. Clipped LMS/RLS Adaptive Algorithms: Analytical Evaluation and Performance Comparison with Low-Complexity Counterparts. Circuits Syst Signal Process 34, 1655–1682 (2015). https://doi.org/10.1007/s00034-014-9923-1
