Signal Processing

Volume 84, Issue 2, February 2004, Pages 311-315

A geometrical derivation of the excess mean square error for Bussgang algorithms in a noiseless environment

https://doi.org/10.1016/j.sigpro.2003.10.020

Abstract

The steady-state excess mean square error (EMSE) is a useful performance criterion to measure how noisy Bussgang algorithms are. Thanks to a simple geometrical interpretation of LMS-like algorithms, the Pythagorean theorem gives a general equation similar to the "fundamental energy conservation" of Mai and Sayed (IEEE Trans. Signal Process. 48 (1) (2000) 80). From it, a simple but general closed form of the EMSE is derived for Bussgang algorithms once they have converged, i.e., once the optimal solution of the cost-function criterion is reached. As an example of this closed form, the EMSE of the constant modulus algorithm is computed and compared with the expressions given in the literature.

Introduction

The performance of adaptive filters is characterized in terms of convergence rate, that is, the speed of the transient response, and of steady-state mean square error level. It is well known that a fair comparison of two algorithms is a difficult task when both of these antagonistic behaviors are taken into account. Numerous works in the literature deal with the performance of adaptive filters. In [5], the author proposed a new measure of performance allowing a fair comparison between several algorithms. Another parameter can also be used and gives a different kind of information, which can be seen as the noise of the algorithm. This parameter, called the excess mean square error (EMSE), is the one studied in this paper.

Bussgang algorithms can easily be described geometrically, and a simple application of the Pythagorean theorem gives a very general relation similar to the one called "fundamental energy conservation" derived in [4]. This relation allows us to find a general closed form of the EMSE that includes the cost function, and it can then be used to compute the EMSE of a large family of algorithms, such as the constant modulus algorithm (CMA), as done below.

Section 2 presents the problem formulation and sets the notation. The next section describes the geometrical interpretation and applies the Pythagorean theorem to the right triangles involved. Section 4 then derives the EMSE in both the real and the complex cases. Finally, before concluding, Section 5 gives an example by computing with our method the EMSE of the CMA, whose expression can already be found in the literature.

Section snippets

Problem formulation and preliminaries

Before proceeding any further, let us define the notation used throughout the paper: vectors are denoted by bold letters; z̄ and Re z denote, respectively, the conjugate and the real part of z; and ⟨X, Y⟩ is the scalar product of X and Y.

Thanks to the notation given in Fig. 1, Bussgang algorithms that follow the classical stochastic gradient algorithm are described by the general update formula
$$W_{k+1} \;=\; W_k \;-\; \mu\,\varphi(z_k)\,\bar{X}_k, \qquad (1)$$
with X_k = [x_k, …, x_{k−N}]^T the vector of the input filter samples; the function φ is called the …
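To make the generic update (1) concrete, here is a minimal sketch of one iteration in Python. It assumes a complex FIR equalizer, takes the error nonlinearity φ as a user-supplied function, and uses the convention ⟨W, X⟩ = Σ w̄_i x_i for the scalar product; these choices are illustrative assumptions, not necessarily the paper's exact conventions.

```python
import numpy as np

def bussgang_update(W, X, phi, mu):
    """One iteration of the generic Bussgang update (1):
    W_{k+1} = W_k - mu * phi(z_k) * conj(X_k),  with  z_k = <W_k, X_k>."""
    z = np.vdot(W, X)                        # <W, X>: np.vdot conjugates its first argument
    return W - mu * phi(z) * np.conj(X), z

# example nonlinearity: the CMA choice phi(z) = (|z|^2 - R) z used in Section 5
def phi_cma(z, R=1.0):
    return (abs(z) ** 2 - R) * z
```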

Pythagoras and LMS

First of all, we should define several points of Fig. 2: z_opt is the output of the optimal filter, z_k is the output of the filter at time k, and z_{k+1} is the output of the filter after the update (1). Mathematically speaking, we have z_opt = ⟨W_opt, X_k⟩, z_k = ⟨W_k, X_k⟩ and z_{k+1} = ⟨W_{k+1}, X_k⟩.

As classically done, we define two errors: e_a, the a priori error, and e_p, the a posteriori error. These errors represent the difference between z_opt and z_k or z_{k+1}, respectively. Writing ΔW_k = W_k − W_opt, we have e_a = ⟨ΔW_k, X_k⟩ and e_p = ⟨ΔW_{k+1}, X_k⟩.
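For reference, the "fundamental energy conservation" relation of [4], to which the Pythagorean relation obtained from these right triangles is compared, can be written with the present notation (noiseless case) as the following identity; the exact form used in this paper appears in the full text:

$$\|\Delta W_{k+1}\|^2 \;+\; \frac{|e_a|^2}{\|X_k\|^2} \;=\; \|\Delta W_k\|^2 \;+\; \frac{|e_p|^2}{\|X_k\|^2}.$$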

EMSE computation

During the steady-state phase, we remark that
$$\lim_{k\to\infty} E\,\|\Delta W_{k+1}\|^2 \;=\; \lim_{k\to\infty} E\,\|\Delta W_k\|^2,$$
from which we derive
$$E|e_a|^2 \;=\; E|e_p|^2.$$
Therefore, with the classical independence assumption between the data and the error, we obtain E[Δe] ≈ 0. The main idea of the following is to develop an approximation of |φ(z)|² and Re[φ(z) ē_a] from (7) near the optimum z_opt. As z_k − z_opt is the a priori error e_a, we will find a simple relation between e_a and the value of the function φ at the optimum. Thanks to this relation, the EMSE will …
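The kind of expansion this step relies on can be sketched as a first-order development of φ around z_opt in terms of the complex (Wirtinger-type) derivatives introduced by Brandwood; this is only an illustration of the approximation, the paper's own development and the resulting closed form are in the full text:

$$\varphi(z_k) \;\approx\; \varphi(z_{\rm opt}) \;+\; \frac{\partial\varphi}{\partial z}(z_{\rm opt})\, e_a \;+\; \frac{\partial\varphi}{\partial \bar z}(z_{\rm opt})\, \bar e_a ,$$

which, once inserted into E|φ(z_k)|² and E[Re(φ(z_k) ē_a)], leaves only terms involving φ(z_opt), its derivatives at the optimum, and the second-order statistics of e_a.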

Constant modulus algorithm application

The CMA, one of the most widely studied algorithms, was developed by Godard [3] as an extension of the Sato algorithm for constant-modulus modulations such as PSK; surprisingly, it also works for other kinds of modulations. Its cost function can be written as
$$J(z) \;=\; \tfrac{1}{4}\, E\big[(|z|^2 - R)^2\big] \qquad\text{with}\qquad R \;=\; \frac{E|a|^4}{E|a|^2}.$$
Thus, in the CMA case, the functions needed for the EMSE computation are
$$\varphi(z) \;=\; (|z|^2 - R)\, z,$$
$$g(z) \;=\; 2\,\mathrm{Re}\big[\varphi(z)\,\overline{(z - z_{\rm opt})}\big] \;=\; (|z|^2 - R)\big[\,2|z|^2 - z\,\bar z_{\rm opt} - \bar z\, z_{\rm opt}\,\big].$$
In a noiseless SIMO context, the CMA has a zero-forcing solution. Therefore, we …
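As a numerical counterpart to this example, the sketch below runs a real-valued CMA in a deliberately simple setting (4-PAM source, identity channel, scalar equalizer, so that the zero-forcing solution is w_opt = 1) and estimates the steady-state EMSE empirically; the source, channel, step size and measurement window are illustrative assumptions, not those of the paper, and no closed-form value is claimed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4-PAM source, normalized to unit power (non-constant modulus, so the CMA
# gradient noise does not vanish at the optimum and the EMSE is nonzero)
alphabet = np.array([-3.0, -1.0, 1.0, 3.0])
a = rng.choice(alphabet, size=200_000)
a /= np.sqrt(np.mean(alphabet ** 2))

R = np.mean(a ** 4) / np.mean(a ** 2)   # Godard dispersion constant R = E|a|^4 / E|a|^2
mu = 1e-4                               # small step size: slow-adaptation (steady-state) regime

w = 0.5                                 # scalar equalizer, identity channel => w_opt = 1
z_hist = np.empty_like(a)
for k, x in enumerate(a):
    z = w * x                           # equalizer output z_k
    z_hist[k] = z
    w -= mu * (z ** 2 - R) * z * x      # real-valued CMA update, phi(z) = (z^2 - R) z

# empirical EMSE: mean-square a priori error E|e_a|^2 with z_opt = a,
# measured on the second half of the run (after convergence)
tail = slice(len(a) // 2, None)
emse = np.mean((z_hist[tail] - a[tail]) ** 2)
print(f"empirical steady-state EMSE ~ {emse:.2e} (step size mu = {mu})")
```

Halving the step size should roughly halve the measured EMSE, in line with the usual first-order proportionality of the EMSE to the step size for stochastic-gradient algorithms.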

Conclusion

This paper develops a very simple geometrical approach to the steady-state analysis of Bussgang algorithms. More precisely, we obtain a general form of the excess mean square error which includes the cost function and can therefore be applied to many algorithms. The application to the constant modulus algorithm carried out in this paper proves the validity of this approach. We may add that this approach can be extended in a straightforward manner to other adaptive schemes.

References (5)

  • D.H. Brandwood, A complex gradient operator and its application in adaptive array theory, IEE Proc. (1983)

  • I. Fijalkow et al., Adaptive fractionally spaced blind CMA equalization: excess MSE, IEEE Trans. Signal Process. (1998)