
Feedforward neural network for blind equalization with PSK signals


Abstract

Most cost functions used for blind equalization are nonconvex, nonlinear functions of the tap weights when implemented with a linear transversal filter structure. A blind equalization scheme with a nonlinear structure that can form nonconvex decision regions is therefore desirable. The efficacy of complex-valued feedforward neural networks for blind equalization of linear and nonlinear communication channels has been confirmed by many studies. In this paper we present a complex-valued neural network for blind equalization with M-ary phase shift keying (PSK) signals. The complex nonlinear activation functions used in the neural network are defined specifically for handling M-ary PSK signals. A training algorithm based on the constant modulus algorithm (CMA) cost function is derived. The improved performance of the proposed neural network in both stationary and nonstationary environments is confirmed through computer simulations.





Author information


Correspondence to Rajoo Pandey.

Appendix

The CMA cost function is expressed as

$$J(n) = \frac{1}{4}E{\left[ {{\left({{\left| {y(n)} \right|}^{2} - R_{2}} \right)}^{2}} \right]},$$
(25)

where

$$R_{2} = \frac{E[{\left| {s(n)} \right|}^{4} ]}{E[{\left| {s(n)} \right|}^{2} ]}.$$
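As a numerical illustration (a minimal NumPy sketch, not part of the paper), the dispersion constant R₂ and a sample estimate of the cost (25) can be written as follows; note that for any unit-modulus M-ary PSK constellation R₂ evaluates to 1.

```python
import numpy as np

def dispersion_constant(s):
    """R2 = E[|s|^4] / E[|s|^2], estimated from source symbols s."""
    return np.mean(np.abs(s) ** 4) / np.mean(np.abs(s) ** 2)

def cma_cost(y, R2):
    """Sample estimate of J(n) = (1/4) E[(|y|^2 - R2)^2], eq. (25)."""
    return 0.25 * np.mean((np.abs(y) ** 2 - R2) ** 2)

# 8-PSK source symbols: all lie on the unit circle, so R2 = 1
rng = np.random.default_rng(0)
s = np.exp(1j * 2 * np.pi * rng.integers(0, 8, 1000) / 8)
R2 = dispersion_constant(s)   # -> 1.0
```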

Using the gradient descent technique, the weights of the neural network can be updated as

$$w^{{(2)}}_{k} (n + 1) = w^{{(2)}}_{k} (n) - \eta \nabla _{{w^{{(2)}}_{k}}} J(n)$$
(26)

and

$$w^{{(1)}}_{{kl}} (n + 1) = w^{{(1)}}_{{kl}} (n) - \eta \nabla _{{w^{{(1)}}_{{kl}}}} J(n),$$
(27)

where η is the learning rate parameter and the terms \(\nabla_{w^{(2)}_{k}} J(n)\) and \(\nabla_{w^{(1)}_{kl}} J(n)\) represent the gradients of the cost function J(n) defined by (25) with respect to the weights \(w^{(2)}_{k}\) and \(w^{(1)}_{kl}\), respectively.
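In code, the updates (26) and (27) are ordinary stochastic gradient steps on complex-valued weights. A minimal sketch, assuming the gradients have already been evaluated as complex NumPy arrays (the sizes and variable names below are illustrative, not from the paper):

```python
import numpy as np

def gradient_step(w, grad_J, eta):
    """Gradient-descent update of (26)/(27): w(n+1) = w(n) - eta * grad_w J(n)."""
    return w - eta * grad_J

# hypothetical sizes: K = 4 hidden neurons, L = 8 equalizer taps
rng = np.random.default_rng(1)
w2 = 0.1 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))        # output layer
w1 = 0.1 * (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8)))  # hidden layer

# grad_w2 and grad_w1 would come from (31) and (38); zero placeholders here
w2 = gradient_step(w2, np.zeros_like(w2), eta=0.01)   # eq. (26)
w1 = gradient_step(w1, np.zeros_like(w1), eta=0.01)   # eq. (27)
```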

Since the activation function of the output layer neuron for the M-ary PSK signal is defined in terms of the modulus and angle of the activation sum, the gradient of the CMA cost function with respect to the output layer weight \(w^{(2)}_{k}(n)\) is expressed as

$$\nabla _{w^{(2)}_{k}} J(n) = \left( {\left| y(n) \right|}^{2} - R_{2} \right)\,\left| y(n) \right|\,\left( ab - \frac{b}{a}{\left| y(n) \right|}^{2} \right)\frac{\partial \left| {\rm net}^{(2)} (n) \right|}{\partial w^{(2)}_{k} (n)}.$$
(28)

To obtain an expression for the partial derivative in (28), we use the relationship

$${\left| {{\rm net}^{{(2)}} (n)} \right|}^{2} = {\left({{\rm net}^{{(2)}}_{\rm R} (n)} \right)}^{2} + {\left({{\rm net}^{{(2)}}_{\rm I} (n)} \right)}^{2}. $$
(29)

On differentiating (29) with respect to \(w^{(2)}_{k}\), we get

$$\begin{aligned} \frac{\partial \left| {\rm net}^{(2)} (n) \right|}{\partial w^{(2)}_{k} (n)} = & \frac{1}{\left| {\rm net}^{(2)} (n) \right|}{\left[ {\rm net}^{(2)}_{\rm R} (n)\,\frac{\partial {\rm net}^{(2)}_{\rm R} (n)}{\partial w^{(2)}_{k} (n)} + {\rm net}^{(2)}_{\rm I} (n)\,\frac{\partial {\rm net}^{(2)}_{\rm I} (n)}{\partial w^{(2)}_{k} (n)} \right]} \\ = & \frac{1}{\left| {\rm net}^{(2)} (n) \right|}{\left[ {\rm net}^{(2)}_{\rm R} (n){\left( \varphi ^{(1)} ({\rm net}^{(1)}_{k,{\rm R}} (n)) - j\varphi ^{(1)} ({\rm net}^{(1)}_{k,{\rm I}} (n)) \right)} + {\rm net}^{(2)}_{\rm I} (n){\left( \varphi ^{(1)} ({\rm net}^{(1)}_{k,{\rm I}} (n)) + j\varphi ^{(1)} ({\rm net}^{(1)}_{k,{\rm R}} (n)) \right)} \right]} \\ = & \frac{\varphi ^{(1)*} ({\rm net}^{(1)}_{k} (n))\,{\rm net}^{(2)} (n)}{\left| {\rm net}^{(2)} (n) \right|}. \\ \end{aligned}$$
(30)

On substituting (30) in (28), the expression for the gradient becomes

$$\nabla _{{w^{{(2)}}_{k}}} J(n) = \delta ^{{(2)}} (n) u^{*}_{k} (n)$$
(31)

where

$$\delta ^{{(2)}} (n) = {\left[ {{\left({{\left| {y(n)} \right|}^{2} - R_{2}} \right)}{\left| {y(n)} \right|}{\left({ab - \frac{b}{a}{\left| {y(n)} \right|}^{2}} \right)}} \right]}\frac{{{\rm net}^{{(2)}} (n)}}{{{\left| {{\rm net}^{{(2)}} (n)} \right|}}}.$$
(32)

Substituting (31), together with (32), into (26) gives the update equation (17) with (18) for the M-ary PSK signal.
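The factor ab − (b/a)|y(n)|² in (32) is the derivative of a tanh-type modulus nonlinearity, so a sketch of δ^(2)(n) and the output-layer gradient (31) might look as follows, assuming |y(n)| = a·tanh(b·|net^(2)(n)|) with the phase of net^(2)(n) preserved; the variable names net2, u, a, b are assumptions made for illustration only.

```python
import numpy as np

def output_delta(net2, a, b, R2):
    """delta^(2)(n) of (32), assuming |y| = a*tanh(b*|net^(2)|) and arg(y) = arg(net^(2)),
    so that d|y|/d|net^(2)| = ab - (b/a)|y|^2."""
    y_mod = a * np.tanh(b * np.abs(net2))
    scale = (y_mod ** 2 - R2) * y_mod * (a * b - (b / a) * y_mod ** 2)
    return scale * net2 / np.abs(net2)

def output_layer_gradient(delta2, u):
    """grad_{w_k^(2)} J(n) = delta^(2)(n) * conj(u_k(n)) for every hidden output u_k, eq. (31)."""
    return delta2 * np.conj(u)

# usage: w2 = w2 - eta * output_layer_gradient(output_delta(net2, a, b, R2), u)   # eq. (26)
```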

In order to obtain the update equation for the weights \(\{w^{(1)}_{kl}\}\), we need the following gradient:

$$\nabla _{{w^{{(1)}}_{{kl}}}} J(n) = {\left({{\left| {y(n)} \right|}^{2} - R_{2}} \right)}\,{\left| {y(n)} \right|}\,{\left({ab - \frac{b}{a}{\left| {y(n)} \right|}^{2}} \right)}\frac{{\partial {\left| {{\rm net}^{{(2)}} (n)} \right|}}}{{\partial w^{{(1)}}_{{kl}} (n)}}.$$
(33)

The partial derivative in (33) can be obtained by using (29):

$$\frac{{\partial {\left| {{\rm net}^{{(2)}} (n)} \right|}}}{{\partial w^{{(1)}}_{{kl}} (n)}} = \frac{1}{{{\left| {{\rm net}^{{(2)}} (n)} \right|}}}{\left[ {{\rm net}^{{(2)}}_{\rm R} (n)\frac{{\partial {\rm net}^{{(2)}}_{\rm R} (n)}}{{\partial w^{{(1)}}_{{kl}} (n)}} + {\rm net}^{{(2)}}_{\rm I} (n)\frac{{\partial {\rm net}^{{(2)}}_{\rm I} (n)}}{{\partial w^{{(1)}}_{{kl}}(n)}}} \right]},$$
(34)

where

$$\begin{aligned} \frac{{\partial {\rm net}^{{(2)}}_{\rm R} (n)}} {{\partial w^{{(1)}}_{{kl}} (n)}} = & w^{{(2)}}_{{k,{\rm R}}} (n){\varphi ^{(1)}} ^{\prime } ({\rm net}^{{(1)}}_{{k,{\rm R}}} (n)).(x_{{l,{\rm R}}} (n) - jx_{{l,{\rm I}}} (n)) \\ & - w^{{(2)}}_{{k,{\rm I}}} (n){\varphi ^{(1)}} ^{\prime } ({\rm net}^{{(1)}}_{{k,{\rm I}}} (n))(x_{{l,{\rm I}}} (n) + jx_{{l,{\rm R}}} (n)) \\ \end{aligned} $$
(35)

and

$$\begin{aligned} \frac{{\partial {\rm net}^{{(2)}}_{\rm I} (n)}} {{\partial w^{{(1)}}_{{kl}} (n)}} = & w^{{(2)}}_{{k,{\rm R}}} (n){\varphi ^{(1)}} ^{\prime } ({\rm net}^{{(1)}}_{{k,{\rm I}}} (n))(x_{{l,{\rm I}}} (n) + jx_{{l,{\rm R}}} (n)) \\ & + w^{{(2)}}_{{k,{\rm I}}} (n){\varphi ^{(1)}} ^{\prime } ({\rm net}^{{(1)}}_{{k,{\rm R}}} (n))(x_{{l,{\rm R}}} (n) - jx_{{l,{\rm I}}} (n)) \\ \end{aligned}. $$
(36)

Now the substitution of (35) and (36) in (34) and some simplification lead to

$$\frac{{\partial {\left| {{\rm net}^{{(2)}} (n)} \right|}}} {{\partial w^{{(1)}}_{{kl}} (n)}} = \frac{{x^{*}_{l} (n)}} {{{\left| {{\rm net}^{{(2)}} (n)} \right|}}}{\left( {{\varphi ^{(1)} }^{\prime } ({\rm net}^{{(1)}}_{{k,{\rm R}}} (n))\,\operatorname{Re} [w^{{(2)}}_{k} (n){\rm net}^{{(2)*}} (n)] - j{\varphi ^{(1)}} ^{\prime } ({\rm net}^{{(1)}}_{{k,{\rm I}}} (n))\,\operatorname{Im} [w^{{(2)}}_{k} (n){\rm net}^{{(2)*}} (n)]} \right)}. $$
(37)

Finally, by substituting (37) in (33), we get

$$\begin{aligned} \nabla _{w^{(1)}_{kl}} J(n) = & \frac{\delta ^{(2)} (n)}{{\rm net}^{(2)} (n)}\,x^{*}_{l} (n){\left( {\varphi ^{(1)}}^{\prime} ({\rm net}^{(1)}_{k,{\rm R}} (n))\,\operatorname{Re} [w^{(2)}_{k} (n)\,{\rm net}^{(2)*} (n)] - j{\varphi ^{(1)}}^{\prime} ({\rm net}^{(1)}_{k,{\rm I}} (n))\,\operatorname{Im} [w^{(2)}_{k} (n)\,{\rm net}^{(2)*} (n)] \right)} \\ = & \delta ^{(1)}_{k} (n)\,x^{*}_{l} (n), \\ \end{aligned}$$
(38)

where

$$\delta ^{(1)}_{k} (n) = \frac{\delta ^{(2)} (n)}{{\rm net}^{(2)} (n)}{\left( {\varphi ^{(1)}}^{\prime} ({\rm net}^{(1)}_{k,{\rm R}} (n))\,\operatorname{Re} [w^{(2)}_{k} (n)\,{\rm net}^{(2)*} (n)] - j{\varphi ^{(1)}}^{\prime} ({\rm net}^{(1)}_{k,{\rm I}} (n))\,\operatorname{Im} [w^{(2)}_{k} (n)\,{\rm net}^{(2)*} (n)] \right)}.$$

Using (38) in (27), we obtain the update rule (19) for the M-ary PSK signal.
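A corresponding sketch of δ^(1)_k(n) and the hidden-layer gradient (38) follows; φ^(1) is taken as tanh purely for illustration (the paper defines its own M-PSK activations), and delta2, net2, w2, net1, x are assumed variable names carried over from the previous sketch.

```python
import numpy as np

def hidden_delta(delta2, net2, w2, net1, phi_prime):
    """delta_k^(1)(n): phi_prime is the derivative of the real hidden activation phi^(1),
    applied separately to the real and imaginary parts of net_k^(1)(n)."""
    wn = w2 * np.conj(net2)                      # w_k^(2)(n) * conj(net^(2)(n)), one value per k
    return (delta2 / net2) * (phi_prime(net1.real) * wn.real
                              - 1j * phi_prime(net1.imag) * wn.imag)

def hidden_layer_gradient(delta1, x):
    """grad_{w_kl^(1)} J(n) = delta_k^(1)(n) * conj(x_l(n)), eq. (38), as a K x L outer product."""
    return np.outer(delta1, np.conj(x))

# illustrative tanh hidden activation => phi^(1)'(v) = 1 - tanh(v)^2
phi_prime = lambda v: 1.0 - np.tanh(v) ** 2
# usage: w1 = w1 - eta * hidden_layer_gradient(hidden_delta(delta2, net2, w2, net1, phi_prime), x)  # eq. (27)
```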


Cite this article

Pandey, R. Feedforward neural network for blind equalization with PSK signals. Neural Comput & Applic 14, 290–298 (2005). https://doi.org/10.1007/s00521-004-0465-5
