
Reference pattern weight initialization for equalization

Conference paper in Tasks and Methods in Applied Artificial Intelligence (IEA/AIE 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1416)

Abstract

The problem of weight initialization in multilayer perceptron networks is considered. A computationally simple weight initialization method based on the use of reference patterns is investigated in a channel equalization application. On the one hand, the proposed method aims to set the initial weight values such that the inputs to the network nodes lie within the active region. On the other hand, the goal is to distribute the discriminant functions formed by the hidden units evenly over the region of input space where the training data are located. The proposed weight initialization is tested in a channel equalization application in which several alternatives for obtaining suitable reference patterns are investigated. A comparison with conventional random initialization shows that a significant improvement in convergence can be achieved with the proposed method. In addition, the computational cost of the initialization was found to be negligible compared with the cost of training.
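The two goals stated above can be illustrated with a minimal sketch. The code below is not the paper's exact procedure: the reference patterns are taken here simply as random training samples (the paper investigates several selection schemes), and the function name `reference_pattern_init` and the `active_limit` parameter are illustrative assumptions. The idea shown is only that each hidden unit's weights are anchored to a pattern from the data region and scaled so that preactivations avoid sigmoid saturation.

```python
import numpy as np

def reference_pattern_init(X, n_hidden, active_limit=1.0, rng=None):
    """Illustrative reference-pattern initialization for one hidden layer.

    X : (n_samples, n_inputs) training inputs.
    Picks one reference pattern per hidden unit from the training data and
    scales each unit's weight vector so that |w . x| stays near the active
    (non-saturated) region of a sigmoid for inputs of similar magnitude.
    """
    rng = np.random.default_rng(rng)
    n_samples, n_inputs = X.shape
    # Choose reference patterns from the region where training data lies.
    idx = rng.choice(n_samples, size=n_hidden, replace=False)
    refs = X[idx]                                  # (n_hidden, n_inputs)
    # Random unit-norm directions, then per-unit scaling into the active region.
    W = rng.standard_normal((n_hidden, n_inputs))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    scale = active_limit / (np.linalg.norm(refs, axis=1) + 1e-12)
    W *= scale[:, None]
    # Bias places each unit's discriminant hyperplane through its reference
    # pattern: w . ref + b = 0, spreading the hyperplanes over the data region.
    b = -np.einsum('ij,ij->i', W, refs)
    return W, b
```

Anchoring each hyperplane to a distinct data pattern is what distributes the hidden-unit discriminant functions over the occupied part of the input space, rather than leaving them clustered wherever random initialization happens to put them.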



Editors: Angel Pasqual del Pobil, José Mira, Moonis Ali


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lehtokangas, M. (1998). Reference pattern weight initialization for equalization. In: Pasqual del Pobil, A., Mira, J., Ali, M. (eds) Tasks and Methods in Applied Artificial Intelligence. IEA/AIE 1998. Lecture Notes in Computer Science, vol 1416. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-64574-8_443


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64574-0

  • Online ISBN: 978-3-540-69350-5

