
Combined learning and pruning for recurrent radial basis function networks based on recursive least square algorithms

  • Original Article
  • Neural Computing & Applications

Abstract

This paper discusses a way to combine training and pruning for the construction of a recurrent radial basis function network (RRBFN) based on recursive least square (RLS) learning. In our approach, an RRBFN is first trained using the proposed RLS algorithms. The error covariance matrix, obtained directly from the RLS computations, is then used to remove unimportant radial basis function (RBF) nodes. We propose two algorithms: (1) a “global” version suited to low-dimensional input spaces, and (2) a “local” version that can be applied when the input dimension is large. In both cases, we show that the error covariance matrix obtained from the RLS algorithms can serve as a basis for pruning the trained RRBFN. Simulation examples illustrate the proposed approaches.
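
To make the idea concrete, the sketch below shows a minimal recursive least square update for the output weights of an RBF-type network, followed by a node-ranking step driven by the error covariance matrix that the RLS recursion maintains. This is an illustration only: the recurrent feedback structure of the RRBFN is omitted, the function names are hypothetical, and the saliency measure w_i^2 / P_ii is an assumed covariance-based criterion in the spirit of the approach, not the paper's exact global or local algorithm.

```python
import numpy as np

def rls_train(Phi, d, lam=0.99, delta=1e3):
    """Recursive least square estimate of output weights.

    Phi : (T, M) array of hidden-node (RBF) activations over T time steps.
    d   : (T,) array of desired outputs.
    Returns the weight vector w and the error covariance matrix P.
    """
    T, M = Phi.shape
    w = np.zeros(M)
    P = delta * np.eye(M)                      # large initial covariance
    for t in range(T):
        phi = Phi[t]
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        e = d[t] - phi @ w                     # a priori prediction error
        w = w + k * e                          # weight update
        P = (P - np.outer(k, phi @ P)) / lam   # covariance update with forgetting factor lam
    return w, P

def rank_nodes_for_pruning(w, P, n_remove):
    """Flag the n_remove hidden nodes with the smallest saliency.

    The saliency w_i**2 / P_ii is an assumed importance measure based on the
    RLS error covariance; nodes with small saliency contribute little to the
    output and are candidates for removal.
    """
    saliency = w**2 / np.diag(P)
    return np.argsort(saliency)[:n_remove]
```

In practice the reduced network would be retrained, or the RLS recursion continued, after the flagged nodes are removed; that alternation is the role of the combined learning-and-pruning procedure described in the paper.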

Acknowledgements

The work described in this paper was supported by a Strategic Grant from City University of Hong Kong, Hong Kong (Project No. 7001218).

Author information

Correspondence to Chi Sing Leung.

About this article

Cite this article

Leung, C.S., Tsoi, A.C. Combined learning and pruning for recurrent radial basis function networks based on recursive least square algorithms. Neural Comput & Applic 15, 62–78 (2006). https://doi.org/10.1007/s00521-005-0009-7
