
A sequential learning algorithm based on adaptive particle filtering for RBF networks

  • Original Article
  • Published in Neural Computing and Applications

Abstract

To address the low filtering accuracy and divergence caused by unknown process noise statistics and local linearization in the neural network state-space model, this paper proposes an adaptive process noise covariance particle filter algorithm for radial basis function (RBF) networks. With this algorithm, the weights and centers of the RBF network evolve sequentially in time via the extended Kalman particle filter, while the process noise covariance matrices are estimated simultaneously by maximizing the evidence density function with respect to them. The performance of the proposed approach is evaluated on two function approximation problems, and the experimental results show that it achieves better prediction accuracy than other well-known training algorithms.
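The following sketch (not the authors' implementation) illustrates the state-space view of sequential RBF training described above: the output weights and hidden-unit centers form the state vector, each incoming training pair supplies a new observation, and a particle filter updates the parameter distribution one sample at a time. It assumes a small network (five Gaussian units with fixed widths), uses the transition prior as the proposal instead of the paper's extended Kalman proposal, and keeps the process noise covariance fixed rather than adapting it by evidence maximization; all names and constants are illustrative.

# Minimal sketch of sequential RBF training as particle filtering (illustrative only;
# the paper uses an extended Kalman particle filter proposal and adapts the process
# noise covariance by maximizing an evidence density, neither of which is done here).
import numpy as np

rng = np.random.default_rng(0)

H = 5                      # number of RBF hidden units (assumed)
D = 1                      # input dimension (assumed)
STATE_DIM = H + H * D      # state = [output weights, flattened centers]

def rbf_predict(state, x, width=1.0):
    """Network output for input x, given one particle's weights and centers."""
    w = state[:H]
    c = state[H:].reshape(H, D)
    phi = np.exp(-np.sum((x - c) ** 2, axis=1) / (2.0 * width ** 2))
    return w @ phi

def particle_filter_step(particles, x, y, q_var=1e-3, r_var=1e-2):
    """One sequential-learning step: diffuse parameters, weight by likelihood, resample."""
    n = particles.shape[0]
    # Random-walk transition for weights and centers (process noise covariance q_var * I).
    particles = particles + rng.normal(0.0, np.sqrt(q_var), particles.shape)
    # Likelihood of the observed target under Gaussian measurement noise.
    preds = np.array([rbf_predict(p, x) for p in particles])
    logw = -0.5 * (y - preds) ** 2 / r_var
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Multinomial resampling.
    idx = rng.choice(n, size=n, p=w)
    return particles[idx], float(w @ preds)

# Toy usage: learn y = sin(x) one sample at a time.
particles = rng.normal(0.0, 1.0, (200, STATE_DIM))
for t in range(200):
    x = rng.uniform(-3.0, 3.0, size=(1, D))
    y = np.sin(x[0, 0]) + rng.normal(0.0, 0.1)
    particles, y_hat = particle_filter_step(particles, x, y)

In the paper's algorithm, the diffusion step would instead use an extended-Kalman-filter proposal per particle, and q_var would be re-estimated at each step from the evidence density; the predict-weight-resample loop, however, has the same overall shape.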



Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 71271215, 70921001, and 11301041) and the International Science & Technology Cooperation Program of China (No. 2011DFA10440). The authors would like to thank the editors and the anonymous referees for their valuable comments.

Author information


Corresponding author

Correspondence to Hui Peng.


About this article

Cite this article

Xi, Y., Peng, H. & Chen, X. A sequential learning algorithm based on adaptive particle filtering for RBF networks. Neural Comput & Applic 25, 807–814 (2014). https://doi.org/10.1007/s00521-014-1551-y

