An Improved Approach Combining Random PSO with BP for Feedforward Neural Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5855)

Abstract

In this paper, an improved approach combining random particle swarm optimization (PSO) with backpropagation (BP) is proposed to obtain better generalization performance and a faster convergence rate. It is well known that the BP algorithm has good local search ability but is easily trapped in local minima. In contrast, the PSO algorithm has good global search ability and converges rapidly during the initial stage of the global search. However, since PSO tends to lose diversity, it converges more slowly near the global minimum. The improved approach, called RPSO-BP, therefore combines global search and local search in a reasonable way. Moreover, to improve the diversity of the swarm, a random PSO (RPSO) is proposed in this paper. Compared with traditional learning algorithms, the improved learning algorithm achieves much better convergence accuracy and a faster convergence rate. Finally, experimental results are given to verify the efficiency and effectiveness of the proposed algorithm.
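The abstract outlines a two-phase scheme: PSO performs the global search over the network weights, a randomization step restores swarm diversity, and BP then refines the best solution locally. The sketch below is a minimal, hypothetical illustration of such a PSO-BP hybrid for a one-hidden-layer network; it is not the authors' RPSO-BP, and the re-initialization rule, network architecture, and all parameter values are assumptions made for the example.

```python
# Hypothetical PSO + BP hybrid for a one-hidden-layer network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def unpack(w, n_in, n_hid):
    """Split a flat weight vector into (W1, b1, W2, b2)."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return W1, b1, W2, b2

def mse(w, X, y, n_in, n_hid):
    W1, b1, W2, b2 = unpack(w, n_in, n_hid)
    h = np.tanh(X @ W1 + b1)      # hidden layer
    out = h @ W2 + b2             # linear output
    return np.mean((out - y) ** 2)

def pso_bp(X, y, n_hid=8, n_particles=20, pso_iters=100, bp_iters=500,
           w_inertia=0.7, c1=1.5, c2=1.5, lr=0.05, reinit_frac=0.2):
    n_in = X.shape[1]
    dim = n_in * n_hid + n_hid + n_hid + 1
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.array([mse(p, X, y, n_in, n_hid) for p in pos])
    gbest = pbest[pbest_err.argmin()].copy()

    # --- global phase: PSO with random re-initialization for diversity ---
    for _ in range(pso_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = (w_inertia * vel + c1 * r1 * (pbest - pos)
               + c2 * r2 * (gbest - pos))
        pos += vel
        err = np.array([mse(p, X, y, n_in, n_hid) for p in pos])
        better = err < pbest_err
        pbest[better], pbest_err[better] = pos[better], err[better]
        gbest = pbest[pbest_err.argmin()].copy()
        # re-seed the worst particles at random positions (diversity injection,
        # an assumed stand-in for the paper's random PSO step)
        k = max(1, int(reinit_frac * n_particles))
        worst = np.argsort(err)[-k:]
        pos[worst] = rng.uniform(-1, 1, (k, dim))
        vel[worst] = 0.0

    # --- local phase: BP (gradient descent) started from the PSO solution ---
    w = gbest.copy()
    for _ in range(bp_iters):
        W1, b1, W2, b2 = unpack(w, n_in, n_hid)
        h = np.tanh(X @ W1 + b1)
        out = h @ W2 + b2
        d_out = 2 * (out - y) / len(y)              # dMSE/dout
        gW2 = h.T @ d_out
        gb2 = d_out.sum()
        d_h = np.outer(d_out, W2) * (1 - h ** 2)    # backprop through tanh
        gW1 = X.T @ d_h
        gb1 = d_h.sum(axis=0)
        grad = np.concatenate([gW1.ravel(), gb1, gW2, [gb2]])
        w -= lr * grad
    return w

# Example: fit y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
w = pso_bp(X, y)
print("final MSE:", mse(w, X, y, X.shape[1], 8))
```

In practice, the split between PSO iterations and BP epochs, and the fraction of particles re-seeded per step, would be tuned to the problem; the values above are placeholders.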


Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cui, Y., Ju, S.G., Han, F., Gu, T.Y. (2009). An Improved Approach Combining Random PSO with BP for Feedforward Neural Networks. In: Deng, H., Wang, L., Wang, F.L., Lei, J. (eds.) Artificial Intelligence and Computational Intelligence. AICI 2009. Lecture Notes in Computer Science, vol. 5855. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-05253-8_40

  • DOI: https://doi.org/10.1007/978-3-642-05253-8_40

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-05252-1

  • Online ISBN: 978-3-642-05253-8
