Accelerating Learning Performance of Back Propagation Algorithm by Using Adaptive Gain Together with Adaptive Momentum and Adaptive Learning Rate on Classification Problems

  • Conference paper
Ubiquitous Computing and Multimedia Applications (UCMA 2011)

Abstract

The back propagation (BP) algorithm is a very popular learning approach for feedforward multilayer perceptron networks. However, the most serious problems associated with BP are the local minima problem and slow convergence speed. Over the years, many improvements and modifications of the back propagation learning algorithm have been reported. In this research, we propose a new modified back propagation learning algorithm that introduces an adaptive gain together with an adaptive momentum and an adaptive learning rate into the weight update process. Through computer simulations, we demonstrate that the proposed algorithm achieves a better convergence rate and finds a good solution earlier than the conventional back propagation algorithm. We use two common benchmark classification problems to illustrate the improvement in convergence time.
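
The abstract does not spell out the adaptation rules themselves, so the following Python sketch only illustrates the general form of the gradient-descent BP weight update into which the three quantities enter: the activation gain scales the sigmoid slope, the learning rate scales the gradient term, and the momentum coefficient reuses the previous weight change. All function and variable names (bp_step, eta, alpha, gain, and the XOR usage) are illustrative assumptions, not taken from the paper, and the three parameters are held fixed here rather than adapted per iteration as in the proposed algorithm.

```python
import numpy as np

def add_bias(a):
    # Append a constant-1 column so bias weights live inside W1/W2.
    return np.hstack([a, np.ones((a.shape[0], 1))])

def sigmoid(net, gain):
    # Logistic activation with an explicit gain (slope) parameter.
    return 1.0 / (1.0 + np.exp(-gain * net))

def bp_step(x, t, W1, W2, dW1_prev, dW2_prev, eta=0.5, alpha=0.7, gain=1.0):
    # One batch gradient-descent step for a 1-hidden-layer MLP.
    # eta (learning rate), alpha (momentum) and gain are fixed here;
    # in the paper all three are adapted from iteration to iteration.
    h = sigmoid(add_bias(x) @ W1, gain)          # hidden layer
    y = sigmoid(add_bias(h) @ W2, gain)          # output layer

    # Error signals; the gain scales the sigmoid derivative gain*f*(1-f).
    delta_out = (y - t) * gain * y * (1.0 - y)
    delta_hid = (delta_out @ W2[:-1].T) * gain * h * (1.0 - h)

    # Weight change = learning-rate-scaled gradient + momentum term.
    dW2 = -eta * add_bias(h).T @ delta_out + alpha * dW2_prev
    dW1 = -eta * add_bias(x).T @ delta_hid + alpha * dW1_prev
    return W1 + dW1, W2 + dW2, dW1, dW2

# Minimal usage on the XOR patterns.
rng = np.random.default_rng(0)
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)
W1 = rng.normal(scale=0.5, size=(3, 4))   # 2 inputs + bias -> 4 hidden
W2 = rng.normal(scale=0.5, size=(5, 1))   # 4 hidden + bias -> 1 output
dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
for _ in range(5000):
    W1, W2, dW1, dW2 = bp_step(x, t, W1, W2, dW1, dW2)
```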

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Abdul Hamid, N., Mohd Nawi, N., Ghazali, R., Mohd Salleh, M.N. (2011). Accelerating Learning Performance of Back Propagation Algorithm by Using Adaptive Gain Together with Adaptive Momentum and Adaptive Learning Rate on Classification Problems. In: Kim, Th., Adeli, H., Robles, R.J., Balitanas, M. (eds) Ubiquitous Computing and Multimedia Applications. UCMA 2011. Communications in Computer and Information Science, vol 151. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-20998-7_62

  • DOI: https://doi.org/10.1007/978-3-642-20998-7_62

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-20997-0

  • Online ISBN: 978-3-642-20998-7
