
RMIL/AG: A New Class of Nonlinear Conjugate Gradient for Training Back Propagation Algorithm

  • Conference paper
Recent Advances on Soft Computing and Data Mining (SCDM 2018)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 700)

Abstract

The conventional back propagation (BP) algorithm is known for several disadvantages, such as slow training, a tendency to become trapped in local minima, and sensitivity to the initial weights and bias. This paper introduces a new class of efficient second-order conjugate gradient (CG) methods for training BP, called Rivaie, Mustafa, Ismail and Leong (RMIL)/AG. RMIL/AG uses the value of an adaptive gain parameter in the activation function to modify the gradient-based search direction. The efficiency of the proposed method is verified by simulation on four classification problems. The results show that the computational efficiency of the proposed method is better than that of the conventional BP algorithm.
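For readers who want to see the mechanics, below is a minimal Python sketch of a conjugate gradient training loop built around the RMIL coefficient beta_k = g_k^T (g_k - g_{k-1}) / ||d_{k-1}||^2, together with a gain-scaled sigmoid activation. This is an illustrative sketch only, not the authors' implementation: the fixed step size, the loss_grad interface, and all function names are assumptions, and the paper's specific adaptive-gain (AG) rule for modifying the search direction is not reproduced here.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic activation with a gain parameter c: f(x) = 1 / (1 + exp(-c * x))."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def rmil_beta(g_new, g_old, d_old, eps=1e-12):
    """RMIL coefficient: beta_k = g_k^T (g_k - g_{k-1}) / ||d_{k-1}||^2.

    The eps term is a numerical safeguard added here; it is not part of
    the published formula.
    """
    return float(g_new @ (g_new - g_old)) / (float(d_old @ d_old) + eps)

def train_cg_rmil(loss_grad, w, steps=100, lr=0.1):
    """Conjugate-gradient weight updates driven by the RMIL coefficient.

    loss_grad(w) must return the gradient of the training error at w.
    A fixed step size stands in for the line search a real CG trainer
    would use.
    """
    g = loss_grad(w)
    d = -g                        # first search direction: steepest descent
    for _ in range(steps):
        w = w + lr * d            # move along the current search direction
        g_new = loss_grad(w)
        beta = rmil_beta(g_new, g, d)
        d = -g_new + beta * d     # conjugate update of the direction
        g = g_new
    return w

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w_star = train_cg_rmil(lambda w: 2.0 * w, np.array([3.0, -2.0]))
print(w_star)  # approaches the minimizer [0, 0]
```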



Acknowledgements

The authors would like to thank Universiti Tun Hussein Onn Malaysia (UTHM) and the Ministry of Higher Education (MOHE) Malaysia for financially supporting this research under the Trans-disciplinary Research Grant Scheme (TRGS) vote no. T003. This research was also supported by GATES IT Solution Sdn. Bhd. under its publication scheme.

Author information

Corresponding author

Correspondence to Nazri Mohd Nawi.


Copyright information

© 2018 Springer International Publishing AG

About this paper

Cite this paper

Basri, S.M.M., Nawi, N.M., Mamat, M., Hamid, N.A. (2018). RMIL/AG: A New Class of Nonlinear Conjugate Gradient for Training Back Propagation Algorithm. In: Ghazali, R., Deris, M., Nawi, N., Abawajy, J. (eds) Recent Advances on Soft Computing and Data Mining. SCDM 2018. Advances in Intelligent Systems and Computing, vol 700. Springer, Cham. https://doi.org/10.1007/978-3-319-72550-5_20

  • DOI: https://doi.org/10.1007/978-3-319-72550-5_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-72549-9

  • Online ISBN: 978-3-319-72550-5

  • eBook Packages: Engineering, Engineering (R0)
