The Development of Improved Back-Propagation Neural Networks Algorithm for Predicting Patients with Heart Disease

  • Conference paper
Information Computing and Applications (ICICA 2010)

Part of the book series: Lecture Notes in Computer Science (volume 6377)

Abstract

Improving the training efficiency of artificial neural network algorithms has been studied in many previous papers. This paper presents a new approach to improving the training efficiency of back-propagation neural network algorithms. The proposed algorithm (GDM/AG) adaptively modifies the gradient-based search direction by introducing a gain parameter into the activation function. It is shown that this modification significantly enhances the computational efficiency of the training process. The proposed algorithm is generic and can be incorporated into almost any gradient-based optimization process. The robustness of the proposed algorithm is demonstrated by comparing convergence rates and the effectiveness of gradient descent methods with and without the proposed modification on heart disease data.
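The abstract describes modifying the gradient-based search direction by introducing a gain parameter into the activation function. As a rough illustration only (the paper's exact update rules are not reproduced here), the sketch below trains a small network by gradient descent with momentum (GDM) while also adapting a per-layer sigmoid gain `c` by plain gradient descent on the error. The XOR data, network size, gain-update rule, and all learning rates are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def sigmoid(net, gain):
    # Sigmoid with gain c: f(net) = 1 / (1 + exp(-c * net)).
    # A larger gain steepens the activation and, through the chain rule,
    # rescales the gradient-based search direction.
    return 1.0 / (1.0 + np.exp(-gain * net))

rng = np.random.default_rng(0)

# Toy data: XOR, a standard back-propagation benchmark (illustrative only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer; weights, biases, and a per-layer gain
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4); c1 = 1.0
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1); c2 = 1.0

lr, momentum, lr_gain = 0.5, 0.9, 0.01  # assumed hyperparameters
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for epoch in range(5000):
    # Forward pass
    n1 = X @ W1 + b1; h = sigmoid(n1, c1)
    n2 = h @ W2 + b2; out = sigmoid(n2, c2)

    # Backward pass: d f/d net = c * f * (1 - f), so the gain enters
    # every delta and thereby modifies the search direction
    err = out - y
    d2 = err * c2 * out * (1 - out)
    d1 = (d2 @ W2.T) * c1 * h * (1 - h)

    # Gradient descent with momentum on weights and biases
    for p, v, g in [(W2, vW2, h.T @ d2), (b2, vb2, d2.sum(0)),
                    (W1, vW1, X.T @ d1), (b1, vb1, d1.sum(0))]:
        v *= momentum; v -= lr * g; p += v

    # Adaptive gain (assumed rule): d f/d c = net * f * (1 - f),
    # clipped to keep the activations well behaved
    c2 -= lr_gain * float(np.sum(err * n2 * out * (1 - out)))
    c1 -= lr_gain * float(np.sum((d2 @ W2.T) * n1 * h * (1 - h)))
    c1 = float(np.clip(c1, 0.1, 10.0)); c2 = float(np.clip(c2, 0.1, 10.0))

mse = float(np.mean((out - y) ** 2))
```

Setting `lr_gain = 0` recovers plain gradient descent with momentum, which makes the sketch a convenient baseline for the kind of convergence-rate comparison the abstract refers to.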




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Nawi, N.M., Ghazali, R., Salleh, M.N.M. (2010). The Development of Improved Back-Propagation Neural Networks Algorithm for Predicting Patients with Heart Disease. In: Zhu, R., Zhang, Y., Liu, B., Liu, C. (eds) Information Computing and Applications. ICICA 2010. Lecture Notes in Computer Science, vol 6377. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16167-4_41

  • DOI: https://doi.org/10.1007/978-3-642-16167-4_41

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16166-7

  • Online ISBN: 978-3-642-16167-4

  • eBook Packages: Computer Science, Computer Science (R0)
