An Improved Back Propagation Neural Network Algorithm on Classification Problems

  • Conference paper
Database Theory and Application, Bio-Science and Bio-Technology (BSBT 2010, DTA 2010)

Abstract

The back propagation algorithm is one of the most popular algorithms for training feed-forward neural networks. However, its convergence is slow, mainly because it relies on the gradient descent method. Previous research demonstrated that, in the feed-forward pass, the slope of the activation function is directly influenced by a parameter referred to as ‘gain’. This research proposes an algorithm that improves the performance of back propagation by introducing an adaptive gain of the activation function, with the gain value changing adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, and multilayer feed-forward neural networks are assessed. A physical interpretation of the relationship between the gain value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by simulation on four classification problems. The simulation results demonstrate that, in learning the patterns, the proposed method converges faster on the Wisconsin breast cancer data set with an improvement ratio of nearly 2.8, 1.76 on the diabetes problem, 65% better on the thyroid data set, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back propagation algorithm.
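The abstract gives only the high-level idea of gain adaptation. The following is a minimal sketch, in Python/NumPy, of what a gain-parameterised sigmoid and a per-node adaptive-gain update could look like. It assumes a logistic activation f(net) = 1/(1 + exp(-c·net)) and a squared-error loss for a single node; the function names, the learning rates lr_w and lr_c, and the single-node setting are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Minimal sketch (not the paper's exact algorithm) of a logistic activation
# with a per-node gain parameter c:  f(net) = 1 / (1 + exp(-c * net)).
# The gain scales the slope of the activation, which is the quantity
# adapted during training.
def sigmoid(net, gain):
    return 1.0 / (1.0 + np.exp(-gain * net))

def sigmoid_deriv(out, gain):
    # Derivative of f w.r.t. net, expressed through the output:
    # df/dnet = c * f * (1 - f)
    return gain * out * (1.0 - out)

def train_step(w, gain, x, target, lr_w=0.1, lr_c=0.01):
    """One gradient-descent step for a single sigmoid node with adaptive gain.
    lr_w and lr_c are hypothetical learning rates, not values from the paper."""
    net = np.dot(w, x)
    out = sigmoid(net, gain)
    err = target - out                       # output error (t - o)
    delta = err * sigmoid_deriv(out, gain)   # local gradient w.r.t. net
    w = w + lr_w * delta * x                 # standard weight update
    # Gain update from dE/dc = -(t - o) * o * (1 - o) * net for squared error,
    # so gradient descent moves c by lr_c * (t - o) * o * (1 - o) * net.
    gain = gain + lr_c * err * out * (1.0 - out) * net
    return w, gain, out
```

The point the sketch illustrates is that the gain c multiplies the slope of the activation, so adapting c per node changes the effective step size of each weight update; this is the relationship between gain, learning rate, and weight values that the paper analyses.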




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Nawi, N.M., Ransing, R.S., Salleh, M.N.M., Ghazali, R., Hamid, N.A. (2010). An Improved Back Propagation Neural Network Algorithm on Classification Problems. In: Zhang, Y., Cuzzocrea, A., Ma, J., Chung, Ki., Arslan, T., Song, X. (eds.) Database Theory and Application, Bio-Science and Bio-Technology (BSBT, DTA 2010). Communications in Computer and Information Science, vol. 118. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17622-7_18


  • DOI: https://doi.org/10.1007/978-3-642-17622-7_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-17621-0

  • Online ISBN: 978-3-642-17622-7

  • eBook Packages: Computer Science, Computer Science (R0)
