Artificial Neural Network Learning: A Comparative Review

Conference paper in: Methods and Applications of Artificial Intelligence (SETN 2002)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2308)

Abstract

Various neural learning procedures have been proposed by different researchers in order to adapt suitable controllable parameters of neural network architectures. These range from simple Hebbian procedures to complicated algorithms applied to individual neurons or to assemblies in a neural structure. The paper presents an organized review of various learning techniques, classified according to basic characteristics such as chronology, applicability, functionality, and stochasticity. Among the learning procedures that have been used for training generic and specific neural structures, the following will be reviewed: Hebbian-like learning (Grossberg, Sejnowski, Sutton, Bienenstock, Oja & Karhunen, Sanger, Yuille et al., Hasselmo, Kosko, Cheung & Omidvar), reinforcement learning, min-max learning, stochastic learning, genetics-based learning, and artificial life-based learning. The various learning procedures will be critically compared, and future trends will be highlighted.
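
Since the Hebbian-like family dominates the list above, a small illustration may help fix ideas. The following minimal Python sketch (ours, not from the paper) contrasts the plain Hebbian update of Hebb [26] with Oja's normalised variant [37], whose weight vector converges to the principal component of the input. The function names, learning rate, and toy data are illustrative assumptions.

```python
import numpy as np

def hebbian_step(w, x, eta=0.01):
    # Plain Hebbian rule: delta_w = eta * y * x, with y = w . x.
    # Weights grow in proportion to input/output correlation,
    # so ||w|| diverges unless it is constrained externally.
    y = w @ x
    return w + eta * y * x

def oja_step(w, x, eta=0.01):
    # Oja's rule: delta_w = eta * y * (x - y * w). The decay term
    # -eta * y^2 * w keeps ||w|| near 1, and w converges to the
    # principal eigenvector of the input covariance matrix.
    y = w @ x
    return w + eta * y * (x - y * w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Correlated 2-D toy data whose principal direction is ~(1, 1)/sqrt(2).
    M = np.array([[2.0, 1.5], [1.5, 2.0]])
    X = rng.normal(size=(5000, 2)) @ M

    w = rng.normal(size=2)
    for x in X:
        w = oja_step(w, x)

    print("learned direction:", w / np.linalg.norm(w))  # ~ +/-(0.707, 0.707)
```

The remaining families in the abstract differ chiefly in the signal that drives the weight change: a scalar critic signal in reinforcement learning, fuzzy hyperbox bounds in min-max learning, probabilistically accepted random perturbations in stochastic learning, and population-level selection in genetics- and artificial life-based learning.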

References

  1. Simpson P: Foundations of Neural Networks. Invited paper. In: Sinencio E, Lau C (eds.): Artificial Neural Networks: Paradigms, Applications and Hardware Implementations. IEEE Press, NY (1992)
  2. Cichocki A, Unbehauen R: Neural Networks for Optimization and Signal Processing. John Wiley and Sons Ltd, London (1993)
  3. Haykin S: Neural Networks: A Comprehensive Foundation. Macmillan College Publishing Co, NY (1994)
  4. Hassoun M: Fundamentals of Artificial Neural Networks. MIT Press, MA (1995)
  5. Simon H: The Sciences of the Artificial. MIT Press, Cambridge, MA (1981)
  6. Simpson P: Fuzzy Min-Max Neural Networks. Proc. of the Int. Joint Conf. on Neural Networks (1991) 1658–1669
  7. Kosko B: Neural Networks and Fuzzy Systems. Prentice Hall International, NJ (1992)
  8. Neocleous C: A Neural Network Architecture Composed of Adaptively Defined Dissimilar Single-Neurons: Applications in Engineering Design. PhD Thesis, Brunel University, UK (1997)
  9. Hopfield J, Feinstein D, Palmer R: Unlearning has a Stabilizing Effect in Collective Memories. Nature, Vol. 304 (1983) 158–159
  10. Wimbauer S, Klemmer N, van Hemmen L: Universality of Unlearning. Neural Networks, Vol. 7 (1994) 2:261–270
  11. Kruschke J, Movellan J: Benefits of Gain: Speeded Learning and Minimal Hidden Layers in Back-Propagation Networks. IEEE Trans. on Systems, Man and Cybernetics, Vol. 21 (1991) 273–280
  12. Chen C, Chang W: A Feedforward Neural Network with Function Shape Autotuning. Neural Networks, Vol. 9 (1996) 4:627–641
  13. Ruf B: Computing and Learning with Spiking Neurons: Theory and Simulations. PhD Thesis, Technische Universität Graz, Austria (1998)
  14. Trentin E: Networks with Trainable Amplitude of Activation Functions. Neural Networks, Vol. 14 (2001) 4–5:471–493
  15. Poirazi P, Neocleous C, Pattichis C, Schizas C: A Biologically Inspired Neural Network Composed of Dissimilar Single Neuron Models. Proc. of the 23rd IEEE Int. Conf. on Engineering in Medicine and Biology, Istanbul (2001)
  16. Fahlman S, Lebiere C: Fast Learning Variations on Back-Propagation: An Empirical Study. Proc. of the 1988 Connectionist Models Summer School. Morgan Kaufmann, LA (1990)
  17. Schapire R: The Strength of Weak Learnability. Machine Learning, Vol. 5 (1990) 2:197–227
  18. Freund Y: Boosting a Weak Learning Algorithm by Majority. Information and Computation, Vol. 121 (1995) 2:256–285
  19. Reed R, Marks R II: Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks. MIT Press, Cambridge, MA (1999)
  20. Leung C, Wong K, Sum P, Chan L: A Pruning Algorithm for the Recursive Least Squares Algorithm. Neural Networks, Vol. 14 (2001) 2:147–174
  21. Kirkpatrick S, Gelatt C, Vecchi M: Optimization by Simulated Annealing. Science, Vol. 220 (1983) 671–680
  22. Ackley D, Hinton G, Sejnowski T: A Learning Algorithm for Boltzmann Machines. Cognitive Science, Vol. 9 (1985) 147–169
  23. Simpson P: Foundations of Neural Networks. In: Simpson P (ed.): Neural Networks Theory, Technology, and Applications. IEEE Technology Update Series, NY (1996)
  24. Aluffi-Pentini F, Parisi V, Zirilli F: Global Optimization and Stochastic Differential Equations. J. of Optimization Theory and Applications, Vol. 47 (1985) 1–16
  25. Gelfand S, Mitter S: Recursive Stochastic Algorithms for Global Optimization in R^d. SIAM J. on Control and Optimization, Vol. 29 (1991) 999–1018
  26. Hebb D: The Organization of Behavior. John Wiley & Sons, NY (1949)
  27. Grossberg S: Some Nonlinear Networks Capable of Learning a Spatial Pattern of Arbitrary Complexity. Proc. of the National Academy of Sciences, USA, Vol. 59 (1968) 368–372
  28. Amari S: Mathematical Theory of Neural Learning. New Generation Computing, Vol. 8 (1991) 281–294
  29. Widrow B, Hoff M: Adaptive Switching Circuits. IRE WESCON Convention Record, Vol. 4 (1960) 96–104
  30. Werbos P: Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD Dissertation, Harvard University (1974)
  31. Parker D: Optimal Algorithms for Adaptive Networks: Second Order Back Propagation, Second Order Direct Propagation, and Second Order Hebbian Learning. Proc. of the IEEE 1st Int. Conf. on Neural Networks, San Diego, CA, Vol. 2 (1987) 593–600
  32. Le Cun Y: Une procédure d'apprentissage pour réseau à seuil asymétrique [A Learning Procedure for an Asymmetric Threshold Network]. Proc. of Cognitiva 85 (1985) 599–604
  33. Rumelhart D, Hinton G, McClelland J: In: McClelland J, Rumelhart D and the PDP Research Group (eds.): Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. MIT Press, MA (1986)
  34. Kohonen T: Self-Organization and Associative Memory. Springer-Verlag, NY (1984)
  35. Carpenter G, Grossberg S: Invariant Pattern Recognition and Recall by an Attentive Self-Organizing ART Architecture in a Nonstationary World. Proc. of the IEEE 1st Int. Conf. on Neural Networks, San Diego, CA, Vol. 2 (1987) 737–746
  36. Bienenstock E, Cooper L, Munro P: Theory for the Development of Neuron Selectivity: Orientation Specificity and Binocular Interaction in Visual Cortex. J. of Neuroscience, Vol. 2 (1982) 32–48
  37. Oja E: A Simplified Neuron Model as a Principal Component Analyzer. J. of Mathematical Biology, Vol. 15 (1982) 267–273
  38. Oja E, Karhunen J: On Stochastic Approximation of the Eigenvectors and Eigenvalues of the Expectation of a Random Matrix. J. of Mathematical Analysis and Applications, Vol. 104 (1985) 69–84
  39. Oja E, Ogawa H, Wangviwattana J: Principal Component Analysis by Homogeneous Neural Networks. IEICE Trans. on Information and Systems (1992) E75-D:366–382
  40. Oja E: Principal Components, Minor Components, and Linear Neural Networks. Neural Networks, Vol. 5 (1992) 927–935
  41. Sanger T: Optimal Unsupervised Learning in a Single-Layer Linear Feedforward Neural Network. Neural Networks, Vol. 2 (1989) 459–473
  42. Yuille A, Kammen D, Cohen D: Quadrature and the Development of Orientation Selective Cortical Cells by Hebb Rules. Biological Cybernetics, Vol. 61 (1989) 183–194
  43. Hasselmo M: Runaway Synaptic Modification in Models of Cortex: Implications for Alzheimer's Disease. Neural Networks, Vol. 7 (1994) 1:13–40
  44. Sejnowski T: Statistical Constraints on Synaptic Plasticity. J. of Theoretical Biology, Vol. 69 (1977) 385–389
  45. Sutton R, Barto A: Toward a Modern Theory of Adaptive Networks: Expectation and Prediction. Psychological Review, Vol. 88 (1981) 135–171
  46. Klopf A: Drive-Reinforcement Model of a Single Neuron Function: An Alternative to the Hebbian Neuron Model. In: Denker J (ed.): Proc. of the AIP Conf. on Neural Networks for Computing, NY (1986)
  47. Kosko B: Differential Hebbian Learning. In: Denker J (ed.): Proc. of the AIP Conf. on Neural Networks for Computing, NY (1986)
  48. Cheung J, Omidvar M: Mathematical Analysis of Learning Behaviour of Neuronal Models. In: Anderson D (ed.): Neural Information Processing Systems, NY (1988)
  49. Kosko B: Feedback Stability and Unsupervised Learning. Proc. of the IEEE Int. Conf. on Neural Networks, IEEE Press, San Diego, Vol. 1 (1988) 141–152
  50. Kosko B: Unsupervised Learning in Noise. IEEE Trans. on Neural Networks, Vol. 1 (1990) 1:44–57
  51. Widrow B, Gupta N, Maitra S: Punish/Reward: Learning with a Critic in Adaptive Threshold Systems. IEEE Trans. on Systems, Man, and Cybernetics, Vol. 3 (1973) 455–465
  52. Barto A, Sutton R, Anderson C: Neuronlike Adaptive Elements that can Solve Difficult Learning Control Problems. IEEE Trans. on Systems, Man and Cybernetics, Vol. 13 (1983) 834–846
  53. Williams R: Reinforcement Learning in Connectionist Networks: A Mathematical Analysis. Institute of Cognitive Science Report 8605, University of California at San Diego (1986)
  54. Barto A: Learning by Statistical Cooperation of Self-Interested Neuron-like Computing Units. Human Neurobiology, Vol. 4 (1985) 229–256
  55. Minsky M: Theory of Neural-Analog Reinforcement Systems and its Application to the Brain-Model Problem. PhD Thesis, Princeton University, NJ (1954)
  56. Minsky M, Selfridge O: Learning in Random Nets. Information Theory: 4th London Symposium, London (1961)
  57. Simpson P: Fuzzy Min-Max Classification with Neural Networks. Heuristics, Vol. 4 (1991) 7:1–9
  58. Szu H: Fast Simulated Annealing. In: Denker J (ed.): Proc. of the AIP Conf. on Neural Networks for Computing, NY (1986)
  59. Peterson C, Anderson J: A Mean Field Theory Learning Algorithm for Neural Networks. Complex Systems, Vol. 1 (1987) 995–1019
  60. Kennedy J, Eberhart R: Particle Swarm Optimization. Proc. of the IEEE Int. Conf. on Neural Networks, Perth, Australia (1995)
  61. Van den Bergh F, Engelbrecht A: Cooperative Learning in Neural Networks using Particle Swarm Optimizers. Proc. of SAICSIT 2000 (2000)

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Neocleous, C., Schizas, C. (2002). Artificial Neural Network Learning: A Comparative Review. In: Vlahavas, I.P., Spyropoulos, C.D. (eds) Methods and Applications of Artificial Intelligence. SETN 2002. Lecture Notes in Computer Science, vol 2308. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46014-4_27

  • DOI: https://doi.org/10.1007/3-540-46014-4_27

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43472-6

  • Online ISBN: 978-3-540-46014-5

  • eBook Packages: Springer Book Archive
