Abstract
Various neural learning procedures have been proposed by different researchers for adapting the controllable parameters of neural network architectures. These range from simple Hebbian procedures to complicated algorithms applied to individual neurons or neuronal assemblies within a neural structure. This paper presents an organized review of various learning techniques, classified according to basic characteristics such as chronology, applicability, functionality and stochasticity. Among the learning procedures that have been used for the training of generic and specific neural structures, and which are reviewed here, are: Hebbian-like rules (Grossberg, Sejnowski, Sutton, Bienenstock, Oja & Karhunen, Sanger, Yuille et al., Hasselmo, Kosko, Cheung & Omidvar), reinforcement learning, min-max learning, stochastic learning, genetics-based learning and artificial life-based learning. The various learning procedures are critically compared, and future trends are highlighted.
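To illustrate the flavour of the Hebbian-like rules surveyed, the sketch below contrasts the plain Hebbian update with Oja's normalised variant (Oja, 1982), under which a single linear neuron converges to the first principal component of its input. This example is not from the paper itself; the data, learning rate and dimensions are illustrative assumptions.

```python
import numpy as np

# Plain Hebb: dw = eta * y * x  (weights grow without bound).
# Oja's rule adds a decay term, dw = eta * y * (x - y * w), which keeps
# ||w|| near 1 and drives w toward the leading eigenvector of the
# input covariance matrix (i.e. the first principal component).

rng = np.random.default_rng(0)

# Zero-mean, correlated 2-D inputs (mixing matrix chosen arbitrarily)
X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.5],
                                           [0.0, 0.5]])

w = rng.normal(size=2)   # random initial weight vector
eta = 0.01               # small learning rate for stability

for x in X:
    y = w @ x                     # neuron output (linear activation)
    w += eta * y * (x - y * w)    # Oja's rule: Hebbian term minus decay

# After training, w is approximately unit length and aligned with the
# principal axis of the data.
print("||w|| =", np.linalg.norm(w))
```

With the plain Hebbian term alone (`w += eta * y * x`) the same loop diverges, which is precisely the "runaway synaptic modification" issue discussed by Hasselmo [51] and the motivation for the normalising variants of Oja [45, 48] and Sanger [49].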
References
Simpson P: Foundations of Neural Networks. Invited Paper. In: Sinencio E, Lau C: (eds.): Artificial Neural Networks: Paradigms, Applications and Hardware Implementations. IEEE Press, NY (1992)
Cichocki A, Unbehauen R: Neural Networks for Optimization and Signal Processing. John Wiley and Sons Ltd, London (1993)
Haykin S: Neural Networks: A Comprehensive Foundation. Macmillan College Publishing Co, NY (1994)
Hassoun M: Fundamentals of Artificial Neural Networks. MIT Press, MA (1995)
Simon H: The Sciences of the Artificial. MIT Press, Cambridge, MA (1981)
Simpson P: Fuzzy Min-Max Neural Networks. Proc. of the Int. Joint Conf. on Neural Networks. (1991) 1658–1669
Kosko B: Neural Networks and Fuzzy Systems. Prentice Hall International, NJ, (1992)
Neocleous C: A Neural Network Architecture Composed of Adaptively Defined Dissimilar Single-neurons: Applications in engineering design. PhD Thesis, Brunel University, UK (1997)
Hopfield J., Feinstein D, Palmer G: Unlearning has a Stabilizing Effect in Collective Memories. Nature Vol. 304 (1983) 158–159
Wimbauer S., Klemmer N., Van Hemmen L: Universality of Unlearning. Neural Networks Vol.7 (1994) (2):261–270
Kruschke J, Movellan J: Benefits of Gain. Speeded Learning and Minimal Layers in Backpropagation Networks. IEEE Trans. on Systems, Man and Cybernetics Vol.21 (1991) 273–280
Chen C, Chang W: A Feedforward Neural Network with Function Shape Autotuning. Neural Networks Vol.9 (1996) 4:627–641
Ruf B: Computing and Learning with Spiking Neurons— Theory and Simulations. PhD Thesis. Technische Universität Graz, Austria (1998)
Trentin E: Networks with Trainable Amplitude of Activation Functions. Neural Networks Vol.14 (2001) (4&5):471–493
Poirazi P., Neocleous C., Pattichis C, Schizas C: A Biologically Inspired Neural Network Composed of Dissimilar Single Neuron Models. Proc. of the 23rd IEEE Int. Conf. on Engineering in Medicine and Biology, Istanbul (2001)
Fahlman S, Lebiere C: Fast Learning Variations on Backpropagation: An Empirical Study. Proc. of the 1988 Connectionist Models Summer School. Morgan Kaufmann, LA, (1990)
Schapire R: The Strength of Weak Learnability. Machine Learning, Vol. 5 (1990) 2:197–227
Freund Y: Boosting a Weak Learning Algorithm by Majority. Information and Computation, Vol. 121 (1995) 2:256–285
Reed R., Marks R. II: Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks. MIT Press. Cambridge, MA (1999)
Leung C., Wong K., Sum P, Chan L: A Pruning Algorithm for the Recursive Least Squares Algorithm. Neural Networks, Vol.14 (2001) 2:147–174
Kirkpatrick S., Gelatt C, Vecchi M: Optimization by Simulated Annealing. Science, Vol. 220 (1983) 671–680
Ackley D., Hinton G, Sejnowski T: A Learning Algorithm for Boltzmann Machines. Cognitive Science, Vol. 9 (1985) 147–169
Simpson P: Foundations of Neural Networks. In Simpson P: (ed) Neural Networks Theory, Technology, and Applications. IEEE Technology Update Series, NY (1996)
Aluffi-Pentini F., Parisi V, Zirilli F: Global Optimization and Stochastic Differential Equations. J. on Optimization Theory and Applications, Vol. 47 (1985) 1–16
Gelfand S, Mitter S: Recursive Stochastic Algorithms for Global Optimization in R^d. SIAM J. on Control and Optimization, Vol. 29 (1991) 999–1018
Hebb D: Organization of Behavior. John Wiley & Sons, NY (1949)
Grossberg S: Some Nonlinear Networks Capable of Learning a Spatial Pattern of Arbitrary Complexity. Proc. of the National Academy of Sciences, USA, Vol. 59 (1968) 368–372
Amari S: Mathematical theory of neural learning. New Generation Computing, Vol. 8 (1991) 281–294
Widrow B, Hoff M: Adaptive Switching Circuits. IRE Western Electric Show and Convention Record. Vol. 4 (1960) 96–104
Werbos P: Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. Dissertation, Harvard University (1974)
Parker D: Optimal Algorithms for Adaptive Networks: Second Order Back Propagation, Second Order Direct Propagation, and Second Order Hebbian Learning. Proc. of the IEEE 1st Int. Conf. on Neural Networks, San Diego, CA, Vol. 2 (1987) 593–600
Le Cun Y: Une Procédure d'Apprentissage pour Réseau à Seuil Asymétrique [A Learning Procedure for Asymmetric Threshold Networks]. Cognitiva, Vol. 85 (1985) 599–604
Rumelhart D., Hinton G, McClelland J: In McClelland J. L., Rumelhart D. E. and the PDP Research Group (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition Vol. 1. Foundations. MIT Press, MA (1986)
Kohonen T: Self-organization and Associative Memories. Springer-Verlag, NY (1984)
Carpenter G, Grossberg S: Invariant Pattern Recognition and Recall by an Attentive Self-organizing ART Architecture in a Stationary World. Proc. of the IEEE 1st Int. Conf. on Neural Networks, San Diego CA, Vol. 2 (1987) 737–746
Bienenstock E., Cooper E, Munro P: Theory for the Development of Neural Selectivity: Orientation Specificity and Binocular Interaction in Visual Cortex. J. of Neuroscience, Vol. 2 (1982) 32–48
Oja E: A Simplified Neuron Model as a Principal Component Analyzer. J. Math. Biol. Vol. 15 (1982) 267–273
Oja E, Karhunen J: On Stochastic Approximation of the Eigenvectors and Eigenvalues of the Expectation of a Random Matrix. J. of Mathematical Analysis and Applications, Vol. 104 (1985) 69–84
Oja E., Ogawa H, Wangviwattana: Principal Components Analysis by Homogeneous Neural Networks. IEICE Trans. on Information and Systems. (1992) E75-D:366–382
Oja E: Principal Components, Minor Components and Linear Neural Networks. Neural Networks, Vol. 5 (1992) 927–935
Sanger T: Optimal Unsupervised Learning in a Single Layer Linear Feedforward Neural Network. Neural Networks, Vol. 2 (1989) 459–473
Yuille A., Kammen D, Cohen D: Quadrature and the Development of Orientation Selective Cortical Cells by Hebb Rules. Biological Cybernetics, Vol. 61 (1989) 183–194
Hasselmo M: Runaway Synaptic Modification in Models of Cortex: Implications for Alzheimer's Disease. Neural Networks, Vol. 7 (1994) 1:13–40
Sejnowski T: Statistical Constraints on Synaptic Plasticity. J. of Math. Biology, Vol. 64 (1977) 385–389
Sutton R, Barto A: Toward a Modern Theory of Adaptive Networks: Expectation and Prediction. Psychological Review, Vol. 88 (1981) 135–171
Klopf A: Drive Reinforcement Model of a Single Neuron Function: An Alternative to the Hebbian Neuron Model. In Denker J: (ed.) Proc. of the AIP Conf. on Neural Networks for Computing NY (1986)
Kosko B: Differential Hebbian Learning. In Denker J: (ed.) Proc. of the AIP Conf. on Neural Networks for Computing NY (1986)
Cheung J, Omidvar M: Mathematical Analysis of Learning Behaviour of Neuronal Models. In Anderson D: (ed) Neural Information Processing Systems NY (1988)
Kosko B: Feedback Stability and Unsupervised Learning. Proc. of the IEEE Int. Conf. on Neural Networks, IEEE Press, San Diego, Vol. 1 (1988) 141–152
Kosko B: Unsupervised Learning in Noise. IEEE Trans. on Neural Networks. Vol. 1 (1990) 1:44–57
Widrow B., Gupta N, Maitra S: Punish/reward: Learning with a Critic in Adaptive Threshold Systems. IEEE Trans. on Systems, Man, and Cybernetics, Vol. 3 (1973) 455–465
Barto A., Sutton R, Anderson C: Neuron-like Adaptive Elements that can solve Difficult Learning Control Problems. IEEE Trans. on Systems, Man and Cybernetics, Vol. 13 (1983) 834–846
Williams R: Reinforcement Learning in Connectionist Networks: A mathematical Analysis. University of California at San Diego. Institute of Cognitive Science Report 8605 (1986)
Barto A: Learning by Statistical Cooperation of Self-interested Neuron-like Computing Units. Human Neurobiology, Vol. 4 (1985) 229–256
Minsky M: Theory of Neural-analog Reinforcement Systems and its Application to the Brain-model Problem. PhD Thesis. Princeton University NJ (1954)
Minsky M, Selfridge O: Learning in Random Nets. Information Theory. 4th London Symposium, London (1961)
Simpson P: Fuzzy Min-max Classification with Neural Networks. Heuristics, Vol. 4 (1991) 7:1–9
Szu H: Fast Simulated Annealing. In Denker J: (ed.) Proc. of the AIP Conf. on Neural Networks for Computing NY (1986)
Peterson C, Anderson J: A Mean Field Learning Algorithm for Neural Networks. Complex Systems, Vol. 1 (1987) 995–1019
Kennedy J, Eberhart R: Particle Swarm Optimization. Proc. IEEE Int. Conf. on Neural Networks, Perth, Australia (1995)
Van den Bergh F, Engelbrecht A: Cooperative Learning in Neural Networks using Particle Swarm Optimizers. SAICSIT 2000 (2000)
© 2002 Springer-Verlag Berlin Heidelberg
Neocleous, C., Schizas, C. (2002). Artificial Neural Network Learning: A Comparative Review. In: Vlahavas, I.P., Spyropoulos, C.D. (eds) Methods and Applications of Artificial Intelligence. SETN 2002. Lecture Notes in Computer Science, vol 2308. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46014-4_27
Print ISBN: 978-3-540-43472-6
Online ISBN: 978-3-540-46014-5