Abstract
The well-known Multi-Layered Perceptron (MLP) owes much of its power to the Back-Propagation algorithm. Its remaining drawback is the long training time. The learning process can be accelerated by using the Grow-And-Learn (GAL) algorithm. In this paper, we present such a hybrid system based on the cooperation between GAL and MLP networks. The resulting system is faster and more efficient than classic Back-Propagation applied to the MLP alone.
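The core idea of a GAL-style "grow" phase can be illustrated with a minimal sketch (an assumption for illustration, not the paper's exact procedure): a sample is kept as a prototype only when the current prototype set misclassifies it under a 1-nearest-neighbour rule, so the MLP can later be trained on the much smaller prototype set.

```python
import numpy as np

def gal_select(X, y):
    """Grow phase (sketch): keep a sample as a prototype only if the
    current prototypes misclassify it (1-nearest-neighbour rule)."""
    protos_X, protos_y = [X[0]], [y[0]]
    for xi, yi in zip(X[1:], y[1:]):
        dists = [np.linalg.norm(xi - p) for p in protos_X]
        if protos_y[int(np.argmin(dists))] != yi:  # misclassified -> grow
            protos_X.append(xi)
            protos_y.append(yi)
    return np.array(protos_X), np.array(protos_y)

# Toy data: two well-separated 2-D clusters, one per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
Xs, ys = gal_select(X, y)
print(len(Xs), "prototypes kept out of", len(X))
```

Training a back-propagation MLP on `Xs, ys` instead of the full set is where the hypothetical speed-up would come from: fewer, more informative samples per epoch.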
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
Cite this paper
Cherruel, G., Solaiman, B., Autret, Y. (1995). Efficient learning in Multi-Layered Perceptron using the Grow-And-Learn algorithm. In: Pinto-Ferreira, C., Mamede, N.J. (eds) Progress in Artificial Intelligence. EPIA 1995. Lecture Notes in Computer Science, vol 990. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60428-6_34
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-60428-0
Online ISBN: 978-3-540-45595-0
eBook Packages: Springer Book Archive