Abstract
Standard error back-propagation requires output data that is scaled to lie within the active range of the activation function. We show that normalizing data to meet this requirement is not only time-consuming, but can also introduce inaccuracies into the modelling of the data. In this paper we propose the gamma learning rule for feedforward neural networks, which eliminates the need to scale output data before training. We show that the use of “self-scaling” units yields faster convergence and more accurate results than standard back-propagation applied to rescaled data.
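The gamma rule itself is defined in the full paper; as a rough illustration of the underlying idea of a self-scaling output unit, the sketch below trains a sigmoid output together with a trainable output gain and offset by gradient descent, so that raw, unscaled targets can be fitted directly. The parameter names (gamma, beta), the learning rate, and the update scheme are illustrative assumptions only, not the paper's formulation.

import numpy as np

# Illustrative sketch (not the paper's exact gamma rule): a single sigmoid
# output unit whose bounded activation s is passed through a trainable
# linear scaling y = gamma * s + beta.  Because gamma and beta are learned
# along with the weights, the raw targets need not be rescaled to (0, 1).

rng = np.random.default_rng(0)

# Toy regression data with targets well outside the sigmoid's (0, 1) range.
X = rng.uniform(-1.0, 1.0, size=(200, 3))
t = 50.0 * X[:, 0] - 20.0 * X[:, 1] + 5.0

w = rng.normal(scale=0.1, size=3)      # input weights
b = 0.0                                # bias
gamma, beta = 1.0, 0.0                 # trainable output scale and offset
lr = 0.01

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for epoch in range(2000):
    a = X @ w + b
    s = sigmoid(a)                     # bounded activation in (0, 1)
    y = gamma * s + beta               # self-scaled output
    err = y - t                        # error against unscaled targets

    # Gradients of 0.5 * mean squared error w.r.t. all trainable parameters.
    d_gamma = np.mean(err * s)
    d_beta = np.mean(err)
    d_a = err * gamma * s * (1.0 - s)  # back-propagate through the scaling
    d_w = X.T @ d_a / len(t)
    d_b = np.mean(d_a)

    gamma -= lr * d_gamma
    beta -= lr * d_beta
    w -= lr * d_w
    b -= lr * d_b

print("learned output scale and offset:", gamma, beta)
print("final mean squared error:", np.mean((gamma * sigmoid(X @ w + b) + beta - t) ** 2))

In contrast, standard back-propagation with this sigmoid unit would require mapping t into (0, 1) before training and inverting that mapping afterwards, which is the preprocessing step the gamma rule is designed to remove.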
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Engelbrecht, A.P., Cloete, I., Geldenhuys, J., Zurada, J.M. (1995). Automatic scaling using gamma learning for feedforward neural networks. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_198
DOI: https://doi.org/10.1007/3-540-59497-3_198
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-59497-0
Online ISBN: 978-3-540-49288-7