Abstract
A modified version of the back-propagation learning algorithm is introduced. This new algorithm, called epsilon-back-propagation, allows a neural network to learn faster than, or at least as well as, standard back-propagation. Experimental results are reported to compare the two methods.
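For context, standard back-propagation (the baseline the paper modifies) trains a network by gradient descent on the squared error, propagating an output delta back through the weights. Below is a minimal sketch of that baseline on the XOR task; the function name, hyperparameters, and task are illustrative assumptions, and the epsilon modification itself is not shown, since the abstract does not describe its details.

```python
import math
import random

def train_xor(hidden=3, epochs=2000, lr=0.5, seed=1):
    """Plain back-propagation on XOR (illustrative baseline, not the
    paper's epsilon variant). Returns (initial_mse, final_mse)."""
    rng = random.Random(seed)
    # hidden layer: each unit has 2 input weights + 1 bias
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    # output unit: one weight per hidden unit + 1 bias
    w_o = [rng.uniform(-1, 1) for _ in range(hidden + 1)]

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        y = sigmoid(sum(w_o[j] * h[j] for j in range(hidden)) + w_o[hidden])
        return h, y

    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    def mse():
        return sum((t - forward(x)[1]) ** 2 for x, t in data) / len(data)

    initial_error = mse()
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            # output delta: error times sigmoid derivative
            d_o = (t - y) * y * (1 - y)
            # hidden deltas: output delta propagated back through w_o
            d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            # gradient-descent weight updates
            for j in range(hidden):
                w_o[j] += lr * d_o * h[j]
                w_h[j][0] += lr * d_h[j] * x[0]
                w_h[j][1] += lr * d_h[j] * x[1]
                w_h[j][2] += lr * d_h[j]
            w_o[hidden] += lr * d_o
    return initial_error, mse()
```

Repeated passes over the training set reduce the squared error; speeding up this convergence is the kind of improvement the paper's epsilon variant targets.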
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
Cite this paper
Trejo, L.A., Sandoval, C. (1995). Improving back-propagation: Epsilon-back-propagation. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_205
Print ISBN: 978-3-540-59497-0
Online ISBN: 978-3-540-49288-7