Abstract
This paper shows how the process-optimization techniques known as Taguchi methods may be applied to the training of Artificial Neural Networks. The efficiency of training with Taguchi methods is compared against that of conventional training methods, and attention is drawn to the advantages of the Taguchi approach. It is further shown that Taguchi methods offer potential benefits in evaluating network behaviour, such as the ability to examine interactions between weights and neurons within a network.
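To illustrate the general idea (this is a hedged sketch, not the paper's exact procedure), a Taguchi orthogonal array can replace exhaustive search over candidate weight values: each network weight is treated as a factor with a small number of levels, the network is evaluated once per array row, and a marginal-means analysis picks the best level per factor. The example below uses the standard L4(2^3) array to choose three parameters (two weights and a bias, at assumed levels ±1) for a single step-activation neuron learning the AND function; the data set, level values, and error measure are assumptions made for this demonstration.

```python
# Sketch: Taguchi orthogonal-array selection of weights for a single
# neuron learning AND. The array, the +/-1 level values, and the
# misclassification-count error measure are illustrative assumptions.

L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]  # standard L4(2^3) array
LEVELS = {1: -1.0, 2: 1.0}  # candidate value for each factor level

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def error(w1, w2, b):
    """Count misclassified patterns for a step-activation neuron."""
    wrong = 0
    for (x1, x2), target in AND_DATA:
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        wrong += out != target
    return wrong

# One experiment per array row: only 4 runs instead of 2**3 = 8.
errors = [error(*(LEVELS[lv] for lv in row)) for row in L4]

# Marginal-means analysis: for each factor, average the error over the
# runs at each level and keep the level with the lower mean error.
best = []
for factor in range(3):
    means = {lv: sum(e for row, e in zip(L4, errors) if row[factor] == lv) / 2
             for lv in (1, 2)}
    best.append(LEVELS[min(means, key=means.get)])

print("predicted best weights:", best, "error:", error(*best))
```

For this toy problem the analysis recovers a zero-error setting from half the full-factorial runs; the paper's contribution is showing that the same experimental-design machinery scales to real network training and to probing weight/neuron interactions.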
Cite this article
Macleod, C., Dror, G. & Maxwell, G. Training Artificial Neural Networks Using Taguchi Methods. Artificial Intelligence Review 13, 177–184 (1999). https://doi.org/10.1023/A:1006534203575