
Training Artificial Neural Networks Using Taguchi Methods

Published in: Artificial Intelligence Review

Abstract

This paper shows how the process-optimization techniques known as Taguchi methods may be applied to the training of Artificial Neural Networks. The efficiency of training with Taguchi methods is compared against that of conventional training methods, and the advantages of the Taguchi approach are highlighted. It is further shown that Taguchi methods offer potential benefits in evaluating network behaviour, such as the ability to examine the interactions of weights and neurons within a network.
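The core idea behind a Taguchi-style approach to weight selection can be sketched as follows. This is an illustrative assumption, not the authors' actual procedure: each weight of a tiny hypothetical network is treated as a two-level "factor", a standard L8 orthogonal array schedules eight trial runs, and a main-effects analysis picks the better level for each weight. The network architecture, candidate levels, and XOR-style data set are all invented for the sketch.

```python
import numpy as np

# Standard Taguchi L8(2^7) orthogonal array: 8 runs, up to 7 two-level
# factors. Entries are level indices (0 or 1).
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

# Hypothetical network: 2 inputs -> 2 hidden -> 1 output, 6 weights,
# no biases. Each weight is a factor with two candidate levels.
levels = np.array([-1.0, 1.0])

def forward(w, x):
    """Forward pass of the 6-weight network with tanh activations."""
    h = np.tanh(np.array([w[0]*x[0] + w[1]*x[1],
                          w[2]*x[0] + w[3]*x[1]]))
    return np.tanh(w[4]*h[0] + w[5]*h[1])

# XOR-like training set (targets in the tanh output range).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([-1.0, 1.0, 1.0, -1.0])

def sse(w):
    """Sum-of-squared-errors over the training set."""
    return sum((forward(w, x) - t) ** 2 for x, t in zip(X, T))

# One Taguchi iteration: evaluate the network once per array row
# (first 6 columns map to the 6 weights), then, for each weight,
# keep the level whose runs had the lower mean error.
errors = np.array([sse(levels[row[:6]]) for row in L8])
best = np.empty(6)
for j in range(6):
    mean0 = errors[L8[:, j] == 0].mean()
    mean1 = errors[L8[:, j] == 1].mean()
    best[j] = levels[0] if mean0 < mean1 else levels[1]

print("per-run errors:", np.round(errors, 3))
print("selected weights:", best)
```

In a full Taguchi training scheme the selected levels would seed the next iteration with a narrowed pair of candidate levels per weight; the orthogonal array keeps the number of trial runs (8 here) far below the 2^6 = 64 exhaustive combinations, which is the source of the efficiency the paper compares against conventional training.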




Cite this article

Macleod, C., Dror, G. & Maxwell, G. Training Artificial Neural Networks Using Taguchi Methods. Artificial Intelligence Review 13, 177–184 (1999). https://doi.org/10.1023/A:1006534203575
