
Exploring the Relationship between Neural Network Topology and Optimal Training Set by Means of Genetic Algorithms

  • Conference paper

Abstract

In a previous paper I presented the results of optimizing Neural Network (NN) topology for the task of Natural Language Processing (NLP). In that research, all NNs were trained on a fixed 20% of the total language. In this paper I present results of optimizing a set of configuration values that have been shown to affect NN performance. For example, Elman has reported improved performance when the NNs were trained with simple sentences first and complex sentences later. On the other hand, Lawrence, Giles, and Fong have reported better results when the training data was presented in a single set. Lawrence, Giles, and Fong have also studied the effect of different learning algorithms on natural language tasks. Because GAs can search a problem space for minima without using knowledge about the problem itself, they are well suited for problems that may contain more than one possible solution. Finding different minima becomes important for real-life applications, since variables such as the number of hidden nodes, the number of hidden layers, the number of connections, and the size of the training set can all affect training and response time for NNs.
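To make the approach in the abstract concrete, the following is a minimal sketch of how a genetic algorithm might encode and evolve the configuration values mentioned (number of hidden layers, number of hidden nodes, and training-set size). This is not the author's implementation: the parameter ranges, the fitness placeholder, and all function names are assumptions for illustration only; in the paper, fitness would come from actually training a NN with the encoded topology and evaluating it on the NLP task.

```python
import random

# Hypothetical parameter ranges; the paper's actual ranges are not given here.
PARAM_RANGES = {
    "hidden_layers":   (1, 3),       # number of hidden layers
    "hidden_nodes":    (2, 64),      # nodes per hidden layer
    "train_fraction":  (0.05, 0.5),  # fraction of the language used for training
}

def random_individual():
    """Encode one candidate configuration as a dict 'chromosome'."""
    return {
        "hidden_layers":  random.randint(*PARAM_RANGES["hidden_layers"]),
        "hidden_nodes":   random.randint(*PARAM_RANGES["hidden_nodes"]),
        "train_fraction": random.uniform(*PARAM_RANGES["train_fraction"]),
    }

def fitness(ind):
    """Placeholder fitness: in the paper this would mean training a NN with the
    encoded topology on the encoded training set and scoring it on the NLP task.
    Here we simply penalize network size so the loop runs end to end."""
    cost = ind["hidden_layers"] * ind["hidden_nodes"] + 100 * ind["train_fraction"]
    return -cost  # higher is better

def crossover(a, b):
    """Uniform crossover: each gene is taken from one of the two parents."""
    return {key: random.choice((a[key], b[key])) for key in a}

def mutate(ind, rate=0.1):
    """With probability `rate`, resample each gene from its allowed range."""
    child = dict(ind)
    for key in child:
        if random.random() < rate:
            lo, hi = PARAM_RANGES[key]
            child[key] = (random.randint(lo, hi) if isinstance(lo, int)
                          else random.uniform(lo, hi))
    return child

def evolve(pop_size=20, generations=30):
    """Truncation selection: keep the better half, refill with offspring."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())
```

In practice the placeholder fitness would be replaced by the NLP training-and-evaluation loop, and further genes (for example, number of connections or the ordering of training sentences) could be appended to the chromosome without changing the rest of the loop.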



References

  1. Dávila, J. Co-evolution of neural network parameters for the task of natural language processing. Available at http://csp.gc.cuny.edu:8080/Mosaic_pages/nn_param_evol.ps. 1998.

  2. Elman, J. L. Distributed Representations, Simple Recurrent Networks, and Grammatical Structure. Machine Learning, volume 3, pp.195–224. 1991.


  3. Jain, A. Parsing Complex Sentences with Structured Connectionist Networks. Neural Computation, 3, pp.110–120. 1991.


  4. Lawrence, S., Giles, C., and Fong, S. Natural Language Grammatical Inference with Recurrent Neural Networks. Accepted for Publication, IEEE Transactions on Knowledge and Data Engineering. 1998.


  5. Schaffer, J., Caruana, R., Eshelman, L., and Das, R. A Study of Control Parameters Affecting Online Performance of Genetic Algorithms for Function Optimization. Proceedings of the Third International Conference on Genetic Algorithms. San Mateo, California. Morgan Kaufmann. 1989.


  6. Tanese, R. Distributed Genetic Algorithms. Proceedings of the Third International Conference on Genetic Algorithms. San Mateo, California. Morgan Kaufmann. 1989.




Copyright information

© 1999 Springer-Verlag Wien

About this paper

Cite this paper

Dávila, J. (1999). Exploring the Relationship between Neural Network Topology and Optimal Training Set by Means of Genetic Algorithms. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6384-9_51


  • DOI: https://doi.org/10.1007/978-3-7091-6384-9_51

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83364-3

  • Online ISBN: 978-3-7091-6384-9

  • eBook Packages: Springer Book Archive
