Abstract
In a previous paper I presented the results of optimizing Neural Network (NN) topology for the task of Natural Language Processing (NLP). In that research, all NNs were trained with a fixed 20% of the total language. In this paper I present the results of optimizing a set of configuration values that have been shown to affect NN performance. For example, Elman reported improved performance when NNs were trained with simple sentences first and complex sentences later. On the other hand, Lawrence, Giles, and Fong reported better results when the training data was presented in a single set. Lawrence, Giles, and Fong have also studied the effect of different learning algorithms on natural language tasks. Because Genetic Algorithms (GAs) can search a problem space for minima without using knowledge about the problem itself, they are well suited to problems that may contain more than one possible solution. Finding different minima becomes important for real-life applications, since variables such as the number of hidden nodes, the number of hidden layers, the number of connections, and the size of the training set can all affect training and response times for NNs.
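As a concrete illustration of how such a search could be set up, the sketch below evolves a population of NN configurations with a simple GA. This is a minimal sketch, not the paper's actual method: the genome fields (hidden nodes, hidden layers, training-set fraction), their ranges, the truncation-selection scheme, and the surrogate fitness function are all illustrative assumptions; in the real experiments, fitness would come from training an NN with that configuration on the NLP task.

```python
import random

# Hypothetical genome: the configuration values under search.
# Field names and ranges are illustrative, not taken from the paper.
GENE_RANGES = {
    "hidden_nodes":  (2, 64),      # nodes per hidden layer
    "hidden_layers": (1, 3),       # number of hidden layers
    "train_frac":    (0.05, 0.5),  # fraction of the language used for training
}

def random_genome():
    return {
        "hidden_nodes":  random.randint(*GENE_RANGES["hidden_nodes"]),
        "hidden_layers": random.randint(*GENE_RANGES["hidden_layers"]),
        "train_frac":    random.uniform(*GENE_RANGES["train_frac"]),
    }

def fitness(genome):
    # Stand-in for the real evaluation (train an NN with this configuration
    # and return its validation performance). A toy surrogate keeps the
    # sketch self-contained: it rewards training data and penalizes size.
    cost = genome["hidden_nodes"] * genome["hidden_layers"]
    return genome["train_frac"] * 10 - 0.01 * cost

def crossover(a, b):
    # Uniform crossover: each gene is inherited from either parent.
    return {k: random.choice((a[k], b[k])) for k in a}

def mutate(genome, rate=0.1):
    # Resample each gene within its range with a small probability.
    g = dict(genome)
    for k, (lo, hi) in GENE_RANGES.items():
        if random.random() < rate:
            g[k] = random.randint(lo, hi) if isinstance(lo, int) else random.uniform(lo, hi)
    return g

def evolve(pop_size=20, generations=30):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]  # truncation selection: keep the best half
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(evolve())
```

Because the GA only needs a fitness score per configuration, the same loop applies unchanged whether fitness measures accuracy, training time, or some combination, which is what makes it suitable when several distinct minima may exist.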
References
Dávila, J. Co-evolution of neural network parameters for the task of natural language processing. Available at http://csp.gc.cuny.edu:8080/Mosaic_pages/nn_param_evol.ps. 1998.
Elman, J. L. Distributed Representations, Simple Recurrent Networks, and Grammatical Structure. Machine Learning, volume 3, pp.195–224. 1991.
Jain, A. Parsing Complex Sentences with Structured Connectionist Networks. Neural Computation, 3, pp.110–120. 1991.
Lawrence, S., Giles, C., and Fong, S. Natural Language Grammatical Inference with Recurrent Neural Networks. Accepted for Publication, IEEE Transactions on Knowledge and Data Engineering. 1998.
Schaffer, J., Caruana, R., Eshelman, L., and Das, R. A Study of Control Parameters Affecting Online Performance of Genetic Algorithms for Function Optimization. Proceedings of the Third International Conference on Genetic Algorithms. San Mateo, California. Morgan Kaufmann. 1989.
Tanese, R. Distributed Genetic Algorithms. Proceedings of the Third International Conference on Genetic Algorithms. San Mateo, California. Morgan Kaufmann. 1989.
Cite this paper
Dávila, J. (1999). Exploring the Relationship between Neural Network Topology and Optimal Training Set by Means of Genetic Algorithms. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6384-9_51