Variable Hidden Layer Sizing in Elman Recurrent Neuro-Evolution

Abstract

The relationship between the size of the hidden layer in a neural network and its performance in a particular domain is currently an open research issue. Often, the number of neurons in the hidden layer is chosen empirically and then fixed for the training of the network. Fixing the size of the hidden layer limits an inherent strength of neural networks: the ability to generalize experiences from one situation to another, to adapt to new situations, and to overcome the "brittleness" often associated with traditional artificial intelligence techniques. This paper proposes an evolutionary algorithm that searches for the network size along with the weights and connections between neurons.
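The abstract does not give the genome encoding, so the following Python sketch only illustrates the general idea of evolving the hidden layer size alongside the weights: each hidden neuron carries its own weight vectors, so a structural mutation can add or remove a neuron without disturbing the rest of the genome. All names and rates here (make_neuron, mutate_size, p_add, p_del) are hypothetical.

```python
import random

# Hypothetical variable-size genome: one record per hidden neuron,
# holding that neuron's input-side and output-side weights.
def make_neuron(n_in, n_out):
    return {"w_in":  [random.gauss(0, 1) for _ in range(n_in)],
            "w_out": [random.gauss(0, 1) for _ in range(n_out)]}

def mutate_size(genome, n_in, n_out, p_add=0.05, p_del=0.05, min_size=2):
    """Structural mutation: occasionally grow or shrink the hidden layer."""
    if random.random() < p_add:
        genome.append(make_neuron(n_in, n_out))          # add a neuron
    if random.random() < p_del and len(genome) > min_size:
        genome.pop(random.randrange(len(genome)))        # remove a neuron
    return genome

# Example: start with 5 hidden neurons for a 4-input, 2-output network.
genome = [make_neuron(4, 2) for _ in range(5)]
genome = mutate_size(genome, 4, 2)
```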

This research builds upon SANE, the neuro-evolution tool developed by David Moriarty. SANE evolves neurons and networks simultaneously; this work modifies it in several ways, including varying the hidden layer size and evolving Elman recurrent neural networks for non-Markovian tasks. These modifications produce better-performing and more consistent networks, and do so faster and more efficiently.
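For reference, an Elman network differs from a feed-forward one only in that the previous hidden activations are copied into context units and fed back as extra inputs, which is what lets it retain state across a non-Markovian input sequence. Below is a minimal sketch of the forward pass; the class name, weight scale, and tanh activation are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

class ElmanNetwork:
    """Minimal Elman recurrent network (Elman, 1990)."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_ih = rng.normal(0.0, 0.1, (n_hidden, n_in))      # input   -> hidden
        self.W_ch = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context -> hidden
        self.W_ho = rng.normal(0.0, 0.1, (n_out, n_hidden))     # hidden  -> output
        self.context = np.zeros(n_hidden)                       # copy of last hidden state

    def step(self, x):
        """One time step; the context units carry memory between calls."""
        h = np.tanh(self.W_ih @ x + self.W_ch @ self.context)
        self.context = h.copy()
        return self.W_ho @ h
```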

SANE, modified with variable network sizing, learns to play a modified casino blackjack and develops a successful card-counting strategy. The contributions of this research are performance increases of up to 8.3% over fixed hidden layer size models, a reduction in hidden layer processing time of almost 10%, and a faster, more autonomous approach to scaling neuro-evolutionary techniques to larger and more difficult problems.
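The exact rules of the paper's modified casino blackjack are not given in the abstract, so the sketch below only illustrates how a candidate network might be scored: play many hands with the network as the hit/stand policy and use average winnings per hand as the fitness. Note that drawing with replacement, as here, precludes real card counting; that would require a depleting shoe plus the network's recurrent memory. All rules and names below are assumptions.

```python
import random

CARDS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # 11 = ace

def hand_value(hand):
    total, aces = sum(hand), hand.count(11)
    while total > 21 and aces:   # demote aces from 11 to 1 as needed
        total, aces = total - 10, aces - 1
    return total

def fitness(policy, n_hands=1000):
    """policy(player_total, dealer_up_card) -> True to hit, False to stand."""
    winnings = 0
    for _ in range(n_hands):
        player = [random.choice(CARDS), random.choice(CARDS)]
        dealer = [random.choice(CARDS), random.choice(CARDS)]
        while hand_value(player) < 21 and policy(hand_value(player), dealer[0]):
            player.append(random.choice(CARDS))
        if hand_value(player) > 21:
            winnings -= 1                       # player busts, loses the bet
            continue
        while hand_value(dealer) < 17:          # dealer stands on all 17s
            dealer.append(random.choice(CARDS))
        p, d = hand_value(player), hand_value(dealer)
        winnings += 1 if (d > 21 or p > d) else (-1 if p < d else 0)
    return winnings / n_hands
```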

References

  1. M. Minsky, "Steps toward artificial intelligence," Computers and Thought, edited by E. Feigenbaum and J. Feldman, McGraw-Hill, pp. 406–450, 1963.

  2. D. Moriarty, "Symbiotic evolution of neural networks in sequential decision tasks," Ph.D. Dissertation, Dept. of Computer Science, University of Texas at Austin, 1997.

  3. B. Fullmer and R. Miikkulainen, "Using marker-based genetic encoding of neural networks to evolve finite-state behavior," in Proceedings of the First European Conference on Artificial Life, 1992, pp. 255–262.

  4. D. Moriarty and R. Miikkulainen, "Discovering complex Othello strategies through evolutionary neural networks," Connection Science, vol. 7, no. 3, pp. 195–209, 1995.

  5. D. Whitley, K. Mathias, and P. Fitzhorn, "Delta-coding: An iterative search strategy for genetic algorithms," in Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann: Los Altos, CA, 1991.

  6. M.A. Potter and K. De Jong, "Evolving neural networks with collaborative species," Navy Center for Applied Research in Artificial Intelligence, Technical Report, 1996.

  7. L. Patnaik and S. Mandavilli, "Adaptation in genetic algorithms," Genetic Algorithms for Pattern Recognition, CRC Press, 1996.

  8. N. Richards, D. Moriarty, and R. Miikkulainen, "Evolving neural networks to play Go," Applied Intelligence, vol. 8, no. 1, 1998.

  9. F. Gomez and R. Miikkulainen, "Solving non-Markovian control tasks with neuro-evolution," Submitted to the International Conference on Machine Learning, 1998.

  10. F. Gomez and R. Miikkulainen, "Incremental evolution of complex general behavior," Adaptive Behavior, vol. 5, pp. 317–342, 1997.

  11. J.L. Elman, "Finding structure in time," Cognitive Science, vol. 14, pp. 179–211, 1990.

  12. E. Baum and D. Haussler, "What size net gives valid generalization?," Neural Computation, vol. 1, no. 1, pp. 151–160, 1989.

  13. M. Jones, "Using recurrent networks for dimensionality reduction," AI Technical Report 1396, Massachusetts Institute of Technology, 1992.

  14. S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Prentice-Hall, 1995.

  15. L. Humble and C. Cooper, The World's Greatest Blackjack Book, Doubleday, 1980.

  16. S. Romaniuk, "Learning to learn with evolutionary growth perceptrons," Genetic Algorithms for Pattern Recognition, CRC Press, 1996.

Cite this article

Kaikhah, K., Garlick, R. Variable Hidden Layer Sizing in Elman Recurrent Neuro-Evolution. Applied Intelligence 12, 193–205 (2000). https://doi.org/10.1023/A:1008315023738
