
Genetic synthesis of discrete-time recurrent neural network

Conference paper
New Trends in Neural Computation (IWANN 1993)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 686)

Abstract

In this paper, we propose a genetic model that optimizes both the network architecture and the connection weights of discrete-time recurrent neural networks within a single evolutionary process. Empirical studies show that the model can efficiently generate an appropriate network size and topology for small applications. We evaluate it on two experiments: a parity function and a finite state machine for the detection of sequences.
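
The paper itself details the encoding and the genetic operators; as a rough illustration only, the sketch below shows one way such a scheme could look: a genetic algorithm that evolves both a binary connectivity mask (the topology) and real-valued weights of a small discrete-time recurrent network, scored on the sequential parity task mentioned in the abstract. Every name, operator, and hyperparameter here is an assumption for illustration, not the authors' actual model.

```python
# Illustrative sketch only (not the authors' model): a genetic algorithm that
# evolves both the connectivity mask (topology) and the weights of a small
# discrete-time recurrent network, scored on a sequential parity task.
import numpy as np

rng = np.random.default_rng(0)
N_UNITS = 4          # recurrent units (assumed size)
POP, GENS = 60, 200  # assumed GA settings
SEQ_LEN, N_SEQS = 8, 32

def random_genome():
    # Genome = real-valued weight matrix + binary mask deciding which connections exist.
    w = rng.normal(0, 1, (N_UNITS, N_UNITS + 2))    # +1 input, +1 bias column
    mask = rng.integers(0, 2, w.shape)
    return w, mask

def run_net(genome, bits):
    w, mask = genome
    ww = w * mask
    h = np.zeros(N_UNITS)
    for b in bits:
        x = np.concatenate([h, [b, 1.0]])            # previous state, input bit, bias
        h = np.tanh(ww @ x)                          # discrete-time state update
    return h[0] > 0                                  # unit 0 reads out the parity

def fitness(genome, data):
    correct = sum(run_net(genome, bits) == target for bits, target in data)
    # Small penalty on connection count encourages compact topologies.
    return correct - 0.01 * genome[1].sum()

def mutate(genome):
    w, mask = genome
    w = w + rng.normal(0, 0.2, w.shape) * (rng.random(w.shape) < 0.1)
    mask = np.where(rng.random(mask.shape) < 0.02, 1 - mask, mask)
    return w, mask

# Training data: random bit strings labelled with their parity.
data = []
for _ in range(N_SEQS):
    bits = rng.integers(0, 2, SEQ_LEN)
    data.append((bits, bool(bits.sum() % 2)))

pop = [random_genome() for _ in range(POP)]
for _ in range(GENS):
    scored = sorted(pop, key=lambda gn: fitness(gn, data), reverse=True)
    pop = scored[:POP // 2] + [mutate(p) for p in scored[:POP // 2]]
print("best fitness:", fitness(pop[0], data), "of", N_SEQS)
```

Pairing a weight matrix with a mask lets mutation add or prune connections independently of weight changes, which is one common way to let an evolutionary run discover a compact topology as well as a working set of weights.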



Editor information

José Mira, Joan Cabestany, Alberto Prieto


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Marín, F.J., Sandoval, F. (1993). Genetic synthesis of discrete-time recurrent neural network. In: Mira, J., Cabestany, J., Prieto, A. (eds) New Trends in Neural Computation. IWANN 1993. Lecture Notes in Computer Science, vol 686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56798-4_144

  • DOI: https://doi.org/10.1007/3-540-56798-4_144

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-56798-1

  • Online ISBN: 978-3-540-47741-9

