Forward-backward building blocks for evolving neural networks with intrinsic learning behaviours

  • Neural Nets Simulation, Emulation and Implementation
  • Conference paper
Biological and Artificial Computation: From Neuroscience to Technology (IWANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1240)

Abstract

This paper describes the forward-backward module: a simple building block that allows the evolution of neural networks with intrinsic supervised learning ability. This expands the range of networks that can be efficiently evolved compared with previous approaches, and also makes the networks invertible, i.e., once a network has been evolved for a given problem domain and trained on a particular dataset, it can be run backwards to observe what kind of mapping has been learned, or for use in control problems. A demonstration is given of the kind of self-training networks that could be evolved.
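The abstract's three ingredients — a forward pass, intrinsic supervised learning, and a backward (inverse) pass — can be illustrated with a minimal sketch. Note that this is not the authors' construction: the linear map, the delta-rule update, and the pseudoinverse-based inversion below are all illustrative assumptions standing in for the paper's actual forward-backward module.

```python
import numpy as np


class ForwardBackwardModule:
    """Hypothetical sketch of a forward-backward building block.

    One object maps inputs forward, adapts its own weights from a
    supervised error signal, and can be run backwards to invert the
    mapping it has learned.
    """

    def __init__(self, n_in, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
        self.lr = lr

    def forward(self, x):
        # Forward pass: a plain linear map (a real module might
        # include a squashing nonlinearity).
        return self.W @ x

    def train_step(self, x, target):
        # "Intrinsic" supervised learning: a simple delta rule driven
        # by the forward error; no external training algorithm needed.
        err = target - self.forward(x)
        self.W += self.lr * np.outer(err, x)
        return float(np.mean(err ** 2))

    def backward(self, y):
        # Run the module backwards via the pseudoinverse of the learned
        # weights, recovering an input consistent with the given output.
        return np.linalg.pinv(self.W) @ y
```

After training, `backward` lets one probe the learned mapping from the output side — the inversion use the abstract mentions for inspecting what was learned or for control problems.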


Editor information

José Mira, Roberto Moreno-Díaz, Joan Cabestany

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lucas, S.M. (1997). Forward-backward building blocks for evolving neural networks with intrinsic learning behaviours. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032531

  • DOI: https://doi.org/10.1007/BFb0032531

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63047-0

  • Online ISBN: 978-3-540-69074-0

  • eBook Packages: Springer Book Archive
