
Introducing Synaptic Delays in the NEAT Algorithm to Improve Modelling in Cognitive Robotics

Published in: Neural Processing Letters

Abstract

This paper describes and tests an approach for improving the temporal processing capabilities of the neuroevolution of augmenting topologies (NEAT) algorithm. NEAT is quite popular within the robotics community because it produces trained neural networks without requiring their size and topology to be determined a priori. Its main drawback is that, even though it can implement recurrent synaptic connections, which allow it to perform some time-related processing tasks, its capabilities in this regard are rather limited, especially when dealing with precise time-dependent phenomena: NEAT’s ability to capture the dynamics underlying complex time series still has considerable room for improvement. To address this issue, the paper describes a new implementation of the NEAT algorithm that generates artificial neural networks (ANNs) with trainable time-delayed synapses in addition to its previous capabilities. We show that this approach, called \(\uptau \)-NEAT, improves the behavior of the resulting neural networks when dealing with complex time-related processes. Several examples are presented, covering both the generation of ANNs that reproduce complex theoretical signals, such as chaotic series, and real data series, such as the monthly number of international airline passengers or monthly \(\hbox {CO}_{2}\) concentrations. In these examples, \(\uptau \)-NEAT clearly outperforms the traditional NEAT algorithm. A final example of the integration of this approach within a robot cognitive mechanism is also presented, showing the clear improvements it can provide in the modeling required by many cognitive processes.
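The core idea the abstract describes is a synapse that carries a trainable time delay in addition to a weight, so that a node's input at step \(t\) combines values the presynaptic nodes emitted several steps earlier. The following is a minimal illustrative sketch of such a delayed-synapse node, not the authors' implementation; the class name and the integer-steps representation of the delays are assumptions made for the example.

```python
from collections import deque


class DelayedSynapseNode:
    """Toy node whose incoming synapses each carry a weight and an
    integer time delay (in discrete steps), illustrating the kind of
    time-delayed connection that tau-NEAT evolves alongside weights."""

    def __init__(self, weights, delays):
        assert len(weights) == len(delays)
        self.weights = weights
        # One history buffer per input, sized so that the oldest
        # stored value is exactly x(t - delay); pre-filled with zeros.
        self.buffers = [deque([0.0] * (d + 1), maxlen=d + 1) for d in delays]

    def step(self, inputs):
        """Push the current input values and return the delayed weighted sum."""
        total = 0.0
        for x, w, buf in zip(inputs, self.weights, self.buffers):
            buf.append(x)        # newest value enters on the right
            total += w * buf[0]  # oldest value is x(t - delay)
        return total
```

With a single input, weight 1.0 and delay 2, the node simply echoes its input stream two steps late (with zeros before any input has propagated), which is the behavior a pure delay line should produce.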





Author information

Corresponding author: R. J. Duro.


About this article


Cite this article

Caamaño, P., Salgado, R., Bellas, F. et al. Introducing Synaptic Delays in the NEAT Algorithm to Improve Modelling in Cognitive Robotics. Neural Process Lett 43, 479–504 (2016). https://doi.org/10.1007/s11063-015-9426-5
