
Reinforcement Learning in MirrorBot

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3696)

Abstract

For this special session on EU projects in the area of NeuroIT, we review the progress of the MirrorBot project, with particular emphasis on its relation to reinforcement learning and on future perspectives. Models inspired by mirror neurons in the cortex, besides enabling a system to understand its own actions, also help to mitigate the curse-of-dimensionality problem in reinforcement learning. Reinforcement learning, which is primarily linked to the basal ganglia, is a powerful method for teaching an agent such as a robot a goal-directed action strategy. Its main limitation is that the perceived situation has to be mapped to a state space, which grows exponentially with the input dimensionality. Cortex-inspired computation can alleviate this problem by pre-processing sensory information and by supplying motor primitives that can act as modules within a superordinate reinforcement learning scheme.
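The interplay the abstract describes can be illustrated with a toy sketch. The code below is not the authors' model; it is a minimal tabular Q-learning loop in which a hypothetical `preprocess` function stands in for cortex-like pre-processing, collapsing a 100-dimensional sensory observation into one of four discrete states so that a lookup-table learner stays tractable. The environment, reward scheme, and all function names are invented for illustration.

```python
import random

random.seed(0)

N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def preprocess(observation):
    """Hypothetical feature extractor: map a 100-d sensory vector onto a
    coarse discrete state. This is the role the abstract assigns to
    cortex-inspired pre-processing -- without it, a tabular learner would
    need a state for every distinct raw observation."""
    return int(sum(observation) > 0) * 2 + int(observation[0] > 0)

def step(state, action):
    """Toy environment: only action 1 taken in state 3 yields reward."""
    reward = 1.0 if (state == 3 and action == 1) else 0.0
    next_obs = [random.uniform(-1, 1) for _ in range(100)]
    return reward, next_obs

# Q-table over the *compressed* state space: 4 x 2 instead of
# exponentially many raw-observation states.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

obs = [random.uniform(-1, 1) for _ in range(100)]
for _ in range(5000):
    s = preprocess(obs)
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        a = random.randrange(N_ACTIONS)
    else:
        a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
    r, obs = step(s, a)
    s2 = preprocess(obs)
    # standard Q-learning update
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])

# After learning, the rewarded action should dominate in state 3.
print(Q[3][1] > Q[3][0])
```

In the same spirit, the motor primitives mentioned in the abstract would replace the two elementary actions here with whole pre-learned behaviours, so the superordinate learner chooses among modules rather than among low-level motor commands.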




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Weber, C., Muse, D., Elshaw, M., Wermter, S. (2005). Reinforcement Learning in MirrorBot. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds) Artificial Neural Networks: Biological Inspirations – ICANN 2005. ICANN 2005. Lecture Notes in Computer Science, vol 3696. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550822_48

Download citation

  • DOI: https://doi.org/10.1007/11550822_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28752-0

  • Online ISBN: 978-3-540-28754-4

  • eBook Packages: Computer Science (R0)
