Abstract
For this special session on EU projects in the area of NeuroIT, we review the progress of the MirrorBot project, with special emphasis on its relation to reinforcement learning and on future perspectives. Models inspired by mirror neurons in the cortex not only enable a system to understand its own actions, but also help to address the curse of dimensionality in reinforcement learning. Reinforcement learning, which is primarily linked to the basal ganglia, is a powerful method for teaching an agent such as a robot a goal-directed action strategy. Its main limitation is that the perceived situation must be mapped to a state space, which grows exponentially with the input dimensionality. Cortex-inspired computation can alleviate this problem by pre-processing sensory information and by supplying motor primitives that serve as modules for a superordinate reinforcement learning scheme.
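The state-space argument above can be made concrete with a small sketch. The code below is illustrative only and does not reproduce the MirrorBot implementation: `table_size` shows how a tabular Q-learning state space explodes with input dimensionality, and a hypothetical `preprocess` stage (a fixed nearest-prototype mapping, standing in loosely for a cortex-like feature map such as a self-organizing map) collapses a raw sensor vector to one of a few module activations, so the superordinate RL table stays tractable. All names and sizes here are assumptions chosen for illustration.

```python
import numpy as np

def table_size(n_dims, bins_per_dim):
    """Number of discrete states when each of n_dims inputs is
    discretized into bins_per_dim bins: grows exponentially in n_dims."""
    return bins_per_dim ** n_dims

# e.g. 8 sensor dimensions with 10 bins each -> 10^8 tabular states
raw_states = table_size(8, 10)

def preprocess(sensor_vec, prototypes):
    """Map a raw sensor vector to the index of its best-matching
    prototype (a crude stand-in for cortical pre-processing)."""
    dists = np.linalg.norm(prototypes - sensor_vec, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(16, 8))          # 16 feature prototypes
state = preprocess(rng.normal(size=8), prototypes)

# Q-table over the reduced space: 16 states x 4 motor primitives,
# instead of 10^8 raw states.
Q = np.zeros((16, 4))
```

Under these (assumed) numbers, pre-processing shrinks the table the learner must fill from 10^8 rows to 16, which is the sense in which cortical modules make the reinforcement learning problem tractable.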
© 2005 Springer-Verlag Berlin Heidelberg
Weber, C., Muse, D., Elshaw, M., Wermter, S. (2005). Reinforcement Learning in MirrorBot. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds) Artificial Neural Networks: Biological Inspirations – ICANN 2005. ICANN 2005. Lecture Notes in Computer Science, vol 3696. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550822_48
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28752-0
Online ISBN: 978-3-540-28754-4