Abstract
Recently, much attention has been focused on using reinforcement learning (RL) to design robot controllers. However, as the state spaces of these robots become continuous and high-dimensional, the learning process becomes time-consuming. To apply RL to the design of controllers for such complicated systems, not only adaptability but also computational efficiency must be taken into account. In this paper, we introduce an adaptive state recruitment strategy that enables a learning robot to rearrange its state space conveniently according to the task complexity and the progress of learning.
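To make the idea of state recruitment concrete, the following is a minimal sketch of a Q-learner whose discrete state set grows on demand: a new state prototype is recruited whenever an observation falls outside a novelty threshold of all existing prototypes, in the spirit of resource-allocating networks (Platt, 1991). The class name, thresholds, and update rule here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np


class AdaptiveStateQLearner:
    """Tabular Q-learning over a state space that grows on demand.

    A new state prototype is recruited whenever an observation lies
    farther than `novelty_threshold` from every existing prototype.
    This is an illustrative sketch, not the method of the paper.
    """

    def __init__(self, n_actions, novelty_threshold=0.5, alpha=0.1, gamma=0.9):
        self.n_actions = n_actions
        self.novelty_threshold = novelty_threshold
        self.alpha, self.gamma = alpha, gamma
        self.prototypes = []  # recruited state centres (one per state)
        self.q = []           # one row of Q-values per prototype

    def state_index(self, obs):
        """Map an observation to a prototype index, recruiting if needed."""
        obs = np.asarray(obs, dtype=float)
        if self.prototypes:
            dists = [np.linalg.norm(obs - p) for p in self.prototypes]
            i = int(np.argmin(dists))
            if dists[i] <= self.novelty_threshold:
                return i
        # Observation is novel: recruit a new state with zeroed Q-values.
        self.prototypes.append(obs)
        self.q.append(np.zeros(self.n_actions))
        return len(self.prototypes) - 1

    def update(self, obs, action, reward, next_obs):
        """One temporal-difference update of Q(s, a)."""
        s, s2 = self.state_index(obs), self.state_index(next_obs)
        td = reward + self.gamma * self.q[s2].max() - self.q[s][action]
        self.q[s][action] += self.alpha * td
        return s
```

With a small threshold the state space stays fine-grained (many prototypes for a complex task); with a large one it stays coarse, which mirrors the trade-off between adaptability and computational cost discussed above.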
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Kondo, T., Ito, K. (2004). A Study on Designing Robot Controllers by Using Reinforcement Learning with Evolutionary State Recruitment Strategy. In: Ijspeert, A.J., Murata, M., Wakamiya, N. (eds) Biologically Inspired Approaches to Advanced Information Technology. BioADIT 2004. Lecture Notes in Computer Science, vol 3141. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-27835-1_19
DOI: https://doi.org/10.1007/978-3-540-27835-1_19
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-23339-8
Online ISBN: 978-3-540-27835-1