Abstract
Subgoal learning is investigated as a means of building a goal-oriented behavior control rule with which a mobile robot can achieve a task goal from any starting task configuration. For this, states of interest are first extracted from successful task episodes, where the average occurrence frequency of states is used as a threshold to identify them. Subgoals are then learned by clustering similar features of state-transition tuples, where the features used in clustering are derived from the changes of states within those tuples. A goal-oriented behavior control rule is constructed such that proper actions are generated sequentially and/or reactively from the subgoals according to the context of states. To demonstrate the validity of the proposed subgoal learning and the resulting goal-oriented control rule of mobile robots, a Box-Pushing-Into-a-Goal (BPIG) task is simulated and tested in experiments.
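The two learning steps outlined above (threshold-based extraction of states of interest, then grouping of transition features) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes discrete symbolic states, all function names and the episode representation are hypothetical, and the final clustering of similar features is crudely approximated by grouping identical state changes.

```python
from collections import Counter

def states_of_interest(episodes):
    """Step 1 (sketch): a state qualifies as 'of interest' when its
    occurrence count across successful episodes exceeds the average
    occurrence frequency, which serves as the threshold."""
    counts = Counter(s for ep in episodes for s in ep)
    threshold = sum(counts.values()) / len(counts)  # average frequency
    return {s for s, c in counts.items() if c > threshold}

def transition_features(episodes, interesting):
    """Step 2 (sketch): build features from state-transition tuples;
    each feature records a change between consecutive states of interest."""
    feats = []
    for ep in episodes:
        seq = [s for s in ep if s in interesting]
        feats.extend((a, b) for a, b in zip(seq, seq[1:]) if a != b)
    return feats

def subgoals(features):
    """Stand-in for the clustering step: identical change features are
    grouped, and each group's transition target is taken as a subgoal."""
    return {b for _, b in features}
```

As a usage example, three hypothetical successful episodes sharing the states `s0 -> s2 -> g` would yield `{s0, s2, g}` as states of interest and `{s2, g}` as subgoal states; a real implementation would cluster continuous feature vectors rather than exact symbols.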
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Lee, S.H., Lee, S., Suh, I.H., Chung, W.K. (2009). Learning of Subgoals for Goal-Oriented Behavior Control of Mobile Robots. In: Köppen, M., Kasabov, N., Coghill, G. (eds) Advances in Neuro-Information Processing. ICONIP 2008. Lecture Notes in Computer Science, vol 5506. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02490-0_8
Print ISBN: 978-3-642-02489-4
Online ISBN: 978-3-642-02490-0