Abstract
This paper presents a method for learning obstacle-avoidance behavior in an unknown environment. The robot learns this behavior by actively seeking collisions with potential obstacles. The field of view (FOV) of the robot's sensors is partitioned into five neighboring sectors, each associated with an agent that applies Q-learning over fuzzy states codified as distance notions. The five agents recommend actions independently, and an arbitration mechanism generates the final action. After hundreds of collisions, the robot achieves collision-free navigation with a high success rate by integrating goal information with the learned obstacle-avoidance behavior. Simulation results verify the effectiveness of the proposed method.
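The abstract's architecture — five sector agents, each running Q-learning over fuzzy distance states, with an arbiter producing the final action — can be sketched as below. This is a minimal illustration, not the paper's implementation: the fuzzy membership thresholds, the action set, and the nearest-obstacle arbitration rule are all hypothetical stand-ins, since the abstract does not specify them.

```python
import random
from collections import defaultdict

ACTIONS = ["turn_left", "go_straight", "turn_right"]

def fuzzy_state(distance, near=0.5, far=2.0):
    """Codify a raw sensor distance into a coarse fuzzy-distance notion.
    The thresholds here are illustrative, not from the paper."""
    if distance < near:
        return "NEAR"
    if distance < far:
        return "MEDIUM"
    return "FAR"

class SectorAgent:
    """One Q-learning agent per FOV sector (five sectors in the paper)."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> Q-value
        self.alpha, self.gamma, self.eps = alpha, gamma, epsilon

    def recommend(self, state):
        if random.random() < self.eps:          # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s_next):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(s_next, b)] for b in ACTIONS)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next
                                        - self.q[(s, a)])

def arbitrate(recommendations, distances):
    """Hypothetical arbitration rule: defer to the agent whose sector
    holds the nearest obstacle, as it is most at risk of collision."""
    most_urgent = min(range(len(distances)), key=lambda i: distances[i])
    return recommendations[most_urgent]

# Example step with five sector readings (in meters, illustrative).
agents = [SectorAgent() for _ in range(5)]
distances = [1.8, 0.4, 2.5, 3.0, 1.2]
states = [fuzzy_state(d) for d in distances]
recs = [agent.recommend(s) for agent, s in zip(agents, states)]
action = arbitrate(recs, distances)
```

During learning, a collision would yield a negative reward fed to each agent's `update`, which is how the collision-seeking phase gradually shapes the Q-tables toward avoidance.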
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Lin, M., Zhu, J., Sun, Z. (2004). Learning Obstacle Avoidance Behavior Using Multi-agent Learning with Fuzzy States. In: Bussler, C., Fensel, D. (eds) Artificial Intelligence: Methodology, Systems, and Applications. AIMSA 2004. Lecture Notes in Computer Science(), vol 3192. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30106-6_40
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-22959-9
Online ISBN: 978-3-540-30106-6