2010 Special Issue
Comparison of behavior-based and planning techniques on the small robot maze exploration problem
Introduction
In recent years, small mobile robotic agents have become very popular, especially for testing various control algorithms in a laboratory environment. The affordability of hardware platforms such as the E-puck, together with mature software simulation environments, makes small robots a commonly used experimental platform. It is useful to note that such a platform has special properties differentiating it from other robotic agents. Small mobile robots are usually not equipped with a suite of sophisticated sensors, nor do they have the computational power to execute time-demanding algorithms.
While developing a control and localization framework for such a platform, one has two distinct options. It is possible to use a reactive type of robot control (Slušný, Neruda, & Vidnerová, 2008b) utilizing adaptive algorithms that allow the robot to learn general rules of control. A second approach is based on motion planning, which depends on localization mechanisms (Arkin, 1998). In this work we put both approaches for small mobile robots into perspective. Several experiments are performed to demonstrate the relative performance of behavior-based and planning-based methods, and their advantages and limitations are discussed.
The paper is organized as follows. In the following section, we introduce the miniature E-puck robot (see Fig. 1). The next two sections describe the Q-learning and evolutionary algorithms on the one hand, and localization and motion planning techniques on the other hand. In Section 6.1 we describe the setup of our experiments. Section 6.2 shows the results of our experiments. Finally, in Section 7 we conclude the work and mention possible directions of future work.
Section snippets
Related work
Reactive and behavior-based systems deal with agents of low levels of cognitive complexity in complex, noisy and uncertain environments. The focus is on the intelligent behaviors that arise as a result of an agent’s interaction with its environment. The ultimate goal of the process is to develop an embodied and autonomous agent with a high degree of adaptive possibilities (Pfeifer & Scheier, 1999). Brooks has outlined several issues raised in controlling multiple autonomous mobile robots to
Small physical robots
E-puck (E-puck, 0000, see Fig. 3) is a mobile robot with a diameter of 70 mm and a weight of 50 g. The sensory system employs eight active infrared light sensors distributed around the body, six on one side and two on the other side. The closer they are to a surface, the higher the amount of infrared light measured. Unfortunately, because of their imprecision and characteristics, they can be used as bumpers only. Besides infrared sensors, the robot is equipped with a low-cost VGA camera with
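The "bumpers only" use of the proximity sensors described above can be sketched as simple thresholding: a reading above some cutoff is treated as a contact event. The threshold value and the example readings below are illustrative assumptions, not calibrated constants from the paper.

```python
# Sketch: treating the E-puck's eight IR proximity readings as virtual
# bumpers. BUMP_THRESHOLD is an assumed, uncalibrated value; readings
# rise as the robot approaches a reflective surface.

BUMP_THRESHOLD = 2000

def virtual_bumpers(ir_readings, threshold=BUMP_THRESHOLD):
    """Return one boolean per sensor: True means an obstacle is close."""
    return [r > threshold for r in ir_readings]

# Example: the two front-facing sensors report a nearby wall.
readings = [3100, 800, 120, 90, 100, 150, 700, 2900]
print(virtual_bumpers(readings))
# -> [True, False, False, False, False, False, False, True]
```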
Evolutionary robotics
Evolutionary robotics combines two AI approaches: neural networks and evolutionary algorithms. The control system of the robot is realized by a neural network, in our case an RBF network. The network provides a direct mapping between the robot's sensors and effectors, i.e. from the robot's sensor values to the differential drive of the robot's wheels, which results in typical reactive control. It is difficult to train such a network by traditional supervised learning algorithms since they require instant
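The sensor-to-wheels mapping described above can be sketched as a small RBF network: Gaussian units over the sensor vector feeding a linear output layer with two outputs, one per wheel. The class below is a minimal illustrative implementation; the shapes, parameter values, and class name are assumptions, and in the paper such parameters would be set by the evolutionary algorithm rather than by hand.

```python
import numpy as np

# Minimal RBF-network controller sketch: Gaussian hidden units over the
# sensor vector, a linear output layer producing [left, right] wheel
# speeds. All parameters here are placeholders for evolved values.

class RBFController:
    def __init__(self, centers, widths, weights):
        self.centers = np.asarray(centers, dtype=float)  # (n_units, n_sensors)
        self.widths = np.asarray(widths, dtype=float)    # (n_units,)
        self.weights = np.asarray(weights, dtype=float)  # (n_units, 2)

    def __call__(self, sensors):
        # Squared distance of the sensor vector to each unit's center.
        d2 = ((self.centers - np.asarray(sensors, dtype=float)) ** 2).sum(axis=1)
        phi = np.exp(-d2 / (2.0 * self.widths ** 2))     # Gaussian activations
        return phi @ self.weights                        # [left, right] speeds
```

An evolutionary algorithm would encode `centers`, `widths`, and `weights` as a genome and select on a fitness measuring exploration and wall avoidance.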
Localization and motion planning
Localization is the process of estimating the robot's current position in a known map. In our case, we estimate the robot's position and orientation, i.e. a pose in the three-dimensional space (x, y, θ). The motion planning module plans a path between two points in this space.
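A baseline pose estimate in the (x, y, θ) space above is dead reckoning: integrating wheel displacements with standard differential-drive kinematics. The function below is a textbook sketch, not the paper's localization method (which the conclusion notes must go beyond dead reckoning); the 53 mm wheel base is an assumed E-puck-like value.

```python
import math

# Dead-reckoning sketch for a differential-drive robot: integrate the
# left/right wheel displacements (same length unit as wheel_base) into
# the (x, y, theta) pose. Each step also accumulates odometry error,
# which is why dead reckoning alone degrades quickly.

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    d_center = (d_left + d_right) / 2.0          # forward displacement
    d_theta = (d_right - d_left) / wheel_base    # heading change
    # Use the mid-step heading for a slightly better straight-line update.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Driving straight: both wheels advance 10 mm, heading unchanged.
print(odometry_step(0.0, 0.0, 0.0, 10.0, 10.0, 53.0))  # -> (10.0, 0.0, 0.0)
```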
Framework
In order to compare the performance and properties of the described algorithms, we conducted a simulated experiment. The E-puck robot was trained to explore the environment and avoid walls. The E-puck sensors can detect a white paper at a maximum distance of approximately 8 cm. Sensors return values from the interval [0, 4095]. Effectors accept values from a symmetric interval; the higher the absolute value, the faster the motor moves in the corresponding direction. Without any further preprocessing of the sensor's
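A typical preprocessing step for the value ranges just described is to scale raw sensor readings from [0, 4095] into [0, 1] and to clamp motor commands to the effector interval. This is a generic sketch: `MAX_SPEED` is an assumed placeholder, since the snippet does not state the effector bound.

```python
# Sketch of sensor/effector preprocessing. MAX_SENSOR matches the
# [0, 4095] sensor interval quoted above; MAX_SPEED is an assumption
# standing in for the unspecified symmetric effector bound.

MAX_SENSOR = 4095
MAX_SPEED = 1000  # assumed placeholder

def normalize_sensor(raw):
    """Clamp a raw reading to [0, MAX_SENSOR] and scale to [0, 1]."""
    return min(max(raw, 0), MAX_SENSOR) / MAX_SENSOR

def clamp_speed(cmd):
    """Limit a motor command to the symmetric [-MAX_SPEED, MAX_SPEED] range."""
    return max(-MAX_SPEED, min(MAX_SPEED, cmd))

print(normalize_sensor(4095))  # -> 1.0
print(clamp_speed(1500))       # -> 1000
```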
Conclusion
The paper presented different approaches to the localization and control of small mobile robots. The first approach deals with planning and localization. While the planning component is rather straightforward, we have demonstrated that localization is crucial, especially for devices with a rather poor collection of sensors. Typically, dead reckoning is not usable because the fast accumulation of errors makes the position estimate very unreliable after a few steps of the robot. A partition-based
Acknowledgement
This research has been supported by the Grant Agency of the Czech Republic under project no. 201/08/1744.
References (35)
- Robot learning with GA-based fuzzy reinforcement learning agents. Information Sciences (2002)
- E-puck, Online documentation....
- Webots....
- Behavior-based robotics (1998)
- Integrated systems based on behaviors. SIGART Bulletin (1991)
- Brooks, Rodney A. (1991). Intelligence without reason. Technical report. Massachusetts Institute of...
- et al. Coordinating multiple agents via reinforcement learning. Autonomous Agents and Multi-Agent Systems (2005)
- Evolutionary computation: The fossil record (1998)
- et al. Hierarchical multi-agent reinforcement learning. Autonomous Agents and Multi-Agent Systems (2006)
- Neural networks: A comprehensive foundation (1998)
- Adaptation in natural and artificial systems
- A new approach to linear filtering and prediction problems. Transactions of the ASME, Series D, Journal of Basic Engineering
- An architecture for behavior-based reinforcement learning. Adaptive Behavior—Animals, Animats, Software Agents, Robots, Adaptive Systems
- Planning algorithms
- Reinforcement learning in the multi-robot domain. Autonomous Robots
- Challenges in evolving controllers for physical robots. Robotics and Autonomous Systems