Abstract
Ant robots are simple creatures with limited sensing and computational capabilities. They have the advantage that they are easy to program and cheap to build, which makes it feasible to deploy groups of ant robots and take advantage of the resulting fault tolerance and parallelism. We study, both theoretically and in simulation, the behavior of ant robots for one-time or repeated coverage of terrain, as required for lawn mowing, mine sweeping, and surveillance. Because of their limited sensing and computational capabilities, ant robots cannot use conventional planning methods. To overcome these limitations, we study navigation methods that are based on real-time (heuristic) search and leave markings in the terrain, similar to what real ants do. These markings can be sensed by all ant robots and allow them to cover terrain even if they do not communicate with each other except via the markings, do not have any kind of memory, do not know the terrain, cannot maintain maps of the terrain, and cannot plan complete paths. The ant robots do not even need to be localized, which eliminates the need to solve difficult and time-consuming localization problems. We study two simple real-time search methods that differ only in how the markings are updated. We show experimentally that both real-time search methods robustly cover terrain even if the ant robots are moved without realizing it (say, by people running into them), some ant robots fail, and some markings get destroyed. Both real-time search methods are algorithmically similar, and our experimental results indicate that their cover time is similar in some terrains. Our analysis is therefore surprising: we show that the cover time of ant robots that use one of the real-time search methods is guaranteed to be polynomial in the number of locations, whereas the cover time of ant robots that use the other real-time search method can be exponential in (the square root of) the number of locations, even in simple terrains that correspond to (planar) undirected trees.
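The abstract does not name the two marking-update rules. As an illustration only, the sketch below contrasts two rules that are standard in the real-time search literature: node counting (the marking counts visits to a location) and an LRTA*-style rule (the marking becomes one more than the smallest neighboring marking). In both, the robot always moves to an adjacent location with the smallest marking. That these are the paper's exact methods is an assumption, and the gridworld setup is hypothetical.

```python
import random

def cover_time(update, width=5, height=5, seed=0):
    """Number of moves a single simulated ant robot needs to visit every cell.

    The robot senses only the marking values of its current cell and its
    four neighbors, moves to a neighbor with the smallest marking (ties
    broken randomly), and rewrites the marking of the cell it leaves
    according to update(old_value, smallest_neighbor_value).
    """
    rng = random.Random(seed)
    value = {(x, y): 0 for x in range(width) for y in range(height)}
    pos, visited, moves = (0, 0), set(), 0
    while True:
        visited.add(pos)
        if len(visited) == width * height:
            return moves
        x, y = pos
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) in value]
        best = min(value[n] for n in nbrs)
        nxt = rng.choice([n for n in nbrs if value[n] == best])
        value[pos] = update(value[pos], best)  # leave an updated marking
        pos, moves = nxt, moves + 1

# Node counting: the marking simply counts how often the cell was visited.
def node_counting(own, best_nbr):
    return own + 1

# LRTA*-style rule: the marking becomes one more than the smallest
# neighboring marking, i.e. a learned distance estimate.
def lrta_star(own, best_nbr):
    return best_nbr + 1
```

Comparing `cover_time(node_counting)` against `cover_time(lrta_star)` over many seeds and grid shapes gives a feel for the paper's result: the two rules behave similarly on some terrains, yet only the LRTA*-style rule carries a polynomial cover-time guarantee.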
Cite this article
Koenig, S., Szymanski, B. & Liu, Y. Efficient and inefficient ant coverage methods. Annals of Mathematics and Artificial Intelligence 31, 41–76 (2001). https://doi.org/10.1023/A:1016665115585