How a mobile robot selects landmarks to make a decision based on an information criterion


Abstract

Most current mobile robots determine their actions according to their positions: before making a decision, they must localize themselves, so their observation strategies serve mainly self-localization. However, an observation strategy should support not only self-localization but also decision making. We propose an observation strategy that enables a mobile robot equipped with a camera of limited viewing angle to make decisions without self-localization. The robot makes decisions based on a decision tree and on prediction trees of observations, both constructed from its own experiences. The trees are built with an information criterion aimed at the action decision, not at self-localization or state estimation. Experimental results with a four-legged robot are presented and discussed.
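The heart of the method is choosing which landmark to observe by how much the observation reduces uncertainty about the action to take, rather than about the robot's pose. Below is a minimal Python sketch of such an action-oriented information criterion, in the spirit of information-gain tree induction; the observation attributes, actions, and data are illustrative assumptions, not the authors' code.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a sequence of action labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute):
    """Expected reduction in action entropy from observing one
    landmark attribute (e.g. the discretized direction of the goal)."""
    actions = [e["action"] for e in examples]
    groups = {}
    for e in examples:
        groups.setdefault(e["obs"][attribute], []).append(e["action"])
    remainder = sum(len(g) / len(actions) * entropy(g) for g in groups.values())
    return entropy(actions) - remainder

# Toy experiences: each pairs a discretized landmark observation
# with the action the robot took (all values hypothetical).
experiences = [
    {"obs": {"goal": "left",  "post": "near"}, "action": "turn_left"},
    {"obs": {"goal": "left",  "post": "far"},  "action": "turn_left"},
    {"obs": {"goal": "front", "post": "near"}, "action": "forward"},
    {"obs": {"goal": "front", "post": "far"},  "action": "forward"},
]

# The landmark whose observation is most informative about the action.
best = max(["goal", "post"], key=lambda a: information_gain(experiences, a))
print(best)  # -> "goal"
```

Recursively splitting on the highest-gain attribute yields a decision tree whose internal nodes name the landmark observations worth making; evaluating the criterion over actions rather than over estimated states is what separates this kind of sensing from localization-driven sensing.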



Author information

Correspondence to Noriaki Mitsunaga.


Cite this article

Mitsunaga, N., Asada, M. How a mobile robot selects landmarks to make a decision based on an information criterion. Auton Robot 21, 3–14 (2006). https://doi.org/10.1007/s10514-005-5557-2


Keywords

Navigation