
Robot navigation with Markov models: A framework for path planning and learning with limited computational resources

Conference paper

Reasoning with Uncertainty in Robotics (RUR 1995)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1093)

Abstract

Navigation methods for mobile robots need to take various sources of uncertainty into account in order to achieve robust performance. The ability to improve performance with experience and to adapt to new circumstances is equally important for long-term operation. Real-time constraints, limited computation and memory, and the cost of collecting training data must also be accounted for. In this paper, we discuss our evolving architecture for mobile robot navigation that we use as a test-bed for evaluating methods for dealing with uncertainty in the face of real-time constraints and limited computational resources. The architecture is based on POMDP models that explicitly represent actuator uncertainty, sensor uncertainty, and approximate knowledge of the environment (such as uncertain metric information). Using this model, the robot is able to track its likely location as it navigates through a building. Here, we discuss two additions to the architecture: a learning component that allows the robot to improve the POMDP model from experience, and a decision-theoretic path planner that takes into account the expected performance of the robot as well as probabilistic information about the state of the world. A key aspect of both additions is the efficient allocation of computational resources and their practical application to real-world robots.
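To make the belief-tracking step concrete, the following is a minimal sketch of Bayesian location tracking in a POMDP. The four-state corridor, the action and observation names, and all probabilities below are invented for illustration and do not come from the paper, whose models encode topological and approximate metric knowledge of a real office environment:

    import numpy as np

    N_STATES = 4  # toy corridor with four discrete locations

    # T[a][s, s'] = P(s' | s, a): actuator uncertainty, e.g. the robot
    # may fail to move or may overshoot when commanded "forward".
    T = {
        "forward": np.array([
            [0.1, 0.8, 0.1, 0.0],
            [0.0, 0.1, 0.8, 0.1],
            [0.0, 0.0, 0.2, 0.8],
            [0.0, 0.0, 0.0, 1.0],   # last location is a dead end
        ]),
        "stay": np.eye(N_STATES),
    }

    # O[z][s] = P(z | s): sensor uncertainty, e.g. a noisy door
    # detector that sees doors at locations 0 and 2 most of the time.
    O = {
        "door":    np.array([0.9, 0.1, 0.9, 0.1]),
        "no_door": np.array([0.1, 0.9, 0.1, 0.9]),
    }

    def belief_update(belief, action, observation):
        """One step of Bayesian belief tracking: predict the next-state
        distribution under the action model, weight it by the observation
        likelihood, and renormalize."""
        predicted = T[action].T @ belief         # sum_s P(s'|s,a) * b(s)
        posterior = O[observation] * predicted   # multiply by P(z|s')
        return posterior / posterior.sum()

    # Start maximally uncertain about the location, then track it.
    b = np.full(N_STATES, 1.0 / N_STATES)
    b = belief_update(b, "forward", "door")
    print(b)  # mass concentrates on door locations reachable by "forward"

Each call folds one action/observation pair into the location estimate, so the robot maintains a distribution over where it probably is rather than committing to a single, possibly wrong, position.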

This research was supported in part by NASA under contract NAGW-1175 and by the Wright Laboratory and ARPA under grant number F33615-93-1-1330. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations or the United States government.



Editor information

Leo Dorst, Michiel van Lambalgen, Frans Voorbraak


Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Koenig, S., Goodwin, R., Simmons, R.G. (1996). Robot navigation with Markov models: A framework for path planning and learning with limited computational resources. In: Dorst, L., van Lambalgen, M., Voorbraak, F. (eds) Reasoning with Uncertainty in Robotics. RUR 1995. Lecture Notes in Computer Science, vol 1093. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0013970


  • DOI: https://doi.org/10.1007/BFb0013970


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61376-3

  • Online ISBN: 978-3-540-68506-7

  • eBook Packages: Springer Book Archive
