Learning High-Level Navigation Strategies via Inverse Reinforcement Learning: A Comparative Analysis

  • Conference paper

AI 2016: Advances in Artificial Intelligence (AI 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9992)

Abstract

With an increasing number of robots acting in populated environments, there is an emerging need for programming techniques that allow the robot's behavior to be adjusted efficiently to new environments or tasks. A promising approach for teaching robots a certain behavior is Inverse Reinforcement Learning (IRL), which estimates the underlying reward function of a Markov Decision Process (MDP) from the observed behavior of an expert. Recently, an approach called Simultaneous Estimation of Rewards and Dynamics (SERD) has been proposed, which extends IRL by estimating the dynamics simultaneously. The objective of this work is to compare classical IRL algorithms with SERD for learning high-level navigation strategies in a realistic hallway navigation scenario solely from human expert demonstrations. We show that the theoretical advantages of SERD also pay off in practice: it estimates better models of the dynamics and explains the expert's demonstrations more accurately.
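For intuition, below is a minimal sketch of the maximum entropy IRL gradient loop on a toy tabular MDP with known dynamics, the classical setting that SERD generalizes. All sizes, the random feature map, and the step size are illustrative assumptions rather than the authors' setup; in SERD the transition model T would itself be parameterized and estimated from the same demonstrations instead of being fixed.

import numpy as np

rng = np.random.default_rng(0)

# Toy tabular MDP (all sizes and features are illustrative assumptions).
S, A, F = 8, 2, 3                                # states, actions, reward features
T = rng.dirichlet(np.ones(S), size=(S, A))      # known dynamics T[s, a, s']
phi = rng.standard_normal((S, F))               # state features phi(s)
gamma = 0.9

def soft_value_iteration(theta, iters=200):
    # Soft-Bellman backups yield the MaxEnt (Boltzmann) policy for reward phi @ theta.
    r = phi @ theta
    V = np.zeros(S)
    for _ in range(iters):
        Q = r[:, None] + gamma * (T @ V)         # Q[s, a]
        m = Q.max(axis=1, keepdims=True)         # stabilized log-sum-exp over actions
        V = (m + np.log(np.exp(Q - m).sum(axis=1, keepdims=True))).ravel()
    return np.exp(Q - V[:, None])                # policy pi[s, a]

def expected_features(pi, start, horizon=50):
    # Feature expectations under pi, weighted by the induced state visitation.
    d, mu = start.copy(), np.zeros(F)
    for _ in range(horizon):
        mu += d @ phi
        d = np.einsum('s,sa,sat->t', d, pi, T)   # propagate the state distribution
    return mu

# Expert feature counts would come from demonstrations; here a hidden "true"
# reward generates them so the sketch stays self-contained.
start = np.full(S, 1.0 / S)
mu_expert = expected_features(soft_value_iteration(np.array([1.0, -0.5, 0.2])), start)

theta = np.zeros(F)
for _ in range(100):
    grad = mu_expert - expected_features(soft_value_iteration(theta), start)
    theta += 0.01 * grad                         # MaxEnt IRL likelihood gradient
print("recovered reward weights:", theta)

Gradient ascent matches the learner's feature expectations to the expert's. When the true dynamics are unknown, SERD's joint estimation replaces the fixed T above with a learned transition model, which is what the paper evaluates against classical IRL.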



Author information

Correspondence to Michael Herman.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Herman, M., Gindele, T., Wagner, J., Schmitt, F., Quignon, C., Burgard, W. (2016). Learning High-Level Navigation Strategies via Inverse Reinforcement Learning: A Comparative Analysis. In: Kang, B.H., Bai, Q. (eds) AI 2016: Advances in Artificial Intelligence. AI 2016. Lecture Notes in Computer Science, vol 9992. Springer, Cham. https://doi.org/10.1007/978-3-319-50127-7_45

  • DOI: https://doi.org/10.1007/978-3-319-50127-7_45

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-50126-0

  • Online ISBN: 978-3-319-50127-7

  • eBook Packages: Computer Science, Computer Science (R0)
