
Inverse Reinforcement Learning for Agents Behavior in a Crowd Simulator

  • Conference paper
  • In: Massively Multi-Agent Systems II (MMAS 2018)

Abstract

Crowd behavior has been a subject of study due to its applications in fields such as disaster evacuation, smart town planning, and strategic business placement. However, obtaining patterns from a crowd to build a working model is difficult: it requires an enormous quantity of observational data, and collecting it is impractical in many scenarios due to logistic and legal issues. Machine learning techniques are a good tool for overcoming these difficulties, identifying patterns from a relatively small training data set so that crowd agents can react appropriately to similar situations. We implemented a behavioral agent model that uses such techniques in a large-scale crowd simulator and applied inverse reinforcement learning to adjust the agents' behaviors from examples. The goal of the system is to provide the agents with a realistic behavior model and a method to orient themselves without knowing the scenario's layout, based on patterns learned around environment features.
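The abstract describes recovering agent behavior from example trajectories via inverse reinforcement learning. As a rough illustration of that technique class, below is a minimal feature-matching IRL sketch in the spirit of apprenticeship learning (Abbeel and Ng, 2004) on a toy gridworld. The gridworld, the one-hot state features, the deterministic dynamics, and the weight-update rule are all illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of feature-matching inverse RL on a toy gridworld,
# in the spirit of apprenticeship learning (Abbeel & Ng, 2004).
# The environment, features, and update rule are illustrative assumptions,
# not the paper's implementation.
import numpy as np

GRID, GAMMA, HORIZON = 5, 0.9, 50
N_S, N_A = GRID * GRID, 4
MOVES = [(-1, 0), (1, 0), (0, 1), (0, -1)]     # N, S, E, W

def step(s, a):
    """Deterministic transition; walls clamp the agent inside the grid."""
    r, c = divmod(s, GRID)
    dr, dc = MOVES[a]
    return min(max(r + dr, 0), GRID - 1) * GRID + min(max(c + dc, 0), GRID - 1)

PHI = np.eye(N_S)                              # one-hot state features

def greedy_policy(reward, iters=100):
    """Value iteration for a state-based reward; returns a greedy policy."""
    V = np.zeros(N_S)
    for _ in range(iters):
        Q = np.array([[reward[step(s, a)] + GAMMA * V[step(s, a)]
                       for a in range(N_A)] for s in range(N_S)])
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def feature_expectations(policy, start=0):
    """Discounted feature counts from rolling the policy out once."""
    mu, s = np.zeros(N_S), start
    for t in range(HORIZON):
        mu += GAMMA ** t * PHI[s]
        s = step(s, policy[s])
    return mu

# "Expert" demonstration: a hypothetical agent that walks to the far corner.
mu_E = feature_expectations(greedy_policy(PHI[N_S - 1]))

# IRL loop: adjust linear reward weights until the learner's discounted
# feature counts match the expert's.
w = np.zeros(N_S)
for _ in range(30):
    mu = feature_expectations(greedy_policy(PHI @ w))
    if np.linalg.norm(mu_E - mu) < 1e-3:       # learner matches the expert
        break
    w += mu_E - mu                             # push reward toward expert states
print("feature gap:", np.linalg.norm(mu_E - mu))
```

In a crowd-simulation setting, the one-hot features would be replaced by environment features (distance to exits, walls, signage), so that the learned reward transfers to layouts the agents have never seen, which is the orientation-without-a-map goal the abstract states.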


Notes

  1. https://www.openstreetmap.org



Author information

Correspondence to Nahum Alvarez.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Alvarez, N., Noda, I. (2019). Inverse Reinforcement Learning for Agents Behavior in a Crowd Simulator. In: Lin, D., Ishida, T., Zambonelli, F., Noda, I. (eds) Massively Multi-Agent Systems II. MMAS 2018. Lecture Notes in Computer Science, vol 11422. Springer, Cham. https://doi.org/10.1007/978-3-030-20937-7_6

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-20937-7_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-20936-0

  • Online ISBN: 978-3-030-20937-7

  • eBook Packages: Computer Science, Computer Science (R0)
