
Communicative and Cooperative Learning for Multi-agent Indoor Navigation

  • Conference paper
  • First Online:
Advances in Knowledge Discovery and Data Mining (PAKDD 2024)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 14646))


Abstract

The ability to cooperate and work as a team is one of the “holy grail” goals of intelligent robots. To address the importance of communication in multi-agent reinforcement learning (MARL), we propose the Cooperative Indoor Navigation (CIN) task, in which agents cooperatively navigate to a goal in a 3D indoor room with realistic observation inputs. This navigation task is more challenging and closer to real-world robotic applications than previous multi-agent tasks, since each agent observes only part of the environment from its first-person view; accomplishing the task therefore requires communication and cooperation among the agents. To study the CIN task, we collect a large-scale dataset of challenging demonstration trajectories; the code and data of the CIN task have been released. Prior MARL methods primarily emphasized learning policies for multiple agents but paid little attention to the communication model, and consequently perform suboptimally on the CIN task. In this paper, we propose a MARL model with a communication mechanism to address the CIN task. In our experiments, we find that our proposed model outperforms previous MARL methods and that communication is the key to solving the CIN task. Quantitatively, our proposed MARL method outperforms the baseline by 6% in SPL (Success weighted by Path Length); qualitatively, the agent with the communication mechanism explores the whole environment sufficiently and thus navigates efficiently.
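For reference, the SPL metric used above (Success weighted by Path Length, Anderson et al., “On evaluation of embodied navigation agents”) averages a per-episode term: success multiplied by the ratio of the shortest-path length to the longer of the shortest path and the path actually taken. A minimal sketch follows; the function name and the episode tuple format are illustrative assumptions, not from the paper:

```python
def spl(episodes):
    """Success weighted by Path Length.

    episodes: list of (success, shortest, taken) tuples, where
      success  -- bool, whether the agent reached the goal
      shortest -- float, geodesic shortest-path length to the goal
      taken    -- float, length of the path the agent actually traversed
    """
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            # Weight each success by how close the traversed path
            # was to the shortest path (never exceeding 1.0).
            total += shortest / max(taken, shortest)
    return total / len(episodes)


# One optimal success, one success at twice the optimal length, one failure:
score = spl([(True, 10.0, 10.0), (True, 10.0, 20.0), (False, 5.0, 8.0)])
print(score)  # → 0.5
```

By construction the metric penalizes inefficient navigation even when the goal is reached, which is why sufficient exploration through communication can raise SPL as well as success rate.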



Author information

Corresponding author

Correspondence to Vincent CS Lee.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Zhu, F., Lee, V.C., Liu, R. (2024). Communicative and Cooperative Learning for Multi-agent Indoor Navigation. In: Yang, DN., Xie, X., Tseng, V.S., Pei, J., Huang, JW., Lin, J.CW. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science(), vol 14646. Springer, Singapore. https://doi.org/10.1007/978-981-97-2253-2_22


  • DOI: https://doi.org/10.1007/978-981-97-2253-2_22

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-2252-5

  • Online ISBN: 978-981-97-2253-2

  • eBook Packages: Computer Science, Computer Science (R0)
