
Learning Summarised Messaging Through Mediated Differentiable Inter-Agent Learning

  • Conference paper
  • In: Multi-Agent Systems and Agreement Technologies (EUMAS 2020, AT 2020)

Abstract

In recent years, notable research has been done on communication in multi-agent systems. When agents have only a partial view of the environment, communication becomes essential for collaboration. We propose a Deep Q-Learning-based multi-agent communication approach, Mediated Differentiable Inter-Agent Learning (M-DIAL), in which messages produced by individual agents are sent to a mediator that encodes all the messages into a global embedding. The mediator essentially summarises the crux of the messages it receives into a single global message that is then broadcast to all the participating agents. The proposed technique allows the agents to receive only essential abstracted information and also reduces the overall bandwidth required for communication. We analyze and evaluate the performance of our approach on several collaborative multi-agent environments.
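To make the mediation step concrete, below is a minimal PyTorch sketch of how such a mediator could encode the agents' outgoing messages into a global embedding and broadcast a single summary back to all of them. It is written from the abstract alone; the class name `Mediator`, the layer shapes, and the mean-pooling aggregation are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn


class Mediator(nn.Module):
    """Pools per-agent messages into one fixed-size summary message.

    Minimal sketch based only on the abstract: the layer sizes, the
    mean-pooling aggregation, and the tanh non-linearities are
    illustrative assumptions, not the authors' architecture.
    """

    def __init__(self, msg_dim: int, summary_dim: int):
        super().__init__()
        self.encode = nn.Linear(msg_dim, summary_dim)   # per-message encoder
        self.decode = nn.Linear(summary_dim, msg_dim)   # global-summary decoder

    def forward(self, messages: torch.Tensor) -> torch.Tensor:
        # messages: (n_agents, msg_dim), one outgoing message per agent
        encoded = torch.tanh(self.encode(messages))     # (n_agents, summary_dim)
        pooled = encoded.mean(dim=0)                    # (summary_dim,) global embedding
        summary = torch.tanh(self.decode(pooled))       # (msg_dim,) summarised message
        # The same summary is broadcast to every participating agent.
        return summary.expand(messages.size(0), -1)     # (n_agents, msg_dim)


if __name__ == "__main__":
    mediator = Mediator(msg_dim=8, summary_dim=16)
    outgoing = torch.randn(3, 8)        # messages from 3 agents
    incoming = mediator(outgoing)       # identical 8-dim summary for each agent
    print(incoming.shape)               # torch.Size([3, 8])
```

Because the pooling and decoding are differentiable, gradients from each agent's Q-learning loss could flow through the broadcast summary back into the other agents' message heads, in the spirit of differentiable inter-agent learning; mean pooling also keeps the summary size independent of the number of agents, which is one way to realise the bandwidth reduction the abstract describes.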

S. Gopal, R. Mathur, and S. Deshwal contributed equally to this work.



Author information

Corresponding author

Correspondence to Anil Singh Parihar.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Gopal, S., Mathur, R., Deshwal, S., Parihar, A.S. (2020). Learning Summarised Messaging Through Mediated Differentiable Inter-Agent Learning. In: Bassiliades, N., Chalkiadakis, G., de Jonge, D. (eds.) Multi-Agent Systems and Agreement Technologies. EUMAS/AT 2020. Lecture Notes in Computer Science, vol. 12520. Springer, Cham. https://doi.org/10.1007/978-3-030-66412-1_35

  • DOI: https://doi.org/10.1007/978-3-030-66412-1_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-66411-4

  • Online ISBN: 978-3-030-66412-1

  • eBook Packages: Computer Science, Computer Science (R0)
