
Multi-agent Cooperation and Competition with Two-Level Attention Network

  • Conference paper
  • Neural Information Processing (ICONIP 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12533)

Abstract

Multi-agent reinforcement learning (MARL) has made significant advances in multi-agent systems. However, it is hard to learn a stable policy in complicated and changing environments. To address this issue, a two-level attention network is proposed, composed of an across-group observation attention network (AGONet) and an intentional communication network (ICN). AGONet is designed to distinguish the different semantic meanings of observations (a friend group, a foe group, and an object/entity group) and to extract the underlying information of each group with across-group attention. Based on AGONet, the proposed framework is invariant to the number of agents in the system and can therefore be applied to large-scale multi-agent systems. Furthermore, to enhance cooperation among agents in the same group, ICN aggregates the intentions of same-group neighbors, which are extracted by AGONet. Each agent thereby obtains the understanding and intentions of its same-group neighbors, which enlarges its receptive field. The simulation results demonstrate that the agents can learn complicated cooperative and competitive strategies and that our method is superior to existing methods.

Research supported by the National Key Research and Development Program of China under Grant 2018AAA0102404, and by the Innovation Academy for Light-duty Gas Turbine, Chinese Academy of Sciences, under Grant CXYJJ19-ZD-02.
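The two ideas sketched in the abstract — a per-group attention pass that yields a fixed-size embedding from variable-size observation groups, and an aggregation of same-group neighbors' intentions — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the group names, dimensions, and the mean-based intention aggregation are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, values):
    """Scaled dot-product attention of one query over a variable-size set."""
    scores = keys @ query / np.sqrt(query.shape[-1])  # (n,)
    weights = softmax(scores)                         # (n,), sums to 1
    return weights @ values                           # (d,), fixed size

rng = np.random.default_rng(0)
d = 8
self_obs = rng.standard_normal(d)  # the agent's own observation embedding

# Variable-size observation groups; the counts (3, 5, 2) are arbitrary.
groups = {
    "friend": rng.standard_normal((3, d)),
    "foe":    rng.standard_normal((5, d)),
    "entity": rng.standard_normal((2, d)),
}

# One attention pass per group: each group summary has a fixed size, so the
# concatenated embedding is independent of how many agents are present.
embedding = np.concatenate([attend(self_obs, g, g) for g in groups.values()])

# ICN-style step (sketch): aggregate the intention vectors of same-group
# neighbors to enlarge the agent's receptive field; a mean stands in for
# whatever learned aggregation the paper uses.
neighbor_intentions = rng.standard_normal((3, d))
aggregated_intent = neighbor_intentions.mean(axis=0)

print(embedding.shape, aggregated_intent.shape)  # (24,) (8,)
```

Because each group is summarized by a single attention-weighted vector, the output shape stays `(3 * d,)` no matter how many friends, foes, or entities are observed, which is what makes the architecture applicable to varying agent counts.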



Author information

Corresponding author

Correspondence to Zhiqiang Pu.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Wu, S., Pu, Z., Yi, J., Wang, H. (2020). Multi-agent Cooperation and Competition with Two-Level Attention Network. In: Yang, H., Pasupa, K., Leung, A.C.S., Kwok, J.T., Chan, J.H., King, I. (eds.) Neural Information Processing. ICONIP 2020. Lecture Notes in Computer Science, vol. 12533. Springer, Cham. https://doi.org/10.1007/978-3-030-63833-7_44


  • DOI: https://doi.org/10.1007/978-3-030-63833-7_44

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-63832-0

  • Online ISBN: 978-3-030-63833-7

  • eBook Packages: Computer Science (R0)
