
Learning Distinct Strategies for Heterogeneous Cooperative Multi-agent Reinforcement Learning

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12894)

Abstract

Value decomposition has been a promising paradigm for cooperative multi-agent reinforcement learning. Many approaches have been proposed, but few of them consider heterogeneous settings. Agents with substantially different behaviours pose great challenges for centralized training with decentralized execution. In this paper, we provide a formulation for heterogeneous multi-agent reinforcement learning, together with some theoretical analysis. Building on it, we propose an efficient two-stage heterogeneous learning method. The first stage is a transfer technique that tunes existing homogeneous models into heterogeneous ones, which accelerates convergence. In the second stage, an iterative learning procedure with centralized training is designed to improve the overall performance. We conduct experiments on heterogeneous unit micromanagement tasks in StarCraft II. The results show that our method improves the win rate by around 20% on the most difficult scenario, compared with state-of-the-art methods, i.e., QMIX and Weighted QMIX.
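The abstract only outlines the approach, so as a concrete anchor, below is a minimal Python/PyTorch sketch of the two ingredients it names: a QMIX-style monotonic mixing network for centralized training, and a stage-one transfer step that initialises per-type heterogeneous agent networks from a pre-trained homogeneous one. All class and function names here (AgentQNet, MonotonicMixer, transfer_to_heterogeneous) are hypothetical illustrations under my reading of the abstract, not the authors' implementation.

```python
# Hypothetical sketch of value decomposition with a homogeneous-to-
# heterogeneous transfer step; not the paper's actual code.
import copy
import torch
import torch.nn as nn


class AgentQNet(nn.Module):
    """Per-agent utility network Q_i(o_i, .)."""

    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)  # (batch, n_actions)


class MonotonicMixer(nn.Module):
    """Mixes per-agent utilities into Q_tot. Taking abs() of the
    hypernetwork outputs keeps the mixing weights non-negative, so
    Q_tot is monotonic in each agent's utility, as in QMIX."""

    def __init__(self, n_agents, state_dim, embed=32):
        super().__init__()
        self.n_agents, self.embed = n_agents, embed
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed)
        self.hyper_b1 = nn.Linear(state_dim, embed)
        self.hyper_w2 = nn.Linear(state_dim, embed)
        self.hyper_b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        b = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed)
        b1 = self.hyper_b1(state).view(b, 1, self.embed)
        hidden = torch.relu(torch.bmm(agent_qs.view(b, 1, -1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b)  # Q_tot: (batch,)


def transfer_to_heterogeneous(homogeneous_net, agent_types):
    """Stage 1: clone a pre-trained homogeneous agent network once per
    agent type, so each type starts from shared weights and can then be
    fine-tuned separately on the heterogeneous task."""
    return {t: copy.deepcopy(homogeneous_net) for t in set(agent_types)}
```

Stage two, as described, would then train these per-type copies with the centralized TD loss propagated through the mixer, e.g. updating one type's network per iteration while the others are held fixed.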


References

  1. Berner, C., et al.: Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680 (2019)

  2. Feng, J., et al.: Learning to collaborate: multi-scenario ranking via multi-agent reinforcement learning. In: Proceedings of the 2018 World Wide Web Conference, pp. 1939–1948 (2018)

  3. Foerster, J., Farquhar, G., Afouras, T., Nardelli, N., Whiteson, S.: Counterfactual multi-agent policy gradients. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018)

  4. Guestrin, C., Koller, D., Parr, R.: Multiagent planning with factored MDPs. Adv. Neural Inf. Process. Syst. 14, 1523–1530 (2001)

  5. Jain, P., Kar, P.: Non-convex optimization for machine learning. Found. Trends Mach. Learn. 10(3–4), 142–363 (2017). https://doi.org/10.1561/2200000058

  6. Laurent, G.J., Matignon, L., Fort-Piat, L., et al.: The world of independent learners is not Markovian. Int. J. Knowl. Based Intell. Eng. Syst. 15(1), 55–64 (2011)

  7. Ma, J., Wu, F.: Feudal multi-agent deep reinforcement learning for traffic signal control. In: Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems, pp. 816–824 (2020)

  8. Mnih, V., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015)

  9. Nguyen, D.T., Kumar, A., Lau, H.C.: Credit assignment for collective multiagent RL with global rewards. In: Advances in Neural Information Processing Systems, pp. 8102–8113 (2018)

  10. Oliehoek, F.A., Spaan, M.T., Vlassis, N.: Optimal and approximate Q-value functions for decentralized POMDPs. J. Artif. Intell. Res. 32, 289–353 (2008)

  11. Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2009)

  12. Rashid, T., Farquhar, G., Peng, B., Whiteson, S.: Weighted QMIX: expanding monotonic value function factorisation for deep multi-agent reinforcement learning. Adv. Neural Inf. Process. Syst. 33 (2020)

  13. Rashid, T., Samvelyan, M., Schroeder, C., Farquhar, G., Foerster, J., Whiteson, S.: QMIX: monotonic value function factorisation for deep multi-agent reinforcement learning. In: International Conference on Machine Learning, pp. 4295–4304 (2018)

  14. Samvelyan, M., et al.: The StarCraft multi-agent challenge. In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 2186–2188 (2019)

  15. Son, K., Kim, D., Kang, W.J., Hostallero, D.E., Yi, Y.: QTRAN: learning to factorize with transformation for cooperative multi-agent reinforcement learning. In: International Conference on Machine Learning, pp. 5887–5896 (2019)

  16. Sunehag, P., et al.: Value-decomposition networks for cooperative multi-agent learning based on team reward. In: Proceedings of the International Conference on Autonomous Agents and Multiagent Systems, pp. 2085–2087 (2018)

  17. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction, 2nd edn. MIT Press, Cambridge (2018)

  18. Tirinzoni, A., Poiani, R., Restelli, M.: Sequential transfer in reinforcement learning with a generative model. In: International Conference on Machine Learning, pp. 9481–9492. PMLR (2020)

  19. Wang, J., Ren, Z., Liu, T., Yu, Y., Zhang, C.: QPLEX: duplex dueling multi-agent Q-learning (2020)

  20. Wang, T., Dong, H., Lesser, V., Zhang, C.: ROMA: multi-agent reinforcement learning with emergent roles. In: Proceedings of the 37th International Conference on Machine Learning, vol. 119, pp. 9876–9886 (2020)

  21. Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? In: Advances in Neural Information Processing Systems, pp. 3320–3328 (2014)

  22. Zhang, T., et al.: Multi-agent collaboration via reward attribution decomposition (2020)


Acknowledgement

This work was supported in part by the National Natural Science Foundation of China under Grant 61902425.

Author information

Corresponding author

Correspondence to Xinhai Xu.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wan, K., Xu, X., Li, Y. (2021). Learning Distinct Strategies for Heterogeneous Cooperative Multi-agent Reinforcement Learning. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2021. ICANN 2021. Lecture Notes in Computer Science, vol. 12894. Springer, Cham. https://doi.org/10.1007/978-3-030-86380-7_44


  • DOI: https://doi.org/10.1007/978-3-030-86380-7_44

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86379-1

  • Online ISBN: 978-3-030-86380-7

  • eBook Packages: Computer Science, Computer Science (R0)
