Cooperation and Competition: Flocking with Evolutionary Multi-Agent Reinforcement Learning

  • Conference paper
Neural Information Processing (ICONIP 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13623)

Abstract

Flocking is a challenging problem in multi-agent systems, and traditional flocking methods require complete knowledge of the environment and a precise model for control. In this paper, we propose Evolutionary Multi-Agent Reinforcement Learning (EMARL) for flocking tasks, a hybrid algorithm that combines cooperation and competition and requires little prior knowledge. For cooperation, we design the agents' reward for flocking tasks according to the boids model. For competition, agents with high fitness are designated as senior agents and those with low fitness as junior agents, and junior agents stochastically inherit the parameters of senior agents. To intensify competition, we also design an evolutionary selection mechanism that proves effective for credit assignment in flocking tasks. Experimental results on a range of challenging and self-contrast benchmarks demonstrate that EMARL significantly outperforms fully competitive and fully cooperative methods.
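
The abstract outlines two mechanisms: a cooperative reward built from the classic boids rules (cohesion, alignment, separation) and a competitive step in which low-fitness (junior) agents stochastically inherit the parameters of high-fitness (senior) agents. The Python sketch below is a minimal illustration of those two ideas only; the function names, reward weights, neighborhood radius, elite fraction, and inheritance probability are all assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only: all names, constants, and formulas below are assumptions,
# not the paper's published implementation.
import numpy as np

rng = np.random.default_rng(0)


def boids_reward(positions, velocities, i, neighbor_radius=2.0,
                 w_cohesion=1.0, w_alignment=1.0, w_separation=1.0):
    """Reward for agent i built from the three classic boids rules.

    Higher reward when agent i stays near the neighborhood centroid (cohesion),
    matches the mean neighbor velocity (alignment), and avoids crowding very
    close neighbors (separation). Weights and radius are hypothetical.
    """
    deltas = positions - positions[i]
    dists = np.linalg.norm(deltas, axis=1)
    mask = (dists > 0.0) & (dists < neighbor_radius)
    if not mask.any():
        return 0.0  # no neighbors in range: no flocking signal

    # Cohesion: negative distance to the neighborhood centroid.
    centroid = positions[mask].mean(axis=0)
    cohesion = -float(np.linalg.norm(centroid - positions[i]))

    # Alignment: cosine similarity between own velocity and mean neighbor velocity.
    mean_v = velocities[mask].mean(axis=0)
    alignment = float(np.dot(velocities[i], mean_v)
                      / (np.linalg.norm(velocities[i]) * np.linalg.norm(mean_v) + 1e-8))

    # Separation: penalty that grows as neighbors get closer than a small threshold.
    too_close = dists[mask] < 0.5
    separation = -float(np.sum(too_close / (dists[mask] + 1e-8)))

    return w_cohesion * cohesion + w_alignment * alignment + w_separation * separation


def evolutionary_selection(params, fitness, elite_frac=0.5, inherit_prob=0.5):
    """Competition step: junior (low-fitness) agents stochastically inherit the
    parameter vectors of senior (high-fitness) agents. The elite fraction and
    inheritance probability are assumptions, not the paper's values."""
    order = np.argsort(fitness)[::-1]                 # agent indices, best first
    n_senior = max(1, int(elite_frac * len(order)))
    seniors, juniors = order[:n_senior], order[n_senior:]
    for j in juniors:
        if rng.random() < inherit_prob:               # inherit only with some probability
            donor = rng.choice(seniors)
            params[j] = params[donor].copy()
    return params


if __name__ == "__main__":
    n_agents, dim = 8, 2
    pos = rng.normal(size=(n_agents, dim))
    vel = rng.normal(size=(n_agents, dim))
    rewards = np.array([boids_reward(pos, vel, i) for i in range(n_agents)])
    policy_params = [rng.normal(size=16) for _ in range(n_agents)]  # stand-in policy weights
    policy_params = evolutionary_selection(policy_params, rewards)
    print("per-agent boids rewards:", np.round(rewards, 2))
```

In the paper's setting, these two pieces would sit inside a full multi-agent reinforcement learning training loop (policy updates and fitness evaluation over episodes); the sketch deliberately omits that loop.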

Y. Guo, X. Xie, and R. Zhao contributed equally.

Author information

Corresponding author

Correspondence to Han Long.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Guo, Y., Xie, X., Zhao, R., Zhu, C., Yin, J., Long, H. (2023). Cooperation and Competition: Flocking with Evolutionary Multi-Agent Reinforcement Learning. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Lecture Notes in Computer Science, vol 13623. Springer, Cham. https://doi.org/10.1007/978-3-031-30105-6_23

  • DOI: https://doi.org/10.1007/978-3-031-30105-6_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30104-9

  • Online ISBN: 978-3-031-30105-6

  • eBook Packages: Computer Science, Computer Science (R0)
