
An Efficient Evaluation Mechanism for Evolutionary Reinforcement Learning

  • Conference paper
Intelligent Computing Theories and Application (ICIC 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13393)


Abstract

In recent years, many algorithms have used Evolutionary Algorithms (EAs) to help Reinforcement Learning (RL) escape local optima; Evolutionary Reinforcement Learning (ERL) is a popular algorithm in this field. However, ERL evaluates the population in every loop of the algorithm, which is inefficient because the experience gathered by the population is uncertain. In this paper, we propose a novel evaluation mechanism that evaluates the population only when the RL agent has difficulty learning further. This mechanism improves the efficiency of such hybrid algorithms in most cases, and even in the worst case it reduces performance only marginally. We embed this mechanism into ERL, denoted E-ERL, and compare it with the original ERL and other state-of-the-art RL algorithms. Results on six continuous control problems validate the efficiency of our method.
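
The abstract describes the mechanism only at a high level. The sketch below is a minimal, runnable illustration of the idea under stated assumptions, not the paper's actual method: the function names (has_stagnated, demo), the stagnation test, and the hyperparameters (window, min_gain) are all hypothetical stand-ins. The point it demonstrates is simply that the expensive population evaluation runs only when the RL agent's learning curve flattens, rather than in every loop iteration as in the original ERL.

```python
"""Minimal sketch (hypothetical names and hyperparameters): trigger the
costly EA population evaluation only when the RL agent's returns stagnate."""
from collections import deque
import random


def has_stagnated(returns, window=10, min_gain=0.5):
    # The agent is considered "stuck" when the mean return over the most
    # recent window no longer improves on the window before it.
    if len(returns) < 2 * window:
        return False
    older = sum(list(returns)[-2 * window:-window]) / window
    newer = sum(list(returns)[-window:]) / window
    return newer - older < min_gain


def demo():
    returns = deque(maxlen=50)
    population_evaluations = 0
    for step in range(200):
        # Stand-in for one RL training episode: returns improve, then plateau.
        ret = min(step, 80) + random.gauss(0, 1)
        returns.append(ret)
        if has_stagnated(returns):
            # Stand-in for the expensive step that ERL would otherwise run in
            # *every* iteration: roll out and evaluate the whole EA population,
            # then evolve it and inject the elite back into the RL agent.
            population_evaluations += 1
            returns.clear()  # restart the stagnation test after an EA phase
    print(f"population evaluated {population_evaluations} times in 200 steps")


if __name__ == "__main__":
    demo()
```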



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants 61876110, 61836005, and 61672358, the Joint Funds of the National Natural Science Foundation of China under Key Program Grant U1713212, and Shenzhen Technology Plan under Grant JCYJ20190808164211203.

Author information

Correspondence to Qiuzhen Lin.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wu, X., Zhu, Q., Lin, Q., Li, J., Chen, J., Ming, Z. (2022). An Efficient Evaluation Mechanism for Evolutionary Reinforcement Learning. In: Huang, D.-S., Jo, K.-H., Jing, J., Premaratne, P., Bevilacqua, V., Hussain, A. (eds) Intelligent Computing Theories and Application. ICIC 2022. Lecture Notes in Computer Science, vol. 13393. Springer, Cham. https://doi.org/10.1007/978-3-031-13870-6_4


  • DOI: https://doi.org/10.1007/978-3-031-13870-6_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-13869-0

  • Online ISBN: 978-3-031-13870-6

  • eBook Packages: Computer Science (R0)
