Abstract
In recent years, many algorithms have used Evolutionary Algorithms (EAs) to help Reinforcement Learning (RL) escape local optima. Evolutionary Reinforcement Learning (ERL) is a popular algorithm in this field. However, ERL evaluates the population in every loop of the algorithm, which is inefficient because of the uncertainty in the population's experience. In this paper, we propose a novel evaluation mechanism that evaluates the population only when the RL agent has difficulty learning further. This mechanism improves the efficiency of the hybrid algorithm in most cases, and even in the worst scenario it only reduces performance marginally. We embed this mechanism into ERL, denoting the result E-ERL, and compare it with the original ERL and other state-of-the-art RL algorithms. Results on six continuous control problems validate the efficiency of our method.
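The abstract does not specify how "difficulty learning further" is detected. As a rough illustration only, and not the paper's actual criterion, the hypothetical helper below sketches one plausible trigger: compare the RL agent's mean episode return over the latest window of training iterations with the preceding window, and flag stagnation when the improvement falls below a threshold. The names `rl_needs_help`, `window`, and `min_improvement` are assumptions introduced for this sketch.

```python
import numpy as np

def rl_needs_help(recent_returns, window=10, min_improvement=0.01):
    """Hypothetical stagnation test (illustrative only, not the authors' method).

    Compares the mean episode return of the latest `window` training iterations
    against the preceding window; if the relative gain is below `min_improvement`,
    the RL agent is treated as 'having difficulty learning further', which is when
    a mechanism like the one described above would evaluate the EA population.
    """
    if len(recent_returns) < 2 * window:
        return False  # not enough history yet; keep training the RL agent alone
    prev = float(np.mean(recent_returns[-2 * window:-window]))
    curr = float(np.mean(recent_returns[-window:]))
    return (curr - prev) < min_improvement * max(abs(prev), 1.0)

# Example: returns improve steadily, then plateau around 300.
history = list(np.linspace(0.0, 300.0, 30)) + [300.0] * 20
print(rl_needs_help(history))  # True -> trigger population evaluation and evolution
```

Under this sketch, the costly population evaluation step of an ERL-style loop would be gated on `rl_needs_help(...)` instead of running in every generation.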
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grants 61876110, 61836005, and 61672358, the Joint Funds of the National Natural Science Foundation of China under Key Program Grant U1713212, and the Shenzhen Technology Plan under Grant JCYJ20190808164211203.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Wu, X., Zhu, Q., Lin, Q., Li, J., Chen, J., Ming, Z. (2022). An Efficient Evaluation Mechanism for Evolutionary Reinforcement Learning. In: Huang, DS., Jo, KH., Jing, J., Premaratne, P., Bevilacqua, V., Hussain, A. (eds) Intelligent Computing Theories and Application. ICIC 2022. Lecture Notes in Computer Science, vol 13393. Springer, Cham. https://doi.org/10.1007/978-3-031-13870-6_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-13869-0
Online ISBN: 978-3-031-13870-6
eBook Packages: Computer Science, Computer Science (R0)