Abstract
In this paper, we consider the problem of balancing exploration and exploitation in deep reinforcement learning (DRL). We propose a general method called double replay buffers with restricted gradient (DRBRG). DRBRG divides the replay buffer used in experience replay into two parts: an exploration buffer and an exploitation buffer. The two buffers, with different retention policies, increase sample diversity and thereby prevent the over-fitting caused by exploitation. To keep exploration from driving the current policy away from past behaviors, we introduce a gradient penalty that restricts policy updates to a trust region. We compare our method with other experience-replay methods on continuous-action environments. Empirical results show that our method outperforms existing methods in both training performance and generalization performance.
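The abstract only sketches the double-buffer mechanism. The following Python sketch illustrates one plausible realization: a FIFO "exploration" buffer that keeps only recent transitions, and an "exploitation" buffer that retains a long-term uniform sample of past experience via reservoir sampling. The choice of FIFO/reservoir retention and the even sampling split are assumptions for illustration, not the paper's exact design.

```python
import random
from collections import deque

class DoubleReplayBuffer:
    """Illustrative sketch of a double replay buffer (assumed design,
    not the authors' implementation)."""

    def __init__(self, exploration_size, exploitation_size):
        self.exploration = deque(maxlen=exploration_size)  # FIFO retention: recent experience
        self.exploitation = []                             # reservoir retention: long-term experience
        self.exploitation_size = exploitation_size
        self.num_seen = 0                                  # transitions offered to the reservoir

    def add(self, transition):
        # Recent experience always enters the exploration buffer;
        # the oldest transition is evicted once the deque is full.
        self.exploration.append(transition)
        # Reservoir sampling keeps a uniform random sample of all
        # transitions seen so far (Vitter, 1985).
        self.num_seen += 1
        if len(self.exploitation) < self.exploitation_size:
            self.exploitation.append(transition)
        else:
            j = random.randrange(self.num_seen)
            if j < self.exploitation_size:
                self.exploitation[j] = transition

    def sample(self, batch_size):
        # Mix each minibatch: half recent transitions, half diverse
        # long-term transitions (50/50 split is an assumption).
        half = batch_size // 2
        batch = random.sample(list(self.exploration), min(half, len(self.exploration)))
        remaining = batch_size - len(batch)
        batch += random.sample(self.exploitation, min(remaining, len(self.exploitation)))
        return batch
```

The "restricted gradient" component is likewise described only as a gradient penalty that keeps policy updates inside a trust region. Below is a minimal sketch of one common way such a penalty can be added to an actor's surrogate loss, via a Monte-Carlo KL term in the spirit of TRPO/PPO-style penalties; the penalty form and the coefficient `beta` are assumptions, not the paper's exact formulation.

```python
import torch

def restricted_policy_loss(log_probs_new, log_probs_old, advantages, beta=1.0):
    """Actor loss with a trust-region-style penalty (assumed form).

    log_probs_new: log pi_theta(a|s) under the current policy (requires grad)
    log_probs_old: log pi_old(a|s) under the pre-update policy
    advantages:    advantage estimates for the sampled actions
    beta:          penalty coefficient (assumed hyper-parameter)
    """
    ratio = torch.exp(log_probs_new - log_probs_old.detach())
    surrogate = -(ratio * advantages).mean()  # importance-weighted policy-gradient surrogate
    # Monte-Carlo estimate of KL(pi_old || pi_theta) over the batch;
    # penalizing it discourages updates that leave the trust region.
    kl_penalty = (log_probs_old.detach() - log_probs_new).mean()
    return surrogate + beta * kl_penalty
```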
Acknowledgments
This work is in part supported by the Natural Science Foundation of China (61876119), the Natural Science Foundation of Jiangsu (BK20181432), and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhang, L., Zhang, Z. (2020). Double Replay Buffers with Restricted Gradient. In: Yang, H., Pasupa, K., Leung, A.C.S., Kwok, J.T., Chan, J.H., King, I. (eds) Neural Information Processing. ICONIP 2020. Lecture Notes in Computer Science, vol 12533. Springer, Cham. https://doi.org/10.1007/978-3-030-63833-7_25
DOI: https://doi.org/10.1007/978-3-030-63833-7_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-63832-0
Online ISBN: 978-3-030-63833-7
eBook Packages: Computer Science, Computer Science (R0)