Abstract
As an important technique for preventing overfitting, regularization is widely used in supervised learning. However, it has not been systematically studied in deep reinforcement learning (deep RL). In this paper, we study the generalization of the deep Q-network (DQN) combined with mainstream regularization approaches, including l1, l2, and dropout. We evaluate the agent's performance not only in the original environments but also in parameter-varying environments, which differ in their parameters but pose the same type of task. Furthermore, we modify dropout to make it better suited to DQN, and propose a new dropout variant that speeds up the optimization of DQN. Experiments show that regularization helps deep RL achieve better performance in both original and parameter-varying environments when the number of samples is insufficient.
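As a minimal sketch of the setting the abstract describes (not the authors' code), the following PyTorch snippet shows how dropout, l2 (via weight decay), and an l1 penalty can be attached to a DQN-style Q-network. The layer sizes, dropout rate, regularization coefficients, and the `l1_penalty` helper are illustrative assumptions, since the paper's exact architecture and hyperparameters are not given here.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Hypothetical DQN Q-network with dropout on the hidden layers.

    Layer widths and the dropout rate p_drop are illustrative, not
    taken from the paper.
    """
    def __init__(self, state_dim, n_actions, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Dropout(p=p_drop),      # dropout regularization on hidden units
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Dropout(p=p_drop),
            nn.Linear(64, n_actions),  # one Q-value per action
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork(state_dim=4, n_actions=2)

# l2 regularization is applied through the optimizer's weight decay;
# the coefficient 1e-4 is an assumed value.
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3, weight_decay=1e-4)

def l1_penalty(model, lambda_l1=1e-5):
    """l1 regularization term (assumed coefficient) to be added to the
    temporal-difference loss before backpropagation."""
    return lambda_l1 * sum(p.abs().sum() for p in model.parameters())
```

In training, the l1 term would simply be added to the usual DQN temporal-difference loss before calling `backward()`, while dropout is active during training and disabled at evaluation via `q_net.eval()`.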
Acknowledgements
This work was supported by the National Natural Science Foundation of China (61873022, 61573052), the Beijing Natural Science Foundation (4182045), the China Postdoctoral Science Foundation (2018M640049), and the Fundamental Research Funds for the Central Universities (XK1802-4, ZY1839).
Cite this paper
Li, D., Lei, C., Jin, Q., Han, M. (2019). Regularization in DQN for Parameter-Varying Control Learning Tasks. In: Lu, H., Tang, H., Wang, Z. (eds.) Advances in Neural Networks – ISNN 2019. Lecture Notes in Computer Science, vol. 11555. Springer, Cham. https://doi.org/10.1007/978-3-030-22808-8_4