Abstract
Reinforcement learning makes it possible to learn a desired control policy in different environments without explicitly specifying the system dynamics. The model-free deep Q-learning algorithm has proven efficient on a large set of discrete-action tasks. Its extension to continuous control is usually handled with actor-critic methods, which approximate the policy with an additional actor network and use the Q function to speed up training of that policy network. An alternative is to discretize the action space, but this does not produce a smooth policy and does not scale to large action spaces. Deriving a continuous policy directly from the Q network requires optimizing the action input at every inference and training step; this is computationally expensive but yields an optimal, continuous action. A time-efficient optimization of the Q network's action input is therefore required to apply the method in practice. In this work, we implement an efficient action-derivation method that makes Q-learning usable in real-time continuous control tasks. We evaluate the algorithm on robotics control tasks from gym environments and compare it with modern continuous RL methods. The results show that in some cases the proposed approach learns a smooth continuous policy while keeping the implementation simplicity of the original discrete-action Q-learning algorithm.
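To make the action-derivation idea concrete, the following is a minimal sketch of obtaining a continuous action from a trained Q network by gradient ascent over the action input. It is an illustration under assumed names and hyperparameters (`q_net`, `derive_action`, the step count, learning rate, and action bounds are not from the paper), not the authors' implementation.

```python
import torch

# Hypothetical sketch: treat the action as a trainable input of the
# Q network and optimize it by gradient ascent on Q(s, a).
def derive_action(q_net, state, action_dim, steps=20, lr=0.1):
    """Return an action approximately maximizing Q(state, action)."""
    action = torch.zeros(action_dim, requires_grad=True)  # initial guess
    optimizer = torch.optim.Adam([action], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Negate Q so that minimizing the loss maximizes the Q value.
        loss = -q_net(torch.cat([state, action]))
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            action.clamp_(-1.0, 1.0)  # keep the action in its valid range
    return action.detach()
```

At inference time the derived action is executed directly; during training, a routine of this kind would also supply the maximizing action for the Bellman target, which is why the inner optimization must be time-efficient.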
Acknowledgments
The work is supported by RFBR project 18-38-20186\18.
Copyright information
© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Akhmetzyanov, A., Yagfarov, R., Gafurov, S., Ostanin, M., Klimchik, A. (2020). Continuous Control in Deep Reinforcement Learning with Direct Policy Derivation from Q Network. In: Ahram, T., Taiar, R., Gremeaux-Bader, V., Aminian, K. (eds) Human Interaction, Emerging Technologies and Future Applications II. IHIET 2020. Advances in Intelligent Systems and Computing, vol 1152. Springer, Cham. https://doi.org/10.1007/978-3-030-44267-5_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-44266-8
Online ISBN: 978-3-030-44267-5
eBook Packages: Engineering, Engineering (R0)