Abstract
In recent years, deep reinforcement learning has been widely applied to real-world problems such as robot control and autonomous driving. However, the sample inefficiency of deep reinforcement learning hinders its application to robot control. Moreover, complex real-world scenarios demand highly robust robot controllers, whose design must account for the influence of the external environment. These problems become more severe in environments with high-dimensional state-action spaces and sparse, delayed rewards. In this paper, we provide a systematic introduction to and summary of existing exploration methods. First, we introduce the primary exploration techniques and summarize the challenges agents face during exploration. Then, we classify existing exploration methods according to whether they generate additional reward bonuses and elaborate on the ideas behind the different methods. Finally, we discuss the challenges of applying deep reinforcement learning to robot control and the applicability of different exploration methods to attitude control tasks, ruling out methods unsuitable for attitude control from subsequent research.
Supported by the CAS Project for Young Scientists in Basic Research, Grant No. YSBR-040.
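To make the distinction between bonus-based and bonus-free exploration concrete, the sketch below shows how an intrinsic novelty bonus in the style of random network distillation (Burda et al., 2018) can be mixed into a sparse task reward. The network sizes, the mixing coefficient `beta`, and all identifiers are illustrative assumptions, not the implementation of any method surveyed here.

```python
import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    """Minimal sketch of a bonus-based exploration signal (RND-style).

    A fixed random target network defines an arbitrary feature map; a
    predictor network is trained to match it on visited states. The
    prediction error is large on rarely visited states, so it can serve
    as an intrinsic novelty bonus added to the extrinsic task reward.
    """

    def __init__(self, obs_dim: int, feat_dim: int = 64):
        super().__init__()
        # Fixed, randomly initialised target network (never trained).
        self.target = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim)
        )
        for p in self.target.parameters():
            p.requires_grad_(False)
        # Predictor network, trained elsewhere to match the target.
        self.predictor = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Per-state prediction error, used as the intrinsic bonus.
        return (self.predictor(obs) - self.target(obs)).pow(2).mean(dim=-1)


bonus_model = RNDBonus(obs_dim=8)
obs = torch.randn(32, 8)          # a batch of observed states
intrinsic = bonus_model(obs)      # novelty bonus per state
extrinsic = torch.zeros(32)       # sparse task reward (mostly zero)
beta = 0.1                        # assumed mixing coefficient
total_reward = extrinsic + beta * intrinsic.detach()
```

Bonus-free methods, by contrast, shape exploration through the policy or value estimates themselves (e.g., noise injection or uncertainty-aware action selection) and leave the reward signal untouched.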