Abstract
Sparse-reward environments are notoriously challenging for deep reinforcement learning (DRL) algorithms. Yet the prospect of solving intrinsically sparse-reward tasks end-to-end, without any additional reward engineering, is highly appealing. This aspiration has recently driven the development of numerous DRL algorithms able to handle reward sparsity to some extent. Some methods have gone one step further and have tackled sparse-reward tasks involving various kinds of distractors (e.g., a broken TV, a self-moving phantom object, and many more). In this work, we put forward two motivating new sparse-reward environments containing the so-far largely overlooked class of exploration-intensive distractors. Furthermore, we conduct a benchmark study revealing that state-of-the-art algorithms are not yet all-around suitable for solving our proposed environments.
We acknowledge the CINECA award under the ISCRA initiative, for the availability of high performance computing resources and support.
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ocana, J.M.C., Capobianco, R., Nardi, D. (2022). Exploration-Intensive Distractors: Two Environment Proposals and a Benchmarking. In: Bandini, S., Gasparini, F., Mascardi, V., Palmonari, M., Vizzari, G. (eds) AIxIA 2021 – Advances in Artificial Intelligence. AIxIA 2021. Lecture Notes in Computer Science(), vol 13196. Springer, Cham. https://doi.org/10.1007/978-3-031-08421-8_29
Print ISBN: 978-3-031-08420-1
Online ISBN: 978-3-031-08421-8