Structural and Compact Latent Representation Learning on Sparse Reward Environments

  • Conference paper
Intelligent Information and Database Systems (ACIIDS 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13996)


Abstract

To train an RL agent in a sparse-reward environment with image-based observations, the agent must both learn a good latent representation and follow an effective exploration strategy. Standard approaches such as the variational auto-encoder (VAE) can learn such a representation. However, these approaches only encode the input observations into a pre-defined latent distribution and do not take the dynamics of the environment into account. To improve training from high-dimensional input images, we extend the standard VAE framework to learn a compact latent representation that mimics the structure of the underlying Markov decision process. We further add an intrinsic reward based on the learned latent space to encourage exploratory actions in sparse-reward environments. The intrinsic reward is designed to direct the policy toward states that are distant in the latent space. Experiments on several gridworld environments with sparse rewards demonstrate the effectiveness of the proposed approach. Compared with other baselines, our method achieves more stable performance and better exploration coverage by exploiting the structure of the learned latent space.
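
The abstract describes two components: a VAE whose latent space is additionally shaped to reflect the dynamics of the underlying MDP, and an intrinsic reward that favours transitions covering a large distance in that latent space. This page does not reproduce the paper's equations or architecture, so the PyTorch sketch below only illustrates the general idea under assumptions of our own: a flattened-observation MLP encoder and decoder, a one-step latent transition model as the structural term, and a latent-distance bonus as the intrinsic reward. The names (StructuredVAE, intrinsic_reward) and the loss weights are hypothetical, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class StructuredVAE(nn.Module):
    """VAE with a latent transition model so the latent space mirrors the MDP."""

    def __init__(self, obs_dim, action_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),          # outputs [mu, logvar]
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, obs_dim),
        )
        # Predicts z_{t+1} from (z_t, a_t); training this jointly with the VAE
        # pushes the latent space to reflect the environment's dynamics.
        self.transition = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def encode(self, obs):
        mu, logvar = self.encoder(obs).chunk(2, dim=-1)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)          # reparameterisation trick
        return z, mu, logvar

    def loss(self, obs, action_onehot, next_obs, beta=1.0, lam=1.0):
        z, mu, logvar = self.encode(obs)
        z_next, _, _ = self.encode(next_obs)
        recon = F.mse_loss(self.decoder(z), obs)      # standard VAE terms
        kl = -0.5 * torch.mean(
            torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
        # Structural term: the predicted next latent should match the encoding
        # of the next observation actually reached in the environment.
        z_pred = self.transition(torch.cat([z, action_onehot], dim=-1))
        dyn = F.mse_loss(z_pred, z_next.detach())
        return recon + beta * kl + lam * dyn


def intrinsic_reward(model, obs, next_obs):
    """Bonus for transitions that travel far in latent space (exploration)."""
    with torch.no_grad():
        _, mu, _ = model.encode(obs)
        _, mu_next, _ = model.encode(next_obs)
    return torch.norm(mu_next - mu, dim=-1)

In practice such a bonus would be scaled and added to the sparse environment reward before being passed to a policy-gradient learner such as PPO; the abstract does not specify how the two terms are combined, so that choice is left open here.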


Acknowledgements

This material is based upon work supported by the Air Force Office of Scientific Research under award number FA2386-22-1-4026.

Author information


Corresponding author

Correspondence to Viet-Cuong Ta.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Le, BG., Hoang, TL., Kieu, HD., Ta, VC. (2023). Structural and Compact Latent Representation Learning on Sparse Reward Environments. In: Nguyen, N.T., et al. Intelligent Information and Database Systems. ACIIDS 2023. Lecture Notes in Computer Science, vol. 13996. Springer, Singapore. https://doi.org/10.1007/978-981-99-5837-5_4

  • DOI: https://doi.org/10.1007/978-981-99-5837-5_4

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-5836-8

  • Online ISBN: 978-981-99-5837-5

  • eBook Packages: Computer Science, Computer Science (R0)
