
Mix-up Consistent Cross Representations for Data-Efficient Reinforcement Learning



Abstract:

Deep reinforcement learning (RL) has achieved remarkable performance in sequential decision-making problems. However, it is a challenge for deep RL methods to extract task-relevant semantic information when interacting with limited data from the environment. In this paper, we propose Mix-up Consistent Cross Representations (MCCR), a novel self-supervised auxiliary task, which aims to improve data efficiency and encourage representation prediction. Specifically, we calculate the contrastive loss between low-dimensional and high-dimensional representations of different state observations to boost the mutual information between states, thus improving data efficiency. Furthermore, we employ a mix-up strategy to generate intermediate samples, increasing data diversity and the smoothness of representation prediction across nearby timesteps. Experimental results show that MCCR achieves competitive results compared with state-of-the-art approaches on complex control tasks in the DeepMind Control Suite, notably improving the ability of pretrained encoders to generalize to unseen tasks.
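The abstract names two ingredients: a mix-up step that blends pairs of observations, and a contrastive loss between low- and high-dimensional representations. The sketch below shows how these pieces commonly fit together; it is not the authors' exact formulation. The InfoNCE-style loss, the Beta-distributed mixing coefficient, and the names `mixup`, `info_nce`, `alpha`, and `temperature` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mixup(x: torch.Tensor, alpha: float = 0.5):
    """Blend each sample with a randomly permuted partner.

    Hypothetical mix-up variant: lam ~ Beta(alpha, alpha) interpolates
    between an observation and another drawn from the same batch.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1.0 - lam) * x[perm], perm, lam

def info_nce(z_low: torch.Tensor, z_high: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss between two sets of projections.

    z_low and z_high are (B, D) projections of the low- and
    high-dimensional representations; index-matched rows are the
    positive pairs, all other rows act as negatives.
    """
    z_low = F.normalize(z_low, dim=1)
    z_high = F.normalize(z_high, dim=1)
    logits = z_low @ z_high.t() / temperature            # (B, B) similarities
    labels = torch.arange(z_low.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

# Usage sketch: mix raw observations, encode both views, apply the loss.
B, D = 32, 128
obs = torch.randn(B, 9, 84, 84)                          # e.g. stacked frames
mixed_obs, perm, lam = mixup(obs)
z_low, z_high = torch.randn(B, D), torch.randn(B, D)     # stand-ins for encoder outputs
loss = info_nce(z_low, z_high)
```

In this reading, mixing observations supplies the intermediate samples the abstract describes, while the contrastive term ties the two representation views together; both views would need a projection head mapping them to a shared dimension D before the loss is computed.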
Date of Conference: 18-23 July 2022
Date Added to IEEE Xplore: 30 September 2022
Conference Location: Padua, Italy

