
Model-based Adversarial Imitation Learning from Demonstrations and Human Reward


Abstract:

Reinforcement learning (RL) can potentially be applied to real-world robot control in complex and uncertain environments. However, it is difficult or even impractical to design an efficient reward function for many tasks, especially in large, high-dimensional environments. Generative adversarial imitation learning (GAIL), a general model-free imitation learning method, allows robots to learn policies directly from expert trajectories in such environments. However, GAIL remains sample inefficient in terms of environmental interaction. To address this problem, we propose model-based adversarial imitation learning from demonstrations and human reward (MAILDH), a novel model-based interactive imitation framework that combines the advantages of GAIL, interactive RL, and model-based RL. We evaluated our method on eight physics-based discrete and continuous control benchmark tasks. Our results show that MAILDH greatly improves sample efficiency and robustness compared to the original GAIL.
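The GAIL idea the abstract builds on can be sketched in a few lines: a discriminator is trained to separate expert state-action samples from the policy's samples, and its confusion is recycled as a surrogate reward for the policy. The sketch below is a hedged toy illustration, not the paper's MAILDH implementation: the data, the logistic-regression discriminator, and the learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for state-action features: "expert" samples cluster
# near +1, current "policy" samples near -1 (illustrative data only).
expert = rng.normal(loc=1.0, scale=0.3, size=(256, 2))
policy = rng.normal(loc=-1.0, scale=0.3, size=(256, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic-regression discriminator D(x) = sigmoid(w.x + b), trained
# with binary cross-entropy to output 1 on expert data, 0 on policy data.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(200):
    x = np.concatenate([expert, policy])
    y = np.concatenate([np.ones(len(expert)), np.zeros(len(policy))])
    p = sigmoid(x @ w + b)
    grad_w = x.T @ (p - y) / len(x)  # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

def imitation_reward(x):
    """GAIL-style surrogate reward: large where D mistakes samples for expert."""
    d = sigmoid(x @ w + b)
    return -np.log(1.0 - d + 1e-8)

# A policy maximizing this reward is pushed toward expert-like behavior.
print(imitation_reward(expert).mean(), imitation_reward(policy).mean())
```

In full GAIL this reward feeds a policy-gradient update (e.g. TRPO or PPO), and the discriminator and policy are trained alternately; the paper's contribution is reducing how many real environment interactions that loop requires.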
Date of Conference: 01-05 October 2023
Date Added to IEEE Xplore: 13 December 2023
Conference Location: Detroit, MI, USA

