
Adversarial Inverse Reinforcement Learning With Self-Attention Dynamics Model



Abstract:

In many real-world applications where specifying a proper reward function is difficult, it is desirable to learn policies from expert demonstrations. Adversarial Inverse Reinforcement Learning (AIRL) is one of the most common approaches for learning from demonstrations. However, because of its stochastic policy, the computation graph of AIRL is not end-to-end differentiable like that of Generative Adversarial Networks (GANs), which forces the use of high-variance gradient estimation methods and large sample sizes. In this work, we propose Model-based Adversarial Inverse Reinforcement Learning (MAIRL), an end-to-end model-based policy optimization method with self-attention. By adopting a self-attention dynamics model to make the computation graph end-to-end differentiable, MAIRL achieves low-variance policy optimization. We evaluate our approach thoroughly on various control tasks. The experimental results show that our approach not only learns near-optimal rewards and policies that match expert behavior but also outperforms previous inverse reinforcement learning algorithms in real-robot experiments. Code is available at https://decisionforce.github.io/MAIRL/.
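
The core idea in the abstract, making the policy-optimization graph differentiable by backpropagating through a learned self-attention dynamics model, can be sketched roughly as below. This is only an illustrative sketch, not the authors' released implementation (see the project page above for that); the PyTorch `MultiheadAttention` layer, the network sizes, the residual next-state prediction, the deterministic/reparameterized policy, and the names `SelfAttentionDynamics`, `policy_loss`, and `reward_fn` are all assumptions introduced here for clarity.

```python
# Illustrative sketch only; not the MAIRL reference code.
import torch
import torch.nn as nn


class SelfAttentionDynamics(nn.Module):
    """Learned dynamics model f(s, a) -> s' built around a self-attention block.

    Because the model is a differentiable function of the action, gradients of a
    learned reward can flow back through the predicted next state into the policy,
    which is the property the abstract calls an end-to-end differentiable graph.
    """

    def __init__(self, state_dim: int, action_dim: int, embed_dim: int = 64, n_heads: int = 4):
        super().__init__()
        # Embed state and action as a two-token sequence for self-attention.
        self.state_embed = nn.Linear(state_dim, embed_dim)
        self.action_embed = nn.Linear(action_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(), nn.Linear(128, state_dim)
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        tokens = torch.stack([self.state_embed(state), self.action_embed(action)], dim=1)
        attended, _ = self.attn(tokens, tokens, tokens)  # self-attention over the two tokens
        features = attended.flatten(start_dim=1)         # (batch, 2 * embed_dim)
        return state + self.head(features)               # residual next-state prediction


def policy_loss(policy, reward_fn, dynamics, states, horizon: int = 5):
    """Differentiable short-horizon rollout: gradients flow through the dynamics
    model instead of relying on high-variance score-function estimators."""
    total = 0.0
    s = states
    for _ in range(horizon):
        a = policy(s)                    # assumed deterministic / reparameterized action
        total = total + reward_fn(s, a)  # learned reward, e.g. from an AIRL-style discriminator
        s = dynamics(s, a)               # predicted next state stays in the autograd graph
    return -total.mean()
```

In such a setup the dynamics model would be fit on environment transitions by regression, while `policy_loss` is minimized by ordinary backpropagation, which is the low-variance alternative to likelihood-ratio policy gradients that the abstract contrasts against.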
Published in: IEEE Robotics and Automation Letters ( Volume: 6, Issue: 2, April 2021)
Page(s): 1880 - 1886
Date of Publication: 23 February 2021

