Abstract
As a powerful approach to sequential decision-making problems, imitation learning (IL) aims to learn a policy that reproduces expert behavior by imitating demonstrations. However, the quality of the demonstrations directly limits the performance of the learned policy. To address this problem, we propose self-adaptive inverse soft-Q learning for imitation (SAIQL). SAIQL introduces a novel three-level buffer system: alongside the expert buffer and the normal buffer, it maintains an online excellent buffer that stores high-performing trajectories collected during interaction. Once the online excellent buffer holds as much data as the expert buffer, its contents are transferred to the expert buffer and the online excellent buffer is cleared, so the demonstrations in the expert buffer are continuously improved. Finally, we compare SAIQL with state-of-the-art IL methods on both continuous-control and Atari tasks. The experimental results demonstrate the superiority of SAIQL: it improves both the quality of expert demonstrations and the utilization of trajectories.
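To make the buffer mechanism concrete, below is a minimal Python sketch of the three-level buffer system described above. It is an illustration under stated assumptions, not the paper's implementation: the class and parameter names (`ThreeLevelBuffer`, `return_threshold`), the return-threshold criterion for "superior performance", and the rule that promoted trajectories replace the old demonstrations are all assumptions inferred from the abstract.

```python
import random
from collections import deque


class ThreeLevelBuffer:
    """Sketch of a three-level buffer system: an expert buffer,
    an online excellent buffer, and a normal replay buffer."""

    def __init__(self, expert_trajectories, capacity=10_000):
        self.expert = list(expert_trajectories)   # initial demonstrations
        self.excellent = []                       # online excellent buffer
        self.normal = deque(maxlen=capacity)      # ordinary interaction data

    def store(self, trajectory, episode_return, return_threshold):
        """Route one finished trajectory. Using an episode-return threshold
        to decide 'superior performance' is an assumption of this sketch."""
        self.normal.append(trajectory)
        if episode_return >= return_threshold:
            self.excellent.append(trajectory)
        # Once the excellent buffer holds as much data as the expert buffer,
        # promote its trajectories into the expert buffer and clear it.
        # Replacing (rather than merging with) the old demonstrations is
        # our reading of the abstract, not a confirmed detail.
        if self.expert and len(self.excellent) >= len(self.expert):
            self.expert = self.excellent
            self.excellent = []

    def sample(self, batch_size):
        """Draw half a batch from demonstrations, half from replay data."""
        k = max(1, batch_size // 2)
        expert_batch = random.sample(self.expert, min(k, len(self.expert)))
        normal_batch = random.sample(list(self.normal), min(k, len(self.normal)))
        return expert_batch, normal_batch
```

In an inverse soft-Q style training loop, `store` would be called at the end of each episode and `sample` would supply the expert and non-expert batches for the Q-function update; this usage pattern is likewise an assumption for illustration.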
Supported by the National Natural Science Foundation of China (61772355, 61702055, 61876217, 62176175) and by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).