DOI: 10.1145/3522749.3523075
Research article

Offline reinforcement learning application in robotic manipulation with a COG method case

Published: 13 April 2022

ABSTRACT

Artificial intelligence now finds applications across many industrial fields. Reinforcement learning (RL) is one of the most active topics in artificial intelligence, including robotics, and it is an important learning method in the field of robotic manipulation. RL training policies can be divided into online learning and offline learning. Offline RL algorithms, in particular, have great potential for transforming large datasets into powerful decision-making engines. Since most robot applications involve collecting data from scratch for each new task, combining offline learning with online learning makes training more efficient and convenient. The aim of this paper is to clearly introduce the application of offline reinforcement learning in the field of robotic manipulation. The basic formulation of reinforcement learning covers two points: first, the Markov Decision Process, and second, one of its solution methods, policy gradients. Then, by analyzing an application of offline learning in robotic manipulation, the COG algorithm, this paper examines how offline learning incorporates prior data to learn new robotic skills and uses this method to address specific robotic problems such as sample efficiency. The results show that offline learning has important research value in the field of robotic manipulation: it reduces training time, makes the training process more efficient, and fully demonstrates its advantages in addressing the sample-efficiency problem in robotics.
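The two formal ingredients named in the abstract can be made concrete. In a Markov Decision Process (S, A, P, r, γ), the agent seeks a policy π_θ that maximizes the expected discounted return J(θ), and the policy-gradient method estimates the gradient of that objective directly. The following is the standard REINFORCE form of the policy gradient, given here in textbook notation rather than as an equation reproduced from the paper:

\[
\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[ \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{R}_t \right], \qquad \hat{R}_t = \sum_{t'=t}^{T} \gamma^{t'-t}\, r(s_{t'}, a_{t'})
\]

COG learns from a fixed prior dataset by building on conservative Q-learning (CQL; reference 18 below), which penalizes Q-values for actions unsupported by the data. The sketch below illustrates one CQL-style update for a discrete-action Q-network; it is a minimal illustration of the general technique under stated assumptions, not the authors' implementation, and names such as QNetwork, cql_update, and the batch layout are illustrative.

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps an observation to one Q-value per discrete action (illustrative)."""
    def __init__(self, obs_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def cql_update(q_net, target_net, optimizer, batch, gamma=0.99, alpha=1.0):
    # The batch comes from a fixed offline dataset; no environment interaction.
    obs, actions, rewards, next_obs, dones = batch

    # Standard TD target, computed with a frozen target network.
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * next_q

    q_values = q_net(obs)                                        # (B, num_actions)
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    td_loss = nn.functional.mse_loss(q_taken, target)

    # Conservative penalty (CQL): push down Q on all actions via logsumexp,
    # push up Q on the actions actually present in the dataset.
    cql_penalty = (torch.logsumexp(q_values, dim=1) - q_taken).mean()

    loss = td_loss + alpha * cql_penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The conservative penalty is what makes purely offline training stable: without it, the max in the TD target tends to exploit overestimated Q-values for actions the dataset never contains, which is the sample-efficiency and distribution-shift problem the abstract alludes to.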

References

  1. Yinlei Wen, Huaguang Zhang, Hanguang Su, and He Ren. 2020. Optimal tracking control for non-zero-sum games of linear discrete-time systems via off-policy reinforcement learning. Optimal Control Applications and Methods.
  2. Hiroyuki Yoshida. 2019. Deep Learning and AlphaGo. Brain and Nerve (Shinkei Kenkyu no Shinpo).
  3. Priya Shukla, Hitesh Kumar, and G. C. Nandi. 2021. Robotic grasp manipulation using evolutionary computing and deep reinforcement learning. Intelligent Service Robotics.
  4. Michelle A. Lee, Carlos Florensa, Jonathan Tremblay, Nathan Ratliff, Animesh Garg, Fabio Ramos, and Dieter Fox. 2020. Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning. arXiv preprint arXiv:2005.10872v2.
  5. Brijen Thananjeyan, Ashwin Balakrishna, Suraj Nair, Michael Luo, Krishnan Srinivasan, Minho Hwang, Joseph E. Gonzalez, Julian Ibarz, Chelsea Finn, and Ken Goldberg. 2020. Recovery RL: Safe Reinforcement Learning with Learned Recovery Zones. arXiv preprint arXiv:2010.15920v1.
  6. Fei Guo, Xiaowei Zhou, Jiahuan Liu, Yun Zhang, Dequn Li, and Huamin Zhou. 2019. A reinforcement learning decision model for online process parameters optimization from offline data in injection molding. Applied Soft Computing.
  7. Felix Berkenkamp, Matteo Turchetta, Angela Schoellig, and Andreas Krause. 2017. Safe model-based reinforcement learning with stability guarantees. In Advances in Neural Information Processing Systems, 908–918.
  8. Di Cao, Junbo Zhao, Guozhou Zhang, Bin Zhang, Zhou Liu, Zhe Chen, and Frede Blaabjerg. 2020. Reinforcement Learning and Its Applications: A Review. Journal of Modern Power Systems and Clean Energy.
  9. Junzi Zhang, Jongho Kim, Brendan O'Donoghue, and Stephen Boyd. 2020. Sample Efficient Reinforcement Learning with REINFORCE. arXiv preprint arXiv:2010.11364v2.
  10. Justin Fu, Aviral Kumar, Matthew Soh, and Sergey Levine. 2019. Diagnosing bottlenecks in deep Q-learning algorithms. arXiv preprint arXiv:1902.10250.
  11. Scott Fujimoto, David Meger, and Doina Precup. 2018. Off-policy deep reinforcement learning without exploration. arXiv preprint arXiv:1812.02900.
  12. Serkan Cabi, Sergio Gómez Colmenarejo, Alexander Novikov, Ksenia Konyushkova, Scott Reed, Rae Jeong, Konrad Żołna, Yusuf Aytar, David Budden, Mel Vecerik, et al. 2019. A framework for data-driven robotics. arXiv preprint arXiv:1909.12200.
  13. Avi Singh, Albert Yu, Jonathan Yang, Jesse Zhang, Aviral Kumar, and Sergey Levine. 2020. COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning. arXiv preprint arXiv:2010.14500v1.
  14. Jun Jin, Daniel Graves, Cameron Haigh, Jun Luo, and Martin Jagersand. 2020. Offline Learning of Counterfactual Perception as Prediction for Real-World Robotic Reinforcement Learning. arXiv preprint arXiv:2011.05857v1.
  15. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. 2020. Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems. arXiv preprint arXiv:2005.01643v3.
  16. Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. 2018. Soft actor-critic algorithms and applications. Technical report.
  17. Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. 2019. Stabilizing off-policy Q-learning via bootstrapping error reduction. In NeurIPS.
  18. Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. 2020. Conservative Q-learning for offline reinforcement learning. arXiv preprint arXiv:2006.04779.

Published in

CCEAI '22: Proceedings of the 6th International Conference on Control Engineering and Artificial Intelligence
March 2022, 130 pages
ISBN: 9781450385916
DOI: 10.1145/3522749
Copyright © 2022 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

