DOI: 10.1145/3205651.3208295
research-article

Model parameter adaptive instance-based policy optimization for episodic control tasks of nonholonomic systems

Published: 6 July 2018

ABSTRACT

Evolutionary Computation (EC) is attracting growing attention in Reinforcement Learning (RL), with successful applications such as robot control. The Instance-Based Policy (IBP) is a promising alternative to policy representations based on Artificial Neural Networks (ANNs). The IBP has been reported to be superior to continuous policy representations such as ANNs in the stabilization control of nonholonomic systems, owing to its bang-bang type control and its understandability. A difficulty in applying EC-based policy optimization to an RL task is choosing appropriate hyper-parameters, such as the network structure of an ANN and the parameters of the EC method. The same applies to the IBP, where the critical parameter is the number of instances, which determines model flexibility. In this paper, we propose a novel RL method combining the IBP representation with optimization by the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a state-of-the-art general-purpose search algorithm for black-box continuous optimization. The proposed method, called IBP-CMA, is a direct policy search that adapts the number of instances during the learning process and activates instances that do not contribute to the output. In simulations, IBP-CMA is compared with CMA-TWEANN, an ANN-based RL method.
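To make the representation concrete, below is a minimal sketch of an instance-based policy and of direct policy search over its parameters with CMA-ES (via Nikolaus Hansen's `cma` package). The class name, the toy dynamics, the fixed instance count, and the fixed action labels are illustrative assumptions, not taken from the paper; in particular, the adaptive instance count and the treatment of non-contributing instances that IBP-CMA itself introduces are not reproduced here.

```python
import numpy as np
import cma  # pycma: Nikolaus Hansen's CMA-ES implementation (pip install cma)

# An instance-based policy (IBP) stores a set of "instances", each pairing a
# reference point in state space with a discrete action. Acting looks up the
# nearest instance and emits its action, which yields the bang-bang type
# control described in the abstract. All names here are illustrative.
class InstancePolicy:
    def __init__(self, centers, actions):
        self.centers = np.asarray(centers, dtype=float)  # (n_instances, state_dim)
        self.actions = np.asarray(actions)               # one action per instance

    def act(self, state):
        # Nearest-instance lookup by Euclidean distance.
        dists = np.linalg.norm(self.centers - state, axis=1)
        return self.actions[np.argmin(dists)]

N_INSTANCES, STATE_DIM = 4, 2
ACTION_LABELS = np.array([+1.0, -1.0, +1.0, -1.0])  # fixed labels, for brevity

def episodic_cost(genome):
    """Decode a flat genome into instance centers, roll out one episode on a
    toy stand-in task, and return a cost for CMA-ES to minimize."""
    policy = InstancePolicy(genome.reshape(N_INSTANCES, STATE_DIM), ACTION_LABELS)
    state, cost = np.array([1.0, -1.0]), 0.0
    for _ in range(100):
        a = policy.act(state)
        state = 0.95 * state + 0.1 * np.array([a, -a])  # placeholder dynamics
        cost += np.linalg.norm(state)                    # drive state to origin
    return cost

# Direct policy search: CMA-ES adapts a Gaussian over the flattened centers.
es = cma.CMAEvolutionStrategy(np.zeros(N_INSTANCES * STATE_DIM), 0.5,
                              {'maxiter': 50, 'verbose': -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [episodic_cost(np.asarray(g)) for g in candidates])
print(es.result.fbest)  # best episodic cost found
```

Note that the genome above has a fixed length; handling a variable number of instances during learning is precisely the model-parameter adaptation that distinguishes IBP-CMA from this plain sketch.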

References

  1. Dirk V. Arnold and Nikolaus Hansen. 2010. Active Covariance Matrix Adaptation for the (1+1)-CMA-ES. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation. ACM, 385--392.
  2. Konstantinos Chatzilygeroudis, Roberto Rama, Rituraj Kaushik, Dorian Goepp, Vassilis Vassiliades, and Jean-Baptiste Mouret. 2017. Black-box Data-efficient Policy Search for Robotics. In IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 51--58.
  3. Kokolo Ikeda. 2005. Exemplar-based Direct Policy Search with Evolutionary Optimization. In 2005 IEEE Congress on Evolutionary Computation, Vol. 3. IEEE, 2357--2364.
  4. Kokolo Ikeda, Shigenobu Kobayashi, and Hajime Kita. 2010. Exemplar-Based Policy with Selectable Strategies and its Optimization Using GA. Transactions of the Japanese Society for Artificial Intelligence 25, 2 (2010), 351--362. (In Japanese.)
  5. Kokolo Ikeda and Isao Ono. 2013. Instance Based Policy Representation and Its Evolutionary Optimization (Special Issue: New Trends of Population-Based Machine Learning). Systems, Control and Information 57, 10 (2013), 415--420. (In Japanese.)
  6. Jan Hendrik Metzen, Mark Edgington, Yohannes Kassahun, and Frank Kirchner. 2008. Analysis of an Evolutionary Reinforcement Learning Method in a Multiagent Domain. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, Vol. 1. International Foundation for Autonomous Agents and Multiagent Systems, 291--298.
  7. A. Miyamae, Jun Sakuma, Isao Ono, and Shigenobu Kobayashi. 2009. Optimization of Instance-Based Policy Based on Real-Coded Genetic Algorithms. In IEEE Conference on Soft Computing in Industrial Applications (SMCia '08). IEEE, 338--343.
  8. David E. Moriarty, Alan C. Schultz, and John J. Grefenstette. 1999. Evolutionary Algorithms for Reinforcement Learning. Journal of Artificial Intelligence Research 11 (1999), 241--276.
  9. Hirotaka Moriguchi and Shinichi Honiden. 2012. CMA-TWEANN: Efficient Optimization of Neural Networks via Self-adaptation and Seamless Augmentation. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation. ACM, 903--910.
  10. Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. 2017. Evolution Strategies as a Scalable Alternative to Reinforcement Learning. arXiv:1703.03864.
  11. Kenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci. 2009. A Hypercube-based Encoding for Evolving Large-scale Neural Networks. Artificial Life 15, 2 (2009), 185--212.
  12. Kenneth O. Stanley and Risto Miikkulainen. 2002. Evolving Neural Networks through Augmenting Topologies. Evolutionary Computation 10, 2 (2002), 99--127.
  13. Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA.
  14. Chikao Tsuchiya, Yusuke Shiokawa, Kokolo Ikeda, Jun Sakuma, Isao Ono, and Shigenobu Kobayashi. 2006. SLIP: A Sophisticated Learner for Instance-based Policy using Hybrid GA. Transactions of the Society of Instrument and Control Engineers 42, 12 (2006), 1344--1352. (In Japanese.)
  15. Daniel Urieli, Patrick MacAlpine, Shivaram Kalyanakrishnan, Yinon Bentor, and Peter Stone. 2011. On Optimizing Interdependent Skills: A Case Study in Simulated 3D Humanoid Robot Soccer. In The 10th International Conference on Autonomous Agents and Multiagent Systems, Vol. 2. International Foundation for Autonomous Agents and Multiagent Systems, 769--776.
  16. Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine Learning 8, 3--4 (1992), 279--292.
  17. Shimon Whiteson and Peter Stone. 2006. Evolutionary Function Approximation for Reinforcement Learning. Journal of Machine Learning Research 7 (2006), 877--917.

Published in

GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference Companion
July 2018, 1968 pages
ISBN: 9781450357647
DOI: 10.1145/3205651

        Copyright © 2018 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
