ABSTRACT
Evolutionary Computation (EC) has attracted increasing attention in Reinforcement Learning (RL), with successful applications such as robot control. The Instance-Based Policy (IBP) is a promising alternative to policy representations based on Artificial Neural Networks (ANNs). The IBP has been reported to outperform continuous policy representations such as ANNs in the stabilization control of non-holonomic systems, owing to its bang-bang-type control outputs and its understandability. A difficulty in applying EC-based policy optimization to an RL task is choosing appropriate hyper-parameters, such as the network structure of an ANN and the parameters of the EC algorithm. The same applies to the IBP, whose critical parameter is the number of instances, which determines model flexibility. In this paper, we propose a novel RL method that combines the IBP representation with optimization by the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a state-of-the-art general-purpose search algorithm for black-box continuous optimization. The proposed method, called IBP-CMA, is a direct policy search that adapts the number of instances during the learning process and activates instances that do not contribute to the output. In simulations, IBP-CMA is compared with CMA-TWEANN, an ANN-based RL method.
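To make the representation concrete, the following is a minimal sketch of an instance-based policy optimized with CMA-ES. It is an illustration under stated assumptions, not the paper's IBP-CMA: the instance count is fixed here rather than adapted, each instance is hard-wired to one of two bang-bang actions, the environment interface (`env.reset`, `env.step`) is a generic episodic-task stand-in, and CMA-ES is taken from the pycma package.

```python
import numpy as np
import cma  # pycma; assumed available via `pip install cma`

N_INSTANCES = 8   # fixed for illustration; the paper's IBP-CMA adapts this number
STATE_DIM = 4     # e.g. a cart-pole-like state vector
ACTIONS = np.array([-1.0, 1.0])  # bang-bang action set

def ibp_action(params, state):
    """Return the action of the instance nearest to `state`.

    `params` flattens N_INSTANCES reference states; in this sketch,
    instance i is hard-wired to ACTIONS[i % len(ACTIONS)].
    """
    refs = params.reshape(N_INSTANCES, STATE_DIM)
    nearest = int(np.argmin(np.linalg.norm(refs - state, axis=1)))
    return ACTIONS[nearest % len(ACTIONS)]

def neg_return(params, env, horizon=500):
    """Negated cumulative episode reward, since CMA-ES minimizes."""
    state, total = env.reset(), 0.0
    for _ in range(horizon):
        state, reward, done = env.step(ibp_action(params, state))
        total += reward
        if done:
            break
    return -total

def train(env):
    x0 = np.zeros(N_INSTANCES * STATE_DIM)  # initial instance placement
    es = cma.CMAEvolutionStrategy(x0, 0.5)  # sigma0 = 0.5, default popsize
    while not es.stop():
        xs = es.ask()
        es.tell(xs, [neg_return(np.asarray(x), env) for x in xs])
    return np.asarray(es.result.xbest).reshape(N_INSTANCES, STATE_DIM)
```

Flattening all instance coordinates into one real-valued vector is what lets a general-purpose continuous optimizer such as CMA-ES search the policy space directly; the paper's mechanisms for adapting the number of instances and activating non-contributing ones would sit on top of this basic loop.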
- Dirk V. Arnold and Nikolaus Hansen. 2010. Active Covariance Matrix Adaptation for the (1+1)-CMA-ES. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation. ACM, 385--392.
- Konstantinos Chatzilygeroudis, Roberto Rama, Rituraj Kaushik, Dorian Goepp, Vassilis Vassiliades, and Jean-Baptiste Mouret. 2017. Black-box Data-efficient Policy Search for Robotics. In IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 51--58.
- Kokolo Ikeda. 2005. Exemplar-based Direct Policy Search with Evolutionary Optimization. In 2005 IEEE Congress on Evolutionary Computation, Vol. 3. IEEE, 2357--2364.
- Kokolo Ikeda, Shigenobu Kobayashi, and Hajime Kita. 2010. Exemplar-Based Policy with Selectable Strategies and its Optimization Using GA. Transactions of the Japanese Society for Artificial Intelligence 25, 2 (2010), 351--362. (in Japanese).
- Kokolo Ikeda and Isao Ono. 2013. Instance Based Policy Representation and Its Evolutionary Optimization (Special Issue: New Trends of Population-Based Machine Learning). Systems, Control and Information 57, 10 (2013), 415--420. (in Japanese).
- Jan Hendrik Metzen, Mark Edgington, Yohannes Kassahun, and Frank Kirchner. 2008. Analysis of an Evolutionary Reinforcement Learning Method in a Multiagent Domain. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, Vol. 1. International Foundation for Autonomous Agents and Multiagent Systems, 291--298.
- A. Miyamae, Jun Sakuma, Isao Ono, and Shigenobu Kobayashi. 2009. Optimization of Instance-Based Policy Based on Real-Coded Genetic Algorithms. In IEEE Conference on Soft Computing in Industrial Applications, SMCia '08. IEEE, 338--343.
- David E. Moriarty, Alan C. Schultz, and John J. Grefenstette. 1999. Evolutionary Algorithms for Reinforcement Learning. Journal of Artificial Intelligence Research 11 (1999), 241--276.
- Hirotaka Moriguchi and Shinichi Honiden. 2012. CMA-TWEANN: Efficient Optimization of Neural Networks via Self-adaptation and Seamless Augmentation. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation. ACM, 903--910.
- Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. 2017. Evolution Strategies as a Scalable Alternative to Reinforcement Learning. arXiv preprint arXiv:1703.03864 (2017).
- Kenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci. 2009. A Hypercube-based Encoding for Evolving Large-scale Neural Networks. Artificial Life 15, 2 (2009), 185--212.
- Kenneth O. Stanley and Risto Miikkulainen. 2002. Evolving Neural Networks through Augmenting Topologies. Evolutionary Computation 10, 2 (2002), 99--127.
- Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. Vol. 1. MIT Press, Cambridge, MA.
- Chikao Tsuchiya, Yusuke Shiokawa, Kokolo Ikeda, Jun Sakuma, Isao Ono, and Shigenobu Kobayashi. 2006. SLIP: A Sophisticated Learner for Instance-based Policy using Hybrid GA. Transactions of the Society of Instrument and Control Engineers 42, 12 (2006), 1344--1352. (in Japanese).
- Daniel Urieli, Patrick MacAlpine, Shivaram Kalyanakrishnan, Yinon Bentor, and Peter Stone. 2011. On Optimizing Interdependent Skills: A Case Study in Simulated 3D Humanoid Robot Soccer. In The 10th International Conference on Autonomous Agents and Multiagent Systems, Vol. 2. International Foundation for Autonomous Agents and Multiagent Systems, 769--776.
- Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine Learning 8, 3--4 (1992), 279--292.
- Shimon Whiteson and Peter Stone. 2006. Evolutionary Function Approximation for Reinforcement Learning. Journal of Machine Learning Research 7 (2006), 877--917.
Index Terms
- Model parameter adaptive instance-based policy optimization for episodic control tasks of nonholonomic systems