Abstract:
A learning strategy in Learning Classifier Systems (LCSs) defines how classifiers cover the state-action space of a problem. Previous analyses of classification problems have empirically claimed that an adequate learning strategy can be chosen depending on the type of noise in the problem. This claim remains arguable in two respects. First, learning strategies have not been compared on reinforcement learning problems with different types of noise. Second, even if the claim holds, a further question is how classifiers should cover the state-action space so as to improve the stability of LCS performance under as many types of noise as possible. This paper empirically addresses both issues for one version of LCS, the XCS classifier system. We present a new learning strategy for LCSs and evaluate it against the existing learning strategies on a reinforcement learning problem. Our strategy covers all state-action pairs but assigns more classifiers to the highest-return action at each state than to the other actions. Our results support the claim that the existing learning strategies depend on the type of noise in reinforcement learning problems, whereas our strategy improves the stability of XCS performance over the existing strategies under all types of noise employed in this paper.
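To make the covering idea concrete, the following is a minimal sketch (not the authors' implementation) of a covering routine that creates one classifier per action at a state and adds extra copies for the action whose estimated return is currently highest. The names `Classifier`, `cover_state`, `estimated_return`, and `n_extra` are hypothetical simplifications; real XCS covering also involves generalization of conditions, fitness, and other parameters omitted here.

```python
# Hedged sketch of the proposed covering strategy: every state-action pair
# receives at least one classifier, but the action with the highest estimated
# return at the state receives extra copies. Names are hypothetical.

import random
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: tuple   # matches a single state in this simplified sketch
    action: int
    prediction: float  # initial payoff prediction

def cover_state(state, actions, estimated_return, n_extra=2, init_prediction=10.0):
    """Create one classifier per action, plus n_extra duplicates for the
    action whose estimated return is currently highest (ties broken randomly)."""
    best = max(actions, key=lambda a: (estimated_return(state, a), random.random()))
    population = [Classifier((state,), a, init_prediction) for a in actions]
    population += [Classifier((state,), best, init_prediction) for _ in range(n_extra)]
    return population

# Example: two actions, action 1 currently looks better, so it gets
# 1 + n_extra classifiers while action 0 gets exactly one.
pop = cover_state(state=3, actions=[0, 1],
                  estimated_return=lambda s, a: float(a))
print([c.action for c in pop])  # e.g. [0, 1, 1, 1]
```

The design intent, as described in the abstract, is that full coverage keeps every action's payoff estimate available (unlike best-action-only strategies), while the allocation bias concentrates learning resources on the action that matters most for the policy.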
Published in: 2015 IEEE Congress on Evolutionary Computation (CEC)
Date of Conference: 25-28 May 2015
Date Added to IEEE Xplore: 14 September 2015