ABSTRACT
The XCS classifier system adaptively controls the generality of rule conditions through its rule-discovery process. However, there has been no proof that the rule generality eventually converges to its optimum value, even under idealized assumptions. This paper presents a convergence analysis of rule generality in the rule-discovery process under the ternary-alphabet coding. Our analysis provides the first proof that the average rule generality of the rules in a population can converge to its optimum value under certain assumptions. This proof allows us to conclude mathematically that the XCS framework exerts a natural pressure to explore rules toward optimum rules whenever our derived conditions are satisfied. In addition, our theoretical result yields a rough guideline for setting the maximum population size, the mutation rate, and the GA threshold, improving both the convergence speed of the rule generality and the performance of XCS.
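To make the quantity under analysis concrete, the following sketch computes rule generality for ternary-coded conditions, assuming the standard XCS definition in which generality is the fraction of don't-care ('#') symbols in a condition (the complement of specificity); the condition strings and population are illustrative, not from the paper.

```python
def generality(condition: str) -> float:
    """Fraction of don't-care ('#') symbols in a ternary condition
    over the alphabet {0, 1, '#'}; the complement of specificity."""
    return condition.count('#') / len(condition)

def average_generality(population: list[str]) -> float:
    """Average rule generality over a population of conditions --
    the statistic whose convergence the paper analyzes."""
    return sum(generality(c) for c in population) / len(population)

# Hypothetical 4-bit population for illustration.
pop = ['1#0#', '####', '10##']
print(average_generality(pop))  # (0.5 + 1.0 + 0.5) / 3 = 2/3
```

In a full XCS implementation this average would drift over rule-discovery invocations as selection, crossover, and mutation reshape the population; the paper's conditions on the maximum population size, mutation rate, and GA threshold govern whether that drift converges to the optimum generality.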