An Analysis of the Ingredients for Learning Interpretable Symbolic Regression Models with Human-in-the-loop and Genetic Programming

Published: 23 February 2024

Abstract

Interpretability is critical to ensuring a fair and responsible use of machine learning (ML) in high-stakes applications. Genetic programming (GP) has been used to obtain interpretable ML models because it operates at the level of functional building blocks: if these building blocks are interpretable, there is a chance that their composition (i.e., the entire ML model) is also interpretable. However, the degree to which a model is interpretable depends on the observer. Motivated by this, we study a recently introduced human-in-the-loop system that allows the user to steer GP’s generation process toward their preferences, which are learned online by an artificial neural network (ANN). We focus on the generation of ML models as analytical functions (i.e., symbolic regression), as this is a key problem in interpretable ML, and make a two-fold contribution. First, we devise more general representations of the ML models for the ANN to learn from, enabling the application of the system to a wider range of problems. Second, we provide a deeper analysis of the system’s components. To this end, we propose an incremental experimental evaluation aimed at (1) studying how effectively an ANN can capture the interpretability perceived by simulated users, (2) investigating how GP’s outcome is affected across different simulated user feedback profiles, and (3) determining whether human participants would prefer models that were generated with or without their involvement. Our results shed light on the pros and cons of using a human-in-the-loop approach to discover interpretable ML models with GP.
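To make the described loop concrete, below is a minimal, hypothetical sketch (in PyTorch) of the preference-learning ingredient: an ANN scores how interpretable a user finds a candidate formula and is updated online from pairwise feedback. The three expression features (size, operator count, nesting depth) and the pairwise logistic loss are assumptions chosen for illustration; they are not the representations or training setup evaluated in the article.

```python
# Minimal sketch (not the authors' implementation) of online preference
# learning for perceived interpretability of symbolic expressions.
import torch
import torch.nn as nn

# Small MLP that scores "perceived interpretability" of a candidate model,
# given a (hypothetical) feature vector describing the expression:
# (number of nodes, number of distinct operators, maximum nesting depth).
net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def update_from_preference(feats_preferred, feats_other):
    """One online update from a single piece of user feedback:
    the user preferred model A over model B. Minimizes the standard
    pairwise logistic loss -log sigmoid(score_A - score_B)."""
    score_a, score_b = net(feats_preferred), net(feats_other)
    loss = -torch.nn.functional.logsigmoid(score_a - score_b).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Simulated feedback profile: this user prefers small, shallow expressions.
a = torch.tensor([5.0, 2.0, 2.0])    # small, shallow expression
b = torch.tensor([40.0, 6.0, 9.0])   # large, deeply nested expression
for _ in range(100):
    update_from_preference(a, b)

print(net(a).item(), net(b).item())  # score for A should now exceed B's
```

Under this setup, the learned score could act as a second objective next to prediction error in a multi-objective GP (e.g., NSGA-II-style selection), steering evolution toward formulas the user is likely to find interpretable.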


Cited By

  • Machine Learning-Based Process Optimization in Biopolymer Manufacturing: A Review. Polymers 16, 23 (2024), Article 3368. DOI: 10.3390/polym16233368. Online publication date: 29-Nov-2024.
  • Interpretable Control Competition. Proceedings of the Genetic and Evolutionary Computation Conference Companion (2024), 11–12. DOI: 10.1145/3638530.3664051. Online publication date: 14-Jul-2024.



Published In

ACM Transactions on Evolutionary Learning and Optimization, Volume 4, Issue 1
Special Issue on Explainable AI in Evolutionary Computation
March 2024
119 pages
EISSN:2688-3007
DOI:10.1145/3613523

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 23 February 2024
Online AM: 30 January 2024
Accepted: 24 January 2024
Revised: 12 September 2023
Received: 16 November 2022
Published in TELO Volume 4, Issue 1


Author Tags

  1. Explainable artificial intelligence
  2. interpretable machine learning
  3. active learning
  4. neural networks
  5. genetic programming
  6. deep learning
  7. evolutionary computation
  8. evolutionary algorithms
  9. explainable evolutionary computation

Qualifiers

  • Research-article


Bibliometrics

Article Metrics

  • Downloads (last 12 months): 247
  • Downloads (last 6 weeks): 30
Reflects downloads up to 05 Mar 2025

