Abstract
Mainstream machine learning methods lack interpretability, explainability, incrementality, and data economy. We propose using logic programming to rectify these problems. We discuss the FOLD family of rule-based machine learning algorithms, which learn models from relational datasets as sets of default rules. These models are competitive with state-of-the-art machine learning systems in terms of accuracy and execution efficiency. We also motivate how logic programming can be useful for theory revision and explanation-based learning.
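As an illustration (not drawn from the chapter itself), a FOLD-style learner induces default theories of roughly the following shape, expressed here in answer set programming syntax. The predicate names (`flies`, `ab1`, the constants `tweety` and `pingu`) are hypothetical placeholders chosen for this sketch; `ab1` plays the role of an abnormality predicate capturing a learned exception:

```prolog
% Default rule: birds fly, unless shown to be abnormal.
flies(X) :- bird(X), not ab1(X).

% Exception: penguins are abnormal with respect to flying.
ab1(X) :- penguin(X).

% Example facts.
bird(tweety).
bird(pingu).
penguin(pingu).
```

Under negation as failure, `flies(tweety)` holds while `flies(pingu)` does not, because `pingu` matches the exception. Each prediction can be justified by the chain of rules that fired, which is the source of the interpretability and explainability the abstract refers to.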
Acknowledgements
We are grateful to the anonymous reviewers and to Bob Kowalski for insightful comments that helped significantly improve this paper. The authors acknowledge partial support from NSF grants IIS 1910131 and IIP 1916206, and from the US DoD.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Gupta, G. et al. (2023). Logic-Based Explainable and Incremental Machine Learning. In: Warren, D.S., Dahl, V., Eiter, T., Hermenegildo, M.V., Kowalski, R., Rossi, F. (eds) Prolog: The Next 50 Years. Lecture Notes in Computer Science(), vol 13900. Springer, Cham. https://doi.org/10.1007/978-3-031-35254-6_28
Print ISBN: 978-3-031-35253-9
Online ISBN: 978-3-031-35254-6