
A process framework for inducing and explaining Datalog theories

  • Regular Article
  • Published in Advances in Data Analysis and Classification

Abstract

With the increasing prevalence of Machine Learning in everyday life, a growing number of people will regularly receive Machine-Learned assessments. We believe that human users interacting with systems based on Machine-Learned classifiers will demand, and profit from, having the systems' decisions explained in an approachable and comprehensive way. We developed a general process framework for logic-rule-based classifiers that facilitates mutual exchange between system and user. The framework is a guideline for how a system can apply Inductive Logic Programming in order to provide comprehensive explanations for classification choices and to empower users to evaluate and correct the system's decisions. It also specifies how users' corrections are integrated into the system's core logic rules via retraining, in order to increase the overall performance of the human-computer system. The framework suggests integrating various forms of explanations into the system, such as natural-language argumentations, near misses emphasizing unique characteristics, or image annotations.
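The interaction loop the abstract describes (classify, explain, let the user correct, fold the correction back into the rule base) can be sketched minimally in Python. This is an illustrative sketch only: the paper's framework uses Datalog theories induced via ILP, whereas the rule representation, the `matches`/`classify`/`explain`/`integrate_correction` helpers, and the naive retraining step below are all hypothetical stand-ins invented for this example.

```python
# Hypothetical sketch of the classify-explain-correct-retrain loop.
# Rules are propositionalized stand-ins for the Datalog rules an ILP
# system would induce; none of these names come from the paper.

def matches(rule, example):
    """A rule fires if all of its body conditions hold in the example."""
    return all(example.get(attr) == val for attr, val in rule["body"])

def classify(rules, example):
    """Return (label, firing_rule) for the first matching rule,
    or ("negative", None) if no rule fires."""
    for rule in rules:
        if matches(rule, example):
            return rule["head"], rule
    return "negative", None

def explain(rule):
    """Render the fired rule as a natural-language argument."""
    if rule is None:
        return "No rule applied, so the example was classified as negative."
    conds = " and ".join(f"{a} = {v}" for a, v in rule["body"])
    return f"Classified as '{rule['head']}' because {conds}."

def integrate_correction(rules, example, correct_label):
    """Fold a user correction back into the rule base. Here: naive
    retraining that adds a maximally specific rule for the example;
    a real system would re-run the ILP learner instead."""
    rules.append({"head": correct_label, "body": sorted(example.items())})
    return rules

rules = [{"head": "relevant", "body": [("type", "report"), ("recent", True)]}]
doc = {"type": "photo", "recent": True}

label, fired = classify(rules, doc)        # no rule fires -> "negative"
print(explain(fired))                      # system explains its decision
rules = integrate_correction(rules, doc, "relevant")  # user corrects it
label2, _ = classify(rules, doc)           # retrained rules now fire
print(label2)
```

The point of the sketch is the division of labor: the same rule object drives both the classification and its explanation, which is what makes rule-based (as opposed to black-box) classifiers amenable to the mutual-exchange process the framework proposes.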




Author information


Correspondence to Michael Siebers.


Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—318286042 (Dare2Del); 405630557 (PainFaceReader).


About this article


Cite this article

Gromowski, M., Siebers, M. & Schmid, U. A process framework for inducing and explaining Datalog theories. Adv Data Anal Classif 14, 821–835 (2020). https://doi.org/10.1007/s11634-020-00422-7

