Abstract
Decision lists (DLs) find a wide range of uses for classification problems in Machine Learning (ML), being implemented in a number of ML frameworks. DLs are often perceived as interpretable. However, building on recent results for decision trees (DTs), we argue that interpretability is an elusive goal for some DLs. As a result, for some uses of DLs, it will be important to compute (rigorous) explanations. Unfortunately, and in clear contrast with the case of DTs, this paper shows that computing explanations for DLs is computationally hard. Motivated by this result, the paper proposes propositional encodings for computing abductive explanations (AXps) and contrastive explanations (CXps) of DLs. Furthermore, the paper investigates the practical efficiency of a MARCO-like approach for enumerating explanations. The experimental results demonstrate that, for DLs used in practical settings, the use of SAT oracles offers a very efficient solution, and that complete enumeration of explanations is most often feasible.
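To make the notion of an abductive explanation (AXp) concrete, the following sketch extracts one AXp for a toy decision list by deletion-based minimization: starting from all features, each feature is dropped if the remaining fixed features still entail the prediction. The decision list, feature count, and all names here are hypothetical illustrations; the entailment check is done by exhaustive enumeration of completions, standing in for the SAT-oracle calls the paper uses, which is only feasible for this toy size.

```python
from itertools import product

# A hypothetical decision list over binary features x0..x3: an ordered list
# of (condition, class) rules plus a default class. A condition maps
# feature index -> required value; all literals must hold for the rule to fire.
RULES = [
    ({0: 1, 1: 1}, "yes"),   # IF x0=1 AND x1=1 THEN yes
    ({2: 0}, "no"),          # ELSE IF x2=0 THEN no
    ({3: 1}, "yes"),         # ELSE IF x3=1 THEN yes
]
DEFAULT = "no"
N_FEATS = 4

def predict(point):
    """Classify a point by the first rule whose condition it satisfies."""
    for cond, cls in RULES:
        if all(point[f] == v for f, v in cond.items()):
            return cls
    return DEFAULT

def entails(fixed, target):
    """True iff every completion of the fixed literals is classified `target`.
    In the paper this check is one SAT-oracle call; here we simply
    enumerate all completions, which only works for tiny feature spaces."""
    free = [f for f in range(N_FEATS) if f not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        point = dict(fixed)
        point.update(zip(free, bits))
        if predict(point) != target:
            return False
    return True

def axp(instance):
    """Deletion-based extraction of one subset-minimal AXp for `instance`."""
    target = predict(instance)
    kept = set(range(N_FEATS))
    for f in range(N_FEATS):
        trial = {g: instance[g] for g in kept - {f}}
        if entails(trial, target):   # feature f is redundant; drop it
            kept.remove(f)
    return sorted(kept)

v = {0: 1, 1: 1, 2: 1, 3: 0}
print(predict(v), axp(v))   # the first rule fires; only x0, x1 matter
```

Here the instance is classified "yes" by the first rule, and the extracted AXp is {x0, x1}: fixing those two features alone forces the prediction, while dropping either one admits a completion classified "no".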
This work was supported by the AI Interdisciplinary Institute ANITI, funded by the French program “Investing for the Future – PIA3” under Grant agreement no. ANR-19-PI3A-0004, and by the H2020-ICT38 project COALA “Cognitive Assisted agile manufacturing for a Labor force supported by trustworthy Artificial intelligence”.
Notes
- 1.
Interpretability is a subjective concept, for which no rigorous accepted definition exists [46]. As clarified later in the paper, for a given pair of ML model and instance, we equate interpretability with how succinct the justification for the model's prediction is.
- 2.
The prototype is available at https://github.com/alexeyignatiev/xdl-tool.
- 3.
Recent alternative approaches to sparse decision lists [1, 2, 65] have also been considered but were eventually discarded for two reasons: (1) they can only deal with binary data and (2) they produce sparse decision lists containing a couple of rules and a few literals in total—i.e. these methods do not provide models that would be of interest for our work.
References
Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M.I., Rudin, C.: Learning certifiably optimal rule lists. In: KDD, pp. 35–44 (2017)
Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M.I., Rudin, C.: Learning certifiably optimal rule lists for categorical data. J. Mach. Learn. Res. 18, 234:1–234:78 (2017). http://jmlr.org/papers/v18/17-716.html
Audemard, G., Koriche, F., Marquis, P.: On tractable XAI queries based on compiled representations. In: KR, pp. 838–849 (2020)
Audemard, G., Lagniez, J., Simon, L.: Improving glucose for incremental SAT solving with assumptions: application to MUS extraction. In: SAT, pp. 309–317 (2013)
Bailey, J., Stuckey, P.J.: Discovery of minimal unsatisfiable subsets of constraints using hitting set dualization. In: PADL, pp. 174–186 (2005)
Belov, A., Lynce, I., Marques-Silva, J.: Towards efficient MUS extraction. AI Commun. 25(2), 97–116 (2012)
Belov, A., Marques-Silva, J.: Accelerating MUS extraction with recursive model rotation. In: FMCAD, pp. 37–40 (2011)
Biere, A., Heule, M., van Maaren, H., Walsh, T. (eds.): Handbook of Satisfiability, Second Edition. Frontiers in Artificial Intelligence and Applications, vol. 336. IOS Press, Amsterdam (2021)
Birnbaum, E., Lozinskii, E.L.: Consistent subsets of inconsistent systems: structure and behaviour. J. Exp. Theor. Artif. Intell. 15(1), 25–46 (2003)
Bouckaert, R.R., et al.: WEKA - experiences with a Java open-source project. J. Mach. Learn. Res. 11, 2533–2541 (2010). http://portal.acm.org/citation.cfm?id=1953016
Camburu, O., Giunchiglia, E., Foerster, J., Lukasiewicz, T., Blunsom, P.: Can I trust the explainer? Verifying post-hoc explanatory methods. CoRR abs/1910.02065 (2019). http://arxiv.org/abs/1910.02065
Chen, C., Rudin, C.: An optimization approach to learning falling rule lists. In: AISTATS, pp. 604–612 (2018)
Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: KDD, pp. 785–794 (2016)
Clark, P., Boswell, R.: Rule induction with CN2: some recent improvements. In: EWSL, pp. 151–163 (1991)
Clark, P., Niblett, T.: The CN2 induction algorithm. Mach. Learn. 3, 261–283 (1989)
Cohen, W.W.: Efficient pruning methods for separate-and-conquer rule learning systems. In: Bajcsy, R. (ed.) Proceedings of the 13th International Joint Conference on Artificial Intelligence, 28 August–3 September 1993, Chambéry, France. pp. 988–994. Morgan Kaufmann (1993)
Cohen, W.W.: Fast effective rule induction. In: ICML, pp. 115–123 (1995)
Cohen, W.W., Singer, Y.: A simple, fast, and effective rule learner. In: AAAI, pp. 335–342 (1999)
Darwiche, A., Hirth, A.: On the reasons behind decisions. In: ECAI, pp. 712–720 (2020). https://doi.org/10.3233/FAIA200158
Darwiche, A., Marquis, P.: A knowledge compilation map. J. Artif. Intell. Res. 17, 229–264 (2002)
Davies, J., Bacchus, F.: Solving MAXSAT by solving a sequence of simpler SAT instances. In: CP, pp. 225–239 (2011)
Demsar, J., et al.: Orange: data mining toolbox in Python. J. Mach. Learn. Res. 14(1), 2349–2353 (2013). http://dl.acm.org/citation.cfm?id=2567736, https://orangedatamining.com/
Auditing black-box predictive models. https://blog.fastforwardlabs.com/2017/03/09/fairml-auditing-black-box-predictive-models.html (2016)
Friedler, S., Scheidegger, C., Venkatasubramanian, S.: On algorithmic fairness, discrimination and disparate impact (2015)
Ignatiev, A.: Towards trustable explainable AI. In: IJCAI, pp. 5154–5158 (2020)
Ignatiev, A., Janota, M., Marques-Silva, J.: Quantified maximum satisfiability. Constraints An Int. J. 21(2), 277–302 (2016)
Ignatiev, A., Morgado, A., Marques-Silva, J.: Propositional abduction with implicit hitting sets. In: ECAI, pp. 1327–1335 (2016)
Ignatiev, A., Morgado, A., Marques-Silva, J.: PySAT: A Python toolkit for prototyping with SAT oracles. In: SAT, pp. 428–437 (2018)
Ignatiev, A., Morgado, A., Marques-Silva, J.: RC2: an efficient MaxSAT solver. J. Satisf. Boolean Model. Comput. 11(1), 53–64 (2019)
Ignatiev, A., Morgado, A., Weissenbacher, G., Marques-Silva, J.: Model-based diagnosis with multiple observations. In: IJCAI, pp. 1108–1115 (2019)
Ignatiev, A., Narodytska, N., Asher, N., Marques-Silva, J.: From contrastive to abductive explanations and back again. In: AI*IA (2020). Preliminary version available from https://arxiv.org/abs/2012.11067
Ignatiev, A., Narodytska, N., Marques-Silva, J.: Abduction-based explanations for machine learning models. In: AAAI, pp. 1511–1519 (2019)
Ignatiev, A., Narodytska, N., Marques-Silva, J.: On relating explanations and adversarial examples. In: NeurIPS, pp. 15857–15867 (2019)
Ignatiev, A., Narodytska, N., Marques-Silva, J.: On validating, repairing and refining heuristic ML explanations. CoRR abs/1907.02509 (2019). http://arxiv.org/abs/1907.02509
Ignatiev, A., Pereira, F., Narodytska, N., Marques-Silva, J.: A SAT-based approach to learn explainable decision sets. In: IJCAR, pp. 627–645 (2018)
Ignatiev, A., Previti, A., Liffiton, M.H., Marques-Silva, J.: Smallest MUS extraction with minimal hitting set dualization. In: CP, pp. 173–182 (2015)
Izza, Y., Ignatiev, A., Marques-Silva, J.: On explaining decision trees. CoRR abs/2010.11034 (2020)
Junker, U.: QUICKXPLAIN: preferred explanations and relaxations for over-constrained problems. In: AAAI, pp. 167–172 (2004)
Lakkaraju, H., Bach, S.H., Leskovec, J.: Interpretable decision sets: a joint framework for description and prediction. In: KDD, pp. 1675–1684 (2016)
Lakkaraju, H., Bastani, O.: “How do I fool you?”: manipulating user trust via misleading black box explanations. In: AIES, pp. 79–85 (2020)
Liffiton, M.H., Malik, A.: Enumerating infeasibility: finding multiple MUSes quickly. In: CPAIOR, pp. 160–175 (2013)
Liffiton, M.H., Mneimneh, M.N., Lynce, I., Andraus, Z.S., Marques-Silva, J., Sakallah, K.A.: A branch and bound algorithm for extracting smallest minimal unsatisfiable subformulas. Constraints An Int. J. 14(4), 415–442 (2009)
Liffiton, M.H., Previti, A., Malik, A., Marques-Silva, J.: Fast, flexible MUS enumeration. Constraints An Int. J. 21(2), 223–250 (2016)
Liffiton, M.H., Sakallah, K.A.: On finding all minimally unsatisfiable subformulas. In: SAT, pp. 173–186 (2005)
Liffiton, M.H., Sakallah, K.A.: Algorithms for computing minimal unsatisfiable subsets of constraints. J. Autom. Reasoning 40(1), 1–33 (2008)
Lipton, Z.C.: The mythos of model interpretability. Commun. ACM 61(10), 36–43 (2018)
Lundberg, S.M., Lee, S.: A unified approach to interpreting model predictions. In: NeurIPS, pp. 4765–4774 (2017)
Lynce, I., Marques-Silva, J.: On computing minimum unsatisfiable cores. In: SAT (2004)
Marques-Silva, J., Gerspacher, T., Cooper, M.C., Ignatiev, A., Narodytska, N.: Explaining Naive Bayes and other linear classifiers with polynomial time and delay. In: NeurIPS (2020)
Marques-Silva, J., Heras, F., Janota, M., Previti, A., Belov, A.: On computing minimal correction subsets. In: IJCAI, pp. 615–622 (2013)
Marques-Silva, J., Lynce, I.: On improving MUS extraction algorithms. In: SAT, pp. 159–173 (2011)
Mencia, C., Ignatiev, A., Previti, A., Marques-Silva, J.: MCS extraction with sublinear oracle queries. In: SAT, pp. 342–360 (2016)
Mencia, C., Previti, A., Marques-Silva, J.: Literal-based MCS extraction. In: IJCAI, pp. 1973–1979 (2015)
Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)
Morgado, A., Liffiton, M.H., Marques-Silva, J.: MaxSAT-based MCS enumeration. In: HVC, pp. 86–101 (2012)
de Moura, L.M., Bjørner, N.: Z3: an efficient SMT solver. In: TACAS, pp. 337–340 (2008)
Narodytska, N., Shrotri, A., Meel, K.S., Ignatiev, A., Marques-Silva, J.: Assessing heuristic machine learning explanations with model counting. In: Janota, M., Lynce, I. (eds.) SAT 2019. LNCS, vol. 11628, pp. 267–278. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-24258-9_19
Penn Machine Learning Benchmarks. https://github.com/EpistasisLab/penn-ml-benchmarks
Prestwich, S.D.: CNF encodings. In: Handbook of Satisfiability: Second Edition, Frontiers in Artificial Intelligence and Applications, vol. 336, pp. 75–100. IOS Press (2021)
Previti, A., Marques-Silva, J.: Partial MUS enumeration. In: AAAI (2013)
Reiter, R.: A theory of diagnosis from first principles. Artif. Intell. 32(1), 57–95 (1987)
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: KDD, pp. 1135–1144 (2016)
Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: high-precision model-agnostic explanations. In: AAAI, pp. 1527–1535 (2018)
Rivest, R.L.: Learning decision lists. Mach. Learn. 2(3), 229–246 (1987). https://doi.org/10.1007/BF00058680
Rudin, C., Ertekin, S.: Learning customized and optimized lists of rules with mathematical programming. Math. Program. Comput. 10(4), 659–702 (2018). https://doi.org/10.1007/s12532-018-0143-8
Shih, A., Choi, A., Darwiche, A.: A symbolic approach to explaining Bayesian network classifiers. In: IJCAI, pp. 5103–5111 (2018)
Shih, A., Choi, A., Darwiche, A.: Compiling Bayesian network classifiers into decision graphs. In: AAAI, pp. 7966–7974 (2019)
Slack, D., Hilgard, S., Jia, E., Singh, S., Lakkaraju, H.: Fooling LIME and SHAP: adversarial attacks on post hoc explanation methods. In: AIES, pp. 180–186 (2020)
UCI Machine Learning Repository. https://archive.ics.uci.edu/ml
Umans, C., Villa, T., Sangiovanni-Vincentelli, A.L.: Complexity of two-level logic minimization. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 25(7), 1230–1246 (2006)
Wang, F., Rudin, C.: Falling rule lists. In: AISTATS (2015)
Yang, F., Yang, Z., Cohen, W.W.: Differentiable learning of logical rules for knowledge base reasoning. In: NeurIPS, pp. 2319–2328 (2017)
Yang, H., Rudin, C., Seltzer, M.I.: Scalable Bayesian rule lists. In: ICML, pp. 3921–3930 (2017)
© 2021 Springer Nature Switzerland AG
Cite this paper
Ignatiev, A., Marques-Silva, J. (2021). SAT-Based Rigorous Explanations for Decision Lists. In: Li, CM., Manyà, F. (eds) Theory and Applications of Satisfiability Testing – SAT 2021. SAT 2021. Lecture Notes in Computer Science(), vol 12831. Springer, Cham. https://doi.org/10.1007/978-3-030-80223-3_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-80222-6
Online ISBN: 978-3-030-80223-3