Abstract
We briefly review properties of explainable AI proposed by various researchers. Taking a structural approach to the problem of explainable AI, we examine the feasibility of these properties and extend them where appropriate. We then review combinatorial methods for explainable AI that build on combinatorial testing-based approaches to fault localization. Finally, we view these combinatorial methods through the lens of the properties of explainable AI elaborated in this work. We pose the resulting research questions that need to be answered and point towards possible solutions, including a hypothesis about a potential parallel between software testing, human cognition and brain capacity.
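The core idea the abstract refers to, carrying fault localization from combinatorial testing over to model explanations, can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' implementation: the function name, the data, and the simple set-difference heuristic (flag t-way value combinations that occur only in "failing" runs, i.e., runs producing the outcome to be explained) are all illustrative assumptions.

```python
from itertools import combinations

def suspicious_interactions(tests, t=2):
    """Return t-way feature-value combinations that appear in some
    failing test but in no passing test -- the basic heuristic behind
    combinatorial testing-based fault localization."""
    def t_way(test):
        # all t-way combinations of (feature, value) pairs in one test
        return set(combinations(sorted(test.items()), t))

    failing = [t_way(x) for x, label in tests if label == "fail"]
    passing = [t_way(x) for x, label in tests if label == "pass"]
    in_pass = set().union(*passing) if passing else set()
    candidates = set().union(*failing) if failing else set()
    return candidates - in_pass

# Hypothetical labelled runs of a classifier under explanation:
tests = [
    ({"age": "young", "income": "low"},  "fail"),
    ({"age": "young", "income": "high"}, "pass"),
    ({"age": "old",   "income": "low"},  "pass"),
]
print(suspicious_interactions(tests))
# isolates the combination (age=young, income=low) as failure-inducing
```

Neither "young" alone nor "low" alone separates the failing run from the passing ones, but their 2-way interaction does; real combinatorial fault-localization tools refine this idea with covering and locating arrays.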
Acknowledgements
SBA Research (SBA-K1) is a COMET Centre within the framework of the COMET - Competence Centers for Excellent Technologies Programme, funded by BMK, BMDW, and the federal state of Vienna. The COMET Programme is managed by FFG. This work was also performed in part under financial assistance award 70NANB18H207 from the U.S. Department of Commerce, National Institute of Standards and Technology.
Disclaimer
Any mention of commercial products in this paper is for information only; it does not imply recommendation or endorsement by the National Institute of Standards and Technology (NIST).
Cite this article
Kampel, L., Simos, D.E., Kuhn, D.R. et al. An exploration of combinatorial testing-based approaches to fault localization for explainable AI. Ann Math Artif Intell 90, 951–964 (2022). https://doi.org/10.1007/s10472-021-09772-0