Abstract
With advances in artificial intelligence (AI), there is a growing need to provide transparency and accountability for AI systems. These properties can be achieved with eXplainable AI (XAI) methods, which have been developed extensively in recent years for machine learning (ML) models. In practice, however, the use of XAI is today mostly limited to the feature engineering phase of the data mining (DM) process. We argue that explainability, as a property of a system, should be used alongside other quality metrics such as accuracy, precision, and recall in order to deliver better AI models. In this paper we present a method for weighted stacking of ML models and demonstrate its practical use in an illustrative example.
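Since the abstract describes weighted stacking of ML models driven by explainability, a minimal sketch of one way such weighting could work is given below. It assumes (our assumption, not the paper's method) that each base model's stacking weight blends validation accuracy with an explanation-stability score, here the Jaccard overlap of LIME top-k feature rankings under small input perturbations; the names `stability`, `top_features`, and the trade-off parameter `alpha` are illustrative only.

```python
# Minimal sketch of explanation-weighted model stacking (illustrative only).
# Requires: scikit-learn, numpy, and the `lime` package (pip install lime).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

explainer = LimeTabularExplainer(X_tr, mode="classification")

def top_features(model, x, k=5):
    """Indices of the k most important features for one instance (via LIME)."""
    exp = explainer.explain_instance(x, model.predict_proba, num_features=k)
    return {idx for idx, _ in exp.as_map()[1]}

def stability(model, x, k=5, n=3, eps=0.01, rng=np.random.default_rng(0)):
    """Mean Jaccard overlap of top-k features under small perturbations of x."""
    base = top_features(model, x, k)
    scores = []
    for _ in range(n):
        x_pert = x + eps * rng.standard_normal(x.shape) * X_tr.std(axis=0)
        pert = top_features(model, x_pert, k)
        scores.append(len(base & pert) / len(base | pert))
    return float(np.mean(scores))

models = [LogisticRegression(max_iter=5000).fit(X_tr, y_tr),
          RandomForestClassifier(random_state=0).fit(X_tr, y_tr)]

alpha = 0.7  # hypothetical trade-off between accuracy and explainability
weights = []
for m in models:
    acc = m.score(X_val, y_val)
    # Average stability over a small subsample, for speed.
    expl = np.mean([stability(m, x) for x in X_val[:5]])
    weights.append(alpha * acc + (1 - alpha) * expl)
weights = np.array(weights) / np.sum(weights)

# Weighted soft-vote stacking: blend class probabilities by the weights.
proba = sum(w * m.predict_proba(X_val) for w, m in zip(weights, models))
print("stacked accuracy:", (proba.argmax(axis=1) == y_val).mean())
```

With `alpha = 1` this reduces to ordinary accuracy-weighted soft voting; lowering `alpha` shifts weight toward models whose explanations are more stable under perturbation.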
Acknowledgements
This work was funded by the National Science Centre, Poland, under the CHIST-ERA programme (PACMEL project, NCN 2018/27/Z/ST6/03392). The authors are grateful to ACK Cyfronet, Krakow, for granting access to the computing infrastructure built in projects no. POIG.02.03.00-00-028/08 "PLATON - Science Services Platform" and no. POIG.02.03.00-00-110/13 "Deploying high-availability, critical services in Metropolitan Area Networks (MAN-HA)".
Cite this paper
Bobek, S., Mozolewski, M., Nalepa, G.J.: Explanation-driven model stacking. In: Paszynski, M., Kranzlmüller, D., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A. (eds.) Computational Science – ICCS 2021. Lecture Notes in Computer Science, vol. 12747. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-77980-1_28