Abstract
To enable recourse, explanations provided to people should be actionable; that is, they should explain what a person can do to change the model's decision. However, what actionability means in the context of explainable AI (XAI) is unclear. In this paper, we explore existing tools developed in other domains to evaluate actionability. To our knowledge, no prior work in XAI has developed such a tool to evaluate the actionability of explanations. We conducted an experimental study to validate two existing actionability tools for discriminating between the actionability of two types of explanations. Our results indicate that these two tools reveal metrics relevant for conceptualising actionability in the XAI community.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Alotaibi, H., Singh, R. (2024). Metrics for Evaluating Actionability in Explainable AI. In: Liu, F., Sadanandan, A.A., Pham, D.N., Mursanto, P., Lukose, D. (eds) PRICAI 2023: Trends in Artificial Intelligence. PRICAI 2023. Lecture Notes in Computer Science, vol. 14326. Springer, Singapore. https://doi.org/10.1007/978-981-99-7022-3_44
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-7021-6
Online ISBN: 978-981-99-7022-3
eBook Packages: Computer Science (R0)