
Metrics for Evaluating Actionability in Explainable AI

  • Conference paper
  • Published in: PRICAI 2023: Trends in Artificial Intelligence (PRICAI 2023)

Abstract

To enable recourse, explanations provided to people should be actionable; that is, they should explain what a person can do to change the model's decision. However, what actionability means in the context of explainable AI (XAI) is unclear. In this paper, we examine existing tools that other fields have developed to evaluate actionability in their respective domains. To our knowledge, no prior work in XAI has developed such a tool to evaluate the actionability of explanations. We conducted an experimental study to validate two existing actionability tools by testing whether they discriminate between the actionability of two types of explanations. Our results indicate that these two tools reveal metrics relevant to conceptualising actionability for the XAI community.


Notes

  1. https://www.kaggle.com/datasets/husainsb/lendingclub-issued-loans
  2. https://www.kaggle.com/datasets/pavansubhasht/ibm-hr-analytics-attrition-dataset
  3. https://www.qualtrics.com/au/



Author information

Corresponding author

Correspondence to Hissah Alotaibi.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF, 69 KB)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Alotaibi, H., Singh, R. (2024). Metrics for Evaluating Actionability in Explainable AI. In: Liu, F., Sadanandan, A.A., Pham, D.N., Mursanto, P., Lukose, D. (eds) PRICAI 2023: Trends in Artificial Intelligence. PRICAI 2023. Lecture Notes in Computer Science(), vol 14326. Springer, Singapore. https://doi.org/10.1007/978-981-99-7022-3_44


  • DOI: https://doi.org/10.1007/978-981-99-7022-3_44


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-7021-6

  • Online ISBN: 978-981-99-7022-3

  • eBook Packages: Computer Science; Computer Science (R0)
