Estimating Causal Responsibility for Explaining Autonomous Behavior

  • Conference paper

Explainable and Transparent AI and Multi-Agent Systems (EXTRAAMAS 2023)

Abstract

There has been growing interest in causal explanations of stochastic, sequential decision-making systems. Structural causal models and causal reasoning offer several theoretical benefits when exact inference can be applied, and users overwhelmingly prefer the resulting causal explanations over those produced by other state-of-the-art systems. In this work, we focus on one such method, MeanRESP, which assigns a responsibility score to each variable and thereby helps identify smaller sets of causes to serve as explanations, and on its approximate versions, which drastically reduce the computational load. However, this method, and its approximate versions in particular, lacks deeper theoretical analysis and broader empirical testing. To address these shortcomings, we make three primary contributions. First, we offer several theoretical insights on the sample complexity and error rate of approximate MeanRESP. Second, we discuss several automated metrics for comparing explanations generated by approximate methods to those generated by exact methods. While we recognize that user studies remain the gold standard for evaluating explanations, we aim to leverage the proposed metrics to systematically compare explanation-generation methods along important quantitative dimensions. Finally, we provide a more detailed discussion of MeanRESP and of how its output, under different definitions of responsibility, compares to existing widely adopted methods based on Shapley values.

S. Mahmud and S. B. Nashed contributed equally to this work.
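The paper's MeanRESP definition is not reproduced on this page, but the trade-off the abstract describes, exact responsibility computation versus a cheaper sampled approximation, can be sketched on a toy model. The snippet below is a minimal illustration assuming the simpler Chockler-Halpern-style degree of responsibility, resp(v) = 1/(k+1), where k is the size of the smallest contingency set whose flipping makes v pivotal for the outcome. It is not the paper's MeanRESP; the function names, the boolean voting model, and the sampling scheme are all illustrative assumptions.

```python
import itertools
import random

def flip(x, vars_to_flip):
    """Return a copy of the boolean assignment x with the given variables negated."""
    y = dict(x)
    for u in vars_to_flip:
        y[u] = not y[u]
    return y

def exact_responsibility(f, x, v):
    """Illustrative exact score (hypothetical; not the paper's MeanRESP):
    1 / (k + 1), where k is the size of the smallest contingency set W
    (excluding v) such that flipping W alone preserves the outcome f(x)
    but additionally flipping v changes it. Exhaustive over all subsets."""
    base = f(x)
    others = [u for u in x if u != v]
    for k in range(len(others) + 1):
        for W in itertools.combinations(others, k):
            if f(flip(x, W)) == base and f(flip(x, W + (v,))) != base:
                return 1.0 / (k + 1)
    return 0.0  # v bears no responsibility for the outcome

def sampled_responsibility(f, x, v, n_samples=2000, rng=random):
    """Monte Carlo surrogate: sample random contingency sets and keep the
    smallest witness found. Uses a fixed query budget instead of exhaustive search."""
    base = f(x)
    others = [u for u in x if u != v]
    best_k = None
    for _ in range(n_samples):
        k = rng.randrange(len(others) + 1)
        W = tuple(rng.sample(others, k))
        if f(flip(x, W)) == base and f(flip(x, W + (v,))) != base:
            best_k = k if best_k is None else min(best_k, k)
    return 0.0 if best_k is None else 1.0 / (best_k + 1)

# Toy model: a 2-out-of-3 vote; the outcome holds because a and b are True.
f = lambda x: sum(x.values()) >= 2
x = {"a": True, "b": True, "c": False}
print(exact_responsibility(f, x, "a"))    # 1.0: flipping a alone flips the vote
print(sampled_responsibility(f, x, "a"))  # converges to the same score
```

The exact search queries the model for every subset of the other variables, which grows exponentially with the number of variables, whereas the sampled variant uses a fixed query budget and can only over-estimate the minimal contingency size (and hence under-estimate responsibility) or miss a witness entirely. That gap between an exact score and its sampled surrogate is the kind of error that sample-complexity bounds, such as those the paper develops for approximate MeanRESP, are meant to quantify.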

Acknowledgments

This work was supported in part by the National Science Foundation under grant number IIS-1954782.

Author information

Correspondence to Saaduddin Mahmud.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Mahmud, S., Nashed, S.B., Goldman, C.V., Zilberstein, S. (2023). Estimating Causal Responsibility for Explaining Autonomous Behavior. In: Calvaresi, D., et al. (eds.) Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2023. Lecture Notes in Computer Science, vol. 14127. Springer, Cham. https://doi.org/10.1007/978-3-031-40878-6_5

  • DOI: https://doi.org/10.1007/978-3-031-40878-6_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40877-9

  • Online ISBN: 978-3-031-40878-6

  • eBook Packages: Computer Science; Computer Science (R0)
