Predictive Explanations for and by Reinforcement Learning

  • Conference paper
Agents and Artificial Intelligence (ICAART 2023)

Abstract

To help a user understand a reinforcement learning (RL) agent’s behavior within its environment, we propose a predictive explanation that answers the question ‘What is likely to happen?’. It is composed of three scenarios, best-case, worst-case, and most-probable, which we show are computationally difficult to find (W[1]-hard). We therefore propose linear-time approximations that treat the environment as a favorable, hostile, or neutral RL agent. Experiments validate this approach. Furthermore, we give a dynamic-programming algorithm that finds an optimal summary of a long scenario.
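To make the approximation concrete, here is a minimal Python sketch, not the authors’ implementation. It assumes a hypothetical model env_model(state, action) that enumerates (next_state, probability, reward) outcomes and a deterministic policy(state); a greedy one-step choice over outcomes stands in for the learned favorable/hostile/neutral environment agents described in the paper.

```python
# Hypothetical sketch: approximate best-case / worst-case / most-probable
# scenarios by letting the environment act as a favorable / hostile /
# neutral agent. All names (env_model, policy) are illustrative.

def approximate_scenario(env_model, policy, start_state, horizon, mode="neutral"):
    """Roll out the agent's policy for `horizon` steps while the
    environment resolves stochastic outcomes according to `mode`."""
    state, scenario, total_reward = start_state, [start_state], 0.0
    for _ in range(horizon):
        action = policy(state)
        # env_model(state, action) -> list of (next_state, probability, reward)
        outcomes = env_model(state, action)
        if mode == "favorable":    # best-case: outcome with highest reward
            next_state, _, reward = max(outcomes, key=lambda o: o[2])
        elif mode == "hostile":    # worst-case: outcome with lowest reward
            next_state, _, reward = min(outcomes, key=lambda o: o[2])
        else:                      # neutral: most probable outcome
            next_state, _, reward = max(outcomes, key=lambda o: o[1])
        total_reward += reward
        state = next_state
        scenario.append(state)
    return scenario, total_reward
```

Each call visits the horizon exactly once, which matches the linear-time claim; the paper’s environment agents are trained by RL rather than chosen greedily per step, so this sketch illustrates only the rollout structure.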
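The optimal-summary step can likewise be sketched as a standard dynamic program. The version below selects the k steps of a scenario that maximize a hypothetical additive importance score; the paper’s actual objective may differ, and a coupled objective (for instance, one that also spreads the selected steps along the scenario) would add a dimension to the same table.

```python
# Hypothetical sketch of a dynamic program that picks an optimal
# k-step summary of a scenario; `importance` is an illustrative
# per-step score, not the paper's exact objective.

def optimal_summary(scenario, importance, k):
    n = len(scenario)
    k = min(k, n)  # cannot keep more steps than the scenario has
    NEG = float("-inf")
    # dp[i][j]: best score using the first i steps with j of them kept
    dp = [[NEG] * (k + 1) for _ in range(n + 1)]
    keep = [[False] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, min(i, k) + 1):
            skip = dp[i - 1][j]                          # drop step i
            take = dp[i - 1][j - 1] + importance[i - 1]  # keep step i
            dp[i][j], keep[i][j] = max((skip, False), (take, True))
    # Backtrack to recover the selected steps, in order
    summary, i, j = [], n, k
    while j > 0:
        if keep[i][j]:
            summary.append(scenario[i - 1])
            j -= 1
        i -= 1
    return summary[::-1]
```

For example, optimal_summary(scenario, rewards_along_scenario, 5) would keep the five steps with the highest scores while preserving their original order.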



Acknowledgements

The authors would like to thank Arnaud Lequen for his valuable suggestions, which have improved this paper. This work was supported by the AI Interdisciplinary Institute ANITI, funded by the French program “Investing for the Future - PIA3” under grant agreement no. ANR-19-PI3A-0004.

Author information


Corresponding author

Correspondence to Léo Saulières.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Saulières, L., Cooper, M.C., de Saint-Cyr, F.D. (2024). Predictive Explanations for and by Reinforcement Learning. In: Rocha, A.P., Steels, L., van den Herik, J. (eds) Agents and Artificial Intelligence. ICAART 2023. Lecture Notes in Computer Science, vol. 14546. Springer, Cham. https://doi.org/10.1007/978-3-031-55326-4_6

  • DOI: https://doi.org/10.1007/978-3-031-55326-4_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-55325-7

  • Online ISBN: 978-3-031-55326-4

  • eBook Packages: Computer Science, Computer Science (R0)
