
Why Bad Coffee? Explaining Agent Plans with Valuings

  • Conference paper
  • Published in: Computer Safety, Reliability, and Security (SAFECOMP 2018)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 11094)

Included in the conference series: SAFECOMP (International Conference on Computer Safety, Reliability, and Security)

Abstract

An important issue in deploying an autonomous system is how to enable human users and stakeholders to develop an appropriate level of trust in the system. It has been argued that a crucial mechanism for enabling appropriate trust is the ability of a system to explain its behaviour. Obviously, such explanations need to be comprehensible to humans. We argue that it makes sense to build on the results of extensive research in the social sciences that explores how humans explain their behaviour. Using similar concepts for explanation is argued to help with comprehensibility, since the concepts are familiar. Following work in the social sciences, we propose the use of a folk-psychological model that utilises beliefs, desires, and “valuings”. We propose a formal framework for constructing explanations of the behaviour of an autonomous system, present an (implemented) algorithm for giving explanations, and present evaluation results.
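
The abstract describes, but does not reproduce, the formal framework, so the following is a minimal illustrative sketch of the core idea under stated assumptions, not the authors' algorithm: plan options are annotated with the valuings they promote or demote, the agent picks the option that best fits its valuings, and the explanation cites the decisive valuing. All names (PlanOption, Agent, explain) and the numeric weights are invented for illustration; the coffee scenario mirrors the paper's title.

    # Illustrative sketch only -- not the paper's formal framework or algorithm.
    # Plan options carry the valuings they promote (+) or demote (-); the agent
    # selects the best-scoring option and explains it via the decisive valuing.
    from dataclasses import dataclass

    @dataclass
    class PlanOption:
        name: str
        effects: dict  # valuing -> degree promoted (+) or demoted (-)

    @dataclass
    class Agent:
        valuings: dict  # valuing -> importance weight (hypothetical numbers)

        def score(self, option: PlanOption) -> float:
            return sum(self.valuings.get(v, 0.0) * d
                       for v, d in option.effects.items())

        def choose(self, options: list) -> PlanOption:
            return max(options, key=self.score)

        def explain(self, chosen: PlanOption, options: list) -> str:
            # Cite the valuing contributing most to the chosen option's score.
            key = max(chosen.effects,
                      key=lambda v: self.valuings.get(v, 0.0) * chosen.effects[v])
            others = ", ".join(o.name for o in options if o is not chosen)
            return f"I chose '{chosen.name}' over {others} because I value '{key}'."

    # "Why bad coffee?": kiosk coffee is worse, but it keeps the agent on time.
    options = [
        PlanOption("get kiosk coffee", {"being on time": 1.0, "coffee quality": -0.5}),
        PlanOption("get cafe coffee",  {"being on time": -1.0, "coffee quality": 1.0}),
    ]
    agent = Agent(valuings={"being on time": 2.0, "coffee quality": 1.0})
    print(agent.explain(agent.choose(options), options))
    # -> I chose 'get kiosk coffee' over get cafe coffee because I value 'being on time'.

The weighted-sum selection here is only one way to operationalise valuings; the paper's framework is defined over a BDI goal tree (cf. notes 2 and 6 below).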


Notes

  1. Note that using a BDI model does not necessarily require the system to be designed or implemented as BDI agents. It is in principle possible to use a BDI model to provide explanations of a system’s behaviour even if the system does not use BDI concepts.

  2. For actions we assume that the name of the goal tree node and the name of the action coincide, i.e. that \(A=N\).

  3. All explanations given in this section were produced by the implementation.

  4. This has subsequently been implemented.

  5. Kruskal-Wallis, since the data are not expected to be normally distributed (see the first sketch after these notes).

  6. However, the prototype implementation does not tag nodes, so it recomputes \(n(G_i)\), leading to higher computational complexity (see the second sketch after these notes).
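
Note 5’s test choice can be reproduced with standard tooling. A minimal sketch using SciPy, which is an assumption (the paper does not say what software was used), with invented placeholder ratings rather than the study’s data:

    # Kruskal-Wallis H-test (note 5): a rank-based test that does not assume
    # normally distributed data. SciPy and the ratings below are assumptions.
    from scipy.stats import kruskal

    # Hypothetical per-condition scores (e.g. ratings of explanation quality).
    condition_a = [4, 5, 3, 4, 5, 4]
    condition_b = [2, 3, 3, 2, 4, 3]
    condition_c = [5, 4, 5, 5, 3, 4]

    h_stat, p_value = kruskal(condition_a, condition_b, condition_c)
    print(f"H = {h_stat:.2f}, p = {p_value:.3f}")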
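
Note 6 points at a straightforward optimisation: tag each goal-tree node with \(n(G_i)\) once instead of recomputing it on every query. A sketch of that tagging idea follows, where the tree structure is an assumption and a simple descendant count stands in for the paper’s \(n(G_i)\), which is not defined here:

    # Node tagging per note 6: compute a per-node value once, cache it on the
    # node, and reuse it. Descendant count is a placeholder for n(G_i).
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class GoalNode:
        name: str
        children: list = field(default_factory=list)
        _tag: Optional[int] = None      # cached value, filled on first use

        def n(self) -> int:
            if self._tag is None:       # compute once ("tag" the node)...
                self._tag = sum(1 + c.n() for c in self.children)
            return self._tag            # ...then answer in O(1) afterwards

    root = GoalNode("get coffee",
                    [GoalNode("kiosk"), GoalNode("cafe", [GoalNode("queue")])])
    print(root.n())  # 3; repeated queries reuse the cached tags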


Author information

Correspondence to Michael Winikoff.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Winikoff, M., Dignum, V., Dignum, F. (2018). Why Bad Coffee? Explaining Agent Plans with Valuings. In: Gallina, B., Skavhaug, A., Schoitsch, E., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2018. Lecture Notes in Computer Science, vol. 11094. Springer, Cham. https://doi.org/10.1007/978-3-319-99229-7_47


  • DOI: https://doi.org/10.1007/978-3-319-99229-7_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99228-0

  • Online ISBN: 978-3-319-99229-7

  • eBook Packages: Computer Science, Computer Science (R0)
