Do You Get It? User-Evaluated Explainable BDI Agents

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6251)

Abstract

In this paper we focus on explaining the behavior of autonomous agents to humans, i.e., on explainable agents. Explainable agents are useful for many purposes, including scenario-based training (e.g., disaster training), tutoring and pedagogical systems, agent development and debugging, gaming, and interactive storytelling. As the aim is to generate explanations that humans find plausible and insightful, user evaluation of different explanations is essential. We test the hypothesis that different explanation types are needed to explain different types of actions. We present three generically applicable algorithms that automatically generate different types of explanations for the actions of BDI-based agents. Quantitative analysis of a user experiment (n=30), in which users rated the usefulness and naturalness of each explanation type for different agent actions, supports our hypothesis. In addition, we report feedback from the users on how they would explain the actions themselves. Finally, we propose guidelines for the development of explainable BDI agents.
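To make the idea of multiple explanation types concrete, the sketch below shows one way explanations could be generated from a BDI goal hierarchy. This is a minimal illustration under assumptions, not the paper's implementation: the goal-tree representation, the three explanation styles (parent goal, enabling belief, enabled next action), and all names (Node, explain_by_goal, etc.) are hypothetical choices for this sketch.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass(eq=False)  # identity-based equality avoids recursive comparisons
    class Node:
        """A node in a hypothetical BDI goal tree: a goal or a leaf action."""
        name: str
        belief: Optional[str] = None           # belief that enables this node
        children: List["Node"] = field(default_factory=list)
        parent: Optional["Node"] = None

        def add(self, child: "Node") -> "Node":
            child.parent = self
            self.children.append(child)
            return child

    def explain_by_goal(action: Node) -> str:
        """Goal-based explanation: cite the parent goal the action serves."""
        if action.parent is not None:
            return f"I {action.name} because I want to {action.parent.name}."
        return f"I {action.name}."

    def explain_by_belief(action: Node) -> str:
        """Belief-based explanation: cite the enabling belief condition."""
        if action.belief is not None:
            return f"I {action.name} because I believe {action.belief}."
        return f"I {action.name}."

    def explain_by_enabled(action: Node) -> str:
        """Procedural explanation: cite the next sibling action this one enables."""
        if action.parent is not None:
            sibs = action.parent.children
            i = sibs.index(action)             # identity lookup (eq=False)
            if i + 1 < len(sibs):
                return f"I {action.name} so that I can {sibs[i + 1].name} next."
        return f"I {action.name}."

    # Toy scenario in the spirit of disaster training
    rescue = Node("rescue the victim")
    open_door = rescue.add(Node("open the door", belief="the door is closed"))
    rescue.add(Node("enter the room"))

    print(explain_by_goal(open_door))     # goal-based explanation
    print(explain_by_belief(open_door))   # belief-based explanation
    print(explain_by_enabled(open_door))  # enabled-action explanation

For the toy scenario, the three functions yield "I open the door because I want to rescue the victim.", "I open the door because I believe the door is closed.", and "I open the door so that I can enter the room next." Which of these reads as most useful or natural for a given action is exactly the kind of question the user study addresses.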






Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Broekens, J., Harbers, M., Hindriks, K., van den Bosch, K., Jonker, C., Meyer, J.-J. (2010). Do You Get It? User-Evaluated Explainable BDI Agents. In: Dix, J., Witteveen, C. (eds.) Multiagent System Technologies. MATES 2010. Lecture Notes in Computer Science, vol. 6251. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16178-0_5


  • DOI: https://doi.org/10.1007/978-3-642-16178-0_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16177-3

  • Online ISBN: 978-3-642-16178-0

  • eBook Packages: Computer Science, Computer Science (R0)
