Abstract
Virtual training systems provide an effective means to train people for complex, dynamic tasks such as crisis management or firefighting. Intelligent agents are often used to play the characters with whom a trainee interacts. To increase the trainee's understanding of played scenarios, several accounts of agents that can explain the reasons for their actions have been proposed. This paper describes an empirical study of what instructors consider useful agent explanations for trainees. It was found that different explanation types were preferred for different actions, e.g., the conditions enabling an action's execution, the goals underlying an action, or the goals that become achievable after an action is executed. When an action has important consequences for other agents, instructors suggest that the others' perspectives should be part of the explanation.
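The abstract distinguishes three explanation types: enabling conditions, underlying goals, and goals made achievable by the action. As a minimal illustrative sketch (not the authors' implementation; the action and goal strings and the phrasing templates are invented for illustration), these types could be modeled as follows:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ExplanationType(Enum):
    """The three explanation types distinguished in the study (sketch)."""
    ENABLING_CONDITION = auto()   # a condition that made the action executable
    UNDERLYING_GOAL = auto()      # the goal the action was performed for
    ACHIEVABLE_GOAL = auto()      # a goal that becomes achievable afterwards

@dataclass
class Explanation:
    action: str
    kind: ExplanationType
    content: str

    def render(self) -> str:
        # Hypothetical templates; the paper does not prescribe this wording.
        templates = {
            ExplanationType.ENABLING_CONDITION: "I did '{a}' because {c}.",
            ExplanationType.UNDERLYING_GOAL: "I did '{a}' in order to {c}.",
            ExplanationType.ACHIEVABLE_GOAL: "Doing '{a}' made it possible to {c}.",
        }
        return templates[self.kind].format(a=self.action, c=self.content)

e = Explanation("evacuate the deck", ExplanationType.UNDERLYING_GOAL,
                "get the crew to safety")
print(e.render())  # I did 'evacuate the deck' in order to get the crew to safety.
```

A self-explaining agent in this style would pick one of the three kinds per action, which is exactly the choice the study asks instructors to judge.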
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Harbers, M., van den Bosch, K., Meyer, J.-J.Ch. (2009). A Study into Preferred Explanations of Virtual Agent Behavior. In: Ruttkay, Z., Kipp, M., Nijholt, A., Vilhjálmsson, H.H. (eds) Intelligent Virtual Agents. IVA 2009. Lecture Notes in Computer Science, vol. 5773. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04380-2_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-04379-6
Online ISBN: 978-3-642-04380-2