Abstract
Social AI agents interact with the members of a community and thereby change the behavior of the community. For example, in online learning, an AI social assistant may connect learners and thus enhance social interaction. These social AI assistants too need to explain themselves in order to enhance transparency and build trust with learners. We present a method of self-explanation that uses introspection over a self-model of an AI social assistant. The self-model is captured as a functional model that specifies how the agent's methods use knowledge to accomplish its tasks. The process of generating self-explanations uses Chain-of-Thought prompting to reflect on the self-model and ChatGPT to produce explanations of the agent's functioning. We evaluate the self-explanations of the AI social assistant for completeness and correctness.
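The sketch below is a minimal illustration (not the authors' implementation) of the pipeline the abstract describes: a functional self-model represented as a task-method-knowledge style structure, a chain-of-thought prompt built by introspecting over that model, and a call to an LLM to produce the self-explanation. The self-model fields, prompt wording, and helper names are illustrative assumptions; only the model name, gpt-3.5-turbo-instruct, comes from the paper's notes.

```python
# Hedged sketch of introspection-based self-explanation over a functional self-model.
# Assumes the openai>=1.0 Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

# Hypothetical fragment of a self-model for an AI social assistant:
# a task, the methods that accomplish it, and the knowledge each method uses.
SELF_MODEL = {
    "task": "Connect learners with shared interests",
    "methods": [
        {
            "name": "match_learners",
            "uses_knowledge": ["learner introductions", "topic keywords"],
            "produces": "candidate pairs of learners",
        },
        {
            "name": "compose_connection_message",
            "uses_knowledge": ["candidate pairs", "community norms"],
            "produces": "a post suggesting the learners connect",
        },
    ],
}


def build_cot_prompt(self_model: dict, question: str) -> str:
    """Turn the self-model into a step-by-step (chain-of-thought) prompt."""
    steps = "\n".join(
        f"- Method '{m['name']}' uses {', '.join(m['uses_knowledge'])} "
        f"to produce {m['produces']}."
        for m in self_model["methods"]
    )
    return (
        f"You are an AI social assistant. Your task: {self_model['task']}.\n"
        f"Your methods:\n{steps}\n\n"
        f"Question from a learner: {question}\n"
        "Reason step by step over your task and methods, then explain "
        "your functioning in plain language."
    )


def self_explain(question: str) -> str:
    """Generate a self-explanation by reflecting on the self-model with an LLM."""
    client = OpenAI()
    prompt = build_cot_prompt(SELF_MODEL, question)
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # completion-style model noted in the paper
        prompt=prompt,
        max_tokens=300,
    )
    return response.choices[0].text.strip()


if __name__ == "__main__":
    print(self_explain("Why did you suggest I contact another learner?"))
```

In this sketch the self-model is a plain dictionary for readability; a fuller treatment would follow the functional (task-method-knowledge) modeling the paper builds on, with the prompt generated by walking that model rather than hand-written strings.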
Notes
- 2. OpenAI's gpt-3.5-turbo-instruct model has been used.
Acknowledgements
This research has been supported by NSF Grants #2112532 and #2247790 to the National AI Institute for Adult Learning and Online Education. We thank members of the Design & Intelligence Laboratory for their contributions to this work.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Basappa, R., Tekman, M., Lu, H., Faught, B., Kakar, S., Goel, A.K. (2024). Social AI Agents Too Need to Explain Themselves. In: Sifaleras, A., Lin, F. (eds) Generative Intelligence and Intelligent Tutoring Systems. ITS 2024. Lecture Notes in Computer Science, vol 14798. Springer, Cham. https://doi.org/10.1007/978-3-031-63028-6_29
DOI: https://doi.org/10.1007/978-3-031-63028-6_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-63027-9
Online ISBN: 978-3-031-63028-6