
Social AI Agents Too Need to Explain Themselves

  • Conference paper
  • In: Generative Intelligence and Intelligent Tutoring Systems (ITS 2024)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14798)

Abstract

Social AI agents interact with members of a community and thereby change the behavior of that community. In online learning, for example, an AI social assistant may connect learners and thus enhance social interaction. These social AI assistants, too, need to explain themselves in order to build transparency and trust with learners. We present a method of self-explanation that uses introspection over a self-model of an AI social assistant. The self-model is captured as a functional model that specifies how the agent's methods use knowledge to accomplish its tasks. The process of generating self-explanations uses Chain-of-Thought prompting to reflect on the self-model and ChatGPT to produce explanations of the agent's functioning. We evaluate the self-explanations of the AI social assistant for completeness and correctness.
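The abstract describes the pipeline only at a high level. As a rough illustration, the sketch below shows how a functional self-model might be serialized and combined with a Chain-of-Thought style prompt sent to OpenAI's gpt-3.5-turbo-instruct model (named in the paper's notes). The self-model fragment, prompt wording, and helper function are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: the self-model content, prompt wording, and helper
# name are assumptions, not the paper's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A toy functional self-model: tasks, the methods that achieve them, and the
# knowledge each method uses.
SELF_MODEL = """\
Task: Connect learners with shared interests.
  Method: Match learner introductions by topic similarity.
    Knowledge: Embeddings of learner forum posts.
  Method: Post a connection message in the class forum.
    Knowledge: Templates for introduction messages.
"""

def explain(question: str) -> str:
    """Ask the agent to explain itself by reasoning step by step over its self-model."""
    prompt = (
        "You are a social AI assistant. Below is a model of your own design.\n"
        f"{SELF_MODEL}\n"
        f"Question about your functioning: {question}\n"
        "Let's think step by step, then give a short explanation grounded only "
        "in the model above."
    )
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # legacy completions endpoint
        prompt=prompt,
        max_tokens=256,
        temperature=0.0,
    )
    return response.choices[0].text.strip()

if __name__ == "__main__":
    print(explain("How do you decide which learners to connect?"))
```

The point of such a design, as the abstract suggests, is that the generated explanation is grounded in an explicit self-model of the agent rather than left entirely to the language model's general knowledge.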


Notes

  1. LangChain documentation.

  2. OpenAI’s gpt-3.5-turbo-instruct model was used.

  3. Meta’s FAISS documentation. (A sketch of how these components might fit together follows these notes.)
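
The notes name LangChain, gpt-3.5-turbo-instruct, and FAISS but not how they fit together. One plausible arrangement, sketched below under assumptions (the self-model fragments and query are illustrative, and the imports reflect current LangChain packaging rather than the paper), is to index self-model fragments in a FAISS vector store and retrieve the ones relevant to a question before building a prompt like the one sketched after the abstract.

```python
# Hypothetical retrieval step over self-model fragments; not the authors' code.
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# Illustrative fragments of a functional self-model (tasks, methods, knowledge).
fragments = [
    "Task: Connect learners with shared interests.",
    "Method: Match learner introductions by topic similarity.",
    "Knowledge: Embeddings of learner forum posts.",
    "Method: Post a connection message in the class forum.",
]

# Build an in-memory FAISS index over the fragments.
index = FAISS.from_texts(fragments, OpenAIEmbeddings())

# Retrieve the fragments most relevant to a question; these would then be
# inserted into the Chain-of-Thought prompt shown earlier.
for doc in index.similarity_search("How do you match learners?", k=2):
    print(doc.page_content)
```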


Acknowledgements

This research has been supported by NSF Grants #2112532 and #2247790 to the National AI Institute for Adult Learning and Online Education. We thank members of the Design & Intelligence Laboratory for their contributions to this work.

Author information

Corresponding author

Correspondence to Rhea Basappa.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Basappa, R., Tekman, M., Lu, H., Faught, B., Kakar, S., Goel, A.K. (2024). Social AI Agents Too Need to Explain Themselves. In: Sifaleras, A., Lin, F. (eds) Generative Intelligence and Intelligent Tutoring Systems. ITS 2024. Lecture Notes in Computer Science, vol 14798. Springer, Cham. https://doi.org/10.1007/978-3-031-63028-6_29

  • DOI: https://doi.org/10.1007/978-3-031-63028-6_29


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-63027-9

  • Online ISBN: 978-3-031-63028-6

  • eBook Packages: Computer Science, Computer Science (R0)
