DOI: 10.1145/3531073.3534481
Poster

Can Explainable AI Foster Trust in a Customer Dialogue System?

Published: 06 June 2022

Abstract

In this poster paper, we present a web-based user study of a customer dialogue system in which participants assigned tickets to different departments based on an automatic classification and answered questions about the perceived classification performance. Completion times were significantly shorter when explanations of the classification process were offered, while task success and trust in the interface did not depend on whether explanations were shown. Based on these results, future studies should be confined to smaller scopes and investigate additional techniques for explainable AI.
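The study interface paired each automatic ticket classification with an optional explanation of how the decision was reached. As a minimal illustrative sketch (not the authors' implementation; the department names, keywords, and function are invented for this example), a keyword-based router that returns the matched terms as a local explanation could look like:

```python
# Hypothetical sketch: route a customer ticket to a department and
# report which terms drove the decision. Departments and keyword
# sets below are illustrative assumptions, not the studied system.
DEPARTMENT_KEYWORDS = {
    "Billing": {"invoice", "refund", "charge", "payment"},
    "IT Support": {"password", "login", "crash", "error"},
    "Shipping": {"delivery", "package", "tracking", "delayed"},
}

def classify_with_explanation(ticket: str):
    """Return (department, explanation) for a ticket text.

    The explanation lists the keywords that matched, i.e. the
    evidence behind the classification; with no match, the ticket
    is flagged for manual triage.
    """
    tokens = set(ticket.lower().split())
    best_dept, best_hits = None, set()
    for dept, keywords in DEPARTMENT_KEYWORDS.items():
        hits = tokens & keywords
        if len(hits) > len(best_hits):
            best_dept, best_hits = dept, hits
    if best_hits:
        explanation = f"Matched terms: {sorted(best_hits)}"
    else:
        explanation = "No keyword matched; manual triage suggested."
    return best_dept, explanation

dept, why = classify_with_explanation(
    "My payment failed and I need a refund"
)
```

Showing the matched terms alongside the predicted department is one simple way to surface the "why" of a classification, which is the kind of explanation condition the study compared against a no-explanation baseline.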


Published In

AVI '22: Proceedings of the 2022 International Conference on Advanced Visual Interfaces
June 2022
414 pages
ISBN:9781450397193
DOI:10.1145/3531073
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Explainable AI
  2. Information Visualization
  3. Transparency
  4. Trust

Qualifiers

  • Poster
  • Research
  • Refereed limited

Conference

AVI 2022

Acceptance Rates

Overall Acceptance Rate 128 of 490 submissions, 26%
