
The Effect of Explanations on Trust in an Assistance System for Public Transport Users and the Role of the Propensity to Trust

Published: 13 September 2021

Abstract

The present study investigated whether explanations increase trust in an assistance system and, in addition, examined the role of the individual propensity to trust in technology. We conducted an empirical study in a virtual reality environment in which 40 participants interacted with an assistance system for public transport users. The study used a 2 × 2 mixed design with the within-subject factor assistance system feature (trip planner vs. connection request) and the between-subject factor explanation (with vs. without). We measured explicit trust via a questionnaire and implicit trust via an operationalization of the participants' behavior. The results showed that trust propensity predicted explicit trust and that explanations significantly increased explicit trust. This was not the case for implicit trust, however, suggesting that explicit and implicit trust do not necessarily coincide. In conclusion, our results complement the literature on explainable artificial intelligence and trust in automation and suggest directions for future research on the effect of explanations on trust in assistance systems and other technologies.
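For readers who want to reproduce this kind of analysis, the sketch below shows how a 2 × 2 mixed design with one between-subject and one within-subject factor can be analyzed with a mixed ANOVA in Python. This is a minimal illustration, not the authors' code: the column names (participant, explanation, feature, trust) and the simulated scores are assumptions standing in for the study's data.

# Minimal sketch (not the authors' code) of a 2 x 2 mixed-design ANOVA
# like the one described in the abstract. All names and data here are
# illustrative assumptions.
import numpy as np
import pandas as pd
import pingouin as pg  # third-party package: pip install pingouin

rng = np.random.default_rng(seed=42)
n_participants = 40  # sample size reported in the abstract

rows = []
for pid in range(n_participants):
    # Between-subject factor: explanation (with vs. without), 20 per group.
    explanation = "with" if pid < n_participants // 2 else "without"
    # Within-subject factor: assistance-system feature; each participant
    # experiences both features and therefore contributes two rows.
    for feature in ("trip_planner", "connection_request"):
        # Simulated questionnaire score; a small shift for the
        # "with explanation" group stands in for the reported effect.
        trust = rng.normal(loc=3.5 if explanation == "with" else 3.0, scale=0.5)
        rows.append({"participant": pid, "explanation": explanation,
                     "feature": feature, "trust": trust})

df = pd.DataFrame(rows)

# Mixed ANOVA: 'feature' varies within subjects, 'explanation' between.
aov = pg.mixed_anova(data=df, dv="trust", within="feature",
                     subject="participant", between="explanation")
print(aov[["Source", "F", "p-unc", "np2"]])

On real data, the same call would test the main effects of explanation and feature as well as their interaction; the explicit-trust result in the abstract corresponds to a significant between-subject effect of explanation.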



Published In

MuC '21: Proceedings of Mensch und Computer 2021
September 2021, 613 pages
ISBN: 9781450386456
DOI: 10.1145/3473856

Publisher

Association for Computing Machinery, New York, NY, United States



Conference

MuC '21: Mensch und Computer 2021
September 5–8, 2021
Ingolstadt, Germany
