Abstract
This paper describes a web-based study testing the effects of different explanation types on the human-computer trust relationship. Human-computer trust has been shown to be very important for keeping the user motivated and cooperative in human-computer interaction. Unexpected or incomprehensible situations in particular may decrease the user's trust and thereby change the way the user interacts with a technical system. Analogous to human-human interaction, providing explanations in these situations can help to remedy such negative effects. However, selecting the appropriate explanation based on the user's human-computer trust is an unprecedented approach, because existing studies treat trust as a one-dimensional concept. In this study we try to find a mapping between the bases of trust and the different goals of explanations. Our results show that transparency explanations seem to be the best way to influence the user's perceived understandability and reliability of a system.
Acknowledgments
This work was supported by the Transregional Collaborative Research Centre SFB/TRR 62 “Companion-Technology for Cognitive Technical Systems” which is funded by the German Research Foundation (DFG).
Copyright information
© 2016 Springer International Publishing Switzerland
Cite this chapter
Nothdurft, F., Minker, W. (2016). Justification and Transparency Explanations in Dialogue Systems to Maintain Human-Computer Trust. In: Rudnicky, A., Raux, A., Lane, I., Misu, T. (eds) Situated Dialog in Speech-Based Human-Computer Interaction. Signals and Communication Technology. Springer, Cham. https://doi.org/10.1007/978-3-319-21834-2_4
Print ISBN: 978-3-319-21833-5
Online ISBN: 978-3-319-21834-2