Justification and Transparency Explanations in Dialogue Systems to Maintain Human-Computer Trust

Part of the book series: Signals and Communication Technology ((SCT))

Abstract

This paper describes a web-based study testing the effects of different explanation types on the human-computer trust relationship. Human-computer trust has been shown to be important for keeping the user motivated and cooperative in human-computer interaction. Unexpected or incomprehensible system behavior in particular may decrease trust and thereby change the way the user interacts with a technical system. As in human-human interaction, providing explanations in these situations can help to remedy such negative effects. However, selecting the appropriate explanation based on the user's human-computer trust is an unprecedented approach, because existing studies treat trust as a one-dimensional concept. In this study we seek a mapping between the bases of trust and the different goals of explanations. Our results show that transparency explanations seem to be the best way to influence the user's perceived understandability and reliability.



Acknowledgments

This work was supported by the Transregional Collaborative Research Centre SFB/TRR 62 “Companion-Technology for Cognitive Technical Systems” which is funded by the German Research Foundation (DFG).

Author information

Corresponding author: Florian Nothdurft


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Nothdurft, F., Minker, W. (2016). Justification and Transparency Explanations in Dialogue Systems to Maintain Human-Computer Trust. In: Rudnicky, A., Raux, A., Lane, I., Misu, T. (eds) Situated Dialog in Speech-Based Human-Computer Interaction. Signals and Communication Technology. Springer, Cham. https://doi.org/10.1007/978-3-319-21834-2_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-21833-5

  • Online ISBN: 978-3-319-21834-2

  • eBook Packages: Engineering (R0)
