A Game for Eliciting Trust Between People and Devices Under Diverse Performance Conditions

  • Conference paper
  • Included in the conference proceedings: Computer Games (CGW 2017)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 818)

Abstract

In this paper, we introduce a web-based game designed to investigate how different conditions affect people’s trust in devices. The game is set in a retirement village, where residents live in smart homes equipped with monitoring systems. Players, who “work” in the village, must trade off the time spent on administrative tasks (which enable them to earn extra income) against the time spent ensuring the welfare of the residents. The scenario of the game is complex enough to support investigating how various factors, such as system accuracy, the type of error made by the system, and the risk associated with events, influence players’ trust in the monitoring system. We describe the game and its theoretical underpinnings, and present preliminary results from a trial in which players interacted with two systems with different levels of accuracy.
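To make the trade-off concrete, the sketch below simulates one round of a toy version of this scenario in Python. It is an illustrative model only: the incident rate, income, and penalty values (e.g. ADMIN_INCOME, MISS_PENALTY) are assumptions chosen for exposition, not the parameters of the actual game. The point is simply to show how system accuracy, the type of error (miss vs. false alert), and event risk can jointly shape a player's payoff, and hence the player's incentive to rely on the monitoring system.

    import random

    # Illustrative sketch only: names and numbers below are assumptions for
    # exposition, not the settings used in the paper's game.

    def simulate_round(system_accuracy, follow_alert, event_risk="high", rng=random):
        """Simulate one monitoring event; return the player's payoff for the round."""
        ADMIN_INCOME = 10.0                            # income from administrative tasks
        CHECK_COST = 4.0                               # time lost checking on a resident
        MISS_PENALTY = {"low": 20.0, "high": 80.0}     # unattended incident, scaled by risk
        FALSE_ALERT_COST = 2.0                         # wasted trip on a spurious alert

        incident = rng.random() < 0.3                  # does a real welfare incident occur?
        correct = rng.random() < system_accuracy       # system reports correctly with this probability
        alert = incident if correct else not incident  # miss = incident without alert; false alert = alert without incident

        payoff = ADMIN_INCOME
        if alert and follow_alert:
            payoff -= CHECK_COST
            if not incident:
                payoff -= FALSE_ALERT_COST             # false alert: small cost
        elif incident:
            payoff -= MISS_PENALTY[event_risk]         # miss: large, risk-dependent cost
        return payoff

    # Compare the expected payoff of relying on a 90%- vs a 60%-accurate system.
    for acc in (0.9, 0.6):
        rng = random.Random(42)
        mean = sum(simulate_round(acc, follow_alert=True, rng=rng) for _ in range(10000)) / 10000
        print(f"accuracy={acc:.0%}: mean payoff per round = {mean:.2f}")

In this toy model, a more accurate system yields a higher mean payoff mainly because costly misses become rarer, which mirrors the kind of accuracy manipulation described in the abstract.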


Notes

  1. The game can be accessed at: http://bit.ly/MonashExp.

  2. The games described in [24, 31] have misses and false alerts, but their consequences are identical.

References

  1. Bagheri, N., Jamieson, G.A.: The impact of context-related reliability on automation failure detection and scanning behaviour. In: 2004 IEEE International Conference on Systems, Man and Cybernetics, vol. 1, pp. 212–217. IEEE (2004)
  2. Bean, N.H., Rice, S.C., Keller, M.D.: The effect of Gestalt psychology on the system-wide trust strategy in automation. In: Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting, pp. 1417–1421 (2011)
  3. Beck, H.P., Dzindolet, M.T., Pierce, L.G.: Automation usage decisions: controlling intent and appraisal errors in a target detection task. J. Hum. Factors Ergon. Soc. 49(3), 429–437 (2007)
  4. Berg, J., Dickhaut, J., McCabe, K.: Trust, reciprocity, and social history. Games Econ. Behav. 10(1), 122–142 (1995)
  5. Cook, D., Krishnan, N.: Mining the home environment. J. Intell. Inf. Syst. 43(3), 503–519 (2014)
  6. Dadashi, N., Stedmon, A., Pridmore, T.: Semi-automated CCTV surveillance: the effects of system confidence, system accuracy and task complexity on operator vigilance, reliance and workload. Appl. Ergon. 44(5), 730–738 (2013)
  7. de Melo, C., Gratch, J.: People show envy, not guilt, when making decisions with machines. In: International Conference on Affective Computing and Intelligent Interaction, pp. 315–321 (2015)
  8. Dzindolet, M., Pierce, L., Peterson, S., Purcell, L., Beck, H.: The influence of feedback on automation use, misuse, and disuse. In: Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting, pp. 551–555 (2002)
  9. Dzindolet, M.T., Peterson, S.A., Pomranky, R.A., Pierce, L.G., Beck, H.P.: The role of trust in automation reliance. Int. J. Hum. Comput. Stud. 58(6), 697–718 (2003)
  10. Gao, J., Lee, J.D.: Effect of shared information on trust and reliance in a demand forecasting task. In: Proceedings of the Human Factors and Ergonomics Society 50th Annual Meeting, pp. 215–219 (2006)
  11. Gong, L.: How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Comput. Hum. Behav. 24(4), 1494–1509 (2008)
  12. Güth, W., Schmittberger, R., Schwarze, B.: An experimental analysis of ultimatum bargaining. J. Econ. Behav. Organ. 3(4), 367–388 (1982)
  13. Hoff, K., Bashir, M.: Trust in automation: integrating empirical evidence on factors that influence trust. Hum. Factors 57(3), 407–434 (2015)
  14. Jamieson, G.A., Wang, L., Neyedli, H.F.: Developing human-machine interfaces to support appropriate trust and reliance on automated combat identification systems. Technical report, DTIC Document (2008)
  15. Kirchner, W.: Age differences in short-term retention of rapidly changing information. J. Exp. Psychol. 55(4), 352–358 (1958)
  16. Lacson, F.C., Wiegmann, D.A., Madhavan, P.: Effects of attribute and goal framing on automation reliance and compliance. In: Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting, pp. 482–486 (2005)
  17. Lee, E.J.: Flattery may get computers somewhere, sometimes: the moderating role of output modality, computer gender, and user gender. Int. J. Hum. Comput. Stud. 66(11), 789–800 (2008)
  18. Madhavan, P., Wiegmann, D.A., Lacson, F.C.: Automation failures on tasks easily performed by operators undermine trust in automated aids. J. Hum. Factors Ergon. Soc. 48(2), 241–256 (2006)
  19. Moray, N., Inagaki, T., Itoh, M.: Adaptive automation, trust, and self-confidence in fault management of time-critical tasks. J. Exp. Psychol. Appl. 6(1), 44–58 (2000)
  20. Moshtaghi, M., Zukerman, I., Russell, R.: Statistical models for unobtrusively detecting abnormal periods of inactivity in older adults. User Model. User-Adap. Inter. 25(3), 231–265 (2015)
  21. Oduor, K.F., Wiebe, E.N.: The effects of automated decision algorithm modality and transparency on reported trust and task performance. In: Proceedings of the Human Factors and Ergonomics Society 52nd Annual Meeting, pp. 302–306 (2008)
  22. Parasuraman, R., Miller, C.A.: Trust and etiquette in high-criticality automated systems. Commun. ACM 47(4), 51–55 (2004)
  23. Parasuraman, R., Riley, V.: Humans and automation: use, misuse, disuse, abuse. Hum. Factors 39(2), 230–253 (1997)
  24. Sanchez, J.: Factors that affect trust and reliance on an automated aid. Ph.D. thesis, Georgia Institute of Technology (2006)
  25. Seong, Y., Bisantz, A.M.: The impact of cognitive feedback on judgment performance and trust with decision aids. Int. J. Ind. Ergon. 38(7), 608–625 (2008)
  26. Spain, R.D., Madhavan, P.: The role of automation etiquette and pedigree in trust and dependence. In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting, pp. 339–343 (2009)
  27. de Visser, E.J., Krueger, F., McKnight, P., Scheid, S., Smith, M., Chalk, S., Parasuraman, R.: The world is not enough: trust in cognitive agents. In: Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting, pp. 263–267 (2012)
  28. Walliser, J.C., de Visser, E.J., Shaw, T.H.: Application of a system-wide trust strategy when supervising multiple autonomous agents. In: Proceedings of the Human Factors and Ergonomics Society 60th Annual Meeting, pp. 133–137 (2016)
  29. Wang, L., Jamieson, G., Hollands, J.G.: The effects of design features on users’ trust in and reliance on a combat identification system. In: Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting, pp. 375–379 (2011)
  30. Wang, L., Jamieson, G.A., Hollands, J.G.: Trust and reliance on an automated combat identification system. J. Hum. Factors Ergon. Soc. 51(3), 281–291 (2009)
  31. Yu, K., Berkovsky, S., Taib, R., Conway, D., Zhou, J., Chen, F.: User trust dynamics: an investigation driven by differences in system performance. In: IUI 2017 - Proceedings of the 22nd International Conference on Intelligent User Interfaces, pp. 307–317 (2017)
  32. Zanatto, D., Patacchiola, M., Goslin, J., Cangelosi, A.: Priming anthropomorphism: can our trust in humanlike robots be transferred to non-humanlike robots? In: Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction, pp. 543–544 (2016)

Acknowledgments

The authors thank Matt Chen for his help in recording the training video, and Stephen Meagher for his assistance with the penalty estimations.

Author information

Corresponding author

Correspondence to Ingrid Zukerman.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Zukerman, I., Partovi, A., Zhan, K., Hamacher, N., Stout, J., Moshtaghi, M. (2018). A Game for Eliciting Trust Between People and Devices Under Diverse Performance Conditions. In: Cazenave, T., Winands, M., Saffidine, A. (eds) Computer Games. CGW 2017. Communications in Computer and Information Science, vol 818. Springer, Cham. https://doi.org/10.1007/978-3-319-75931-9_12

  • DOI: https://doi.org/10.1007/978-3-319-75931-9_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-75930-2

  • Online ISBN: 978-3-319-75931-9

  • eBook Packages: Computer Science; Computer Science (R0)
