Platforms for Assessing Relationships: Trust with Near Ecologically-Valid Risk, and Team Interaction

Chapter in Engineering Artificially Intelligent Systems

Abstract

Assessment of human-machine trust is difficult because of confounds in context, system capability, and reliability. Trust indicates a willingness to be vulnerable to the variable and unpredictable actions of another actor. Exposing people to risk from decisions made by an intelligent agent is difficult to justify for research ethics purposes, and making expensive, physical intelligent agents vulnerable to human decisions inhibits exploration of the development of trust or teams with embodied systems. These confounds can be addressed through the use of virtual reality and immersive gaming systems. This chapter describes the development of two platforms, PAR-TNER and PAR-TI, for the exploration of human collaboration with autonomous systems, and provides an overview of a limited initial pilot of PAR-TNER. In both platforms, the participant teams with either humans or machines to collaboratively escape from a room. PAR-TNER leverages virtual reality to simulate risk, while PAR-TI allows researchers to explore team dynamics. Although the data from the pilot test of PAR-TNER are limited, they indicate that the research platforms can be used to discern trust from perceived capability.

Acknowledgements

We would like to thank Dr. Joshua Baker for his exceptional assistance with statistical analyses.

Author information

Corresponding author

Correspondence to Ariel M. Greenberg .


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter
Cite this chapter

Marble, J.L., et al. (2021). Platforms for Assessing Relationships: Trust with Near Ecologically-Valid Risk, and Team Interaction. In: Lawless, W.F., Llinas, J., Sofge, D.A., Mittu, R. (eds.) Engineering Artificially Intelligent Systems. Lecture Notes in Computer Science, vol. 13000. Springer, Cham. https://doi.org/10.1007/978-3-030-89385-9_13

  • DOI: https://doi.org/10.1007/978-3-030-89385-9_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89384-2

  • Online ISBN: 978-3-030-89385-9
