ABSTRACT
We propose a novel experimental paradigm to measure human trust in machines during a collaborative and egoistic theory-of-mind game. To elicit different levels of human trust in machine partners, we manipulate the technical capability and humanlike cues of the autonomous agent during the cognitive experiments while recording participants' electroencephalography (EEG). The trust values measured across these conditions will be used to develop a dynamic trust model for efficient human-machine systems.
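The abstract does not specify the form of the dynamic trust model, but one common starting point in the trust-in-automation literature is to treat trust as a latent state updated after each interaction with the partner. A minimal sketch, assuming an exponentially weighted update toward the partner's observed task outcome (the function name, learning rate, and binary outcome encoding are all illustrative assumptions, not the authors' method):

```python
# Illustrative sketch of a dynamic trust model, NOT the authors' model:
# trust is nudged toward each observed outcome of the machine partner.
def update_trust(trust, outcome, alpha=0.2):
    """Move trust toward the latest outcome (1 = success, 0 = failure).

    alpha is an assumed learning rate controlling how quickly trust
    adapts to new evidence about the partner's capability.
    """
    return trust + alpha * (outcome - trust)

trust = 0.5  # neutral prior before any interaction
for outcome in [1, 1, 0, 1]:  # partner succeeds, succeeds, fails, succeeds
    trust = update_trust(trust, outcome)
```

In an EEG study like the one proposed, per-trial estimates from such a model could be correlated with neural markers recorded at each decision point; the single `alpha` parameter could then be fit per participant or per condition (capability × humanlike cues).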
A Preliminary Study on Human Trust Measurements by EEG for Human-Machine Interactions