Abstract
Human-Robot Interaction (HRI) is an emerging topic within contemporary science, spanning the literatures of engineering, psychology, computer science, artificial intelligence, machine learning, and robotics. The study of HRI requires scenarios and other affordances that contextualize the interaction so that researchers can examine the factors shaping human attitudes, behaviors, and biases toward robots. The current paper details the rationale, software/hardware, and contextual considerations associated with creating HRI stimuli for experiments, specifically a set of video stimuli. These stimuli centered on the concept of an autonomous security robot (ASR) and provided a scenario in which humans (research confederates) interacted with the ASR in a realistic setting. A video was created to foster a realistic visual and auditory representation of the HRI encounter from a dynamic perspective, shifting between a first-person experiential vantage point and a bird's-eye view of the broader situation. The scenario involved a security context in which the confederates sought access to a secure facility. The ASR's role was to examine visitor access credentials and determine whether the visitor was authorized. The robot's behaviors, while scripted, were depicted as autonomous in the video and included verbal interactions/instructions as well as physical limb motions and gestures. Several important features were considered in creating the stimuli, namely: realism, immersive experience, and the simulated vulnerability of the human to the robot. The current paper walks through each of these considerations and details our approach. To date, the HRI stimuli have been used in multiple experiments and have demonstrated flexibility in addressing multiple research questions in the domain of HRI.
Copyright information
© 2020 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply
Cite this paper
Vo, T., Lyons, J.B. (2020). Producing an Immersive Experience Using Human-Robot Interaction Stimuli. In: Stephanidis, C., et al. HCI International 2020 – Late Breaking Papers: Cognition, Learning and Games. HCII 2020. Lecture Notes in Computer Science(), vol 12425. Springer, Cham. https://doi.org/10.1007/978-3-030-60128-7_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-60127-0
Online ISBN: 978-3-030-60128-7
eBook Packages: Computer Science, Computer Science (R0)