Investigating User Preferences for In-Vehicle Virtual Robots' Anthropomorphic Appearance in Augmented Reality Head-Up Display

ABSTRACT
As an interactive medium between drivers and vehicles, in-vehicle virtual robots significantly affect user performance and experience. Among these, robots with anthropomorphic appearances are the mainstream. The Augmented Reality Head-Up Display (AR-HUD) is an emerging human-vehicle interaction interface, but it has not yet been used to present in-vehicle virtual robots. Consequently, it is unclear what preferences users have for the anthropomorphic appearance of virtual robots displayed via AR-HUD. In this work, we conducted two online experimental studies focusing on two aspects of anthropomorphic appearance: human similarity and morphological completeness. We collected valid data sets from 257 participants, who rated the in-vehicle virtual robot's anthropomorphic appearance on five preference dimensions: Pleasure, Fear, Trust, Comprehensibility, and Acceptance. We found that users prefer virtual robot appearances with medium human similarity and high morphological completeness, which aligns with current theories on robot appearance. Our future research will focus on developing and validating design principles for the appearance of in-vehicle virtual robots.