DOI: 10.1145/3652920.3652928

Evaluations of Parallel Views for Sequential VR Search Tasks

Published: 01 May 2024

ABSTRACT

In collaborative virtual environments, sharing a mutual first-person view can lead to different problem-solving strategies among users. What if all views are controlled and seen by the same user? Could this impact how visual search tasks are performed? This paper explores the effects of giving a user different numbers of parallel views while solving object search tasks in a virtual environment. We developed a prototype and conducted a pilot study comparing two (2Heads), four (4Heads), and eight (8Heads) additional views. The results suggest that the number of parallel views influenced how participants approached the tasks. Participants found the 2Heads and 8Heads experiences unpleasant: in 2Heads, the additional but only partial perspectives discouraged participants from using the parallel views, while in 8Heads, the full and overlapping views forced participants to rely on them regardless of preference. 4Heads drew the fewest complaints, as it gave users the freedom and flexibility to choose their own task-solving strategies. We translate these results into design implications for future development and research involving parallel views.
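The paper itself contains no source code, but the core idea, a single user simultaneously receiving several first-person views of the same scene, is easy to sketch. The following is a minimal, hypothetical three.js/TypeScript illustration, not the paper's actual prototype: names such as VIEW_COUNT, parallelViews, and hud, plus the camera placement and tile layout, are all assumptions. N extra cameras render the scene to off-screen targets each frame, and their textures are pinned as tiles in front of the user's main camera, loosely mirroring the 2Heads/4Heads/8Heads conditions.

```typescript
// Hypothetical sketch (not the paper's prototype): N "parallel view"
// cameras render the same scene to textures shown as tiles pinned
// in front of the user's main camera.
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();

// Main egocentric camera; added to the scene so its child tiles render.
const mainCamera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 100);
mainCamera.position.set(0, 1.6, 3);
scene.add(mainCamera);

// Stand-ins for search targets: a ring of colored cubes (unlit materials).
for (let i = 0; i < 12; i++) {
  const cube = new THREE.Mesh(
    new THREE.BoxGeometry(0.3, 0.3, 0.3),
    new THREE.MeshBasicMaterial({ color: Math.random() * 0xffffff }));
  const a = (i / 12) * Math.PI * 2;
  cube.position.set(Math.cos(a) * 4, 1.5, Math.sin(a) * 4);
  scene.add(cube);
}

// HUD group holding the view tiles; hidden during the off-screen passes
// so a tile never appears inside another parallel view.
const hud = new THREE.Group();
mainCamera.add(hud);

const VIEW_COUNT = 4; // assumed: analogous to the paper's "4Heads" condition
const parallelViews = Array.from({ length: VIEW_COUNT }, (_, i) => {
  // Place each extra "head" on a circle, looking at the scene centre.
  const camera = new THREE.PerspectiveCamera(75, 1, 0.1, 100);
  const a = (i / VIEW_COUNT) * Math.PI * 2;
  camera.position.set(Math.cos(a) * 2, 1.6, Math.sin(a) * 2);
  camera.lookAt(0, 1.5, 0);

  // Off-screen target this view renders into each frame.
  const target = new THREE.WebGLRenderTarget(512, 512);

  // Tile displaying the view, laid out in a row below the user's gaze.
  const tile = new THREE.Mesh(
    new THREE.PlaneGeometry(0.4, 0.4),
    new THREE.MeshBasicMaterial({ map: target.texture }));
  tile.position.set((i - (VIEW_COUNT - 1) / 2) * 0.45, -0.5, -1.5);
  hud.add(tile);

  return { camera, target };
});

renderer.setAnimationLoop(() => {
  hud.visible = false;
  for (const view of parallelViews) {   // off-screen parallel-view passes
    renderer.setRenderTarget(view.target);
    renderer.render(scene, view.camera);
  }
  hud.visible = true;
  renderer.setRenderTarget(null);       // main egocentric pass
  renderer.render(scene, mainCamera);
});
```

A real prototype along these lines would run on an HMD (for example via WebXR or a game engine) and would have to confront the trade-off the study observed: too few tiles give only partial perspectives, while too many produce the full but overlapping clutter that forced reliance on the parallel views in the 8Heads condition.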


Published in

AHs '24: Proceedings of the Augmented Humans International Conference 2024
April 2024
355 pages
ISBN: 9798400709807
DOI: 10.1145/3652920

Copyright © 2024 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

        Publisher

        Association for Computing Machinery

        New York, NY, United States
