DOI: 10.1145/3678957.3685748
Research article
Open access

MR-Driven Near-Future Realities: Previewing Everyday Life Real-World Experiences Using Mixed Reality

Published: 04 November 2024

Abstract

Mixed reality (MR) provides users with novel affordances that allow them to overlay and experience reality in various visual manifestations. However, existing work has mainly focused on using MR to augment a user’s present, which does not exhaust the full potential of contextual MR. In this paper, we empirically explore MR-Driven Near-Future Realities, a future multimodal MR experience that overlays semantically related augmentations onto a user’s everyday life, allowing them to preview, manipulate, and reflect on a near-future reality. We investigated this concept in a VR-based empirical study to understand users’ perceptions of and opinions about MR-Driven Near-Future Realities. The results showed that users were positive about the concept and were able to elicit near-future realities across various simulated real-world manifestations, but more demanding scenarios negatively affected their experience and task-completion performance. Our goal is to spark meaningful and critical discussion about the use of MR-Driven Near-Future Realities to preview possible near-future realities in everyday life.

Supplemental Material

PDF file: Appendix

      Published In

      ICMI '24: Proceedings of the 26th International Conference on Multimodal Interaction
      November 2024
      725 pages
ISBN: 9798400704628
DOI: 10.1145/3678957
      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. Mixed Reality
      2. Previewing Reality
      3. Spatial Computing

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

      • Meta Reality Labs Research

      Conference

ICMI '24: International Conference on Multimodal Interaction
November 4–8, 2024
San Jose, Costa Rica

      Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions (42%)

