Research Article | Open Access

Environment Texture Optimization for Augmented Reality

Published: 09 September 2024

Abstract

Augmented reality (AR) platforms now support persistent, markerless experiences, in which virtual content appears in the same place relative to the real world across multiple devices and sessions. However, optimizing environments for these experiences remains challenging: virtual content stability is determined by the performance of device pose tracking, which depends on recognizable environment features, yet environment texture can impair human perception of virtual content. Low-contrast 'invisible textures' have recently been proposed as a solution, but they may result in poor tracking performance under dynamic device motion. Here, we examine the use of invisible textures in detail, starting with the first evaluation in a realistic AR scenario. We then consider scenarios with more dynamic device motion, and conduct extensive game engine-based experiments to develop a method for optimizing invisible textures. For texture optimization in real environments, we introduce MoMAR, the first system to analyze motion data from multiple AR users and generate guidance using situated visualizations. We show that MoMAR can be deployed while maintaining an average frame rate above 59 fps on five different devices. We demonstrate MoMAR in a realistic case study: our optimized environment texture allowed users to complete a task significantly faster (p = 0.003) than a complex texture.
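The motion analysis the abstract describes can be illustrated with a short sketch. The Python snippet below is an assumption-laden illustration rather than the authors' MoMAR implementation: the function name, the logging format (timestamped positions plus unit quaternions), and the quaternion convention are all hypothetical. It estimates the linear and angular speed of a device along one logged AR trajectory, the kind of per-user motion statistic that would then be aggregated across users.

import numpy as np

def motion_speeds(timestamps, positions, quaternions):
    """Per-sample linear and angular speed along one AR device trajectory.

    timestamps:  (N,) sample times in seconds
    positions:   (N, 3) device positions in the world frame, in metres
    quaternions: (N, 4) unit quaternions (x, y, z, w) giving device orientation
    """
    dt = np.diff(timestamps)
    # Linear speed: displacement between consecutive pose samples over elapsed time.
    linear = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    # Angular speed: rotation angle between consecutive orientations, recovered
    # from the quaternion dot product (angle = 2 * arccos(|q1 . q2|)).
    dots = np.abs(np.sum(quaternions[:-1] * quaternions[1:], axis=1))
    angular = 2.0 * np.arccos(np.clip(dots, 0.0, 1.0)) / dt
    return linear, angular

Aggregating such speeds over many users' sessions would indicate where device motion is most dynamic, and hence where a low-contrast texture might no longer support reliable pose tracking; how MoMAR actually combines and visualizes this data is described in the paper itself.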


Published In

cover image Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies  Volume 8, Issue 3
September 2024
1782 pages
EISSN:2474-9567
DOI:10.1145/3695755
Issue’s Table of Contents
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 09 September 2024
Published in IMWUT Volume 8, Issue 3


Author Tags

  1. Augmented reality
  2. VI-SLAM
  3. environment texture
  4. pose tracking
  5. visual perception

Qualifiers

  • Research-article
  • Research
  • Refereed
