
Multi-Camera Lighting Estimation for Mobile Augmented Reality

Published: 22 October 2024

Abstract

Lighting estimation is an important and long-standing task in the computer vision and graphics communities [1]. Over the past few years, we have observed a rising demand for immersive user experiences in mobile augmented reality (AR) applications. In particular, environment lighting estimation is essential for rendering visually coherent virtual objects overlaid on physical scenes in mobile AR. In this work, we explore the key research questions in lighting understanding for an emerging application domain, mobile AR on multi-camera handheld devices, particularly try-on applications, in which end users leverage handheld mobile devices such as smartphones to overlay products of interest on their faces. Mobile AR try-on apps promote online shopping and often require photorealism to provide user experiences on par with physical try-on.

References

[1]
M. Garon, K. Sunkavalli, S. Hadap, N. Carr, and J.-F. Lalonde. 2019. Fast spatially-varying indoor lighting estimation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA.
[2]
Y. Zhao and T. Guo. 2021. Xihe: A 3D vision-based lighting estimation framework for mobile augmented reality. Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services (Virtual Event).
[3]
P. Debevec. 2006. Image-based lighting. ACM SIGGRAPH 2006 Courses.
[4]
P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter. 2009. A 3D face model for pose and illumination invariant face recognition. Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, Genoa.
[5]
G. Somanath and D. Kurz. 2021. HDR environment map estimation for real-time augmented reality. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (Virtual Event).
[6]
M.-A. Gardner, K. Sunkavalli, E. Yumer, X. Shen, E. Gambaretto, C. Gagne, and J.-F. Lalonde. 2017. Learning to predict indoor illumination from a single image. ACM Transactions on Graphics (Proc. SIGGRAPH Asia).
[7]
Y. Zhao, C. Ma, H. Huang, and T. Guo. 2022. LITAR: Visually coherent lighting for mobile augmented reality. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
[8]
W.-S. Lai, Y. Shih, L.-C. Chu, X. Wu, S.-F. Tsai, M. Krainin, D. Sun, and C.-K. Liang. 2022. Face deblurring using dual camera fusion on mobile phones. ACM Transactions on Graphics.
[9]
I. Grishchenko, A. Ablavatski, Y. Kartynnik, K. Raveendran, and M. Grundmann. 2020. Attention mesh: High-fidelity face mesh prediction in real-time. arXiv preprint arXiv:2006.10962.


Published In

GetMobile: Mobile Computing and Communications, Volume 28, Issue 3
September 2024
35 pages
EISSN: 2375-0537
DOI: 10.1145/3701701
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published in SIGMOBILE-GETMOBILE Volume 28, Issue 3


Qualifiers

  • Research-article

