Abstract
We present neural compositing, a deep-learning-based method for augmented reality rendering that uses convolutional neural networks to composite rendered layers of a virtual object with a real photograph, emulating shadow and reflection effects. The method first estimates lighting and roughness information from the photograph using neural networks. It then renders the virtual object, together with a virtual floor, into color, shadow, and reflection layers under the estimated lighting. Finally, it refines the reflection and shadow layers with neural networks and blends them with the color layer and the input image to yield the output image. We assume low-frequency lighting environments and adopt precomputed radiance transfer (PRT) for layer rendering, which makes the whole pipeline differentiable and enables fast end-to-end network training on synthetic scenes. Given a single photograph, our method produces realistic reflections in a real scene with spatially varying material, and casts shadows onto background objects of unknown geometry and material, at real-time frame rates.
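The abstract does not spell out the rendering and blending equations; the sketch below (Python/NumPy) only illustrates the two steps it names, under common assumptions: diffuse PRT shading as a dot product between precomputed transfer vectors and spherical-harmonics lighting coefficients, and a composite that applies multiplicative shadow attenuation and additive reflection before over-compositing the object's color layer. The function names, layer shapes, and exact formulas here are hypothetical, not the paper's own.

```python
import numpy as np

def prt_diffuse_shade(transfer, lighting):
    """Diffuse PRT shading: per-vertex radiance is a dot product of the
    precomputed transfer vector with spherical-harmonics (SH) lighting
    coefficients, which keeps the shading step differentiable.

    transfer: (N, K) per-vertex transfer vectors, e.g. K = 9 for 3-band SH
    lighting: (K, 3) RGB SH coefficients of the estimated environment light
    returns:  (N, 3) vertex radiance
    """
    return np.clip(transfer @ lighting, 0.0, None)

def composite(photo, color, alpha, shadow, reflection):
    """Blend rendered layers with the input photograph (illustrative only).

    photo, color, reflection: (H, W, 3) linear RGB in [0, 1]
    alpha:  (H, W) coverage mask of the virtual object
    shadow: (H, W) attenuation map (1 = unshadowed)
    """
    a = alpha[..., None]
    s = shadow[..., None]
    # Darken the background where the virtual object casts shadow,
    # then add the (network-refined) reflection of the object.
    background = np.clip(photo * s + reflection, 0.0, 1.0)
    # Standard "over" compositing of the object's color layer on top.
    return a * color + (1.0 - a) * background
```

In this sketch the shadow and reflection arrays stand in for the network-refined layers described in the abstract; the paper's actual blending may differ.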
Cite this article
Ma, S., Shen, Q., Hou, Q. et al. Neural compositing for real-time augmented reality rendering in low-frequency lighting environments. Sci. China Inf. Sci. 64, 122101 (2021). https://doi.org/10.1007/s11432-020-3024-5