
Environment Estimation for Glossy Reflections in Mixed Reality Applications Using a Neural Network

  • Chapter
  • First Online:
Transactions on Computational Science XXXVI

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12060)

Abstract

Environment textures are used to illuminate virtual objects within a virtual scene and are crucial for high-quality lighting and reflections. In an augmented reality context, plausible lighting is essential for seamlessly embedding a virtual object into the real-world scene, which requires capturing the illumination of the environment from the current light information. In this paper, we present a novel approach that stitches the current camera image onto a cube map. This cube map is extended in every frame and fed into a neural network that estimates the missing parts. Finally, the output of the neural network is fused with the currently stitched information, making even mirror-like reflections possible on mobile devices. We provide an image-stream stitching approach combined with a neural network to create plausible, high-quality environment textures that can be used for image-based lighting within mixed reality environments.
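As a rough illustration of the per-frame pipeline sketched in the abstract — a minimal sketch, not the authors' implementation — the following Python snippet maintains a cube map together with a coverage mask, stitches the observed texels of the current (already reprojected) camera frame into it, and fuses the observation with the prediction of an inpainting network. All names here (`FACE_RES`, `stitch_frame`, `fuse_with_estimate`, the `network` callable) are hypothetical; in particular, reprojecting the camera image into cube-map space and the trained estimator itself are assumed to exist.

```python
import numpy as np

FACE_RES = 256  # resolution of each cube-map face (assumption)
cube_map = np.zeros((6, FACE_RES, FACE_RES, 3), dtype=np.float32)
coverage = np.zeros((6, FACE_RES, FACE_RES), dtype=bool)  # which texels were observed so far


def stitch_frame(frame_rgb, frame_mask):
    """Accumulate the texels of the current camera frame into the cube map.

    frame_rgb:  (6, FACE_RES, FACE_RES, 3) colours of the current image,
                assumed already reprojected into cube-map space by the tracker.
    frame_mask: (6, FACE_RES, FACE_RES) bool, True where the frame covers the cube map.
    """
    cube_map[frame_mask] = frame_rgb[frame_mask]
    coverage[frame_mask] = True


def fuse_with_estimate(network):
    """Fuse the stitched observation with a neural estimate of the missing parts.

    network: hypothetical trained estimator that predicts a full cube map
             from the partially stitched one and its coverage mask.
    """
    estimate = network(cube_map, coverage)
    # Keep observed texels, fall back to the network prediction everywhere else.
    return np.where(coverage[..., None], cube_map, estimate)


# Example: stub the network with a constant-colour fallback until a trained model is plugged in.
fused = fuse_with_estimate(lambda cm, cov: np.full_like(cm, 0.5))
```

The fused cube map would then be filtered and used as an environment texture for image-based lighting; the exact filtering and the network architecture are described in the paper itself and are not reproduced here.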



Acknowledgment

The research underlying these results has been partially funded by the Free State of Thuringia under grant number 2015 FE 9108 and co-financed by the European Union as part of the European Regional Development Fund (ERDF).

Author information


Corresponding author

Correspondence to Tobias Schwandt.



Copyright information

© 2020 Springer-Verlag GmbH Germany, part of Springer Nature

About this chapter


Cite this chapter

Schwandt, T., Kunert, C., Broll, W. (2020). Environment Estimation for Glossy Reflections in Mixed Reality Applications Using a Neural Network. In: Gavrilova, M., Tan, C., Sourin, A. (eds) Transactions on Computational Science XXXVI. Lecture Notes in Computer Science, vol 12060. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-61364-1_2


  • DOI: https://doi.org/10.1007/978-3-662-61364-1_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-61363-4

  • Online ISBN: 978-3-662-61364-1

  • eBook Packages: Computer Science, Computer Science (R0)
