
Restoration of Lighting Parameters in Mixed Reality Systems Using Convolutional Neural Network Technology Based on RGBD Images

Published in Programming and Computer Software

Abstract

One of the main problems of mixed reality devices is the lack of universal methods and algorithms for visualizing virtual objects in real space. The key to natural perception of virtual objects in the real world is the creation of natural lighting conditions for them by light sources located in the real world, i.e., the formation of natural glares on virtual objects and of shadows cast by these objects into the real world. This paper proposes a method for adequately determining the positions of the main real-world light sources in mixed reality systems. Modern technologies that combine 2.5D images produced by depth cameras with subsequent neural network processing make it possible to identify real-world objects, recognize their shadows, and correctly restore the light sources that cast these shadows. The results of the proposed method are presented, the accuracy of restoring the light source positions is estimated, and the visual difference between an image of the scene rendered with the original light sources and the same scene rendered with the restored light source parameters is demonstrated.
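The final step summarized above — restoring a light source from a detected shadow — can be illustrated with a generic geometric sketch. This is not the paper's actual CNN-based method; it only shows the underlying geometry: each pair of a point on an occluder and the corresponding point of its cast shadow defines a ray from the shadow point through the occluder point toward the light, and a point-light position can be recovered as the least-squares intersection of those rays. All names and the synthetic scene below are illustrative assumptions.

```python
import numpy as np

def estimate_point_light(occluder_pts, shadow_pts):
    """Least-squares intersection of the rays shadow -> occluder,
    all of which (ideally) pass through the point light.

    occluder_pts, shadow_pts: (N, 3) arrays of corresponding 3D points.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, s in zip(occluder_pts, shadow_pts):
        d = (p - s) / np.linalg.norm(p - s)   # unit ray direction toward the light
        M = np.eye(3) - np.outer(d, d)        # projector orthogonal to the ray
        A += M
        b += M @ s
    # Solve sum_i M_i x = sum_i M_i s_i for the point closest to all rays.
    return np.linalg.solve(A, b)

# Synthetic check: place a light, cast shadows of three occluder points
# onto the ground plane z = 0, then recover the light from the shadows.
L = np.array([2.0, 5.0, 4.0])
P = np.array([[0.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [-1.0, 2.0, 1.5]])
t = L[2] / (L[2] - P[:, 2])          # ray parameter where z reaches 0
S = L + t[:, None] * (P - L)         # shadow points on the ground plane
print(estimate_point_light(P, S))    # recovers approximately [2. 5. 4.]
```

With noise-free correspondences the recovery is exact; with CNN-detected shadow masks, as in the paper, the same least-squares formulation averages out per-pixel detection noise.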



Funding

This work was supported by the Russian Science Foundation, project no. 18-79-10190.

Author information

Corresponding authors

Correspondence to M. I. Sorokin, D. D. Zhdanov, A. D. Zhdanov, I. S. Potemin or N. N. Bogdanov.

Additional information

Translated by A. Klimontovich


About this article


Cite this article

Sorokin, M.I., Zhdanov, D.D., Zhdanov, A.D. et al. Restoration of Lighting Parameters in Mixed Reality Systems Using Convolutional Neural Network Technology Based on RGBD Images. Program Comput Soft 46, 207–216 (2020). https://doi.org/10.1134/S0361768820030093
