
Light field variational estimation using a light field formation model

  • Original Article
  • Published in: The Visual Computer

Abstract

Many light field acquisition methods rely on special optical devices, such as microlens arrays and coded apertures. However, these methods produce light fields whose resolution is limited by the optical device employed. In this paper, we propose an alternative method that acquires light fields at resolutions the user can select. The method relies on the variational estimation of a light field from a single image and a depth map captured with a digital still camera. The light field is obtained by solving an inverse problem derived from a light field formation model based on optical geometry and light ray radiometry. The effectiveness of this estimation method is demonstrated on synthetic and real data. The experimental results show that a light field can be estimated without special optical devices. In addition, realistic images can be rendered by using the estimated light fields in image formation.
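The abstract outlines the approach only at a high level: warp a single image into a grid of views using depth-derived disparities, then refine each view by solving an inverse problem. Since the paper's actual formation model and energy functional are not reproduced on this page, the following is a minimal sketch under assumed choices (disparity inversely proportional to depth, a quadratic data term with Tikhonov smoothness, plain gradient descent); all function names and parameters are hypothetical, not the authors' method.

```python
import numpy as np

def estimate_light_field(image, depth, n_views=3,
                         disparity_scale=1.0, lam=0.1,
                         n_iters=50, step=0.2):
    """Sketch of variational light field estimation from one image + depth.

    Builds an (n_views x n_views) grid of sub-aperture views by warping
    the input image with depth-derived disparities, then refines each
    view by gradient descent on the quadratic energy
        E(L) = ||L - warped||^2 + lam * ||grad L||^2.
    """
    h, w = image.shape
    offsets = np.arange(n_views) - n_views // 2
    lf = np.zeros((n_views, n_views, h, w))
    ys, xs = np.mgrid[0:h, 0:w]
    for i, u in enumerate(offsets):
        for j, v in enumerate(offsets):
            # Assumed model: disparity inversely proportional to depth.
            dx = disparity_scale * u / depth
            dy = disparity_scale * v / depth
            sx = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
            sy = np.clip(np.round(ys + dy).astype(int), 0, h - 1)
            warped = image[sy, sx]        # initial guess for this view
            L = warped.copy()
            for _ in range(n_iters):
                # Discrete Laplacian (periodic boundary via np.roll).
                lap = (np.roll(L, 1, 0) + np.roll(L, -1, 0)
                       + np.roll(L, 1, 1) + np.roll(L, -1, 1) - 4 * L)
                # Gradient descent step on the data + smoothness terms.
                L += step * ((warped - L) + lam * lap)
            lf[i, j] = L
    return lf
```

The view count (`n_views`) plays the role of the user-selected angular resolution mentioned in the abstract; the spatial resolution is inherited from the input image.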




Author information

Corresponding author

Correspondence to Julien Couillaud.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 808 KB)


About this article


Cite this article

Couillaud, J., Ziou, D. Light field variational estimation using a light field formation model. Vis Comput 36, 237–251 (2020). https://doi.org/10.1007/s00371-018-1599-2
