Abstract
Many light field acquisition methods have been developed using special optical devices, such as microlens arrays and coded apertures. However, these methods produce light fields whose resolution is limited by the optical device employed. In this paper, we propose an alternative method that acquires light fields at a user-selectable resolution. The method relies on the variational estimation of a light field from a single image and a depth map captured with a digital still camera. The light field is obtained by solving an inverse problem built from a light field formation model based on optical geometry and light ray radiometry. The effectiveness of the estimation method is demonstrated on synthetic and real data. The experimental results show that a light field can be estimated without special optical devices. In addition, realistic images can be recreated by using the estimated light fields in image formation.
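The forward direction of such a formation model can be illustrated by depth-dependent warping: a single image and its depth map determine how each pixel shifts across the sub-aperture views of a light field. The sketch below is a minimal, simplified pinhole version of this idea; the function `synthesize_view` and the parameters `d_focus` (focal depth) and `k` (shift scale) are illustrative assumptions, not the paper's actual formulation or inverse-problem solver.

```python
import numpy as np

def synthesize_view(image, depth, du, dv, d_focus=1.0, k=2.0):
    """Warp a single image into one sub-aperture view of a light field.

    Each pixel is shifted in proportion to the angular offset (du, dv)
    and to its deviation from the focal depth — a simplified, pinhole
    illustration of a light field formation model (not the paper's model).
    """
    h, w = depth.shape
    # Disparity: points at the focal depth d_focus do not move;
    # nearer or farther points shift with the viewpoint offset.
    disp = k * (1.0 / depth - 1.0 / d_focus)
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + du * disp).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + dv * disp).astype(int), 0, h - 1)
    return image[src_y, src_x]

# Toy check: a scene lying entirely at the focal depth is left unchanged,
# whatever the viewpoint offset.
img = np.random.rand(8, 8)
flat_depth = np.ones((8, 8))  # everything at d_focus = 1.0
view = synthesize_view(img, flat_depth, du=3.0, dv=-2.0)
```

The paper's estimation runs in the opposite direction: given the image and depth map, the light field is recovered variationally as the solution of the inverse problem induced by such a formation model.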
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Couillaud, J., Ziou, D. Light field variational estimation using a light field formation model. Vis Comput 36, 237–251 (2020). https://doi.org/10.1007/s00371-018-1599-2