Radiometric model for plenoptic image formation

  • Original Article
  • Published in The Visual Computer

Abstract

This paper presents a general plenoptic image formation model based on light ray radiometry. Most existing plenoptic image formation models focus on the projection geometry inside a camera equipped with a specific optical device, such as a microlens array, while the radiometry of the light rays is treated incompletely or neglected altogether. This study describes a general view of light ray radiometry, which is then used to define a plenoptic image formation model. Because the model combines radiometry with light ray projection geometry, it accounts for phenomena such as light transport and light direction, and it describes several radiometric effects, including haze, defocus blur and natural vignetting. The genericity of the model is demonstrated by recovering existing non-plenoptic models as special cases, and the model is further assessed by simulating various phenomena, including vignetting and defocus blur, on both synthetic and real data.
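The model itself is not reproduced in the abstract, but two of the radiometric phenomena it mentions have well-known first-order descriptions: the classic cos⁴ natural-vignetting falloff (see e.g. Horn, Robot Vision) and the thin-lens defocus blur circle. A minimal sketch of these standard formulas follows; the function names are illustrative and not taken from the paper:

```python
import math

def natural_vignetting(theta):
    """Classic cos^4 irradiance falloff for an off-axis ray at angle
    theta (radians) from the optical axis, relative to the on-axis value."""
    return math.cos(theta) ** 4

def blur_circle_diameter(f, aperture, d_focus, d_obj):
    """Thin-lens blur-circle diameter on the sensor for a point at distance
    d_obj, with the lens of focal length f focused at d_focus and an
    aperture of the given diameter (all lengths in the same unit)."""
    # Image-side distances from the thin-lens equation 1/f = 1/d + 1/v
    v_focus = 1.0 / (1.0 / f - 1.0 / d_focus)  # sensor plane position
    v_obj = 1.0 / (1.0 / f - 1.0 / d_obj)      # plane where d_obj focuses
    # Similar triangles through the aperture give the blur diameter
    return aperture * abs(v_focus - v_obj) / v_obj

# An in-focus point produces no blur; an out-of-focus one does.
print(blur_circle_diameter(0.05, 0.01, 2.0, 2.0))  # 0.0
print(blur_circle_diameter(0.05, 0.01, 2.0, 0.5) > 0)  # True
```

These are only the textbook special cases; the paper's contribution is a single radiometric model from which such non-plenoptic formulas can be retrieved.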



Author information

Corresponding author: Julien Couillaud.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 19372 KB)

About this article


Cite this article

Couillaud, J., Ziou, D. Radiometric model for plenoptic image formation. Vis Comput 37, 1369–1383 (2021). https://doi.org/10.1007/s00371-020-01871-z

