
Real-time simulation of accommodation and low-order aberrations of the human eye using light-gathering trees


Abstract

We present a real-time technique for simulating accommodation and low-order aberrations (e.g., myopia, hyperopia, and astigmatism) of the human eye. Our approach models the corresponding point spread function, producing realistic depth-dependent simulations. Real-time performance is achieved with a novel light-gathering tree data structure, which allows us to approximate the contributions of over 300 samples per pixel in under 6 ms per frame. For comparison, within the same time budget, an optimized ray tracer exploiting specialized hardware acceleration traces only two samples per pixel. We demonstrate the effectiveness of our approach through a series of qualitative and quantitative experiments on images with depth from real environments. Our results achieved SSIM values ranging from 0.94 to 0.99 and PSNR values ranging from 32.4 dB to 43.0 dB in objective evaluations, indicating good agreement with the ground truth.


References

  1. Barsky, B.A.: Vision-realistic rendering: simulation of the scanned foveal image from wavefront data of human subjects. In: Proceedings of APGV ’04, pp. 73–81. ACM (2004)

  2. Barsky, B.A., Bargteil, A.W., Garcia, D.D., Klein, S.A.: Introducing vision-realistic rendering. In: Proceedings of 13th Eurographics Workshop on Rendering, pp. 1–7 (2002)

  3. Barsky, B.A., Tobias, M.J., Horn, D.R., Chu, D.P.: Investigating occlusion and discretization problems in image-based blurring techniques. In: Proceedings of Vision, Video, and Graphics, VVG 2003, pp. 97–102 (2003)

  4. Bedggood, P., Daaboul, M., Ashman, R.A., Smith, G.G., Metha, A.: Characteristics of the human isoplanatic patch and implications for adaptive optics retinal imaging. J. Biomed. Opt. 13(2), 024008 (2008)

  5. Cholewiak, S.A., Love, G.D., Srinivasan, P.P., Ng, R., Banks, M.S.: ChromaBlur: rendering chromatic eye aberration improves accommodation and realism. ACM Trans. Graph. 36(6), 210:1–210:12 (2017)

  6. Cook, R.L., Porter, T., Carpenter, L.: Distributed ray tracing. In: Proceedings of SIGGRAPH’84, pp. 137–145 (1984)

  7. Garcia, K.: Circular separable convolution depth of field. In: ACM SIGGRAPH 2017 Talks, pp. 16:1–16:2 (2017)

  8. Gilles, A., Gioia, P., Cozot, R., Morin, L.: Hybrid approach for fast occlusion processing in computer-generated hologram calculation. Appl. Opt. 55(20), 5459–5470 (2016)

  9. He, J., Burns, S., Marcos, S.: Monochromatic aberrations in the accommodated human eye. Vis. Res. 40(1), 41–48 (2000)

  10. Hillaire, S., Lecuyer, A., Cozot, R., Casiez, G.: Depth-of-field blur effects for first-person navigation in virtual environments. IEEE Comput. Graph. Appl. 28(6), 47–55 (2008)

  11. Kopf, J., Matzen, K., Alsisan, S., Quigley, O., Ge, F., Chong, Y., Patterson, J., Frahm, J.M., Wu, S., Yu, M., Zhang, P., He, Z., Vajda, P., Saraf, A., Cohen, M.: One shot 3D photography. ACM Trans. Graph. 39(4) (2020)

  12. Koulieris, G., Akşit, K., Stengel, M., Mantiuk, R., Mania, K., Richardt, C.: Near-eye display and tracking technologies for virtual and augmented reality. Comput. Graph. Forum 38(2), 493–519 (2019). https://doi.org/10.1111/cgf.13654

  13. Kraus, M., Strengert, M.: Depth-of-field rendering by pyramidal image processing. Comput. Graph. Forum 26, 645–654 (2007)

  14. Krueger, M.L., Oliveira, M.M., Kronbauer, A.L.: Personalized visual simulation and objective validation of low-order aberrations of the human eye. In: Proceedings of SIBGRAPI’16, pp. 64–71 (2016)

  15. Lee, S., Eisemann, E., Seidel, H.P.: Real-time lens blur effects and focus control. ACM Trans. Graph. 29, 65:1–65:7 (2010)

  16. Li, Z., Snavely, N.: MegaDepth: learning single-view depth prediction from internet photos. In: Computer Vision and Pattern Recognition (CVPR) (2018)

  17. Luis, A.: Complementary Huygens principle for geometrical and nongeometrical optics. Eur. J. Phys. 28, 231–240 (2007)

  18. Niemitalo, O.: Circularly symmetric convolution and lens blur. http://yehar.com/blog/?p=1495 (2011)

  19. Parker, S.G., Bigler, J., Dietrich, A., Friedrich, H., Hoberock, J., Luebke, D., McAllister, D., McGuire, M., Morley, K., Robison, A., Stich, M.: OptiX: a general purpose ray tracing engine. ACM Trans. Graph. 29(4), 66:1–66:13 (2010)

  20. Policarpo, F., Oliveira, M.M.: Relief mapping of non-height-field surface details. In: Proceedings 2006 Symposium on Interactive 3D Graphics and Games, I3D’06, pp. 55–62 (2006)

  21. Policarpo, F., Oliveira, M.M., Comba, J.A.L.D.: Real-time relief mapping on arbitrary polygonal surfaces. In: Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games, pp. 155–162 (2005)

  22. Polyanskiy, M.N.: Refractive index database. https://refractiveindex.info. Accessed 23 Jan 2019

  23. Potmesil, M., Chakravarty, I.: Synthetic image generation with a lens and aperture camera model. ACM Trans. Graph. 1(2), 85–108 (1982)

  24. Ranftl, R., Lasinger, K., Hafner, D., Schindler, K., Koltun, V.: Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. arXiv:1907.01341 (2019)

  25. Scharstein, D., Hirschmüller, H., Kitajima, Y., Krathwohl, G., Nešić, N., Wang, X., Westling, P.: High-resolution stereo datasets with subpixel-accurate ground truth. In: GCPR, LNCS, vol. 8753, pp. 31–42 (2014)

  26. Schedl, D., Wimmer, M.: A layered depth-of-field method for solving partial occlusion. J. WSCG 20(3), 239–246 (2012)

  27. Schwartz, S.H.: Visual Perception: A Clinical Orientation, 4th edn. McGraw-Hill Medical Pub. Division, New York (2010)

  28. Scofield, C.: 2½-D depth-of-field simulation for computer animation. In: Graphics Gems III, pp. 36–38. Morgan Kaufmann (1992)

  29. Shinya, M.: Post-filtering for depth of field simulation with ray distribution buffer. In: Proceedings Graphics Interface ’94, pp. 59–66 (1994)

  30. Thibos, L.N., Applegate, R.A., Schwiegerling, J.T., Webb, R.: Standards for reporting the optical aberrations of eyes. J. Refract. Surg. 18(5), S652–S660 (2002)

  31. Xiao, L., Kaplanyan, A., Fix, A., Chapman, M., Lanman, D.: DeepFocus: learned image synthesis for computational display. In: ACM SIGGRAPH 2018 Talks, pp. 4:1–4:2 (2018)

Funding

This work was funded by CNPq-Brazil (fellowships and Grants 312975/2018-0, 423673/2016-5 and 131288/2016-4), and CAPES Finance Code 001.

Author information

Corresponding author

Correspondence to Manuel M. Oliveira.

Ethics declarations

Conflict of interest

The authors declare they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Optical power and image adjustments

Given the distance v from the camera lens to the external lens, known as the vertex distance, the resulting anisotropic magnification due to astigmatism is given by

$$\begin{aligned} M_{\varphi } = \frac{1}{1 - v\,S}\quad \text {and}\quad M_{\varphi ^{\bot }} = \frac{1}{1 - v\,S_{pC}} , \end{aligned}$$
(5)

where \(M_{\varphi }\) and \(M_{\varphi ^{\bot }}\) are, respectively, the magnification factors along the directions that make angles \(\varphi \) and \(\varphi + 90^{\circ }\) with the horizontal axis, \(S\) is the spherical power, and \(S_{pC}\) is the combined spherical-plus-cylindrical power. In the absence of astigmatism, the magnification is isotropic, with \({M_{\varphi } = M_{\varphi ^{\bot }}}\). The effective optical powers are then obtained as \(S'_{\varphi } = S\,M_{\varphi }\) and, analogously, \(S'_{\varphi ^{\bot }} = S_{pC}\,M_{\varphi ^{\bot }}\).
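
For illustration, the following minimal Python sketch evaluates Eq. (5) and the resulting effective powers. It is our own rendering of the formulas, not the paper's implementation; it assumes \(S\) and \(S_{pC}\) (taken here as sphere plus cylinder) are given in diopters and v in meters.

```python
# Minimal sketch of the vertex-distance correction of Eq. (5).
# Assumptions (ours, not the paper's code): S and S_pC are powers in
# diopters, with S_pC the sphere-plus-cylinder power; v is in meters.

def effective_powers(S, S_pC, v):
    """Anisotropic magnifications and effective powers for vertex distance v."""
    M_phi = 1.0 / (1.0 - v * S)           # magnification along phi
    M_phi_perp = 1.0 / (1.0 - v * S_pC)   # magnification along phi + 90 degrees
    return M_phi, M_phi_perp, S * M_phi, S_pC * M_phi_perp

# Example: S = -2.00 D sphere with a -1.00 D cylinder (S_pC = -3.00 D),
# external lens held 12 mm in front of the camera lens.
M1, M2, S_eff, S_pC_eff = effective_powers(-2.0, -3.0, 0.012)
```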

Image magnification may introduce incorrect values due to interpolation. Thus, when comparing our results to the ground truth, rather than magnifying the smaller image to match the larger one, we downscale the larger image to match the smaller, as sketched below. One should note, however, that magnification is a function of the vertex distance and vanishes when \(v=0\). Thus, magnification and its compensation have only been used for the sake of the validation experiment that uses an external lens; they are not required by the LGT technique itself.
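
A minimal sketch of this comparison protocol, assuming OpenCV (cv2) images; the function name and selection logic are our illustration, not code from the paper:

```python
import cv2  # OpenCV

def match_for_comparison(img_a, img_b):
    """Downscale the larger of the two images so both have equal resolution."""
    (ha, wa), (hb, wb) = img_a.shape[:2], img_b.shape[:2]
    if ha * wa > hb * wb:
        # INTER_AREA is the usual choice for downscaling
        img_a = cv2.resize(img_a, (wb, hb), interpolation=cv2.INTER_AREA)
    else:
        img_b = cv2.resize(img_b, (wa, ha), interpolation=cv2.INTER_AREA)
    return img_a, img_b
```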

Brightness adjustment is required to compensate for the light reflected or absorbed by the external lens, which therefore never reaches the sensor; images captured with the extra lens tend to be darker than those captured without it. To perform the adjustment, for each external lens we take a small white patch from the same area in images captured with and without the lens. The ratio between the average intensities of the darker and brighter patches is then used to modulate the brightness of the images simulated with our technique, making them exhibit brightness similar to the ground-truth images. This is important when performing quantitative comparisons using metrics such as SSIM and PSNR (Table 2).
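
A hedged sketch of this adjustment, assuming 8-bit images and pre-extracted white patches (the helper names are hypothetical):

```python
import numpy as np

def brightness_ratio(white_patch_with_lens, white_patch_without_lens):
    """Ratio between the darker (with-lens) and brighter (no-lens) patch means."""
    return white_patch_with_lens.mean() / white_patch_without_lens.mean()

def match_brightness(simulated, ratio):
    """Darken the simulated 8-bit image so it matches the with-lens capture."""
    out = simulated.astype(np.float32) * ratio
    return np.clip(out, 0, 255).astype(np.uint8)
```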

Chromatic aberration due to the external lens is given by

$$\begin{aligned} S''_{\varphi c} = \frac{S'_{\varphi }(\mu _c - 1)}{\mu _y - 1} \quad \text {and}\quad S''_{\varphi ^{\bot } c} = \frac{S'_{\varphi ^{\bot }}(\mu _c - 1)}{\mu _y - 1}, \end{aligned}$$

where \(S''_{\varphi {c}}\) and \(S''_{\varphi ^{\bot } c}\) are the resulting aberrated powers (in diopters) for wavelength \(\lambda _c\), \(S'_{\varphi }\) and \(S'_{\varphi ^{\bot }}\) are the effective optical powers due to the vertex distance v, \(\mu _c\) is the lens refractive index for wavelength \(\lambda _c\), and \(\mu _y=1.5085\) is the reference refractive index, which usually lies in the yellow region of the spectrum. For our experiments, we used the following indices of refraction for red, green, and blue, respectively: \(\mu _r = 1.4998\), \(\mu _g = 1.5085\), and \(\mu _b = 1.5152\), which were obtained from an online refractive index database [22] and correspond to the wavelengths \(\lambda _r=700\) nm, \(\lambda _g=510\) nm, and \(\lambda _b=440\) nm [14].
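
The per-channel computation reduces to scaling each effective power by a ratio of refractive indices; a minimal Python sketch using the constants from the text (our illustration, not the paper's code):

```python
# Per-channel powers for the chromatic-aberration formula above.
# Constants follow the text: reference index mu_y = 1.5085 and the
# red/green/blue indices from [22] at 700/510/440 nm [14].

MU_Y = 1.5085
MU = {"r": 1.4998, "g": 1.5085, "b": 1.5152}

def aberrated_power(S_eff, channel):
    """Aberrated power (diopters) for one color channel, given the
    effective power S_eff (S'_phi or S'_phi_perp)."""
    return S_eff * (MU[channel] - 1.0) / (MU_Y - 1.0)

# Example: the green channel reproduces S_eff exactly (mu_g == mu_y).
powers = {c: aberrated_power(-3.0, c) for c in "rgb"}
```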


Cite this article

Lima, A.R.C., Medeiros, A.M., Marques, V.G. et al. Real-time simulation of accommodation and low-order aberrations of the human eye using light-gathering trees. Vis Comput 37, 2581–2593 (2021). https://doi.org/10.1007/s00371-021-02194-3
