Abstract
Typical light-field rendering uses a single focal plane to define the depth at which objects should appear sharp, emulating the behavior of classical cameras. However, plenoptic cameras together with advanced light-field rendering enable depth-of-field effects that go far beyond the capabilities of conventional imaging. We present a generalized depth-of-field light-field rendering method that allows arbitrarily shaped objects to be entirely in focus while the surrounding foreground and background are consistently rendered out of focus, based on user-defined focal plane and aperture settings. Our approach generates soft occlusion boundaries with a natural appearance, which is not possible with existing techniques. Furthermore, it does not rely on dense depth estimation and thus allows presenting complex scenes with non-physical visual effects.
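The single-focal-plane rendering that the paper generalizes is classical synthetic-aperture refocusing: each sub-aperture view of the light field is shifted proportionally to its offset from the optical center and the shifted views are averaged, with the shift factor selecting the focal plane and the set of contributing views emulating the aperture. As background only, a minimal shift-and-add sketch (the function name `refocus`, the 4D single-channel layout, and the integer-pixel shifts are illustrative assumptions, not the paper's method):

```python
import numpy as np

def refocus(light_field, alpha, aperture_radius=None):
    """Classical synthetic-aperture refocusing by shift-and-add.

    light_field: array of shape (U, V, H, W) holding sub-aperture
    images L[u, v]. alpha scales the per-view shift and thereby
    selects the focal-plane depth (alpha = 0 keeps the original
    plane of focus). aperture_radius limits which views contribute,
    emulating the aperture setting.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0  # optical center of the view grid
    acc = np.zeros((H, W), dtype=np.float64)
    n = 0
    for u in range(U):
        for v in range(V):
            du, dv = u - cu, v - cv
            if aperture_radius is not None and du * du + dv * dv > aperture_radius ** 2:
                continue  # view lies outside the synthetic aperture
            # shift each view proportionally to its offset from the center
            shifted = np.roll(light_field[u, v],
                              (int(round(alpha * du)), int(round(alpha * dv))),
                              axis=(0, 1))
            acc += shifted
            n += 1
    return acc / max(n, 1)
```

Scene points on the plane selected by `alpha` align across all shifted views and stay sharp, while points off that plane are averaged over displaced copies and blur. The paper's contribution is to replace this single global focal plane with arbitrarily shaped in-focus regions.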
Electronic supplementary material
Supplementary material 1 (mp4, 34922 KB)
Copyright information
© 2016 Springer International Publishing AG
Cite this paper
Schedl, D.C., Birklbauer, C., Gschnaller, J., Bimber, O. (2016). Generalized Depth-of-Field Light-Field Rendering. In: Chmielewski, L., Datta, A., Kozera, R., Wojciechowski, K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science(), vol 9972. Springer, Cham. https://doi.org/10.1007/978-3-319-46418-3_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-46417-6
Online ISBN: 978-3-319-46418-3
eBook Packages: Computer Science (R0)