
User-assisted image compositing for photographic lighting

Published: 21 July 2013

Abstract

Good lighting is crucial in photography and can make the difference between a great picture and a discarded image. Traditionally, professional photographers work in a studio with many light sources carefully set up, with the goal of getting a near-final image at exposure time, with post-processing mostly focusing on aspects orthogonal to lighting. Recently, a new workflow has emerged for architectural and commercial photography, where photographers capture several photos from a fixed viewpoint with a moving light source. The objective is not to produce the final result immediately, but rather to capture useful data that are later processed, often significantly, in photo editing software to create the final well-lit image.
This new workflow is flexible, requires less manual setup, and works well for time-constrained shots. But dealing with several tens of unorganized layers is painstaking, requiring hours to days of manual effort, as well as advanced photo editing skills. Our objective in this paper is to make the compositing step easier. We describe a set of optimizations to assemble the input images to create a few basis lights that correspond to common goals pursued by photographers, e.g., accentuating edges and curved regions. We also introduce modifiers that capture standard photographic tasks, e.g., to alter the lights to soften highlights and shadows, akin to umbrellas and soft boxes. Our experiments with novice and professional users show that our approach allows them to quickly create satisfying results, whereas working with unorganized images requires considerably more time. Casual users particularly benefit from our approach since coping with a large number of layers is daunting for them and requires significant experience.
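
The capture model behind this workflow is simple: with a fixed camera and a static scene, illumination adds linearly, so any target lighting can be expressed as a non-negative weighted sum of the single-light photos. The sketch below is a minimal illustration of that compositing step only, not the paper's optimizations or modifiers; the folder layout, the imageio-based I/O, and the even starting weights are assumptions made for the example.

    import glob
    import numpy as np
    import imageio.v3 as iio

    # Load a multi-light stack: photos shot from a fixed viewpoint, each lit
    # from a different position. Folder layout and file names are assumptions.
    paths = sorted(glob.glob("stack/*.png"))
    stack = np.stack([iio.imread(p).astype(np.float64) / 255.0 for p in paths])

    # With a static scene and fixed camera, light adds linearly (assuming the
    # images are linear, e.g. developed from RAW without a tone curve), so any
    # target lighting is a non-negative weighted sum of the single-light shots.
    # The even weights below are a placeholder; the paper instead solves small
    # optimizations that pick weights realizing goals such as edge lighting.
    weights = np.full(len(paths), 1.0 / len(paths))

    composite = np.clip(np.tensordot(weights, stack, axes=1), 0.0, 1.0)
    iio.imwrite("composite.png", (composite * 255.0).astype(np.uint8))

Keeping the weights non-negative is what keeps the composite physically plausible: a negative weight would amount to subtracting light, which no real fixture can do.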

Supplementary Material

ZIP File (a36-boyadzhiev.zip)
Supplemental material.
MP4 File (tp043.mp4)




Published In

ACM Transactions on Graphics, Volume 32, Issue 4
July 2013, 1215 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/2461912
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 21 July 2013
Published in TOG Volume 32, Issue 4


Author Tags

  1. light compositing
  2. lighting design

Qualifiers

  • Research-article


Cited By

  • (2023) Targeting Shape and Material in Lighting Design. Computer Graphics Forum 41(7), 299-309. DOI: 10.1111/cgf.14678. Online publication date: 20-Mar-2023.
  • (2023) Measured Albedo in the Wild: Filling the Gap in Intrinsics Evaluation. 2023 IEEE International Conference on Computational Photography (ICCP), 1-12. DOI: 10.1109/ICCP56744.2023.10233761. Online publication date: 28-Jul-2023.
  • (2022) Photographic Lighting Design with Photographer-in-the-Loop Bayesian Optimization. Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 1-11. DOI: 10.1145/3526113.3545690. Online publication date: 29-Oct-2022.
  • (2022) Computer Vision Based Method for Shadow Detection. IEEE Sensors Letters 6(6), 1-4. DOI: 10.1109/LSENS.2022.3172967. Online publication date: Jun-2022.
  • (2022) Interactive lighting editing system for single indoor low-light scene images with corresponding depth maps. Visual Informatics 6(4), 90-99. DOI: 10.1016/j.visinf.2022.08.001. Online publication date: Dec-2022.
  • (2019) Optimizing Portrait Lighting at Capture-Time Using a 360 Camera as a Light Probe. Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, 221-232. DOI: 10.1145/3332165.3347893. Online publication date: 17-Oct-2019.
  • (2019) Exploratory Stage Lighting Design using Visual Objectives. Computer Graphics Forum 38(2), 417-429. DOI: 10.1111/cgf.13648. Online publication date: 7-Jun-2019.
  • (2019) A Dataset of Multi-Illumination Images in the Wild. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 4079-4088. DOI: 10.1109/ICCV.2019.00418. Online publication date: Oct-2019.
  • (2019) Learning to Separate Multiple Illuminants in a Single Image. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3775-3784. DOI: 10.1109/CVPR.2019.00390. Online publication date: Jun-2019.
  • (2018) Deep hybrid real and synthetic training for intrinsic decomposition. Proceedings of the Eurographics Symposium on Rendering: Experimental Ideas & Implementations, 53-63. DOI: 10.2312/sre.20181172. Online publication date: 1-Jul-2018.
