“User-assisted image compositing for photographic lighting” by Boyadzhiev, Paris and Bala

Title:

    User-assisted image compositing for photographic lighting

Session/Category Title:   Color & Compositing


Presenter(s)/Author(s):

    Ivaylo Boyadzhiev, Sylvain Paris, Kavita Bala


Abstract:


    Good lighting is crucial in photography and can make the difference between a great picture and a discarded image. Traditionally, professional photographers work in a studio with many light sources carefully set up, with the goal of getting a near-final image at exposure time; post-processing then focuses mostly on aspects orthogonal to lighting. Recently, a new workflow has emerged for architectural and commercial photography, where photographers capture several photos from a fixed viewpoint with a moving light source. The objective is not to produce the final result immediately, but rather to capture useful data that are later processed, often significantly, in photo editing software to create the final well-lit image.

    This new workflow is flexible, requires less manual setup, and works well for time-constrained shots. But dealing with several tens of unorganized layers is painstaking, requiring hours to days of manual effort as well as advanced photo editing skills. Our objective in this paper is to make the compositing step easier. We describe a set of optimizations that assemble the input images into a few basis lights corresponding to common goals pursued by photographers, e.g., accentuating edges and curved regions. We also introduce modifiers that capture standard photographic tasks, e.g., altering the lights to soften highlights and shadows, akin to umbrellas and soft boxes. Our experiments with novice and professional users show that our approach allows them to quickly create satisfying results, whereas working with unorganized images requires considerably more time. Casual users particularly benefit from our approach, since coping with a large number of layers is daunting for them and requires significant experience.
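
Illustrative compositing sketch:

    The additive nature of light underlies this workflow: because the frames share a fixed viewpoint and each is lit by a single source, any non-negative weighted sum of them is itself a physically plausible photograph of the scene. The short Python sketch below illustrates only this foundation and is not the paper's actual method; the function names (composite, fill_light, edge_weights) and the gradient-energy heuristic for an edge-accentuating basis are assumptions made for demonstration.

    # Minimal, illustrative sketch of linear multi-light compositing.
    # NOT the paper's optimization: the gradient-energy heuristic below is an
    # assumption for demonstration only.
    # Assumes `stack` is a list of HxWx3 float32 images in [0, 1], all shot from
    # a fixed viewpoint, each lit by one position of a hand-held light.
    import numpy as np

    def composite(stack, weights):
        # Lighting is additive: a non-negative weighted sum of single-light
        # frames is itself a plausible relit photograph.
        out = np.zeros_like(stack[0])
        for img, w in zip(stack, weights):
            out += np.float32(w) * img
        return np.clip(out, 0.0, 1.0)

    def fill_light(stack):
        # Soft "fill" basis: the per-pixel maximum over all light positions
        # roughly mimics a large, even source that lifts shadows everywhere.
        return np.maximum.reduce(stack)

    def edge_weights(stack):
        # Heuristic "edge" basis: weight frames by their gradient energy, so
        # grazing lights that accentuate edges and curvature dominate.
        energies = []
        for img in stack:
            gray = img.mean(axis=2)
            gy, gx = np.gradient(gray)
            energies.append(float(np.hypot(gx, gy).sum()))
        e = np.asarray(energies, dtype=np.float32)
        return e / e.sum()

    # Example: an edge-accentuating composite with a touch of fill light.
    # result = np.clip(composite(stack, edge_weights(stack))
    #                  + 0.2 * fill_light(stack), 0.0, 1.0)

    In the paper itself, the weights come from user-guided optimizations, and further modifiers (softening highlights and shadows, akin to umbrellas and soft boxes) act on the assembled basis lights; the additive model above is merely what makes the per-light captures composable.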


