“Real-time lens blur effects and focus control” by Lee, Eisemann and Seidel

  • ©Sungkil Lee, Elmar Eisemann, and Hans-Peter Seidel







    We present a novel rendering system for defocus blur and lens effects. It supports physically based rendering and outperforms previous approaches by introducing a novel GPU-based tracing method. Our solution achieves more precision than competing real-time solutions, and our results are mostly indistinguishable from offline rendering. Our method is also more general and can integrate advanced simulations, such as simple geometric lens models that enable various lens-aberration effects. The latter are crucial for realism, but are often employed in artistic contexts, too. We show that available artistic lenses can be simulated by our method. In this spirit, our work introduces an intuitive control over depth-of-field effects. The physical basis is a crucial starting point for new artistic renderings based on a generalized focal surface, which emphasizes particular elements in the scene while retaining a realistic look. Our real-time solution provides realistic as well as plausibly expressive results.
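    As background for the defocus-blur effects discussed above, the following sketch computes the circle-of-confusion diameter under the standard thin-lens camera model. This is not the paper's GPU tracing method or its generalized focal surface; it is only the classical formula that real-time depth-of-field techniques commonly start from, with hypothetical parameter names.

    ```python
    def coc_diameter(aperture_d, focal_len, focus_dist, obj_dist):
        """Circle-of-confusion diameter on the sensor for the thin-lens model.

        All arguments share one unit (e.g. meters):
          aperture_d -- aperture (entrance pupil) diameter
          focal_len  -- lens focal length
          focus_dist -- distance to the plane in perfect focus
          obj_dist   -- distance to the object being imaged
        Points on the focus plane yield zero blur; blur grows with the
        object's distance from that plane.
        """
        return (aperture_d * focal_len * abs(obj_dist - focus_dist)
                / (obj_dist * (focus_dist - focal_len)))

    # A 50 mm lens at f/2 (25 mm aperture), focused at 2 m:
    # an object at 4 m gets a noticeably blurred image,
    # while an object on the focus plane stays sharp.
    blurred = coc_diameter(0.025, 0.05, 2.0, 4.0)
    sharp = coc_diameter(0.025, 0.05, 2.0, 2.0)
    ```

    A renderer would map this diameter to a per-pixel blur radius; the paper's contribution lies in going beyond this single-plane model via tracing through a geometric lens.
    
    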


