“Next generation image based lighting using HDR video” by Unger, Gustavson, Kronander, Larsson, Bonnet, and Kaiser

  • © Jonas Unger, Stefan Gustavson, Joel Kronander, Per Larsson, Gerhard Bonnet, and Gunnar Kaiser

Title:

    Next generation image based lighting using HDR video

Presenter(s)/Author(s):

    Jonas Unger, Stefan Gustavson, Joel Kronander, Per Larsson, Gerhard Bonnet, and Gunnar Kaiser

Abstract:


    We present an overview of our recently developed system pipeline for the capture, reconstruction, modeling, and rendering of real-world scenes based on state-of-the-art high dynamic range video (HDRV). The reconstructed scene representation allows for photo-realistic image-based lighting (IBL) in complex environments with strong spatial variations in the illumination. The pipeline comprises the following essential steps (minimal code sketches illustrating each step follow below):

    1. Capture – Scene capture is based on a 4-MPixel global-shutter HDRV camera with a dynamic range of more than 24 f-stops at 30 fps. The HDR output stream is stored as individual uncompressed frames for maximum flexibility. A scene is usually captured using a combination of panoramic light-probe sequences [1] and sequences with a smaller field of view that maximize the resolution in regions of special interest. The panoramic sequences ensure full angular coverage at each position and guarantee that the information required for IBL is captured. The position and orientation of the camera are tracked during capture.

    2. Scene recovery – Taking one or more HDRV sequences as input, a geometric proxy model of the scene is built using a semi-automatic approach. First, traditional computer vision algorithms such as structure from motion [2] and Manhattan-world stereo [3] are applied. If necessary, the recovered model is then modified using an interaction scheme based on visualizations of a volumetric representation of the scene radiance computed from the input HDRV sequence. The HDR nature of this volume also enables robust extraction of direct light sources and other high-intensity regions in the scene.

    3. Radiance processing – Once the scene proxy geometry has been recovered, the radiance data captured in the HDRV sequences are re-projected onto the surfaces and the recovered light sources. Since most surface points have been imaged from a large number of directions, it is possible to reconstruct view-dependent texture maps on the proxy geometry. These 4D data sets describe a combination of detailed geometry that has not been recovered and the radiance reflected from the underlying real surfaces. The view-dependent textures are then processed and compactly stored in an adaptive data structure.

    4. Rendering – Once the geometric and radiometric scene information has been recovered, it is possible to place virtual objects into the real scene and create photo-realistic renderings. The extracted light sources enable efficient sampling, with rendering times fully comparable to those of traditional virtual computer graphics light sources.

    No previously described method is capable of capturing and reproducing the angular and spatial variation in the scene illumination in comparable detail. We believe that the rapid development of high-quality HDRV systems will soon have a large impact on both computer vision and graphics. Following this trend, we are developing theory and algorithms for efficient processing of HDRV sequences and for exploiting the abundance of radiance data that will soon be available.
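
    The "more than 24 f-stops" in step 1 corresponds to a linear contrast ratio of about 2^24, roughly 16.7 million to one. To make the use of a panoramic light-probe frame for IBL concrete, the sketch below maps panorama pixels to world-space directions and back. It assumes a latitude-longitude (equirectangular) layout with a y-up convention; the actual probe parameterization used in [1] and by the camera may differ, and all names here are illustrative.

        import numpy as np

        def pixel_to_direction(x, y, width, height):
            """Map a pixel in a latitude-longitude panorama to a unit
            world-space direction (y-up convention)."""
            phi = 2.0 * np.pi * (x + 0.5) / width    # azimuth in [0, 2*pi)
            theta = np.pi * (y + 0.5) / height       # inclination in [0, pi]
            return np.array([np.sin(theta) * np.cos(phi),
                             np.cos(theta),
                             np.sin(theta) * np.sin(phi)])

        def direction_to_pixel(d, width, height):
            """Inverse mapping: unit direction -> fractional pixel coordinates."""
            theta = np.arccos(np.clip(d[1], -1.0, 1.0))
            phi = np.arctan2(d[2], d[0]) % (2.0 * np.pi)
            return phi / (2.0 * np.pi) * width, theta / np.pi * height

        # Round trip for one pixel of a hypothetical 2048x1024 probe frame.
        d = pixel_to_direction(512, 256, 2048, 1024)
        print(d, direction_to_pixel(d, 2048, 1024))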
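
    Step 2 builds on structure from motion, whose core refinement stage is bundle adjustment [2]: camera poses and 3D points are adjusted jointly to minimize reprojection error. The sketch below assembles the residual vector such a solver would minimize; the pinhole model and toy data are our own simplification.

        import numpy as np

        def project(K, R, t, X):
            """Pinhole projection of world point X with rotation R,
            translation t, and intrinsics K; returns 2D image coordinates."""
            x_img = K @ (R @ X + t)
            return x_img[:2] / x_img[2]

        def reprojection_residuals(K, poses, points, observations):
            """Stack the 2D residuals that bundle adjustment minimizes.
            observations: list of (camera_index, point_index, observed_xy)."""
            res = []
            for cam_i, pt_i, xy in observations:
                R, t = poses[cam_i]
                res.append(project(K, R, t, points[pt_i]) - xy)
            return np.concatenate(res)

        # Toy setup: one camera at the origin, one point straight ahead.
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        poses = [(np.eye(3), np.zeros(3))]
        points = [np.array([0.0, 0.0, 4.0])]
        obs = [(0, 0, np.array([321.0, 240.5]))]  # slightly noisy measurement
        print(reprojection_residuals(K, poses, points, obs))

    In practice these residuals would be handed to a nonlinear least-squares solver such as scipy.optimize.least_squares, with rotations parameterized appropriately (e.g. as axis-angle vectors).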
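
    Step 2 also extracts direct light sources from the HDR data. A simple stand-in for that idea, assuming linear per-pixel radiance: threshold the luminance of an HDR frame and label the connected bright regions. The threshold and the per-region summary below are illustrative choices, not the pipeline's actual criteria.

        import numpy as np
        from scipy import ndimage

        def extract_bright_regions(hdr, threshold=1000.0):
            """Label connected regions whose luminance exceeds a threshold
            and summarize each as a candidate light source."""
            lum = hdr @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance
            labels, n = ndimage.label(lum > threshold)
            lights = []
            for i in range(1, n + 1):
                mask = labels == i
                ys, xs = np.nonzero(mask)
                lights.append({"centroid": (xs.mean(), ys.mean()),
                               "mean_radiance": hdr[mask].mean(axis=0),
                               "pixel_area": int(mask.sum())})
            return lights

        # Synthetic frame: dim background with one very bright 10x10 patch.
        frame = np.full((64, 64, 3), 0.5)
        frame[20:30, 40:50] = 5000.0
        print(extract_bright_regions(frame))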
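
    For the view-dependent texture maps of step 3, a simplified stand-in for the adaptive 4D structure described above is to blend, at each surface point, the stored radiance samples whose capture directions lie closest to the query view direction (in the spirit of unstructured lumigraph rendering). The sample count and weighting below are our assumptions.

        import numpy as np

        def view_dependent_radiance(view_dir, sample_dirs, sample_radiance, k=4):
            """Blend the k stored radiance samples whose capture directions
            are closest in angle to the query view direction."""
            v = view_dir / np.linalg.norm(view_dir)
            dirs = sample_dirs / np.linalg.norm(sample_dirs, axis=1, keepdims=True)
            cos_sim = dirs @ v
            idx = np.argsort(-cos_sim)[:k]           # k nearest directions
            w = np.clip(cos_sim[idx], 0.0, None)
            if w.sum() == 0.0:
                return sample_radiance[idx[0]]       # fall back to nearest
            return (w / w.sum()) @ sample_radiance[idx]

        # Surface point observed from 100 random upper-hemisphere directions.
        rng = np.random.default_rng(0)
        dirs = rng.normal(size=(100, 3))
        dirs[:, 2] = np.abs(dirs[:, 2])
        rad = rng.uniform(0.0, 5.0, size=(100, 3))   # fake RGB radiance samples
        print(view_dependent_radiance(np.array([0.0, 0.0, 1.0]), dirs, rad))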
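
    The efficient sampling mentioned in step 4 follows because an extracted light source can be sampled directly, like an ordinary virtual area light. The sketch below estimates unoccluded direct illumination from a rectangular emitter with the standard area-sampling Monte Carlo estimator; visibility tests and the surface BRDF are omitted for brevity, and the geometry is made up.

        import numpy as np

        def sample_rect_light(p, n, corner, edge_u, edge_v, emitted,
                              n_samples=256, seed=1):
            """Monte Carlo estimate of unoccluded direct irradiance at point p
            (unit normal n) from a rectangular emitter of constant radiance
            'emitted': average of L * cos(theta) * cos(theta') / d^2 times area."""
            rng = np.random.default_rng(seed)
            cross = np.cross(edge_u, edge_v)
            area = np.linalg.norm(cross)
            light_n = cross / area                    # emitter normal
            total = np.zeros(3)
            for _ in range(n_samples):
                u, v = rng.random(), rng.random()
                q = corner + u * edge_u + v * edge_v  # uniform point on emitter
                wi = q - p
                d2 = wi @ wi
                wi = wi / np.sqrt(d2)
                cos_p = max(wi @ n, 0.0)
                cos_q = max(-(wi @ light_n), 0.0)     # emitter faces the scene
                total += emitted * cos_p * cos_q / d2
            return total * area / n_samples

        # 1 m x 1 m ceiling panel two metres above a horizontal surface point;
        # the edge order makes the emitter normal point down toward the point.
        print(sample_rect_light(p=np.array([0.0, 0.0, 0.0]),
                                n=np.array([0.0, 0.0, 1.0]),
                                corner=np.array([-0.5, -0.5, 2.0]),
                                edge_u=np.array([0.0, 1.0, 0.0]),
                                edge_v=np.array([1.0, 0.0, 0.0]),
                                emitted=np.array([10.0, 10.0, 10.0])))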

References:


    1. P. Debevec: Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography. In Proceedings of SIGGRAPH 98, ACM, New York, NY, USA, 189–198, 1998.
    2. B. Triggs, P. McLauchlan, R. Hartley, and A. Fitzgibbon: Bundle adjustment — a modern synthesis. In Vision Algorithms: Theory and Practice, LNCS, Springer-Verlag, 298–375, 2000.
    3. Y. Furukawa, B. Curless, S. M. Seitz, and R. Szeliski: Manhattan-world stereo. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.

