“KinectFusion: real-time dynamic 3D surface reconstruction and interaction” by Izadi, Newcombe, Kim, Hilliges, Molyneaux, et al. …

  • © Shahram Izadi, Richard A. Newcombe, David Kim, Otmar Hilliges, David Molyneaux, Steve Hodges, Pushmeet Kohli, Jamie Shotton, Andrew J. Davison, and Andrew Fitzgibbon

Conference:


Type(s):


Title:

    KinectFusion: real-time dynamic 3D surface reconstruction and interaction

Presenter(s)/Author(s):

    Shahram Izadi, Richard A. Newcombe, David Kim, Otmar Hilliges, David Molyneaux, Steve Hodges, Pushmeet Kohli, Jamie Shotton, Andrew J. Davison, and Andrew Fitzgibbon

Abstract:


    We present KinectFusion, a system that takes live depth data from a moving Kinect camera and, in real time, creates high-quality, geometrically accurate 3D models. Our system allows a user holding a Kinect camera to move quickly within any indoor space and rapidly scan and create a fused 3D model of the whole room and its contents within seconds. Even small motions, caused for example by camera shake, lead to new viewpoints of the scene and thus refinements of the 3D model, similar to the effect of image super-resolution. As the camera is moved closer to objects in the scene, more detail can be added to the acquired 3D model.
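
    The abstract describes fusing a stream of live depth maps from a moving camera into a single, progressively refined 3D model. As a rough illustration of that idea only, the sketch below averages truncated signed distances from each depth frame into a voxel grid, one standard way to fuse depth maps; it is not the authors' implementation, it omits the camera-tracking and raycasting stages a complete system needs, and every constant, intrinsic value, and name in it is an assumption chosen for the example.

# Illustrative sketch only: volumetric depth-map fusion by averaging truncated
# signed distances (a TSDF) into a voxel grid. A real pipeline would also track
# the camera pose (e.g. with ICP) and raycast the volume for display; both are
# omitted here. All constants and names are assumptions made for this example.
import numpy as np

W, H = 160, 120                 # assumed depth-image size
FX = FY = 120.0                 # assumed pinhole focal lengths (pixels)
CX, CY = W / 2.0, H / 2.0       # assumed principal point

N = 64                          # voxels per axis
VOX = 2.0 / N                   # voxel size: the grid spans a 2 m cube
TRUNC = 4 * VOX                 # truncation band for the signed distance

tsdf = np.ones((N, N, N), dtype=np.float32)     # truncated signed distances
weight = np.zeros((N, N, N), dtype=np.float32)  # per-voxel fusion weights

# Voxel centres in world coordinates: x, y in [-1, 1], z in [0, 2] metres.
ii, jj, kk = np.meshgrid(np.arange(N), np.arange(N), np.arange(N), indexing="ij")
centres = np.stack([(ii + 0.5) * VOX - 1.0,
                    (jj + 0.5) * VOX - 1.0,
                    (kk + 0.5) * VOX], axis=-1).reshape(-1, 3)

def integrate(depth, pose):
    """Fuse one depth image (H x W, metres, 0 = missing) taken at `pose`
    (a 4x4 camera-to-world matrix) into the global TSDF volume."""
    # Bring every voxel centre into the camera frame and project it.
    world_to_cam = np.linalg.inv(pose)
    cam = centres @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = cam[:, 2]
    u = np.round(cam[:, 0] * FX / z + CX).astype(int)
    v = np.round(cam[:, 1] * FY / z + CY).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Signed distance along the viewing ray, truncated to [-1, 1].
    sdf = np.clip((d - z) / TRUNC, -1.0, 1.0)
    update = valid & (sdf > -1.0)   # skip voxels far behind the observed surface
    t_flat, w_flat = tsdf.reshape(-1), weight.reshape(-1)  # views into the grids
    w_new = w_flat[update] + 1.0
    t_flat[update] = (t_flat[update] * w_flat[update] + sdf[update]) / w_new
    w_flat[update] = w_new

# Toy usage: fuse a synthetic flat wall 1.5 m in front of an identity camera pose.
depth_frame = np.full((H, W), 1.5, dtype=np.float32)
integrate(depth_frame, np.eye(4))
print("voxels touched by this frame:", int((weight > 0).sum()))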
