“Emptying, refurnishing, and relighting indoor spaces” by Zhang, Cohen and Curless
Session/Category Title: Indoor Scene Modeling

Abstract:
Visualizing changes to indoor scenes is important for many applications. When looking for a new place to live, we want to see how the interior looks not with the current inhabitant’s belongings, but with our own furniture. Before purchasing a new sofa, we want to visualize how it would look in our living room. In this paper, we present a system that takes an RGBD scan of an indoor scene and produces a scene model of the empty room, including light emitters, materials, and the geometry of the non-cluttered room. Our system enables realistic rendering not only of the empty room under the original lighting conditions, but also with various scene edits, including adding furniture, changing the material properties of the walls, and relighting. These types of scene edits enable many mixed reality applications in areas such as real estate, furniture retail, and interior design. Our system contains two novel technical contributions: a 3D radiometric calibration process that recovers the appearance of the scene in high dynamic range, and a global-illumination-aware inverse rendering framework that simultaneously recovers reflectance properties of scene surfaces and lighting properties for several light source types, including generalized point and line lights.
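The first contribution above, 3D radiometric calibration, recovers scene appearance in high dynamic range. As an illustrative sketch only (the paper's actual pipeline is more involved), the following shows a minimal Debevec–Malik-style exposure merge, assuming the camera response has already been linearized; the function name `merge_exposures` and the triangle weighting scheme are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def merge_exposures(images, exposure_times, eps=1e-6):
    """Merge linearized LDR exposures into an HDR radiance map.

    Illustrative sketch: each pixel's radiance is estimated as a
    weighted average of (pixel_value / exposure_time) over the
    exposures, with a triangle ("hat") weight that favors mid-range
    values, which are least likely to be clipped or noise-dominated.
    Assumes the camera response has been removed, so pixel values
    are linear in [0, 1].
    """
    images = np.asarray(images, dtype=np.float64)          # (N, H, W)
    times = np.asarray(exposure_times, dtype=np.float64)   # (N,)
    # Triangle weights: 1 at mid-gray, 0 at the clipped extremes.
    weights = 1.0 - np.abs(2.0 * images - 1.0)
    # Per-exposure radiance estimates.
    radiance_est = images / times[:, None, None]
    # Weighted average; eps guards pixels clipped in every exposure.
    return (weights * radiance_est).sum(axis=0) / (weights.sum(axis=0) + eps)
```

Because the weight of a fully saturated pixel is zero, a value that clips in a long exposure is recovered from the shorter exposures where it stays in range, which is what makes the merged map high dynamic range.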


