Fusion4D: real-time performance capture of challenging scenes
Conference:
Type(s):
Session/Category Title: CAPTURING HUMANS
Presenter(s)/Author(s):
- Mingsong Dou
- Sameh Khamis
- Yury Degtyarev
- Philip L. Davidson
- Sean Ryan Fanello
- Adarsh Kowdle
- Christoph Rhemann
- David Kim
- Jonathan Taylor
- Pushmeet Kohli
- Vladimir Tankovich
- Shahram Izadi
Moderator(s):
Abstract:
We contribute a new pipeline for live multi-view performance capture that generates temporally coherent, high-quality reconstructions in real time. Our algorithm supports both incremental reconstruction, improving the surface estimation over time, and parameterization of the nonrigid scene motion. Our approach is highly robust to both large frame-to-frame motion and topology changes, allowing us to reconstruct extremely challenging scenes. We demonstrate advantages over related real-time techniques that either deform an online-generated template or continually fuse depth data nonrigidly into a single reference model. Finally, we show geometric reconstruction results on par with offline methods that require orders of magnitude more processing time and many more RGBD cameras.
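To make the abstract's pipeline description concrete, the following is a minimal, hypothetical sketch of how a per-frame loop with these ingredients could be organized: fuse multi-view depth into a data volume, solve for a parameterized nonrigid warp against the current reference, fuse incrementally when alignment succeeds, and fall back to the new data when it fails (large motion or a topology change). All class and function names here are placeholders chosen for illustration, not the authors' implementation or API; the volumes are toy TSDF grids rather than real depth fusion.

```python
# Hypothetical Fusion4D-style per-frame loop (placeholder names, toy data).
from dataclasses import dataclass
import numpy as np


@dataclass
class Volume:
    tsdf: np.ndarray    # truncated signed distance values
    weight: np.ndarray  # per-voxel confidence accumulated over time


def depth_to_volume(depth_frames, shape=(32, 32, 32)):
    """Stand-in for fusing multiple depth views into a single data volume."""
    tsdf = np.clip(np.mean(depth_frames, axis=0).reshape(shape), -1.0, 1.0)
    return Volume(tsdf=tsdf, weight=np.ones(shape))


def estimate_nonrigid_motion(reference, data):
    """Stand-in for solving a parameterized nonrigid warp (e.g. a deformation graph)."""
    residual = float(np.mean(np.abs(reference.tsdf - data.tsdf)))
    warp = np.zeros(3)  # placeholder motion parameters
    return warp, residual


def warp_and_fuse(reference, data, warp):
    """Incremental fusion: blend the (warped) reference with the new data volume."""
    w = reference.weight + data.weight
    tsdf = (reference.tsdf * reference.weight + data.tsdf * data.weight) / w
    return Volume(tsdf=tsdf, weight=np.minimum(w, 64.0))  # cap accumulated confidence


def process_frame(reference, depth_frames, failure_threshold=0.5):
    data = depth_to_volume(depth_frames)
    if reference is None:
        return data
    warp, residual = estimate_nonrigid_motion(reference, data)
    if residual > failure_threshold:
        return data  # large motion / topology change: restart from the data volume
    return warp_and_fuse(reference, data, warp)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = None
    for t in range(5):
        views = rng.random((8, 32 * 32 * 32))  # 8 synthetic depth "views"
        reference = process_frame(reference, views)
        print(f"frame {t}: mean |tsdf| = {np.mean(np.abs(reference.tsdf)):.3f}")
```

The reset-to-data branch is what makes such a loop tolerant of tracking failure: rather than accumulating error into a single long-lived model, the reference can be reinitialized from the current observation and refined again over subsequent frames.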