“Temporal Upsampling of Performance Geometry Using Photometric Alignment” by Wilson, Ghosh, Peers, Chiang, Busch, and Debevec

  • © Cyrus A. Wilson, Abhijeet Ghosh, Pieter Peers, Jen-Yuan Chiang, Jay Busch, and Paul E. Debevec

Conference:


Type:


Title:

    Temporal Upsampling of Performance Geometry Using Photometric Alignment

Presenter(s)/Author(s):

    Cyrus A. Wilson, Abhijeet Ghosh, Pieter Peers, Jen-Yuan Chiang, Jay Busch, and Paul E. Debevec

Abstract:


    We present a novel technique for acquiring detailed facial geometry of a dynamic performance using extended spherical gradient illumination. Key to our method is a new algorithm for jointly aligning two photographs, captured under a gradient illumination condition and its complement, to a full-on tracking frame, providing dense temporal correspondences under changing lighting conditions. We employ a two-step algorithm to reconstruct detailed geometry for every captured frame. In the first step, we coalesce information from the gradient illumination frames to the full-on tracking frame and form a temporally aligned photometric normal map, which is subsequently combined with dense stereo correspondences to yield detailed geometry. In the second step, we propagate the detailed geometry back to every captured instance, guided by the previously computed dense correspondences. We demonstrate reconstructed dynamic facial geometry for every captured frame, for performances captured at moderate to video rates of acquisition.
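
    The joint alignment exploits the fact that a spherical gradient illumination condition and its complement sum to full-on illumination, so the summed pair is photometrically comparable to the tracking frame even though neither photograph matches it on its own. The Python sketch below illustrates this idea under simplifying assumptions; it is not the authors' implementation. OpenCV's Farneback optical flow stands in for the paper's joint photometric alignment, the function names warp_to_tracking, align_gradient_pair, and photometric_normals are hypothetical, and the normal estimate follows the gradient/complement ratio of spherical gradient photometric stereo (Ma et al. 2007, reference 8).

    # Illustrative sketch only (not the authors' implementation). Assumes
    # grayscale uint8 frames of identical size: a full-on tracking frame,
    # one spherical-gradient frame, and its complement.
    import numpy as np
    import cv2

    def warp_to_tracking(image, flow):
        # Warp "image" into the tracking frame's coordinates, given a dense
        # flow field computed from the tracking frame toward "image".
        h, w = flow.shape[:2]
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    def align_gradient_pair(tracking, grad, comp):
        # A gradient condition and its complement sum to full-on illumination,
        # so grad + comp can be compared against the tracking frame directly.
        proxy = grad.astype(np.float32) + comp.astype(np.float32)
        proxy_u8 = cv2.normalize(proxy, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        track_u8 = cv2.normalize(tracking.astype(np.float32), None, 0, 255,
                                 cv2.NORM_MINMAX).astype(np.uint8)
        # Dense optical flow from the tracking frame to the summed proxy image.
        flow = cv2.calcOpticalFlowFarneback(track_u8, proxy_u8, None,
                                            0.5, 4, 21, 5, 7, 1.5, 0)
        return warp_to_tracking(grad, flow), warp_to_tracking(comp, flow)

    def photometric_normals(grads, comps, eps=1e-6):
        # Per-pixel normals from three aligned gradient/complement pairs
        # (x, y, z spherical gradients), using the ratio formulation of
        # spherical gradient photometric stereo.
        components = []
        for g, c in zip(grads, comps):
            g = g.astype(np.float32)
            c = c.astype(np.float32)
            components.append((g - c) / (g + c + eps))  # each value in [-1, 1]
        n = np.stack(components, axis=-1)
        return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)

    In this sketch, running align_gradient_pair for the x, y, and z gradient pairs around a tracking frame and passing the warped images to photometric_normals produces a temporally aligned normal map; in the paper, such a map is combined with dense stereo correspondences into detailed geometry, which is then propagated back to every captured frame using the same correspondences.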

References:


    1. Ahmed, N., Theobalt, C., Dobrev, P., Seidel, H.-P., and Thrun, S. 2008. Robust fusion of dynamic shape and normal capture for high-quality reconstruction of time-varying geometry. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’08). 1–8.
    2. Bickel, B., Botsch, M., Angst, R., Matusik, W., Otaduy, M., Pfister, H., and Gross, M. 2007. Multi-scale capture of facial geometry and motion. ACM Trans. Graph. 26, 3, 33: 1–10. 
    3. Brox, T., Bruhn, A., Papenberg, N., and Weickert, J. 2004. High accuracy optical flow estimation based on a theory for warping. In Proceedings of the European Conference on Computer Vision. 25–36.
    4. Davis, J., Nehab, D., Ramamoorthi, R., and Rusinkiewicz, S. 2005. Spacetime stereo: A unifying framework for depth from triangulation. IEEE Trans. Patt. Anal. Mach. Intell. 27, 2, 296–302. 
    5. Hernandez, C., Vogiatzis, G., Brostow, G. J., Stenger, B., and Cipolla, R. 2007. Non-rigid photometric stereo with colored lights. In Proceedings of the IEEE International Conference on Computer Vision. 1–8.
    6. Kang, S., Uyttendaele, M., Winder, S., and Szeliski, R. 2003. High dynamic range video. ACM Trans. Graph. 22, 3, 319–325. 
    7. Lim, J., Ho, J., Yang, M.-H., and Kriegman, D. 2005. Passive photometric stereo from motion. In Proceedings of the IEEE International Conference on Computer Vision. 1635–1642. 
    8. Ma, W.-C., Hawkins, T., Peers, P., Chabert, C.-F., Weiss, M., and Debevec, P. 2007. Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination. In Proceedings of the Eurographics Symposium on Rendering. 183–194. 
    9. Ma, W.-C., Jones, A., Chiang, J.-Y., Hawkins, T., Frederiksen, S., Peers, P., Vukovic, M., Ouhyoung, M., and Debevec, P. 2008. Facial performance synthesis using deformation-driven polynomial displacement maps. ACM Trans. Graph. 27, 5, 121: 1–10. 
    10. Malzbender, T., Wilburn, B., Gelb, D., and Ambrisco, B. 2006. Surface enhancement using real-time photometric stereo and reflectance transformation. In Proceedings of the Eurographics Symposium on Rendering. 245–250. 
    11. Nehab, D., Rusinkiewicz, S., Davis, J., and Ramamoorthi, R. 2005. Efficiently combining positions and normals for precise 3D geometry. ACM Trans. Graph. 24, 3, 536–543. 
    12. Rusinkiewicz, S., Hall-Holt, O., and Levoy, M. 2002. Real-time 3D model acquisition. ACM Trans. Graph. 21, 3, 438–446. 
    13. Scharstein, D. and Szeliski, R. 2002. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vision 47, 1–3, 7–42. 
    14. Vedula, S., Baker, S., and Kanade, T. 2005. Image based spatio-temporal modeling and view interpolation of dynamic events. ACM Trans. Graph. 24, 2, 240–261. 
    15. Vlasic, D., Peers, P., Baran, I., Debevec, P., Popović, J., Rusinkiewicz, S., and Matusik, W. 2009. Dynamic shape capture using multi-view photometric stereo. ACM Trans. Graph. 28, 5, 174: 1–11. 
    16. Wand, M., Adams, B., Ovsjanikov, M., Berner, A., Bokeloh, M., Jenke, P., Guibas, L., Seidel, H.-P., and Schilling, A. 2009. Efficient reconstruction of nonrigid shape and motion from real-time 3D scanner data. ACM Trans. Graph. 28, 2, 15: 1–15. 
    17. Wenger, A., Gardner, A., Tchou, C., Unger, J., Hawkins, T., and Debevec, P. 2005. Performance relighting and reflectance transformation with time-multiplexed illumination. ACM Trans. Graph. 24, 3, 756–764. 
    18. XYZRGB. 3D laser scanning—XYZ RGB Inc. http://www.xyzrgb.com/.
    19. Zhang, S. and Huang, P. 2006. High-resolution, real-time three-dimensional shape measurement. Optical Engineering 45, 12, 123601: 1–8.
    20. Zhang, L., Curless, B., Hertzmann, A., and Seitz, S. M. 2003. Shape and motion under varying illumination: Unifying structure from motion, photometric stereo, and multi-view stereo. In Proceedings of the IEEE International Conference on Computer Vision. 618–625. 
    21. Zhang, L., Snavely, N., Curless, B., and Seitz, S. M. 2004. Spacetime faces: High resolution capture for modeling and animation. ACM Trans. Graph. 23, 3, 548–558. 
