“Real-time high-fidelity facial performance capture” by Cao, Bradley, Zhou and Beeler

  • © Chen Cao, Derek Bradley, Kun Zhou, and Thabo Beeler

Title:

    Real-time high-fidelity facial performance capture

Session/Category Title: Face Reality


Abstract:


    We present the first real-time high-fidelity facial capture method. The core idea is to enhance a global real-time face tracker, which provides a low-resolution face mesh, with local regressors that add in medium-scale details, such as expression wrinkles. Our main observation is that although wrinkles appear in different scales and at different locations on the face, they are locally very self-similar and their visual appearance is a direct consequence of their local shape. We therefore train local regressors from high-resolution capture data in order to predict the local geometry from local appearance at runtime. We propose an automatic way to detect and align the local patches required to train the regressors and run them efficiently in real-time. Our formulation is particularly designed to enhance the low-resolution global tracker with exactly the missing expression frequencies, avoiding superimposing spatial frequencies in the result. Our system is generic and can be applied to any real-time tracker that uses a global prior, e.g. blend-shapes. Once trained, our online capture approach can be applied to any new user without additional training, resulting in high-fidelity facial performance reconstruction with person-specific wrinkle details from a monocular video camera in real-time.
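
The abstract describes a two-stage, per-frame pipeline: a global tracker driven by a low-dimensional prior (e.g. blendshapes) produces a coarse mesh, and locally trained regressors then map the appearance of aligned image patches to the medium-scale shape detail the coarse mesh is missing. The sketch below illustrates that flow on dummy data; every name (track_coarse_mesh, extract_patch, the linear per-patch regressors, the 16x16 patch size, the displacement-along-the-normal representation) is an illustrative assumption rather than the authors' implementation, and the mean-subtraction step is only a crude stand-in for the paper's frequency separation.

    """
    Illustrative sketch (not the authors' code) of the capture loop in the abstract:
    a coarse global tracker output is refined by local regressors that map aligned
    patch appearance to medium-scale displacement detail.
    """
    import numpy as np

    rng = np.random.default_rng(0)

    # --- stand-ins for trained components (sizes are arbitrary) --------------
    N_VERTS, N_PATCHES, PATCH_DIM, VERTS_PER_PATCH = 500, 8, 16 * 16, 12

    # Hypothetical local regressors: one linear map per patch, from flattened
    # patch appearance to per-vertex displacements along the surface normal.
    local_regressors = [rng.normal(scale=1e-3, size=(VERTS_PER_PATCH, PATCH_DIM))
                        for _ in range(N_PATCHES)]
    # Which coarse-mesh vertices each patch influences (fixed at training time).
    patch_vertex_ids = [rng.choice(N_VERTS, VERTS_PER_PATCH, replace=False)
                        for _ in range(N_PATCHES)]

    def track_coarse_mesh(frame):
        """Placeholder for the global tracker (e.g. a blendshape-based tracker)."""
        return rng.normal(size=(N_VERTS, 3)), rng.normal(size=(N_VERTS, 3))

    def extract_patch(frame, patch_id):
        """Placeholder for detecting and aligning one local appearance patch."""
        return rng.normal(size=PATCH_DIM)

    def enhance_frame(frame):
        verts, normals = track_coarse_mesh(frame)       # low-resolution mesh
        detail = np.zeros(N_VERTS)
        for pid, (reg, vids) in enumerate(zip(local_regressors, patch_vertex_ids)):
            appearance = extract_patch(frame, pid)      # local appearance
            d = reg @ appearance                        # local shape estimate
            d -= d.mean()                               # keep only the detail band
            detail[vids] += d                           # missing from the coarse mesh
        return verts + detail[:, None] * normals        # displace along normals

    refined = enhance_frame(frame=None)
    print(refined.shape)  # (500, 3)

The design point carried over from the abstract is that each local regressor contributes only the detail band absent from the global prior, so the predicted displacements refine the coarse mesh rather than re-explaining shape the blendshape tracker already captures.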


