“Online modeling for realtime facial animation” by Bouaziz, Wang and Pauly

  • ©Sofien Bouaziz, Yangang Wang, and Mark Pauly





Session/Category Title: Faces & Hands




    We present a new algorithm for realtime face tracking on commodity RGB-D sensing devices. Our method requires no user-specific training, calibration, or any other form of manual assistance, thus enabling a range of new applications in performance-based facial animation and virtual interaction at the consumer level. The key novelty of our approach is an optimization algorithm that jointly solves for a detailed 3D expression model of the user and the corresponding dynamic tracking parameters. Realtime performance and robust computation are facilitated by a novel subspace parameterization of the dynamic facial expression space. We provide a detailed evaluation showing that our approach significantly simplifies the performance capture workflow while achieving accurate facial tracking for realtime applications.
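The abstract describes jointly fitting a user-specific expression model and per-frame tracking parameters. As a rough illustration only, the sketch below shows the simplest ingredient of such a pipeline: fitting blendshape weights to an observed face scan by regularized least squares. All names, dimensions, and data here are hypothetical stand-ins, not the paper's actual formulation (which also adapts the expression model itself and uses a subspace parameterization).

```python
import numpy as np

# Hypothetical setup: V mesh vertices (flattened to 3V coords), K blendshapes.
rng = np.random.default_rng(0)
V, K = 100, 8
b0 = rng.normal(size=3 * V)              # neutral face (stand-in data)
B = rng.normal(size=(3 * V, K))          # blendshape displacement basis (stand-in)
w_true = rng.uniform(0.0, 1.0, K)
target = b0 + B @ w_true                 # synthetic "depth scan" observation

def fit_weights(b0, B, target, reg=1e-3):
    """Solve min_w ||b0 + B w - target||^2 + reg ||w||^2, then clamp
    weights to [0, 1]. This mimics only the per-frame tracking step of
    an alternating model/tracking optimization."""
    A = B.T @ B + reg * np.eye(B.shape[1])
    rhs = B.T @ (target - b0)
    w = np.linalg.solve(A, rhs)
    return np.clip(w, 0.0, 1.0)          # blendshape weights are bounded

w = fit_weights(b0, B, target)
residual = np.linalg.norm(b0 + B @ w - target)
```

In a full online system, a step like this would alternate with an update of the expression basis `B` itself as more of the user's face is observed, which is the joint optimization the paper refers to.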


