“Example-based facial rigging” by Li, Weise and Pauly

  • ©Hao Li, Thibaut Weise, and Mark Pauly







    We introduce a method for generating facial blendshape rigs from a set of example poses of a CG character. Our system transfers controller semantics and expression dynamics from a generic template to the target blendshape model while solving for an optimal reproduction of the training poses. This enables a scalable design process in which the user iteratively adds training poses to refine the blendshape expression space; plausible animations can be obtained even from a single training pose. We show that formulating the optimization in gradient space yields superior results compared with a direct optimization on blendshape vertices. We provide examples for both hand-crafted characters and 3D scans of a real actor, and demonstrate the performance of our system in the context of markerless, art-directable facial tracking.
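    The gradient-space formulation mentioned above can be illustrated with a toy least-squares sketch. All names, dimensions, and the ridge-style regularizer below are illustrative assumptions, not the paper's actual solver: we stack per-triangle deformation gradients into flat vectors, assume the example poses' activation weights are known, and solve for a blendshape basis that reproduces the examples while staying close to a prior transferred from the generic template.

    ```python
    import numpy as np

    # Hypothetical toy setup (all sizes made up for illustration):
    # m blendshapes, p example poses, d stacked gradient coordinates.
    # Each example pose k has known activation weights W[k] (length m) and
    # observed deformation gradients G[k] (length d). We solve for the target
    # blendshape basis B (m x d), regularized toward a prior B0 obtained by
    # transferring the generic template's blendshapes to the target mesh.
    rng = np.random.default_rng(0)
    m, p, d = 4, 3, 12
    W = rng.random((p, m))                            # known example activations
    B_true = rng.random((m, d))                       # "ground truth" basis (toy)
    G = W @ B_true                                    # observed example gradients
    B0 = B_true + 0.1 * rng.standard_normal((m, d))   # template-transferred prior

    lam = 0.1  # prior weight: trades example fidelity vs. template semantics
    # Minimize ||W B - G||^2 + lam * ||B - B0||^2; the normal equations give
    # a single ridge-style linear solve shared by every gradient coordinate.
    A = W.T @ W + lam * np.eye(m)
    B = np.linalg.solve(A, W.T @ G + lam * B0)

    residual = np.linalg.norm(W @ B - G)  # small: examples are well reproduced
    print(residual)
    ```

    With more example poses the data term dominates and the basis converges to the examples; with few poses the template prior fills in the under-constrained blendshapes, which matches the paper's claim that even a single training pose yields plausible animation. In a real pipeline one would reconstruct vertex positions from the solved gradients via a Poisson-style solve, which this sketch omits.
    
    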


