“Key-frame removal method for blendshape-based cartoon lip-sync animation” by Kawamoto, Yotsukura and Nakamura

  • ©Shin-ichi Kawamoto, Tatsuo Yotsukura, and Satoshi Nakamura







     In this paper, we describe a novel approach for producing speaking cartoon animation with a focus on directability. When a cartoon character speaks a line, its lips should move synchronously with the speech ("lip-synching"). Although many researchers have achieved lip-sync for CG animation [Ezzat et al. 2002; Morishima and Nakamura 2004], most such animations are generated fully automatically, leaving little room for an animator's control. We therefore propose a directable lip-sync animation method based on blendshapes. The blendshape (linear shape interpolation) approach is one of the most commonly employed techniques in animation, since its controls are intuitive and flexible. In addition, the proposed method lets an animator control the number of key-frames by specifying a desired key-frame rate.
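     The abstract names two ingredients: blendshape (linear shape interpolation) deformation, and reducing key-frames to a rate chosen by the animator. The abstract does not specify the removal criterion, so the following is only an illustrative sketch under assumed details: a standard blendshape combination (neutral mesh plus weighted target offsets), and a hypothetical greedy pruning loop that repeatedly drops the interior key-frame that linear interpolation between its neighbours reconstructs with the smallest error. The function names `blend_shapes` and `prune_keyframes` are our own, not the authors'.

```python
import numpy as np

def blend_shapes(neutral, targets, weights):
    """Linear shape interpolation: neutral vertices plus weighted
    offsets toward each target shape (a standard blendshape model)."""
    deformed = neutral.astype(float).copy()
    for target, w in zip(targets, weights):
        deformed += w * (target - neutral)
    return deformed

def prune_keyframes(times, values, keep):
    """Hypothetical greedy key-frame removal: repeatedly delete the
    interior key-frame whose value is best predicted by linearly
    interpolating its neighbours, until `keep` key-frames remain."""
    times, values = list(times), list(values)
    while len(times) > keep:
        errs = []
        for i in range(1, len(times) - 1):
            t0, t1, t2 = times[i - 1], times[i], times[i + 1]
            alpha = (t1 - t0) / (t2 - t0)
            interp = (1 - alpha) * values[i - 1] + alpha * values[i + 1]
            errs.append((abs(values[i] - interp), i))
        _, victim = min(errs)  # smallest reconstruction error goes first
        del times[victim]
        del values[victim]
    return times, values
```

     With this sketch, a perfectly linear weight curve collapses to its two endpoint key-frames, while curved segments retain the key-frames that carry the most shape information; the animator's key-frame rate determines `keep`.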


    1. Ezzat, T., Geiger, G., and Poggio, T. 2002. Trainable video-realistic speech animation. In SIGGRAPH, 388–398.
    2. Morishima, S., and Nakamura, S. 2004. Multimodal translation system using texture mapped lip-sync images for video mail and automatic dubbing applications. EURASIP Journal on Applied Signal Processing 2004-11, 1637–1647.
