“Accelerating Facial Motion Capture With Video-driven Animation Transfer” by Serra, Williams and Moser

  • © Jose Serra, Mark Williams, and Lucio Moser



Entry Number: 19





    We describe a hybrid pipeline that leverages: 1) video-driven animation transfer [Moser et al. 2021], which regresses high-quality animation from a single input image under partially controlled conditions, and 2) a marker-based tracking approach [Moser et al. 2017] that, while more complex and slower, can handle the most challenging scenarios in the capture set. By applying the best-suited approach to each shot, the overall pipeline is faster and requires less user intervention, without loss of quality. We also improve on the prior work [Moser et al. 2021] with augmentations during training that make it more robust to the Head Mounted Camera (HMC) scenario. The new pipeline is currently being integrated into our offline and real-time workflows.
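    The per-shot routing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-shot difficulty score, threshold, and all function names are our own assumptions.

```python
# Hypothetical sketch of per-shot solver routing between the fast
# video-driven regressor and the slower marker-based tracker.
# The difficulty score and threshold are illustrative assumptions,
# not part of the published pipeline.

def solve_shot(shot, difficulty_threshold=0.5):
    """Route one shot: use the fast single-image regressor when the
    shot is well-behaved, otherwise fall back to marker tracking."""
    if shot["difficulty"] <= difficulty_threshold:
        return ("video_driven_transfer", shot["id"])  # [Moser et al. 2021]
    return ("marker_based_tracking", shot["id"])      # [Moser et al. 2017]

def solve_capture_set(shots):
    """Apply the routing rule to every shot in the capture set."""
    return [solve_shot(s) for s in shots]

shots = [
    {"id": "sh010", "difficulty": 0.2},  # partially controlled conditions
    {"id": "sh020", "difficulty": 0.9},  # occlusions, extreme motion
]
print(solve_capture_set(shots))
# → [('video_driven_transfer', 'sh010'), ('marker_based_tracking', 'sh020')]
```

    Routing at the shot level (rather than per frame) keeps each shot internally consistent, which matches the paper's claim that only the hardest shots pay the cost of the slower marker-based solver.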


    DD. 2022. Digital Domain's Masquerade Offline Capture. Retrieved Feb 20, 2022 from https://digitaldomain.com/technology/masquerade-offline-capture/
    DI4D. 2022. DI4D Pro. Retrieved Feb 20, 2022 from https://www.di4d.com/di4d-pro/
    Disney. 2022. Anyma. Retrieved Feb 20, 2022 from https://studios.disneyresearch.com/anyma/
    Martin Klaudiny, Steven McDonagh, Derek Bradley, Thabo Beeler, and Kenny Mitchell. 2017. Real-Time Multi-View Facial Capture with Synthetic Training. Computer Graphics Forum 36 (2017).
    Lucio Moser, Chinyu Chien, Mark Williams, Jose Serra, Darren Hendler, and Doug Roble. 2021. Semi-Supervised Video-Driven Facial Animation Transfer for Production. ACM Trans. Graph. 40, 6 (2021).
    Lucio Moser, Darren Hendler, and Doug Roble. 2017. Masquerade: Fine-Scale Details for Head-Mounted Camera Motion Capture Data. In ACM SIGGRAPH 2017 Talks. New York, NY, USA, Article 18, 2 pages.
    Weta. 2022. Weta Digital – FACETS. Retrieved Feb 20, 2022 from https://www.wetafx.co.nz/research-and-tech/technology/facets/
    Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. In IEEE International Conference on Computer Vision (ICCV).
