“My Digital Face” by Shapiro, Suma, Wang, Debevec, Ichikari, et al.


  • ©Ari Shapiro, Evan A. Suma, Ruizhe Wang, Paul E. Debevec, Ryosuke Ichikari, Graham Fyffe, Andrew W. Feng, Oleg Alexander, and Dan Casas

Conference:


  • SIGGRAPH 2015

Type(s):


E-Tech Type(s):



Description:


    This project puts the ability to produce a photorealistic digital face into the hands of nearly anyone, without an expensive rig, special hardware, or 3D expertise.

    Using a single commodity depth sensor (an Intel RealSense) and a laptop computer, the research team captures several scans of a subject's face in different expressions. From those scans, a near-automatic pipeline builds a set of blendshapes, which are then puppeteered in real time by face-tracking software. A key stage of the pipeline automatically establishes correspondences between the geometry and textures of the different scans, greatly reducing texture drift between blendshapes. To extend control beyond individual whole-face shapes, the system can automatically apply blendshape masks to different regions of the face, mixing effects from separate parts and giving independent control over blinks and lip shapes, as sketched below.
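
    The following is a minimal, hypothetical sketch of the masked blendshape mixing described above, not the team's actual code: tracked per-expression weights scale each scan's offset from the neutral mesh, and a per-region vertex mask confines that offset to its part of the face. The function and variable names (evaluate_face, region_masks, and so on), the numpy mesh layout, and the tiny example data are all assumptions for illustration.

        import numpy as np

        def evaluate_face(neutral, blendshapes, weights, region_masks, regions):
            """Blend expression offsets onto the neutral mesh.

            neutral:      (V, 3) neutral-pose vertex positions
            blendshapes:  dict name -> (V, 3) vertices of one captured expression scan
            weights:      dict name -> float in [0, 1], e.g. from a face tracker
            region_masks: dict region -> (V,) per-vertex weights in [0, 1]
            regions:      dict name -> region, assigning each shape to a facial region
            """
            result = neutral.copy()
            for name, shape in blendshapes.items():
                delta = shape - neutral                      # offset of this expression from neutral
                mask = region_masks[regions[name]][:, None]  # confine the offset to its region
                result += weights.get(name, 0.0) * mask * delta
            return result

        # Example: a blink and a smile act on disjoint regions, so they remain
        # independently controllable even though each was captured as a whole-face scan.
        V = 4  # tiny stand-in mesh
        neutral = np.zeros((V, 3))
        blendshapes = {"blink": np.full((V, 3), 0.1), "smile": np.full((V, 3), 0.2)}
        region_masks = {"eyes": np.array([1.0, 1.0, 0.0, 0.0]),
                        "mouth": np.array([0.0, 0.0, 1.0, 1.0])}
        regions = {"blink": "eyes", "smile": "mouth"}
        mesh = evaluate_face(neutral, blendshapes, {"blink": 1.0, "smile": 0.5},
                             region_masks, regions)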

    The results are photorealistic and sufficiently representative of the captured subjects that they could be used in social media, video conferencing, business communications, and other settings where an accurate representation (as opposed to an artistic or stylized one) is desired or appropriate.

    During the demo, the team scans two people, who then puppeteer their own faces in real time.


PDF:



ACM Digital Library Publication:



Overview Page: