“Digital Ira and Beyond: Creating Photoreal Real-Time Digital Characters” by Pahlen, Jimenez, Danvoye, Debevec, Fyffe, et al. …
Prerequisites
Some experience with video game pipelines, facial animation, and shading models. The course is designed so that attendees with a wide range of experience levels will take away useful information and lessons.
Who Should Attend
Digital Character Artists, Game Developers, Texture Painters, and Researchers working on Performance Capture, Facial Modeling, and Real-Time Shading
Description
This course will present the process of creating “Digital Ira,” shown at the SIGGRAPH 2013 Real-Time Live venue, covering the complete set of technologies: high-resolution facial scanning, blendshape rigging, video-based performance capture, animation compression, real-time skin and eye shading, and hair rendering. The course will also present late-breaking results and refinements, and point the way toward future directions that may increase the quality and efficiency of this kind of digital character pipeline.

The actor for this project was scanned in 30 high-resolution expressions, from which eight were chosen for real-time performance rendering. Performance clips were captured using multi-view video. Expression UVs were interactively corresponded to the neutral expression and retopologized to an artist mesh. An animation solver creates a performance graph representing dense GPU optical flow between video frames and the eight expressions; dense optical flow and 3D triangulation are computed, yielding per-frame, spatially varying blendshape weights that approximate the performance. The performance is then converted to standard bone animation on a 4k mesh using a bone-weight and transform solver. Surface stress values are used to blend albedo, specular, normal, and displacement maps from the high-resolution scans per vertex at run time. DX11 rendering includes subsurface scattering, translucency, eye refraction and caustics, physically based two-lobe specular reflection with microstructure, depth of field, antialiasing, and grain.

The course will explain each of these processes, noting why each design choice was made and pointing to alternative components that could have been employed in place of any of the steps. We will also cover emerging technologies in performance capture and facial rendering.
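The spatially varying blendshape weights mentioned above can be illustrated with a minimal sketch: each vertex carries its own weight vector over the expression set, so different facial regions can mix expressions independently. This is an illustrative numpy stand-in, not the actual solver; the function name and toy data are hypothetical.

```python
import numpy as np

def evaluate_blendshapes(neutral, deltas, weights):
    """Evaluate a blendshape rig with per-vertex (spatially varying) weights.

    neutral: (V, 3) neutral-expression vertex positions.
    deltas:  (E, V, 3) per-expression offsets from the neutral scan.
    weights: (E, V) per-frame, per-vertex weights (e.g. solved from optical flow).
    """
    # Each vertex v blends the E expression deltas with its own weight
    # column weights[:, v], rather than one global weight per expression.
    return neutral + np.einsum("ev,evc->vc", weights, deltas)

# Toy example: 2 expressions, 3 vertices.
neutral = np.zeros((3, 3))
deltas = np.array([np.full((3, 3), 1.0),    # expression 0 pushes vertices +1
                   np.full((3, 3), -1.0)])  # expression 1 pushes vertices -1
weights = np.array([[1.0, 0.5, 0.0],        # vertex 0 fully expression 0 ...
                    [0.0, 0.5, 1.0]])       # ... vertex 2 fully expression 1
blended = evaluate_blendshapes(neutral, deltas, weights)
```

With global (non-varying) weights, every vertex would get the same mix; here vertex 0 follows one expression, vertex 2 the other, and vertex 1 averages both.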
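The surface-stress-driven map blending can likewise be sketched on the CPU: estimate per-vertex stress from how incident edge lengths change between the rest and posed mesh, then use the signed stress to interpolate toward a "stretched" or "compressed" scan map. Assumed details (mean edge strain as the stress measure, linear blending, the function names) are illustrative; the actual run-time blend happens per vertex on the GPU.

```python
import numpy as np

def edge_stress(rest_pos, posed_pos, edges):
    """Per-vertex surface stress: mean relative change of incident edge lengths.

    rest_pos, posed_pos: (V, 3) vertex positions; edges: (K, 2) index pairs.
    Positive values indicate stretching, negative values compression.
    """
    rest_len = np.linalg.norm(rest_pos[edges[:, 0]] - rest_pos[edges[:, 1]], axis=1)
    posed_len = np.linalg.norm(posed_pos[edges[:, 0]] - posed_pos[edges[:, 1]], axis=1)
    strain = (posed_len - rest_len) / rest_len
    stress = np.zeros(len(rest_pos))
    count = np.zeros(len(rest_pos))
    for (a, b), s in zip(edges, strain):
        stress[a] += s; stress[b] += s
        count[a] += 1; count[b] += 1
    return stress / np.maximum(count, 1)

def blend_maps(neutral_map, stretch_map, compress_map, stress):
    """Blend per-vertex map values (albedo, normal, etc.) by signed stress."""
    t = np.clip(stress, -1.0, 1.0)
    # Stretched regions pull toward the stretch-expression scan,
    # compressed regions toward the compressed-expression scan.
    target = np.where(t[:, None] >= 0, stretch_map, compress_map)
    return neutral_map + np.abs(t)[:, None] * (target - neutral_map)

# Toy case: one edge stretched to twice its rest length.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
posed = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
stress = edge_stress(rest, posed, np.array([[0, 1]]))
result = blend_maps(np.zeros((2, 1)), np.ones((2, 1)), -np.ones((2, 1)), stress)
```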
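The two-lobe specular term can be sketched as a weighted sum of two Blinn-Phong lobes, a broad one and a tight one, which is a common way to approximate the layered reflectance of skin. This is a simplified stand-in for the course's physically based shading (no Fresnel, shadowing, or microstructure terms); the normalization factor and parameter names are assumptions.

```python
import numpy as np

def two_lobe_specular(n, l, v, gloss_broad, gloss_tight, mix):
    """Mix a broad and a tight normalized Blinn-Phong lobe.

    n, l, v: unit normal, light, and view vectors.
    gloss_broad/gloss_tight: Blinn-Phong exponents; mix in [0, 1] weights the broad lobe.
    """
    h = (l + v) / np.linalg.norm(l + v)        # half vector
    ndh = max(float(np.dot(n, h)), 0.0)
    # Energy-normalized Blinn-Phong lobe: (g + 2) / (2*pi) * (n.h)^g.
    lobe = lambda g: ((g + 2.0) / (2.0 * np.pi)) * ndh ** g
    return mix * lobe(gloss_broad) + (1.0 - mix) * lobe(gloss_tight)

n = l = v = np.array([0.0, 0.0, 1.0])          # head-on: n.h = 1
spec = two_lobe_specular(n, l, v, gloss_broad=16.0, gloss_tight=256.0, mix=0.5)
```

A single lobe either washes out the sharp highlight or loses the soft sheen; blending two exponents captures both at negligible extra shading cost.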
Attendees will receive a solid understanding of the techniques used to create photoreal digital characters in video games and other applications, and the confidence to incorporate some of the techniques into their own pipelines.