“Real-Time, Single Camera, Digital Human Development” by Roble, Hendler, Buttell, Cell, Briggs, et al. …

  • ©Douglas (Doug) Roble, Darren Hendler, Jeremy Buttell, Melissa Cell, Jason Briggs, Chad Reddick, Lonnie Iannazzo, Deer Li, Mark Williams, Lucio Moser, Cydney Wong, Dimitry Kachkovski, Jason Huang, Kai Zhang, David Mclean, Rickey Cloudsdale, Dan Milling, Ron Miller, JT Lawrence, and Chinyu Chien


    We have built a real-time (60 fps) photo-realistic facial motion capture system which uses a single camera, proprietary deep learning software, and Unreal Engine 4 to create photo-real digital humans and creatures. Our system uses thousands of frames of realistic captured 3D facial performance of an actor (generated from automated offline systems), instead of a traditional FACS-based facial rig, to produce an accurate model of how an actor’s face moves. This 3D data is used to create a real-time machine learning model which uses a single image to accurately describe the exact facial pose in under 17 milliseconds. The motion of the face is highly realistic and includes region-based blood flow, wrinkle activation, and pore structure changes, driven by geometry deformations in real time. The facial performance of the actor can be transferred to a character with extremely high fidelity, and switching the machine learning models is instantaneous. We consider this a significant advancement over other real-time avatar projects in development.

    Building on top of our real-time facial animation technology, we seek to make interaction with our avatars more immersive and emotive. We built an AR system that lets the actor driving the human or character see and interact with people in VR, or with others viewing in AR. With this technique, the character you are interacting with in VR can make correct eye contact, walk around you, and interact as if you were together, all while still achieving the highest-quality capture. This process allows for a much more tangible VR/AR experience than any other system.

    Another goal of ours is to achieve photo-real avatar telepresence with minimal latency. We have been able to successfully live-drive our digital humans from our office in Los Angeles to our office in Vancouver.
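The "under 17 milliseconds" figure follows directly from the 60 fps target: the per-frame budget is 1000/60 ≈ 16.7 ms, and the full image-to-pose inference must fit inside it. A minimal sketch of that loop is below; this is a hypothetical illustration, not the authors' proprietary system. The learned model is stood in for by a random linear map, and all names (`FRAME_BUDGET_MS`, `infer_pose`, the image and vertex sizes) are invented for the example.

```python
import time
import numpy as np

# At 60 fps the per-frame budget is 1000/60 ms, hence the quoted
# "under 17 milliseconds" for a single-image facial-pose inference.
FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.67 ms

rng = np.random.default_rng(0)

# Stand-in for the trained network: a random linear map from a
# flattened camera frame to per-vertex 3D offsets. (The real system
# predicts far denser geometry plus texture-map activations such as
# blood flow and wrinkle maps.)
N_VERTS = 1000
encoder = rng.standard_normal((64, 160 * 160)).astype(np.float32)
decoder = rng.standard_normal((3 * N_VERTS, 64)).astype(np.float32)

def infer_pose(frame: np.ndarray) -> np.ndarray:
    """Toy 'network': encode the frame, decode vertex offsets."""
    latent = encoder @ frame.ravel()
    return (decoder @ latent).reshape(N_VERTS, 3)

frame = rng.random((160, 160), dtype=np.float32)  # grayscale camera image
start = time.perf_counter()
offsets = infer_pose(frame)
elapsed_ms = (time.perf_counter() - start) * 1000.0

print(f"budget {FRAME_BUDGET_MS:.2f} ms, toy inference {elapsed_ms:.2f} ms")
```

Because the deformation targets come from dense captured 3D performance rather than a sparse FACS rig, the output here is a per-vertex offset field; swapping in a different character amounts to loading a different decoder, which is why switching models can be instantaneous.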

