“PINSCREEN: CREATING PERFORMANCE-DRIVEN AVATARS IN SECONDS” by University of Southern California
Conference:
Type(s):
E-Tech Type(s):
- Avatar / Agent
- Gaming and Entertainment
- Virtual Reality
Title:
- PINSCREEN: CREATING PERFORMANCE-DRIVEN AVATARS IN SECONDS
Developer(s):
- Hao Li
- Shun-Suke Saito
- Lingyu Wei
- Iman Sadeghi
- Liwen Hu
- Jaewoo Seo
- Koki Nagano
- Jens Fursund
- Yen-Chun Chen
- Stephen Chen
Project Affiliation:
- University of Southern California
Description:
This fully automatic framework creates a complete 3D avatar from a single unconstrained image. Through a simple web interface, a user uploads any photograph, and a high-quality head model, including animation-friendly blendshapes and joint-based rigs, is reconstructed within seconds. The system digitizes the entire model using a textured-mesh representation for the head and volumetric strips for the hair. Several animation examples are instantly generated for preview purposes, and the model can be loaded into Unity for immediate performance capture using a webcam.
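The listing does not spell out how the reconstructed rig is driven at runtime. As a rough illustration only, a minimal sketch of a linear blendshape rig is given below, assuming hypothetical vertex arrays and tracker-supplied weights; the actual Pinscreen rig, tracker, and Unity integration are not described here.

```python
import numpy as np

def apply_blendshapes(neutral, deltas, weights):
    """Deform a neutral head mesh with a linear blendshape rig.

    neutral : (V, 3) array of neutral-pose vertex positions.
    deltas  : (K, V, 3) array of per-blendshape vertex offsets
              (expression minus neutral), one per expression shape.
    weights : (K,) array of per-frame weights, e.g. as a webcam-based
              facial tracker might supply during performance capture.
    """
    # Linear blendshape model: v = v_neutral + sum_k w_k * delta_k
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: 4 vertices, 2 expressions ("smile", "jaw open").
neutral = np.zeros((4, 3))
deltas = np.random.randn(2, 4, 3) * 0.01   # stand-in for sculpted shapes
weights = np.array([0.7, 0.2])             # stand-in for tracked weights
deformed = apply_blendshapes(neutral, deltas, weights)
print(deformed.shape)  # (4, 3)
```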
The system integrates state-of-the-art advances in facial-shape modeling, appearance inference, and a new pipeline for single-view hair generation based on hairstyle retrieval from a massive database, followed by a strand-to-hair-strip conversion method.
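The hair pipeline is summarized here only at a high level. The sketch below is a loose, hypothetical stand-in for its two stages: retrieving the closest hairstyle from a database by descriptor distance, then expanding the retrieved strands into flat strips. The descriptor features, distance metric, strip width, and all function names are assumptions, not details from the source.

```python
import numpy as np

def retrieve_hairstyle(query_descriptor, database):
    """Return the database hairstyle whose descriptor is closest to the query.

    query_descriptor : (D,) feature vector computed from the input photo
                       (the actual features used are not specified here).
    database         : list of (descriptor, strand_model) pairs.
    """
    descriptors = np.stack([d for d, _ in database])
    distances = np.linalg.norm(descriptors - query_descriptor, axis=1)
    return database[int(np.argmin(distances))][1]

def strands_to_strips(strands, width=0.002):
    """Rough stand-in for strand-to-strip conversion: expand each polyline
    strand into a flat ribbon of the given width (metres)."""
    strips = []
    for strand in strands:                        # strand: (N, 3) polyline
        tangents = np.gradient(strand, axis=0)
        # Arbitrary side vector roughly orthogonal to the tangent.
        side = np.cross(tangents, np.array([0.0, 1.0, 0.0]))
        side /= np.linalg.norm(side, axis=1, keepdims=True) + 1e-8
        side *= width / 2
        strips.append(np.concatenate([strand - side, strand + side], axis=1))
    return strips

# Toy usage with a random database and short random strands.
db = [(np.random.rand(8), np.random.rand(3, 5, 3)) for _ in range(10)]
strands = retrieve_hairstyle(np.random.rand(8), db)
strips = strands_to_strips(strands)
```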
Pinscreen-generated models are visually comparable to state-of-the-art game characters. With its scalable and instant asset generation, the method can significantly influence next-generation virtual film and game production, as well as VR applications, in which personalized avatars can be used for social interactions.
This live demonstration shows that compelling avatars and animations can be generated in very little time by anyone, with minimal effort.