“FLARE: Fast Learning of Animatable and Relightable Mesh Avatars” by Bharadwaj, Zheng, Hilliges, Black and Fernandez

Conference:

    SIGGRAPH Asia 2023

Type(s):

    Technical Papers

Title:

    FLARE: Fast Learning of Animatable and Relightable Mesh Avatars

Session/Category Title:

    Full-Body Avatar


Presenter(s)/Author(s):

    Shrisha Bharadwaj, Yufeng Zheng, Otmar Hilliges, Michael J. Black, Victoria Fernández Abrevaya

Abstract:


    Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems. While 3D meshes enable efficient processing and are highly portable, they lack realism in terms of shape and appearance. Neural representations, on the other hand, are realistic but lack compatibility and are slow to train and render. Our key insight is that it is possible to efficiently learn high-fidelity 3D mesh representations via differentiable rendering, by exploiting highly-optimized methods from traditional computer graphics and approximating some of the components with neural networks. Specifically, we introduce FLARE, a technique that enables fast creation of animatable and relightable mesh avatars from a single monocular video. First, we learn a canonical geometry using a mesh representation, enabling efficient differentiable rasterization and straightforward animation via learned blendshapes and linear blend skinning weights. Second, we follow physically-based rendering and factor observed colors into intrinsic albedo, roughness, and a neural representation of the illumination, allowing the learned avatars to be relit in novel scenes. Since our input videos are captured on a single device with a narrow field of view, modeling the surrounding environment light is non-trivial. Based on the split-sum approximation for modeling specular reflections, we address this by approximating the pre-filtered environment map with a multi-layer perceptron (MLP) modulated by the surface roughness, eliminating the need to explicitly model the light. We demonstrate that our mesh-based avatar formulation, combined with learned deformation, material and lighting MLPs, produces avatars with high-quality geometry and appearance, while also being efficient to train and render compared to existing approaches.
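
    As a concrete illustration of the split-sum idea above: a small MLP can stand in for the pre-filtered environment map, taking the reflection direction and the surface roughness and returning pre-filtered specular radiance, so no explicit light probe needs to be stored. The PyTorch sketch below is a hypothetical rendition, not the authors' released code; the names (PrefilteredEnvMLP, specular_split_sum) and the Schlick-style Fresnel term used as a stand-in for the BRDF-integration half of the split sum are assumptions.

        import torch
        import torch.nn as nn

        class PrefilteredEnvMLP(nn.Module):
            """Hypothetical stand-in for a pre-filtered environment map:
            maps (reflection direction, roughness) to RGB specular radiance."""

            def __init__(self, hidden: int = 64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(4, hidden), nn.ReLU(),      # 3 direction dims + 1 roughness
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 3), nn.Softplus(),  # non-negative RGB radiance
                )

            def forward(self, refl_dir, roughness):
                refl_dir = nn.functional.normalize(refl_dir, dim=-1)
                return self.net(torch.cat([refl_dir, roughness], dim=-1))

        def specular_split_sum(normal, view_dir, roughness, f0, env_mlp):
            """Split-sum specular shading: a learned pre-filtered light term times
            an approximate BRDF term (a Schlick-style Fresnel stand-in here)."""
            # Reflect the (surface-to-eye) view direction about the normal.
            refl = 2.0 * (normal * view_dir).sum(-1, keepdim=True) * normal - view_dir
            prefiltered = env_mlp(refl, roughness)                 # first sum: light
            n_dot_v = (normal * view_dir).sum(-1, keepdim=True).clamp(min=1e-4)
            fresnel = f0 + (1.0 - f0) * (1.0 - n_dot_v) ** 5       # second sum: BRDF
            return prefiltered * fresnel

        # Example: shade a batch of 8 surface points with dielectric F0 = 0.04.
        env = PrefilteredEnvMLP()
        n = nn.functional.normalize(torch.randn(8, 3), dim=-1)
        v = nn.functional.normalize(torch.randn(8, 3), dim=-1)
        spec = specular_split_sum(n, v, torch.rand(8, 1), torch.full((8, 3), 0.04), env)

    Because the MLP is conditioned on roughness, rougher surfaces query a blurrier learned radiance, mirroring the roughness-indexed mip levels of a conventional pre-filtered environment map.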


Overview Page:



Submit a story:

If you would like to submit a story about this presentation, please contact us: historyarchives@siggraph.org