“Multi-view Facial Capture using Binary Spherical Gradient Illumination” by Lattas, Wang, Zafeiriou and Ghosh

  • ©Alexander Lattas, Mingqian Wang, Stefanos Zafeiriou, and Abhijeet Ghosh

Conference:


Type:


Entry Number: 59

Title:

    Multi-view Facial Capture using Binary Spherical Gradient Illumination

Presenter(s)/Author(s):

    Alexander Lattas, Mingqian Wang, Stefanos Zafeiriou, and Abhijeet Ghosh

Abstract:


     INTRODUCTION 

    High-resolution facial capture has received significant attention in computer graphics due to its role in the creation of photorealistic digital humans for applications ranging from film and VFX to games and VR. The current state-of-the-art method for high-quality acquisition of facial geometry and reflectance employs polarized spherical gradient illumination [Ghosh et al. 2011; Ma et al. 2007]. The technique has had a significant impact on facial capture for film VFX, recently receiving a Technical Achievement award from the Academy of Motion Picture Arts and Sciences [Aca 2019]. However, the use of polarized illumination imposes a few constraints: for multi-view capture, the camera viewpoints must be located close to the equator of the LED sphere to achieve appropriate diffuse-specular separation [Ghosh et al. 2011]. Polarization also reduces the amount of light available for each exposure and doubles the number of photographs required (one set in the cross-polarized state and one in the parallel-polarized state), increasing the capture time for each face scan.
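
    For background, a brief sketch (in notation chosen here, not taken from the talk) of how spherical gradient illumination recovers a per-pixel reflectance centroid from an image L_i captured under the linear gradient pattern along axis i and a full-on image L_c [Ma et al. 2007]:

        \[
          P_i(\omega) = \tfrac{1}{2}\,(1 + \omega_i), \qquad
          L_i = \int_{\Omega} P_i(\omega)\, R(\omega)\, \mathrm{d}\omega, \qquad
          L_c = \int_{\Omega} R(\omega)\, \mathrm{d}\omega, \qquad
          \bar{\omega}_i = \frac{2\,L_i}{L_c} - 1 .
        \]

    For a diffuse lobe this centroid aligns with the surface normal, while for a specular lobe it aligns with the reflection vector, which is why separating the two reflectance components (via polarization in the prior work) matters for normal estimation.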

    In this work, we adapt our recently proposed diffuse-specular separation technique using binary spherical gradient illumination [Kampouris et al. 2018] to multi-view face capture. Instead of relying on polarized illumination, the binary-gradient approach separates reflectance in color space (assuming the dichromatic reflectance of a dielectric material such as skin). Besides requiring fewer images per face scan (with higher light efficiency) than polarized spherical gradients, the method can be employed with LEDs that only switch between binary on-off states and does not require intensity modulation to create gray levels. As can be seen in Figure 1, the high-resolution facial normal map acquired using binary gradients exhibits sharper skin mesostructure details (obtained from specular reflectance) than the one acquired with polarized spherical gradients, while achieving high-quality reflectance separation for realistic rendering of skin appearance. In the following, we discuss modifications to the processing of data acquired with the binary spherical gradient illumination of Kampouris et al. [2018] that we found useful for multi-view face capture.
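
    To illustrate the color-space separation idea, the sketch below decomposes each pixel of a linear RGB image into a diffuse component along an assumed per-pixel albedo chromaticity and a specular component along the (white) illuminant color, in the spirit of Mallick et al. [2005]. This is a minimal sketch under stated assumptions, not the pipeline of Kampouris et al. [2018]: the function name separate_diffuse_specular and the inputs image and albedo_chroma (e.g., a chromaticity estimate derived from the complementary binary-gradient observations) are hypothetical choices made here for illustration.

        # Minimal sketch: per-pixel color-space diffuse-specular separation under
        # white illumination, in the spirit of Mallick et al. [2005]. Not the
        # authors' exact pipeline; `image` and `albedo_chroma` are assumed inputs.
        import numpy as np

        def separate_diffuse_specular(image, albedo_chroma, light_color=(1.0, 1.0, 1.0)):
            """Split an HxWx3 linear RGB image into diffuse and specular layers.

            image         : HxWx3 float array, linear RGB observation.
            albedo_chroma : HxWx3 float array, per-pixel diffuse chromaticity
                            (hypothetical input, e.g. estimated from the
                            complementary binary-gradient images).
            light_color   : RGB color of the illuminant (white LEDs assumed).
            """
            s = np.asarray(light_color, dtype=np.float64)
            s = s / np.linalg.norm(s)                              # unit illuminant color
            d = albedo_chroma / (np.linalg.norm(albedo_chroma, axis=-1, keepdims=True) + 1e-8)

            # Least-squares fit of image = alpha * d + beta * s per pixel,
            # using the closed-form 2x2 normal equations (|d| = |s| = 1).
            ds = np.sum(d * s, axis=-1)                            # cosine between the two colors
            d_dot_I = np.sum(d * image, axis=-1)
            s_dot_I = np.sum(image * s, axis=-1)
            det = 1.0 - ds ** 2 + 1e-8
            alpha = np.clip((d_dot_I - ds * s_dot_I) / det, 0.0, None)  # diffuse magnitude
            beta = np.clip((s_dot_I - ds * d_dot_I) / det, 0.0, None)   # specular magnitude

            diffuse = alpha[..., None] * d
            specular = beta[..., None] * s
            return diffuse, specular

    In practice, it is the separated specular layer that yields the sharper mesostructure detail in the normal map mentioned above, since specular reflection is not blurred by subsurface scattering.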

References:


    • Academy of Motion Picture Arts and Sciences. 2019. Scientific & Technical Awards. https://www.oscars.org/sci-tech/ceremonies/2019
    • Abhijeet Ghosh, Graham Fyffe, Borom Tunwattanapong, Jay Busch, Xueming Yu, and Paul Debevec. 2011. Multiview face capture using polarized spherical gradient illumination. ACM Transactions on Graphics 30, 6, Article 129.
    • Christos Kampouris, Stefanos Zafeiriou, and Abhijeet Ghosh. 2018. Diffuse-specular separation using binary spherical gradient illumination. In Proceedings of the 2018 Eurographics Symposium on Rendering: Experimental Ideas & Implementations (EGSR 2018), Karlsruhe, Germany. 1–4.
    • Wan-Chun Ma, Tim Hawkins, Pieter Peers, Charles-Felix Chabert, Malte Weiss, and Paul Debevec. 2007. Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination. In Proceedings of the 18th Eurographics Conference on Rendering Techniques. Eurographics Association, 183–194.
    • Satya P. Mallick, Todd E. Zickler, David J. Kriegman, and Peter N. Belhumeur. 2005. Beyond Lambert: Reconstructing Specular Surfaces Using Color. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Keyword(s):



Additional Images:


Acknowledgements:


    This work was supported by an EPSRC Early Career Fellowship to Abhijeet Ghosh (EP/N006259/1), and by a Google Faculty Fellowship and the EPSRC Fellowship DEFORM (EP/S010203/1) to Stefanos Zafeiriou.

