“High-Fidelity Facial Reconstruction From a Single Photo Using Photo-Realistic Rendering” by Dias, Roche, Fernandes and Orvalho


  • ©Mariana Dias, Alexis Roche, Margarida Fernandes, and Verónica Orvalho

Conference:


  • SIGGRAPH 2022

Title:


    High-Fidelity Facial Reconstruction From a Single Photo Using Photo-Realistic Rendering

Program Title:


    Labs Demo

Description:


    We propose a fully automated method for realistic 3D face reconstruction from a single frontal photo; it produces a high-resolution head mesh and a diffuse map. The photo is fed to a convolutional neural network that estimates the weights of a morphable model, producing an initial head shape that is then refined through landmark-guided deformation. The method has two key features: 1) the network is trained exclusively on synthetic photos that are photo-realistic enough for the learned shape-predictive features to transfer to real images, so no real facial photos with corresponding 3D scans are needed for training; 2) the statistical errors of the landmark detector are incorporated into the reconstruction for optimal accuracy. Although the method relies on very little real data, we show that it quickly and robustly produces plausible face reconstructions from real photos.
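    The pipeline described above can be illustrated with a minimal sketch. All names, dimensions, and variances below are hypothetical stand-ins (the paper does not specify them): a random linear morphable model replaces the learned one, a random weight vector replaces the CNN's prediction, and the landmark-guided deformation is reduced to a precision-weighted blend that pulls landmark vertices toward detections, trusting low-variance landmarks more.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical morphable model: a mean shape plus a linear basis
    # (stand-in for a PCA model in the spirit of Blanz & Vetter [1]).
    n_vertices, n_coeffs = 500, 40
    mean_shape = rng.normal(size=(n_vertices, 3))
    basis = rng.normal(size=(n_coeffs, n_vertices, 3)) * 0.01

    def reconstruct(weights):
        """Initial head shape from predicted morphable-model weights."""
        return mean_shape + np.tensordot(weights, basis, axes=1)

    # Stand-in for the CNN output: one weight vector per input photo.
    weights = rng.normal(size=n_coeffs)
    shape = reconstruct(weights)

    # Landmark-guided refinement: pull a few mesh vertices toward detected
    # landmark positions, down-weighting landmarks whose detector has a
    # larger statistical error (per-landmark variance, assumed known).
    landmark_idx = np.array([10, 42, 99, 250])
    detected = shape[landmark_idx] + rng.normal(scale=0.05, size=(4, 3))
    error_var = np.array([0.01, 0.01, 0.04, 0.09])   # detector variance
    prior_var = 0.02                                 # model-prediction variance

    # Precision-weighted compromise between model prediction and detection.
    gain = prior_var / (prior_var + error_var)
    refined = shape.copy()
    refined[landmark_idx] += gain[:, None] * (detected - shape[landmark_idx])
    ```

    With this weighting, a landmark whose detector variance is small moves the mesh almost all the way to the detection, while an unreliable landmark barely perturbs the model's prediction; the paper's deformation operates on the same principle over the full mesh.
    
    
    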

References:


    1. V. Blanz and T. Vetter. 1999. A morphable model for the synthesis of 3D faces. In Proc. SIGGRAPH’99. ACM Press/Addison-Wesley Publishing Co., USA, 187–194.
    2. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In Proc. CVPR’09. 248–255.
    3. G. Huang, Z. Liu, L. Van Der Maaten, and K.Q. Weinberger. 2017. Densely connected convolutional networks. In Proc. CVPR’17. 4700–4708.
    4. V. Kazemi and J. Sullivan. 2014. One millisecond face alignment with an ensemble of regression trees. In Proc. CVPR’14. 1867–1874.
    5. R. Li, K. Bladin, Y. Zhao, C. Chinara, O. Ingraham, P. Xiang, X. Ren, P. Prasad, B. Kishore, J. Xing, and H. Li. 2020. Learning formation of physically-based face attributes. In Proc. CVPR’20. 3410–3419.
    6. R. Wang, C.-F. Chen, H. Peng, X. Liu, O. Liu, and X. Li. 2019. Digital Twin: Acquiring High-Fidelity 3D Avatar from a Single Image. Technical Report. arxiv:1912.03455
    7. E. Wood, T. Baltrušaitis, C. Hewitt, S. Dziadzio, T.J. Cashman, and J. Shotton. 2021. Fake It Till You Make It: Face analysis in the wild using synthetic data alone. In Proc. ICCV’21. 3681–3691.
