“Learning an animatable detailed 3D face model from in-the-wild images” by Feng, Feng, Black, and Bolkart

  • © Yao Feng, Haiwen Feng, Michael J. Black, and Timo Bolkart

Conference:


Type:


Title:

    Learning an animatable detailed 3D face model from in-the-wild images

Presenter(s)/Author(s):

    Yao Feng, Haiwen Feng, Michael J. Black, and Timo Bolkart


Abstract:


    While current monocular 3D face reconstruction methods can recover fine geometric details, they suffer from several limitations. Some methods produce faces that cannot be realistically animated because they do not model how wrinkles vary with expression. Other methods are trained on high-quality face scans and do not generalize well to in-the-wild images. We present the first approach that regresses 3D face shape and animatable details that are specific to an individual but change with expression. Our model, DECA (Detailed Expression Capture and Animation), is trained to robustly produce a UV displacement map from a low-dimensional latent representation that consists of person-specific detail parameters and generic expression parameters, while a regressor is trained to predict detail, shape, albedo, expression, pose, and illumination parameters from a single image. To enable this, we introduce a novel detail-consistency loss that disentangles person-specific details from expression-dependent wrinkles. This disentanglement allows us to synthesize realistic person-specific wrinkles by controlling expression parameters while keeping person-specific details unchanged. DECA is learned from in-the-wild images with no paired 3D supervision and achieves state-of-the-art shape reconstruction accuracy on two benchmarks. Qualitative results on in-the-wild data demonstrate DECA’s robustness and its ability to disentangle identity- and expression-dependent details, enabling animation of reconstructed faces. The model and code are publicly available at https://deca.is.tue.mpg.de.
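The detail-consistency idea from the abstract can be illustrated with a toy numerical sketch. This is not the authors' implementation: the decoder, code dimensions, and values below are hypothetical stand-ins; the point is only the swap mechanic, where exchanging detail codes between two images of the same person should leave the decoded displacement unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # stand-in weights for a trained detail decoder

def detail_decoder(delta, psi):
    # Toy analogue of DECA's detail decoder: maps a person-specific detail
    # code delta and an expression code psi to a flattened "UV displacement map".
    return np.tanh(np.concatenate([delta, psi]) @ W)

# Codes regressed from two images of the *same* person (hypothetical values):
delta_a, psi_a = rng.standard_normal(4), rng.standard_normal(4)
delta_b, psi_b = rng.standard_normal(4), rng.standard_normal(4)

# Detail-consistency loss: decoding image a's expression with image b's detail
# code should change nothing if details are truly person-specific, so penalize
# the difference between the original and the detail-swapped displacement.
disp_a = detail_decoder(delta_a, psi_a)
disp_swap = detail_decoder(delta_b, psi_a)
loss_detail = np.abs(disp_a - disp_swap).mean()  # L1 penalty, a simple choice
```

Minimizing this loss over pairs of images of the same identity pushes the regressed detail codes toward agreement, which is one way to read the paper's claimed disentanglement of person-specific details from expression-dependent wrinkles.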


