HQ3DAvatar: High Quality Implicit 3D Head Avatar
Abstract:
We present a novel method for rendering photorealistic human head avatars. Our method uses an implicitly learned canonical space, constrained with optical flow, within a multiresolution hash encoding framework. Our approach outperforms related methods and excels at reconstructing regions with complex geometric detail, e.g., the mouth interior and hair.
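The abstract only outlines the approach at a high level. As an illustrative sketch (not the authors' code), the snippet below shows the two ingredients named above in miniature: a toy multiresolution hash encoding feeding a small MLP that models a canonical radiance field, and a hypothetical flow-consistency loss that encourages corresponding points in neighboring frames to map to the same canonical location. All class names, layer sizes, the hashing scheme, and the loss form are assumptions made for this example; the paper's actual architecture and losses may differ.

    import torch
    import torch.nn as nn

    class HashEncoding(nn.Module):
        # Toy multiresolution hash grid: one hashed feature per level, no
        # trilinear interpolation (real implementations blend 8 voxel corners).
        def __init__(self, n_levels=4, table_size=2**14, feat_dim=2,
                     base_res=16, growth=2.0):
            super().__init__()
            self.tables = nn.ParameterList(
                [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim))
                 for _ in range(n_levels)])
            self.resolutions = [int(base_res * growth ** i) for i in range(n_levels)]
            self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

        def forward(self, x):  # x: (N, 3), coordinates in [0, 1]
            feats = []
            for table, res in zip(self.tables, self.resolutions):
                idx = (x * res).long()                       # integer voxel index
                h = (idx * self.primes).sum(-1) % table.shape[0]
                feats.append(table[h])
            return torch.cat(feats, dim=-1)                  # (N, n_levels * feat_dim)

    class CanonicalField(nn.Module):
        # Canonical-space radiance field: hash features -> (density, RGB).
        def __init__(self):
            super().__init__()
            self.enc = HashEncoding()
            self.mlp = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))

        def forward(self, x_canonical):
            out = self.mlp(self.enc(x_canonical))
            return out[..., :1], torch.sigmoid(out[..., 1:])  # density, color

    # Hypothetical per-frame deformation: (observed point, frame code) -> offset
    # into canonical space. An optical-flow term could require points linked by
    # flow in neighboring frames to land at the same canonical location.
    deform = nn.Sequential(nn.Linear(3 + 8, 64), nn.ReLU(), nn.Linear(64, 3))

    def flow_consistency_loss(x_t, x_t1, code_t, code_t1):
        canon_t = x_t + deform(torch.cat([x_t, code_t.expand(len(x_t), -1)], -1))
        canon_t1 = x_t1 + deform(torch.cat([x_t1, code_t1.expand(len(x_t1), -1)], -1))
        return (canon_t - canon_t1).abs().mean()

    if __name__ == "__main__":
        field = CanonicalField()
        pts = torch.rand(1024, 3)
        sigma, rgb = field(pts)
        print(sigma.shape, rgb.shape)   # torch.Size([1024, 1]) torch.Size([1024, 3])
        x_t, x_t1 = torch.rand(256, 3), torch.rand(256, 3)
        c_t, c_t1 = torch.randn(1, 8), torch.randn(1, 8)
        print(flow_consistency_loss(x_t, x_t1, c_t, c_t1))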
References:
[1]
Rameen Abdal, Hsin-Ying Lee, Peihao Zhu, Menglei Chai, Aliaksandr Siarohin, Peter Wonka, and Sergey Tulyakov. 2023. 3DAvatarGAN: Bridging domains for personalized editable avatars. CoRR abs/2301.02700 (2023).
[2]
Matthew Amodio, David van Dijk, Ruth Montgomery, Guy Wolf, and Smita Krishnaswamy. 2019. Out-of-sample Extrapolation with Neuron Editing. arXiv:q-bio.QM/1805.12198 (2019).
[3]
ShahRukh Athar, Zexiang Xu, Kalyan Sunkavalli, Eli Shechtman, and Zhixin Shu. 2022. RigNeRF: Fully controllable neural 3D portraits. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 20332–20341.
[4]
Yunpeng Bai, Yanbo Fan, Xuan Wang, Yong Zhang, Jingxiang Sun, Chun Yuan, and Ying Shan. 2022. High-fidelity facial avatar reconstruction from monocular video with generative priors. CoRR abs/2211.15064 (2022).
[5]
Alexander W. Bergman, Petr Kellnhofer, Yifan Wang, Eric R. Chan, David B. Lindell, and Gordon Wetzstein. 2022. Generative neural articulated radiance fields. In Conference on Advances in Neural Information Processing Systems.
[6]
Shrisha Bharadwaj, Yufeng Zheng, Otmar Hilliges, Michael J. Black, and Victoria Fernandez Abrevaya. 2023. FLARE: Fast learning of animatable and relightable mesh avatars. ACM Trans. Graph. 42 (Dec. 2023), 15. DOI:
[7]
Mallikarjun B. R., Ayush Tewari, Hans-Peter Seidel, Mohamed Elgharib, and Christian Theobalt. 2021. Learning complete 3D morphable face models from images and videos. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 3361–3371.
[8]
Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason M. Saragih, Tomas Simon, and Yaser Sheikh. 2021. Real-time 3D neural facial animation from binocular video. ACM Trans. Graph. 40, 4 (2021), 87:1–87:17.
[9]
Chen Cao, Tomas Simon, Jin Kyu Kim, Gabe Schwartz, Michael Zollhöfer, Shunsuke Saito, Stephen Lombardi, Shih-En Wei, Danielle Belko, Shoou-I Yu, Yaser Sheikh, and Jason M. Saragih. 2022. Authentic volumetric avatars from a phone scan. ACM Trans. Graph. 41, 4 (2022), 163:1–163:19.
[10]
Chen Cao, Yanlin Weng, Shun Zhou, Yiying Tong, and Kun Zhou. 2014. FaceWarehouse: A 3D facial expression database for visual computing. IEEE Trans. Vis. Comput. Graph. 20, 3 (2014), 413–425.
[11]
Chen Cao, Hongzhi Wu, Yanlin Weng, Tianjia Shao, and Kun Zhou. 2016. Real-time facial animation with image-based dynamic avatars. ACM Trans. Graph. 35, 4 (2016), 126:1–126:12.
[12]
Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J. Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. 2022. Efficient geometry-aware 3D generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 16102–16112.
[13]
Eric R. Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. 2021. Pi-GAN: Periodic implicit generative adversarial networks for 3D-aware image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 5799–5809.
[14]
Prashanth Chandran, Sebastian Winberg, Gaspard Zoss, Jérémy Riviere, Markus H. Gross, Paulo F. U. Gotardo, and Derek Bradley. 2021. Rendering with style: Combining traditional and neural approaches for high-quality face rendering. ACM Trans. Graph. 40, 6 (2021), 223:1–223:14.
[15]
Lele Chen, Chen Cao, Fernando De la Torre, Jason M. Saragih, Chenliang Xu, and Yaser Sheikh. 2021. High-fidelity face tracking for AR/VR via deep lighting adaptation. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 13059–13069.
[16]
Yu Deng, Jiaolong Yang, Jianfeng Xiang, and Xin Tong. 2022. GRAM: Generative radiance manifolds for 3D-aware image generation. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 10663–10673.
[17]
Hao-Bin Duan, Miao Wang, Jin-Chuan Shi, Xu-Chuan Chen, and Yan-Pei Cao. 2023. BakedAvatar: Baking neural fields for real-time head avatar synthesis. ACM Trans. Graph. 42, 6, Article 225 (Sep. 2023), 14 pages. DOI:
[18]
Bernhard Egger, William A. P. Smith, Ayush Tewari, Stefanie Wuhrer, Michael Zollhöfer, Thabo Beeler, Florian Bernard, Timo Bolkart, Adam Kortylewski, Sami Romdhani, Christian Theobalt, Volker Blanz, and Thomas Vetter. 2020. 3D morphable face models – Past, present, and future. ACM Trans. Graph. 39, 5 (2020), 157:1–157:38.
[19]
Mohamed Elgharib, Mohit Mendiratta, Justus Thies, Matthias Nießner, Hans-Peter Seidel, Ayush Tewari, Vladislav Golyanik, and Christian Theobalt. 2020. Egocentric videoconferencing. ACM Trans. Graph. 39, 6 (2020), 268:1–268:16.
[20]
Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. 2022. Plenoxels: Radiance fields without neural networks. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 5491–5500.
[21]
Guy Gafni, Justus Thies, Michael Zollhöfer, and Matthias Nießner. 2021. Dynamic neural radiance fields for monocular 4D facial avatar reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 8649–8658.
[22]
Xuan Gao, Chenglai Zhong, Jun Xiang, Yang Hong, Yudong Guo, and Juyong Zhang. 2022. Reconstructing personalized semantic facial NeRF models from monocular video. ACM Trans. Graph. 41, 6 (2022), 200:1–200:12.
[23]
Baris Gecer, Stylianos Ploumpis, Irene Kotsia, and Stefanos Zafeiriou. 2019. GANFIT: Generative adversarial network fitting for high fidelity 3D face reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 1155–1164.
[24]
Thomas Gerig, Andreas Morel-Forster, Clemens Blumer, Bernhard Egger, Marcel Lüthi, Sandro Schönborn, and Thomas Vetter. 2018. Morphable face models – An open framework. In Conference on Automatic Face & Gesture Recognition. IEEE Computer Society, 75–82.
[25]
Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. 2022. StyleNeRF: A style-based 3D aware generator for high-resolution image synthesis. In International Conference on Learning Representations. OpenReview.net.
[26]
Yang Hong, Bo Peng, Haiyao Xiao, Ligang Liu, and Juyong Zhang. 2022. HeadNeRF: A realtime NeRF-based parametric head model. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 20342–20352.
[27]
Alexandru Eugen Ichim, Sofien Bouaziz, and Mark Pauly. 2015. Dynamic 3D avatar creation from hand-held video input. ACM Trans. Graph. 34, 4 (2015), 45:1–45:14.
[28]
Yoni Kasten, Dolev Ofri, Oliver Wang, and Tali Dekel. 2021. Layered neural atlases for consistent video editing. ACM Trans. Graph. 40, 6 (2021), 210:1–210:12.
[29]
Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt. 2018. Deep video portraits. ACM Trans. Graph. 37, 4 (2018), 163.
[30]
Alexandros Lattas, Stylianos Moschoglou, Stylianos Ploumpis, Baris Gecer, Abhijeet Ghosh, and Stefanos Zafeiriou. 2022. AvatarMe++: Facial shape and BRDF inference with photorealistic rendering-aware GANs. IEEE Trans. Pattern Anal. Mach. Intell. 44, 12 (2022), 9269–9284.
[31]
Ruilong Li, Julian Tanke, Minh Vo, Michael Zollhöfer, Jürgen Gall, Angjoo Kanazawa, and Christoph Lassner. 2022b. TAVA: Template-free animatable volumetric actors. In European Conference on Computer Vision (Lecture Notes in Computer Science), Vol. 13692. Springer, 419–436.
[32]
Tianye Li, Timo Bolkart, Michael J. Black, Hao Li, and Javier Romero. 2017. Learning a model of facial shape and expression from 4D scans. ACM Trans. Graph. 36, 6 (2017), 194:1–194:17.
[33]
Tianye Li, Mira Slavcheva, Michael Zollhöfer, Simon Green, Christoph Lassner, Changil Kim, Tanner Schmidt, Steven Lovegrove, Michael Goesele, Richard A. Newcombe, and Zhaoyang Lv. 2022a. Neural 3D video synthesis from multi-view video. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 5511–5521.
[34]
Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. 2021. Neural scene flow fields for space-time view synthesis of dynamic scenes. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 6498–6508.
[35]
Jiangke Lin, Yi Yuan, Tianjia Shao, and Kun Zhou. 2020. Towards high-fidelity 3D face reconstruction from in-the-wild images using graph convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 5890–5899.
[36]
Shanchuan Lin, Andrey Ryabtsev, Soumyadip Sengupta, Brian L. Curless, Steven M. Seitz, and Ira Kemelmacher-Shlizerman. 2021. Real-time high-resolution background matting. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 8762–8771.
[37]
Stephen Lombardi, Jason M. Saragih, Tomas Simon, and Yaser Sheikh. 2018. Deep appearance models for face rendering. ACM Trans. Graph. 37, 4 (2018), 68.
[38]
Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. 2019. Neural volumes: Learning dynamic renderable volumes from images. ACM Trans. Graph. 38, 4 (2019), 65:1–65:14.
[39]
Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhöfer, Yaser Sheikh, and Jason M. Saragih. 2021. Mixture of volumetric primitives for efficient neural rendering. ACM Trans. Graph. 40, 4 (2021), 59:1–59:13.
[40]
Shugao Ma, Tomas Simon, Jason M. Saragih, Dawei Wang, Yuecheng Li, Fernando De la Torre, and Yaser Sheikh. 2021. Pixel codec avatars. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 64–73.
[41]
Moustafa Meshry, Saksham Suri, Larry S. Davis, and Abhinav Shrivastava. 2021. Learned spatial representations for few-shot talking-head synthesis. In International Conference on Computer Vision. IEEE, 13809–13818.
[42]
Metashape. 2020. Agisoft Metashape (Version 1.8.4) (Software). Retrieved from https://www.agisoft.com/downloads/installer/
[43]
Marko Mihajlovic, Aayush Bansal, Michael Zollhöfer, Siyu Tang, and Shunsuke Saito. 2022. KeypointNeRF: Generalizing image-based volumetric avatars using relative spatial encoding of keypoints. In European Conference on Computer Vision (Lecture Notes in Computer Science), Vol. 13675. Springer, 179–197.
[44]
Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. 2020. NeRF: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 65, 1 (2020), 99–106.
[45]
Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. 2022. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph. 41, 4 (2022), 102:1–102:15.
[46]
Koki Nagano, Jaewoo Seo, Jun Xing, Lingyu Wei, Zimo Li, Shunsuke Saito, Aviral Agarwal, Jens Fursund, and Hao Li. 2018. paGAN: Real-time avatars using dynamic textures. ACM Trans. Graph. 37, 6 (2018), 258.
[47]
Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. 2022. StyleSDF: High-resolution 3D-consistent image and geometry generation. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 13493–13503.
[48]
Jeong Joon Park, Peter Florence, Julian Straub, Richard A. Newcombe, and Steven Lovegrove. 2019. DeepSDF: Learning continuous signed distance functions for shape representation. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 165–174.
[49]
Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B. Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. 2021a. Nerfies: Deformable neural radiance fields. In International Conference on Computer Vision. IEEE, 5845–5854.
[50]
Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B. Goldman, Ricardo Martin-Brualla, and Steven M. Seitz. 2021b. HyperNeRF: A higher-dimensional representation for topologically varying neural radiance fields. ACM Trans. Graph. 40, 6 (2021), 238:1–238:12.
[51]
Omkar M. Parkhi, Andrea Vedaldi, and Andrew Zisserman. 2015. Deep face recognition. In British Machine Vision Conference.
[52]
Amit Raj, Michael Zollhöfer, Tomas Simon, Jason M. Saragih, Shunsuke Saito, James Hays, and Stephen Lombardi. 2021. Pixel-aligned volumetric avatars. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 11733–11742.
[53]
Eduard Ramon, Gil Triginer, Janna Escur, Albert Pumarola, Jaime Garcia Giraldez, Xavier Giró-i-Nieto, and Francesc Moreno-Noguer. 2021. H3D-Net: Few-shot high-fidelity 3D head reconstruction. In International Conference on Computer Vision. IEEE, 5600–5609.
[54]
Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma, Xiaokang Yang, and Stefanos Zafeiriou. 2022. Facial geometric detail recovery via implicit representation. CoRR abs/2203.09692 (2022).
[55]
Gil Shamai, Ron Slossberg, and Ron Kimmel. 2019. Synthesizing facial photometries and corresponding geometries using generative adversarial networks. CoRR abs/1901.06551 (2019).
[56]
Keqiang Sun, Shangzhe Wu, Zhaoyang Huang, Ning Zhang, Quan Wang, and Hongsheng Li. 2022. Controllable 3D face synthesis with conditional generative occupancy fields. CoRR abs/2206.08361 (2022).
[57]
Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis, Charles T. Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire, and Sanja Fidler. 2021. Neural geometric level of detail: Real-time rendering with implicit 3D shapes. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 11358–11367.
[58]
Jiaxiang Tang. 2022. Torch-ngp: A PyTorch Implementation of Instant-ngp. Retrieved from https://github.com/ashawkey/torch-ngp
[59]
Ayush Tewari, Mohamed Elgharib, Mallikarjun B. R., Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, and Christian Theobalt. 2020. PIE: Portrait image embedding for semantic control. ACM Trans. Graph. 39, 6 (2020), 223:1–223:14.
[60]
Ayush Tewari, Justus Thies, Ben Mildenhall, Pratul P. Srinivasan, Edgar Tretschk, Yifan Wang, Christoph Lassner, Vincent Sitzmann, Ricardo Martin-Brualla, Stephen Lombardi, Tomas Simon, Christian Theobalt, Matthias Nießner, Jonathan T. Barron, Gordon Wetzstein, Michael Zollhöfer, and Vladislav Golyanik. 2022. Advances in neural rendering. Comput. Graph. Forum 41, 2 (2022), 703–735.
[61]
Ayush Tewari, Michael Zollhöfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick Pérez, and Christian Theobalt. 2018. Self-supervised multi-level face model learning for monocular reconstruction at over 250 Hz. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 2549–2559.
[62]
Justus Thies, Michael Zollhöfer, and Matthias Nießner. 2019a. Deferred neural rendering: Image synthesis using neural textures. ACM Trans. Graph. 38, 4 (2019), 66:1–66:12.
[63]
Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. 2016. Face2Face: Real-time face capture and reenactment of RGB videos. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2387–2395.
[64]
Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. 2019b. Face2Face: Real-time face capture and reenactment of RGB videos. Commun. ACM 62, 1 (2019), 96–104.
[65]
Luan Tran, Feng Liu, and Xiaoming Liu. 2019. Towards high-fidelity nonlinear 3D face morphable model. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 1126–1135.
[66]
Daoye Wang, Prashanth Chandran, Gaspard Zoss, Derek Bradley, and Paulo F. U. Gotardo. 2022a. MoRF: Morphable radiance fields for multiview neural head modeling. In SIGGRAPH. ACM, 55:1–55:9.
[67]
Lizhen Wang, Xiaochen Zhao, Jingxiang Sun, Yuxiang Zhang, Hongwen Zhang, Tao Yu, and Yebin Liu. 2023. StyleAvatar: Real-time photo-realistic portrait avatar from a single video. In ACM SIGGRAPH Conference (SIGGRAPH '23). Association for Computing Machinery, New York, NY, Article 67, 10 pages. DOI:
[68]
Ting-Chun Wang, Arun Mallya, and Ming-Yu Liu. 2021b. One-shot free-view neural talking-head synthesis for video conferencing. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 10039–10049.
[69]
Ziyan Wang, Timur M. Bagautdinov, Stephen Lombardi, Tomas Simon, Jason M. Saragih, Jessica K. Hodgins, and Michael Zollhöfer. 2021a. Learning compositional radiance fields of dynamic human heads. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 5704–5713.
[70]
Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. 2004. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 13, 4 (2004), 600–612.
[71]
Ziyan Wang, Giljoo Nam, Tuur Stuyck, Stephen Lombardi, Michael Zollhöfer, Jessica K. Hodgins, and Christoph Lassner. 2022b. HVH: Learning a hybrid neural volumetric representation for dynamic hair performance capture. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 6133–6144.
[72]
Jianfeng Xiang, Jiaolong Yang, Yu Deng, and Xin Tong. 2022. GRAM-HD: 3D-consistent image generation at high resolution with generative radiance manifolds. CoRR abs/2206.07255 (2022).
[73]
Yuelang Xu, Lizhen Wang, Xiaochen Zhao, Hongwen Zhang, and Yebin Liu. 2023. AvatarMAV: Fast 3D head avatar reconstruction using motion-aware neural voxels. In ACM SIGGRAPH Conference (SIGGRAPH '23).
[74]
Shugo Yamaguchi, Shunsuke Saito, Koki Nagano, Yajie Zhao, Weikai Chen, Kyle Olszewski, Shigeo Morishima, and Hao Li. 2018. High-fidelity facial reflectance and geometry inference from an unconstrained image. ACM Trans. Graph. 37, 4 (2018), 162.
[75]
Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. 2021. PlenOctrees for real-time rendering of neural radiance fields. In International Conference on Computer Vision. IEEE, 5732–5741.
[76]
Hao Zhang, Tianyuan Dai, Yu-Wing Tai, and Chi-Keung Tang. 2022. FLNeRF: 3D facial landmarks estimation in neural radiance fields. CoRR abs/2211.11202 (2022).
[77]
Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE Computer Society, 586–595.
[78]
Xiaochen Zhao, Lizhen Wang, Jingxiang Sun, Hongwen Zhang, Jinli Suo, and Yebin Liu. 2023. HAvatar: High-fidelity head avatar via facial model conditioned neural radiance field. ACM Trans. Graph. 43, 1, Article 6 (Nov. 2023), 16 pages. DOI:
[79]
Yufeng Zheng, Victoria Fernández Abrevaya, Marcel C. Bühler, Xu Chen, Michael J. Black, and Otmar Hilliges. 2022. I M avatar: Implicit morphable head avatars from videos. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 13535–13545.
[80]
Yufeng Zheng, Wang Yifan, Gordon Wetzstein, Michael J. Black, and Otmar Hilliges. 2023. PointAvatar: Deformable point-based head avatars from videos. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 21057–21067.
[81]
Wojciech Zielonka, Timo Bolkart, and Justus Thies. 2022. Towards metrical reconstruction of human faces. In European Conference on Computer Vision.
[82]
Wojciech Zielonka, Timo Bolkart, and Justus Thies. 2023. Instant volumetric head avatars. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 4574–4584.
[83]
Michael Zollhöfer, Justus Thies, Pablo Garrido, Derek Bradley, Thabo Beeler, Patrick Pérez, Marc Stamminger, Matthias Nießner, and Christian Theobalt. 2018. State of the art on monocular 3D face reconstruction, tracking, and applications. Comput. Graph. Forum 37, 2 (2018), 523–550.