“Real-time deep dynamic characters” by Habermann, Liu, Xu, Zollhoefer, Pons-Moll, et al. …

  • © Marc Habermann, Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, and Christian Theobalt

Conference:


Type:


Title:

    Real-time deep dynamic characters

Presenter(s)/Author(s):

    Marc Habermann, Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, and Christian Theobalt

Abstract:


    We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance learned in a new weakly supervised way from multi-view imagery. In contrast to previous work, our controllable 3D character displays dynamics, e.g., the swing of the skirt, dependent on skeletal body motion in an efficient data-driven way, without requiring complex physics simulation. Our character model also features a learned dynamic texture model that accounts for photo-realistic motion-dependent appearance details, as well as view-dependent lighting effects. During training, we do not need to resort to difficult dynamic 3D capture of the human; instead, we can train our model entirely from multi-view video in a weakly supervised manner. To this end, we propose a parametric and differentiable character representation which allows us to model coarse and fine dynamic deformations, e.g., garment wrinkles, as explicit spacetime coherent mesh geometry that is augmented with high-quality dynamic textures dependent on motion and viewpoint. As input to the model, only an arbitrary 3D skeleton motion is required, making it directly compatible with the established 3D animation pipeline. We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing, including dynamics, and a neural generative dynamic texture model creates corresponding dynamic texture maps. We show that by merely providing new skeletal motions, our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches, and even in real time.
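
    To make the flow described in the abstract concrete, the sketch below traces the inference path only: a window of skeletal motion drives a graph convolutional network that predicts per-vertex deformations of a template mesh, while a second network generates a motion- and view-dependent dynamic texture map. This is a minimal, illustrative PyTorch sketch, not the authors' implementation; the module names, tensor shapes, the simple normalized-adjacency graph convolution, and the choice to predict displacements directly on the rest template (skipping skinning/posing) are all assumptions made for this example.

        # Illustrative sketch only: skeletal motion -> GCN deformation network ->
        # per-vertex displacements, plus a motion- and view-conditioned dynamic
        # texture generator. All names and shapes are assumptions, not the paper's.
        import torch
        import torch.nn as nn


        class GraphConv(nn.Module):
            """One graph convolution: average neighbor features via a row-normalized
            adjacency matrix, then apply a shared linear map."""
            def __init__(self, adj, in_dim, out_dim):
                super().__init__()
                deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
                self.register_buffer("adj_norm", adj / deg)          # (V, V)
                self.linear = nn.Linear(in_dim, out_dim)

            def forward(self, x):                                    # x: (B, V, in_dim)
                x = torch.einsum("vw,bwc->bvc", self.adj_norm, x)    # neighbor aggregation
                return torch.relu(self.linear(x))


        class DeformationNet(nn.Module):
            """Maps a window of skeletal poses to 3D displacements for every template vertex."""
            def __init__(self, adj, num_vertices, pose_dim, hidden=64):
                super().__init__()
                self.lift = nn.Linear(pose_dim, num_vertices * 8)    # distribute motion onto the graph
                self.gcn = nn.Sequential(GraphConv(adj, 8, hidden),
                                         GraphConv(adj, hidden, hidden))
                self.head = nn.Linear(hidden, 3)                     # per-vertex 3D offset

            def forward(self, pose_window):                          # (B, pose_dim)
                feat = self.lift(pose_window).view(pose_window.shape[0], -1, 8)
                return self.head(self.gcn(feat))                     # (B, V, 3)


        class DynamicTextureNet(nn.Module):
            """Generates a texture map conditioned on skeletal motion and viewing direction."""
            def __init__(self, cond_dim, tex_res=256):
                super().__init__()
                self.tex_res = tex_res
                self.fc = nn.Linear(cond_dim, 32 * (tex_res // 8) ** 2)
                self.decoder = nn.Sequential(
                    nn.Upsample(scale_factor=2), nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                    nn.Upsample(scale_factor=2), nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
                    nn.Upsample(scale_factor=2), nn.Conv2d(8, 3, 3, padding=1), nn.Sigmoid(),
                )

            def forward(self, pose_window, view_dir):
                cond = torch.cat([pose_window, view_dir], dim=-1)    # motion + viewpoint conditioning
                x = self.fc(cond).view(-1, 32, self.tex_res // 8, self.tex_res // 8)
                return self.decoder(x)                               # (B, 3, tex_res, tex_res)


        if __name__ == "__main__":
            V, pose_dim = 500, 3 * 24 * 3           # toy template size; 3-frame window of 24 joint rotations
            adj = (torch.rand(V, V) < 0.01).float()
            adj = ((adj + adj.t()) > 0).float()     # toy symmetric mesh connectivity
            template = torch.rand(1, V, 3)          # rest-pose template vertices
            deform_net = DeformationNet(adj, V, pose_dim)
            texture_net = DynamicTextureNet(cond_dim=pose_dim + 3)

            pose_window = torch.rand(1, pose_dim)   # driving skeletal motion
            view_dir = torch.rand(1, 3)             # camera viewing direction
            deformed = template + deform_net(pose_window)    # motion-dependent surface deformation
            texture = texture_net(pose_window, view_dir)     # corresponding dynamic texture map
            print(deformed.shape, texture.shape)    # (1, 500, 3) and (1, 3, 256, 256)

    In the full system described above, the deformed and textured character would additionally be posed by the input skeleton and rendered differentiably, which is what enables weakly supervised training from multi-view video; both steps are omitted here for brevity.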



ACM Digital Library Publication:



Overview Page: