“MATch: Differentiable Material Graphs for Procedural Material Capture” by Shi, Li, Hašan, Sunkavalli, Boubekeur, et al.
Session/Category Title: Differentiable Graphics
Abstract:
We present MATch, a method to automatically convert photographs of material samples into production-grade procedural material models. At the core of MATch is a new library DiffMat that provides differentiable building blocks for constructing procedural materials, and automatic translation of large-scale procedural models, with hundreds to thousands of node parameters, into differentiable node graphs. Combining these translated node graphs with a rendering layer yields an end-to-end differentiable pipeline that maps node graph parameters to rendered images. This facilitates the use of gradient-based optimization to estimate the parameters such that the resulting material, when rendered, matches the target image appearance, as quantified by a style transfer loss. In addition, we propose a deep neural feature-based graph selection and parameter initialization method that efficiently scales to a large number of procedural graphs. We evaluate our method on both rendered synthetic materials and real materials captured as flash photographs. We demonstrate that MATch can reconstruct more accurate, general, and complex procedural materials compared to the state-of-the-art. Moreover, by producing a procedural output, we unlock capabilities such as constructing arbitrary-resolution material maps and parametrically editing the material appearance.
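The optimization loop described above can be sketched with a deliberately tiny stand-in: a hypothetical two-parameter "procedural graph" (contrast, offset) in place of a full differentiable node graph, and a Gram matrix of raw pixel rows in place of the deep-feature style-transfer loss. Everything here is an illustrative assumption, not the paper's implementation; MATch differentiates real Substance graphs through its DiffMat library and backpropagation, whereas this sketch uses a finite-difference gradient to stay dependency-free.

```python
import numpy as np

# Toy stand-in for a differentiable procedural material pipeline:
# parameters -> rendered image -> appearance loss -> gradient step.

rng = np.random.default_rng(0)
pattern = rng.standard_normal((8, 8))  # fixed "noise node" output

def render(params):
    # Trivial stand-in node graph: contrast * pattern + offset.
    contrast, offset = params
    return contrast * pattern + offset

def gram(img):
    # Gram matrix of pixel rows, a crude proxy for the Gram matrices
    # of deep VGG features used in the actual style-transfer loss.
    f = img.reshape(img.shape[0], -1)
    return f @ f.T / f.size

def loss(params, target_g, target_mean):
    img = render(params)
    style = np.sum((gram(img) - target_g) ** 2)     # appearance term
    return style + (img.mean() - target_mean) ** 2  # pins the offset

def grad(params, *args, eps=1e-5):
    # Central finite differences; a real pipeline uses autodiff.
    g = np.zeros_like(params)
    for i in range(params.size):
        hi, lo = params.copy(), params.copy()
        hi[i] += eps
        lo[i] -= eps
        g[i] = (loss(hi, *args) - loss(lo, *args)) / (2 * eps)
    return g

# Synthesize a target from known parameters, then recover them.
true_params = np.array([1.7, 0.3])
target = render(true_params)
target_g, target_mean = gram(target), target.mean()

params = np.array([1.0, 0.0])  # initial guess
for _ in range(2000):
    params -= 0.05 * grad(params, target_g, target_mean)
```

In the paper's setting the same loop runs over hundreds to thousands of node parameters, with the rendered image produced by a translated Substance graph plus a rendering layer, the loss computed on neural features, and gradients supplied by automatic differentiation rather than finite differences.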


