“Neural Importance Sampling” by Müller, McWilliams, Rousselle, Gross and Novák

Title:

    Neural Importance Sampling

Session/Category Title:   Machine Learning for Rendering


Abstract:


    We propose to use deep neural networks for generating samples in Monte Carlo integration. Our work is based on non-linear independent components estimation (NICE), which we extend in numerous ways to improve performance and enable its application to integration problems. First, we introduce piecewise-polynomial coupling transforms that greatly increase the modeling power of individual coupling layers. Second, we propose to preprocess the inputs of neural networks using one-blob encoding, which stimulates localization of computation and improves inference. Third, we derive a gradient-descent-based optimization for the Kullback-Leibler and the χ² divergence for the specific application of Monte Carlo integration with unnormalized stochastic estimates of the target distribution. Our approach enables fast and accurate inference and efficient sample generation independently of the dimensionality of the integration domain. We show its benefits on generating natural images and in two applications to light-transport simulation: first, we demonstrate learning of joint path-sampling densities in the primary sample space and importance sampling of multi-dimensional path prefixes thereof. Second, we use our technique to extract conditional directional densities driven by the product of incident illumination and the BSDF in the rendering equation, and we leverage the densities for path guiding. In all applications, our approach yields on-par or higher performance than competing techniques at equal sample count.
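
    To ground two of the abstract's ingredients, the following is a minimal sketch in Python/NumPy, not the authors' implementation: a one-blob encoding of a scalar network input, and the importance-weighted cross-entropy loss whose gradient corresponds to minimizing the Kullback-Leibler divergence from unnormalized Monte Carlo estimates of the target. The function names, the choice of k = 32 bins, and evaluating the kernel at bin centers (rather than integrating it over bins) are illustrative assumptions.

```python
import numpy as np

def one_blob(x, k=32):
    """One-blob encoding of a scalar x in [0, 1]: a smooth generalization
    of one-hot encoding.  A Gaussian kernel with sigma = 1/k centered at x
    is evaluated at the k bin centers, so several adjacent entries activate
    and downstream computation localizes around x.  (Assumed variant:
    kernel evaluated at bin centers instead of integrated over bins.)"""
    centers = (np.arange(k) + 0.5) / k
    sigma = 1.0 / k
    return np.exp(-0.5 * ((x - centers) / sigma) ** 2)

def weighted_kl_loss(F_vals, q_vals, log_q):
    """Monte Carlo loss for fitting a density q_theta to p proportional to
    the integrand F, from samples x_i ~ q_theta:

        loss = -mean(w_i * log q_theta(x_i)),   w_i = F(x_i) / q(x_i).

    With the weights w_i held constant (detached in an autodiff framework),
    the gradient of this loss matches the KL-divergence gradient estimator
    up to the unknown normalization of F."""
    w = F_vals / q_vals            # importance weights from the MC estimate
    return -np.mean(w * log_q)     # weighted cross-entropy

# Usage: encode a primary-sample-space coordinate before the network.
features = one_blob(0.37)          # shape (32,), peaked around the 12th bin
```

    In the paper, q_theta is the density realized by a stack of invertible coupling layers (the piecewise-polynomial transforms mentioned above), so log q_theta is exact and cheap to evaluate for every generated sample; an analogous weighted estimator is derived for the χ² divergence.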

References:


    1. Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, et al. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Retrieved from http://tensorflow.org/.
    2. Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. 2016. Learning to learn by gradient descent by gradient descent. arXiv:1606.04474 (June 2016).
    3. Benedikt Bitterli. 2016. Rendering resources. Retrieved from https://benedikt-bitterli.me/resources/.
    4. Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. 2018. Neural ordinary differential equations. arXiv:1806.07366 (June 2018).
    5. Yutian Chen, Matthew W. Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, and Nando de Freitas. 2017. Learning to learn without gradient descent by gradient descent. In Proceedings of the 34th International Conference on Machine Learning (Proceedings of Machine Learning Research), Doina Precup and Yee Whye Teh (Eds.), Vol. 70. PMLR, International Convention Centre, Sydney, Australia, 748–756.
    6. Ken Dahm and Alexander Keller. 2018. Learning light transport the reinforced way. In Monte Carlo and Quasi-Monte Carlo Methods (Springer Proceedings in Mathematics & Statistics, Vol. 241), Art B. Owen and Peter W. Glynn (Eds.). Springer, 181–195.
    7. Laurent Dinh, David Krueger, and Yoshua Bengio. 2014. NICE: Non-linear independent components estimation. arXiv:1410.8516 (Oct. 2014).
    8. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. 2016. Density estimation using real NVP. arXiv:1605.08803 (May 2016).
    9. Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. 2015. MADE: Masked autoencoder for distribution estimation. In International Conference on Machine Learning. 881–889.
    10. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proc. 13th International Conference on Artificial Intelligence and Statistics (May 13–15). JMLR.org, 249–256.
    11. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. 2672–2680.
    12. Jerry Jinfeng Guo, Pablo Bauszat, Jacco Bikker, and Elmar Eisemann. 2018. Primary sample space path guiding. In Eurographics Symposium on Rendering—Experimental Ideas & Implementations, Wenzel Jakob and Toshiya Hachisuka (Eds.). The Eurographics Association.
    13. Toshiya Hachisuka, Anton S. Kaplanyan, and Carsten Dachsbacher. 2014. Multiplexed metropolis light transport. ACM Trans. Graph. 33, 4, Article 100 (July 2014), 10 pages. DOI:https://doi.org/10.1145/2601097.2601138
    14. David Money Harris and Sarah L. Harris. 2013. 3.4.2—State encodings. In Digital Design and Computer Architecture (2nd Ed.). Morgan Kaufmann, Boston, 129–131. DOI:https://doi.org/10.1016/B978-0-12-394424-5.00002-1
    15. Sebastian Herholz, Oskar Elek, Jens Schindel, Jaroslav Křivánek, and Hendrik P. A. Lensch. 2018. A unified manifold framework for efficient BRDF sampling based on parametric mixture models. In Eurographics Symposium on Rendering—Experimental Ideas & Implementations, Wenzel Jakob and Toshiya Hachisuka (Eds.). The Eurographics Association.
    16. Sebastian Herholz, Oskar Elek, Jiří Vorba, Hendrik Lensch, and Jaroslav Křivánek. 2016. Product importance sampling for light transport path guiding. Computer Graphics Forum (2016). DOI:https://doi.org/10.1111/cgf.12950
    17. Heinrich Hey and Werner Purgathofer. 2002. Importance sampling with hemispherical particle footprints. In Proceedings of the 18th Spring Conference on Computer Graphics (SCCG’02). ACM, 107–114. DOI:https://doi.org/10.1145/584458.584476
    18. Chin-Wei Huang, David Krueger, Alexandre Lacoste, and Aaron C. Courville. 2018. Neural autoregressive flows. arXiv:1804.00779 (April 2018).
    19. Wenzel Jakob. 2010. Mitsuba renderer. Retrieved from http://www.mitsuba-renderer.org.
    20. Henrik Wann Jensen. 1995. Importance driven path tracing using the photon map. In Rendering Techniques. Springer Vienna, Vienna, 326–335. DOI:https://doi.org/10.1007/978-3-7091-9430-0_31
    21. James T. Kajiya. 1986. The rendering equation. Computer Graphics 20, 4 (Aug. 1986), 143–150.
    22. Csaba Kelemen, László Szirmay-Kalos, György Antal, and Ferenc Csonka. 2002. A simple and robust mutation strategy for the Metropolis light transport algorithm. Computer Graphics Forum 21, 3 (May 2002), 531–540. DOI:https://doi.org/10.1111/1467-8659.t01-1-00703
    23. Alexander Keller and Ken Dahm. 2019. Integral equations and machine learning. Mathematics and Computers in Simulation 161 (2019), 2–12.
    24. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv:1412.6980 (Dec. 2014).
    25. Diederik P. Kingma and Prafulla Dhariwal. 2018. Glow: Generative flow with invertible 1×1 convolutions. arXiv:1807.03039 (July 2018).
    26. Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems. 4743–4751.
    27. Eric P. Lafortune and Yves D. Willems. 1995. A 5D tree to reduce the variance of Monte Carlo ray tracing. In Rendering Techniques '95 (Proc. of the 6th Eurographics Workshop on Rendering). 11–20. DOI:https://doi.org/10.1007/978-3-7091-9430-0_2
    28. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015. Deep learning face attributes in the wild. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV’15). IEEE Computer Society, Washington, D.C., 3730–3738. DOI:https://doi.org/10.1109/ICCV.2015.425
    29. Thomas Müller, Markus Gross, and Jan Novák. 2017. Practical path guiding for efficient light-transport simulation. Computer Graphics Forum 36, 4 (June 2017), 91–100. DOI:https://doi.org/10.1111/cgf.13227
    30. Jacopo Pantaleoni and Eric Heitz. 2017. Notes on optimal approximations for importance sampling. arXiv:1707.08358 (July 2017).
    31. George Papamakarios, Iain Murray, and Theo Pavlakou. 2017. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems. 2338–2347.
    32. Vincent Pegoraro, Carson Brownlee, Peter S. Shirley, and Steven G. Parker. 2008a. Towards interactive global illumination effects via sequential Monte Carlo adaptation. In Proceedings of the 3rd IEEE Symposium on Interactive Ray Tracing. 107–114.
    33. Vincent Pegoraro, Ingo Wald, and Steven G. Parker. 2008b. Sequential Monte Carlo adaptation in low-anisotropy participating media. Computer Graphics Forum 27, 4 (Sept. 2008), 1097–1104. DOI:https://doi.org/10.1111/j.1467-8659.2008.01247.x
    34. Danilo Rezende and Shakir Mohamed. 2015. Variational inference with normalizing flows. In International Conference on Machine Learning. 1530–1538.
    35. Fabrice Rousselle, Claude Knaus, and Matthias Zwicker. 2011. Adaptive sampling and reconstruction using greedy error minimization. ACM Trans. Graph. 30, 6 (Dec. 2011). DOI:https://doi.org/10.1145/2024156.2024193
    36. Joshua Steinhurst and Anselmo Lastra. 2006. Global importance sampling of glossy surfaces using the photon map. IEEE Symposium on Interactive Ray Tracing (Sept. 2006), 133–138. DOI:https://doi.org/10.1109/RT.2006.280224
    37. Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016a. Wavenet: A generative model for raw audio. arXiv:1609.03499 (Sept. 2016).
    38. Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. 2016b. Pixel recurrent neural networks. In International Conference on Machine Learning. 1747–1756.
    39. Eric Veach. 1997. Robust Monte Carlo Methods for Light Transport Simulation. Ph.D. Dissertation. Stanford University, Stanford, CA.
    40. Eric Veach and Leonidas J. Guibas. 1994. Bidirectional estimators for light transport. In Proc. of the 5th Eurographics Workshop on Rendering.
    41. Eric Veach and Leonidas J. Guibas. 1995. Optimally combining sampling techniques for Monte Carlo rendering. In Proc. SIGGRAPH. 419–428. DOI:https://doi.org/10.1145/218380.218498
    42. Petr Vévoda, Ivo Kondapaneni, and Jaroslav Křivánek. 2018. Bayesian online regression for adaptive direct illumination sampling. ACM Trans. Graph. 37, 4 (Aug. 2018).
    43. Jiří Vorba, Ondřej Karlík, Martin Šik, Tobias Ritschel, and Jaroslav Křivánek. 2014. On-line learning of parametric mixture models for light transport simulation. ACM Trans. Graph. 33, 4 (Aug. 2014).
    44. Quan Zheng and Matthias Zwicker. 2018. Learning to importance sample in primary sample space. arXiv:1808.07840 (Aug. 2018).
