“Transport-based neural style transfer for smoke simulations” by Kim, Azevedo, Gross and Solenthaler – ACM SIGGRAPH HISTORY ARCHIVES

Conference:

    SIGGRAPH Asia 2019


Type(s):

    Technical Papers

Title:

    Transport-based neural style transfer for smoke simulations

Session/Category Title:   Fluids Aflow


Presenter(s)/Author(s):

    Byungsoo Kim, Vinicius C. Azevedo, Markus Gross, Barbara Solenthaler

Abstract:


    Artistically controlling fluids has always been a challenging task. Optimization techniques rely on approximating simulation states towards target velocity or density field configurations, which are often handcrafted by artists to indirectly control smoke dynamics. Patch synthesis techniques transfer image textures or simulation features to a target flow field. However, these are either limited to adding structural patterns or augmenting coarse flows with turbulent structures, and hence cannot capture the full spectrum of different styles and semantically complex structures. In this paper, we propose the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric smoke data. Our method is able to transfer features from natural images to smoke simulations, enabling general content-aware manipulations ranging from simple patterns to intricate motifs. The proposed algorithm is physically inspired, since it computes the density transport from a source input smoke to a desired target configuration. Our transport-based approach allows direct control over the divergence of the stylization velocity field by optimizing incompressible and irrotational potentials that transport smoke towards stylization. Temporal consistency is ensured by transporting and aligning subsequent stylized velocities, and 3D reconstructions are computed by seamlessly merging stylizations from different camera viewpoints.
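The transport-based decomposition described in the abstract can be sketched in code: the stylization velocity is assembled from an incompressible part (the curl of a stream function) plus an irrotational part (the gradient of a scalar potential), and the smoke density is then advected by that velocity. The following is a minimal 2D NumPy sketch under stated assumptions; the function names, the 2D setting, and the simple semi-Lagrangian advection are illustrative only, not the paper's actual volumetric, optimization-driven implementation:

```python
import numpy as np

def stylization_velocity(psi, phi, h=1.0):
    """Velocity from a stream function psi (incompressible part) and a
    scalar potential phi (irrotational part). 2D illustrative sketch of
    the transport-based decomposition; arrays are indexed [y, x]."""
    # 2D curl of psi: u_x = d(psi)/dy, u_y = -d(psi)/dx  ->  divergence-free
    dpsi_dy, dpsi_dx = np.gradient(psi, h)
    u_div_free = np.stack([dpsi_dy, -dpsi_dx], axis=0)
    # Gradient of phi: curl-free; tuning phi controls the field's divergence
    dphi_dy, dphi_dx = np.gradient(phi, h)
    u_curl_free = np.stack([dphi_dx, dphi_dy], axis=0)
    return u_div_free + u_curl_free  # shape (2, ny, nx): [u_x, u_y]

def advect(density, vel, dt=1.0):
    """Semi-Lagrangian advection of density by vel:
    backtrace each cell center along -vel, then bilinearly interpolate."""
    ny, nx = density.shape
    y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    xb = np.clip(x - dt * vel[0], 0, nx - 1)  # backtraced x position
    yb = np.clip(y - dt * vel[1], 0, ny - 1)  # backtraced y position
    x0, y0 = np.floor(xb).astype(int), np.floor(yb).astype(int)
    x1, y1 = np.minimum(x0 + 1, nx - 1), np.minimum(y0 + 1, ny - 1)
    fx, fy = xb - x0, yb - y0
    d00, d10 = density[y0, x0], density[y0, x1]
    d01, d11 = density[y1, x0], density[y1, x1]
    return (d00 * (1 - fx) * (1 - fy) + d10 * fx * (1 - fy)
            + d01 * (1 - fx) * fy + d11 * fx * fy)
```

In this construction the curl term is divergence-free by design, so any divergence in the stylization velocity comes solely from the scalar potential, which is the kind of direct control over divergence the abstract refers to; repeating advection per frame with velocities aligned across frames corresponds to the temporal-consistency step.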


Submit a story:

If you would like to submit a story about this presentation, please contact us: historyarchives@siggraph.org