
Efficient Neural Style Transfer for Volumetric Simulations

Published: 30 November 2022

Abstract

Artistically controlling fluids has always been a challenging task. Recently, volumetric Neural Style Transfer (NST) techniques have been used to artistically manipulate smoke simulation data with 2D images. In this work, we revisit previous volumetric NST techniques for smoke, proposing a suite of upgrades that enable stylizations that are significantly faster, simpler, more controllable, and less prone to artifacts. Moreover, the energy minimization solved by previous methods is camera dependent; avoiding this dependence requires a computationally expensive iterative optimization over multiple views sampled around the original simulation, which can take up to several minutes per frame. We propose a simple feed-forward neural network architecture that infers view-independent stylizations three orders of magnitude faster than its optimization-based counterpart.
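The feed-forward formulation replaces per-frame, multi-view optimization with a single network evaluation per frame. The sketch below is only a rough illustration of that idea in PyTorch; the class name VolumetricStylizer, the layer counts, and the residual design are assumptions for illustration, not the paper's architecture. It maps a smoke density grid to a stylized density grid in one forward pass.

```python
# Hypothetical illustration only -- NOT the architecture from the paper.
# A minimal feed-forward 3D CNN that maps a smoke density grid to a
# stylized density grid in a single forward pass, in contrast to
# per-frame, multi-view iterative optimization.
import torch
import torch.nn as nn

class VolumetricStylizer(nn.Module):
    def __init__(self, features=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, features, kernel_size=3, padding=1),
            nn.InstanceNorm3d(features),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.InstanceNorm3d(features),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, 1, kernel_size=3, padding=1),
        )

    def forward(self, density):
        # density: (batch, 1, D, H, W) smoke density grid.
        # Predicting a residual keeps the stylization anchored
        # to the input simulation.
        return density + self.net(density)

# Stylizing one frame is a single inference call:
model = VolumetricStylizer().eval()
with torch.no_grad():
    stylized = model(torch.rand(1, 1, 64, 64, 64))
print(stylized.shape)  # torch.Size([1, 1, 64, 64, 64])
```

In such a setup the network would be trained offline against the style objectives, so stylizing a new frame costs one forward pass rather than minutes of iterative optimization.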



• Published in

  ACM Transactions on Graphics, Volume 41, Issue 6 (December 2022), 1428 pages
  ISSN: 0730-0301, EISSN: 1557-7368
  DOI: 10.1145/3550454
  Copyright © 2022 ACM


  Publisher: Association for Computing Machinery, New York, NY, United States
