Learning Proper Orthogonal Decomposition of Complex Dynamics Using Heavy-ball Neural ODEs

  • Published in: Journal of Scientific Computing

Abstract

Proper orthogonal decomposition (POD) enables substantial reduced-order modeling of complex dynamical systems while maintaining a high degree of accuracy in representing the underlying dynamics. Advances in machine learning algorithms make it possible to learn POD-based dynamics from data and to make accurate and fast predictions of dynamical systems. This paper extends the recently proposed heavy-ball neural ODEs (HBNODEs) (Xia et al., NeurIPS 2021) to learning data-driven reduced-order models (ROMs) in the POD context, in particular to learning the dynamics of the time-varying coefficients generated by POD analysis of training snapshots constructed by solving full-order models. HBNODE enjoys several practical advantages for learning POD-based ROMs with theoretical guarantees: 1) HBNODE can learn long-range dependencies effectively from sequential observations, which is crucial for learning intrinsic patterns from sequential data, and 2) HBNODE is computationally efficient in both training and testing. We compare HBNODE with other popular ROMs on several complex dynamical systems, including the von Kármán vortex street flow, the Kurganov-Petrova-Popov equation, and the one-dimensional Euler equations for fluid modeling.
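For intuition, the heavy-ball formulation underlying HBNODE can be written as a first-order system with an auxiliary momentum state, \(\mathrm{d}h/\mathrm{d}t = m\), \(\mathrm{d}m/\mathrm{d}t = -\gamma m + f(h)\), where \(f\) is the learned vector field and \(\gamma > 0\) is a damping parameter. The sketch below integrates such a system with forward Euler for a hand-chosen linear \(f\); it is an illustrative toy, not the paper's trained model (the function name, step size, and parameter values are our own):

```python
import numpy as np

def hbnode_flow(f, h0, gamma=0.5, dt=0.01, steps=1000):
    """Integrate the heavy-ball ODE system
        dh/dt = m,   dm/dt = -gamma * m + f(h)
    with forward Euler. In an HBNODE, f would be a trained neural
    network; here it is any user-supplied vector field."""
    h = np.asarray(h0, dtype=float)
    m = np.zeros_like(h)  # momentum state starts at rest
    traj = [h.copy()]
    for _ in range(steps):
        h = h + dt * m
        m = m + dt * (-gamma * m + f(h))
        traj.append(h.copy())
    return np.stack(traj)

# Example: the linear field f(h) = -h gives a damped oscillator,
# so the state decays toward the origin.
traj = hbnode_flow(lambda h: -h, h0=[1.0, -2.0])
print(traj.shape)  # (1001, 2)
```

The damping term \(-\gamma m\) is what distinguishes this from a plain second-order neural ODE; in the cited work it is key to the favorable gradient behavior over long sequences.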

Data Availability

All data and code related to this paper are available at https://github.com/JustinBakerMath/pod_hbnode/.

Notes

  1. See http://www.math.utah.edu/~bwang/mathds/Lecture8.pdf for details.

  2. HBNODE can be seen as a special GHBNODE with \(\xi = 0\) and \(\sigma\) being the identity map.

  3. This \(\gamma \) is distinct from the \(\gamma \) discussed in Sect. 3.2.
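To make note 2 concrete, a single GHBNODE-style update can be sketched as below. We assume the generalized form \(\mathrm{d}h/\mathrm{d}t = \sigma(m)\), \(\mathrm{d}m/\mathrm{d}t = -\gamma m + f(h) - \xi h\), with sign conventions matching the HBNODE system \(\mathrm{d}h/\mathrm{d}t = m\), \(\mathrm{d}m/\mathrm{d}t = -\gamma m + f(h)\); the function names and values here are illustrative, not from the paper's code:

```python
import numpy as np

def ghbnode_step(h, m, f, dt, gamma, xi=0.0, sigma=lambda m: m):
    """One forward-Euler step of a GHBNODE-style system:
        dh/dt = sigma(m),   dm/dt = -gamma*m + f(h) - xi*h.
    With xi = 0 and sigma the identity map this reduces to an
    HBNODE step: dh/dt = m, dm/dt = -gamma*m + f(h)."""
    h_new = h + dt * sigma(m)
    m_new = m + dt * (-gamma * m + f(h) - xi * h)
    return h_new, m_new

# With the default xi and sigma, the step coincides with a plain
# HBNODE step computed by hand.
h = np.array([1.0, -2.0])
m = np.array([0.5, 0.0])
f = lambda h: -h
h1, m1 = ghbnode_step(h, m, f, dt=0.01, gamma=0.5)
h_hb = h + 0.01 * m
m_hb = m + 0.01 * (-0.5 * m + f(h))
print(np.allclose(h1, h_hb) and np.allclose(m1, m_hb))  # True
```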

References

  1. Antoulas, A.: Approximation of Large-Scale Dynamical Systems. Advances in Design and Control. Society for Industrial and Applied Mathematics (2005)

  2. Antoulas, A.C., Beattie, C.A., Gugercin, S.: Interpolatory Model Reduction of Large-Scale Dynamical Systems. In: Mohammadpour, J., Grigoriadis, K.M. (eds.) Efficient Modeling and Control of Large-Scale Systems, pp. 3–58. Springer, Boston, MA (2010)

  3. Baker, J., Cherkaev, E., Narayan, A., Wang, B.: Learning POD of complex dynamics using heavy-ball neural ODEs: animations. https://www.github.com/JustinBakerMath/pod_hbnode/blob/master/README.md#animations

  4. Bengio, Y., Simard, P., Frasconi, P.: Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 5(2), 157–166 (1994)

  5. Benner, P., Gugercin, S., Willcox, K.: A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev. 57(4), 483–531 (2015)

  7. Berkooz, G., Holmes, P., Lumley, J.L.: The proper orthogonal decomposition in the analysis of turbulent flows. Annu. Rev. Fluid Mech. 25(1), 539–575 (1993)

  8. Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Wiley, New York/London (1962)

  9. Chen, R.T.Q., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. In: Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 31. Curran Associates, Inc., New York (2018)

  10. Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)

  11. Cohen, M.A., Grossberg, S.: Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. SMC–13(5), 815–826 (1983)

  12. Craster, R.V., Matar, O.K.: Dynamics and stability of thin liquid films. Rev. Mod. Phys. 81(3), 1131 (2009)

  13. Dormand, J.R., Prince, P.J.: A family of embedded runge-kutta formulae. J. Comput. Appl. Math. 6(1), 19–26 (1980)

  14. Dupont, E., Doucet, A., Teh, Y.W.: Augmented neural odes. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. (2019)

  15. Dutta, S., Rivera-Casillas, P., Cecil, O.M., Farthing, M.W, Perracchione, E., Putti, M.: Data-driven reduced order modeling of environmental hydrodynamics using deep autoencoders and neural odes. arXiv preprint arXiv:2107.02784, (2021)

  16. Dutta, S., Rivera-Casillas, P., Farthing, M.W.: Neural ordinary differential equations for data-driven reduced order modeling of environmental hydrodynamics. arXiv preprint arXiv:2104.13962 (2021)

  17. Germano, M., Piomelli, U., Moin, P., Cabot, W.H.: A dynamic subgrid-scale eddy viscosity model. Phys. Fluids A 3(7), 1760–1765 (1991)

  18. Gugercin, S., Antoulas, A.C.: A Survey of Model Reduction by Balanced Truncation and Some New Results. Int. J. Control 77(8), 748–766 (2004)

  19. Harten, A., Lax, P.D., van Leer, B.: On Upstream Differencing and Godunov-Type Schemes for Hyperbolic Conservation Laws. SIAM Rev. 25(1), 35–61 (1983)

  20. He, K., Zhang, X., Ren, S., Sun, J: Identity mappings in deep residual networks. In European Conference on Computer Vision, pp. 630–645 (2016)

  21. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)

  23. Jacot, A., Gabriel, F., Hongler, C.: Neural tangent kernel: convergence and generalization in neural networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 8580–8589, (2018)

  24. Kani, J.N., Elsheikh, A.H.: Dr-rnn: A deep residual recurrent neural network for model reduction. arXiv preprint arXiv:1709.00939, (2017)

  25. Nagoor Kani, J., Elsheikh, A.H.: Reduced-order modeling of subsurface multi-phase flow models using deep residual recurrent neural networks. Transp. Porous Media 126(3), 713–741 (2019)

  26. Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013)

  27. Kuramoto, Y.: Diffusion-induced chaos in reaction systems. Prog. Theor. Phys. Suppl. 64, 346–367 (1978)

  28. Kurganov, A., Petrova, G., Popov, B.: Adaptive Semidiscrete Central-Upwind Schemes for Nonconvex Hyperbolic Conservation Laws. SIAM J. Sci. Comput. 29(6), 2381–2401 (2007)

  29. Lechner, M., Hasani, R.: Learning long-term dependencies in irregularly-sampled time series. arXiv preprint arXiv:2006.04418, (2020)

  30. Liang, Y.C., Lee, H.P., Lim, S.P., Lin, W.Z., Lee, K.H., Wu, C.G.: Proper orthogonal decomposition and its applications-part i: Theory. J. Sound Vib. 252(3), 527–544 (2002)

  31. Lui, H.F.S., Wolf, W.R.: Construction of reduced-order models for fluid flows using deep feedforward neural networks. J. Fluid Mech. 872, 963–994 (2019)

  32. Ma, C., Wang, J., et al.: Model reduction with memory and the machine learning of dynamical systems. arXiv preprint arXiv:1808.04258, (2018)

  33. Mannarino, A., Mantegazza, P.: Nonlinear aeroelastic reduced order modeling by recurrent neural networks. J. Fluids Struct. 48, 103–121 (2014)

  34. Massaroli, S., Poli, M., Park, J., Yamashita, A., Asama, H.: Dissecting neural odes. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 3952–3963. Curran Associates, Inc., New York (2020)

  35. Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Phys. Fluids 33(3), 037106 (2021)

  36. Mohebujjaman, M., Rebholz, L.G., Iliescu, T.: Physically constrained data-driven correction for reduced-order modeling of fluid flows. Int. J. Numer. Meth. Fluids 89(3), 103–122 (2019)

  37. Moin, P., Mahesh, K.: Direct numerical simulation: a tool in turbulence research. Annu. Rev. Fluid Mech. 30(1), 539–578 (1998)

  38. Mou, C., Liu, H., Wells, D.R., Iliescu, T.: Data-driven correction reduced order models for the quasi-geostrophic equations: A numerical investigation. Int. J. Comput. Fluid Dyn. 34(2), 147–159 (2020)

  39. Murata, T., Fukami, K., Fukagata, K.: Nonlinear mode decomposition with convolutional neural networks for fluid dynamics. J. Fluid Mech. 882, A13 (2020)

  40. Nguyen, T., Baraniuk, R., Bertozzi, A., Osher, S., Wang, B.: MomentumRNN: Integrating momentum into recurrent neural networks. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 1924–1936. Curran Associates, Inc., New York (2020)

  41. Norcliffe, A., Bodnar, C., Day, B., Simidjievski, N., Lió, P.: On second order behaviour in augmented neural odes. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 5911–5921. Curran Associates, Inc., New York (2020)

  42. Pascanu, R., Mikolov, T., Bengio, Y.: On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pp. 1310–1318, (2013)

  43. Pearson, K.: Liii on lines and planes of closest fit to systems of points in space. London Edinburgh Dublin Philosoph. Magaz. J. Sci. 2(11), 559–572 (1901)

  44. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)

  45. Rojas, C.J.G., Dengel, A., Ribeiro, M.D.: Reduced-order model for fluid flows via neural ordinary differential equations. arXiv preprint arXiv:2102.02248 (2021)

  46. Rosenblatt, F.: Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Technical report, Cornell Aeronautical Laboratory, Buffalo, NY (1961)

  47. Rubanova, Y., Chen, R.T.Q., Duvenaud, D.K: Latent ordinary differential equations for irregularly-sampled time series. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., (2019)

  48. San, O., Maulik, R.: Neural network closures for nonlinear model order reduction. arXiv preprint arXiv:1705.08532, (2017)

  49. San, O., Maulik, R.: Machine learning closures for model order reduction of thermal fluids. Appl. Math. Model. 60, 681–710 (2018)

  50. San, O., Maulik, R., Ahmed, M.: An artificial neural network framework for reduced order modeling of transient flows. Commun. Nonlinear Sci. Numer. Simul. 77, 271–287 (2019)

  51. Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 656, 5–28 (2010)

  52. Sivashinsky, G.I.: On flame propagation under conditions of stoichiometry. SIAM J. Appl. Math. 39(1), 67–82 (1980)

  53. Sivashinsky, G.I.: Nonlinear analysis of hydrodynamic instability in laminar flames-i. derivation of basic equations. Acta Astronaut. 4(11), 1177–1206 (1977)

  54. Sun, T., Ling, H., Shi, Z., Li, D., Wang, B.: Training deep neural networks with adaptive momentum inspired by the quadratic optimization. arXiv preprint arXiv:2110.09057, (2021)

  55. Wang, B., Nguyen, T.M., Bertozzi, A.L., Baraniuk, R.G., Osher, S.J.: Scheduled restart momentum for accelerated stochastic gradient descent. arXiv preprint arXiv:2002.10583, (2020)

  56. Wang, B., Xia, H., Nguyen, T., Osher, S.: How does momentum benefit deep neural networks architecture design? a few case studies. arXiv preprint arXiv:2110.07034, (2021)

  57. Wang, B., Ye, Q.: Stochastic gradient descent with nonlinear conjugate gradient-style adaptive momentum. arXiv preprint arXiv:2012.02188, (2020)

  58. Wang, M., Li, H.-X., Chen, X., Chen, Y.: Deep learning-based model reduction for distributed parameter systems. IEEE Trans. Syst. Man Cybern. Syst. 46(12), 1664–1674 (2016)

  59. Xia, H., Suliafu, V., Ji, H., Nguyen, T., Bertozzi, A., Osher, S., Wang, B.: Heavy ball neural ordinary differential equation. In Advances in Neural Information Processing Systems, volume 34. Curran Associates, Inc., (2021)

  60. You, D., Moin, P.: A dynamic global-coefficient subgrid-scale eddy-viscosity model for large-eddy simulation in complex geometries. Phys. Fluids 19(6), 065110 (2007)

Acknowledgements

This material is based on research sponsored by NSF grants DMS-1848508, DMS-1924935, DMS-1952339, DMS-2110145, DMS-2111117, DMS-2152762, and DMS-2208361; DOE grants DE-SC0021142 and DE-SC0023490; and AFOSR grant FA9550-20-1-0338. We also acknowledge support from a seed grant from the College of Science at the University of Utah.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Bao Wang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Baker, J., Cherkaev, E., Narayan, A. et al. Learning Proper Orthogonal Decomposition of Complex Dynamics Using Heavy-ball Neural ODEs. J Sci Comput 95, 54 (2023). https://doi.org/10.1007/s10915-023-02176-8

  • DOI: https://doi.org/10.1007/s10915-023-02176-8
