Abstract
Proper orthogonal decomposition (POD) allows substantial reduction of the order of complex dynamical systems while maintaining a high degree of accuracy in modeling the underlying dynamics. Advances in machine learning algorithms enable learning POD-based dynamics from data and making accurate and fast predictions of dynamical systems. This paper extends the recently proposed heavy-ball neural ODEs (HBNODEs) (Xia et al., NeurIPS 2021) to learning data-driven reduced-order models (ROMs) in the POD context, in particular to learning the dynamics of the time-varying coefficients generated by POD analysis of training snapshots constructed by solving full-order models. HBNODE enjoys several practical advantages for learning POD-based ROMs with theoretical guarantees, including 1) HBNODE can learn long-range dependencies effectively from sequential observations, which is crucial for learning intrinsic patterns from sequential data, and 2) HBNODE is computationally efficient in both training and testing. We compare HBNODE with other popular ROMs on several complex dynamical systems, including the von Kármán vortex street flow, the Kurganov-Petrova-Popov equation, and the one-dimensional Euler equations for fluids modeling.
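The POD step described in the abstract, extracting time-varying coefficients from full-order snapshots, can be sketched as follows. This is a minimal illustration via the thin SVD, not code from the paper's repository; the snapshot matrix and mode count are made-up stand-ins.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is the full-order state at one time.
rng = np.random.default_rng(0)
n_space, n_time = 200, 50
snapshots = rng.standard_normal((n_space, 5)) @ rng.standard_normal((5, n_time))

# POD via the thin SVD of the mean-centered snapshot matrix.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

# Truncate to r modes; the columns of `modes` are the spatial POD modes, and the
# rows of `coeffs` are the time-varying coefficient trajectories whose dynamics
# a model such as HBNODE would then learn from data.
r = 5
modes = U[:, :r]
coeffs = modes.T @ (snapshots - mean)   # shape (r, n_time)

# Relative reconstruction error of the rank-r ROM representation.
recon = mean + modes @ coeffs
err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
```

Since the synthetic snapshots are rank 5 by construction, retaining r = 5 modes reconstructs them essentially exactly; for real flow data one would instead choose r from the singular-value decay.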
Data Availability
All data and code related to this paper are available at https://github.com/JustinBakerMath/pod_hbnode/.
Notes
See http://www.math.utah.edu/~bwang/mathds/Lecture8.pdf for details.
HBNODE can be seen as a special GHBNODE with \(\xi =0\) and \(\sigma \) being the identity map.
This \(\gamma \) is distinct from the \(\gamma \) discussed in Sect. 3.2.
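The heavy-ball structure behind the notes above can be sketched as a first-order system. This is an illustrative toy, not the paper's implementation: following Xia et al. (2021), HBNODE augments the state \(h\) with a momentum state \(m\) evolving as \(dh/dt = m\), \(dm/dt = -\gamma m + f(h)\), where \(f\) would be a trained neural network; a hypothetical linear map and a forward-Euler loop stand in for it here.

```python
import numpy as np

# Toy heavy-ball ODE system (cf. Xia et al. 2021):
#   dh/dt = m,    dm/dt = -gamma * m + f(h).
# `f` would normally be a learned neural network; a fixed linear map stands in.
gamma = 1.0
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])           # hypothetical stand-in for the learned f
f = lambda h: A @ h

h = np.array([1.0, 0.0])              # state, e.g. POD coefficients
m = np.zeros(2)                       # momentum state
dt, n_steps = 1e-3, 1000
for _ in range(n_steps):
    h, m = h + dt * m, m + dt * (-gamma * m + f(h))
```

In practice the integration would use an adaptive solver (e.g. Dormand-Prince, cited below) and \(f\) would be trained by backpropagating through the solver; the damping \(\gamma\) is what distinguishes the heavy-ball system from a plain second-order neural ODE.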
References
Antoulas, A.: Approximation of Large-Scale Dynamical Systems. Advances in Design and Control. Society for Industrial and Applied Mathematics (2005)
Antoulas, A.C., Beattie, C.A., Gugercin, S.: Interpolatory Model Reduction of Large-Scale Dynamical Systems. In: Mohammadpour, J., Grigoriadis, K.M. (eds.) Efficient Modeling and Control of Large-Scale Systems, pp. 3–58. Springer, Boston, MA (2010)
Baker, J., Cherkaev, E., Narayan, A., Wang, B.: Learning POD of complex dynamics using heavy-ball neural ODEs: animations. https://www.github.com/JustinBakerMath/pod_hbnode/blob/master/README.md#animations
Bengio, Y., Simard, P., Frasconi, P.: Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 5(2), 157–166 (1994)
Benner, P., Gugercin, S., Willcox, K.: A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev. 57(4), 483–531 (2015)
Berkooz, G., Holmes, P., Lumley, J.L.: The proper orthogonal decomposition in the analysis of turbulent flows. Annu. Rev. Fluid Mech. 25(1), 539–575 (1993)
Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Wiley, New York/London (1962)
Chen, R.T.Q., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. In: Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 31. Curran Associates, Inc., New York (2018)
Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
Cohen, M.A., Grossberg, S.: Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. SMC–13(5), 815–826 (1983)
Craster, R.V., Matar, O.K.: Dynamics and stability of thin liquid films. Rev. Mod. Phys. 81(3), 1131 (2009)
Dormand, J.R., Prince, P.J.: A family of embedded Runge-Kutta formulae. J. Comput. Appl. Math. 6(1), 19–26 (1980)
Dupont, E., Doucet, A., Teh, Y.W.: Augmented neural ODEs. In: Advances in Neural Information Processing Systems, vol. 32. Curran Associates, Inc., New York (2019)
Dutta, S., Rivera-Casillas, P., Cecil, O.M., Farthing, M.W., Perracchione, E., Putti, M.: Data-driven reduced order modeling of environmental hydrodynamics using deep autoencoders and neural ODEs. arXiv preprint arXiv:2107.02784 (2021)
Dutta, S., Rivera-Casillas, P., Farthing, M.W.: Neural ordinary differential equations for data-driven reduced order modeling of environmental hydrodynamics. arXiv preprint arXiv:2104.13962 (2021)
Germano, M., Piomelli, U., Moin, P., Cabot, W.H.: A dynamic subgrid-scale eddy viscosity model. Phys. Fluids A 3(7), 1760–1765 (1991)
Gugercin, S., Antoulas, A.C.: A Survey of Model Reduction by Balanced Truncation and Some New Results. Int. J. Control 77(8), 748–766 (2004)
Harten, A., Lax, P.D., van Leer, B.: On Upstream Differencing and Godunov-Type Schemes for Hyperbolic Conservation Laws. SIAM Rev. 25(1), 35–61 (1983)
He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: European Conference on Computer Vision, pp. 630–645 (2016)
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
Jacot, A., Gabriel, F., Hongler, C.: Neural tangent kernel: convergence and generalization in neural networks. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 8580–8589 (2018)
Kani, J.N., Elsheikh, A.H.: DR-RNN: a deep residual recurrent neural network for model reduction. arXiv preprint arXiv:1709.00939 (2017)
Nagoor Kani, J., Elsheikh, A.H.: Reduced-order modeling of subsurface multi-phase flow models using deep residual recurrent neural networks. Transp. Porous Media 126(3), 713–741 (2019)
Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013)
Kuramoto, Y.: Diffusion-induced chaos in reaction systems. Prog. Theor. Phys. Suppl. 64, 346–367 (1978)
Kurganov, A., Petrova, G., Popov, B.: Adaptive Semidiscrete Central-Upwind Schemes for Nonconvex Hyperbolic Conservation Laws. SIAM J. Sci. Comput. 29(6), 2381–2401 (2007)
Lechner, M., Hasani, R.: Learning long-term dependencies in irregularly-sampled time series. arXiv preprint arXiv:2006.04418 (2020)
Liang, Y.C., Lee, H.P., Lim, S.P., Lin, W.Z., Lee, K.H., Wu, C.G.: Proper orthogonal decomposition and its applications, Part I: theory. J. Sound Vib. 252(3), 527–544 (2002)
Lui, H.F.S., Wolf, W.R.: Construction of reduced-order models for fluid flows using deep feedforward neural networks. J. Fluid Mech. 872, 963–994 (2019)
Ma, C., Wang, J., et al.: Model reduction with memory and the machine learning of dynamical systems. arXiv preprint arXiv:1808.04258 (2018)
Mannarino, A., Mantegazza, P.: Nonlinear aeroelastic reduced order modeling by recurrent neural networks. J. Fluids Struct. 48, 103–121 (2014)
Massaroli, S., Poli, M., Park, J., Yamashita, A., Asama, H.: Dissecting neural odes. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 3952–3963. Curran Associates, Inc., New York (2020)
Maulik, R., Lusch, B., Balaprakash, P.: Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Phys. Fluids 33(3), 037106 (2021)
Mohebujjaman, M., Rebholz, L.G., Iliescu, T.: Physically constrained data-driven correction for reduced-order modeling of fluid flows. Int. J. Numer. Meth. Fluids 89(3), 103–122 (2019)
Moin, P., Mahesh, K.: Direct numerical simulation: a tool in turbulence research. Annu. Rev. Fluid Mech. 30(1), 539–578 (1998)
Mou, C., Liu, H., Wells, D.R., Iliescu, T.: Data-driven correction reduced order models for the quasi-geostrophic equations: A numerical investigation. Int. J. Comput. Fluid Dyn. 34(2), 147–159 (2020)
Murata, T., Fukami, K., Fukagata, K.: Nonlinear mode decomposition with convolutional neural networks for fluid dynamics. J. Fluid Mech. 882, A13 (2020)
Nguyen, T., Baraniuk, R., Bertozzi, A., Osher, S., Wang, B.: MomentumRNN: Integrating momentum into recurrent neural networks. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 1924–1936. Curran Associates, Inc., New York (2020)
Norcliffe, A., Bodnar, C., Day, B., Simidjievski, N., Lió, P.: On second order behaviour in augmented neural odes. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 5911–5921. Curran Associates, Inc., New York (2020)
Pascanu, R., Mikolov, T., Bengio, Y.: On the difficulty of training recurrent neural networks. In: International Conference on Machine Learning, pp. 1310–1318 (2013)
Pearson, K.: LIII. On lines and planes of closest fit to systems of points in space. London Edinburgh Dublin Philosoph. Magaz. J. Sci. 2(11), 559–572 (1901)
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)
Rojas, C.J.G., Dengel, A., Ribeiro, M.D.: Reduced-order model for fluid flows via neural ordinary differential equations. arXiv preprint arXiv:2102.02248 (2021)
Rosenblatt, F.: Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Technical report, Cornell Aeronautical Laboratory, Buffalo, NY (1961)
Rubanova, Y., Chen, R.T.Q., Duvenaud, D.K.: Latent ordinary differential equations for irregularly-sampled time series. In: Advances in Neural Information Processing Systems, vol. 32. Curran Associates, Inc., New York (2019)
San, O., Maulik, R.: Neural network closures for nonlinear model order reduction. arXiv preprint arXiv:1705.08532 (2017)
San, O., Maulik, R.: Machine learning closures for model order reduction of thermal fluids. Appl. Math. Model. 60, 681–710 (2018)
San, O., Maulik, R., Ahmed, M.: An artificial neural network framework for reduced order modeling of transient flows. Commun. Nonlinear Sci. Numer. Simul. 77, 271–287 (2019)
Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 656, 5–28 (2010)
Sivashinsky, G.I.: On flame propagation under conditions of stoichiometry. SIAM J. Appl. Math. 39(1), 67–82 (1980)
Sivashinsky, G.I.: Nonlinear analysis of hydrodynamic instability in laminar flames, I: Derivation of basic equations. Acta Astronaut. 4(11), 1177–1206 (1977)
Sun, T., Ling, H., Shi, Z., Li, D., Wang, B.: Training deep neural networks with adaptive momentum inspired by the quadratic optimization. arXiv preprint arXiv:2110.09057 (2021)
Wang, B., Nguyen, T.M., Bertozzi, A.L., Baraniuk, R.G., Osher, S.J.: Scheduled restart momentum for accelerated stochastic gradient descent. arXiv preprint arXiv:2002.10583 (2020)
Wang, B., Xia, H., Nguyen, T., Osher, S.: How does momentum benefit deep neural networks architecture design? A few case studies. arXiv preprint arXiv:2110.07034 (2021)
Wang, B., Ye, Q.: Stochastic gradient descent with nonlinear conjugate gradient-style adaptive momentum. arXiv preprint arXiv:2012.02188 (2020)
Wang, M., Li, H.-X., Chen, X., Chen, Y.: Deep learning-based model reduction for distributed parameter systems. IEEE Trans. Syst. Man Cybern. Syst. 46(12), 1664–1674 (2016)
Xia, H., Suliafu, V., Ji, H., Nguyen, T., Bertozzi, A., Osher, S., Wang, B.: Heavy ball neural ordinary differential equations. In: Advances in Neural Information Processing Systems, vol. 34. Curran Associates, Inc., New York (2021)
You, D., Moin, P.: A dynamic global-coefficient subgrid-scale eddy-viscosity model for large-eddy simulation in complex geometries. Phys. Fluids 19(6), 065110 (2007)
Acknowledgements
This material is based on research sponsored by NSF grants DMS-1848508, DMS-1924935, DMS-1952339, DMS-2110145, DMS-2111117, DMS-2152762, and DMS-2208361, DOE grants DE-SC0021142 and DE-SC0023490, and AFOSR grant FA9550-20-1-0338. We also acknowledge support from a seed grant from the College of Science at the University of Utah.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Baker, J., Cherkaev, E., Narayan, A. et al. Learning Proper Orthogonal Decomposition of Complex Dynamics Using Heavy-ball Neural ODEs. J Sci Comput 95, 54 (2023). https://doi.org/10.1007/s10915-023-02176-8