Abstract
Among the most important computational problems today are the processing of large data sets and the numerical solution of high-dimensional integro-differential equations. Such problems arise in numerical modeling in quantum chemistry, material science, and many-particle dynamics, as well as in machine learning, computer simulation of stochastic processes, and many other applications related to big data analysis.
Modern tensor numerical methods enable the solution of multidimensional partial differential equations (PDEs) with computational complexity that scales linearly, rather than exponentially, in the dimension.
Numerical treatment of high-dimensional problems by traditional numerical methods suffers from the so-called “curse of dimensionality”: the number of unknowns, and hence the computational cost, grows exponentially with the dimension. Rank-structured tensor representations avoid this exponential growth.
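The growth can be made concrete by a back-of-the-envelope count: a full d-dimensional array with n points per mode stores n^d entries, while a rank-r tensor-train representation needs only about d·n·r² parameters. A minimal illustration (the values of n and r are arbitrary):

```python
def full_storage(n, d):
    """Number of entries in a full d-dimensional array with n points per mode."""
    return n ** d

def tt_storage(n, d, r):
    """Approximate parameter count of a tensor train with all ranks equal to r."""
    return d * n * r ** 2

# With n = 100 points per dimension and TT rank r = 10:
for d in (3, 10, 100):
    print(d, full_storage(100, d), tt_storage(100, d, 10))
```

Even at d = 100 the TT parameter count equals that of a single three-dimensional full array, whereas the full tensor is far beyond any conceivable storage.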
Modern methods of separable approximation are based on the canonical and Tucker formats, as well as on the matrix product states (MPS) format, the latter known in numerical analysis as the tensor train (TT) decomposition [22, 21].
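For a tensor given in full format, the TT decomposition can be computed by a sweep of truncated SVDs over the mode unfoldings (the TT-SVD algorithm of [21]). A minimal NumPy sketch, not a reference implementation:

```python
import numpy as np

def tt_svd(a, eps=1e-10):
    """Decompose a d-dimensional array into TT cores by successive truncated SVDs."""
    shape, d = a.shape, a.ndim
    cores, r = [], 1
    c = a.reshape(shape[0], -1)
    for k in range(d - 1):
        c = c.reshape(r * shape[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        rk = max(1, int((s > eps * s[0]).sum()))      # truncation rank
        cores.append(u[:, :rk].reshape(r, shape[k], rk))
        c, r = s[:rk, None] * vt[:rk], rk
    cores.append(c.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor."""
    x = cores[0]
    for g in cores[1:]:
        x = np.tensordot(x, g, axes=([-1], [0]))
    return x.squeeze(axis=(0, -1))
```

For a tensor built from a short sum of separable terms, the detected TT ranks stay small and the reconstruction is exact up to the truncation threshold.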
Recent tensor methods, in combination with exponentially accurate sinc-based approximations, are proven to provide low-rank separable approximations of wide classes of multivariate functions and operators with guaranteed accuracy.
At present, tensor numerical methods and multilinear algebra continue to expand rapidly into a wide range of theoretical and applied fields, see for example [11, 14, 10]. We also refer to the recent research monographs [15, 12], which present tensor numerical methods in scientific computing with particular focus on multidimensional PDEs and electronic structure calculations. These trends are also reflected in the papers of the present issue of CMAM.
This special issue is a collection of papers demonstrating that tensor techniques make it possible to solve various hard theoretical and computational problems, including the approximation of multidimensional elliptic and parabolic PDEs. The issue includes ten invited contributions on the theoretical analysis and applications of tensor-based numerical methods. These papers cover a broad range of topics, including the construction of computational schemes for steady-state and dynamical problems as well as for stochastic and parametric equations, separation rank estimates for classes of functions and operators, and numerical simulations. Below we briefly describe the contents of the special issue.
The goal of the paper [1] is the efficient numerical solution of stochastic eigenvalue problems. Such problems often lead to prohibitively high-dimensional systems with tensor product structure when discretized with the stochastic Galerkin method. The authors exploit this inherent tensor product structure to develop a globalized low-rank inexact Newton method with which they tackle the stochastic eigenvalue problem. The effectiveness of the solver is illustrated by numerical experiments.
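The Newton approach to eigenvalue problems can be illustrated, in a drastically simplified deterministic and full-rank setting, by Newton's method applied to the bordered system F(x, λ) = (Ax − λx, (xᵀx − 1)/2) = 0; the actual solver of [1] works with the stochastic Galerkin tensor structure, low-rank truncations, and globalization, all of which this sketch omits (all names are illustrative):

```python
import numpy as np

def newton_eigen(A, x, lam, iters=20):
    """Newton's method on F(x, lam) = (A x - lam x, (x.x - 1)/2) = 0."""
    n = len(x)
    for _ in range(iters):
        # Bordered Jacobian of F at (x, lam)
        J = np.block([[A - lam * np.eye(n), -x[:, None]],
                      [x[None, :], np.zeros((1, 1))]])
        F = np.concatenate([A @ x - lam * x, [(x @ x - 1) / 2]])
        d = np.linalg.solve(J, -F)
        x, lam = x + d[:n], lam + d[n]
    return x, lam
```

Starting from a reasonable guess, the iteration converges quadratically to a normalized eigenpair.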
The paper [2] deals with an algorithm for the solution of high-dimensional evolutionary equations (ODEs and discretized time-dependent PDEs) in the TT decomposition, assuming that the solution and the right-hand side of the ODE admit such a decomposition with a low rank parameter. A linear ODE, discretized via one-step or Chebyshev differentiation schemes, turns into a large linear system. The tensor decomposition makes it possible to solve this system for several time points simultaneously. In numerical experiments with the transport and the chemical master equations, the author demonstrates that the new method is faster than traditional time-stepping and stochastic simulation algorithms.
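The "several time points simultaneously" idea can be illustrated for implicit Euler: stacking all m steps of x_k − x_{k−1} = τAx_k + τf yields one linear system with Kronecker-product structure over the time and space indices. The sketch below forms the Kronecker matrix explicitly, which the TT method of [2] deliberately avoids; the function and variable names are illustrative:

```python
import numpy as np

def implicit_euler_all_steps(A, f, x0, tau, m):
    """Solve all m implicit Euler steps of x' = A x + f at once via the
    block system (I_m (x) (I - tau A) - S (x) I_n) X = rhs,
    where S is the lower time-shift matrix."""
    n = A.shape[0]
    S = np.eye(m, k=-1)                                   # shifts one step back in time
    M = np.kron(np.eye(m), np.eye(n) - tau * A) - np.kron(S, np.eye(n))
    rhs = np.tile(tau * f, m)
    rhs[:n] += x0                                         # initial state enters the first block row
    return np.linalg.solve(M, rhs).reshape(m, n)
```

The all-at-once solution coincides with sequential time stepping; the point of the tensor approach is that the big system is never formed but solved in compressed format.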
The paper [3] examines a completely non-intrusive, sample-based method for the computation of functional low-rank solutions of high-dimensional parametric random PDEs, which have become an area of intensive research in uncertainty quantification. In order to obtain a generalized polynomial chaos representation of the approximate stochastic solution, a novel black-box rank-adapted tensor reconstruction procedure is proposed. The performance of the described approach is illustrated with several numerical examples and compared to Monte Carlo sampling.
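The non-intrusive, sample-based ingredient can be illustrated in a single parametric dimension: evaluate the model at random parameter samples and fit generalized polynomial chaos (here, Legendre) coefficients by least squares. A toy sketch with an artificial model u(y) whose exact coefficients are known; the method of [3] additionally performs rank-adaptive tensor reconstruction in many dimensions, which is not shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

def u(y):
    """Toy 'solution' lying exactly in the span of Legendre P0, P1, P2."""
    return 1.0 + 2.0 * y + 0.5 * (1.5 * y ** 2 - 0.5)    # P0 + 2 P1 + 0.5 P2

samples = rng.uniform(-1.0, 1.0, 200)                     # random parameter draws
V = np.polynomial.legendre.legvander(samples, 4)          # Legendre design matrix
coef, *_ = np.linalg.lstsq(V, u(samples), rcond=None)     # least-squares chaos fit
```

Since the toy model lies in the span of the basis, the fit recovers the coefficients (1, 2, 0.5, 0, 0) up to rounding.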
The authors of [8] consider abstract differential equations of the heat and Schrödinger type and discuss various N-parametric approximations based on the Cayley transform and on the Laguerre expansion, providing sub-exponential accuracy with respect to the discretization parameter N.
The paper [16] studies the dynamical low-rank approximation on the manifold of fixed-rank tensor trains and analyzes projection methods for the time integration of such problems. The authors prove error estimates for the explicit Euler method, amended with quasi-optimal projections to the manifold, under suitable approximability assumptions. They then discuss the possibilities and difficulties of higher-order explicit and implicit projected Runge–Kutta methods, in particular the ways of limiting rank growth in the increments and the robustness with respect to small singular values.
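The structure of a projected integrator can be sketched in the simplest matrix (order-two) case: an explicit Euler step followed by retraction to the fixed-rank manifold via truncated SVD. The setting of [16] is tensor trains with quasi-optimal projections; this matrix sketch only conveys the step/retract pattern:

```python
import numpy as np

def truncate(Y, r):
    """Retraction to the manifold of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def projected_euler(F, Y0, r, tau, steps):
    """Explicit Euler for Y' = F(Y), re-projected to rank r after every step."""
    Y = truncate(Y0, r)
    for _ in range(steps):
        Y = truncate(Y + tau * F(Y), r)
    return Y
```

For a flow that preserves the rank, such as Y' = −Y with a rank-r initial value, the retraction is exact and the scheme reproduces plain Euler.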
The paper [17] deals with a new algorithm for spectral learning of hidden Markov models (HMM). In contrast to the standard approach, the parameters of the HMM are not approximated directly, but through an estimate of the joint probability distribution. Using the TT format, the authors obtain an approximation by minimizing the Frobenius distance between the empirical joint probability distribution and tensors with low TT ranks, subject to normalization constraints on the core tensors. An algorithm for the solution of the optimization problem based on the alternating least squares (ALS) approach is proposed, and a fast version for sparse tensors is developed. The authors compare the performance of the proposed algorithm with existing schemes and find that it is much more robust when the number of hidden states is overestimated.
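The ALS principle (fix all factors but one, solve a linear least-squares problem, and cycle) can be shown in its simplest incarnation, minimizing the Frobenius distance ||M − ABᵀ||_F over rank-r matrix factors; the TT version in [17] alternates over core tensors with normalization constraints instead:

```python
import numpy as np

def als_lowrank(M, r, iters=50, seed=1):
    """Alternating least squares for min ||M - A @ B.T||_F over rank-r factors."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((M.shape[0], r))
    for _ in range(iters):
        B = np.linalg.lstsq(A, M, rcond=None)[0].T     # A fixed: least squares for B
        A = np.linalg.lstsq(B, M.T, rcond=None)[0].T   # B fixed: least squares for A
    return A, B
```

Each half-step is a plain linear least-squares problem, which is what makes ALS cheap; for a matrix of exact rank r it converges to an exact factorization.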
The paper [18] describes advanced numerical tools for working with multivariate functions and for the analysis of large data sets. In particular, covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Alternatively, one can use low-rank tensor formats, which substantially reduce the computing and storage costs. The authors apply the Tucker and canonical tensor decompositions to a family of Matérn-type radial functions with varying parameters and demonstrate, theoretically and numerically, that their tensor approximations exhibit exponentially fast convergence in the rank parameter, thus providing low computational complexity.
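The fast convergence in the rank parameter is easy to observe numerically: sample a Matérn-type radial function (here ν = 1/2, i.e. e^{−r}) on a 3D grid and inspect the singular values of a mode unfolding, which control the achievable Tucker approximation error. Grid size and kernel are illustrative choices, not those of [18]:

```python
import numpy as np

n = 32
x = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
K = np.exp(-np.sqrt(X**2 + Y**2 + Z**2))       # Matérn-type kernel, nu = 1/2

# Singular values of the mode-1 unfolding bound the Tucker truncation error
s = np.linalg.svd(K.reshape(n, -1), compute_uv=False)
decay = s / s[0]
```

The normalized singular values drop off rapidly, so a Tucker rank far below the grid size already yields high accuracy.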
The paper [19] deals with a space-time isogeometric analysis scheme for the discretization of parabolic evolution equations with diffusion coefficients depending on both time and space variables. The problem is considered in a space-time cylinder, and a low-rank decoupling of the spatial and temporal variables is employed to reduce the assembly and solution costs.
In [20] the authors propose an efficient algorithm to compute a low-rank approximation to the solution of so-called “Laplace-like” linear systems. The idea is to transform the problem into the frequency domain and then to use cross approximation. In this case, there is no need to form an explicit approximation to the inverse operator; the solution can be approximated directly, which leads to reduced complexity. It is demonstrated that the proposed method is fast and robust by using it as a solver inside the Uzawa iterative method for the Stokes problem.
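The exploited structure is visible in the simplest two-dimensional case: a "Laplace-like" system, written in matrix form as the Sylvester equation A₁U + UA₂ = F with symmetric A₁, A₂, is diagonalized by the 1D eigenbases, after which the solution is a pointwise division by λᵢ + λⱼ, an array that is well approximated at low rank. The dense sketch below skips the cross approximation that makes the actual method of [20] scale:

```python
import numpy as np

def laplace_like_solve(A1, A2, F):
    """Solve A1 @ U + U @ A2 = F for symmetric A1, A2 via eigendecomposition:
    in the eigenbases, U_hat[i, j] = F_hat[i, j] / (lam1[i] + lam2[j])."""
    lam1, Q1 = np.linalg.eigh(A1)
    lam2, Q2 = np.linalg.eigh(A2)
    F_hat = Q1.T @ F @ Q2                                  # transform to frequency domain
    U_hat = F_hat / (lam1[:, None] + lam2[None, :])        # pointwise division
    return Q1 @ U_hat @ Q2.T                               # transform back
```

With the 1D Laplacian stencil as A₁ and A₂, this reproduces the classical fast diagonalization method; the paper replaces the dense division array by its cross approximation.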
The problem of approximately solving a system of univariate polynomials with one or more common roots, whose coefficients are corrupted by noise, is studied in [23]. New Rayleigh quotient methods are proposed and evaluated for estimating the common roots. Using tensor algebra, reasonable starting values for the Rayleigh quotient methods can be computed. The new methods are compared to Gauss–Newton, to solving an eigenvalue problem obtained from the generalized Sylvester matrix, and to building a cluster among the roots of all polynomials. A simulation study shows that Gauss–Newton and one of the new Rayleigh quotient methods perform best, with the latter being more accurate when roots other than the true common roots are close together.
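Rayleigh quotient iteration itself, the workhorse behind the methods of [23], is easy to state for a symmetric matrix; the paper's contribution lies in the formulation for common roots and in the tensor-based starting values, neither of which this sketch covers:

```python
import numpy as np

def rayleigh_quotient_iteration(A, x, iters=10):
    """Classic RQI for symmetric A: locally cubically convergent to an eigenpair."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        mu = x @ A @ x                         # Rayleigh quotient of the current iterate
        try:
            y = np.linalg.solve(A - mu * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break                              # mu hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
    return x @ A @ x, x
```

The shifted solve becomes nearly singular as mu approaches an eigenvalue, which is precisely what drives the fast convergence toward the corresponding eigenvector.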
Acknowledgements
We would like to thank Professor Carsten Carstensen, the editor-in-chief of the journal Computational Methods in Applied Mathematics, for his kind support of this special issue and for his useful comments on this overview paper. We thank the managing editor, Professor Piotr Matus, and Dr. Almas Sherbaf for their effective assistance with the review and production process. We thank the authors of all articles published in this special issue for their excellent contributions, as well as the reviewers for their work on refereeing the manuscripts.
References
[1] P. Benner, A. Onwunta and M. Stoll, A low-rank inexact Newton–Krylov method for stochastic eigenvalue problems, Comput. Methods Appl. Math. 19 (2019), no. 1, 5–22. 10.1515/cmam-2018-0030.
[2] S. Dolgov, A tensor decomposition algorithm for large ODEs with conservation laws, Comput. Methods Appl. Math. 19 (2019), no. 1, 23–38. 10.1515/cmam-2018-0023.
[3] M. Eigel, J. Neumann, R. Schneider and S. Wolf, Non-intrusive tensor reconstruction for high dimensional random PDEs, Comput. Methods Appl. Math. 19 (2019), no. 1, 39–53. 10.1515/cmam-2018-0028.
[4] I. Gavrilyuk, Super exponentially convergent approximation to the solution of the Schrödinger equation in abstract setting, Comput. Methods Appl. Math. 10 (2010), no. 4, 345–358. 10.2478/cmam-2010-0020.
[5] I. Gavrilyuk, W. Hackbusch and B. Khoromskij, Data-sparse approximation to the operator-valued functions of elliptic operators, Math. Comp. 73 (2004), 1297–1324. 10.1090/S0025-5718-03-01590-4.
[6] I. Gavrilyuk, W. Hackbusch and B. Khoromskij, Tensor-product approximation to elliptic and parabolic solution operators in higher dimensions, Computing 74 (2005), 131–157. 10.1007/s00607-004-0086-y.
[7] I. Gavrilyuk and B. Khoromskij, Quantized-TT-Cayley transform for computing the dynamics and the spectrum of high-dimensional Hamiltonians, Comput. Methods Appl. Math. 11 (2011), no. 3, 273–290. 10.2478/cmam-2011-0015.
[8] I. Gavrilyuk and B. Khoromskij, Quasi-optimal rank-structured approximation to multidimensional parabolic problems by Cayley transform and Chebyshev interpolation, Comput. Methods Appl. Math. 19 (2019), no. 1, 55–71. 10.1515/cmam-2018-0021.
[9] I. Gavrilyuk, V. Makarov and V. Vasylyk, Exponentially Convergent Algorithms for Abstract Differential Equations, Birkhäuser, Basel, 2011. 10.1007/978-3-0348-0119-5.
[10] L. Grasedyck, D. Kressner and C. Tobler, A literature survey of low-rank tensor approximation techniques, GAMM-Mitt. 36 (2013), no. 1, 53–78. 10.1002/gamm.201310004.
[11] W. Hackbusch, Tensor Spaces and Numerical Tensor Calculus, Springer, Berlin, 2012. 10.1007/978-3-642-28027-6.
[12] V. Khoromskaia and B. N. Khoromskij, Tensor Numerical Methods in Computational Quantum Chemistry, De Gruyter, Berlin, 2018. 10.1515/9783110365832.
[13] B. N. Khoromskij,
[14] B. N. Khoromskij, Tensors-structured numerical methods in scientific computing: Survey on recent advances, Chemometr. Intell. Lab. Syst. 110 (2012), 1–19. 10.1016/j.chemolab.2011.09.001.
[15] B. N. Khoromskij, Tensor Numerical Methods in Scientific Computing, De Gruyter, Berlin, 2018. 10.1515/9783110365917.
[16] E. Kieri and B. Vandereycken, Projection methods for dynamical low-rank approximation of high-dimensional problems, Comput. Methods Appl. Math. 19 (2019), no. 1, 73–92. 10.1515/cmam-2018-0029.
[17] M. A. Kuznetsov and I. V. Oseledets, Tensor train spectral method for learning of hidden Markov models (HMM), Comput. Methods Appl. Math. 19 (2019), no. 1, 93–99. 10.1515/cmam-2018-0027.
[18] A. Litvinenko, D. Keyes, V. Khoromskaia, B. Khoromskij and H. Matthies, Tucker tensor analysis of Matérn functions in spatial statistics, Comput. Methods Appl. Math. 19 (2019), no. 1, 101–122. 10.1515/cmam-2018-0022.
[19] A. Mantzaflaris, F. Scholz and I. Toulopoulos, Low-rank space-time decoupled isogeometric analysis for parabolic problems with varying coefficients, Comput. Methods Appl. Math. 19 (2019), no. 1, 123–136. 10.1515/cmam-2018-0024.
[20] E. A. Muravleva and I. V. Oseledets, Approximate solution of linear systems with Laplace-like operators via cross approximation in the frequency domain, Comput. Methods Appl. Math. 19 (2019), no. 1, 137–145. 10.1515/cmam-2018-0026.
[21] I. V. Oseledets, Tensor train decomposition, SIAM J. Sci. Comput. 33 (2011), no. 5, 2295–2317. 10.1137/090752286.
[22] I. V. Oseledets and E. E. Tyrtyshnikov, Breaking the curse of dimensionality, or how to use SVD in many dimensions, SIAM J. Sci. Comput. 31 (2009), 3744–3759. 10.1137/090748330.
[23] A. Stegeman and L. De Lathauwer, Rayleigh quotient methods for estimating common roots of noisy univariate polynomials, Comput. Methods Appl. Math. 19 (2019), no. 1, 147–163. 10.1515/cmam-2018-0025.
© 2018 Walter de Gruyter GmbH, Berlin/Boston