Abstract
Computing a few eigenpairs of a large-scale symmetric eigenvalue problem is beyond the reach of classical eigensolvers when the eigenvectors cannot be stored explicitly. We consider a tractable case in which both the coefficient matrix and its eigenvectors can be represented in the low-rank tensor train (TT) format. We propose a subspace optimization method combined with suitable truncation steps that keep the iterates in the prescribed low-rank TT format. Its performance can be further improved by using the alternating minimization method to refine the intermediate solutions locally. Preliminary numerical experiments show that our algorithm is competitive with state-of-the-art methods on problems arising from the discretization of the stationary Schrödinger equation.
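For orientation, the subspace-optimization idea described above can be sketched in plain matrix form. The sketch below is an illustrative assumption, not the authors' implementation: it replaces the TT-format truncation with a simple step that keeps a fixed number of Ritz vectors, and the function name `subspace_eigs` and its parameters are made up for this example.

```python
import numpy as np

def subspace_eigs(A, k, m, iters=100, seed=0):
    """Approximate the k smallest eigenpairs of a symmetric matrix A by
    subspace optimization: expand the subspace with A@V, project
    (Rayleigh-Ritz), then truncate back to m basis vectors."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    # orthonormal random initial subspace of dimension m >= k
    V = np.linalg.qr(rng.standard_normal((n, m)))[0]
    for _ in range(iters):
        # expand: orthonormal basis of span{V, A V}
        S = np.linalg.qr(np.hstack([V, A @ V]))[0]
        # Rayleigh-Ritz projection onto the expanded subspace
        evals, Y = np.linalg.eigh(S.T @ A @ S)
        # "truncation": keep the m Ritz vectors with smallest Ritz values
        # (in the TT setting, this is where rounding back to low TT ranks occurs)
        V = S @ Y[:, :m]
    return evals[:k], V[:, :k]
```

On a small test matrix this recovers the smallest eigenpairs; the paper's contribution is carrying out the expand/project/truncate cycle entirely within the low-rank tensor train format, with optional local refinement of the iterates by alternating minimization.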
Notes
Downloadable from http://anchp.epfl.ch/TTeMPS.
References
Ballani, J., Grasedyck, L.: A projection method to solve linear systems in tensor format. Numer. Linear Algebra Appl. 20, 27–43 (2013)
Dolgov, S.V., Khoromskij, B.N., Oseledets, I.V., Savostyanov, D.V.: Computation of extreme eigenvalues in higher dimensions using block tensor train format. Comput. Phys. Commun. 185, 1207–1216 (2013)
Grasedyck, L.: Existence and computation of low Kronecker-rank approximations for large linear systems of tensor product structure. Computing 72, 247–265 (2004)
Grasedyck, L.: Hierarchical singular value decomposition of tensors. SIAM J. Matrix Anal. Appl. 31, 2029–2054 (2009/10)
Grasedyck, L., Kressner, D., Tobler, C.: A literature survey of low-rank tensor approximation techniques. GAMM Mitt. 36, 53–78 (2013)
Hackbusch, W.: Entwicklungen nach Exponentialsummen. Technical Report 4, Max Planck Institute for Mathematics in the Sciences, Leipzig (2005; revised 2010)
Hackbusch, W.: Tensor Spaces and Numerical Tensor Calculus. Springer, Heidelberg (2012)
Holtz, S., Rohwedder, T., Schneider, R.: The alternating linear scheme for tensor optimization in the tensor train format. SIAM J. Sci. Comput. 34, A683–A713 (2012)
Knyazev, A.V.: Toward the optimal preconditioned eigensolver: locally optimal block preconditioned conjugate gradient method. SIAM J. Sci. Comput. 23, 517–541 (2001)
Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51, 455–500 (2009)
Kressner, D., Steinlechner, M., Uschmajew, A.: Low-rank tensor methods with subspace correction for symmetric eigenvalue problems. SIAM J. Sci. Comput. 36, A2346–A2368 (2014)
Kressner, D., Tobler, C.: Preconditioned low-rank methods for high-dimensional elliptic PDE eigenvalue problems. Comput. Methods Appl. Math. 11, 363–381 (2011)
Kressner, D., Tobler, C.: htucker, a MATLAB toolbox for tensors in hierarchical Tucker format, Technical Report (2012)
De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21, 1253–1278 (2000)
Lebedeva, O.S.: Tensor conjugate-gradient-type method for Rayleigh quotient minimization in block QTT-format. Rus. J. Numer. Anal. Math. Model. 26, 465–489 (2011)
Liu, X., Wen, Z., Zhang, Y.: Limited memory block Krylov subspace optimization for computing dominant singular value decompositions. SIAM J. Sci. Comput. 35, A1641–A1668 (2013)
Oseledets, I.: DMRG approach to fast linear algebra in the TT-format. Comput. Methods Appl. Math. 11, 382–393 (2011)
Oseledets, I.V.: Approximation of \(2^d \times 2^d\) matrices using tensor decomposition. SIAM J. Matrix Anal. Appl. 31, 2130–2145 (2010)
Oseledets, I.V.: Tensor train decomposition. SIAM J. Sci. Comput. 33, 2295–2317 (2011)
Wen, Z., Zhang, Y.: Block algorithms with augmented Rayleigh–Ritz projections for large-scale eigenpair computation. arXiv:1507.06078 (2015)
Acknowledgments
We thank D. Kressner, M. Steinlechner and A. Uschmajew for sharing online their MATLAB codes for EVAMEn and the TT/MPS tensor toolbox TTeMPS. The authors would also like to thank the associate editor Prof. Wotao Yin and two anonymous referees for their detailed and valuable comments and suggestions.
Additional information
J. Zhang: research supported in part by NSF Grant CMMI-1462408. Z. Wen: research supported in part by NSFC Grants 11322109 and 91330202, and by the National Basic Research Project under Grant 2015CB856002. Y. Zhang: research supported in part by NSF Grants DMS-1115950 and DMS-1418724.
Cite this article
Zhang, J., Wen, Z. & Zhang, Y. Subspace Methods with Local Refinements for Eigenvalue Computation Using Low-Rank Tensor-Train Format. J Sci Comput 70, 478–499 (2017). https://doi.org/10.1007/s10915-016-0255-0