Fundamental tensor operations for large-scale data analysis using tensor network formats

Published in: Multidimensional Systems and Signal Processing

Abstract

We discuss extended definitions of linear and multilinear operations such as the Kronecker, Hadamard, and contracted products, and establish links between them for tensor calculus. We then introduce effective low-rank tensor approximation techniques, including the CANDECOMP/PARAFAC, Tucker, and tensor train (TT) decompositions, together with mathematical and graphical representations. We also briefly review the mathematical properties of the TT decomposition as a low-rank approximation technique. With the aim of breaking the curse of dimensionality in large-scale numerical analysis, we describe basic operations on large-scale vectors, matrices, and higher-order tensors represented in the TT format. The proposed representations can be used to describe numerical methods based on the TT decomposition for solving large-scale optimization problems such as systems of linear equations and symmetric eigenvalue problems.
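To make the TT decomposition mentioned above concrete, here is a minimal sketch of the TT-SVD algorithm (Oseledets 2011), which computes the TT cores by sequential truncated SVDs of unfoldings. The function names `tt_svd` and `tt_to_full` and the tolerance parameter `eps` are illustrative, not from the paper; with `eps=0` the decomposition is exact.

```python
import numpy as np

def tt_svd(x, eps=0.0):
    """Sketch of TT-SVD: sequential truncated SVDs of unfoldings
    produce the TT cores G_n of shape (r_{n-1}, I_n, r_n).
    `eps` is an illustrative relative truncation tolerance."""
    dims = x.shape
    norm = np.linalg.norm(x)
    cores, r_prev = [], 1
    c = x.reshape(dims[0], -1)
    for n in range(len(dims) - 1):
        c = c.reshape(r_prev * dims[n], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r = max(1, int(np.sum(s > eps * norm)))   # truncated TT rank
        cores.append(u[:, :r].reshape(r_prev, dims[n], r))
        c = s[:r, None] * vt[:r]                   # carry remainder forward
        r_prev = r
    cores.append(c.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full tensor."""
    t = cores[0]
    for g in cores[1:]:
        t = np.tensordot(t, g, axes=([t.ndim - 1], [0]))
    return t.reshape(t.shape[1:-1])  # drop the boundary ranks r_0 = r_N = 1

# Usage: with eps=0 the reconstruction is exact.
x = np.random.rand(3, 4, 5, 6)
cores = tt_svd(x)
assert np.allclose(tt_to_full(cores), x)
```

The storage cost of the cores is O(N I r^2) rather than O(I^N), which is the sense in which the TT format breaks the curse of dimensionality.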

[Figures 1–13 appear in the published article.]

Notes

  1. Alternatively, we can define the multi-index by \( \overline{i_1i_2\cdots i_N} = i_1 + (i_{2}-1)I_1 + \cdots + (i_N-1)I_1I_2\cdots I_{N-1} \). In any case, we assume that the Kronecker product is (re-)defined in a consistent manner as \((\mathbf {a}\otimes \mathbf {b})_{{\overline{ij}}} = a_i b_j\).
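A quick NumPy check of this convention (0-based indices in code; the helper `multi_index` is illustrative): under the little-endian multi-index above, the redefined Kronecker product coincides with NumPy's big-endian `np.kron` applied with its arguments reversed.

```python
import numpy as np

def multi_index(idx, dims):
    """0-based analogue of the little-endian multi-index in this note:
    linear = i_1 + i_2*I_1 + i_3*I_1*I_2 + ... (first index varies fastest)."""
    linear, stride = 0, 1
    for i_n, dim_n in zip(idx, dims):
        linear += i_n * stride
        stride *= dim_n
    return linear

I, J = 3, 4
a, b = np.random.rand(I), np.random.rand(J)

# The redefined Kronecker product satisfies (a ⊗ b)[ij] = a[i]*b[j]
# under this multi-index; with NumPy's big-endian np.kron this is
# np.kron(b, a), i.e. the arguments appear reversed.
kron_ab = np.kron(b, a)
for i in range(I):
    for j in range(J):
        assert np.isclose(kron_ab[multi_index((i, j), (I, J))], a[i] * b[j])
```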

  2. In Proposition 1(e) and (f), factors of Kronecker products are in a different (reversed) order, i.e., increasing from 1 to N, compared to the order in the literature (De Lathauwer et al. 2000; Kolda 2006; Kolda and Bader 2009), i.e., decreasing from N to 1, due to the definition of matricizations in (1).
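The ordering difference can be checked numerically for a rank-1 tensor, assuming the mode-1 matricization orders the remaining indices little-endian (first index fastest), consistent with footnote 1. The unfolding then equals the standard (big-endian) Kronecker product with factors in decreasing order, as in Kolda and Bader (2009); the paper's notation writes the same matrix with factors in increasing order because its Kronecker product is redefined over the little-endian multi-index.

```python
import numpy as np

# Rank-1 tensor X = a ∘ b ∘ c, so X[i, j, k] = a[i] * b[j] * c[k].
a, b, c = np.random.rand(3), np.random.rand(4), np.random.rand(5)
x = np.einsum('i,j,k->ijk', a, b, c)

# Mode-1 matricization with the remaining indices ordered little-endian
# (j varying fastest), as in footnote 1:
x1 = x.reshape(3, -1, order='F')

# With the standard big-endian Kronecker product used by NumPy, this
# unfolding reads a (c ⊗ b)^T, factors decreasing from N to 1:
assert np.allclose(x1, np.outer(a, np.kron(c, b)))
```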

References

  • Bungartz, H.-J., & Griebel, M. (2004). Sparse grids. Acta Numerica, 13, 147–269.

  • Cichocki, A. (2014a). Era of big data processing: A new approach via tensor networks and tensor decompositions. ArXiv:1403.2048.

  • Cichocki, A. (2014b). Tensor networks for big data analytics and large-scale optimization problems. ArXiv:1407.3124.

  • Cichocki, A., Zdunek, R., Phan, A. H., & Amari, S. (2009). Nonnegative matrix and tensor factorizations: Applications to exploratory multi-way data analysis and blind source separation. Chichester: Wiley.

  • Debals, O., & De Lathauwer, L. (2015). Stochastic and deterministic tensorization for blind signal separation. In E. Vincent, A. Yeredor, Z. Koldovský, & P. Tichavský (Eds.), Latent Variable Analysis and Signal Separation: LVA/ICA 2015, LNCS 9237 (pp. 3–13). Cham: Springer.

  • De Lathauwer, L. (2009). A survey of tensor methods. In 2009 IEEE international symposium on circuits and systems (ISCAS 2009) (pp. 2773–2776).

  • De Lathauwer, L., De Moor, B., & Vandewalle, J. (2000). A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4), 1253–1278.

  • de Launey, W., & Seberry, J. (1994). The strong Kronecker product. Journal of Combinatorial Theory, Series A, 66(2), 192–213.

  • Dolgov, S. V., Khoromskij, B. N., Oseledets, I. V., & Savostyanov, D. V. (2014). Computation of extreme eigenvalues in higher dimensions using block tensor train format. Computer Physics Communications, 185(4), 1207–1216.

  • Dolgov, S. V., & Savostyanov, D. V. (2014). Alternating minimal energy methods for linear systems in higher dimensions. SIAM Journal on Scientific Computing, 36(5), A2248–A2271.

  • Donoho, D. L., & Johnstone, J. M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3), 425–455.

  • Espig, M., Hackbusch, W., Handschuh, S., & Schneider, R. (2011). Optimization problems in contracted tensor networks. Computing and Visualization in Science, 14(6), 271–285.

  • Espig, M., Naraparaju, K. K., & Schneider, J. (2012). A note on tensor chain approximation. Computing and Visualization in Science, 15(6), 331–344.

  • Falcó, A., & Hackbusch, W. (2012). On minimal subspaces in tensor representations. Foundations of Computational Mathematics, 12(6), 765–803.

  • Grasedyck, L. (2010). Hierarchical singular value decomposition of tensors. SIAM Journal on Matrix Analysis and Applications, 31(4), 2029–2054.

  • Grasedyck, L., Kressner, D., & Tobler, C. (2013). A literature survey of low-rank tensor approximation techniques. GAMM-Mitteilungen, 36(1), 53–78.

  • Hackbusch, W. (2012). Tensor spaces and numerical tensor calculus. Berlin: Springer.

  • Hackbusch, W., & Kühn, S. (2009). A new scheme for the tensor representation. Journal of Fourier Analysis and Applications, 15(5), 706–722.

  • Holtz, S., Rohwedder, T., & Schneider, R. (2012a). On manifolds of tensors of fixed TT-rank. Numerische Mathematik, 120(4), 701–731.

  • Holtz, S., Rohwedder, T., & Schneider, R. (2012b). The alternating linear scheme for tensor optimization in the tensor train format. SIAM Journal on Scientific Computing, 34(2), A683–A713.

  • Kazeev, V. A., & Khoromskij, B. N. (2012). Low-rank explicit QTT representation of the Laplace operator and its inverse. SIAM Journal on Matrix Analysis and Applications, 33(3), 742–758.

  • Kazeev, V. A., Khoromskij, B. N., & Tyrtyshnikov, E. E. (2013). Multilevel Toeplitz matrices generated by tensor-structured vectors and convolution with logarithmic complexity. SIAM Journal on Scientific Computing, 35(3), A1511–A1536.

  • Kolda, T. G. (2006). Multilinear operators for higher-order decompositions. Technical Report SAND2006-2081, Sandia National Laboratories, Albuquerque, NM and Livermore, CA.

  • Kolda, T. G., & Bader, B. W. (2009). Tensor decompositions and applications. SIAM Review, 51(3), 455–500.

  • Khoromskij, B. N. (2011). \(O(d\log N)\)-quantics approximation of \(N\)-\(d\) tensors in high-dimensional numerical modeling. Constructive Approximation, 34(2), 257–280.

  • Khoromskij, B. N. (2012). Tensors-structured numerical methods in scientific computing: Survey on recent advances. Chemometrics and Intelligent Laboratory Systems, 110(1), 1–19.

  • Khoromskij, B. N., & Oseledets, I.V. (2010). DMRG+QTT approach to computation of the ground state for the molecular Schrödinger operator. MIS-Preprint 69/2010, Max Planck Institute for Mathematics in the Sciences, Leipzig. www.mis.mpg.de/preprints/2010/preprint2010_69.

  • Kressner, D., Steinlechner, M., & Uschmajew, A. (2014). Low-rank tensor methods with subspace correction for symmetric eigenvalue problems. SIAM Journal on Scientific Computing, 36(5), A2346–A2368.

  • Lee, N., & Cichocki, A. (2014). Big data matrix singular value decomposition based on low-rank tensor train decomposition. In Z. Zeng, Y. Li, & I. King (Eds.), Advances in Neural Networks-ISNN 2014, LNCS 8866 (pp. 121–130). Cham: Springer.

  • Lee, N., & Cichocki, A. (2015). Estimating a few extreme singular values and vectors for large-scale matrices in tensor train format. SIAM Journal on Matrix Analysis and Applications, 36(3), 994–1014.

  • Lee, N., & Cichocki, A. (2016). Regularized computation of approximate pseudoinverse of large matrices using low-rank tensor train decompositions. SIAM Journal on Matrix Analysis and Applications, 37(2), 598–623.

  • Oseledets, I. V. (2010). Approximation of \(2^d\times 2^d\) matrices using tensor decomposition. SIAM Journal on Matrix Analysis and Applications, 31(4), 2130–2145.

  • Oseledets, I. V. (2011). Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5), 2295–2317.

  • Oseledets, I. V. (2014). MATLAB TT-Toolbox, Version 2.3. https://github.com/oseledets/TT-Toolbox.

  • Oseledets, I. V., & Dolgov, S. V. (2012). Solution of linear systems and matrix inversion in the TT-format. SIAM Journal on Scientific Computing, 34(5), A2718–A2739.

  • Oseledets, I. V., & Tyrtyshnikov, E. E. (2009). Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM Journal on Scientific Computing, 31(5), 3744–3759.

  • Schollwöck, U. (2011). The density-matrix renormalization group in the age of matrix product states. Annals of Physics, 326(1), 96–192.

  • Smolyak, S. A. (1963). Quadrature and interpolation formulas for tensor products of certain classes of functions. Soviet Mathematics Doklady, 4, 240–243.

  • Tucker, L. R. (1966). Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3), 279–311.

  • Vervliet, N., Debals, O., Sorber, L., & De Lathauwer, L. (2014). Breaking the curse of dimensionality using decompositions of incomplete tensors: Tensor-based scientific computing in big data analysis. IEEE Signal Processing Magazine, 31(5), 71–79.

  • White, S. R. (1993). Density-matrix algorithms for quantum renormalization groups. Physical Review B, 48(14), 10345–10356.

Author information

Corresponding author

Correspondence to Namgil Lee.

About this article

Cite this article

Lee, N., Cichocki, A. Fundamental tensor operations for large-scale data analysis using tensor network formats. Multidim Syst Sign Process 29, 921–960 (2018). https://doi.org/10.1007/s11045-017-0481-0
