Abstract
The popular fully-connected tensor network (FCTN) decomposition has found successful applications in many fields. A standard method for computing this decomposition is alternating least squares (ALS); however, ALS often converges slowly and suffers from numerical instability. In this work, we investigate SVD-based algorithms for the FCTN decomposition to address these deficiencies. Building on a result about FCTN-ranks, we first propose a deterministic algorithm, FCTN-SVD, which approximates the FCTN decomposition to a prescribed accuracy. We then present a randomized version of the algorithm. Both synthetic and real data are used to test our algorithms. Numerical results show that they perform considerably better than existing methods and that randomization indeed accelerates FCTN-SVD. Moreover, we apply our algorithms to tensor-on-vector regression and obtain competitive performance.










Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Notes
The two degenerated decompositions differ slightly in form from the standard TT and TR decompositions. However, they are essentially the same and can be converted into each other using the Matlab functions \(\textsc {squeeze}\left( \cdot \right) \) and \(\textsc {permute}\left( \cdot \right) \).
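As a small, hedged illustration (the core \(\textbf{G}\) below is generic and not taken from the paper's algorithms), the conversion amounts to removing or repositioning a singleton mode of a core tensor:

% Hedged illustration: a generic core with a leading singleton mode.
G  = randn(1, 4, 5);
G1 = squeeze(G);            % 4-by-5 matrix: the singleton mode is removed
G2 = permute(G, [2 3 1]);   % moves the singleton mode to the end (MATLAB then drops it)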
References
Ahmadi-Asl S, Cichocki A, Phan AH, Asante-Mensah MG, Ghazani MM, Tanaka T, Oseledets IV (2020) Randomized algorithms for fast computation of low rank tensor ring model. Mach Learn Sci Technol 1(1):011001
Ahmadi-Asl S, Abukhovich S, Asante-Mensah MG, Cichocki A, Phan AH, Tanaka T, Oseledets IV (2021) Randomized algorithms for computation of Tucker decomposition and higher order SVD (HOSVD). IEEE Access 9:28684–28706
Al Daas H, Ballard G, Cazeaux P, Hallman E, Miedlar A, Pasha M, Reid TW, Saibaba AK (2023) Randomized algorithms for rounding in the tensor-train format. SIAM J Sci Comput 45(1):74–95
Bader BW, Kolda TG, et al (2021) Tensor Toolbox for MATLAB. Version 3.2.1
Bahadori MT, Yu QR, Liu Y (2014) Fast multivariate spatio-temporal analysis via low rank tensor learning. Advances in Neural Information Processing Systems 27
Che M, Wei Y (2019) Randomized algorithms for the approximations of Tucker and the tensor train decompositions. Adv Comput Math 45(1):395–428
Che M, Wei Y, Yan H (2020) The computation of low multilinear rank approximations of tensors via power scheme and random projection. SIAM J Matrix Anal Appl 41(2):605–636
Che M, Wei Y, Yan H (2021) Randomized algorithms for the low multilinear rank approximations of tensors. J Comput Appl Math 390:113380
Che M, Wei Y, Yan H (2021) An efficient randomized algorithm for computing the approximate Tucker decomposition. J Sci Comput 88(2):32
Che M, Wei Y, Yan H (2023) Efficient algorithms for Tucker decomposition via approximate matrix multiplication. arXiv preprint arXiv:2303.11612
Cichocki A, Mandic DP, De Lathauwer L, Zhou G, Zhao Q, Caiafa C, Phan AH (2015) Tensor decompositions for signal processing applications: from two-way to multiway component analysis. IEEE Signal Process Mag 32(2):145–163
Cichocki A, Lee N, Oseledets IV, Phan AH, Zhao Q, Mandic DP (2016) Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions. Found Trends Mach Learn 9(4–5):249–429
De Lathauwer L, De Moor B, Vandewalle J (2000) A multilinear singular value decomposition. SIAM J Matrix Anal Appl 21(4):1253–1278
Drineas P, Ipsen IC, Kontopoulou E-M, Magdon-Ismail M (2018) Structural convergence results for approximation of dominant subspaces from block Krylov spaces. SIAM J Matrix Anal Appl 39(2):567–586
Gu M (2015) Subspace iteration randomization and singular value problems. SIAM J Sci Comput 37(3):1139–1173
Halko N, Martinsson P-G, Tropp JA (2011) Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev 53(2):217–288
Han Z, Huang T, Zhao X, Zhang H, Liu Y (2023) Multi-dimensional data recovery via feature-based fully-connected tensor network decomposition. IEEE Trans Big Data
Hitchcock FL (1927) The expression of a tensor or a polyadic as a sum of products. J Math Phys 6(1–4):164–189
Huber B, Schneider R, Wolf S (2017) A randomized tensor train singular value decomposition. In: Compressed Sensing and Its Applications, pp. 261–290
Kolda TG, Bader BW (2009) Tensor decompositions and applications. SIAM Rev 51(3):455–500
Kressner D, Vandereycken B, Voorhaar R (2023) Streaming tensor train approximation. SIAM J Sci Comput 45(5):A2610–A2631
Li H, Zhu Y (2021) Randomized block Krylov subspace methods for trace and log-determinant estimators. BIT Numer Math 61:911–939
Liu Y, Zhao X, Song G, Zheng Y, Ng MK, Huang T (2024) Fully-connected tensor network decomposition for robust tensor completion problem. Inverse Prob Imaging 18(1):208–238
Lyu C, Zhao X, Li B, Zhang H, Huang T (2022) Multi-dimensional image recovery via fully-connected tensor network decomposition under the learnable transforms. J Sci Comput 93(2):49
Martinsson P-G, Tropp JA (2020) Randomized numerical linear algebra: foundations and algorithms. Acta Numerica 29:403–572
Martinsson P-G, Voronin S (2016) A randomized blocked algorithm for efficiently computing rank-revealing factorizations of matrices. SIAM J Sci Comput 38(5):485–507
Mickelin O, Karaman S (2020) On algorithms for and computing with the tensor ring decomposition. Numer Linear Algebra Appl 27(3):2289
Minster R, Saibaba AK, Kilmer ME (2020) Randomized algorithms for low-rank tensor decompositions in the Tucker format. SIAM J Math Data Sci 2(1):189–215
Murray R, Demmel J, Mahoney MW, Erichson NB, Melnichenko M, Malik OA, Grigori L, Luszczek P, Dereziński M, Lopes ME, et al (2023) Randomized numerical linear algebra: A perspective on the field with an eye to software. arXiv preprint arXiv:2302.11474
Musco C, Musco C (2015) Randomized block Krylov methods for stronger and faster approximate singular value decomposition. Advances in Neural Information Processing Systems 28
Oseledets IV (2011) Tensor-train decomposition. SIAM J Sci Comput 33(5):2295–2317
Rabusseau G, Kadri H (2016) Low-rank regression with tensor responses. Advances in Neural Information Processing Systems 29
Sidiropoulos ND, De Lathauwer L, Fu X, Huang K, Papalexakis EE, Faloutsos C (2017) Tensor decomposition for signal processing and machine learning. IEEE Trans Signal Process 65(13):3551–3582
Sun Y, Guo Y, Luo C, Tropp JA, Udell M (2020) Low-rank Tucker approximation of a tensor from streaming data. SIAM J Math Data Sci 2(4):1123–1150
Tropp JA, Webber RJ (2023) Randomized algorithms for low-rank matrix approximation: Design, analysis, and applications. arXiv preprint arXiv:2306.12418
Tucker LR (1966) Some mathematical notes on three-mode factor analysis. Psychometrika 31(3):279–311
Vervliet N, De Lathauwer L (2019) Numerical optimization-based algorithms for data fusion. In: Data Handling in Science and Technology vol. 31, pp. 81–128
Woolfe F, Liberty E, Rokhlin V, Tygert M (2008) A fast randomized algorithm for the approximation of matrices. Appl Comput Harmon Anal 25(3):335–366
Yu W, Gu Y, Li Y (2018) Efficient randomized algorithms for the fixed-precision low-rank matrix approximation. SIAM J Matrix Anal Appl 39(3):1339–1359
Yuan L, Li C, Cao J, Zhao Q (2019) Randomized tensor ring decomposition and its application to large-scale data reconstruction. In: IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2127–2131
Zhao Q, Zhou G, Xie S, Zhang L, Cichocki A (2016) Tensor ring decomposition. arXiv preprint arXiv:1606.05535
Zheng W, Zhao X, Zheng Y, Pang Z (2021b) Nonlocal patch-based fully connected tensor network decomposition for multispectral image inpainting. IEEE Geosci Remote Sens Lett 19:1–5
Zheng Y, Huang T, Zhao X, Zhao Q (2022) Tensor completion via fully-connected tensor network decomposition with regularized factors. J Sci Comput 92(1):8
Zheng W, Zhao X, Zheng Y, Huang T (2024) Provable stochastic algorithm for large-scale fully-connected tensor network decomposition. J Sci Comput 98(1):16
Zheng Y, Huang T, Zhao X, Zhao Q, Jiang T (2021a) Fully-connected tensor network decomposition and its application to higher-order tensor completion. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 11071–11078
Zhou G, Cichocki A, Xie S (2014) Decomposition of big tensors with low multilinear rank. arXiv preprint arXiv:1412.1885
Acknowledgements
The authors would like to thank the editor and the anonymous reviewers for their detailed comments and helpful suggestions, which helped considerably to improve the quality of the paper.
Funding
This work was supported by the National Natural Science Foundation of China (No. 11671060) and the Natural Science Foundation Project of CQ CSTC (No. cstc2019jcyj-msxmX0267).
Author information
Contributions
Both authors contributed to the study conception and design, and read and approved the final manuscript.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
Not Applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Proof of Theorem 3
For ease of exposition, we let
where \(\mathcal{G}_k\) with \(k=1,\cdots ,N\) are from Algorithm 4 and \({\overline{\times }}\) denotes a contracted tensor product whose explicit form we do not need here. Further, the procedure of Algorithm 4 gives
Note that \(\mathcal{G}_k\) with \(k=1,\cdots ,N\) are reshaped from orthonormal matrices. Hence, \(P_N\) is in fact an orthogonal projection. To bound the error between \(\mathcal X\) and \(P_N(\mathcal{X})\), we first present a definition.
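For completeness, we recall the underlying fact, stated here for a generic matrix \(\textbf{Q}\) with orthonormal columns (not tied to the notation of Algorithm 4): if \(\textbf{Q}^{T}\textbf{Q}=\textbf{I}\), then \(\left( \textbf{Q}\textbf{Q}^{T}\right) ^{2}=\textbf{Q}\left( \textbf{Q}^{T}\textbf{Q}\right) \textbf{Q}^{T}=\textbf{Q}\textbf{Q}^{T}\) and \(\left( \textbf{Q}\textbf{Q}^{T}\right) ^{T}=\textbf{Q}\textbf{Q}^{T}\), so \(\textbf{Q}\textbf{Q}^{T}\) is the orthogonal projector onto \({{\,\textrm{range}\,}}(\textbf{Q})\).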
Definition 7
(FCTN unfolding) The FCTN unfolding of a \(3N\)th-order tensor \(\mathcal{Z} \in \mathbb {R}^{I_1 \times I_2 \times \cdots \times I_{3N}}\) is the matrix \(\widehat{\textbf{Z}}\) of size \(\prod _{j=1}^{N} I_{3j-1} \times \prod _{j=1}^{N} I_{3j-2}I_{3j}\) defined element-wise via
Thus,
where \(\widehat{\textbf{X}}^{\left( N-1 \right) } \in {\mathbb {R}}^{R_{N-1,N} \times R_{1,N}\cdots R_{N-2,N}I_N}\) is the FCTN unfolding obtained from \(\mathcal{X}^{(N-1)}\), \(\textbf{X}^{\left( N-2 \right) }\) is the matrix in line 8 of Algorithm 4 for \(k=N-1\), and \(\left\langle \cdot ,\cdot \right\rangle \) denotes the standard inner product of matrices.
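As a rough MATLAB sketch of forming a matrix of the size required by Definition 7 (the precise index ordering within the row and column groups is fixed by the element-wise definition; the ordering below is only an assumption for illustration):

% Rough sketch: group modes 3j-1 as rows and the remaining modes as columns.
N = 2;                               % a 3N = 6th-order example
I = [2 3 4 5 6 7];                   % mode sizes I_1,...,I_6
Z = randn(I);                        % generic 6th-order tensor
rowModes = 3*(1:N) - 1;              % modes 2 and 5, i.e., sizes I_{3j-1}
colModes = setdiff(1:3*N, rowModes); % modes 1, 3, 4, 6, i.e., sizes I_{3j-2}, I_{3j}
Zhat = reshape(permute(Z, [rowModes, colModes]), ...
               prod(I(rowModes)), prod(I(colModes)));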
Since \(\textbf{Q}^{(k)}\) is orthonormal, the matrix in line 8 of Algorithm 4 satisfies
where \(\textbf{X}^{(0)} = \textbf{X}_{<1>}\). This result together with the fact
leads to
Thus, combining the above recursive formula with (A1) implies
To continue, we need a lemma as follows.
Lemma 1
(Minster et al. (2020), Theorem 2.4) Let \(\textbf{A} \in {\mathbb {R}}^{m\times n}\), and choose a target rank \(r\ge 2\) and an oversampling parameter \(p \ge 2\), where \(r+p \le \min \left\{ {m,n} \right\} \). Draw a standard Gaussian matrix \(\mathbf{\Omega } \in {\mathbb {R}}^{n\times {(r+p)}}\) and construct \(\textbf{Y}=\textbf{A}{} \mathbf{\Omega }\). Assume \(\textbf{Q}_Y\) and \(\textbf{U}_{\textbf{Q}_Y^T\textbf{Y}}\) are the orthonormal basis matrices of \({{\,\textrm{range}\,}}(\textbf{Y})\) and \({{\,\textrm{range}\,}}(\textbf{Q}_Y^T\textbf{Y})\), respectively. Set \(\textbf{Q}=\textbf{Q}_Y\textbf{U}_{\textbf{Q}_Y^T\textbf{Y}}(:,1:r)\). Then the following expected approximation error holds:
where \(\sigma _{l}(\textbf{A})\) denotes the \(l\)th singular value of \(\textbf{A}\).
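As a minimal MATLAB sketch of the construction in Lemma 1 (the matrix \(\textbf{A}\) and the sizes below are illustrative only and not taken from Algorithm 4):

% Minimal sketch of the construction in Lemma 1.
m = 500; n = 400; r = 10; p = 5;                  % target rank r, oversampling p
A = randn(m, r) * randn(r, n) + 1e-6*randn(m, n); % test matrix of numerical rank about r
Omega = randn(n, r + p);                          % standard Gaussian test matrix
Y = A * Omega;                                    % sketch of range(A)
[QY, ~] = qr(Y, 0);                               % orthonormal basis of range(Y)
[U, ~, ~] = svd(QY' * Y, 'econ');                 % orthonormal basis of range(QY'*Y)
Q = QY * U(:, 1:r);                               % truncation to the leading r columns
err = norm(A - Q * (Q' * A), 'fro');              % error bounded in expectation by Lemma 1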
Hence,
As a result,
Note that, by the procedure of Algorithm 4 and Theorem 1,
where \(\textbf{r}=[r,\cdots ,r]\) are the FCTN-ranks of \(\mathcal X\). Hence, the desired result holds.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Wang, M., Li, H. SVD-based algorithms for fully-connected tensor network decomposition. Comp. Appl. Math. 43, 265 (2024). https://doi.org/10.1007/s40314-024-02772-w
Keywords
- Fully-connected tensor network decomposition
- SVD
- Randomized algorithm
- Alternating least squares
- Sketching