Abstract
By combining the thin QR decomposition with the subsampled randomized Fourier transform (SRFT), we obtain an efficient randomized algorithm for computing an approximate Tucker decomposition with a given target multilinear rank. We also combine this algorithm with the power iteration technique to improve the accuracy of the computed decomposition. Using results on the singular values of the product of orthonormal matrices with the Kronecker product of SRFT matrices, we derive error bounds for both algorithms. Finally, the efficiency of these algorithms is illustrated by several numerical examples.
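To make the high-level recipe concrete, the following is a minimal Python/NumPy sketch of an SRFT-plus-thin-QR scheme of this kind. It is an illustration under our own simplifying assumptions (unit-modulus random phases for the diagonal matrix, uniform column sampling, and helper names such as `srft_sketch` and `rand_tucker_srft` that are ours), not the exact algorithm analyzed in the paper.

```python
import numpy as np

def srft_sketch(A, ell, rng):
    """Right-multiply A (m x n) by an SRFT test matrix
    Omega = sqrt(n/ell) * D * F * R without forming Omega:
    D scales columns by random unit-modulus phases, F is the
    unitary DFT, and R samples ell columns uniformly."""
    m, n = A.shape
    phases = np.exp(2j * np.pi * rng.random(n))       # diagonal of D
    C = np.fft.fft(A * phases, axis=1) / np.sqrt(n)   # rows of (A D) F
    idx = rng.choice(n, size=ell, replace=False)      # column sampling R
    return np.sqrt(n / ell) * C[:, idx]

def unfold(X, mode):
    """Mode-`mode` unfolding of a tensor."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(Xm, mode, shape):
    """Inverse of `unfold` for a tensor of the given full shape."""
    full = [shape[mode]] + [s for k, s in enumerate(shape) if k != mode]
    return np.moveaxis(Xm.reshape(full), 0, mode)

def ttm(X, M, mode):
    """Mode-`mode` tensor-times-matrix product."""
    shape = list(X.shape)
    shape[mode] = M.shape[0]
    return fold(M @ unfold(X, mode), mode, shape)

def rand_tucker_srft(X, ranks, oversample=10, seed=0):
    """Randomized Tucker sketch: per mode, compress the unfolding
    with an SRFT, orthonormalize via thin QR, then contract to get
    the core G so that X is approximated by G x_1 Q_1 ... x_N Q_N."""
    rng = np.random.default_rng(seed)
    factors = []
    for mode, r in enumerate(ranks):
        Y = srft_sketch(unfold(X, mode), r + oversample, rng)
        Q, _ = np.linalg.qr(Y)   # thin QR of the sketch
        factors.append(Q)        # keeps r + oversample columns
    G = X.astype(complex)
    for mode, Q in enumerate(factors):
        G = ttm(G, Q.conj().T, mode)
    return G, factors
```

The approximation is recovered by contracting the core back, e.g. `ttm(ttm(ttm(G, Qs[0], 0), Qs[1], 1), Qs[2], 2)` for a third-order tensor; the oversampled factors can afterwards be truncated to the target multilinear rank, for instance via an SVD of the core.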
References
Ailon, N., Chazelle, B.: The fast Johnson–Lindenstrauss transform and approximate nearest neighbors. SIAM J. Comput. 39, 302–322 (2009)
Aizenbud, Y., Shabat, G., Averbuch, A.: Randomized LU decomposition using sparse projections. Comput. Math. Appl. 72, 2525–2534 (2016)
Bader, B.W., Kolda, T.G.: Algorithm 862: Matlab tensor classes for fast algorithm prototyping. ACM Trans. Math. Softw. 32, 635–653 (2006)
Bader, B.W., Kolda, T.G. et al.: Matlab tensor toolbox version 3.0-dev. Available Online (2017). https://www.tensortoolbox.org
Boutsidis, C., Gittens, A.: Improved matrix algorithms via the subsampled randomized Hadamard transform. SIAM J. Matrix Anal. Appl. 34, 1301–1340 (2013)
Che, M., Wei, Y.: Randomized algorithms for the approximations of Tucker and the tensor train decompositions. Adv. Comput. Math. 45, 395–428 (2019)
Che, M., Wei, Y.: Theory and Computation of Complex Tensors and its Applications. Springer, Singapore (2020)
Che, M., Wei, Y., Yan, H.: The computation for low multilinear rank approximations of tensors via power scheme and random projection. SIAM J. Matrix Anal. Appl. 41, 605–636 (2020)
Che, M., Wei, Y., Yan, H.: Randomized algorithms for the low multilinear rank approximations of tensors. J. Comput. Appl. Math. 390, 113380 (2021)
Cichocki, A., Zdunek, R., Phan, A.H., Amari, S.-I.: Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation. Wiley, New York (2009)
Clarkson, K.L., Woodruff, D.P.: Low rank approximation and regression in input sparsity time. In: Proceedings of the 45th Annual ACM Symposium on Theory of Computing, pp. 81–90 (2013)
De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21, 1253–1278 (2000)
De Lathauwer, L., De Moor, B., Vandewalle, J.: On the best rank-1 and rank-\((r_1, r_2,\ldots, r_n)\) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl. 21, 1324–1342 (2000)
Drineas, P., Kannan, R., Mahoney, M.W.: Fast Monte Carlo algorithms for matrices II: computing a low-rank approximation to a matrix. SIAM J. Comput. 36, 158–183 (2006)
Drineas, P., Mahoney, M.W.: A randomized algorithm for a tensor-based generalization of the singular value decomposition. Linear Algebra Appl. 420, 553–571 (2007)
Drineas, P., Mahoney, M.W.: RandNLA: randomized numerical linear algebra. Commun. ACM 59, 80–90 (2016)
Eldén, L., Savas, B.: A Newton–Grassmann method for computing the best multilinear rank-\((r_1, r_2, r_3)\) approximation of a tensor. SIAM J. Matrix Anal. Appl. 31, 248–271 (2009)
Goreinov, S.A., Oseledets, I.V., Savostyanov, D.V.: Wedderburn rank reduction and Krylov subspace method for tensor approximation. Part 1: Tucker case. SIAM J. Sci. Comput. 34, A1–A27 (2012)
Grasedyck, L., Kressner, D., Tobler, C.: A literature survey of low-rank tensor approximation techniques. GAMM-Mitt. 36, 53–78 (2013)
Halko, N., Martinsson, P.G., Tropp, J.A.: Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 53, 217–288 (2011)
Ishteva, M., Absil, P.-A., Van Huffel, S., De Lathauwer, L.: Best low multilinear rank approximation of higher-order tensors, based on the Riemannian trust-region scheme. SIAM J. Matrix Anal. Appl. 32, 115–135 (2011)
Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51, 455–500 (2009)
Litvak, A.E., Pajor, A., Rudelson, M., Tomczak-Jaegermann, N.: Smallest singular value of random matrices and geometry of random polytopes. Adv. Math. 195, 491–523 (2005)
Lorente, L.S., Vega, J.M., Velazquez, A.: Compression of aerodynamic databases using high-order singular value decomposition. Aerosp. Sci. Technol. 14, 168–177 (2010)
Lorente, L.S., Vega, J.M., Velazquez, A.: Generation of aerodynamic databases using high-order singular value decomposition. J. Aircr. 45, 1779–1788 (2008)
Mahoney, M.W.: Randomized algorithms for matrices and data. Found. Trends Mach. Learn. 3, 123–224 (2011)
Martinsson, P.-G., Voronin, S.: A randomized blocked algorithm for efficiently computing rank-revealing factorizations of matrices. SIAM J. Sci. Comput. 38, S485–S507 (2016)
Minster, R., Saibaba, A.K., Kilmer, M.E.: Randomized algorithms for low-rank tensor decompositions in the Tucker format. SIAM J. Math. Data Sci. 2, 189–215 (2020)
Oseledets, I.V., Savostianov, D.V., Tyrtyshnikov, E.E.: Tucker dimensionality reduction of three-dimensional arrays in linear time. SIAM J. Matrix Anal. Appl. 30, 939–956 (2008)
Oymak, S., Tropp, J.A.: Universality laws for randomized dimension reduction, with applications. Inf. Inference 7, 337–446 (2018)
Reynolds, M., Doostan, A., Beylkin, G.: Randomized alternating least squares for canonical tensor decompositions: application to a PDE with random data. SIAM J. Sci. Comput. 38, A2634–A2664 (2016)
Savas, B., Eldén, L.: Handwritten digit classification using higher order singular value decomposition. Pattern Recogn. 40, 993–1003 (2007)
Savas, B., Lim, L.-H.: Quasi-Newton methods on Grassmannians and multilinear approximations of tensors. SIAM J. Sci. Comput. 32, 3352–3393 (2010)
Sun, Y., Guo, Y., Luo, C., Tropp, J.A., Udell, M.: Low-rank Tucker approximation of a tensor from streaming data. SIAM J. Math. Data Sci. 2, 1123–1150 (2020)
Symeonidis, P.: ClustHOSVD: item recommendation by combining semantically enhanced tag clustering with tensor HOSVD. IEEE Trans. Syst. Man Cybern. Syst. 46, 1240–1251 (2016)
Tropp, J.A., Yurtsever, A., Udell, M., Cevher, V.: Practical sketching algorithms for low-rank matrix approximation. SIAM J. Matrix Anal. Appl. 38, 1454–1485 (2017)
Tucker, L.R.: Some mathematical notes on three-mode factor analysis. Psychometrika 31, 279–311 (1966)
Vannieuwenhoven, N., Vandebril, R., Meerbergen, K.: A new truncation strategy for the higher-order singular value decomposition. SIAM J. Sci. Comput. 34, A1027–A1052 (2012)
Vasilescu, M., Terzopoulos, D.: Multilinear image analysis for facial recognition. In: Proceedings, 16th International Conference on Pattern Recognition, vol. 2, pp. 511–514. IEEE (2002)
Vasilescu, M., Terzopoulos, D.: Multilinear subspace analysis of image ensembles. In: IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 93–99. IEEE (2003)
Vasilescu, M., Terzopoulos, D.: Tensor textures: multilinear image-based rendering. ACM Trans. Graph. 23, 336–342 (2004)
Vershynin, R.: Introduction to the non-asymptotic analysis of random matrices. In: Eldar, Y.C., Kutyniok, G. (eds.) Compressed Sensing: Theory and Practice, pp. 210–268. Cambridge University Press, Cambridge (2012)
Vervliet, N., Debals, O., Sorber, L., Van Barel, M., De Lathauwer, L.: Tensorlab 3.0. Available Online (2016). http://tensorlab.net
Woodruff, D.P.: Sketching as a tool for numerical linear algebra. Found. Trends Theor. Comput. Sci. 10, 1–157 (2014)
Woolfe, F., Liberty, E., Rokhlin, V., Tygert, M.: A fast randomized algorithm for the approximation of matrices. Appl. Comput. Harmonic Anal. 25, 335–366 (2008)
Ying, J., Lu, H., Wei, Q., Cai, J., Guo, D., Wu, J., Chen, Z., Qu, X.: Hankel matrix nuclear norm regularized tensor completion for \(n\)-dimensional exponential signals. IEEE Trans. Signal Process. 65, 3702–3717 (2017)
Zhang, J., Saibaba, A.K., Kilmer, M.E., Aeron, S.: A randomized tensor singular value decomposition based on the t-product. Numer. Linear Algebra Appl. 25, e2179 (2018)
Zhou, G., Cichocki, A., Xie, S.: Decomposition of big tensors with low multilinear rank. ArXiv preprint (2014). arXiv:1412.1885v1
Acknowledgements
We would like to thank Editor-in-Chief Chi-Wang Shu and the two anonymous reviewers for their very helpful comments.
Funding
M. Che is supported by the National Natural Science Foundation of China under Grant 11901471. Y. Wei is supported by the National Natural Science Foundation of China under Grant 11771099 and the Innovation Program of the Shanghai Municipal Education Commission. H. Yan is supported by the Hong Kong Innovation and Technology Commission (ITC) and City University of Hong Kong (Projects 7005230 and 9610460).
Appendices
Proof of Lemma 4.5
Note that
To obtain a bound on \(\Vert {\mathbf {E}}\Vert _2\), it suffices to bound \({\mathfrak {E}}|e_{kk'}|^2\) from above for all \(k,k'=1,2,\dots ,K\).
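This reduction rests on the standard chain of inequalities (the spectral norm is dominated by the Frobenius norm, and Jensen's inequality moves the expectation inside the square root); we record it here for readability:
$$ {\mathfrak {E}}\Vert {\mathbf {E}}\Vert _2\le {\mathfrak {E}}\Vert {\mathbf {E}}\Vert _F\le \sqrt{{\mathfrak {E}}\Vert {\mathbf {E}}\Vert _F^2}=\Big (\sum _{k,k'=1}^{K}{\mathfrak {E}}|e_{kk'}|^2\Big )^{1/2}. $$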
Let \(i=i_1+(i_2-1)I_1\), \(i'=i_1'+(i_2'-1)I_1\), \(j=j_1+(j_2-1)I_1\), \(j'=j_1'+(j_2'-1)I_1\) and \(l=l_1+(l_2-1)L_1\). By (4.1), we have the expectation
By using the fact that \(|d_{i_1i_1}^{(1)}|=|d_{i_2i_2}^{(2)}|=1\), we obtain that
To bound the first term on the right-hand side of (A.1), we have
Note that
By the fact that \(|d_{i_1i_1}^{(1)}|=|d_{i_2i_2}^{(2)}|=1\), we obtain that
The independence of the random variables implies that
From Lemma 4.1, the fact that \({\mathfrak {E}}(d_{i_1i_1}^{(1)})={\mathfrak {E}}(d_{i_2i_2}^{(2)})=0\), and the fact that the columns of \({\mathbf {U}}\) are orthonormal, we have
which implies that
We now bound the second term on the right-hand side of (A.1). Note that
By the fact that \({\mathfrak {E}}(d_{i_1i_1}^{(1)})={\mathfrak {E}}(d_{i_2i_2}^{(2)})=0\), \(|d_{i_1i_1}^{(1)}|=|d_{i_2i_2}^{(2)}|=1\) and \({\mathfrak {E}}((d_{i_1i_1}^{(1)})^2)={\mathfrak {E}}((d_{i_2i_2}^{(2)})^2) ={\mathfrak {E}}(({\bar{d}}_{i_1i_1}^{(1)})^2)={\mathfrak {E}} (({\bar{d}}_{i_2i_2}^{(2)})^2)=0\), we have
Similarly, we can prove that
which implies that
Now we have
which implies that \({\mathfrak {E}}\Vert {\mathbf {E}}\Vert _2\le {\mathfrak {E}}\Vert {\mathbf {E}}\Vert _F\le \sqrt{K^2L_1L_2}\). By the Markov inequality, \(\mathrm{Prob}\{\Vert {\mathbf {E}}\Vert _2\ge \beta \sqrt{K^2L_1L_2}\}\le ({\mathfrak {E}}\Vert {\mathbf {E}}\Vert _2)/(\beta \sqrt{K^2L_1L_2})\le 1/\beta \), that is,
$$ \Vert {\mathbf {E}}\Vert _2\le \beta \sqrt{K^2L_1L_2} $$
holds with probability at least \(1-1/\beta \). This completes the proof of Lemma 4.5.
Some Remarks on the Existing Algorithms
For Tucker-SVD, Tucker-pSVD, tucker_als, mlsvd, mlsvd_rsi, Adap-Tucker and ran-Tucker, we use the following settings:
- For Tucker-pSVD, the number of subspace iterations is 1 (see the sketch after this list). For Tucker-SVD, Tucker-pSVD, Adap-Tucker and ran-Tucker, the positive integer K is set to 10.
- For tucker_als, the maximum number of iterations is 50, the order in which to loop through the dimensions is \(\{1,2,3\}\), the entries of the initial factor matrices are i.i.d. standard Gaussian variables, and the tolerance on the difference in fit is \(10^{-4}\).
- For mlsvd, the order in which to loop through the dimensions is \(\{1,2,3\}\), and a faster but possibly less accurate eigenvalue decomposition is used to compute the factor matrices.
- For mlsvd_rsi, the oversampling parameter is 10, the number of subspace iterations is 2, and the parts of the factor matrices and core tensor corresponding to the oversampling are removed.
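For reference, the subspace (power) iteration step mentioned above (one iteration for Tucker-pSVD, two for mlsvd_rsi) can be sketched as follows. This is our own minimal illustration, with `subspace_iteration` a hypothetical helper name, not code taken from any of the toolboxes; it assumes the initial sketch Y = A @ Omega has already been formed.

```python
import numpy as np

def subspace_iteration(A, Y, q=1):
    """Given an initial sketch Y = A @ Omega, perform q steps of
    subspace (power) iteration, returning an orthonormal basis of
    the range of (A A^H)^q A Omega. A thin QR after every product
    keeps the basis numerically orthonormal."""
    Q, _ = np.linalg.qr(Y)
    for _ in range(q):
        W, _ = np.linalg.qr(A.conj().T @ Q)   # basis of range(A^H Q)
        Q, _ = np.linalg.qr(A @ W)            # basis of range(A W)
    return Q
```

Powering the matrix in this way sharpens the decay of the singular values seen by the sketch, at the cost of extra passes over A, which is why a small q such as 1 or 2 is typically used.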
Cite this article
Che, M., Wei, Y. & Yan, H. An Efficient Randomized Algorithm for Computing the Approximate Tucker Decomposition. J Sci Comput 88, 32 (2021). https://doi.org/10.1007/s10915-021-01545-5
Keywords
- Approximate Tucker decomposition
- Randomized algorithms
- Random projection
- Power iteration technique
- Subsampled randomized Fourier transform
- Thin QR decomposition
- Dimension reduction maps