
An Efficient Randomized Algorithm for Computing the Approximate Tucker Decomposition

Published in the Journal of Scientific Computing (2021)

Abstract

By combining the thin QR decomposition with the subsampled randomized Fourier transform (SRFT), we obtain an efficient randomized algorithm for computing an approximate Tucker decomposition with a given target multilinear rank. We also combine this randomized algorithm with the power iteration technique to improve the efficiency of the algorithm. Using results on the singular values of the product of matrices with orthonormal columns and the Kronecker product of SRFT matrices, we derive error bounds for both algorithms. Finally, the efficiency of these algorithms is illustrated by several numerical examples.
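As a concrete illustration of the scheme described above, the following is a minimal NumPy sketch, not the authors' implementation: each mode-\(n\) unfolding is compressed by an SRFT test matrix, a thin QR factorization of the sketch yields the mode-\(n\) factor matrix, optional power iterations refine it, and the core tensor is formed by multilinear products with the conjugate transposes of the factors. The function names, the unit-modulus random diagonal, and the unfolding convention are our assumptions.

```python
import numpy as np

def srft_sketch(A, k, rng):
    # Right-multiply A (m x n) by an SRFT test matrix sqrt(n/k) * D * F * R:
    # D is a random unit-modulus diagonal, F the unitary DFT, and R samples
    # k columns uniformly at random without replacement.
    m, n = A.shape
    d = np.exp(2j * np.pi * rng.random(n))         # random unit-modulus signs
    Y = np.fft.fft(A * d, axis=1) / np.sqrt(n)     # unitary DFT of each row
    cols = rng.choice(n, size=k, replace=False)    # uniform column sampling
    return np.sqrt(n / k) * Y[:, cols]

def randomized_tucker(X, ranks, power_iters=0, seed=0):
    # Approximate Tucker decomposition with target multilinear rank `ranks`.
    rng = np.random.default_rng(seed)
    factors = []
    for n in range(X.ndim):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)    # mode-n unfolding
        Q, _ = np.linalg.qr(srft_sketch(Xn, ranks[n], rng))  # thin QR of sketch
        for _ in range(power_iters):                         # optional power iteration
            Q, _ = np.linalg.qr(Xn @ (Xn.conj().T @ Q))
        factors.append(Q)
    core = X.astype(complex)
    for n, Q in enumerate(factors):              # core = X x_1 Q_1^H ... x_N Q_N^H
        core = np.moveaxis(
            np.tensordot(Q.conj().T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, factors
```

For a third-order tensor, `randomized_tucker(X, (r1, r2, r3), power_iters=1)` corresponds to the power-iteration variant; the reconstruction \({\mathcal {G}}\times _1{\mathbf {Q}}_1\times _2{\mathbf {Q}}_2\times _3{\mathbf {Q}}_3\) then approximates \({\mathcal {X}}\).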



Acknowledgements

We would like to thank Editor-in-Chief Chi-Wang Shu and two anonymous reviewers for very helpful comments.

Author information

Correspondence to Yimin Wei.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

M. Che: This author is supported by the National Natural Science Foundation of China under Grant 11901471. Y. Wei: This author is supported by the National Natural Science Foundation of China under Grant 11771099 and Innovation Program of Shanghai Municipal Education Commission. H. Yan: This author is supported by the Hong Kong Innovation and Technology Commission (ITC) and City University of Hong Kong (Projects 7005230 and 9610460).

Appendices

Proof of Lemma 4.5

Note that

$$\begin{aligned} \Vert {\mathbf {E}}\Vert _2\le \Vert {\mathbf {E}}\Vert _F=\sqrt{\sum _{k,k'=1}^K|e_{kk'}|^2}. \end{aligned}$$

To bound \(\Vert {\mathbf {E}}\Vert _2\), it therefore suffices to bound \({\mathfrak {E}}|e_{kk'}|^2\) for all \(k,k'=1,2,\dots ,K\).

Let \(i=i_1+(i_2-1)I_1\), \(i'=i_1'+(i_2'-1)I_1\), \(j=j_1+(j_2-1)I_1\), \(j'=j_1'+(j_2'-1)I_1\) and \(l=l_1+(l_2-1)L_1\); for example, with \(I_1=2\) and \(I_2=3\), the pair \((i_1,i_2)=(2,3)\) corresponds to the linear index \(i=2+(3-1)\cdot 2=6\). By (4.1), we have

$$\begin{aligned} \begin{aligned} {\mathfrak {E}}|e_{kk'}|^2&={\mathfrak {E}}\left( \sum _{i_1=1}^{I_1} \sum _{i_2=1}^{I_2}d_{i_1i_1}^{(1)}d_{i_2i_2}^{(2)}u_{ik}\sum _{i_1'\ne i_1} \sum _{i_2'\ne i_2}{\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)} {\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'}\right. \\&\left. \quad \quad \cdot \sum _{j_1=1}^{I_1}\sum _{j_2=1}^{I_2} {\bar{d}}_{j_1j_1}^{(1)}{\bar{d}}_{j_2j_2}^{(2)}{\bar{u}}_{jk} \sum _{j_1'\ne j_1}\sum _{j_2'\ne j_2}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)} u_{j'k'}h^{(1)}_{j_1j_1'}h^{(2)}_{j_2j_2'}\right) \\&={\mathfrak {E}}\left( \sum _{i_1,j_1=1}^{I_1}\sum _{i_2,j_2=1}^{I_2} \left( d_{i_1i_1}^{(1)}d_{i_2i_2}^{(2)}u_{ik}\sum _{i_1'\ne i_1}\sum _{i_2' \ne i_2}{\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)} {\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'}\right) \right. \\&\left. \quad \quad \cdot \left( {\bar{d}}_{j_1j_1}^{(1)} {\bar{d}}_{j_2j_2}^{(2)}{\bar{u}}_{jk}\sum _{j_1'\ne j_1}\sum _{j_2'\ne j_2}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)}u_{j'k'}h^{(1)}_{j_1j_1'} h^{(2)}_{j_2j_2'}\right) \right) . \end{aligned} \end{aligned}$$

Since \(|d_{i_1i_1}^{(1)}|=|d_{i_2i_2}^{(2)}|=1\), we obtain

$$\begin{aligned}&{\mathfrak {E}}\left( \sum _{i_1,j_1=1}^{I_1}\sum _{i_2,j_2=1}^{I_2} \left( d_{i_1i_1}^{(1)}d_{i_2i_2}^{(2)}u_{ik}\sum _{i_1'\ne i_1} \sum _{i_2'\ne i_2}{\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)} {\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'}\right) \right. \nonumber \\&\left. \quad \quad \cdot \left( {\bar{d}}_{j_1j_1}^{(1)} {\bar{d}}_{j_2j_2}^{(2)}{\bar{u}}_{jk}\sum _{j_1'\ne j_1}\sum _{j_2' \ne j_2}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)}u_{j'k'}h^{(1)}_{j_1j_1'} h^{(2)}_{j_2j_2'}\right) \right) \nonumber \\&\quad ={\mathfrak {E}}\left( \sum _{i_1=1}^{I_1}\sum _{i_2=1}^{I_2}|u_{ik}|^2 \left| \sum _{i_1'\ne i_1}\sum _{i_2'\ne i_2}d_{i_1'i_1'}^{(1)} d_{i_2'i_2'}^{(2)}u_{i'k'}h^{(1)}_{i_1i_1'}h^{(2)}_{i_2i_2'}\right| ^2 \right) \nonumber \\&\quad \quad +\,{\mathfrak {E}}\left( \sum _{i_1=1}^{I_1}\sum _{i_2\ne j_2} d_{i_2i_2}^{(2)}{\bar{d}}_{j_2j_2}^{(2)}u_{ik}{\bar{u}}_{jk}\sum _{i_1' \ne i_1}\sum _{i_2'\ne i_2}{\bar{d}}_{i_1'i_1'}^{(1)} {\bar{d}}_{i_2'i_2'}^{(2)}{\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'} {\bar{h}}^{(2)}_{i_2i_2'}\right. \nonumber \\&\left. \quad \quad \sum _{j_1'\ne i_1}\sum _{j_2'\ne j_2}d_{j_1'j_1'}^{(1)} d_{j_2'j_2'}^{(2)}u_{j'k'}h^{(1)}_{i_1j_1'}h^{(2)}_{j_2j_2'}\right) \nonumber \\&\quad \quad +\,{\mathfrak {E}}\left( \sum _{i_1\ne j_1}\sum _{i_2=1}^{I_2} d_{i_1i_1}^{(1)}{\bar{d}}_{j_1j_1}^{(1)}u_{ik}{\bar{u}}_{jk}\sum _{i_1' \ne i_1}\sum _{i_2'\ne i_2}{\bar{d}}_{i_1'i_1'}^{(1)} {\bar{d}}_{i_2'i_2'}^{(2)}{\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'} {\bar{h}}^{(2)}_{i_2i_2'}\right. \nonumber \\&\left. \quad \quad \sum _{j_1'\ne j_1}\sum _{j_2'\ne i_2}d_{j_1'j_1'}^{(1)} d_{j_2'j_2'}^{(2)}u_{j'k'}h^{(1)}_{j_1j_1'}h^{(2)}_{i_2j_2'}\right) \nonumber \\&\quad \quad +\,{\mathfrak {E}}\left( \sum _{i_1\ne j_1}\sum _{i_2\ne j_2} d_{i_1i_1}^{(1)}d_{i_2i_2}^{(2)}{\bar{d}}_{j_1j_1}^{(1)} {\bar{d}}_{j_2j_2}^{(2)}u_{ik}{\bar{u}}_{jk}\sum _{i_1'\ne i_1} \sum _{i_2'\ne i_2}{\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)} {\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'}\right. \nonumber \\&\left. \quad \quad \sum _{j_1'\ne j_1}\sum _{j_2'\ne j_2}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)}u_{j'k'} h^{(1)}_{j_1j_1'}h^{(2)}_{j_2j_2'}\right) . \end{aligned}$$
(A.1)

To bound the first term on the right-hand side of (A.1), we have

$$\begin{aligned} \begin{aligned}&{\mathfrak {E}}\left( \sum _{i_1=1}^{I_1}\sum _{i_2=1}^{I_2}|u_{ik}|^2 \left| \sum _{i_1'\ne i_1}\sum _{i_2'\ne i_2} d_{i_1'i_1'}^{(1)}d_{i_2'i_2'}^{(2)}u_{i'k'}h^{(1)}_{i_1i_1'} h^{(2)}_{i_2i_2'}\right| ^2\right) =\sum _{i_1=1}^{I_1}\sum _{i_2=1}^{I_2} |u_{ik}|^2\\&\qquad {\mathfrak {E}}\left| \sum _{i_1'\ne i_1}\sum _{i_2'\ne i_2}d_{i_1'i_1'}^{(1)}d_{i_2'i_2'}^{(2)}u_{i'k'}h^{(1)}_{i_1i_1'} h^{(2)}_{i_2i_2'}\right| ^2. \end{aligned} \end{aligned}$$

Note that

$$\begin{aligned} \begin{aligned}&{\mathfrak {E}}\left| \sum _{i_1'\ne i_1}\sum _{i_2'\ne i_2} d_{i_1'i_1'}^{(1)}d_{i_2'i_2'}^{(2)}u_{i'k'}h^{(1)}_{i_1i_1'} h^{(2)}_{i_2i_2'}\right| ^2\\&\quad ={\mathfrak {E}}\left( \sum _{i_1'\ne i_1}\sum _{i_2' \ne i_2}{\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)} {\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'} \sum _{j_1'\ne i_1}\sum _{j_2'\ne i_2}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)} u_{j'k'}h^{(1)}_{i_1j_1'}h^{(2)}_{i_2j_2'}\right) \\&\quad =\sum _{i_1',j_1'\ne i_1}\sum _{i_2',j_2'\ne i_2}{\bar{u}}_{i'k'}u_{j'k'}{\mathfrak {E}} \left( {\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)}{\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'} h^{(1)}_{i_1j_1'}h^{(2)}_{i_2j_2'}\right) . \end{aligned} \end{aligned}$$

Since \(|d_{i_1i_1}^{(1)}|=|d_{i_2i_2}^{(2)}|=1\), we obtain

$$\begin{aligned} \begin{aligned}&\sum _{i_1',j_1'\ne i_1}\sum _{i_2',j_2'\ne i_2} {\bar{u}}_{i'k'}u_{j'k'}{\mathfrak {E}}\left( {\bar{d}}_{i_1'i_1'}^{(1)} {\bar{d}}_{i_2'i_2'}^{(2)}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)} {\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'}h^{(1)}_{i_1j_1'} h^{(2)}_{i_2j_2'}\right) \\&\quad =\sum _{i_1'\ne i_1}\sum _{i_2'\ne i_2}|u_{i'k'}|^2{\mathfrak {E}} \left( |h^{(1)}_{i_1i_1'}|^2|h^{(2)}_{i_2i_2'}|^2\right) \\&\qquad +\,\sum _{i_1'\ne i_1}\sum _{i_2',j_2'\ne i_2, i_2'\ne j_2'} {\bar{u}}_{i'k'}u_{j'k'}{\mathfrak {E}}\left( {\bar{d}}_{i_2'i_2'}^{(2)} d_{j_2'j_2'}^{(2)}|h^{(1)}_{i_1i_1'}|^2{\bar{h}}^{(2)}_{i_2i_2'} h^{(2)}_{i_2j_2'}\right) \\&\qquad +\,\sum _{i_1',j_1'\ne i_1, i_1'\ne j_1'}\sum _{i_2'\ne i_2} {\bar{u}}_{i'k'}u_{j'k'}{\mathfrak {E}}\left( {\bar{d}}_{i_1'i_1'}^{(1)} d_{j_1'j_1'}^{(1)}|h^{(2)}_{i_2i_2'}|^2{\bar{h}}^{(1)}_{i_1i_1'} h^{(1)}_{i_1j_1'}\right) \\&\qquad +\,\sum _{i_1',j_1'\ne i_1, i_1'\ne j_1'}\sum _{i_2',j_2'\ne i_2, i_2'\ne j_2'}{\bar{u}}_{i'k'}u_{j'k'}{\mathfrak {E}}\left( {\bar{d}}_{i_1'i_1'}^{(1)} {\bar{d}}_{i_2'i_2'}^{(2)}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)} {\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'}h^{(1)}_{i_1j_1'} h^{(2)}_{i_2j_2'}\right) . \end{aligned} \end{aligned}$$

The independence of the random variables implies that

$$\begin{aligned} \begin{aligned}&\sum _{i_1',j_1'\ne i_1}\sum _{i_2',j_2'\ne i_2}{\bar{u}}_{i'k'} u_{j'k'}{\mathfrak {E}}\left( {\bar{d}}_{i_1'i_1'}^{(1)} {\bar{d}}_{i_2'i_2'}^{(2)}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)} {\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'}h^{(1)}_{i_1j_1'} h^{(2)}_{i_2j_2'}\right) \\&\quad =\sum _{i_1'\ne i_1}\sum _{i_2'\ne i_2}|u_{i'k'}|^2{\mathfrak {E}} \left( |h^{(1)}_{i_1i_1'}|^2\right) {\mathfrak {E}} \left( |h^{(2)}_{i_2i_2'}|^2\right) \\&\qquad +\,\sum _{i_1'\ne i_1}\sum _{i_2',j_2'\ne i_2, i_2'\ne j_2'} {\bar{u}}_{i'k'}u_{j'k'}{\mathfrak {E}}\left( {\bar{d}}_{i_2'i_2'}^{(2)} \right) {\mathfrak {E}}\left( d_{j_2'j_2'}^{(2)}\right) {\mathfrak {E}}|h^{(1)}_{i_1i_1'}|^2{\mathfrak {E}} \left( {\bar{h}}^{(2)}_{i_2i_2'}h^{(2)}_{i_2j_2'}\right) \\&\qquad +\,\sum _{i_1',j_1'\ne i_1, i_1'\ne j_1'}\sum _{i_2'\ne i_2} {\bar{u}}_{i'k'}u_{j'k'}{\mathfrak {E}}\left( {\bar{d}}_{i_1'i_1'}^{(1)} \right) {\mathfrak {E}}\left( d_{j_1'j_1'}^{(1)}\right) {\mathfrak {E}} |h^{(2)}_{i_2i_2'}|^2{\mathfrak {E}}\left( {\bar{h}}^{(1)}_{i_1i_1'} h^{(1)}_{i_1j_1'}\right) \\&\qquad +\,\sum _{i_1',j_1'\ne i_1, i_1'\ne j_1'}\sum _{i_2',j_2'\ne i_2, i_2'\ne j_2'}{\bar{u}}_{i'k'}u_{j'k'}{\mathfrak {E}} \left( {\bar{d}}_{i_1'i_1'}^{(1)}\right) {\mathfrak {E}} \left( {\bar{d}}_{i_2'i_2'}^{(2)}\right) {\mathfrak {E}} \left( d_{j_1'j_1'}^{(1)}\right) {\mathfrak {E}}\left( d_{j_2'j_2'}^{(2)} \right) \\&\qquad {\mathfrak {E}}\left( {\bar{h}}^{(1)}_{i_1i_1'} {\bar{h}}^{(2)}_{i_2i_2'}h^{(1)}_{i_1j_1'}h^{(2)}_{i_2j_2'}\right) . \end{aligned} \end{aligned}$$

From Lemma 4.1, the fact that \({\mathfrak {E}}(d_{i_1i_1}^{(1)})={\mathfrak {E}}(d_{i_2i_2}^{(2)})=0\), and the fact that the columns of \({\mathbf {U}}\) are orthonormal, we have

$$\begin{aligned} \begin{aligned}&\sum _{i_1',j_1'\ne i_1}\sum _{i_2',j_2'\ne i_2}{\bar{u}}_{i'k'}u_{j'k'} {\mathfrak {E}}\left( {\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)} d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)}{\bar{h}}^{(1)}_{i_1i_1'} {\bar{h}}^{(2)}_{i_2i_2'}h^{(1)}_{i_1j_1'}h^{(2)}_{i_2j_2'}\right) \\&\quad =\sum _{i_1'\ne i_1}\sum _{i_2'\ne i_2}|u_{i'k'}|^2{\mathfrak {E}} \left( |h^{(1)}_{i_1i_1'}|^2\right) {\mathfrak {E}} \left( |h^{(2)}_{i_2i_2'}|^2\right) =L_1L_2\sum _{i_1'\ne i_1}\sum _{i_2' \ne i_2}|u_{i'k'}|^2\\&\quad \le L_1L_2\sum _{i_1'=1}^{I_1}\sum _{i_2'=1}^{I_2}|u_{i'k'}|^2=L_1L_2, \end{aligned} \end{aligned}$$

which implies that

$$\begin{aligned} {\mathfrak {E}}\left( \sum _{i_1=1}^{I_1}\sum _{i_2=1}^{I_2}|u_{ik}|^2 \left| \sum _{i_1'\ne i_1}\sum _{i_2'\ne i_2} d_{i_1'i_1'}^{(1)}d_{i_2'i_2'}^{(2)}u_{i'k'}h^{(1)}_{i_1i_1'} h^{(2)}_{i_2i_2'}\right| ^2\right) \le L_1L_2. \end{aligned}$$

We now bound the second term on the right-hand side of (A.1). We have

$$\begin{aligned} \begin{aligned}&{\mathfrak {E}}\left( \sum _{i_1=1}^{I_1}\sum _{i_2\ne j_2} d_{i_2i_2}^{(2)}{\bar{d}}_{j_2j_2}^{(2)}u_{ik}{\bar{u}}_{jk} \sum _{i_1'\ne i_1}\sum _{i_2'\ne i_2}{\bar{d}}_{i_1'i_1'}^{(1)} {\bar{d}}_{i_2'i_2'}^{(2)}{\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'} {\bar{h}}^{(2)}_{i_2i_2'}\sum _{j_1'\ne i_1}\sum _{j_2'\ne j_2} d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)}u_{j'k'}h^{(1)}_{i_1j_1'} h^{(2)}_{j_2j_2'}\right) \\&\quad ={\mathfrak {E}}\left( \sum _{i_1\ne i_1'}\sum _{i_2\ne j_2} \left| d_{i_1'i_1'}^{(1)}\right| ^2\left( d_{i_2i_2}^{(2)}\right) ^2 \left( {\bar{d}}_{j_2j_2}^{(2)}\right) ^2u_{ik}u_{j'k'}{\bar{u}}_{jk} {\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2j_2} h^{(1)}_{i_1i_1'}h^{(2)}_{j_2i_2}\right) \\&\qquad +\,{\mathfrak {E}}\left( \sum _{i_1\ne i_1'}\sum _{i_2\ne j_2}\sum _{j_2' \ne i_2,j_2}\left| d_{i_1'i_1'}^{(1)}\right| ^2d_{i_2i_2}^{(2)} d_{j_2'j_2'}^{(2)}\left( {\bar{d}}_{j_2j_2}^{(2)}\right) ^2u_{ik}u_{j'k'} {\bar{u}}_{jk}{\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'} {\bar{h}}^{(2)}_{i_2j_2}h^{(1)}_{i_1i_1'}h^{(2)}_{j_2j_2'}\right) \\&\qquad +\,{\mathfrak {E}}\left( \sum _{i_1\ne i_1'}\sum _{i_2\ne j_2}\sum _{i_2' \ne i_2,j_2}\left| d_{i_1'i_1'}^{(1)}\right| ^2\left( d_{i_2i_2}^{(2)} \right) ^2{\bar{d}}_{i_2'i_2'}^{(2)}{\bar{d}}_{j_2j_2}^{(2)}u_{ik}u_{j'k'} {\bar{u}}_{jk}{\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'} {\bar{h}}^{(2)}_{i_2i_2'}h^{(1)}_{i_1i_1'}h^{(2)}_{j_2i_2}\right) \\&\qquad +\,{\mathfrak {E}}\left( \sum _{\begin{array}{c} i_1',j_1'\ne i_1\\ i_1'\ne j_1' \end{array}}\sum _{i_2\ne j_2}d_{i_2i_2}^{(2)}{\bar{d}}_{j_2j_2}^{(2)}u_{ik}{\bar{u}}_{jk} \sum _{i_2'\ne i_2,j_2}{\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)} {\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'} \sum _{j_2'\ne i_2,j_2}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)}u_{j'k'}h^{(1)}_{i_1j_1'} h^{(2)}_{j_2j_2'}\right) . \end{aligned} \end{aligned}$$

Since \({\mathfrak {E}}(d_{i_1i_1}^{(1)})={\mathfrak {E}}(d_{i_2i_2}^{(2)})=0\), \(|d_{i_1i_1}^{(1)}|=|d_{i_2i_2}^{(2)}|=1\) and \({\mathfrak {E}}((d_{i_1i_1}^{(1)})^2)={\mathfrak {E}}((d_{i_2i_2}^{(2)})^2) ={\mathfrak {E}}(({\bar{d}}_{i_1i_1}^{(1)})^2)={\mathfrak {E}} (({\bar{d}}_{i_2i_2}^{(2)})^2)=0\), we have

$$\begin{aligned} \begin{aligned}&{\mathfrak {E}}\left( \sum _{i_1=1}^{I_1}\sum _{i_2\ne j_2}d_{i_2i_2}^{(2)} {\bar{d}}_{j_2j_2}^{(2)}u_{ik}{\bar{u}}_{jk}\sum _{i_1'\ne i_1} \sum _{i_2'\ne i_2}{\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)} {\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'}\right. \\&\left. \quad \quad \sum _{j_1'\ne i_1}\sum _{j_2'\ne j_2}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)}u_{j'k'} h^{(1)}_{i_1j_1'}h^{(2)}_{j_2j_2'}\right) =0. \end{aligned} \end{aligned}$$

Similarly, we can prove that

$$\begin{aligned} \begin{aligned}&{\mathfrak {E}}\left( \sum _{i_1\ne j_1}\sum _{i_2=1}^{I_2}d_{i_1i_1}^{(1)} {\bar{d}}_{j_1j_1}^{(1)}u_{ik}{\bar{u}}_{jk}\sum _{i_1'\ne i_1}\sum _{i_2' \ne i_2}{\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)} {\bar{u}}_{i'k'}{\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'}\right. \\&\left. \quad \quad \sum _{j_1'\ne j_1}\sum _{j_2'\ne i_2}d_{j_1'j_1'}^{(1)} d_{j_2'j_2'}^{(2)}u_{j'k'}h^{(1)}_{j_1j_1'}h^{(2)}_{i_2j_2'}\right) =0,\\&{\mathfrak {E}}\left( \sum _{i_1\ne j_1}\sum _{i_2\ne j_2}d_{i_1i_1}^{(1)} d_{i_2i_2}^{(2)}{\bar{d}}_{j_1j_1}^{(1)}{\bar{d}}_{j_2j_2}^{(2)}u_{ik} {\bar{u}}_{jk}\sum _{i_1'\ne i_1}\sum _{i_2'\ne i_2} {\bar{d}}_{i_1'i_1'}^{(1)}{\bar{d}}_{i_2'i_2'}^{(2)}{\bar{u}}_{i'k'} {\bar{h}}^{(1)}_{i_1i_1'}{\bar{h}}^{(2)}_{i_2i_2'}\right. \\&\left. \quad \quad \sum _{j_1'\ne j_1}\sum _{j_2'\ne j_2}d_{j_1'j_1'}^{(1)}d_{j_2'j_2'}^{(2)}u_{j'k'}h^{(1)}_{j_1j_1'} h^{(2)}_{j_2j_2'}\right) =0, \end{aligned} \end{aligned}$$

which implies that

$$\begin{aligned} {\mathfrak {E}}|e_{kk'}|^2\le L_1L_2. \end{aligned}$$

Summing over \(k\) and \(k'\) gives

$$\begin{aligned} {\mathfrak {E}}\Vert {\mathbf {E}}\Vert _F^2={\mathfrak {E}} \sum _{k,k'=1}^K|e_{kk'}|^2\le K^2L_1L_2, \end{aligned}$$

which implies that \({\mathfrak {E}}\Vert {\mathbf {E}}\Vert _2^2\le {\mathfrak {E}}\Vert {\mathbf {E}}\Vert _F^2\le K^2L_1L_2\). Applying the Markov inequality to \(\Vert {\mathbf {E}}\Vert _2^2\), for any \(\beta >1\),

$$\begin{aligned} {\mathbb {P}}\left( \Vert {\mathbf {E}}\Vert _2^2\ge \beta K^2L_1L_2\right) \le \frac{{\mathfrak {E}}\Vert {\mathbf {E}}\Vert _2^2}{\beta K^2L_1L_2}\le \frac{1}{\beta }, \end{aligned}$$

so that

$$\begin{aligned} \Vert {\mathbf {E}}\Vert _2\le \sqrt{\beta K^2L_1L_2} \end{aligned}$$

holds with probability at least \(1-1/\beta \). Hence, the proof of Lemma 4.5 is complete.

Some Remarks for the Existing Algorithms

For Tucker-SVD, Tucker-pSVD, tucker_als, mlsvd, mlsvd_rsi, Adap-Tucker and ran-Tucker, we use the following settings:

  • For Tucker-pSVD, the number of subspace iterations to be performed is 1. For Tucker-SVD, Tucker-pSVD, Adap-Tucker and ran-Tucker, the positive integer K is set to 10.

  • For tucker_als, the maximum number of iterations is set to 50, the order in which to loop through the dimensions is \(\{1,2,3\}\), the initial factor matrices have i.i.d. standard Gaussian entries, and the tolerance on the difference in fit is set to 0.0001.

  • For mlsvd, the order to loop through dimensions is \(\{1,2,3\}\) and a faster but possibly less accurate eigenvalue decomposition is used to compute the factor matrices.

  • For mlsvd_rsi, the oversampling parameter is 10, the number of subspace iterations is 2, and we discard the parts of the factor matrices and the core tensor corresponding to the oversampling; a sketch of this truncation step follows this list.
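To make the last item concrete, the following is a minimal NumPy sketch of an mlsvd_rsi-style randomized range finder for a single mode: a Gaussian sketch with oversampling parameter \(p=10\), \(q=2\) subspace iterations, and a final truncation that discards the directions introduced by oversampling. This is our illustration under these assumptions, not Tensorlab code; the function and parameter names are ours.

```python
import numpy as np

def oversampled_range_finder(A, r, p=10, q=2, rng=None):
    # Orthonormal basis for an approximate rank-r range of A (m x n),
    # computed with oversampling p and q subspace iterations.
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    Y = A @ rng.standard_normal((n, r + p))   # sketch with r + p test vectors
    Q, _ = np.linalg.qr(Y)                    # thin QR of the sketch
    for _ in range(q):                        # subspace (power) iterations
        Q, _ = np.linalg.qr(A.conj().T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.conj().T @ A                        # small (r + p) x n projection
    U, _, _ = np.linalg.svd(B, full_matrices=False)
    return Q @ U[:, :r]                       # drop the p oversampled directions
```

Applied to each mode-\(n\) unfolding, the returned basis plays the role of the factor matrix; the corresponding slices of the core tensor are truncated in the same way.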


Cite this article

Che, M., Wei, Y. & Yan, H. An Efficient Randomized Algorithm for Computing the Approximate Tucker Decomposition. J Sci Comput 88, 32 (2021). https://doi.org/10.1007/s10915-021-01545-5
