Randomized algorithms for the computation of multilinear rank-\((\mu _1,\mu _2,\mu _3)\) approximations

Journal of Global Optimization

Abstract

We present randomized algorithms for computing multilinear rank-\((\mu _1,\mu _2,\mu _3)\) approximations of tensors by combining sparse subspace embeddings with the singular value decomposition. An error bound that holds with high probability is derived for this algorithm from the properties of sparse subspace embeddings. Furthermore, by combining the power scheme with the proposed randomized algorithm, we obtain a three-stage randomized algorithm and carry out a probabilistic analysis of its error bound. The efficiency of the proposed algorithms is illustrated via numerical examples.
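
The following is a minimal NumPy sketch of the mode-wise sketch-then-SVD idea described above, not the authors' algorithm: each mode unfolding is compressed with a CountSketch-type sparse subspace embedding, an orthonormal factor matrix is extracted from the SVD of the compressed unfolding, and the core tensor is formed by mode-wise products. The helper names (countsketch_cols, unfold, mode_multiply, randomized_mlrank_approx) and the oversampling parameter are illustrative assumptions.

```python
import numpy as np


def countsketch_cols(M, k, rng):
    """Apply a CountSketch-type sparse subspace embedding to the columns of M.

    Each column of M is added, with a random sign, to one of k buckets, so the
    result Y has shape (M.shape[0], k) and costs O(nnz(M)) operations.
    """
    m, n = M.shape
    h = rng.integers(0, k, size=n)        # bucket index for each column
    s = rng.choice([-1.0, 1.0], size=n)   # random sign for each column
    Y = np.zeros((m, k))
    for j in range(n):
        Y[:, h[j]] += s[j] * M[:, j]
    return Y


def unfold(A, mode):
    """Mode-n unfolding of a third-order tensor (rows indexed by the given mode)."""
    return np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)


def refold(M, mode, shape):
    """Inverse of `unfold` for a third-order tensor."""
    full = [shape[mode]] + [shape[i] for i in range(3) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)


def mode_multiply(A, U, mode):
    """Mode-n product A x_n U."""
    M = U @ unfold(A, mode)
    shape = list(A.shape)
    shape[mode] = U.shape[0]
    return refold(M, mode, shape)


def randomized_mlrank_approx(A, ranks, oversample=10, seed=0):
    """Rank-(mu1, mu2, mu3) approximation via sparse sketches and SVDs."""
    rng = np.random.default_rng(seed)
    Q = []
    for n, mu in enumerate(ranks):
        An = unfold(A, n)
        Y = countsketch_cols(An, mu + oversample, rng)  # sketch the unfolding
        U, _, _ = np.linalg.svd(Y, full_matrices=False)
        Q.append(U[:, :mu])                             # mode-n factor matrix
    # Core tensor G = A x_1 Q1^T x_2 Q2^T x_3 Q3^T
    G = A
    for n in range(3):
        G = mode_multiply(G, Q[n].T, n)
    # Approximation A_hat = G x_1 Q1 x_2 Q2 x_3 Q3
    A_hat = G
    for n in range(3):
        A_hat = mode_multiply(A_hat, Q[n], n)
    return G, Q, A_hat
```

For instance, randomized_mlrank_approx(np.random.rand(50, 60, 70), (5, 5, 5)) returns a 5×5×5 core tensor, three factor matrices with orthonormal columns, and the reconstructed approximation; the three-stage algorithm in the paper additionally incorporates a power scheme before the factors are extracted.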

Notes

  1. A tensor \({\mathcal {A}}\in {\mathbb {R}}^{I\times I\times I}\) is called symmetric [34] if, for all \(i,j,k\), we have \(a_{ijk}=a_{ikj}=a_{jik}=a_{jki}=a_{kij}=a_{kji}\); a minimal check of this condition is sketched after this list.

  2. The video dataset is at http://trace.eas.asu.edu/yuv/.

  3. The Yale Face Database is at http://vision.ucsd.edu/~iskwak/ExtYaleDatabase/ExtYaleB.html.
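
As a small, hypothetical illustration of the symmetry condition in Note 1 (the helper below is not from the paper), a tensor \({\mathcal {A}}\in {\mathbb {R}}^{I\times I\times I}\) is symmetric exactly when its entries are invariant under every permutation of the three indices:

```python
import itertools

import numpy as np


def is_symmetric(A, tol=1e-12):
    """Check the condition of Note 1: a_ijk = a_ikj = a_jik = a_jki = a_kij = a_kji."""
    return all(
        np.allclose(A, np.transpose(A, perm), atol=tol)
        for perm in itertools.permutations(range(3))
    )
```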

References

  1. Ahmadi-Asl, S., Cichocki, A., Phan, A., Oseledets, I., Abukhovich, S., Tanaka, T.: Randomized algorithms for computation of Tucker decomposition and higher order SVD (HOSVD). IEEE Access 9, 28684–28706 (2021)

  2. Ailon, N., Chazelle, B.: The fast Johnson-Lindenstrauss transform and approximate nearest neighbors. SIAM Journal on Computing 39, 302–322 (2009)

  3. Aizenbud, Y., Shabat, G., Averbuch, A.: Randomized LU decomposition using sparse projections. Computers and Mathematics with Applications 72, 2525–2534 (2016)

  4. Bader, B.W., Kolda, T.G.: Algorithm 862: Matlab tensor classes for fast algorithm prototyping. ACM Transactions on Mathematical Software 32, 635–653 (2006)

  5. Bader, B.W., Kolda, T.G. et al.: Matlab tensor toolbox version 3.0-dev. Available online (Oct. 2017). https://www.tensortoolbox.org

  6. Charikar, M., Chen, K., Farach-Colton, M.: Finding frequent items in data streams. Theoretical Computer Science 312, 3–15 (2004)

  7. Che, M., Wei, Y.: Randomized algorithms for the approximations of Tucker and the tensor train decompositions. Advances in Computational Mathematics 45, 395–428 (2019)

  8. Che, M., Wei, Y.: Theory and Computation of Complex Tensors and its Applications. Springer, Singapore (2020)

  9. Che, M., Wei, Y., Yan, H.: The computation for low multilinear rank approximations of tensors via power scheme and random projection. SIAM Journal on Matrix Analysis and Applications 41, 605–636 (2020)

  10. Che, M., Wei, Y., Yan, H.: An efficient randomized algorithm for computing the approximate Tucker decomposition. Journal of Scientific Computing 88, article no. 32 (2021)

  11. Che, M., Wei, Y., Yan, H.: Randomized algorithms for the low multilinear rank approximations of tensors. Journal of Computational and Applied Mathematics 390, article no. 113380 (2021)

  12. Cichocki, A.: Tensor networks for big data analytics and large-scale optimization problems. arXiv preprint arXiv:1407.3124 (2014)

  13. Cichocki, A., Zdunek, R., Phan, A.H., Amari, S.-I.: Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation. John Wiley & Sons (2009)

  14. Clarkson, K.L., Woodruff, D.P.: Low rank approximation and regression in input sparsity time. In: Proceedings of the 45th Annual ACM Symposium on Theory of Computing, pp. 81–90 (2013)

  15. De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications 21, 1253–1278 (2000)

  16. De Lathauwer, L., De Moor, B., Vandewalle, J.: On the best rank-1 and rank-\((r_1, r_2,\cdots, r_n)\) approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications 21, 1324–1342 (2000)

  17. Drineas, P., Mahoney, M.W.: A randomized algorithm for a tensor-based generalization of the singular value decomposition. Linear Algebra and its Applications 420, 553–571 (2007)

  18. Drineas, P., Mahoney, M.W.: RandNLA: randomized numerical linear algebra. Comm. ACM 59, 80–90 (2016)

  19. Eldén, L., Savas, B.: A Newton-Grassmann method for computing the best multilinear rank-\((r_1, r_2, r_3)\) approximation of a tensor. SIAM Journal on Matrix Analysis and Applications 31, 248–271 (2009)

  20. Georghiades, A.S., Belhumeur, P.N., Kriegman, D.: From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 643–660 (2001)

  21. Goreinov, S.A., Oseledets, I.V., Savostyanov, D.V.: Wedderburn rank reduction and Krylov subspace method for tensor approximation. Part 1: Tucker case. SIAM Journal on Scientific Computing 34, A1–A27 (2012)

  22. Grasedyck, L., Kressner, D., Tobler, C.: A literature survey of low-rank tensor approximation techniques. GAMM-Mitteilungen 36, 53–78 (2013)

  23. Halko, N., Martinsson, P.G., Tropp, J.A.: Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review 53, 217–288 (2011)

  24. Ishteva, M., Absil, P.-A., Van Huffel, S., De Lathauwer, L.: Best low multilinear rank approximation of higher-order tensors, based on the Riemannian trust-region scheme. SIAM Journal on Matrix Analysis and Applications 32, 115–135 (2011)

  25. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Review 51, 455–500 (2009)

  26. Kressner, D., Perisa, L.: Recompression of Hadamard products of tensors in Tucker format. SIAM Journal on Scientific Computing 39, A1879–A1902 (2017)

  27. Lorente, L.S., Vega, J.M., Velazquez, A.: Compression of aerodynamic databases using high-order singular value decomposition. Aerospace Science and Technology 14, 168–177 (2010)

  28. Lorente, L.S., Vega, J.M., Velazquez, A.: Generation of aerodynamic databases using high-order singular value decomposition. Journal of Aircraft 45, 1779–1788 (2008)

  29. Mahoney, M.W.: Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning 3, 123–224 (2011)

  30. Malik, O.A., Becker, S.: Fast randomized matrix and tensor interpolative decomposition using CountSketch. Advances in Computational Mathematics 46, article no. 76 (2020). https://doi.org/10.1007/s10444-020-09816-9

  31. Matousek, J.: On variants of the Johnson-Lindenstrauss lemma. Random Structures and Algorithms 33, 142–156 (2008)

  32. Minster, R., Saibaba, A.K., Kilmer, M.E.: Randomized algorithms for low-rank tensor decompositions in the Tucker format. SIAM Journal on Mathematics of Data Science 2, 189–215 (2020)

  33. Navasca, C., De Lathauwer, L.: Low multilinear rank tensor approximation via semidefinite programming. In: IEEE 17th European Signal Processing Conference, pp. 520–524 (2009)

  34. Qi, L.: Eigenvalues of a real supersymmetric tensor. Journal of Symbolic Computation 40, 1302–1324 (2005)

  35. Saibaba, A.K.: HOID: higher order interpolatory decomposition for tensors based on Tucker representation. SIAM Journal on Matrix Analysis and Applications 37, 1223–1249 (2016)

  36. Savas, B., Eldén, L.: Handwritten digit classification using higher order singular value decomposition. Pattern Recognition 40, 993–1003 (2007)

  37. Savas, B., Eldén, L.: Krylov-type methods for tensor computations I. Linear Algebra and its Applications 438, 891–918 (2013)

  38. Savas, B., Lim, L.-H.: Quasi-Newton methods on Grassmannians and multilinear approximations of tensors. SIAM Journal on Scientific Computing 32, 3352–3393 (2010)

  39. Song, Z., Woodruff, D.P., Zhong, P.: Relative error tensor low rank approximation. In: SODA ’19: Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 2772–2789 (2019)

  40. Sun, Y., Guo, Y., Luo, C., Tropp, J.A., Udell, M.: Low-rank Tucker approximation of a tensor from streaming data. SIAM Journal on Mathematics of Data Science 2, 1123–1150 (2020)

  41. Symeonidis, P.: ClustHOSVD: Item recommendation by combining semantically enhanced tag clustering with tensor HOSVD. IEEE Transactions on Systems, Man, and Cybernetics: Systems 46, 1240–1251 (2016)

  42. Tropp, J.A., Yurtsever, A., Udell, M., Cevher, V.: Practical sketching algorithms for low-rank matrix approximation. SIAM Journal on Matrix Analysis and Applications 38, 1454–1485 (2017)

  43. Tucker, L.R.: Some mathematical notes on three-mode factor analysis. Psychometrika 31, 279–311 (1966)

  44. Vannieuwenhoven, N., Vandebril, R., Meerbergen, K.: A new truncation strategy for the higher-order singular value decomposition. SIAM Journal on Scientific Computing 34, A1027–A1052 (2012)

  45. Vasilescu, M., Terzopoulos, D.: Multilinear image analysis for facial recognition. In: Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, IEEE, pp. 511–514 (2002)

  46. Vasilescu, M., Terzopoulos, D.: Multilinear subspace analysis of image ensembles. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, IEEE, pp. 93–99 (2003)

  47. Vasilescu, M., Terzopoulos, D.: TensorTextures: Multilinear image-based rendering. ACM Transactions on Graphics 23, 336–342 (2004)

  48. Vervliet, N., Debals, O., Sorber, L., Van Barel, M., De Lathauwer, L.: Tensorlab 3.0. Available online, March 2016. http://tensorlab.net

  49. Woodruff, D.P.: Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science 10, 1–157 (2014)

  50. Woolfe, F., Liberty, E., Rokhlin, V., Tygert, M.: A fast randomized algorithm for the approximation of matrices. Applied and Computational Harmonic Analysis 25, 335–366 (2008)

  51. Yu, W., Gu, Y., Li, Y.: Efficient randomized algorithms for the fixed-precision low-rank matrix approximation. SIAM Journal on Matrix Analysis and Applications 39, 1339–1359 (2018)

  52. Zhou, G., Cichocki, A., Xie, S.: Decomposition of big tensors with low multilinear rank. arXiv preprint arXiv:1412.1885v1 (2014)

Acknowledgements

The authors would like to thank the Editor-in-Chief Prof. Sergiy Butenko, the guest editor, and the two anonymous reviewers for their careful and very detailed comments on our paper. This project is supported by the Basic Theory and Algorithm Research between Fudan University and Huawei Technology Investment Co. under grant YBN 19095097, and by the Shanghai Municipal Science and Technology Commission under grant 22WZ2501900.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Yimin Wei.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Maolin Che: This author is supported by the National Natural Science Foundation of China under Grant 11901471. Yimin Wei: This author is supported by the National Natural Science Foundation of China under Grant 11771099 and the Innovation Program of Shanghai Municipal Education Committee.

About this article

Cite this article

Che, M., Wei, Y. & Xu, Y. Randomized algorithms for the computation of multilinear rank-\((\mu _1,\mu _2,\mu _3)\) approximations. J Glob Optim 87, 373–403 (2023). https://doi.org/10.1007/s10898-022-01182-8

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s10898-022-01182-8

Keywords

Mathematics Subject Classification