Abstract
Tensor decompositions have become increasingly prevalent in recent years. Traditionally, a tensor is represented or decomposed as a sum of rank-one outer products using the CANDECOMP/PARAFAC model, the Tucker model, or some variation thereof. The motivation for these decompositions is to find an approximate representation of a given tensor. The main purpose of this paper is to develop two neural network models for computing a t-product-based approximation of a given third-order tensor. Theoretical analysis shows that each neural network model is guaranteed to converge. Computer simulation results further substantiate that the models can effectively find the left and right singular tensor subspaces.
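The t-product underlying the approximation above multiplies two third-order tensors face-wise in the Fourier domain along the third mode. The following is a minimal sketch of that operation in NumPy, following the standard definition (Kilmer and Martin); the function name and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) with B (n2 x m x n3).

    Computed by taking the DFT along the third mode, multiplying the
    corresponding frontal slices, and transforming back.
    """
    n3 = A.shape[2]
    assert B.shape[0] == A.shape[1] and B.shape[2] == n3
    Ahat = np.fft.fft(A, axis=2)
    Bhat = np.fft.fft(B, axis=2)
    Chat = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):  # face-wise matrix product in the Fourier domain
        Chat[:, :, k] = Ahat[:, :, k] @ Bhat[:, :, k]
    # Result is real when A and B are real
    return np.real(np.fft.ifft(Chat, axis=2))
```

As a sanity check, the identity tensor under the t-product has the identity matrix as its first frontal slice and zeros elsewhere, so multiplying by it should return the original tensor.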
Notes
The database can be obtained from https://media.xiph.org/video/derf/.
Acknowledgements
We thank the editor and two anonymous reviewers for their detailed and helpful comments.
X. Wang: This author is partially supported by the Shanghai Key Laboratory of Contemporary Applied Mathematics, the Natural Science Foundation of Gansu Province, and the Innovative Ability Promotion Project in Colleges and Universities of Gansu Province under Grant 2019B-146. M. Che: This author is supported by the National Natural Science Foundation of China under Grant 11901471. Y. Wei: This author is supported by the National Natural Science Foundation of China under Grant 11771099 and the Innovation Program of Shanghai Municipal Education Commission.
About this article
Cite this article
Wang, X., Che, M. & Wei, Y. Tensor neural network models for tensor singular value decompositions. Comput Optim Appl 75, 753–777 (2020). https://doi.org/10.1007/s10589-020-00167-1
Keywords
- Tensor decomposition
- Singular value decomposition
- Tensor singular value decomposition
- Tensor neural networks
- Asymptotic stability