Tensor Networks for Dimensionality Reduction, Big Data and Deep Learning

Advances in Data Analysis with Computational Intelligence Methods

Part of the book series: Studies in Computational Intelligence (SCI, volume 738)

Abstract

Large-scale multidimensional data are often available as multiway arrays or higher-order tensors, which can be approximately represented in distributed form via low-rank tensor decompositions and tensor networks. Our particular emphasis is on elucidating that, by virtue of the underlying low-rank approximations, tensor networks have the ability to reduce the dimensionality and alleviate the curse of dimensionality in a number of applied areas, especially in large-scale optimization problems and deep learning. We briefly review and establish links between low-rank tensor network decompositions and deep neural networks. We elucidate, through graphical illustrations, that by employing low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks can perform distributed computations on otherwise prohibitively large volumes of data/parameters. Our focus is on the Hierarchical Tucker and tensor train (TT) decompositions, and on MERA tensor networks in specific applications.
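To make the TT format concrete, the following minimal numpy sketch (our illustration, not code from the chapter; the function name tt_svd is ours) implements the TT-SVD procedure of Oseledets [69]: an Nth-order array is split into a train of third-order cores by sequential truncated SVDs.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Minimal TT-SVD sketch: sequential truncated SVDs split an Nth-order
    array into cores G_n of shape (R_{n-1}, I_n, R_n), with R_0 = R_N = 1."""
    shape = tensor.shape
    cores, r = [], 1
    M = tensor.reshape(shape[0], -1)                 # first unfolding, R_0 = 1
    for n in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int((S > eps * S[0]).sum()))   # truncated TT rank R_n
        cores.append(U[:, :rank].reshape(r, shape[n], rank))
        M = (S[:rank, None] * Vt[:rank]).reshape(rank * shape[n + 1], -1)
        r = rank
    cores.append(M.reshape(r, shape[-1], 1))         # last core, R_N = 1
    return cores

# Sanity check: contracting the cores recovers the original tensor.
X = np.random.rand(4, 5, 6, 7)
cores = tt_svd(X)
Y = cores[0]
for G in cores[1:]:
    Y = np.tensordot(Y, G, axes=([-1], [0]))  # contract shared rank indices
print(np.linalg.norm(X - Y.reshape(X.shape)) / np.linalg.norm(X))  # ~1e-15
```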


Notes

  1.

    The standard multilinear product can be generalized to nonlinear multilinear product as \(\underline{\mathbf{C}}=\underline{\mathbf{G}}\times ^{\sigma }_1 \mathbf{B}^{(1)} \times ^{\sigma }_2 \mathbf{B}^{(2)} \cdots \times ^{\sigma }_N \mathbf{B}^{(N)}\), where \( \underline{\mathbf{G}}\times ^{\sigma }_n \mathbf{B}= \sigma ( \underline{\mathbf{G}}\times _n \mathbf{B})\), and \(\sigma \) is a suitably chosen nonlinear activation function.
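As a small illustration of this footnote, the following numpy sketch (assumed code, not from the chapter; the function names are ours) applies a chosen activation \(\sigma \) after each standard mode-n product:

```python
import numpy as np

def mode_n_product(G, B, n):
    """Standard mode-n product G x_n B: mode n of G is contracted with the
    second index of B, so dimension G.shape[n] is replaced by B.shape[0]."""
    return np.moveaxis(np.tensordot(G, B, axes=([n], [1])), -1, n)

def nonlinear_mode_n_product(G, B, n, sigma=np.tanh):
    """The footnote's generalization: G x_n^sigma B = sigma(G x_n B)."""
    return sigma(mode_n_product(G, B, n))

# C = sigma(...sigma(sigma(G x_1 B1) x_2 B2)... x_N BN) for a 3rd-order core.
G = np.random.rand(3, 4, 5)
Bs = [np.random.rand(6, 3), np.random.rand(7, 4), np.random.rand(8, 5)]
C = G
for n, B in enumerate(Bs):
    C = nonlinear_mode_n_product(C, B, n)
print(C.shape)  # (6, 7, 8)
```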

  2.

    In the literature, sometimes the symbol \(\times _n\) is replaced by \(\bullet _n\).

  3.

    Strictly speaking, the minimum set of internal indices \(\{R_1,R_2,R_3,\ldots \}\) is called the rank (bond dimensions) of a specific tensor network.
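A way to see this footnote numerically (our sketch, under the assumption of random cores in exact arithmetic): the minimal bond dimension \(R_n\) equals the matrix rank of the unfolding that separates the first n modes from the rest.

```python
import numpy as np

def tt_ranks(tensor):
    """Minimal TT ranks: R_n is the rank of the unfolding with the first
    n modes grouped into rows and the remaining modes into columns."""
    shape = tensor.shape
    return [np.linalg.matrix_rank(tensor.reshape(int(np.prod(shape[:n])), -1))
            for n in range(1, len(shape))]

# A 4th-order tensor assembled from TT cores with bond dimensions (2, 3, 2).
G1, G2 = np.random.rand(1, 4, 2), np.random.rand(2, 4, 3)
G3, G4 = np.random.rand(3, 4, 2), np.random.rand(2, 4, 1)
T = np.einsum('aib,bjc,ckd,dle->ijkl', G1, G2, G3, G4)
print(tt_ranks(T))  # [2, 3, 2] (with probability one for random cores)
```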

  4.

    Although similar approaches have been known in quantum physics for a long time, their rigorous mathematical analysis is still a work in progress (see [27, 69] and references therein).

  5.

    A compositional function can take, for example, the following form \(h_1(\ldots h_3(h_{21}(h_{11}(x_1,x_2),\,h_{12}(x_3,x_4)),\,h_{22}(h_{13}(x_5,x_6),\,h_{14}(x_7,x_8)))\ldots )\).
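For readers who prefer code to nested parentheses, the hierarchy in this footnote is a binary tree of two-argument functions; a toy Python rendering (the constituent function h is a placeholder of our choosing):

```python
def h(x, y):
    """Placeholder two-argument constituent function."""
    return (x + y) / 2.0

x1, x2, x3, x4, x5, x6, x7, x8 = range(1, 9)

# Level 1: pairwise combinations of the raw inputs.
h11, h12, h13, h14 = h(x1, x2), h(x3, x4), h(x5, x6), h(x7, x8)
# Level 2: combinations of level-1 outputs.
h21, h22 = h(h11, h12), h(h13, h14)
# Level 3 (root): the compositional function's output.
print(h(h21, h22))  # 4.5
```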

  6.

    Note that the representation layer can be considered as a tensorization of input patches \(\mathbf{x}_n\).

  7.

    It should be noted that these tensors share the same entries, except for the parameters in the output layer.

  8.

    In the worst-case scenario, the TT ranks of an \(N\)th-order tensor with mode sizes \(I\) can grow up to \(I^{\lfloor N/2\rfloor }\).
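A short storage count (our addition; standard for the TT format, with uniform mode size \(I\) and \(R=\max _n R_n\)) shows why this worst case matters:

```latex
\underbrace{I^{N}}_{\text{full tensor}}
\quad \text{vs.} \quad
\underbrace{\sum_{n=1}^{N} R_{n-1}\, I\, R_{n} \;\le\; N I R^{2}}_{\text{TT format},\; R_0 = R_N = 1}
```

so the TT format only compresses when \(R\) stays far below \(I^{(N-1)/2}\); at the worst-case middle rank \(R_{\lfloor N/2\rfloor }=I^{\lfloor N/2\rfloor }\) nothing is saved.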

  9.

    The symbols \(\sigma (\cdot )\) and \(P(\cdot )\) are respectively the activation and pooling functions of the network.

  10.

    It should be noted that the TT/TC tensor networks described in this section do not necessarily require weight sharing, and do not even need to be convolutional.

References

  1. Zurada, J.: Introduction to Artificial Neural Systems, vol. 8. West, St. Paul (1992)

  2. LeCun, Y., Bengio, Y.: Convolutional networks for images, speech, and time series. In: The Handbook of Brain Theory and Neural Networks, MIT Press, pp. 255–258 (1998)

  3. Hinton, G., Sejnowski, T.: Learning and relearning in Boltzmann machines. In: Parallel Distributed Processing, MIT Press, pp. 282–317 (1986)

  4. Cichocki, A., Kasprzak, W., Amari, S.: Multi-layer neural networks with a local adaptive learning rule for blind separation of source signals. In: Proceedings of the International Symposium Nonlinear Theory and Applications (NOLTA), Las Vegas, NV, Citeseer, pp. 61–65 (1995)

  5. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)

  6. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016). http://www.deeplearningbook.org

  7. Cichocki, A., Zdunek, R.: Multilayer nonnegative matrix factorisation. Electron. Lett. 42(16), 1 (2006)

  8. Cichocki, A., Zdunek, R.: Regularized alternating least squares algorithms for non-negative matrix/tensor factorization. In: International Symposium on Neural Networks, pp. 793–802. Springer (2007)

  9. Cichocki, A.: Tensor decompositions: new concepts in brain data analysis? J. Soc. Instr. Control Eng. 50(7), 507–516. arXiv:1305.0395 (2011)

  10. Cichocki, A.: Era of big data processing: a new approach via tensor networks and tensor decompositions, (invited). In: Proceedings of the International Workshop on Smart Info-Media Systems in Asia (SISA2013). arXiv:1403.2048 (September 2013)

  11. Cichocki, A.: Tensor networks for big data analytics and large-scale optimization problems. arXiv:1407.3124 (2014)

  12. Cichocki, A., Mandic, D., Caiafa, C., Phan, A., Zhou, G., Zhao, Q., Lathauwer, L.D.: Tensor decompositions for signal processing applications: from two-way to multiway component analysis. IEEE Signal Process. Mag. 32(2), 145–163 (2015)

  13. Cichocki, A., Lee, N., Oseledets, I., Phan, A.H., Zhao, Q., Mandic, D.: Tensor networks for dimensionality reduction and large-scale optimization: part 1 low-rank tensor decompositions. Found. Trends Mach. Learn. 9(4–5), 249–429 (2016)

  14. Cichocki, A., Phan, A.H., Zhao, Q., Lee, N., Oseledets, I., Sugiyama, M., Mandic, D.: Tensor networks for dimensionality reduction and large-scale optimization: part 2 applications and future perspectives. Found. Trends Mach. Learn. 9(6), 431–673 (2017)

  15. Oseledets, I., Tyrtyshnikov, E.: Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comput. 31(5), 3744–3759 (2009)

  16. Dolgov, S., Khoromskij, B.: Two-level QTT-Tucker format for optimized tensor calculus. SIAM J. Matrix Anal. Appl. 34(2), 593–623 (2013)

  17. Kazeev, V., Khoromskij, B., Tyrtyshnikov, E.: Multilevel Toeplitz matrices generated by tensor-structured vectors and convolution with logarithmic complexity. SIAM J. Sci. Comput. 35(3), A1511–A1536 (2013)

  18. Kazeev, V., Khammash, M., Nip, M., Schwab, C.: Direct solution of the chemical master equation using quantized tensor trains. PLoS Comput. Biol. 10(3), e1003359 (2014)

  19. Kressner, D., Steinlechner, M., Uschmajew, A.: Low-rank tensor methods with subspace correction for symmetric eigenvalue problems. SIAM J. Sci. Comput. 36(5), A2346–A2368 (2014)

  20. Vervliet, N., Debals, O., Sorber, L., De Lathauwer, L.: Breaking the curse of dimensionality using decompositions of incomplete tensors: Tensor-based scientific computing in big data analysis. IEEE Signal Process. Mag. 31(5), 71–79 (2014)

  21. Dolgov, S., Khoromskij, B.: Simultaneous state-time approximation of the chemical master equation using tensor product formats. Numer. Linear Algebra Appl. 22(2), 197–219 (2015)

  22. Liao, S., Vejchodský, T., Erban, R.: Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks. J. R. Soc. Interface 12(108), 20150233 (2015)

  23. Bolten, M., Kahl, K., Sokolović, S.: Multigrid methods for tensor structured Markov chains with low rank approximation. SIAM J. Sci. Comput. 38(2), A649–A667 (2016)

  24. Lee, N., Cichocki, A.: Estimating a few extreme singular values and vectors for large-scale matrices in Tensor Train format. SIAM J. Matrix Anal. Appl. 36(3), 994–1014 (2015)

  25. Lee, N., Cichocki, A.: Regularized computation of approximate pseudoinverse of large matrices using low-rank tensor train decompositions. SIAM J. Matrix Anal. Appl. 37(2), 598–623 (2016)

  26. Kolda, T., Bader, B.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)

  27. Orús, R.: A practical introduction to tensor networks: matrix product states and projected entangled pair states. Ann. Phys. 349, 117–158 (2014)

  28. Dolgov, S., Savostyanov, D.: Alternating minimal energy methods for linear systems in higher dimensions. SIAM J. Sci. Comput. 36(5), A2248–A2271 (2014)

  29. Cichocki, A., Zdunek, R., Phan, A.H., Amari, S.: Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation. Wiley, Chichester (2009)

  30. Cohen, N., Shashua, A.: Convolutional rectifier networks as generalized tensor decompositions. In: Proceedings of The 33rd International Conference on Machine Learning, pp. 955–963 (2016)

  31. Li, J., Battaglino, C., Perros, I., Sun, J., Vuduc, R.: An input-adaptive and in-place approach to dense tensor-times-matrix multiply. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, p. 76. ACM (2015)

  32. Ballard, G., Benson, A., Druinsky, A., Lipshitz, B., Schwartz, O.: Improving the numerical stability of fast matrix multiplication algorithms. arXiv:1507.00687 (2015)

  33. Ballard, G., Druinsky, A., Knight, N., Schwartz, O.: Brief announcement: Hypergraph partitioning for parallel sparse matrix-matrix multiplication. In: Proceedings of the 27th ACM on Symposium on Parallelism in Algorithms and Architectures, pp. 86–88. ACM (2015)

  34. Tucker, L.: The extension of factor analysis to three-dimensional matrices. In: Gulliksen, H., Frederiksen, N. (eds.) Contributions to Mathematical Psychology, pp. 110–127. Holt, Rinehart and Winston, New York (1964)

  35. Tucker, L.: Some mathematical notes on three-mode factor analysis. Psychometrika 31(3), 279–311 (1966)

  36. Sun, J., Tao, D., Faloutsos, C.: Beyond streams and graphs: dynamic tensor analysis. In: Proceedings of the 12th ACM SIGKDD international conference on Knowledge Discovery and Data Mining, pp. 374–383. ACM (2006)

  37. Drineas, P., Mahoney, M.: A randomized algorithm for a tensor-based generalization of the singular value decomposition. Linear Algebra Appl. 420(2), 553–571 (2007)

  38. Lu, H., Plataniotis, K., Venetsanopoulos, A.: A survey of multilinear subspace learning for tensor data. Pattern Recogn. 44(7), 1540–1551 (2011)

  39. Li, M., Monga, V.: Robust video hashing via multilinear subspace projections. IEEE Trans. Image Process. 21(10), 4397–4409 (2012)

  40. Pham, N., Pagh, R.: Fast and scalable polynomial kernels via explicit feature maps. In: Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 239–247. ACM (2013)

  41. Wang, Y., Tung, H.Y., Smola, A., Anandkumar, A.: Fast and guaranteed tensor decomposition via sketching. In: Advances in Neural Information Processing Systems, pp. 991–999 (2015)

  42. Kuleshov, V., Chaganty, A., Liang, P.: Tensor factorization via matrix factorization. In: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pp. 507–516 (2015)

  43. Sorber, L., Domanov, I., Van Barel, M., De Lathauwer, L.: Exact line and plane search for tensor optimization. Comput. Optim. Appl. 63(1), 121–142 (2016)

  44. Lubasch, M., Cirac, J., Banuls, M.C.: Unifying projected entangled pair state contractions. New J. Phys. 16(3), 033014 (2014)

  45. Di Napoli, E., Fabregat-Traver, D., Quintana-Ortí, G., Bientinesi, P.: Towards an efficient use of the BLAS library for multilinear tensor contractions. Appl. Math. Comput. 235, 454–468 (2014)

  46. Pfeifer, R., Evenbly, G., Singh, S., Vidal, G.: NCON: A tensor network contractor for MATLAB. arXiv:1402.0939 (2014)

  47. Kao, Y.J., Hsieh, Y.D., Chen, P.: Uni10: An open-source library for tensor network algorithms. J. Phys. Conf. Ser. 640, 012040 (2015). IOP Publishing

  48. Grasedyck, L., Kressner, D., Tobler, C.: A literature survey of low-rank tensor approximation techniques. GAMM-Mitteilungen 36, 53–78 (2013)

  49. Comon, P.: Tensors: A brief introduction. IEEE Signal Process. Mag. 31(3), 44–53 (2014)

  50. Sidiropoulos, N., De Lathauwer, L., Fu, X., Huang, K., Papalexakis, E., Faloutsos, C.: Tensor decomposition for signal processing and machine learning. arXiv:1607.01668 (2016)

  51. Zhou, G., Cichocki, A.: Fast and unique Tucker decompositions via multiway blind source separation. Bull. Pol. Acad. Sci. 60(3), 389–407 (2012)

  52. Phan, A., Cichocki, A., Tichavský, P., Zdunek, R., Lehky, S.: From basis components to complex structural patterns. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2013, Vancouver, BC, Canada, May 26–31, 2013, pp. 3228–3232

  53. Phan, A., Tichavský, P., Cichocki, A.: Low rank tensor deconvolution. In: Proceedings of the IEEE International Conference on Acoustics Speech and Signal Processing, ICASSP, April 2015, pp. 2169–2173

  54. Lee, N., Cichocki, A.: Fundamental tensor operations for large-scale data analysis using tensor network formats. Multidimens. Syst. Signal Process., pp. 1–40. Springer (2017)

  55. Bellman, R.: Adaptive Control Processes. Princeton University Press, Princeton, NJ (1961)

  56. Austin, W., Ballard, G., Kolda, T.: Parallel tensor compression for large-scale scientific data. arXiv:1510.06689 (2015)

  57. Jeon, I., Papalexakis, E., Faloutsos, C., Sael, L., Kang, U.: Mining billion-scale tensors: algorithms and discoveries. VLDB J. 1–26 (2016)

  58. Phan, A., Cichocki, A.: PARAFAC algorithms for large-scale problems. Neurocomputing 74(11), 1970–1984 (2011)

  59. Klus, S., Schütte, C.: Towards tensor-based methods for the numerical approximation of the Perron-Frobenius and Koopman operator. arXiv:1512.06527 (December 2015)

  60. Bader, B., Kolda, T.: MATLAB Tensor Toolbox, version 2.6 (2015)

  61. Garcke, J., Griebel, M., Thess, M.: Data mining with sparse grids. Computing 67(3), 225–253 (2001)

  62. Bungartz, H.J., Griebel, M.: Sparse grids. Acta Numerica 13, 147–269 (2004)

  63. Hackbusch, W.: Tensor spaces and numerical tensor calculus. Springer Series in Computational Mathematics, vol. 42. Springer, Heidelberg (2012)

  64. Bebendorf, M.: Adaptive cross-approximation of multivariate functions. Constr. Approx. 34(2), 149–179 (2011)

  65. Dolgov, S.: Tensor product methods in numerical simulation of high-dimensional dynamical problems. Ph.D. thesis, Faculty of Mathematics and Informatics, University Leipzig, Germany, Leipzig, Germany (2014)

  66. Cho, H., Venturi, D., Karniadakis, G.: Numerical methods for high-dimensional probability density function equations. J. Comput. Phys. 305, 817–837 (2016)

  67. Trefethen, L.: Cubature, approximation, and isotropy in the hypercube. SIAM Rev. (to appear) (2017)

  68. Oseledets, I., Dolgov, S., Kazeev, V., Savostyanov, D., Lebedeva, O., Zhlobich, P., Mach, T., Song, L.: TT-Toolbox (2012)

  69. Oseledets, I.: Tensor-train decomposition. SIAM J. Sci. Comput. 33(5), 2295–2317 (2011)

  70. Khoromskij, B.: Tensors-structured numerical methods in scientific computing: Survey on recent advances. Chemometr. Intell. Lab. Syst. 110(1), 1–19 (2011)

  71. Oseledets, I., Tyrtyshnikov, E.: TT cross-approximation for multidimensional arrays. Linear Algebra Appl. 432(1), 70–88 (2010)

  72. Khoromskij, B., Veit, A.: Efficient computation of highly oscillatory integrals by using QTT tensor approximation. Comput. Methods Appl. Math. 16(1), 145–159 (2016)

  73. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)

  74. Schneider, D.: Deeper and cheaper machine learning [top tech 2017]. IEEE Spectr. 54(1), 42–43 (2017)

  75. Lebedev, V., Lempitsky, V.: Fast convolutional neural networks using group-wise brain damage. arXiv:1506.02515 (2015)

  76. Novikov, A., Podoprikhin, D., Osokin, A., Vetrov, D.: Tensorizing neural networks. In: Advances in Neural Information Processing Systems (NIPS), pp. 442–450 (2015)

  77. Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B., Liao, Q.: Why and when can deep–but not shallow–networks avoid the curse of dimensionality: a review. arXiv:1611.00740 (2016)

  78. Yang, Y., Hospedales, T.: Deep multi-task representation learning: a tensor factorisation approach. arXiv:1605.06391 (2016)

  79. Cohen, N., Sharir, O., Shashua, A.: On the expressive power of deep learning: a tensor analysis. In: 29th Annual Conference on Learning Theory, pp. 698–728 (2016)

  80. Chen, J., Cheng, S., Xie, H., Wang, L., Xiang, T.: On the equivalence of restricted Boltzmann machines and tensor network states. arXiv e-prints (2017)

  81. Cohen, N., Shashua, A.: Inductive bias of deep convolutional networks through pooling geometry. CoRR (2016). arXiv:1605.06743

  82. Sharir, O., Tamari, R., Cohen, N., Shashua, A.: Tensorial mixture models. CoRR (2016). arXiv:1610.04167

  83. Lin, H.W., Tegmark, M.: Why does deep and cheap learning work so well? arXiv e-prints (2016)

  84. Zwanziger, D.: Fundamental modular region, Boltzmann factor and area law in lattice theory. Nucl. Phys. B 412(3), 657–730 (1994)

  85. Eisert, J., Cramer, M., Plenio, M.: Colloquium: Area laws for the entanglement entropy. Rev. Modern Phys. 82(1), 277 (2010)

  86. Calabrese, P., Cardy, J.: Entanglement entropy and quantum field theory. J. Stat. Mech. Theory Exp. 2004(06), P06002 (2004)

  87. Anselmi, F., Rosasco, L., Tan, C., Poggio, T.: Deep convolutional networks are hierarchical kernel machines. arXiv:1508.01084 (2015)

  88. Mhaskar, H., Poggio, T.: Deep vs. shallow networks: an approximation theory perspective. Anal. Appl. 14(06), 829–848 (2016)

  89. White, S.: Density-matrix algorithms for quantum renormalization groups. Phys. Rev. B 48(14), 10345 (1993)

  90. Vidal, G.: Efficient classical simulation of slightly entangled quantum computations. Phys. Rev. Lett. 91(14), 147902 (2003)

  91. Perez-Garcia, D., Verstraete, F., Wolf, M., Cirac, J.: Matrix product state representations. Quantum Inf. Comput. 7(5), 401–430 (2007)

  92. Verstraete, F., Murg, V., Cirac, I.: Matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems. Adv. Phys. 57(2), 143–224 (2008)

  93. Schollwöck, U.: Matrix product state algorithms: DMRG, TEBD and relatives. In: Strongly Correlated Systems, pp. 67–98. Springer (2013)

  94. Huckle, T., Waldherr, K., Schulte-Herbrüggen, T.: Computations in quantum tensor networks. Linear Algebra Appl. 438(2), 750–781 (2013)

  95. Vidal, G.: Class of quantum many-body states that can be efficiently simulated. Phys. Rev. Lett. 101(11), 110501 (2008)

  96. Evenbly, G., Vidal, G.: Algorithms for entanglement renormalization. Phys. Rev. B 79(14), 144108 (2009)

  97. Evenbly, G., Vidal, G.: Tensor network renormalization yields the multiscale entanglement renormalization Ansatz. Phys. Rev. Lett. 115(20), 200401 (2015)

  98. Evenbly, G., White, S.R.: Entanglement renormalization and wavelets. Phys. Rev. Lett. 116(14), 140403 (2016)

  99. Evenbly, G., White, S.R.: Representation and design of wavelets using unitary circuits. arXiv e-prints (2016)

  100. Matsueda, H.: Analytic optimization of a MERA network and its relevance to quantum integrability and wavelet. arXiv:1608.02205 (2016)

  101. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)

  102. Smilde, A., Bro, R., Geladi, P.: Multi-way Analysis: Applications in the Chemical Sciences. Wiley, New York (2004)

  103. Tao, D., Li, X., Wu, X., Maybank, S.: General tensor discriminant analysis and Gabor features for gait recognition. IEEE Trans. Pattern Anal. Mach. Intell. 29(10), 1700–1715 (2007)

  104. Kroonenberg, P.: Applied Multiway Data Analysis. Wiley, New York (2008)

  105. Favier, G., de Almeida, A.: Overview of constrained PARAFAC models. EURASIP J. Adv. Signal Process. 2014(1), 1–25 (2014)

  106. Kressner, D., Steinlechner, M., Vandereycken, B.: Low-rank tensor completion by Riemannian optimization. BIT Numer. Math. 54(2), 447–468 (2014)

  107. Zhang, Z., Yang, X., Oseledets, I., Karniadakis, G., Daniel, L.: Enabling high-dimensional hierarchical uncertainty quantification by ANOVA and tensor-train decomposition. IEEE Trans. Comput.-Aided Des. Integr. Circ. Syst. 34(1), 63–76 (2015)

  108. Corona, E., Rahimian, A., Zorin, D.: A tensor-train accelerated solver for integral equations in complex geometries. arXiv:1511.06029 (2015)

  109. Litsarev, M., Oseledets, I.: A low-rank approach to the computation of path integrals. J. Comput. Phys. 305, 557–574 (2016)

  110. Benner, P., Khoromskaia, V., Khoromskij, B.: A reduced basis approach for calculation of the Bethe-Salpeter excitation energies by using low-rank tensor factorisations. Mol. Phys. 114(7–8), 1148–1161 (2016)

Acknowledgements

This work has been partially supported by the Ministry of Education and Science of the Russian Federation (grant 14.756,0001).

Author information

Corresponding author

Correspondence to Andrzej Cichocki.

Copyright information

© 2018 Springer International Publishing AG

About this chapter

Cite this chapter

Cichocki, A. (2018). Tensor Networks for Dimensionality Reduction, Big Data and Deep Learning. In: Gawęda, A., Kacprzyk, J., Rutkowski, L., Yen, G. (eds) Advances in Data Analysis with Computational Intelligence Methods. Studies in Computational Intelligence, vol 738. Springer, Cham. https://doi.org/10.1007/978-3-319-67946-4_1

  • DOI: https://doi.org/10.1007/978-3-319-67946-4_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67945-7

  • Online ISBN: 978-3-319-67946-4
