
Characterization of sampling patterns for low-TT-rank tensor retrieval


Abstract

In this paper, we analyze the fundamental conditions for low-rank tensor completion given the separation or tensor-train (TT) rank, i.e., the ranks of the TT unfoldings. We exploit the algebraic structure of the TT decomposition to obtain deterministic necessary and sufficient conditions on the locations of the samples that ensure finite completability. Specifically, we propose an algebraic-geometric analysis on the TT manifold that can incorporate the whole rank vector simultaneously, in contrast to the existing approach based on the Grassmannian manifold, which can only incorporate one rank component. Our proposed technique characterizes the algebraic independence of a set of polynomials defined based on the sampling pattern and the TT decomposition, which is instrumental in obtaining the deterministic condition on the sampling pattern for finite completability. In addition, based on the proposed analysis, assuming that the entries of the tensor are sampled independently with probability p, we derive a lower bound on the sampling probability p, or equivalently, on the number of sampled entries, that ensures finite completability with high probability. Moreover, we provide deterministic and probabilistic conditions for unique completability.


Notes

  1. Specified by a subset of rows and a subset of columns (not necessarily consecutive).

  2. Since \(\mathcal {U}^{(1)}\) and \(\mathcal {U}^{(d)}\) are two-way tensors, i.e., matrices, we also denote them by U(1) and U(d). Moreover, since \(\mathcal {U}^{(i)}\) is a three-way tensor, \(\mathbf {\widetilde U}_{(2)}^{(i)} = \mathbf {U}_{(3)}^{{(i)}^{\top }}\), \(i=2,\dots ,d-1\).

References

  1. Ashraphijuo, M., Aggarwal, V., Wang, X.: On deterministic sampling patterns for robust low-rank matrix completion. IEEE Signal Process. Lett. 25(3), 343–347 (2018)

  2. Ashraphijuo, M., Aggarwal, V., Wang, X.: Deterministic and probabilistic conditions for finite completability of low-Tucker-rank tensor. IEEE Trans. Inform. Theory 65(9), 5380–5400 (2019)

  3. Ashraphijuo, M., Madani, R., Lavaei, J.: Characterization of rank-constrained feasibility problems via a finite number of convex programs. In: IEEE conference on decision and control (CDC), pp. 6544–6550 (2016)

  4. Ashraphijuo, M., Wang, X.: Fundamental conditions for low-CP-rank tensor completion. J. Mach. Learn. Res. 18(63), 1–29 (2017)

  5. Ashraphijuo, M., Wang, X.: Clustering a union of low-rank subspaces of different dimensions with missing data. Pattern Recogn. Lett. 120, 31–35 (2019)

  6. Ashraphijuo, M., Wang, X., Aggarwal, V.: Rank determination for low-rank data completion. J. Mach. Learn. Res. 18(1), 3422–3450 (2017)

  7. Ashraphijuo, M., Wang, X., Zhang, J.: Low-rank data completion with very low sampling rate using Newton’s method. IEEE Trans. Signal Process. 67(7), 1849–1859 (2019)

  8. Beck, M.H., Jäckle, A., Worth, G.A., Meyer, H.-D.: The multiconfiguration time-dependent Hartree (MCTDH) method: a highly efficient algorithm for propagating wavepackets. Phys. Rep. 324(1), 1–105 (2000)

  9. Cai, J.F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)

  10. Candès, E.J., Eldar, Y.C., Strohmer, T., Voroninski, V.: Phase retrieval via matrix completion. SIAM J. Imaging Sci. 6(1), 199–225 (2013)

  11. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)

  12. Candès, E.J., Tao, T.: The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010)

  13. Carroll, J.D., Chang, J.-J.: Analysis of individual differences in multidimensional scaling via an N-way generalization of “Eckart-Young” decomposition. Psychometrika 35(3), 283–319 (1970)

  14. Lathauwer, L.D., Moor, B.D., Vandewalle, J.: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21(4), 1253–1278 (2000)

  15. Ely, G., Aeron, S., Hao, N., Kilmer, M.E.: 5D and 4D pre-stack seismic data completion using tensor nuclear norm (TNN). Society of Exploration Geophysicists (SEG) (2013)

  16. Gandy, S., Recht, B., Yamada, I.: Tensor completion and low-N-rank tensor recovery via convex optimization. Inverse Problems 27(2), 1–19 (2011)

  17. Ge, R., Lee, J.D., Ma, T.: Matrix completion has no spurious local minimum. arXiv:1605.07272 (2016)

  18. Goldfarb, D., Qin, Z.: Robust low-rank tensor recovery: models and algorithms. SIAM J. Matrix Anal. Appl. 35(1), 225–253 (2014)

  19. Grasedyck, L.: Hierarchical singular value decomposition of tensors. SIAM J. Matrix Anal. Appl. 31(4), 2029–2054 (2010)

  20. Holtz, S., Rohwedder, T., Schneider, R.: On manifolds of tensors of fixed TT-rank. Numer. Math. 120(4), 701–731 (2012)

  21. Jain, P., Netrapalli, P., Sanghavi, S.: Low-rank matrix completion using alternating minimization. In: ACM Symposium on Theory of Computing (STOC), pp. 665–674 (2013)

  22. Kilmer, M.E., Braman, K., Hao, N., Hoover, R.C.: Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 34(1), 148–172 (2013)

  23. Kolda, T.G.: Orthogonal tensor decompositions. SIAM J. Matrix Anal. Appl. 23(1), 243–255 (2001)

  24. Kreimer, N., Stanton, A., Sacchi, M.D.: Tensor completion based on nuclear norm minimization for 5D seismic data reconstruction. Geophysics 78(6), V273–V284 (2013)

  25. Kressner, D., Steinlechner, M., Vandereycken, B.: Low-rank tensor completion by Riemannian optimization. BIT Numer. Math. 54(2), 447–468 (2014)

  26. Krishnamurthy, A., Singh, A.: Low-rank matrix and tensor completion via adaptive sampling. Advances in Neural Information Processing Systems, pp. 836–844 (2013)

  27. Lim, L.-H., Comon, P.: Multiarray signal processing: Tensor decomposition meets compressed sensing. Comptes Rendus Mecanique 338(6), 311–320 (2010)

  28. Liu, J., Musialski, P., Wonka, P., Ye, J.: Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 208–220 (2013)

  29. Liu, X.Y., Aeron, S., Aggarwal, V., Wang, X., Wu, M.Y.: Adaptive sampling of RF fingerprints for fine-grained indoor localization. IEEE Trans. Mob. Comput. 15 (10), 2411–2423 (2016)

  30. Liu, X.-Y., Aeron, S., Aggarwal, V., Wang, X.: Low-tubal-rank tensor completion using alternating minimization. arXiv:1610.01690 (2016)

  31. Liu, X.-Y., Aeron, S., Aggarwal, V., Wang, X., Wu, M.-Y.: Tensor completion via adaptive sampling of tensor fibers: Application to efficient indoor RF fingerprinting. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2529–2533 (2016)

  32. Oseledets, I., Tyrtyshnikov, E.: TT-cross approximation for multidimensional arrays. Linear Algebra Appl. 432(1), 70–88 (2010)

  33. Oseledets, I.V.: Tensor-train decomposition. SIAM J. Sci. Comput. 33(5), 2295–2317 (2011)

  34. Oseledets, I.V., Tyrtyshnikov, E.E.: Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comput. 31(5), 3744–3759 (2009)

  35. Oseledets, I.V., Tyrtyshnikov, E.E.: Tensor tree decomposition does not need a tree. Linear Algebra Appl. 8 (2009)

  36. Papalexakis, E.E., Faloutsos, C., Sidiropoulos, N.D.: Parcube: Sparse parallelizable tensor decompositions. European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 521–536 (2012)

  37. Pimentel-Alarcón, D., Boston, N., Nowak, R.: A characterization of deterministic sampling patterns for low-rank matrix completion. IEEE J. Sel. Top. Signal Process. 10(4), 623–636 (2016)

  38. Rauhut, H., Schneider, R., Stojanac, Z.: Low rank tensor recovery via iterative hard thresholding. Linear Algebra Appl. 523, 220–262 (2017)

  39. Romera-Paredes, B., Pontil, M.: A new convex relaxation for tensor completion. Advances in Neural Information Processing Systems, pp. 2967–2975 (2013)

  40. Schollwöck, U.: The density-matrix renormalization group. Rev. Mod. Phys. 77(1), 259 (2005)

  41. Sidiropoulos, N.D., Kyrillidis, A.: Multi-way compressed sensing for sparse low-rank tensors. IEEE Signal Process. Lett. 19(11), 757–760 (2012)

  42. Signoretto, M., Dinh, Q.T., Lathauwer, L.D., Suykens, J.A.: Learning with tensors: a framework based on convex optimization and spectral regularization. Mach. Learn. 94(3), 303–351 (2014)

  43. Stegeman, A., Sidiropoulos, N.D.: On Kruskal’s uniqueness condition for the candecomp/Parafac decomposition. Linear Algebra Appl. 420(2), 540–552 (2007)

  44. Sturmfels, B.: Solving Systems of Polynomial Equations. CBMS Regional Conference Series in Mathematics, No. 97. American Mathematical Society (2002)

  45. ten Berge, J.M.F., Sidiropoulos, N.D.: On uniqueness in CANDECOMP/PARAFAC. Psychometrika 67(3), 399–409 (2002)

  46. Tomioka, R., Hayashi, K., Kashima, H.: Estimation of low-rank tensors via convex optimization. arXiv:1010.0789 (2010)

  47. Uschmajew, A., Vandereycken, B.: The geometry of algorithms using hierarchical tensors. Linear Algebra Appl. 439(1), 133–166 (2013)

  48. Wang, W., Aggarwal, V., Aeron, S.: Tensor completion by alternating minimization under the tensor train (TT) model. arXiv:1609.05587 (2016)

  49. Zhou, G., Cichocki, A., Xie, S.: Fast nonnegative matrix/tensor factorization based on low-rank approximation. IEEE Trans. Signal Process. 60(6), 2928–2940 (2012)

Acknowledgments

This work was supported in part by the U.S. National Science Foundation under Grant CCF-1814803 and in part by the U.S. Office of Naval Research under Grant N000141712827.

Corresponding author

Correspondence to Xiaodong Wang.


Appendix A: Canonical Decomposition and the Degree of Freedom

We are interested in imposing a structure on the decomposition \(\mathbb {U}\) such that, among all possible decompositions of the sampled tensor \(\mathcal {U}\), exactly one captures this structure. Before describing such a structure for the TT decomposition, we start with a similar structure for matrix decomposition.

Lemma 15

Let X denote a generically chosen matrix from the manifold of n1 × n2 matrices of rank r. Then, there exists a unique decomposition X = YZ such that \(\mathbf {Y} \in \mathbb {R}^{n_{1} \times r}\), \(\mathbf {Z} \in \mathbb {R}^{r \times n_{2}}\) and Y(1 : r, 1 : r) = Ir, where Y(1 : r, 1 : r) denotes the submatrix of Y consisting of its first r rows and first r columns, and Ir denotes the r × r identity matrix.

Proof

We show that there exists exactly one decomposition X = YZ such that Y(1 : r, 1 : r) = Ir with probability one. Considering the first r rows of X = YZ, we conclude X(1 : r, :) = IrZ = Z. Therefore, we need to show that there exists exactly one Y(r + 1 : n1, :) such that X(r + 1 : n1, :) = Y(r + 1 : n1, :)Z, or equivalently X(r + 1 : n1, :) = Y(r + 1 : n1, :)X(1 : r, :). It suffices to show that each row of Y(r + 1 : n1, :) is determined uniquely by \(\mathbf {x}^{\top } = \mathbf {y}^{\top } \mathbf {X}(1:r,:)\), where \(\mathbf {x} \in \mathbb {R}^{n_{2} \times 1}\) is the corresponding row of X(r + 1 : n1, :) and \(\mathbf {y} \in \mathbb {R}^{r \times 1}\). As X is a generically chosen n1 × n2 matrix of rank r, we have \(\text {rank}\left (\mathbf {X}(1:r,1:r)\right ) = r\) with probability one. Hence, \(\mathbf {x}(1:r)^{\top } = \mathbf {y}^{\top } \mathbf {X}(1:r,1:r)\) gives r independent degree-1 equations in the r entries of y, and therefore y has exactly one solution with probability one. □
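As a sanity check, the factors in this proof can be computed in closed form. The following is a minimal numerical sketch of Lemma 15, assuming numpy is available; the sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 6, 5, 2

# A generic rank-r matrix: product of generic factors.
X = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

# The proof forces Z = X(1:r, :) and determines the remaining rows of Y
# by solving r linear equations per row, via the invertible block X(1:r, 1:r).
Z = X[:r, :]
Y = np.vstack([np.eye(r), X[r:, :r] @ np.linalg.inv(X[:r, :r])])

assert np.allclose(Y @ Z, X)   # the unique decomposition with Y(1:r, 1:r) = I_r
```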

Remark 6

Note that the genericity assumption is necessary, as counterexamples to Lemma 15 exist in its absence; e.g., with n1 = 2 and n2 = r = 1, it is easily verified that the rank-1 matrix X = (0, 1)⊤ admits no decomposition

$$ \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ y \end{pmatrix} z, $$

since the first row forces z = 0, contradicting the second row.

Remark 7

Assume that \(\mathbf {Q} \in \mathbb {R}^{r \times r}\) is an arbitrary given full-rank matrix. Then, for any submatrix \(\mathbf {P} \in \mathbb {R}^{r \times r}\) of Y (see Note 1), Lemma 15 also holds if we replace Y(1 : r, 1 : r) = Ir by P = Q in the statement. The proof is similar to that of Lemma 15 and is thus omitted.
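For the matrix case, Remark 7 amounts to an invertible change of basis between the two factors. Below is a minimal numpy sketch under the same genericity assumption; the helper name refit_with_submatrix is ours, not from the paper.

```python
import numpy as np

def refit_with_submatrix(Y0, Z0, rows, Q):
    """Given X = Y0 @ Z0 with Y0 of size n1 x r, return (Y, Z) such that
    Y @ Z = X and Y[rows, :] = Q, assuming Y0[rows, :] is invertible."""
    G = np.linalg.inv(Y0[rows, :]) @ Q   # gauge matrix sending the chosen block to Q
    return Y0 @ G, np.linalg.inv(G) @ Z0

rng = np.random.default_rng(0)
Y0, Z0 = rng.standard_normal((6, 2)), rng.standard_normal((2, 5))
Q = rng.standard_normal((2, 2))          # arbitrary full-rank target block
Y, Z = refit_with_submatrix(Y0, Z0, [1, 4], Q)
assert np.allclose(Y @ Z, Y0 @ Z0) and np.allclose(Y[[1, 4], :], Q)
```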

As mentioned earlier, similar to the matrix case, we are interested in a structure on the TT decomposition such that, among all possible TT decompositions of a tensor, exactly one captures it. Hence, we define the following structure on the decomposition in order to characterize a condition on the sampling pattern via the algebraic independence of the above-mentioned polynomials.

Definition 4

Consider any d − 1 submatrices \(\mathbf {P}_{1},\dots ,\mathbf {P}_{d-1}\) of \(\mathbf {U}^{(1)},\mathbf {U}^{(2)}_{(2)},\mathbf {U}^{(3)}_{(2)},\dots ,\) \(\mathbf {U}^{(d-1)}_{(2)}\), respectively, such that (i) \(\mathbf {P}_{i} \in \mathbb {R}^{r_{i} \times r_{i}}\), \(i=1,\dots ,d-1\), and (ii) the ri columns of \(\mathbf {U}^{(i)}_{(2)}\) corresponding to the columns of Pi belong to ri distinct rows of \(\mathbf {U}^{(i)}_{(3)}\), \(i=2,\dots ,d-1\). Then, \(\mathbb {U}\) is said to have a proper structure if Pi is full rank for \(i=1,\dots ,d-1\) (see Note 2).

Define the matrices \(\mathbf {P}_{1}^{\text {can}},\dots ,\mathbf {P}_{d-1}^{\text {can}}\) such that for any \(1 \leq x_{i} \leq r_{i}\) and any \(1 \leq k_{i} \leq r_{i}\) we have:

$$ \mathbf{P}_{i}^{\text{can}} (x_{i},k_{i}) = \mathcal{U}^{(i)}(1,x_{i},k_{i}) \in \mathbb{R}^{r_{i} \times r_{i}}, \qquad i = 2,\dots,d-1, $$ (43)

and

$$ \mathbf{P}_{1}^{\text{can}} (x_{1},k_{1}) = \mathcal{U}^{(1)}(x_{1},k_{1}) \in \mathbb{R}^{r_{1} \times r_{1}}. $$ (44)

It is easy to verify that \(\mathbf {P}_{1}^{\text {can}},\dots ,\mathbf {P}_{d-1}^{\text {can}}\) satisfy properties (i) and (ii) in Definition 4.

Definition 5

(Canonical basis) We call \(\mathbb {U}\) a canonical decomposition if \(\mathbf {P}_{i}^{\text {can}} = \mathbf {I}_{r_{i}}\) for \(i =1,\dots ,d-1\), where \(\mathbf {I}_{r_{i}}\) is the ri × ri identity matrix.

Lemma 16

Consider the TT decomposition in (1). Then, \(\mathbf {U}^{(1)} \in \mathbb {R}^{n_{1} \times r_{1}}\), \(\mathbf {U}^{(d)} \in \mathbb {R}^{r_{d-1} \times n_{d}}\), \(\mathbf {U}^{(i)}_{(1)} \in \mathbb {R}^{r_{i-1} \times n_{i} r_{i}}\) and \(\mathbf {U}^{(i)}_{(3)} \in \mathbb {R}^{r_{i} \times r_{i-1} n_{i}}\), \(i = 2,\dots ,d-1\), are full rank matrices.

Proof

In general, besides the separation rank \((r_{1},\dots ,r_{d-1})\), a TT decomposition may exist for other rank vectors \((r_{1}^{\prime },\dots ,r_{d-1}^{\prime })\) as well. However, according to [20], among all possible TT decompositions for different values of the \(r_{i}^{\prime }\)’s, the choice \(r_{i}^{\prime } = \text {rank}(\mathbf {\widetilde U}_{(i)}) = r_{i}\), \(i=1,\dots ,d-1\), is minimal, in the sense that there does not exist any decomposition with \(r_{i}^{\prime } \leq r_{i}\) for all \(i=1,\dots ,d-1\) and \(r_{i}^{\prime } < r_{i}\) for at least one \(i \in \{1,\dots ,d-1\}\). By contradiction, assume that \(\mathbf {U}_{(1)}^{(i+1)}\) is not full rank. Then, \(\text {rank}\left (\mathbf { \widetilde U}_{(2)}^{(i)} \mathbf {U}_{(1)}^{(i+1)} \right ) < r_{i}\).

Let X denote the matrix \(\mathbf { \widetilde U}_{(2)}^{(i)} \mathbf {U}_{(1)}^{(i+1)}\). Since \(\text {rank}\left (\mathbf X \right ) = r_{i}^{\prime } < r_{i} \), there exists a decomposition \( \mathbf {X} = \mathbf { \widetilde U}_{(2)}^{{(i)}^{\prime }} \mathbf {U}_{(1)}^{{(i+1)}^{\prime }}\) such that \(\mathbf { \widetilde U}_{(2)}^{{(i)}^{\prime }} \in \mathbb {R}^{r_{i-1}n_{i} \times r_{i}^{\prime }}\) and \(\mathbf {U}_{(1)}^{{(i+1)}^{\prime }} \in \mathbb {R}^{r_{i}^{\prime } \times n_{i+1}r_{i+1}}\). Hence, the TT decomposition obtained by replacing \(\mathcal { U}^{(i)}\) and \( \mathcal {U}^{(i+1)}\) with \(\mathcal { U}^{{(i)}^{\prime }} \) and \( \mathcal {U}^{{(i+1)}^{\prime }}\) contradicts the above-mentioned minimality of the separation rank. Note that for a three-way tensor, the second TT unfolding is the transpose of the third Tucker unfolding, and therefore \(\text {rank}\left (\mathbf { \widetilde U}_{(2)}^{(i)} \right )= \text {rank}\left (\mathbf { U}_{(3)}^{(i)} \right )\); the remaining cases can be verified similarly. □
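The full-rank property in Lemma 16 is easy to observe numerically for generic cores. Here is a small sketch assuming numpy, with illustrative sizes (not from the paper); for generic cores the stated ranks hold with probability one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = [4, 5, 6, 4], [3, 4, 3]   # dimensions n_1..n_4 and separation rank (r_1, r_2, r_3)

# Generic interior cores U^(i) of shape r_{i-1} x n_i x r_i.
U2 = rng.standard_normal((r[0], n[1], r[1]))
U3 = rng.standard_normal((r[1], n[2], r[2]))

for core in (U2, U3):
    r_prev, n_i, r_i = core.shape
    first = core.reshape(r_prev, n_i * r_i)      # U^(i)_(1), size r_{i-1} x n_i r_i
    third = core.reshape(r_prev * n_i, r_i).T    # U^(i)_(3), size r_i x r_{i-1} n_i
    assert np.linalg.matrix_rank(first) == r_prev
    assert np.linalg.matrix_rank(third) == r_i
```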

Lemma 17

Assume that \(\mathbf {Q}_{i} \in \mathbb {R}^{r_{i} \times r_{i}}\) is an arbitrary given full-rank matrix, 1 ≤ i ≤ d − 1. Consider a set of matrices \(\mathbf {P}_{1},\dots ,\mathbf {P}_{d-1}\) that satisfy properties (i) and (ii) in Definition 4. Then, there exists exactly one decomposition \(\mathbb {U}\) of the sampled tensor \(\mathcal {U}\) such that Pi = Qi, \(i=1,\dots ,d-1\).

Proof

Consider an arbitrary decomposition \(\mathbb {U}\) of the sampled tensor \(\mathcal {U}\). Let \(\mathcal {A}^{(i)} = \mathcal {U}^{(i)} \mathcal {U}^{(i+1)} \in \mathbb {R}^{r_{i-1} \times n_{i} \times n_{i+1} \times r_{i+1}}\), \(i=1,\dots ,d-1\), where the multiplication is the same tensor multiplication as in the TT decomposition (1). Note that for a three-way tensor, the second TT unfolding is the transpose of the third Tucker unfolding, and therefore their ranks are equal. According to Lemma 16, \(\text {rank}\left (\mathbf {U}^{(1)} \right )=\text {rank}\left (\mathbf {U}^{(2)}_{(1)}\right )=r_{1}\), \(\text {rank}\left (\widetilde {\mathbf {U}}_{(2)}^{(2)} \right )=\text {rank}\left (\mathbf {U}_{(1)}^{(3)}\right )=r_{2}\), \(\dots \), and \(\text {rank}\left (\widetilde {\mathbf {U}}_{(2)}^{(d-1)} \right )=\text {rank}\left (\mathbf {U}^{(d)}\right )=r_{d-1}\).

As a result, we have \(\text {rank}\left (\mathbf {U}^{(1)} \mathbf {U}_{(1)}^{(2)}\right )=r_{1}\), \(\text {rank}\left (\widetilde {\mathbf {U}}^{(2)}_{(2)} \mathbf {U}_{(1)}^{(3)}\right )=r_{2}, \dots ,\) \(\text {rank}\left (\widetilde {\mathbf {U}}_{(2)}^{(d-1)} \mathbf {U}^{(d)}\right ) = r_{d-1}\). Observe that \(\widetilde {\mathbf {U}}_{(2)}^{(i)} \mathbf {U}_{(1)}^{(i+1)} = \widetilde {\mathbf {A}}_{(2)}^{(i)}\), and therefore \(\text {rank}\left (\widetilde {\mathbf {A}}_{(2)}^{(i)} \right ) = r_{i}\) for \(i=2,\dots ,d-2\); similarly, \(\text {rank}\left (\widetilde {\mathbf {A}}_{(1)}^{(1)} \right ) = r_{1}\) and \(\text {rank}\left (\widetilde {\mathbf {A}}_{(2)}^{(d-1)} \right ) = r_{d-1}\). According to Lemma 15 and Remark 7, for a generically chosen n1 × n2 matrix X of rank r there exists a unique decomposition X = X1X2 such that \(\mathbf {X}_{1} \in \mathbb {R}^{n_{1} \times r}\), \(\mathbf {X}_{2} \in \mathbb {R}^{r \times n_{2}}\), and a given r × r submatrix of X1 is equal to the given r × r full-rank matrix.

We claim that there exist \((\mathcal {V}^{(i)},\mathcal {V}^{(i+1)})\) such that \(\mathcal {V}^{(i)} \mathcal {V}^{(i+1)} = \mathcal {A}^{(i)}\) and the corresponding submatrix Pi is equal to the given full-rank matrix Qi, \(i=1,\dots ,d-1\). We repeat this procedure for each \(i=1,\dots ,d-1\), updating the two core tensors \((\mathcal {V}^{(i)},\mathcal {V}^{(i+1)})\) at iteration i, and at the end we obtain a TT decomposition with the structure stated in Lemma 17. In the following we show the existence of such \((\mathcal {V}^{(i)},\mathcal {V}^{(i+1)})\) at each iteration. At step one, we find \((\mathcal {V}^{(1)},\mathcal {V}^{(2)})\) such that \(\mathcal {V}^{(1)} \mathcal {V}^{(2)} = \mathcal {A}^{(1)}\) and the corresponding submatrix P1 of \(\mathcal {V}^{(1)}\) is equal to Q1. We update the decomposition with \(\mathcal {U}^{(1)}\) and \(\mathcal {U}^{(2)}\) replaced by \(\mathcal {V}^{(1)}\) and \(\mathcal {V}^{(2)}\), and therefore obtain a new decomposition \(\mathbb {U}^{1}\) of the sampled tensor \(\mathcal {U}\) such that the submatrix of \(\mathcal {V}^{(1)}\) corresponding to P1 is equal to Q1. Then, in step 2 we consider \(\mathcal {A}^{(2)}\) and similarly update the second and third factors of the decomposition obtained in the previous step. Eventually, after d − 1 steps, we obtain a decomposition of the sampled tensor \(\mathcal {U}\) such that Pi = Qi, \(i=1,\dots ,d-1\). To show the uniqueness of such a decomposition, we show that each core tensor of the TT decomposition can be determined uniquely. Applying Remark 7 to rank component r1 yields that \(\mathcal {U}^{(1)}\) and the product of the remaining core tensors of the TT decomposition are determined uniquely. Repeating this procedure for the other rank components verifies the uniqueness of the core tensors one by one. □
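The re-gauging procedure in this proof is constructive and easy to carry out numerically. Below is a minimal numpy sketch for a four-way tensor, specialized to the canonical choice \(\mathbf {Q}_{i} = \mathbf {I}_{r_{i}}\) of Definition 5; the helper tt_to_full and all sizes are our illustrative assumptions, and the blocks \(\mathbf {P}_{i}^{\text {can}}\) are assumed invertible (the generic case).

```python
import numpy as np

def tt_to_full(U1, cores, Ud):
    """Contract TT factors U1 (n1 x r1), cores (r_{i-1} x n_i x r_i), Ud (r_{d-1} x nd)."""
    T = U1
    for C in cores:
        T = np.tensordot(T, C, axes=(-1, 0))
    return np.tensordot(T, Ud, axes=(-1, 0))

rng = np.random.default_rng(0)
n, r = [4, 5, 6, 4], [3, 4, 3]
U1 = rng.standard_normal((n[0], r[0]))
U2 = rng.standard_normal((r[0], n[1], r[1]))
U3 = rng.standard_normal((r[1], n[2], r[2]))
U4 = rng.standard_normal((r[2], n[3]))
T = tt_to_full(U1, [U2, U3], U4)   # reference tensor, kept fixed throughout

# Step 1: make P1 = U1(1:r1, 1:r1) equal to I_{r1}; absorb the inverse gauge into U2.
P1 = U1[:r[0], :r[0]].copy()
U1 = U1 @ np.linalg.inv(P1)
U2 = np.tensordot(P1, U2, axes=(1, 0))

# Step 2: make P2^can = U2(1, 1:r2, 1:r2) the identity; absorb the gauge into U3.
P2 = U2[0, :r[1], :r[1]].copy()
U2 = np.tensordot(U2, np.linalg.inv(P2), axes=(2, 0))
U3 = np.tensordot(P2, U3, axes=(1, 0))

# Step 3: the same for the third factor, absorbing the gauge into U4.
P3 = U3[0, :r[2], :r[2]].copy()
U3 = np.tensordot(U3, np.linalg.inv(P3), axes=(2, 0))
U4 = P3 @ U4

assert np.allclose(tt_to_full(U1, [U2, U3], U4), T)   # the tensor is unchanged
assert np.allclose(U1[:r[0], :r[0]], np.eye(r[0]))    # canonical blocks are identities
assert np.allclose(U2[0, :r[1], :r[1]], np.eye(r[1]))
assert np.allclose(U3[0, :r[2], :r[2]], np.eye(r[2]))
```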

Lemma 17 leads to the fact that, given \(\mathcal {U}^{(d)}\), the dimension of all tuples \((\mathcal {U}^{(1)},{\dots } ,\mathcal {U}^{(d-1)})\) that satisfy the TT decomposition is \( {\sum }_{i=1}^{d-1} r_{i-1}n_{i}r_{i} -{\sum }_{i=1}^{d-1} {r_{i}^{2}} \) (with \(r_{0} = 1\)), as \( {\sum }_{i=1}^{d-1} r_{i-1}n_{i}r_{i}\) is the total number of entries of \((\mathcal {U}^{(1)} , {\dots } , \mathcal {U}^{(d-1)})\) and \({\sum }_{i=1}^{d-1} {r_{i}^{2}} \) is the total number of entries of the pattern or structure that is equivalent to the uniqueness of the TT decomposition.
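This dimension count is easy to script; the following is a minimal sketch (the function name is ours), with \(r_{0} = 1\).

```python
def tt_degrees_of_freedom(n, r):
    """Dimension of the tuples (U^(1), ..., U^(d-1)) given U^(d):
    sum_{i=1}^{d-1} r_{i-1} n_i r_i - sum_{i=1}^{d-1} r_i^2, with r_0 = 1."""
    r = [1] + list(r)
    entries = sum(r[i - 1] * n[i - 1] * r[i] for i in range(1, len(r)))
    gauge = sum(ri * ri for ri in r[1:])
    return entries - gauge

# Example: d = 4, n = (4, 5, 6, 4), separation rank (3, 4, 3):
# (1*4*3 + 3*5*4 + 4*6*3) - (9 + 16 + 9) = 144 - 34 = 110.
assert tt_degrees_of_freedom([4, 5, 6, 4], [3, 4, 3]) == 110
```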

Note that for Lemma 17, we need the strong low-rankness assumption \(r_{i} \leq n_{i}\). However, these results are also consequences of the analysis in [20]; the purpose here is to present a simple and intuitive proof.


Cite this article

Ashraphijuo, M., Wang, X. Characterization of sampling patterns for low-TT-rank tensor retrieval. Ann Math Artif Intell 88, 859–886 (2020). https://doi.org/10.1007/s10472-020-09691-6

