
Low-Rank Tensor Completion Based on Log-Det Rank Approximation and Matrix Factorization

Journal of Scientific Computing

Abstract

Rank evaluation plays a key role in low-rank tensor completion, and the tensor nuclear norm is often used as a surrogate for the rank in the optimization because it is convex. However, this replacement often introduces unexpected errors, and since it requires singular value decompositions, evaluating the norm is computationally expensive, especially for the large matrices obtained from the mode-n unfoldings of a tensor. This paper presents a novel tensor completion method in which a non-convex logDet function is used to approximate the rank and a matrix factorization is adopted to reduce the cost of evaluating that function. The study shows that the logDet function is a much tighter rank approximation than the nuclear norm and that the matrix factorization significantly reduces the size of the matrices that need to be evaluated. In the implementation, the alternating direction method of multipliers (ADMM) is employed to obtain the optimal tensor completion. Several experiments are carried out to validate the method, and the results show that it is effective.
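To make the two ideas in the abstract concrete, the short sketch below compares the nuclear norm with a smoothed logDet rank surrogate on a synthetic low-rank matrix, and shows how a factorization X = A B lets the surrogate be evaluated on a small r-by-r core instead of the full mode-n unfolding. This is a minimal illustration only, not the authors' implementation; the smoothing parameter eps, the matrix sizes, and the rank-5 example are assumptions chosen for the demonstration.

```python
import numpy as np

# Minimal sketch (assumed example, not the paper's algorithm): the smoothed
# logDet surrogate  sum_i log(sigma_i(X)^2 + eps)  versus the nuclear norm
# sum_i sigma_i(X), and a factorized evaluation that avoids decomposing the
# full unfolding.

def logdet_surrogate(X, eps=1e-3):
    """Smoothed logDet rank surrogate of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(np.log(s ** 2 + eps))

def nuclear_norm(X):
    """Sum of singular values (convex rank surrogate)."""
    return np.sum(np.linalg.svd(X, compute_uv=False))

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))   # tall factor
B = rng.standard_normal((5, 300))   # wide factor
X = A @ B                           # rank-5 stand-in for a mode-n unfolding

print("rank         :", np.linalg.matrix_rank(X))
print("nuclear norm :", nuclear_norm(X))
print("logDet(full) :", logdet_surrogate(X))

# Factorized evaluation: the nonzero singular values of X = A @ B equal the
# singular values of the small r x r matrix Ra @ Rb.T, where A = Qa Ra and
# B.T = Qb Rb are thin QR factorizations. The 200 x 300 unfolding is never
# decomposed directly; the zero singular values of X only contribute a
# constant (200 - 5) * log(eps) to the surrogate.
Qa, Ra = np.linalg.qr(A)
Qb, Rb = np.linalg.qr(B.T)
core = Ra @ Rb.T                    # 5 x 5 core carrying X's nonzero spectrum
s_full = np.linalg.svd(X, compute_uv=False)[:5]
s_core = np.linalg.svd(core, compute_uv=False)
print("spectra match:", np.allclose(s_full, s_core))
print("logDet(core) :", logdet_surrogate(core))
```

The same effect motivates the factorized formulation in the paper: once the unfolding is represented by low-rank factors, the rank surrogate can be computed from a matrix whose size depends on the target rank rather than on the tensor dimensions.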





Acknowledgements

We are grateful to J. Liu, S. Gandy, M. Bai, P. Zhou, and J. A. Bengua for sharing the code for FaLRTC, ADM-TR(E), ACA, TCTF, and TMAC-TT.

Author information

Corresponding author: Tifan Xiong.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This research was supported by the National Key Research and Development Program of China (Project No. 2017YFD0700103) and the National Natural Science Foundation of China (Grant Nos. 51775202 and 51475186).


About this article


Cite this article

Shi, C., Huang, Z., Wan, L. et al. Low-Rank Tensor Completion Based on Log-Det Rank Approximation and Matrix Factorization. J Sci Comput 80, 1888–1912 (2019). https://doi.org/10.1007/s10915-019-01009-x

