
Unsupervised learning low-rank tensor from incomplete and grossly corrupted data

  • Machine Learning - Applications & Techniques in Cyber Intelligence
  • Published in: Neural Computing and Applications

Abstract

Low-rank tensor completion and recovery have received considerable attention in the recent literature. Existing algorithms, however, tend to fail when the multiway data are simultaneously contaminated by arbitrary outliers and missing values. In this paper, we study the unsupervised tensor learning problem of recovering a low-rank tensor from an incomplete and grossly corrupted multidimensional array. We introduce a unified framework for this problem by replacing the linear projection operator constraint with a simple equality constraint, and reformulate it as two convex optimization problems through different approximations of the tensor rank. We propose two globally convergent algorithms for solving these problems, derived from the alternating direction augmented Lagrangian (ADAL) and linearized proximal ADAL methods, respectively. Experimental results on synthetic and real-world data validate the effectiveness and superiority of our methods.
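The optimization machinery the abstract alludes to can be illustrated on a single matrix unfolding. The sketch below is not the paper's algorithm; it is a minimal ADAL-style (inexact augmented Lagrangian) loop for the closely related robust matrix completion problem, alternating singular-value thresholding (the proximal step for the nuclear norm) with elementwise soft thresholding (the proximal step for the l1 outlier term), with one dual ascent per sweep. The function name `robust_complete` and all parameter choices are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: the proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Elementwise soft thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_complete(M, mask, lam=None, iters=300, tol=1e-7):
    """Recover a low-rank L from P_Omega(M) = P_Omega(L + S), where S holds
    sparse gross corruptions and mask (1 = observed) encodes Omega, via an
    inexact augmented-Lagrangian loop.  Off Omega the constraint is inactive,
    so S simply absorbs the residual there."""
    m, n = M.shape
    M = M * mask                          # zero-fill the unobserved entries
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))    # usual robust-PCA weighting
    norm_M = np.linalg.norm(M)
    mu = 1.25 / np.linalg.norm(M, 2)      # common inexact-ALM initialisation
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                  # Lagrange multiplier
    for _ in range(iters):
        # L-step: singular-value thresholding on the current residual
        L = svt(M - S + Y / mu, 1.0 / mu)
        # S-step: soft-threshold on observed entries, exact fit elsewhere
        R = M - L + Y / mu
        S = mask * shrink(R, lam / mu) + (1.0 - mask) * R
        # dual ascent on the multiplier, then increase the penalty parameter
        Z = M - L - S
        Y = Y + mu * Z
        mu = min(1.5 * mu, 1e7)
        if np.linalg.norm(Z) < tol * norm_M:
            break
    return L, S
```

On synthetic data (e.g. a rank-2 matrix with 5% gross outliers and 20% missing entries) such a loop typically recovers L to small relative error. A tensor version in the spirit of the paper would apply updates of this kind to the mode-n unfoldings of the data array rather than to a single matrix.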





Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 61702023 and 91538204).

Author information

Correspondence to Yaoming Zhou.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.


About this article


Cite this article

Meng, Z., Zhou, Y. & Zhao, Y. Unsupervised learning low-rank tensor from incomplete and grossly corrupted data. Neural Comput & Applic 31, 8327–8335 (2019). https://doi.org/10.1007/s00521-018-3899-x

