
A faster tensor robust PCA via tensor factorization

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

Many kinds of real-world multi-way signals, such as color images and videos, are represented as tensors and are often corrupted by outliers. To recover an unknown signal tensor corrupted by outliers, tensor robust principal component analysis (TRPCA) serves as a robust tensorial extension of the fundamental PCA. Recently, a successful TRPCA model based on the tubal nuclear norm (TNN) (Lu et al. in IEEE Trans Pattern Anal Mach Intell 42:925–938, 2019) has attracted much attention thanks to its superiority in many applications. However, TNN is computationally expensive because it requires full singular value decompositions, which seriously limits its scalability to large tensors. To address this issue, we propose a new TRPCA model that adopts a factorization strategy. Algorithmically, we develop an algorithm based on the non-convex augmented Lagrangian method with a convergence guarantee. Theoretically, we rigorously establish the sub-optimality of the proposed algorithm. We also extend the proposed model to the robust tensor completion problem. Both the effectiveness and efficiency of the proposed algorithm are demonstrated through extensive experiments on synthetic and real data sets.
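To make the factorization idea concrete, the Python sketch below illustrates the key point of the abstract: instead of computing the full t-SVD of every frontal slice in the Fourier domain (as TNN-based models require), the low-tubal-rank component is written as a t-product L = X * Y of two small factor tensors and updated by slice-wise least squares, with soft-thresholding for the sparse outlier term. This is only a minimal illustrative sketch, not the paper's exact non-convex augmented Lagrangian algorithm; the tensor sizes, rank r, regularization weight lam, and iteration count are assumed placeholders.

import numpy as np

def soft_threshold(T, tau):
    # Entrywise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(T) * np.maximum(np.abs(T) - tau, 0.0)

def factorized_trpca(M, r, lam, n_iter=100):
    # Alternating minimization for
    #   min_{X,Y,S} 0.5 * ||X * Y + S - M||_F^2 + lam * ||S||_1,
    # where * is the t-product, i.e. slice-wise matrix products after an FFT
    # along the third mode. Each iteration solves small rank-r least-squares
    # problems per Fourier slice instead of full singular value decompositions.
    n1, n2, n3 = M.shape
    rng = np.random.default_rng(0)
    Xf = rng.standard_normal((n1, r, n3)) + 0j   # factor X, Fourier domain
    Yf = rng.standard_normal((r, n2, n3)) + 0j   # factor Y, Fourier domain
    S = np.zeros_like(M)
    for _ in range(n_iter):
        Rf = np.fft.fft(M - S, axis=2)           # residual, Fourier domain
        for k in range(n3):                      # independent slice problems
            Xf[:, :, k] = Rf[:, :, k] @ np.linalg.pinv(Yf[:, :, k])
            Yf[:, :, k] = np.linalg.pinv(Xf[:, :, k]) @ Rf[:, :, k]
        L = np.real(np.fft.ifft(np.einsum('irk,rjk->ijk', Xf, Yf), axis=2))
        S = soft_threshold(M - L, lam)           # sparse outlier update
    return L, S

# Usage on synthetic data: a low-tubal-rank tensor plus sparse outliers.
n1, n2, n3, r = 40, 40, 20, 3
rng = np.random.default_rng(1)
A = np.fft.fft(rng.standard_normal((n1, r, n3)), axis=2)
B = np.fft.fft(rng.standard_normal((r, n2, n3)), axis=2)
L0 = np.real(np.fft.ifft(np.einsum('irk,rjk->ijk', A, B), axis=2))
S0 = (rng.random((n1, n2, n3)) < 0.05) * 5.0 * rng.standard_normal((n1, n2, n3))
L_hat, S_hat = factorized_trpca(L0 + S0, r, lam=0.5)
print('relative error of L:', np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))

Because the per-slice subproblems involve only n1 x r and r x n2 matrices, the cost per iteration scales with the assumed tubal rank r rather than with full slice-wise SVDs, which is the source of the speedup the paper targets.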


Notes

  1. Although the problem of tensor RPCA is related to RPCA [3] and tensor completion (TC) [18], we do not further review related works on variants of RPCA and TC.

  2. http://www.mrt.kit.edu/z/publ/download/velodynetracking/dataset.html.

References

  1. Boyd S, Parikh N, Chu E, Peleato B, Eckstein J (2011) Distributed optimization and statistical learning via the alternating direction method of multipliers. Found Trends Mach Learn 3(1):1–122

  2. Candès EJ, Li X, Ma Y, Wright J (2011) Robust principal component analysis? J ACM 58(3):11

  3. Candès EJ, Tao T (2010) The power of convex relaxation: near-optimal matrix completion. IEEE Trans Inf Theory 56(5):2053–2080

  4. Fazel M (2002) Matrix rank minimization with applications. Ph.D. thesis, Stanford University

  5. Foucart S, Rauhut H (2013) A mathematical introduction to compressive sensing, vol 1. Birkhäuser, Basel

  6. Friedland S, Lim L (2017) Nuclear norm of higher-order tensors. Math Comput 87(311):1255–1281

  7. Goldfarb D, Qin Z (2014) Robust low-rank tensor recovery: models and algorithms. SIAM J Matrix Anal Appl 35(1):225–253

  8. Harshman RA (1970) Foundations of the PARAFAC procedure: models and conditions for an "explanatory" multi-modal factor analysis

  9. Hillar CJ, Lim L (2009) Most tensor problems are NP-hard. J ACM 60(6):45

  10. Huang B, Mu C, Goldfarb D, Wright J (2015) Provable models for robust low-rank tensor completion. Pac J Optim 11(2):339–364

  11. Jiang Q, Ng M (2019) Robust low-tubal-rank tensor completion via convex optimization. In: Proceedings of the 28th international joint conference on artificial intelligence. AAAI Press, Macao, China, pp 2649–2655

  12. Kilmer ME, Braman K, Hao N, Hoover RC (2013) Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. SIAM J Matrix Anal Appl 34(1):148–172

  13. Kolda TG, Bader BW (2009) Tensor decompositions and applications. SIAM Rev 51(3):455–500

  14. Lai Z, Xu Y, Chen Q, Yang J, Zhang D (2014) Multilinear sparse principal component analysis. IEEE Trans Neural Netw 25(10):1942–1950

  15. Lai Z, Xu Y, Yang J, Tang J, Zhang D (2013) Sparse tensor discriminant analysis. IEEE Trans Image Process 22(10):3904–3915

  16. Lin Z, Chen M, Ma Y (2010) The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055

  17. Liu G, Yan S (2012) Active subspace: toward scalable low-rank learning. Neural Comput 24(12):3371–3394

  18. Liu J, Musialski P, Wonka P, Ye J (2013) Tensor completion for estimating missing values in visual data. IEEE Trans Pattern Anal Mach Intell 35(1):208–220

  19. Liu X, Aeron S, Aggarwal V, Wang X (2020) Low-tubal-rank tensor completion using alternating minimization. IEEE Trans Inf Theory 66(3):1714–1737

  20. Liu Y, Jiao L, Shang F (2013) A fast tri-factorization method for low-rank matrix recovery and completion. Pattern Recognit 46(1):163–173

  21. Liu Z, Lai Z, Ou W, Zhang K, Zheng R (2020) Structured optimal graph based sparse feature extraction for semi-supervised learning. Signal Process 170:107456

  22. Liu Z, Wang J, Liu G, Zhang L (2019) Discriminative low-rank preserving projection for dimensionality reduction. Appl Soft Comput 85:105768

  23. Lu C, Feng J, Chen Y, Liu W, Lin Z, Yan S (2016) Tensor robust principal component analysis: exact recovery of corrupted low-rank tensors via convex optimization. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE, Las Vegas, USA, pp 5249–5257

  24. Lu C, Feng J, Liu W, Lin Z, Yan S et al (2019) Tensor robust principal component analysis with a new tensor nuclear norm. IEEE Trans Pattern Anal Mach Intell 42:925–938

  25. Moosmann F, Stiller C (2013) Joint self-localization and tracking of generic objects in 3D range data. In: Proceedings of the IEEE international conference on robotics and automation. IEEE, Karlsruhe, Germany, pp 1138–1144

  26. Peng Y, Lu BL (2017) Discriminative extreme learning machine with supervised sparsity preserving for image classification. Neurocomputing 261:242–252

  27. Peng Y, Lu BL (2017) Robust structured sparse representation via half-quadratic optimization for face recognition. Multimed Tools Appl 76(6):8859–8880

  28. Romera-Paredes B, Pontil M (2013) A new convex relaxation for tensor completion. In: Proceedings of advances in neural information processing systems. The Neural Information Processing Systems Foundation, Lake Tahoe, USA, pp 2967–2975

  29. Tucker LR (1966) Some mathematical notes on three-mode factor analysis. Psychometrika 31(3):279–311

  30. Wang A, Jin Z (2017) Near-optimal noisy low-tubal-rank tensor completion via singular tube thresholding. In: Proceedings of the IEEE international conference on data mining workshops (ICDMW). IEEE, New Orleans, USA, pp 553–560

  31. Wang A, Jin Z, Tang G (2020) Robust tensor decomposition via t-SVD: near-optimal statistical guarantee and scalable algorithms. Signal Process 167:107319. https://doi.org/10.1016/j.sigpro.2019.107319

  32. Wang A, Jin Z, Yang J (2019) A factorization strategy for tensor robust PCA. In: Proceedings of the 5th IAPR Asian conference on pattern recognition (ACPR). IAPR, Auckland, New Zealand, pp 424–437

  33. Wang A, Lai Z, Jin Z (2019) Noisy low-tubal-rank tensor completion. Neurocomputing 330:267–279

  34. Wang A, Li C, Jin Z, Zhao Q (2020) Robust tensor decomposition via orientation invariant tubal nuclear norms. In: Proceedings of the AAAI conference on artificial intelligence. AAAI Press, New York, USA, pp 6102–6109

  35. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612

  36. Wu T, Bajwa WU (2018) A low tensor-rank representation approach for clustering of imaging data. IEEE Signal Process Lett 25(8):1196–1200

  37. Xie Y, Tao D, Zhang W, Liu Y, Zhang L, Qu Y (2018) On unifying multi-view self-representations for clustering by tensor multi-rank minimization. Int J Comput Vis 126(11):1157–1179

  38. Xu Y, Hao R, Yin W, Su Z (2015) Parallel matrix factorization for low-rank tensor completion. Inverse Probl Imaging 9(2):601–624

  39. Xue J, Zhao Y, Liao W, Chan JCW (2018) Total variation and rank-1 constraint RPCA for background subtraction. IEEE Access 6:49955–49966

  40. Xue J, Zhao Y, Liao W, Chan JCW (2019) Nonconvex tensor rank minimization and its applications to tensor recovery. Inf Sci 503:109–128

  41. Xue J, Zhao Y, Liao W, Chan JCW, Kong SG (2020) Enhanced sparsity prior model for low-rank tensor completion. IEEE Trans Neural Netw Learn Syst. https://doi.org/10.1109/TNNLS.2019.2956153

  42. Zhang F, Yang G, Yang Z, Wan M (2018) Robust recovery of corrupted image data based on the \(l_{1-2}\) metric. IEEE Access 6:5848–5855

  43. Zhang Z, Aeron S (2017) Exact tensor completion using t-SVD. IEEE Trans Signal Process 65(6):1511–1526

  44. Zhang Z, Ely G, Aeron S, Hao N, Kilmer M (2014) Novel methods for multilinear data completion and de-noising based on tensor SVD. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE, Columbus, USA, pp 3842–3849

  45. Zhou P, Feng J (2017) Outlier-robust tensor PCA. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE, Honolulu, USA, pp 3938–3946


Acknowledgements

The authors would like to thank Prof. Qibin Zhao, Ms. Jin Wang, Mr. Bo Wang and Dr. Dongxu Wei for their long-term support. This work is partially supported by the National Natural Science Foundation of China [Grant nos. 61872188, U1713208, 61972204, 61672287, 61861136011, 61773215, 61703209, 61972212], by the Natural Science Foundation of Guangdong Province [Grant nos. 2020A1515010671, 2017A030313367], and by the Natural Science Foundation of Jiangsu Province [Grant no. BK20190089].

Author information

Corresponding author

Correspondence to Zhong Jin.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Part of this work [32] was presented at the 5th IAPR Asian Conference on Pattern Recognition (ACPR), 26–29 November 2019, Auckland, New Zealand.


Cite this article

Wang, AD., Jin, Z. & Yang, JY. A faster tensor robust PCA via tensor factorization. Int. J. Mach. Learn. & Cyber. 11, 2771–2791 (2020). https://doi.org/10.1007/s13042-020-01150-2
