
A robust low-rank matrix completion based on truncated nuclear norm and Lp-norm

Published in The Journal of Supercomputing.

Abstract

The low-rank matrix completion problem has attracted considerable attention in fields such as engineering and the applied sciences. Classical methods approximate the rank-minimization problem by minimizing the nuclear norm, which can yield unsatisfactory results that deviate from the true solution. In addition, most methods minimize the squared error directly, which is sensitive to outliers. This paper presents a robust matrix completion model suited to low sampling rates. First, the truncated nuclear norm is introduced as a more accurate and robust approximation of the rank function. Then, the Lp-norm is employed as the error function, providing a robust estimate. Finally, several optimization algorithms are employed to solve the model. Numerical simulations and experimental data analysis demonstrate the effectiveness and advantages of the proposed method: the algorithm better approximates the rank-minimization problem and is more robust to outliers, especially when the sampling rate is very low. The method's practical potential is illustrated on the MovieLens-1M dataset.
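The two ingredients named in the abstract can be sketched numerically. This is a minimal illustration, not the paper's algorithm: the helper names `truncated_nuclear_norm` and `lp_error` are assumptions for this sketch. The truncated nuclear norm of a matrix is the sum of its singular values beyond the r largest (so it vanishes exactly when the matrix has rank at most r), and an Lp error with p ≤ 1 penalizes large residuals far less than the squared error does, which is the source of the robustness to outliers.

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of the singular values beyond the r largest.

    Minimizing this quantity pushes X toward rank r, unlike the full
    nuclear norm, which also shrinks the r dominant singular values.
    """
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return s[r:].sum()

def lp_error(residual, p=1.0):
    """Elementwise Lp error sum; p <= 1 downweights outliers."""
    return np.sum(np.abs(residual) ** p)

# Toy check: an exactly rank-1 matrix has (numerically) zero
# truncated nuclear norm once the largest singular value is excluded.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(5), rng.standard_normal(5)
M = np.outer(u, v)
tnn = truncated_nuclear_norm(M, 1)

# A residual vector with one gross outlier: the L1 error grows
# linearly in the outlier, the squared (L2) error quadratically.
res = np.array([0.1] * 9 + [10.0])
e1, e2 = lp_error(res, p=1.0), lp_error(res, p=2.0)
print(tnn, e1, e2)
```

Running this shows `tnn` at machine-precision scale and `e1` roughly an order of magnitude below `e2`, which is the intuition behind replacing the squared error with an Lp error when observed entries may be corrupted.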



Author information


Correspondence to Li Kang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Liang, H., Kang, L. & Huang, J. A robust low-rank matrix completion based on truncated nuclear norm and Lp-norm. J Supercomput 78, 12950–12972 (2022). https://doi.org/10.1007/s11227-022-04385-8
