
Low-Rank Matrix Recovery via Continuation-Based Approximate Low-Rank Minimization

  • Conference paper
PRICAI 2018: Trends in Artificial Intelligence (PRICAI 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11012)

Included in the conference series: PRICAI (Pacific Rim International Conference on Artificial Intelligence)

Abstract

Low-rank matrix recovery (LRMR) aims to recover the underlying low-rank structure from degraded observations and admits many formulations, such as robust principal component analysis (RPCA). As a core component of LRMR, the low-rank approximation model attempts to capture the low-rank structure by approximating the \(\ell _0\)-norm of the singular values of a low-rank matrix, i.e., the number of non-zero singular values. To this end, this paper develops a low-rank approximation model that jointly combines a parameterized hyperbolic tangent (tanh) function with a continuation process. Specifically, the continuation process gradually drives the parameterized tanh function toward the \(\ell _0\)-norm. We then apply the proposed low-rank model to RPCA and refer to the result as tanh-RPCA. A convergence analysis of the optimization, together with background-subtraction experiments on seven challenging real-world videos, shows the efficacy of the proposed low-rank model in comparison with several state-of-the-art methods.
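To make the abstract's construction concrete: a surrogate of the form \(\sum _i \tanh (\gamma \sigma _i)\) tends to the \(\ell _0\)-norm of the singular values as \(\gamma \) grows, and continuation amounts to solving a sequence of subproblems with progressively larger \(\gamma \), each warm-started from the previous one. The NumPy sketch below illustrates that pattern; the surrogate form, the weighted singular-value shrinkage step, and all parameter values are assumptions made for illustration, not the authors' published algorithm.

```python
# Minimal sketch of a tanh rank surrogate with continuation, applied to
# an RPCA-style split D ~ L + S. The surrogate sum_i tanh(gamma*sigma_i),
# the weighted shrinkage update, and every parameter value below are
# illustrative assumptions, not the paper's published method.
import numpy as np

def tanh_rank_surrogate(X, gamma):
    """Smooth proxy for the l0-norm of the singular values: as gamma
    grows, tanh(gamma * sigma) tends to the 0/1 indicator of sigma > 0,
    so the sum tends to rank(X)."""
    sigma = np.linalg.svd(X, compute_uv=False)
    return np.tanh(gamma * sigma).sum()

def soft_threshold(X, tau):
    """Elementwise soft-thresholding: the proximal map of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def tanh_rpca_sketch(D, lam=None, gamma0=1.0, rho=2.0,
                     n_outer=8, n_inner=50, step=0.5):
    """Inner loop: alternate an l1 prox step on S with a weighted
    singular-value shrinkage on L, weighting each singular value by the
    surrogate's derivative gamma * (1 - tanh(gamma*sigma)^2); large
    singular values get near-zero weights and are barely shrunk, the
    usual benefit of a nonconvex surrogate over the nuclear norm.
    Outer loop: the continuation process, re-solving with sharper gamma."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))  # common RPCA default
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    gamma = gamma0
    for _ in range(n_outer):
        for _ in range(n_inner):
            # S-step: sparse residual via soft-thresholding.
            S = soft_threshold(D - L, step * lam)
            # L-step: weighted singular-value shrinkage of D - S.
            U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
            w = gamma * (1.0 - np.tanh(gamma * s) ** 2)
            L = (U * np.maximum(s - step * w, 0.0)) @ Vt
        gamma *= rho  # sharpen the surrogate toward the l0-norm
    return L, S
```

For instance, calling tanh_rpca_sketch(D) on a matrix D formed as a low-rank term plus sparse corruptions returns candidate estimates of both terms; in practice the step size and the schedule \(\gamma \leftarrow \rho \gamma \) would need tuning per dataset.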

X. Zhang and Y. Gao contributed equally to this work. This work was supported by the National Key Research and Development Program of China [2016YFB0200401], the National Natural Science Foundation of China [U1435222], and the National High-tech R&D Program [2015AA020108].


Notes

  1. F-measure is the harmonic mean of precision and recall, with the best value at 1 (perfect precision and recall) and the worst at 0.
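
In symbols, with precision \(P\) and recall \(R\):

\(F = \frac{2PR}{P+R}\)

so \(F = 1\) exactly when \(P = R = 1\), and \(F\) tends to 0 as either \(P\) or \(R\) vanishes.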


Author information


Correspondence to Yongqiang Gao or Zhigang Luo.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, X., Gao, Y., Lan, L., Guo, X., Huang, X., Luo, Z. (2018). Low-Rank Matrix Recovery via Continuation-Based Approximate Low-Rank Minimization. In: Geng, X., Kang, B.H. (eds) PRICAI 2018: Trends in Artificial Intelligence. PRICAI 2018. Lecture Notes in Computer Science, vol 11012. Springer, Cham. https://doi.org/10.1007/978-3-319-97304-3_43


  • DOI: https://doi.org/10.1007/978-3-319-97304-3_43

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-97303-6

  • Online ISBN: 978-3-319-97304-3

  • eBook Packages: Computer Science, Computer Science (R0)
