
An Approximate Augmented Lagrangian Method for Nonnegative Low-Rank Matrix Approximation

Published in: Journal of Scientific Computing

A Correction to this article was published on 07 December 2021


Abstract

Nonnegative low-rank (NLR) matrix approximation, which differs from classical nonnegative matrix factorization, has recently been proposed for several data analysis applications. The approximation is computed by alternately projecting onto the fixed-rank matrix manifold and the nonnegative matrix manifold. To ensure convergence of the alternating projection method, the given nonnegative matrix must be close to a non-tangential point in the intersection of the nonnegative and low-rank manifolds. The main aim of this paper is to develop an approximate augmented Lagrangian method for solving the NLR matrix approximation problem. We show that the sequence generated by the approximate augmented Lagrangian method converges to a critical point of the NLR matrix approximation problem. Numerical results are reported to demonstrate the performance of the approximate augmented Lagrangian method in terms of approximation accuracy, convergence speed, and computational time.
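The alternating projection scheme described in the abstract can be sketched as follows. This is a generic illustration, not the paper's APM or its augmented Lagrangian variant: the function names, the iteration count, and the test matrix are all hypothetical choices, and, as the abstract notes, convergence is only guaranteed when the input is close to a non-tangential point of the intersection.

```python
import numpy as np

def project_rank_r(X, r):
    """Nearest rank-r matrix in Frobenius norm, via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def project_nonneg(X):
    """Nearest nonnegative matrix: clip negative entries to zero."""
    return np.maximum(X, 0.0)

def alternating_projection(A, r, iters=500):
    """Alternate between the nonnegative and fixed-rank projections."""
    X = A.copy()
    for _ in range(iters):
        X = project_rank_r(project_nonneg(X), r)
    return X

rng = np.random.default_rng(0)
A = rng.random((20, 15))            # nonnegative test matrix
X = alternating_projection(A, r=2)  # rank-2, approximately nonnegative
print(X.shape, np.linalg.matrix_rank(X))
```

Ending the loop with the rank projection keeps the iterate exactly rank-r while its entries are only approximately nonnegative; swapping the order trades those roles.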


Notes

  1. Although APM has been compared with A-MU and A-HALS in [20], we still list the results obtained by NMF methods, since the stopping criteria we adopt differ from those used in [20]. Moreover, recovery-quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are reported for the face datasets. The results obtained by the NMF model can be seen as a reference for the performance of the NLR model.

  2. Data for UMist can be downloaded from https://cs.nyu.edu/~roweis/data.html.

  3. The data for YaleB can be downloaded from http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html; the data for ORL from http://www.uk.research.att.com/facedatabase.html; the data for CBCL from http://www.ai.mit.edu/projects/cbcl; and the data for UMist, Olivetti, and Frey from https://cs.nyu.edu/~roweis/data.html.
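As a reference for the recovery-quality metrics mentioned in note 1, a minimal PSNR computation (the standard definition, 10·log10(peak²/MSE); the default peak of 255 for 8-bit images is an assumption here) looks like:

```python
import numpy as np

def psnr(original, approx, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(original, float) - np.asarray(approx, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Usage: a uniform +1 error on an 8-bit image gives MSE = 1,
# so PSNR = 10*log10(255^2) ≈ 48.13 dB.
img = np.full((8, 8), 128.0)
print(round(psnr(img, img + 1.0), 2))  # 48.13
```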

References

  1. Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka–Łojasiewicz inequality. Math. Oper. Res. 35(2), 438–457 (2010)

  2. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press Inc, Cambridge (1981)

  3. Casalino, G., Buono, N.D., Mencar, C.: Nonnegative Matrix Factorizations for Intelligent Data Analysis. Springer, Berlin (2016)

  4. Cichocki, A., Phan, A.H.: Fast local algorithms for large scale nonnegative matrix and tensor factorizations. IEICE Trans. Fundam. 92(3), 708–721 (2009)

  5. Gillis, N.: The why and how of nonnegative matrix factorization. In: Regularization, Optimization, Kernels, and Support Vector Machines, pp. 257–291. Chapman and Hall/CRC (2014)

  6. Gillis, N., Glineur, F.: Accelerated multiplicative updates and hierarchical ALS algorithms for nonnegative matrix factorization. Neural Comput. 24(4), 1085–1105 (2012)

  7. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore (1996)

  8. Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4(5), 303–320 (1969)

  9. Hosseiniasl, E., Zurada, J.M.: Nonnegative matrix factorization for document clustering: a survey. In: International Conference on Artificial Intelligence and Soft Computing, pp. 726–737 (2014)

  10. Hsieh, C.J., Dhillon, I.S.: Fast coordinate descent methods with variable selection for non-negative matrix factorization. In: ACM Sigkdd International Conference on Knowledge Discovery and Data Mining (2011)

  11. Kim, J., Park, H.: Fast nonnegative matrix factorization: an active-set-like method and comparisons. SIAM J. Sci. Comput. 33(6), 3261–3281 (2011)

  12. Lee, D.D., Seung, H.S.: Unsupervised learning by convex and conic coding. Adv. Neural Inf. Process. Syst. 9, 515–521 (1997)

  13. Lee, D.D., Seung, H.S.: Learning the parts of objects by non-negative matrix factorization. Nature 401(6755), 788–791 (1999)

  14. Lee, D.D., Seung, H.S.: Algorithms for non-negative matrix factorization. Adv. Neural Inf. Process. Syst. 556–562 (2000)

  15. Lee, J.M.: Introduction to Smooth Manifolds. Springer, New York (2012)

  16. Lu, Z., Zhang, Y.: An augmented Lagrangian approach for sparse principal component analysis. Math. Program. 135(1), 149–193 (2012)

  17. Powell, M.J.D.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed.) Optimization, pp. 283–298. Academic Press, New York (1969)

  18. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976)

  19. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (2004)

  20. Song, G., Ng, M.K.: Nonnegative low rank matrix approximation for nonnegative matrices. Appl. Math. Lett. 105, 106300 (2020)

  21. Zhang, C., Jing, L., Xiu, N.: A new active set method for nonnegative matrix factorization. SIAM J. Sci. Comput. 36(6), A2633–A2653 (2014)

  22. Zhu, H., Zhang, X., Chu, D., Liao, L.: Nonconvex and nonsmooth optimization with generalized orthogonality constraints: an approximate augmented Lagrangian method. J. Sci. Comput. 72(1), 331–372 (2017)

Author information

Corresponding author

Correspondence to Michael K. Ng.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

H. Zhu’s research is supported in part by NSF of China Grants 11701227 and 11971149, NSF of Jiangsu Province under Project No. BK20170522, and NSF of Jiangsu University under Project No. 5501190009. M. Ng’s research is supported in part by HKRGC GRF 12300218, 12300519, 17201020, and 17300021. G.-J. Song’s research is supported by the Key NSF of Shandong Province Grant ZR2020KA008.


About this article

Cite this article

Zhu, H., Ng, M.K. & Song, GJ. An Approximate Augmented Lagrangian Method for Nonnegative Low-Rank Matrix Approximation. J Sci Comput 88, 45 (2021). https://doi.org/10.1007/s10915-021-01556-2

