Abstract
Matrix completion is usually formulated as a low-rank matrix approximation problem. Several methods have been proposed to solve this problem, e.g., truncated nuclear norm regularization (TNNR), which performs well in recovery accuracy and convergence speed, and the hybrid truncated norm regularization (HTNR) method, which is more stable than TNNR. In this paper, a modified hybrid truncated norm regularization method, named WHTNR, is proposed to accelerate the convergence of the HTNR method. By assigning appropriate weights to the first r singular values, the proposed WHTNR method preferentially restores rows of the matrix with fewer missing elements. The presented experiments provide empirical evidence of significant improvements of the proposed method over the four closest methods, in both convergence speed and accuracy, and show that it is robust to the number r of truncated singular values.
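The core weighting idea can be illustrated with a minimal sketch. This is an illustrative weighted singular value shrinkage step, not the paper's exact WHTNR update; the function name `weighted_svt`, the identity weights, and the toy data are assumptions made for demonstration only.

```python
import numpy as np

def weighted_svt(M, r, weights, tau):
    """One weighted shrinkage step: scale the first r singular values by the
    given weights and soft-threshold the remaining ones by tau. Illustrative
    sketch only, not the paper's exact WHTNR update."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_new = s.copy()
    s_new[:r] = weights * s[:r]                # weight the leading r singular values
    s_new[r:] = np.maximum(s[r:] - tau, 0.0)   # soft-threshold the tail
    return U @ np.diag(s_new) @ Vt

# Toy usage: complete a rank-2 matrix observed on ~60% of its entries.
rng = np.random.default_rng(0)
X_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
mask = rng.random((20, 20)) < 0.6
X = np.where(mask, X_true, 0.0)                # missing entries start at zero
for _ in range(200):
    # Identity weights here; in WHTNR the weights would be chosen to favor
    # rows with fewer missing elements.
    X = weighted_svt(X, r=2, weights=np.ones(2), tau=0.1)
    X[mask] = X_true[mask]                     # enforce the observed entries
```

With identity weights this reduces to a plain truncated shrinkage iteration, which already recovers the toy matrix well; nontrivial weights change which singular directions are preserved most faithfully.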
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Ma, T.H., Lou, Y., Huang, T.Z.: Truncated l1 − 2 models for sparse recovery and rank minimization. SIAM J. Imaging Sci. 10(3), 1346–1380 (2017)
Zhao, X.L., Wang, F., Huang, T.Z., et al.: Deblurring and sparse unmixing for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 51(7), 4045–4058 (2013)
Zhao, X.L., Xu, W.H., Jiang, T.X., et al.: Deep plug-and-play prior for low-rank tensor completion. Neurocomputing 400, 137–149 (2020)
Zhao, X.L., Zhang, H., Jiang, T.X., et al.: Fast algorithm with theoretical guarantees for constrained low-tubal-rank tensor recovery in hyperspectral images denoising. Neurocomputing 413, 397–409 (2020)
Jannach, D., Resnick, P., Tuzhilin, A., et al.: Recommender systems—beyond matrix completion. Commun. ACM 59(11), 94–102 (2016)
Ramlatchan, A., Yang, M., Liu, Q., et al.: A survey of matrix completion methods for recommendation systems. Big Data Mining and Analytics 1(4), 308–323 (2018)
Wang, W., Chen, J., Wang, J., et al.: Geography-aware inductive matrix completion for personalized Point-of-Interest recommendation in smart cities. IEEE Internet Things J. 7(5), 4361–4370 (2019)
Candès, E.J., Plan, Y.: Matrix completion with noise. Proc. IEEE 98(6), 925–936 (2010)
Zou, C., Hu, Y., Cai, D., et al.: Salient object detection via fast iterative truncated nuclear norm recovery. In: International Conference on Intelligent Science and Big Data Engineering, pp 238–245. Springer, Berlin (2013)
Wright, J., Ganesh, A., Rao, S., et al.: Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. Advances in Neural Information Processing Systems 22 (2009)
Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)
Fazel, M.: Matrix Rank Minimization with Applications. PhD thesis, Stanford University (2002)
Lin, Z., Chen, M., Ma, Y.: The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv:1009.5055 (2010)
Toh, K.C., Yun, S.: An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pacific Journal of Optimization 6(3), 615–640 (2010)
Cai, J.F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)
Zhang, D., Hu, Y., Ye, J., et al.: Matrix completion by truncated nuclear norm regularization. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp 2192–2199. IEEE (2012)
Hu, Y., Zhang, D., Ye, J., et al.: Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. Intell. 35(9), 2117–2130 (2013)
Liu, Q., Lai, Z., Zhou, Z., et al.: A truncated nuclear norm regularization method based on weighted residual error for matrix completion. IEEE Trans. Image Process. 25(1), 316–330 (2015)
Xue, S., Qiu, W., Liu, F., et al.: Double weighted truncated nuclear norm regularization for low-rank matrix completion. arXiv:1901.01711 (2019)
Yang, L., Kou, K.I., Miao, J.: Weighted truncated nuclear norm regularization for low-rank quaternion matrix completion. J. Vis. Commun. Image Represent. 81, 103335 (2021)
Ye, H., Li, H., Cao, F., et al.: A hybrid truncated norm regularization method for matrix completion. IEEE Trans. Image Process. 28(10), 5171–5186 (2019)
Mirsky, L.: A trace inequality of John von Neumann. Monatshefte für Mathematik 79(4), 303–306 (1975)
Rockafellar, R.T.: Convex Analysis. Princeton University Press (1996)
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Guanghui Cheng contributed equally to this work.
Appendices
Appendix A: Proof of Lemma 1
First, Lemma 1 (1) is discussed. Since \(\left ({X}_{*}, {H}_{*},{Y}_{*}\right )\) is the optimal solution to the Lagrangian function (19), we have
Clearly, X∗ = H∗, so the left-hand side equals p∗. Since \(p_{k+1}=\text {Tr}\left (A WX_{k+1}B{ }^{T}\right )-\text {Tr}\left (C H_{k+1}D{}^{T}\right )+\lambda \left (\|{X}_{k+1}\|_{F}^{2}-\|{C} {X}_{k+1}\|_{F}^{2}\right )\), the preceding inequality can be rewritten as \(p_{*}{\leq } p_{k+1}+ {\langle }\frac {\mu }{2}R_{k+1}+Y_{*},R_{k+1}{\rangle }\); that is, the inequality (34) holds.
Thus, Lemma 1 (1) is proved. Next, Lemma 1 (2) is discussed.
By definition, Xk+ 1 minimizes L(X,Hk,Yk). Since L(X,Hk,Yk) is closed, convex, and subdifferentiable in X, the property of the subdifferential [23] gives the following optimality condition:
Since Yk+ 1 = Yk + μRk+ 1, we have Yk = Yk+ 1 − μRk+ 1, and (35) can be rewritten as follows:
Note that Xk+ 1 minimizes
In the same way, Hk+ 1 minimizes L(Xk+ 1,H,Yk) by definition. Since L(Xk+ 1,H,Yk) is closed, convex, and subdifferentiable in H, the property of the subdifferential [23] gives the following optimality condition:
Since Yk+ 1 = Yk + μRk+ 1, we have Yk = Yk+ 1 − μRk+ 1, and (39) can be rewritten as follows:
Note that Hk+ 1 minimizes
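The two optimality-condition steps above share one generic pattern, sketched below in generic ADMM notation; this compact form is an assumption for orientation and does not reproduce the paper's numbered equations (35)–(40).

```latex
% Generic pattern behind the two optimality-condition steps: each primal
% update is an exact minimization, so the subdifferential optimality
% condition holds at the iterate,
\[
  0 \in \partial_X L(X_{k+1}, H_k, Y_k),
  \qquad
  0 \in \partial_H L(X_{k+1}, H_{k+1}, Y_k).
\]
% Substituting the dual relation Y_k = Y_{k+1} - \mu R_{k+1} into each
% condition re-expresses X_{k+1} and H_{k+1} as minimizers of Lagrangians
% evaluated at the updated multiplier Y_{k+1}, which yields the two
% inequalities against the optimal pair (X_*, H_*).
```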
Hence, for the optimal solution (X∗,H∗), we have
and
With (42), (43), and X∗ = H∗, we have
Thus, Lemma 1(2) is also proved.
Appendix B: Proof of Lemma 2
Adding the two inequalities in Lemma 1 and multiplying both sides by 2, we have
The inequality (45) can be rewritten as follows:
Since Yk+ 1 = Yk + μRk+ 1, we have
Since \({Y}_{k+1}-{Y}_{k}=\left ({Y}_{k+1}-{Y}_{*}\right )-\left ({Y}_{k}-{Y}_{*}\right )\), we have
The inequality (47) can be rewritten as follows:
Since \({H}_{k}-{H}_{k+1}=\left ({H}_{k}-{H}_{*}\right )-\left ({H}_{k+1}-{H}_{*}\right )\), we have
The inequality (47) can be rewritten as follows:
In other words,
Hence, Vk decreases. Clearly, in order to prove (30), it suffices to verify that − 2〈Rk+ 1,Hk − Hk+ 1〉≥ 0.
In fact, recalling that Hk+ 1 minimizes \(-\text {Tr}\left (C H D^{T}\right )-{\langle }Y_{k+1}, H{\rangle }\) and Hk minimizes \(-\text {Tr}\left (C H D^{T}\right )-{\langle }Y_{k}, H{\rangle }\), we have
and
Adding the two inequalities (53) and (54), we have
With Yk+ 1 − Yk = μRk+ 1, the inequality (55) can be rewritten as − 2μ〈Rk+ 1,Hk − Hk+ 1〉≥ 0; since μ > 0, this gives − 2〈Rk+ 1,Hk − Hk+ 1〉≥ 0.
Thus, Lemma 2 is proved.
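The quantity Vk is defined in the main text and not reproduced here; for orientation, in standard ADMM analyses a Lyapunov quantity of the following form is used, consistent with the differences Yk − Y∗ and Hk − H∗ manipulated above. This generic form is an assumption, not the paper's verbatim definition.

```latex
% Standard ADMM Lyapunov function (assumed generic form): the squared
% Frobenius distance of the dual and primal iterates to a saddle point
% (X_*, H_*, Y_*),
\[
  V_k \;=\; \frac{1}{\mu}\,\|Y_k - Y_*\|_F^2 \;+\; \mu\,\|H_k - H_*\|_F^2 .
\]
% Lemma 2 then states V_k - V_{k+1} \ge \mu(\|R_{k+1}\|_F^2 +
% \|H_{k+1}-H_k\|_F^2), so V_k is nonincreasing and the residual terms
% are summable.
```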
Appendix C: Proof of Theorem 1
By Lemma 2, Vk decreases at each iteration and \(V_{k}-V_{k+1} \geq \mu \left (\left \|R_{k+1}\right \|_{F}^{2}+\|H_{k+1}-H_{k} \|_{F}^{2}\right )\). Summing this inequality over all iterations and rearranging, we have
Since the summed series is bounded, its terms must vanish; that is, \(\mu \left (\left \|R_{k+1}\right \|_{F}^{2}+\|H_{k+1}-H_{k} \|_{F}^{2}\right )\rightarrow 0\) as \(k \rightarrow \infty \). Hence, \(R_{k+1} \rightarrow 0\) and \(H_{k+1}-H_{k}\rightarrow 0\) as \(k \rightarrow \infty \). In view of Lemma 1, we have
and
That is to say, \(p_{k} \rightarrow p_{*}\) as \(k \rightarrow \infty \).
Thus, Theorem 1 is proved.
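The vanishing-residual conclusion can be observed numerically on a toy instance. The sketch below runs plain ADMM on an ordinary nuclear-norm completion model rather than the paper's weighted hybrid objective; the function `svt`, the penalty `mu`, and the synthetic data are assumptions made purely to watch \(R_{k+1} \rightarrow 0\) and \(H_{k+1}-H_k \rightarrow 0\).

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy data: rank-2 ground truth, ~60% of entries observed.
rng = np.random.default_rng(1)
M = rng.standard_normal((15, 2)) @ rng.standard_normal((2, 15))
mask = rng.random((15, 15)) < 0.6

mu = 1.0
X = np.zeros_like(M); H = np.zeros_like(M); Y = np.zeros_like(M)
primal_res, dual_gap = [], []
for _ in range(300):
    X = svt(H - Y / mu, 1.0 / mu)   # X-update: prox of the nuclear norm
    H_old = H
    H = X + Y / mu
    H[mask] = M[mask]               # H-update: project onto the data constraint
    R = X - H                       # primal residual R_{k+1} = X_{k+1} - H_{k+1}
    Y = Y + mu * R                  # dual (multiplier) update
    primal_res.append(np.linalg.norm(R))
    dual_gap.append(np.linalg.norm(H - H_old))
```

On this toy problem both monitored sequences shrink by orders of magnitude over the run, mirroring the behaviour the proof establishes for the paper's iteration.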
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Wan, X., Cheng, G. Weighted hybrid truncated norm regularization method for low-rank matrix completion. Numer Algor 94, 619–641 (2023). https://doi.org/10.1007/s11075-023-01513-0
Received:
Accepted:
Published:
Issue Date: