Weighted hybrid truncated norm regularization method for low-rank matrix completion

Abstract

Matrix completion is usually formulated as a low-rank matrix approximation problem. Several methods have been proposed to solve this problem, e.g., truncated nuclear norm regularization (TNNR), which performs well in recovery accuracy and convergence speed, and the hybrid truncated norm regularization (HTNR) method, which has better stability than TNNR. In this paper, a modified hybrid truncated norm regularization method, named WHTNR, is proposed to accelerate the convergence of the HTNR method. The proposed WHTNR method can preferentially restore rows with fewer missing elements in the matrix by assigning appropriate weights to the first r singular values. The presented experiments provide empirical evidence of significant improvements of the proposed method over the four closest methods, in both convergence speed and accuracy, and show that it is robust to the number r of truncated singular values.
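
To fix ideas, below is a minimal numerical sketch of a weighted truncated norm of the kind the abstract describes, assuming it takes the form of the nuclear norm minus a weighted sum of the first r singular values (the exact WHTNR objective is given in the paper; the weights w and cutoff r here are illustrative choices, not the authors'):

```python
import numpy as np

def weighted_truncated_norm(X, r, w):
    """Nuclear norm of X minus a weighted sum of its first r singular values.

    For w = (1, ..., 1) this reduces to the truncated nuclear norm
    ||X||_* - sum_{i<=r} sigma_i(X) used by TNNR-type methods [17].
    """
    sigma = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return sigma.sum() - np.dot(w, sigma[:r])

# Illustrative usage on a random low-rank matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
r = 5
w = np.linspace(1.0, 0.5, r)  # heavier weights on the leading singular values
print(weighted_truncated_norm(X, r, w))
```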

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

  1. Ma, T.H., Lou, Y., Huang, T.Z.: Truncated ℓ1−2 models for sparse recovery and rank minimization. SIAM J. Imaging Sci. 10(3), 1346–1380 (2017)

  2. Zhao, X.L., Wang, F., Huang, T.Z., et al.: Deblurring and sparse unmixing for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 51(7), 4045–4058 (2013)

  3. Zhao, X.L., Xu, W.H., Jiang, T.X., et al.: Deep plug-and-play prior for low-rank tensor completion. Neurocomputing 400, 137–149 (2020)

  4. Zhao, X.L., Zhang, H., Jiang, T.X., et al.: Fast algorithm with theoretical guarantees for constrained low-tubal-rank tensor recovery in hyperspectral images denoising. Neurocomputing 413, 397–409 (2020)

  5. Jannach, D., Resnick, P., Tuzhilin, A., et al.: Recommender systems—beyond matrix completion. Commun. ACM 59(11), 94–102 (2016)

  6. Ramlatchan, A., Yang, M., Liu, Q., et al.: A survey of matrix completion methods for recommendation systems. Big Data Mining and Analytics 1(4), 308–323 (2018)

  7. Wang, W., Chen, J., Wang, J., et al.: Geography-aware inductive matrix completion for personalized Point-of-Interest recommendation in smart cities. IEEE Internet Things J. 7(5), 4361–4370 (2019)

  8. Candes, E.J., Plan, Y.: Matrix completion with noise. Proc. IEEE 98(6), 925–936 (2010)

  9. Zou, C., Hu, Y., Cai, D., et al.: Salient object detection via fast iterative truncated nuclear norm recovery. In: International Conference on Intelligent Science and Big Data Engineering, pp 238–245. Springer, Berlin (2013)

  10. Wright, J., Ganesh, A., Rao, S., et al.: Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. Advances in Neural Information Processing Systems 22 (2009)

  11. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)

  12. Fazel, M.: Matrix Rank Minimization with Applications. PhD thesis, Stanford University (2002)

  13. Lin, Z., Chen, M., Ma, Y.: The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv:1009.5055 (2010)

  14. Toh, K.C., Yun, S.: An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pacific Journal of Optimization 6(3), 615–640 (2010)

  15. Cai, J.F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)

  16. Zhang, D., Hu, Y., Ye, J., et al.: Matrix completion by truncated nuclear norm regularization. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp 2192–2199. IEEE (2012)

  17. Hu, Y., Zhang, D., Ye, J., et al.: Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. Intell. 35(9), 2117–2130 (2013)

  18. Liu, Q., Lai, Z., Zhou, Z., et al.: A truncated nuclear norm regularization method based on weighted residual error for matrix completion. IEEE Trans. Image Process. 25(1), 316–330 (2015)

  19. Xue, S., Qiu, W., Liu, F., et al.: Double weighted truncated nuclear norm regularization for low-rank matrix completion. arXiv:1901.01711 (2019)

  20. Yang, L., Kou, K.I., Miao, J.: Weighted truncated nuclear norm regularization for low-rank quaternion matrix completion. J. Vis. Commun. Image Represent. 81, 103335 (2021)

  21. Ye, H., Li, H., Cao, F., et al.: A hybrid truncated norm regularization method for matrix completion. IEEE Trans. Image Process. 28(10), 5171–5186 (2019)

  22. Mirsky, L.: A trace inequality of John von Neumann. Monatshefte für Mathematik 79(4), 303–306 (1975)

  23. Rockafellar, R.T.: Convex Analysis. Princeton University Press (1996)

Author information

Corresponding author

Correspondence to Guanghui Cheng.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Guanghui Cheng contributed equally to this work.

Appendices

Appendix A: Proof of Lemma 1

First, Lemma 1(1) is discussed. Since \(\left({X}_{*}, {H}_{*},{Y}_{*}\right)\) is the optimal solution to the Lagrange function (19), we have

$$ \begin{array}{@{}rcl@{}} L(X_{*}, H_{*}, Y_{*}) \leq L(X_{k+1}, H_{k+1}, Y_{*}). \end{array} $$
(34)

Clearly, \(X_{*}=H_{*}\), so the left-hand side equals \(p_{*}\). With \(p_{k+1}=\text{Tr}\left(AWX_{k+1}B^{T}\right)-\text{Tr}\left(CH_{k+1}D^{T}\right)+\lambda\left(\|{X}_{k+1}\|_{F}^{2}-\|{C}{X}_{k+1}\|_{F}^{2}\right)\), (34) can be rewritten as \(p_{*}{\leq} p_{k+1}+ {\langle}\frac{\mu}{2}R_{k+1}+Y_{*},R_{k+1}{\rangle}\), so the first inequality of Lemma 1 holds.
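
For completeness, the rewriting step can be spelled out. Assuming the augmented Lagrangian (19) has the standard form \(L(X,H,Y)=p(X,H)+{\langle}Y, X-H{\rangle}+\frac{\mu}{2}\|X-H\|_{F}^{2}\) (the excerpt does not restate it), the right-hand side of (34) expands as

$$ \begin{array}{@{}rcl@{}} L(X_{k+1}, H_{k+1}, Y_{*})&=&p_{k+1}+{\langle}Y_{*}, R_{k+1}{\rangle}+\frac{\mu}{2}\|R_{k+1}\|_{F}^{2}\\ &=&p_{k+1}+{\langle}\frac{\mu}{2}R_{k+1}+Y_{*},R_{k+1}{\rangle}, \end{array} $$

with \(R_{k+1}=X_{k+1}-H_{k+1}\), while the constraint terms on the left-hand side vanish because \(X_{*}=H_{*}\).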

Thus, Lemma 1(1) is proved. Next, Lemma 1(2) is discussed.

Clearly, \(X_{k+1}\) minimizes \(L(X, H_{k}, Y_{k})\) by definition. Since \(L(X, H_{k}, Y_{k})\) is closed, convex, and subdifferentiable in \(X\), the property of the subdifferential [23] yields the following optimality condition:

$$ \begin{array}{@{}rcl@{}} \begin{aligned} 0 & \in \partial L(X_{k+1}, H_{k}, Y_{k}) \\ & = \partial\left[\text{Tr}\left( A WX_{k+1} B^{T}\right)-\text{Tr}\left( C H_{k} D^{T}\right)+\lambda\left( \|{X}_{k+1}\|_{F}^{2}-\|{C} {X}_{k+1}\|_{F}^{2}\right)\right]\\ & +{\mu}(X_{k+1}-H_{k})+Y_{k}. \end{aligned} \end{array} $$
(35)

Since \(Y_{k+1}=Y_{k}+\mu R_{k+1}\), we have \(Y_{k}=Y_{k+1}-\mu R_{k+1}\), and (35) can be rewritten as follows

$$ \begin{array}{@{}rcl@{}} 0\in\partial\left[\text{Tr}\left( A WX_{k+1} B^{T}\right)+\lambda\left( \|{X}_{k+1}\|_{F}^{2}-\|{C} {X}_{k+1}\|_{F}^{2}\right)\right]+{\mu}(H_{k+1}-H_{k})+Y_{k+1}. \end{array} $$
(36–37)

It follows that \(X_{k+1}\) minimizes

$$ \begin{array}{@{}rcl@{}} \text{Tr}\left( A WX B^{T}\right)+\lambda\left( \|{X}\|_{F}^{2}-\left\|{C} {X}\right\|_{F}^{2}\right)+{\langle}{\mu}(H_{k+1}-H_{k})+Y_{k+1}, X{\rangle}. \end{array} $$
(38)

In the same way, \(H_{k+1}\) minimizes \(L(X_{k+1}, H, Y_{k})\) by definition. Since \(L(X_{k+1}, H, Y_{k})\) is closed, convex, and subdifferentiable in \(H\), the property of the subdifferential [23] yields the following optimality condition:

$$ \begin{array}{@{}rcl@{}} \begin{aligned} 0 & \in \partial L(X_{k+1}, H_{k+1}, Y_{k}) \\ & = \partial\left[-\text{Tr}\left( C H_{k+1} D^{T}\right)\right]-{\mu}(X_{k+1}-H_{k+1})-Y_{k}. \end{aligned} \end{array} $$
(39)

Since \(Y_{k+1}=Y_{k}+\mu R_{k+1}\), we have \(Y_{k}=Y_{k+1}-\mu R_{k+1}\), and (39) can be rewritten as follows

$$ \begin{array}{@{}rcl@{}} 0\in\partial\left[-\text{Tr}\left( C H_{k+1} D^{T}\right)\right]-Y_{k+1}. \end{array} $$
(40)

It follows that \(H_{k+1}\) minimizes

$$ \begin{array}{@{}rcl@{}} -\text{Tr}\left( C H D^{T}\right)-{\langle}Y_{k+1}, H{\rangle}. \end{array} $$
(41)

Hence, for the optimal solution \((X_{*}, H_{*})\), we have

$$ \begin{array}{@{}rcl@{}} \begin{aligned} &\text{Tr}\left( A WX_{k+1} B^{T}\right)+\lambda\left( \|{X}_{k+1}\|_{F}^{2}-\|{C} {X}_{k+1}\|_{F}^{2}\right)+{\langle}{\mu}(H_{k+1}-H_{k})+Y_{k+1}, X_{k+1}{\rangle}\\ &\leq \text{Tr}\left( A WX_{*}B^{T}\right)+\lambda\left( \|{X}_{*}\|_{F}^{2}-\left\|{C} {X}_{*}\right\|_{F}^{2}\right)+{\langle}{\mu}(H_{k+1}-H_{k})+Y_{k+1}, X_{*}{\rangle}. \end{aligned} \end{array} $$
(42)

and

$$ \begin{array}{@{}rcl@{}} \begin{aligned} -\text{Tr}\left( C H_{k+1} D^{T}\right)-{\langle}Y_{k+1}, H_{k+1}{\rangle}\leq -\text{Tr}\left( C H_{*} D^{T}\right)-{\langle}Y_{k+1}, H_{*}{\rangle}. \end{aligned} \end{array} $$
(43)

With (42), (43), and \(X_{*}=H_{*}\), we have

$$ \begin{array}{@{}rcl@{}} \begin{aligned} p_{k+1}-p_{*} &\leq-{\langle}{\mu}(H_{k+1}-H_{k})+Y_{k+1}, X_{k+1}-X_{*}{\rangle}+{\langle}Y_{k+1}, H_{k+1}-H_{*}{\rangle}\\ &=-{\langle}{\mu}(H_{k+1}-H_{k})+Y_{k+1}, H_{k+1}-H_{*}+R_{k+1}{\rangle}\\ &+{\langle}Y_{k+1}, H_{k+1}-H_{*}{\rangle}\\ &={\langle}-Y_{k+1}, R_{k+1}{\rangle}-{\langle}{\mu}(H_{k+1}-H_{k}), H_{k+1}-H_{*}+R_{k+1}{\rangle}. \end{aligned} \end{array} $$
(44)

Thus, Lemma 1(2) is also proved.

Appendix B: Proof of Lemma 2

Adding the two inequalities in Lemma 1, we have

$$ \begin{array}{@{}rcl@{}} \begin{aligned} {\langle}\frac{\mu}{2}R_{k+1}+Y_{*},R_{k+1}{\rangle} + {\langle}-Y_{k+1}, R_{k+1}{\rangle}\\ -{\langle}{\mu}(H_{k+1}-H_{k}), H_{k+1}-H_{*}+R_{k+1}{\rangle}\geq0. \end{aligned} \end{array} $$
(45)

Multiplying both sides by 2 and rearranging, the inequality (45) can be rewritten as follows

$$ \begin{array}{@{}rcl@{}} \begin{aligned} &2{\langle}{Y}_{k+1}-{Y}_{*}, {R}_{k+1}{\rangle}-\mu\left\|{R}_{k+1}\right\|_{F}^{2}-2 \mu{\langle}H_{k+1}-H_{k}, {R}_{k+1}{\rangle}\\ &+2\mu{\langle}H_{k+1}-H_{k}, H_{k+1}-H_{*}{\rangle} \leq 0. \end{aligned} \end{array} $$
(46)

Since \(Y_{k+1}=Y_{k}+\mu R_{k+1}\), we have

$$ \begin{array}{@{}rcl@{}} \begin{aligned} & 2\left\langle{Y}_{k+1}-{Y}_{*}, {R}_{k+1}\right\rangle \\ =& 2\left\langle{Y}_{k}+\mu {R}_{k+1}-{Y}_{*}, {R}_{k+1}\right\rangle \\ =& 2\left\langle{Y}_{k}-{Y}_{*}, {R}_{k+1}\right\rangle+\mu\left\|{R}_{k+1}\right\|_{F}^{2}+\mu\left\|{R}_{k+1}\right\|_{F}^{2} \\ =& \frac{2}{\mu}\left\langle{Y}_{k}-{Y}_{*}, {Y}_{k+1}-{Y}_{k}\right\rangle+\frac{1}{\mu}\left\|{Y}_{k+1}-{Y}_{k}\right\|_{F}^{2}+\mu\left\|{R}_{k+1}\right\|_{F}^{2}. \end{aligned} \end{array} $$
(47)

Since \({Y}_{k+1}-{Y}_{k}=\left ({Y}_{k+1}-{Y}_{*}\right )-\left ({Y}_{k}-{Y}_{*}\right )\), we have

$$ \begin{array}{@{}rcl@{}} 2\langle{Y}_{k+1}-{Y}_{k},{Y}_{k}-{Y}_{*} \rangle=\|{Y}_{k+1} -{Y}_{*}\|_{F}^{2}-\|{Y}_{k+1}-{Y}_{k}\|_{F}^{2}-\|{Y}_{k}-{Y}_{*}\|_{F}^{2}. \end{array} $$
(48)
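
This is the usual expansion of a squared norm: writing \({Y}_{k+1}-{Y}_{*}=({Y}_{k+1}-{Y}_{k})+({Y}_{k}-{Y}_{*})\) gives

$$ \begin{array}{@{}rcl@{}} \|{Y}_{k+1}-{Y}_{*}\|_{F}^{2}=\|{Y}_{k+1}-{Y}_{k}\|_{F}^{2}+2\langle{Y}_{k+1}-{Y}_{k},{Y}_{k}-{Y}_{*}\rangle+\|{Y}_{k}-{Y}_{*}\|_{F}^{2}, \end{array} $$

and rearranging yields (48).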

Substituting (48) into (47) yields

$$ \begin{array}{@{}rcl@{}} 2\left\langle{Y}_{k+1}-{Y}_{*}, {R}_{k+1}\right\rangle=\frac{1}{\mu}\left( \left\|{Y}_{k+1} -{Y}_{*}\right\|_{F}^{2}-\left\|{Y}_{k}-{Y}_{*}\right\|_{F}^{2}\right) +\mu\left\|{R}_{k+1}\right\|_{F}^{2}. \end{array} $$
(49)

Since \({H}_{k}-{H}_{k+1}=\left ({H}_{k}-{H}_{*}\right )-\left ({H}_{k+1}-{H}_{*}\right )\), we have

$$ \begin{array}{@{}rcl@{}} 2\langle{H}_{k}-{H}_{k+1},{H}_{k+1}-{H}_{*} \rangle=\|{H}_{k} -{H}_{*}\|_{F}^{2}-\|{H}_{k}-{H}_{k+1}\|_{F}^{2}-\|{H}_{k+1}-{H}_{*}\|_{F}^{2}. \end{array} $$
(50)

Substituting (49) and (50) into (46) yields

$$ \begin{array}{@{}rcl@{}} \begin{aligned} &\frac{1}{\mu}\left( \left\|{Y}_{k+1} -{Y}_{*}\right\|_{F}^{2}-\left\|{Y}_{k}-{Y}_{*}\right\|_{F}^{2}\right) +{\mu}\left( \|{H}_{k+1}-{H}_{*}\|_{F}^{2} -\|{H}_{k} -{H}_{*}\|_{F}^{2}\right)\\ &+\mu\|{H}_{k}-{H}_{k+1}\|_{F}^{2}+2 \mu{\langle}H_{k}-H_{k+1}, {R}_{k+1}{\rangle}\\ &=V_{k+1}-V_{k} + \mu\|{H}_{k}-{H}_{k+1}+{R}_{k+1}\|_{F}^{2}\\ &\leq0. \end{aligned} \end{array} $$
(51)

In other words,

$$ \begin{array}{@{}rcl@{}} V_{k}-V_{k+1} \geq \mu\|{H}_{k}-{H}_{k+1}+{R}_{k+1}\|_{F}^{2} \geq 0. \end{array} $$
(52)

Hence, \(V_{k}\) is nonincreasing. Clearly, in order to prove (30), it only needs to be verified that \(-2{\langle}R_{k+1},H_{k+1}-H_{k}{\rangle}\geq 0\).

In fact, recalling that Hk+ 1 minimizes \(-\text {Tr}\left (C H D^{T}\right )-{\langle }Y_{k+1}, H{\rangle }\) and Hk minimizes \(-\text {Tr}\left (C H D^{T}\right )-{\langle }Y_{k}, H{\rangle }\), we have

$$ \begin{array}{@{}rcl@{}} -\text{Tr}\left( C H_{k+1} D^{T}\right)-{\langle}Y_{k+1}, H_{k+1}{\rangle} \leq -\text{Tr}\left( C H_{k} D^{T}\right)-{\langle}Y_{k+1}, H_{k}{\rangle}. \end{array} $$
(53)

and

$$ \begin{array}{@{}rcl@{}} -\text{Tr}\left( C H_{k} D^{T}\right)-{\langle}Y_{k}, H_{k}{\rangle} \leq -\text{Tr}\left( C H_{k+1} D^{T}\right)-{\langle}Y_{k}, H_{k+1}{\rangle}. \end{array} $$
(54)

Adding the two inequalities (53) and (54), we have

$$ \begin{array}{@{}rcl@{}} {\langle}Y_{k+1}-Y_{k}, H_{k+1}-H_{k}{\rangle} \leq 0. \end{array} $$
(55)

With \(Y_{k+1}-Y_{k}=\mu R_{k+1}\) and \(\mu>0\), the inequality (55) can be rewritten as \(-2\mu{\langle}R_{k+1},H_{k+1}-H_{k}{\rangle}\geq 0\).

Thus, Lemma 2 is proved.
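
As a numerical illustration of this monotonicity, the sketch below runs ADMM on a simple convex stand-in problem (not the WHTNR model) whose primal and dual optima are available in closed form, and tracks \(V_{k}\) in the form suggested by (51), \(V_{k}=\frac{1}{\mu}\|Y_{k}-Y_{*}\|_{F}^{2}+\mu\|H_{k}-H_{*}\|_{F}^{2}\); all problem data are illustrative:

```python
import numpy as np

# A toy check of Lemma 2's monotonicity on a convex stand-in problem
# (NOT the WHTNR model):
#     min_{X,H} 0.5*||X - P||_F^2 + 0.5*||H - Q||_F^2   s.t.  X = H.
# The KKT conditions give the closed-form optimum X_* = H_* = (P + Q)/2
# and dual optimum Y_* = (P - Q)/2, so V_k can be tracked exactly.

rng = np.random.default_rng(1)
m, n, mu = 20, 15, 1.0
P = rng.standard_normal((m, n))
Q = rng.standard_normal((m, n))
H_star = (P + Q) / 2
Y_star = (P - Q) / 2

X = np.zeros((m, n))
H = np.zeros((m, n))
Y = np.zeros((m, n))
V_prev = np.inf
for k in range(100):
    X = (P - Y + mu * H) / (1 + mu)  # minimizes the augmented Lagrangian in X
    H = (Q + Y + mu * X) / (1 + mu)  # minimizes the augmented Lagrangian in H
    Y = Y + mu * (X - H)             # dual update: Y_{k+1} = Y_k + mu * R_{k+1}
    V = np.linalg.norm(Y - Y_star) ** 2 / mu + mu * np.linalg.norm(H - H_star) ** 2
    assert V <= V_prev + 1e-9, "V_k increased"  # Lemma 2: V_k is nonincreasing
    V_prev = V
print("final constraint residual ||X - H||_F:", np.linalg.norm(X - H))
```

With exact subproblem solutions, the assertion never fires, matching (52).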

Appendix C: Proof of Theorem 1

By Lemma 2, \(V_{k}\) decreases in each iteration and \(V_{k}-V_{k+1} \geq \mu \left(\left \|R_{k+1}\right \|_{F}^{2}+\|H_{k+1}-H_{k} \|_{F}^{2}\right)\). Summing these inequalities over all \(k\) and rearranging, we have

$$ \begin{array}{@{}rcl@{}} {\sum}_{k=1}^{\infty}\left( \mu\left( \left\|R_{k+1}\right\|_{F}^{2}+\|H_{k+1}-H_{k} \|_{F}^{2}\right)\right) \leq {\sum}_{k=1}^{\infty}\left( V_{k}-V_{k+1}\right) \leq V_{1}<\infty. \end{array} $$
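
The last bound is the telescoping sum made explicit: for every finite \(K\),

$$ \begin{array}{@{}rcl@{}} {\sum}_{k=1}^{K}\left( V_{k}-V_{k+1}\right)=V_{1}-V_{K+1} \leq V_{1}, \end{array} $$

since \(V_{K+1}\geq 0\); letting \(K \rightarrow \infty\) gives the stated bound.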

Note that \(\mu \left(\left \|R_{k+1}\right \|_{F}^{2}+\|H_{k+1}-H_{k} \|_{F}^{2}\right)\rightarrow 0\) as \(k \rightarrow \infty \). Hence, if \(k \rightarrow \infty \), then \(R_{k+1} \rightarrow 0\) and \(H_{k+1}-H_{k}\rightarrow 0\). In view of Lemma 1, we have

$$ \begin{array}{@{}rcl@{}} p_{*}-p_{k+1} \leq{\langle}\frac{\mu}{2}R_{k+1}+Y_{*},R_{k+1}{\rangle} \rightarrow 0 \quad (k \rightarrow \infty). \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} p_{k+1}-p_{*} &\leq& {\langle}-Y_{k+1}, R_{k+1}{\rangle}-{\langle}{\mu}(H_{k+1}-H_{k}), H_{k+1}-H_{*}+R_{k+1}{\rangle}\\ &\rightarrow& 0 \quad (k \rightarrow \infty). \end{array} $$

That is to say, \(p_{k} \rightarrow p_{*}\) as \(k \rightarrow \infty \).

Thus, Theorem 1 is proved.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wan, X., Cheng, G. Weighted hybrid truncated norm regularization method for low-rank matrix completion. Numer Algor 94, 619–641 (2023). https://doi.org/10.1007/s11075-023-01513-0
