Low rank matrix completion using truncated nuclear norm and sparse regularizer

https://doi.org/10.1016/j.image.2018.06.007

Highlights

  • This paper proposes a novel matrix completion algorithm that simultaneously employs a low-rank prior based on the truncated nuclear norm and a sparse prior.

  • To address the resulting optimization problem, a method alternating between two steps is developed, and the problem involved in the second step is converted to several subproblems with closed-form solutions.

  • Experimental results demonstrate the effectiveness of the proposed algorithm and its better performance as compared with the state-of-the-art matrix completion algorithms.

Abstract

Matrix completion is a challenging problem with a range of real applications. Many existing methods are based on a low-rank prior on the underlying matrix. However, this prior may not be sufficient to recover the original matrix from its incomplete observations. In this paper, we propose a novel matrix completion algorithm by employing the low-rank prior and a sparse prior simultaneously. Specifically, the matrix completion task is formulated as a rank minimization problem with a sparse regularizer. The low-rank property is modeled by the truncated nuclear norm to approximate the rank of the matrix, and the sparse regularizer is formulated as an ℓ1-norm term based on a given transform operator. To address the resulting optimization problem, a method alternating between two steps is developed, and the problem involved in the second step is converted to several subproblems with closed-form solutions. Experimental results show the effectiveness of the proposed algorithm and its better performance as compared with the state-of-the-art matrix completion algorithms.

Introduction

Matrix completion, which arises widely in many fields, has attracted a great deal of attention in recent years. Many problems in signal processing, computer vision, and machine learning can be formulated as matrix completion, for instance, image inpainting [[1], [2]], video denoising [3], classification [[4], [5]], and recommender systems [[6], [7]]. Given a matrix with some of its entries missing, the goal of matrix completion is to recover the missing entries so that the reconstructed matrix approximates the original complete matrix. This is inherently an ill-posed problem: there are infinitely many possible completions, so a unique optimal solution cannot be determined without further assumptions. Prior information about the complete matrix therefore needs to be exploited to make the problem well-defined.

In many real applications, the underlying matrix is low-rank or approximately low-rank. For instance, natural image data has low-rank structure [8]. As a result, a low-rank assumption on the expected complete matrix is commonly used in matrix completion [[8], [9], [10], [11]]. Given a partially observed matrix M ∈ ℝ^(m×n), the general matrix completion problem can be formulated as the constrained rank minimization problem

    min_X rank(X)   s.t.   X_ij = M_ij, (i, j) ∈ Ω,    (1)

where X ∈ ℝ^(m×n), rank(·) denotes the rank of its operand, and Ω ⊆ {1, …, m} × {1, …, n} is the set of indices corresponding to the observed entries in M.
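Problem (1) can be made concrete with a small synthetic example (the data below is illustrative, not from the paper): any feasible completion must match the observed entries on Ω, and infinitely many matrices satisfy that constraint, which is why the rank objective is needed to pick one out.

```python
import numpy as np

# Illustrative setup: a rank-1 ground-truth matrix M observed only on an
# index set Omega (synthetic data, names follow the formulation above).
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(5), rng.standard_normal(4))  # rank(M) = 1
Omega = rng.random(M.shape) < 0.6      # boolean mask of observed entries

# Any feasible completion X must agree with M on Omega; zero-filling the
# missing entries is one (generally high-rank) feasible point.
X = np.where(Omega, M, 0.0)
assert np.allclose(X[Omega], M[Omega])
print(np.linalg.matrix_rank(M))        # rank of the true matrix -> 1
```

The zero-filled X above is feasible but typically far from M on the unobserved entries, which is exactly the gap the rank objective in (1) is meant to close.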

However, the above problem is NP-hard in general due to the non-convex and discontinuous nature of the rank function. It has been proven theoretically that, under some general conditions, low rank matrices can be recovered exactly from most sets of sampled entries by minimizing the nuclear norm of the matrix [6]. Therefore, most existing methods for matrix completion use the nuclear norm, i.e., the sum of singular values of a matrix, as a convex surrogate of the rank function. Typical examples are singular value thresholding (SVT) [11], robust principal component analysis [[12], [13]], and nuclear norm regularized least squares [14]. Unfortunately, these nuclear norm based methods may lead to suboptimal results, since the nuclear norm may not approximate the rank function well in practice. In particular, every nonzero singular value contributes equally to the rank function, whereas in the nuclear norm the singular values are weighted by their magnitudes when added together and minimized simultaneously. Recently, the truncated nuclear norm regularization (TNNR) method [[15], [8]] was proposed, which minimizes only the sum of the min(m, n) − r smallest singular values, i.e., the truncated nuclear norm, rather than the sum of all singular values as in the nuclear norm based methods. A two-step optimization scheme was proposed to address the truncated nuclear norm minimization problem. The TNNR method outperforms the nuclear norm based methods as it gives a better approximation of the rank function.
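The distinction between the two surrogates can be sketched in a few lines of numpy (a minimal illustration, not the paper's implementation): the nuclear norm sums all singular values, while the truncated nuclear norm with parameter r sums only the min(m, n) − r smallest ones, leaving the r largest unpenalized.

```python
import numpy as np

def nuclear_norm(X):
    # sum of all singular values
    return np.linalg.svd(X, compute_uv=False).sum()

def truncated_nuclear_norm(X, r):
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return s[r:].sum()                      # drop the r largest

X = np.diag([5.0, 3.0, 1.0])                  # singular values 5, 3, 1
nn = nuclear_norm(X)                          # = 5 + 3 + 1 = 9
tnn = truncated_nuclear_norm(X, r=1)          # = 3 + 1 = 4
```

Minimizing the truncated version leaves the r dominant singular values free, so a genuinely rank-r matrix incurs zero penalty, which is why it tracks the rank function more closely than the full nuclear norm.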

Although these low-rank based approaches have obtained good results, additional information should be considered for more accurate reconstructions. A promising choice is to exploit the sparse property of the complete matrix data in a certain domain, such as transform domains where many signals have inherently sparse structures [[16], [17]]. The sparse low-rank texture inpainting (SLRTI) method proposed in [18] uses the sparse structure obtained in a transform domain to achieve better results for matrix completion. However, the sparse prior employed in this method is modeled using explicit bases in matrix form, which requires the transform to be separable. The SLRTI method employs a linearized approximation of the original objective function and thus only obtains an approximate solution. In addition, the nuclear norm is used in SLRTI to approximate the rank function, rather than the more accurate truncated nuclear norm.
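The transform-domain sparsity idea can be illustrated with a small sketch (the 2-D DCT below is an assumed example of a transform operator; the paper's operator is more general, and the soft-thresholding function is the standard proximal map of the ℓ1 norm, not the paper's exact update):

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft_threshold(z, tau):
    # elementwise proximal operator of tau * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

X = np.eye(4)                        # toy "image" data
C = dctn(X, norm="ortho")            # transform coefficients
C_sparse = soft_threshold(C, 0.1)    # shrink small coefficients toward zero
X_rec = idctn(C_sparse, norm="ortho")  # return to the original domain
```

Promoting small transform coefficients toward zero is what an ℓ1 regularizer does implicitly during optimization; the sketch just makes the shrinkage step explicit.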

This paper focuses on the matrix completion problem and proposes a novel method that considers the low-rank and sparse priors simultaneously. In particular, the truncated nuclear norm is used as the surrogate of the rank function, leading to a better approximation. The sparse prior is formulated as an ℓ1-norm regularizer in a more general way than in the SLRTI method: instead of using explicit bases to sparsify the underlying matrix, the regularizer applies the transform operator as an implicit function. As the resulting formulation cannot be addressed directly by traditional optimization methods, a two-step optimization method is proposed, which alternates between the singular value decomposition of the estimated matrix and the update of the matrix by solving a constrained optimization problem. To solve the problem involved in the second step, a variable splitting technique is used and a method following the alternating direction method of multipliers (ADMM) framework [19] is developed.
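A key ingredient of ADMM-style schemes of this kind is that each subproblem has a closed-form solution. As a hedged sketch (simplified, and not the paper's exact update, which involves the truncated nuclear norm), the classic singular value shrinkage operator solves the nuclear-norm proximal subproblem in closed form:

```python
import numpy as np

def svt(Y, tau):
    # Singular value shrinkage: the closed-form proximal map of
    # tau * ||.||_* -- each singular value is reduced by tau and
    # clipped at zero, while the singular vectors are kept.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

Y = np.diag([5.0, 3.0, 1.0])   # singular values 5, 3, 1
Z = svt(Y, tau=2.0)            # singular values become 3, 1, 0
```

Within an alternating scheme, an update of this form is typically interleaved with a projection onto the observation constraint X_ij = M_ij on Ω, which is why each iteration stays cheap despite the non-smooth objective.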

The remainder of the paper is organized as follows. In the next section, a brief review of the related work is provided. Our proposed method is presented in Section 3. Section 4 provides experimental results. Conclusions are drawn in Section 5.

Section snippets

Related work

As mentioned in the previous section, the matrix completion problem is usually addressed by considering the low-rank prior and minimizing the rank of the underlying matrix. Since the rank minimization problem (1) cannot be solved directly, the rank function in the objective is relaxed to forms that can be handled more easily. The most common approach is to approximate the rank function by the nuclear norm [[20], [11]], so that the matrix completion problem (1) can be recast as

    min_X ‖X‖_*   s.t.   X_ij = M_ij, (i, j) ∈ Ω,

where ‖X‖_* denotes the nuclear norm of X, i.e., the sum of its singular values.

Proposed method

In this section, the formulation of the proposed method is presented first, and then the corresponding optimization framework is introduced in detail.

Experimental results

In this section, several experiments are conducted to demonstrate the effectiveness of the proposed TNN-SR algorithm for matrix completion. Three state-of-the-art algorithms are used as baselines: TNNR [8], SLRTI [18], and a recently proposed method named deep matrix factorization (DMF) [25].

Conclusion

In this paper, we have proposed a novel matrix completion algorithm based on low-rank and sparse priors. Specifically, the truncated nuclear norm is employed to approximate the rank of the matrix, rather than the nuclear norm used in most existing approaches, to obtain a more accurate approximation. The sparse prior is exploited by an ℓ1-norm regularizer based on a transform operator, which is a general form to model the sparse property of the underlying matrix. We have also proposed an alternating two-step optimization method to address the resulting problem, in which the problem involved in the second step is converted to several subproblems with closed-form solutions.

Acknowledgments

This work was supported by the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province of China (17KJB510025). The authors thank the Associate Editor and the anonymous reviewers for their contributions to improving the quality of the paper.

References (26)

  • N. Komodakis, Image completion using global optimization, in: 2006 IEEE Conference on Computer Vision and Pattern...
  • W. Li et al., Efficient image completion method based on alternating direction theory
  • H. Ji, C. Liu, Z. Shen, Y. Xu, Robust video denoising using low rank matrix completion, in: 2010 IEEE Conference on...
  • R.S. Cabral, F. Torre, J.P. Costeira, A. Bernardino, Matrix completion for multi-label image classification, in:...
  • R. Cabral et al., Matrix completion for weakly-supervised multi-label image classification, IEEE Trans. Pattern Anal. Mach. Intell. (2015)
  • E.J. Candès et al., Exact matrix completion via convex optimization, Found. Comput. Math. (2009)
  • H. Steck, Training and testing of recommender systems on data missing not at random
  • Y. Hu et al., Fast and accurate matrix completion via truncated nuclear norm regularization, IEEE Trans. Pattern Anal. Mach. Intell. (2013)
  • P. Jain et al., Low-rank matrix completion using alternating minimization
  • B. Vandereycken, Low-rank matrix completion by Riemannian optimization, SIAM J. Optim. (2013)
  • J.-F. Cai et al., A singular value thresholding algorithm for matrix completion, SIAM J. Optim. (2010)
  • E.J. Candès et al., Robust principal component analysis?, J. ACM (2011)
  • J. Wright et al., Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization
