Elsevier

Signal Processing

Volume 157, April 2019, Pages 213-224

Orthogonal tubal rank-1 tensor pursuit for tensor completion

https://doi.org/10.1016/j.sigpro.2018.11.015

Highlights

  • A novel tensor completion method based on tensor tubal rank minimization and orthogonal matching pursuit is proposed.

  • Convergence in the noise-free case is proved.

  • The algorithm is modified to handle tubal impulsive noise by employing tensor tubal ℓp-norm optimization with 1 < p < 2.

  • Experimental results show that the proposed methods outperform state-of-the-art algorithms in terms of RMSE, PSNR, SSIM and/or computational complexity.

Abstract

This work addresses the problem of tensor completion. The properties of the tensor tubal rank are first discussed, and it is shown that the tensor tubal rank has properties similar to those of the matrix rank derived from the SVD. A completion algorithm for the case where the measurements are noise-free or corrupted by Gaussian noise is then proposed, based on an orthogonal pursuit of tubal rank-1 tensors. The philosophy behind the devised approach is to relax the problem of tensor tubal rank minimization into tensor Frobenius-norm optimization with a constraint on the maximum number of orthogonal tensors. An iterative procedure that computes one orthogonal tensor per iteration is then suggested, and its local convergence in the noise-free case is proved. Furthermore, the proposed method is generalized to the situation where the observations are corrupted by impulsive noise in a tubal form. To tackle the impulsive noise, we formulate the tensor completion problem as minimization of the tensor tubal ℓp-norm with 1 < p < 2, and an iteratively reweighted procedure is employed to compute the orthogonal tensors. The algorithms are compared with state-of-the-art approaches using both synthetic data and real data sets.

Introduction

Recovery of multi-dimensional data from a limited number of measurements, referred to as matrix completion or tensor completion, is an important problem that has drawn increasing attention in recent years. It arises in various fields such as recommendation systems [1], dimensionality reduction [2], speech signal processing [3], [4], MIMO radar [5], [6], data mining [7], [8], multi-class learning [9], and computer vision [10], [11].

The idea of recovering unmeasured entries of a matrix basically relies on finding a low-rank matrix that models the original data, which in the general case amounts to minimizing the matrix rank, i.e., the ℓ0-norm of the vector of singular values. However, this is a non-convex NP-hard problem. One conventional solution is to minimize the matrix nuclear norm, i.e., the ℓ1-norm of the singular values, instead of the rank, thereby relaxing the non-convex problem to a tractable convex one [12], [13], [14]. Based on this concept, a number of algorithms have been proposed, such as Singular Value Thresholding (SVT) [15], a Grassmann-manifold based method [16], Singular Value Projection [17] and Fixed Point Continuation with Approximate SVD (FPCA) [18].
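As a toy illustration of this convex relaxation, the shrinkage step at the heart of SVT-type methods soft-thresholds the singular values, which is the proximal map of the nuclear norm. The numpy sketch below is only illustrative; the threshold tau and the test matrix are assumed parameters, not values from the cited papers:

```python
import numpy as np

def svt(M, tau):
    """Soft-threshold the singular values of M (the shrinkage step of SVT)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # proximal map of the nuclear norm
    return U @ np.diag(s_shrunk) @ Vt

# A rank-2 matrix plus small Gaussian noise: the shrinkage suppresses the
# small singular values contributed by the noise and keeps the two large ones.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
N = 0.01 * rng.standard_normal((20, 20))
X = svt(M + N, tau=1.0)
print(np.linalg.matrix_rank(X, tol=1e-6))
```

Iterating this shrinkage while re-enforcing agreement with the observed entries is what turns the NP-hard rank objective into a tractable convex procedure.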

For tensors, which are regarded as a generalization of vectors and matrices, the recovery process is more natural because it exploits the intrinsically multi-dimensional structure, but the problem is harder to solve because of the lack of a unique definition of tensor rank. One class of methods uses the minimum number of rank-1 tensors from the CANDECOMP/PARAFAC (CP) decomposition [19], [20], [21]. However, this decomposition is very costly, and sometimes fails to give reasonable results because of its ill-posedness [22]. Other researchers minimize the Tucker rank from the Tucker decomposition and achieve good reconstruction results, but these algorithms require knowledge of the tensor rank [23] or a rank estimate [5], and are therefore only valid for special applications. Moreover, some tensor completion methods based on CP or Tucker rank minimization rely on an optimization toolbox [24], leading to very high complexity [25]. Others have instead posed a low-n-rank tensor recovery problem, which first unfolds an R-D tensor into R matrices in R ways and then minimizes the sum of the ranks or nuclear norms of all the matrices for data recovery [26]. However, the unfolding operations may fail to exploit the tensor structure, leading to a suboptimal procedure [27]. Furthermore, when the tensor is very large, these methods must perform the Singular Value Decomposition (SVD) on several large matrices at each iteration, resulting in considerably high computational complexity [28].

Recently, a new type of tensor decomposition based on the tensor product was proposed [29]. This factorization first defines the tensor tubal rank as the tensor rank and then decomposes the tensor into several sub-tensors, unraveling the multi-dimensional structure by constructing group rings along the tensor tubes. The advantage of such an approach is that the resulting algebra and analysis are very close to those of matrices [30], and it has been shown to be suitable for tensor completion [11]. However, similar to the matrix nuclear-norm based methods [15], these algorithms require the selection of several user-defined parameters, and with inappropriate parameters the performance degrades severely. Moreover, most existing methods cannot deal with outlier measurements [31], [32]. In this work, we propose a novel tensor completion method based on tensor tubal rank [29], [30] minimization together with orthogonal matching pursuit [33]. A modification is then applied for tensor completion under tubal impulsive noise by employing tensor tubal ℓp-norm optimization on the residual between the measurements and the recovered data with p < 2.
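The tube-unraveling idea can be sketched numerically: under the t-product framework of [29], taking the FFT along the third mode turns the tensor product into independent matrix products on the frontal slices, and (under the common convention) the tubal rank is the largest rank among those Fourier-domain slices. The following is a minimal reading of that construction, not the authors' implementation:

```python
import numpy as np

def tubal_rank(T, tol=1e-8):
    """Tubal rank of a 3-D tensor: FFT along the tubes (third mode), then
    the largest matrix rank among the frontal slices in the Fourier domain."""
    Tf = np.fft.fft(T, axis=2)
    return max(np.linalg.matrix_rank(Tf[:, :, k], tol=tol)
               for k in range(T.shape[2]))

# Build a tubal rank-1 tensor as the t-product of two "tube vectors":
# slice-wise outer products in the Fourier domain, then an inverse FFT.
rng = np.random.default_rng(1)
a = rng.standard_normal((5, 1, 4))
b = rng.standard_normal((1, 6, 4))
af, bf = np.fft.fft(a, axis=2), np.fft.fft(b, axis=2)
Xf = np.stack([af[:, :, k] @ bf[:, :, k] for k in range(4)], axis=2)
X = np.real(np.fft.ifft(Xf, axis=2))   # real since a, b are real
print(tubal_rank(X))
```

Because each Fourier-domain slice of X is an outer product of nonzero vectors, every slice has rank 1, so the tubal rank is 1.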

The rest of the paper is organized as follows. In Section 2, the notations and required definitions are first presented. In Section 3, our tensor recovery approaches for both random sampling and tubal sampling cases are developed, and the convergence is also proved. In Section 4, a modified algorithm is devised to tackle the situation when the measured entries are corrupted by tubal outliers. Finally, numerical results are provided in Section 5, and conclusions are drawn in Section 6.

Section snippets

Notations, definitions and preliminaries

Before presenting the main results, we introduce the notations and preliminaries used in this paper. We mainly follow the definitions in [29], [30], and some notations are newly defined in this section.

Scalars, vectors, matrices and tensors are denoted by italic, bold lower-case, bold upper-case and bold calligraphic symbols, respectively. The transpose and conjugate transpose of a vector or a matrix are written as (·)ᵀ and (·)ᴴ, and the i × i identity matrix is symbolized as Iᵢ. To refer to the (m1, m2, …, mR)

Main result

In this section, we derive the tensor recovery algorithm using 3-D data as an example. Note that for higher-dimensional data, we can recursively apply the Fourier transform over successive dimensions higher than three [35] and then unfold the whole tensor into a 3-D one.
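The reduction from higher-order data to the 3-D case can be sketched as follows; this is an illustrative reading of the recursive-FFT idea cited above ([35]), not the authors' exact implementation, and the mode-merging order is an assumption:

```python
import numpy as np

def to_3d(T):
    """Illustrative reduction of an R-D tensor (R > 3) to a 3-D one:
    apply the FFT recursively over every mode beyond the third, then
    merge those modes with the third mode into a single tube mode."""
    X = T.astype(complex)
    for ax in range(3, X.ndim):        # modes 4, 5, ..., R
        X = np.fft.fft(X, axis=ax)
    m1, m2 = X.shape[0], X.shape[1]
    return X.reshape(m1, m2, -1)       # unfold into a 3-D tensor

T = np.zeros((4, 5, 6, 7))
print(to_3d(T).shape)
```

A 4-D tensor of shape (4, 5, 6, 7) is mapped to a 3-D tensor of shape (4, 5, 42); a tensor that is already 3-D passes through unchanged.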

Rank-1 tensor pursuit with random tubal outlier

In this section, we focus on the problem where the observed data are corrupted by tubal impulsive noise; that is, some tubes of a tensor Y ∈ ℝ^(M1×M2×M3) are corrupted by impulsive noise while the other tubes are clean or only contaminated by additive Gaussian noise. This phenomenon has been well elaborated in [32], [39]. For ease of presentation, we assume that the observed tensor is sampled by random tubal sampling, and define the additive noise as E = (1 − q2)E1 + q2E2, where the entries in E1 are either zero or follow
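A common way to simulate such a two-component mixture with tubal structure is to draw a Bernoulli mask over the tube index grid and replace whole tubes by a high-variance component. This is our reading of the model above; q2, sigma1 and sigma2 are illustrative parameters, not values from the paper:

```python
import numpy as np

# Tubal impulsive noise: each (i, j) tube is either mildly Gaussian
# (probability 1 - q2) or hit by a high-variance impulsive component
# (probability q2). Whole tubes are corrupted at once.
rng = np.random.default_rng(2)
M1, M2, M3 = 30, 30, 16
q2, sigma1, sigma2 = 0.1, 0.01, 10.0   # assumed illustrative values

mask = rng.random((M1, M2)) < q2                 # tubes selected for impulses
E1 = sigma1 * rng.standard_normal((M1, M2, M3))  # nominal Gaussian noise
E2 = sigma2 * rng.standard_normal((M1, M2, M3))  # impulsive component
E = np.where(mask[:, :, None], E2, E1)           # mix along the tube mode
print(E.shape, mask.mean())
```

Because corrupted tubes have entries orders of magnitude larger than clean ones, a least-squares (ℓ2) fit is dominated by them, which motivates the ℓp-norm residual with p < 2 used in the sequel.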

Numerical results

In this section, we present numerical results for the proposed and state-of-the-art algorithms based on synthetic data as well as real image and video data. The first subsection considers the standard tensor completion problem, while the second tackles the problem in the presence of tubal outliers. All experiments are repeated 100 times to obtain the average performance.

Conclusion

In this paper, we devise several data recovery methods for the tensor completion problem using the idea of tensor tubal rank and greedy orthogonal tensor pursuit. The TR1TP and TR1TPT methods are first developed for the situations where the measured entries are randomly sampled or tubal randomly sampled. Using the idea of unraveling the multi-dimensional structure by performing the FFT along the tensor tubes, the recovered orthogonal multi-dimensional tensors can be retrieved. Additionally, the

References (46)

  • L.T. Huang et al.

    Target estimation in bistatic MIMO radar via tensor completion

    Signal Process.

    (2016)
  • X. Hu et al.

    Matrix completion-based MIMO radar imaging with sparse planar array

    Signal Process.

    (2017)
  • T.G. Kolda et al.

    Scalable tensor decompositions for multi-aspect data mining

    2008 Eighth IEEE International Conference on Data Mining

    (2008)
  • N. Boumal et al.

    RTRMC: a Riemannian trust-region method for low-rank matrix completion

    Proceedings of International Conference on Neural Information Processing Systems 24 (NIPS 2011)

    (2011)
  • N. Linial et al.

    The geometry of graphs and some of its algorithmic applications

    Combinatorica

    (1995)
  • Y. Pang et al.

    Learning regularized LDA by clustering

    IEEE Trans. Neural Netw. Learn. Syst.

    (2014)
  • M.J. Taghizadeh et al.

    Ad hoc microphone array calibration: Euclidean distance matrix completion algorithm and theoretical guarantees

    Signal Process.

    (2015)
  • J. Sun et al.

    MultiVis: content-based social network exploration through multi-way visual analysis

    SIAM International Conference on Data Mining (SDM2009)

    (2009)
  • G. Obozinski et al.

    Joint covariate selection and joint subspace selection for multiple classification problems

    Stat. Comput.

    (2010)
  • J. Liu et al.

    Tensor completion for estimating missing values in visual data

    IEEE Trans. Pattern Anal. Mach. Intell.

    (2013)
  • Z. Zhang et al.

    Novel methods for multilinear data completion and de-noising based on tensor-SVD

    IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    (2014)
  • M. Fazel et al.

    A rank minimization heuristic with application to minimum order system approximation

    Proceedings of the 2001 American Control Conference

    (2001)
  • B. Recht et al.

    Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization

    SIAM Rev.

    (2010)
  • E.J. Candés et al.

    Exact matrix completion via convex optimization

    Found. Comput. Math.

    (2009)
  • J.F. Cai et al.

    A singular value thresholding algorithm for matrix completion

    SIAM J. Optim.

    (2010)
  • R.H. Keshavan, S. Oh, A gradient descent algorithm on the Grassmann manifold for matrix completion, 2009, arXiv preprint...
  • P. Jain et al.

    Guaranteed rank minimization via singular value projection

    Proceedings of International Conference on Neural Information Processing Systems 23 (NIPS 2010)

    (2010)
  • S.Q. Ma et al.

    Fixed point and Bregman iterative methods for matrix rank minimization

    Math. Program.

    (2011)
  • T.G. Kolda et al.

    Tensor decompositions and applications

    SIAM Rev.

    (2009)
  • C. Mu et al.

    Square deal: lower bounds and improved relaxations for tensor recovery

    Proceedings of the 31st International Conference on Machine Learning (ICML)

    (2014)
  • P. Jain et al.

    Provable tensor factorization with missing data

    Proceedings of International Conference on Neural Information Processing Systems 27 (NIPS 2014)

    (2014)
  • V. De Silva et al.

    Tensor rank and the ill-posedness of the best low-rank approximation problem

    SIAM J. Matrix Anal. Appl.

    (2008)
  • M. Filipović et al.

    Tucker factorization with missing data with application to low-n-rank tensor completion

    Multidimens. Syst. Signal Process.

    (2015)

    The work described in this paper was supported by the National Natural Science Foundation of China (NSFC) under Grant 61501300 and U1501253.
