Orthogonal tubal rank-1 tensor pursuit for tensor completion
Introduction
Recovery of multi-dimensional data from a limited number of measurements, referred to as matrix completion or tensor completion, is an important problem that has drawn increasing attention in recent years. This problem arises in various fields such as recommendation systems [1], dimensionality reduction [2], speech signal processing [3], [4], MIMO radar [5], [6], data mining [7], [8], multi-class learning [9], and computer vision [10], [11].
The recovery of unmeasured entries of a matrix basically relies on finding a low-rank matrix that models the original data, which in the general case amounts to minimizing the matrix rank, i.e., the ℓ0-norm of the vector of singular values. However, this is a non-convex NP-hard problem. One conventional solution is to minimize the matrix nuclear norm, the ℓ1-norm of the singular values, instead of the matrix rank, thereby relaxing the non-convex problem to a tractable convex one [12], [13], [14]. Based on this concept, a number of algorithms have been proposed, such as Singular Value Thresholding (SVT) [15], a Grassmann-manifold-based method [16], Singular Value Projection [17] and Fixed Point Continuation with Approximate SVD (FPCA) [18].
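The nuclear-norm relaxation mentioned above can be illustrated with a short sketch of singular value shrinkage for matrix completion. This is a minimal illustration in the spirit of SVT [15], not the authors' implementation; the function name `svt` and the fixed step-size and threshold values are chosen here purely for illustration:

```python
import numpy as np

def svt(M, mask, tau=5.0, step=1.2, iters=200):
    """Singular-value-shrinkage sketch for matrix completion.

    M    : observed matrix (arbitrary values at unobserved entries)
    mask : boolean array, True where M is observed
    tau  : threshold on the singular values (the l1 relaxation of rank)
    """
    Y = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        # Shrink the singular values: this is the proximal step of the
        # nuclear norm, i.e. soft-thresholding of the l1-norm of s.
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # Gradient step enforcing agreement on the observed entries only.
        Y = Y + step * mask * (M - X)
    return X
```

On a low-rank matrix with enough observed entries, the iterates converge to an exact completion; the shrinkage step is what keeps every intermediate estimate low-rank.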
For tensors, which are regarded as a generalization of vectors and matrices, data recovery is more natural as it exploits the intrinsically multi-dimensional structure, but the problem is more complicated to solve because of the lack of a unique definition of tensor rank. One class of methods minimizes the number of rank-1 tensors in the CANDECOMP/PARAFAC (CP) decomposition [19], [20], [21]. However, the decomposition is very costly, and sometimes fails to give reasonable results because of its ill-posedness [22]. Other researchers minimize the Tucker rank from the Tucker decomposition and achieve good reconstruction results, but these algorithms require knowledge of the tensor rank [23] or a rank estimate [5], and are therefore only valid for special applications. Moreover, some tensor completion methods based on CP or Tucker rank minimization rely on an optimization toolbox [24], leading to very high complexity [25]. Yet another line of work poses a low-n-rank tensor recovery problem, which first unfolds an R-D tensor into R matrices along its R modes and then minimizes the sum of the ranks or nuclear norms of all these matrices [26]. However, the unfolding operations may fail to exploit the tensor structure, leading to a suboptimal procedure [27]. Furthermore, when the tensor is very large, these methods must perform a Singular Value Decomposition (SVD) of several large matrices at each iteration, resulting in considerably high computational complexity [28].
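The mode-wise unfolding underlying the low-n-rank approach [26] can be sketched as follows; `unfold` is a hypothetical helper name, and the nuclear-norm minimization over the unfoldings is omitted:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: arrange the mode-`mode` fibers of T as columns.

    An R-D tensor yields R such matrices; the low-n-rank approach [26]
    minimizes the sum of their nuclear norms.
    """
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
```

For a 2 × 3 × 4 tensor, the three unfoldings have shapes 2 × 12, 3 × 8 and 4 × 6; each is a large dense matrix, which is why per-iteration SVDs on all unfoldings become expensive for big tensors.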
Recently, a new type of tensor decomposition based on the tensor product was proposed [29]. This factorization method first defines the tensor tubal rank as the tensor rank and then decomposes the tensor into several sub-tensors, unraveling the multi-dimensional structure by constructing group rings along the tensor tubes. The advantage of such an approach is that the resulting algebra and analysis closely parallel those of matrices [30], and it has been shown to be suitable for tensor completion [11]. However, similar to the matrix nuclear norm based methods [15], these algorithms require several user-defined parameters, and inappropriate choices severely degrade performance. Moreover, most existing methods cannot handle outlier measurements [31], [32]. In this work, we propose a novel tensor completion method based on tensor tubal rank minimization [29], [30] combined with orthogonal matching pursuit [33]. A modification is then devised for tensor completion under tubal impulsive noise, employing tensor tubal ℓp-norm optimization, with p < 2, on the residual between the measurements and the recovered data.
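A minimal sketch of the tube-wise Fourier construction behind the tensor product (t-product) and the tensor tubal rank [29], [30]. The function names are illustrative, and normalization conventions may differ from the paper; the key point is that multiplication and rank reduce to ordinary matrix operations on the frontal slices in the Fourier domain:

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3) via FFT along the tubes."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    # Ordinary matrix products, one per frontal slice in the Fourier domain.
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.fft.ifft(Cf, axis=2).real

def tubal_rank(T, tol=1e-8):
    """Tensor tubal rank: largest rank among the Fourier-domain frontal slices."""
    Tf = np.fft.fft(T, axis=2)
    return max(np.linalg.matrix_rank(Tf[:, :, k], tol=tol)
               for k in range(T.shape[2]))
```

The t-product of an n1 × 1 × n3 tensor with a 1 × n2 × n3 tensor gives a tubal rank-1 tensor, which is exactly the kind of atom a rank-1 tensor pursuit adds at each greedy step.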
The rest of the paper is organized as follows. In Section 2, the notations and required definitions are first presented. In Section 3, our tensor recovery approaches for both random sampling and tubal sampling cases are developed, and the convergence is also proved. In Section 4, a modified algorithm is devised to tackle the situation when the measured entries are corrupted by tubal outliers. Finally, numerical results are provided in Section 5, and conclusions are drawn in Section 6.
Section snippets
Notations, definitions and preliminaries
Before going to the main result, we present the notations and preliminaries in this paper. We mainly follow the definitions in [29], [30], and some notations are newly defined in this section.
Scalars, vectors, matrices and tensors are denoted by italic, bold lower-case, bold upper-case and bold calligraphic symbols, respectively. The transpose and conjugate transpose of a vector or a matrix are written as (·)^T and (·)^H, and the i × i identity matrix is symbolized as I_i. To refer to the
Main result
In this section, we derive the tensor recovery algorithm using 3-D data as an example. Note that for higher-dimensional data, we can recursively apply the Fourier transform over successive dimensions higher than three [35] and then unfold the whole tensor into a 3-D one.
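One plausible reading of this reduction can be sketched as follows, assuming the FFT is applied over every dimension beyond the third and those dimensions are then merged into the tube dimension; the paper's exact ordering in [35] may differ, and the helper name `to_3d` is hypothetical:

```python
import numpy as np

def to_3d(T):
    """Reduce an R-D tensor (R > 3) to a 3-D tensor (sketch).

    FFT recursively over dimensions 4, 5, ..., R, then merge all
    dimensions beyond the second into a single tube dimension.
    """
    Tf = T.astype(complex)
    for ax in range(3, T.ndim):
        Tf = np.fft.fft(Tf, axis=ax)
    return Tf.reshape(T.shape[0], T.shape[1], -1)
```

Each step is invertible (inverse FFT plus reshape), so a 3-D completion algorithm applied to the result can be mapped back to the original R-D tensor.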
Rank-1 tensor pursuit with random tubal outlier
In this section, we focus on the problem where the observed data are corrupted by tubal impulsive noise, that is, some tubes of the tensor are corrupted by impulsive noise while the other tubes are clean or only contaminated by additive Gaussian noise. This phenomenon has been well elaborated in [32], [39]. For ease of presentation, we assume that the observed tensor is sampled by random tubal sampling, and define the additive noise accordingly, where the entries of the noise tensor are either zero or follow
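The tubal ℓp-norm applied to the residual can be sketched as the ℓp-norm of the tube-wise Euclidean energies; this is one plausible reading of the construction, and the paper's exact definition may differ in normalization. With p < 2, a single tube hit by a large impulse contributes far less to the objective than it would under a least-squares fit:

```python
import numpy as np

def tubal_lp_norm(R, p=1.0):
    """Tubal l_p 'norm' of a 3-D residual tensor (sketch).

    Computes the l_p norm, p < 2 for robustness, of the vector of
    Euclidean norms of the tubes (mode-3 fibers) of R.
    """
    tube_norms = np.linalg.norm(R, axis=2)   # n1 x n2 array of tube energies
    return (tube_norms ** p).sum() ** (1.0 / p)
```

For p = 1 this is the sum of tube energies, the tensor analogue of a group-sparse ℓ2,1 penalty, which is why outlier tubes are down-weighted rather than dominating the fit.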
Numerical results
In this section, we show the numerical results, based on the synthetic data as well as real image and video data, of the proposed algorithms and state-of-the-art algorithms. The first subsection considers the standard tensor completion problem while the second one tackles the problem in the presence of tubal outliers. All the experiments are repeated 100 times to obtain the average performance.
Conclusion
In this paper, we devise several data recovery methods for the tensor completion problem using the idea of tensor tubal rank and greedy orthogonal tensor pursuit. The TR1TP and TR1TPT methods are first developed under the situation when the measured entries are randomly sampled or tubal randomly sampled. Using the idea of unraveling the multi-dimensional structure by performing FFT along the tensor tubes, the recovered orthogonal multi-dimensional tensors can be retrieved. Additionally, the
References (46)
- et al., Target estimation in bistatic MIMO radar via tensor completion, Signal Process., 2016.
- et al., Matrix completion-based MIMO radar imaging with sparse planar array, Signal Process., 2017.
- et al., Scalable tensor decompositions for multi-aspect data mining, 2008 Eighth IEEE International Conference on Data Mining, 2008.
- et al., RTRMC: a Riemannian trust-region method for low-rank matrix completion, Proceedings of International Conference on Neural Information Processing Systems 24 (NIPS 2011), 2011.
- et al., The geometry of graphs and some of its algorithmic applications, Combinatorica, 1995.
- et al., Learning regularized LDA by clustering, IEEE Trans. Neural Netw. Learn. Syst., 2014.
- et al., Ad hoc microphone array calibration: Euclidean distance matrix completion algorithm and theoretical guarantees, Signal Process., 2015.
- et al., MultiVis: content-based social network exploration through multi-way visual analysis, SIAM International Conference on Data Mining (SDM 2009), 2009.
- et al., Joint covariate selection and joint subspace selection for multiple classification problems, Stat. Comput., 2010.
- et al., Tensor completion for estimating missing values in visual data, IEEE Trans. Pattern Anal. Mach. Intell., 2013.
- Novel methods for multilinear data completion and de-noising based on tensor-SVD, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- A rank minimization heuristic with application to minimum order system approximation, Proceedings of the 2001 American Control Conference.
- Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev.
- Exact matrix completion via convex optimization, Found. Comput. Math.
- A singular value thresholding algorithm for matrix completion, SIAM J. Optim.
- Guaranteed rank minimization via singular value projection, Proceedings of International Conference on Neural Information Processing Systems 23 (NIPS 2010).
- Fixed point and Bregman iterative methods for matrix rank minimization, Math. Program.
- Tensor decompositions and applications, SIAM Rev.
- Square deal: lower bounds and improved relaxations for tensor recovery, Proceedings of the 31st International Conference on Machine Learning (ICML).
- Provable tensor factorization with missing data, Proceedings of International Conference on Neural Information Processing Systems 27 (NIPS 2014).
- Tensor rank and the ill-posedness of the best low-rank approximation problem, SIAM J. Matrix Anal. Appl.
- Tucker factorization with missing data with application to low-n-rank tensor completion, Multidimens. Syst. Signal Process.