Abstract
Sparse representation has garnered significant attention across multiple fields, including signal processing, statistics, and machine learning. The fundamental idea of this technique is that a signal can be expressed as a linear combination of only a few elements from a known basis. Compressed sensing (CS) is a notable application of this idea, valued for its potential to improve data collection by enabling efficient acquisition and recovery from just a few measurements. In this paper, we propose a novel approach to the high-order CS problem based on the Einstein product, using a tensor dictionary instead of the matrix-based dictionaries commonly used in the Tucker model. Our approach provides a more general framework for compressed sensing. We present two novel models for the CS problem in the multidimensional case. The first is a natural generalization of CS to higher-dimensional signals: we extend the traditional CS framework to capture the sparsity of multidimensional signals effectively and to enable efficient recovery. The second introduces a complexity-reduction technique based on a low-rank representation of the signal. We extend the orthogonal matching pursuit (OMP) and homotopy algorithms to solve the high-order CS problem. Through various simulations, we validate the effectiveness of the proposed method, including its application to the tensor completion problem for 2D and 3D color and hyperspectral images.
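To make the core operation concrete, the following sketch illustrates the Einstein product on which the proposed models rest: for a tensor A of size I1 x I2 x K1 x K2 and a tensor B of size K1 x K2 x J1 x J2, the product A *_2 B contracts the last two modes of A against the first two modes of B, generalizing the matrix product. The snippet is a minimal NumPy illustration with arbitrary example sizes, not the authors' implementation; it also checks the standard identity that the Einstein product coincides with an ordinary matrix product of suitable unfoldings.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5, 6))    # I1 x I2 x K1 x K2 (example sizes)
B = rng.standard_normal((5, 6, 2, 7))    # K1 x K2 x J1 x J2

# Einstein product A *_2 B: sum over the shared modes (k1, k2).
C = np.einsum('abkl,klcd->abcd', A, B)   # result has shape (3, 4, 2, 7)

# Equivalent view: matrix product of row-major unfoldings.
C_mat = A.reshape(3 * 4, 5 * 6) @ B.reshape(5 * 6, 2 * 7)
assert np.allclose(C, C_mat.reshape(3, 4, 2, 7))

Since the paper extends OMP to this setting, the following is a minimal sketch of classical (matrix) orthogonal matching pursuit, the greedy loop that a high-order variant generalizes by replacing matrix-vector products with Einstein products; all names and sizes here are illustrative.

def omp(D, y, k):
    """Greedy OMP: pick the atom most correlated with the residual,
    then re-fit the coefficients by least squares on the support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        x_s, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ x_s
    x = np.zeros(D.shape[1])
    x[support] = x_s
    return x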
Data availability
Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.
Acknowledgements
We would like to thank the two referees for their valuable remarks and helpful comments.
Funding
This work was supported by the OCP Foundation through the APRD research program.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest that could have influenced the work presented in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Ferdaous Ait Addi made the major contribution to this work.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Addi, F.A., Bentbib, A.H. & Jbilou, K. Tensor sparse representation via Einstein product. Comp. Appl. Math. 43, 222 (2024). https://doi.org/10.1007/s40314-024-02749-9