A Novel Framework for Wireless Digital Communication Signals via a Tensor Perspective

Published in: Wireless Personal Communications

Abstract

In this paper, we introduce a novel signal characteristic shared by the vast majority of communication signals: a modulated signal has an inherent low-rank structure after a reshaping operation. We first use a toy model to develop a framework for modelling this characteristic, and theoretically prove how the signal's rank structure is affected by additive white Gaussian noise and by inter-symbol interference, which is of great concern in the wireless communication field. Subsequently, the model is generalized to multi-input-multi-output signals, and tensor rank is taken into account. Using multi-linear algebra, we prove that the low-rankness of a reshaped signal depends only on the structure of its embedding subspace, and that its rank measure is upper bounded by the multi-rank of the basis tensor. As an application, we propose a novel adaptive sampling and reconstruction scheme for generic software-defined radio based on the low-rank structure. Numerical simulations demonstrate that the proposed method outperforms compressed sensing-based methods, particularly when the modulated signal does not satisfy the sparsity assumption in the time and frequency domains. Practical experiments further demonstrate that many types of modulated signals can be effectively reconstructed from very limited observations using the proposed method.
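To make the reshaping idea concrete, the following minimal sketch (our illustration with assumed parameters, not the paper's exact signal model) generates a pulse-shaped BPSK baseband signal and reshapes the sample stream so that each column holds one symbol period; the result is a rank-one matrix because every column is a scalar multiple of the pulse:

```python
import numpy as np

# Hypothetical illustration: BPSK with one Hann pulse per symbol period.
rng = np.random.default_rng(0)
N, K = 16, 64                            # samples per symbol, number of symbols
symbols = rng.choice([-1.0, 1.0], K)     # BPSK symbol sequence
pulse = np.hanning(N)                    # pulse confined to a single symbol period
signal = np.concatenate([a * pulse for a in symbols])

# Reshape the 1-D sample stream into an N x K matrix, one symbol per column.
X = signal.reshape(K, N).T               # X = pulse * symbols^T, an outer product
print(np.linalg.matrix_rank(X))          # prints 1
```

Noise and inter-symbol interference perturb this exact low-rank structure, which is what the theorems in the appendix quantify.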



Acknowledgements

This research was supported by a grant from the National Natural Science Foundation of China (No. 61671167).

Author information

Correspondence to Chao Li.

Appendix: Proof of Theoretical Results


1.1 Proof of Theorem 2

To prove the theorem, we first introduce a lemma that gives a lower bound on the spectral norm of the product of two matrices.

Lemma 4

Let \({\mathbf {A}}\in {\mathcal {R}}^{m\times {}r}\) and \({\mathbf {B}}\in {\mathcal {R}}^{n\times {}r}\) be two full-rank matrices. Then,

$$\begin{aligned} \Vert {\mathbf {AB}}^{\mathrm {T}}\Vert _2\ge \min \left\{ \sigma _{\max }({\mathbf {A}})\sigma _{\min }({\mathbf {B}}),\sigma _{\max }({\mathbf {B}})\sigma _{\min }({\mathbf {A}})\right\} \end{aligned}$$
(32)

where \(\sigma _{\max }(\cdot )\) and \(\sigma _{\min }(\cdot )\) denote the largest and smallest singular values of a matrix, respectively.

Proof

It is known that

$$\begin{aligned} \Vert {\mathbf {AB}}^{\mathrm {T}}\Vert _2=\sup _{\Vert {\mathbf {x}}\Vert =1} \Vert {\mathbf {AB}}^{\mathrm {T}}{\mathbf {x}}\Vert =\sup _{\Vert {\mathbf {x}}\Vert =1} \Vert {\mathbf {A}}{\mathbf {V}}_B{\mathbf {D}}_B{\mathbf {U}} _B^{\mathrm {T}}{\mathbf {x}}\Vert \end{aligned}$$
(33)

where \({\mathbf {B}}={\mathbf {U}}_B{\mathbf {D}}_B{\mathbf {V}}_B ^{\mathrm {T}}\) is the singular value decomposition (SVD) of \({\mathbf {B}}\). Hence, letting \({\mathbf {x}}={\mathbf {U}}_{\mathbf {B}}(:,1)\), which denotes the left singular vector corresponding to the largest singular value, we have

$$\begin{aligned} \Vert {\mathbf {AB}}^{\mathrm {T}}\Vert _2=\sup _{\Vert {\mathbf {x}}\Vert =1} \Vert {\mathbf {AB}}^{\mathrm {T}}{\mathbf {x}}\Vert \ge \sigma _{\max }({\mathbf {B}})\Vert {\mathbf {A}}{\mathbf {V}}_{\mathbf {B}}(:,1)\Vert \ge \sigma _{\max }({\mathbf {B}})\sigma _{\min }({\mathbf {A}}) \end{aligned}$$
(34)

Likewise, we have

$$\begin{aligned} \Vert {\mathbf {AB}}^{\mathrm {T}}\Vert _2\ge \sigma _{\hbox {max}}(\mathbf {A})\sigma _{\hbox {min}}({\mathbf {B}}) \end{aligned}$$
(35)

Combining inequalities (34) and (35), the lemma is proved. \(\square \)
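As a numerical sanity check (an illustration with randomly drawn full-rank matrices, not part of the proof), the bound of Lemma 4 can be verified directly:

```python
import numpy as np

# Check ||A B^T||_2 >= min{ s_max(A) s_min(B), s_max(B) s_min(A) }
# for random full-rank A (m x r) and B (n x r).
rng = np.random.default_rng(0)
m, n, r = 6, 5, 3
A = rng.standard_normal((m, r))
B = rng.standard_normal((n, r))

lhs = np.linalg.norm(A @ B.T, 2)           # spectral norm of the product
sA = np.linalg.svd(A, compute_uv=False)    # singular values, descending order
sB = np.linalg.svd(B, compute_uv=False)
rhs = min(sA[0] * sB[-1], sB[0] * sA[-1])
assert lhs >= rhs - 1e-10
```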

Next, the following lemma shows that the matrix nuclear norm does not increase under multiplication by a diagonal matrix with unit spectral norm.

Lemma 5

Let \({\mathbf {D}}\in {\mathcal {R}}^{n\times {}n}\) be a diagonal positive semi-definite matrix with \(\Vert {\mathbf {D}}\Vert _2=1\), and let \({\mathbf {X}}\in {\mathcal {R}}^{n\times {}n}\) be any matrix. Then,

$$\begin{aligned} \Vert \mathbf {DX}\Vert _*\le {}\Vert {\mathbf {X}}\Vert _* \end{aligned}$$
(36)

Proof

$$\begin{aligned}&\Vert \mathbf {DX}\Vert _*\nonumber \\&\quad =\sup _{\Vert {\mathbf {Y}}\Vert _2=1}tr(\mathbf {DXY}^{{\mathrm {T}}})\nonumber \\&\quad =\sup _{\Vert {\mathbf {Y}}\Vert _2=1}\sum _{i=1}^n{\mathbf {D}}(i,i)tr\left( \mathbf {e}_i\mathbf {e}_i^{{\mathrm {T}}}\mathbf {XY}^{{\mathrm {T}}}\right) \end{aligned}$$
(37)
$$\begin{aligned}&\quad =\sup _{\Vert {\mathbf {Y}}\Vert _2=1}\sum _{i=1}^n{\mathbf {D}}(i,i)tr\left( \mathbf {e}_i\mathbf {e}_i^{{\mathrm {T}}}{\mathbf {X}} \sum _{j=1}^n\sigma _j\mathbf {v}_j\mathbf {u}_j^{{\mathrm {T}}}\right) \nonumber \\&\quad =\sup _{\Vert \mathbf {U\Sigma {}V}^{{\mathrm {T}}}\Vert _2=1}\sum _{i=1}^n\sum _{j=1}^n {\mathbf {D}}(i,i)\sigma _jtr\left( \mathbf {e}_i\mathbf {e}_i^ {{\mathrm {T}}}{\mathbf {X}}\mathbf {v}_j\mathbf {u}_j^{{\mathrm {T}}}\right) \nonumber \\&\quad \le \sup _{\Vert \mathbf {U\Sigma {}V}^{{\mathrm {T}}}\Vert _2=1}\sum _{i=1}^n\sum _{j=1}^n {\mathbf {D}}(i,i)\sigma _j{}tr\left( \mathbf {e}_i^ {{\mathrm {T}}}{\mathbf {X}}\mathbf {v}_j\right) \nonumber \\&\quad \le \sup _{\Vert \mathbf {U\Sigma {}V}^{{\mathrm {T}}}\Vert _2=1}\sum _{i=1}^n\sum _{j=1}^n tr\left( \mathbf {e}_i^ {{\mathrm {T}}}{\mathbf {X}}\mathbf {v}_j\right) \nonumber \\&\quad =\sup _{\mathbf {VV}^{{\mathrm {T}}}={\mathbf {V}}^{{\mathrm {T}}} {\mathbf {V}}=\mathbf {I}}tr\left( \mathbf {XV}\right) \nonumber \\&\quad \le \sup _{\Vert {\mathbf {Z}}\Vert _2=1}tr(\mathbf {XZ}^{{\mathrm {T}}})\nonumber \\&\quad =\Vert {\mathbf {X}}\Vert _* \end{aligned}$$
(38)

In these formulas, \(tr(\cdot )\) denotes the trace of a matrix, \(\left\{ \mathbf {e}_i,\,i=1,\ldots ,n\right\} \) denotes the standard basis of Euclidean space, and \(\mathbf {Y=U\Sigma {}V}^{{\mathrm {T}}}\) denotes the SVD of \({\mathbf {Y}}\). Note that equation (37) holds because \(tr(\cdot )\) is a linear function, and inequality (38) holds because \(\Vert {\mathbf {D}}\Vert _2=\Vert {\mathbf {Y}}\Vert _2=1\). \(\square \)
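A quick numerical check of Lemma 5 (illustrative sizes, not part of the proof):

```python
import numpy as np

# Check ||D X||_* <= ||X||_* for diagonal PSD D with ||D||_2 = 1.
rng = np.random.default_rng(1)
n = 6
d = rng.uniform(0.0, 1.0, n)
d[rng.integers(n)] = 1.0          # force the largest diagonal entry (spectral norm) to 1
D = np.diag(d)
X = rng.standard_normal((n, n))

nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()   # nuclear norm
assert nuc(D @ X) <= nuc(X) + 1e-10
```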

Using Lemma 5, the following lemma provides an upper bound of the nuclear norm of a product of two matrices.

Lemma 6

Let \({\mathbf {A}}\in {\mathcal {R}}^{m\times {}r}\) and \({{\mathbf {B}}}\in {\mathcal {R}}^{n\times {}r}\) be any matrices. Then, it holds that

$$\begin{aligned} \Vert {{\mathbf {AB}}}^{{\mathrm {T}}}\Vert _*\le \Vert {\mathbf {A}}\Vert _2 \Vert {{\mathbf {B}}}\Vert _* \end{aligned}$$
(39)

Proof

First, let \(k=\max \{m,n,r\}\), and zero-pad \({\mathbf {A}}\) and \({{\mathbf {B}}}\) to square matrices \(\bar{{\mathbf {A}}}\in {\mathcal {R}}^{k\times {}k}\) and \(\bar{{{\mathbf {B}}}}\in {\mathcal {R}}^{k\times {}k}\), respectively. It is then obvious that the spectral norm and nuclear norm are unchanged, namely,

$$\begin{aligned} \Vert {\mathbf {A}}\Vert _*=\Vert \bar{{\mathbf {A}}}\Vert _*, \Vert {\mathbf {A}}\Vert _2=\Vert \bar{{\mathbf {A}}}\Vert _2 \end{aligned}$$
(40)

and

$$\begin{aligned} \Vert {{\mathbf {B}}}\Vert _*=\Vert \bar{{{\mathbf {B}}}}\Vert _*, \Vert {{\mathbf {B}}}\Vert _2=\Vert \bar{{{\mathbf {B}}}}\Vert _2 \end{aligned}$$
(41)

Furthermore, it holds that

$$\begin{aligned} \Vert {{\mathbf {AB}}}^{{\mathrm {T}}}\Vert _*= \Vert \bar{{\mathbf {A}}}\bar{{{\mathbf {B}}}}^{{\mathrm {T}}}\Vert _* \end{aligned}$$
(42)

Hence,

$$\begin{aligned} \Vert {{\mathbf {AB}}}^{{\mathrm {T}}}\Vert _*= & {} \Vert \bar{{\mathbf {A}}}\bar{{{\mathbf {B}}}}^{{\mathrm {T}}}\Vert _*\\= & {} \sup _{\Vert {\mathbf {X}}\Vert _2=1}tr (\bar{{\mathbf {A}}}\bar{{{\mathbf {B}}}}^{{\mathrm {T}}} {\mathbf {X}}^{{\mathrm {T}}})\\\le & {} \sup _{\Vert {\mathbf {X}}\Vert _2=1}\sum _{i=1}^k \sigma _i(\bar{{\mathbf {A}}})\sigma _i ({\mathbf {X}}\bar{{{\mathbf {B}}}})\\\le & {} \Vert {\mathbf {A}}\Vert _2 \sup _{\Vert {\mathbf {X}}\Vert _2=1}\Vert {\mathbf {X}}\bar{{{\mathbf {B}}}}\Vert _*\\= & {} \Vert {\mathbf {A}}\Vert _2 \sup _{\Vert \mathbf {UDV}^{{\mathrm {T}}}\Vert _2=1} \Vert \mathbf {DV}^{{\mathrm {T}}}\bar{{{\mathbf {B}}}}\Vert _*\\\le & {} \Vert {\mathbf {A}}\Vert _2\Vert {{\mathbf {B}}}\Vert _* \end{aligned}$$

Note that the last inequality holds by Lemma 5 and rotational invariance of the matrix nuclear norm. \(\square \)
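A direct numerical check of Lemma 6 (illustrative sizes, not part of the proof):

```python
import numpy as np

# Check ||A B^T||_* <= ||A||_2 ||B||_* for arbitrary A (m x r), B (n x r).
rng = np.random.default_rng(2)
A = rng.standard_normal((7, 4))
B = rng.standard_normal((5, 4))

nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()   # nuclear norm
assert nuc(A @ B.T) <= np.linalg.norm(A, 2) * nuc(B) + 1e-9
```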

To prove Theorem 2, we have the following inequality by using Lemmas 4 and 6

$$\begin{aligned} \frac{\Vert {{\mathbf {AB}}}^{{\mathrm {T}}}\Vert _*}{\Vert {{\mathbf {AB}}}^{{\mathrm {T}}}\Vert _2}\le \frac{\Vert {\mathbf {A}}\Vert _*\sigma _{\max }\left( {{\mathbf {B}}}\right) }{\Vert {\mathbf {A}}\Vert _2\sigma _{\min }\left( {{\mathbf {B}}}\right) } =\frac{\Vert {\mathbf {A}}\Vert _*}{\Vert {\mathbf {A}}\Vert _2}cond({{\mathbf {B}}}) \end{aligned}$$

Likewise, we have

$$\begin{aligned} \frac{\Vert {{\mathbf {AB}}}^{{\mathrm {T}}}\Vert _*}{\Vert {{\mathbf {AB}}}^{{\mathrm {T}}}\Vert _2}\le \frac{\Vert {{\mathbf {B}}}\Vert _*}{\Vert {{\mathbf {B}}}\Vert _2}cond({\mathbf {A}}) \end{aligned}$$
(43)

Combining the two inequalities, the theorem is proven. \(\square \)
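Both symmetric forms of the resulting bound on the nuclear-to-spectral norm ratio can be checked numerically (illustrative sizes, not part of the proof):

```python
import numpy as np

# Check ||AB^T||_*/||AB^T||_2 <= min{ (||A||_*/||A||_2) cond(B),
#                                     (||B||_*/||B||_2) cond(A) }.
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))
B = rng.standard_normal((5, 3))

s = lambda M: np.linalg.svd(M, compute_uv=False)   # singular values, descending
ratio = lambda M: s(M).sum() / s(M)[0]             # nuclear norm / spectral norm
cond = lambda M: s(M)[0] / s(M)[-1]                # condition number

assert ratio(A @ B.T) <= min(ratio(A) * cond(B), ratio(B) * cond(A)) + 1e-9
```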

1.2 Proof of Lemma 2

The probability

$$\begin{aligned}&{\mathbb {P}}\left( \Vert {\mathbf {X}}\Vert _2<\beta -\lambda \right) \\&\quad ={\mathbb {P}}\left( \sup _{\Vert {{\mathbf {x}}}\Vert =1}\Vert \beta {\mathbf {a}}\langle {\mathbf {p}},{{\mathbf {x}}}\rangle +\sigma \mathbf {Nx}\Vert<\beta -\lambda \right) \\&\quad \le {\mathbb {P}}\left( \Vert \beta {\mathbf {a}}+\sigma \mathbf {Np}\Vert<\beta -\lambda \right) \\&\quad \le {\mathbb {P}}\left( \vert \beta -\sigma \Vert \mathbf {Np}\Vert \vert<\beta -\lambda \right) \\&\quad ={\mathbb {P}}\left( \frac{\lambda }{\sigma }<\Vert \mathbf {Np}\Vert <\frac{2\beta }{\sigma }-\frac{\lambda }{\sigma }\right) \\&\quad \le {\mathbb {P}}\left( \Vert \mathbf {Np}\Vert _2^2-K>\frac{\lambda ^2}{\sigma ^2}-K\right) \end{aligned}$$

Because \(\Vert {\mathbf {Np}}\Vert ^2\) follows a \(\chi ^2\) distribution with K degrees of freedom, we have

$$\begin{aligned} {\mathbb {P}}\left( \Vert {\mathbf {Np}}\Vert ^2-K>\lambda _1\right) \le {}2e^{-4T} \end{aligned}$$
(44)

where \(\lambda _1=2(\sqrt{4KT}+4T)\). Hence,

$$\begin{aligned} {\mathbb {P}}\left( \Vert {\mathbf {X}}\Vert _2 <\beta -\lambda \right) \le 2e^{-4T} \end{aligned}$$
(45)

where \(\lambda =\sigma \sqrt{K+8T+4\sqrt{KT}}\). With some simple derivation, we can obtain

$$\begin{aligned} {\mathbb {P}}\left( \Vert {\mathbf {X}} \Vert _2\ge \beta -\sigma \sqrt{K+8T+4\sqrt{KT}}\right) >1-2e^{-4T} \end{aligned}$$
(46)

The lemma is proven. \(\square \)
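The chi-square tail bound (44) can be probed by Monte Carlo simulation (with illustrative, assumed values \(K=8\) and \(T=2\)); the empirical tail probability stays well below \(2e^{-4T}\):

```python
import numpy as np

# Monte Carlo check of P(||Np||^2 - K > lambda_1) <= 2 exp(-4T),
# where ||Np||^2 is chi-square distributed with K degrees of freedom.
rng = np.random.default_rng(4)
K, T = 8, 2                                   # illustrative values
lam1 = 2 * (np.sqrt(4 * K * T) + 4 * T)
samples = rng.chisquare(K, size=200_000)
emp_tail = np.mean(samples - K > lam1)        # empirical exceedance frequency
assert emp_tail <= 2 * np.exp(-4 * T)
```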

1.3 Proof of Theorem 3

The probability

$$\begin{aligned} {\mathbb {P}}\left( \frac{\Vert {\mathbf {X}}\Vert _*}{\Vert {\mathbf {X}}\Vert _2}\ge \lambda \right)\approx & {} {\mathbb {P}}\left( \frac{\Vert {\mathbf {X}}\Vert _*}{\beta }\ge \lambda \right) \\= & {} {\mathbb {P}}\left( \frac{\Vert \beta \mathbf {ap}^{{\mathrm {T}}}+\sigma {\mathbf {N}}\Vert _*}{\beta }\ge \lambda \right) \\\le & {} {\mathbb {P}}\left( \frac{\beta +\sigma \Vert {\mathbf {N}}\Vert _*}{\beta }\ge \lambda \right) \\\le & {} {\mathbb {P}}\left( \frac{1+\sigma \min \left\{ K,T\right\} \Vert {\mathbf {N}}\Vert _2}{\beta }\ge \lambda \right) \\= & {} {\mathbb {P}}\left( \Vert {\mathbf {N}}\Vert _2\ge \frac{\beta \left( \lambda -1\right) }{\sigma \min \left\{ K,T\right\} }\right) \\\le & {} {}2e^{-T} \end{aligned}$$

where

$$\begin{aligned} \frac{\beta \left( \lambda -1\right) }{\sigma \min \left\{ K,T\right\} } =(4/3)\sqrt{K+8T+4\sqrt{KT}} \end{aligned}$$
(47)

namely,

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}\left( \frac{\Vert {\mathbf {X}}\Vert _*}{\Vert {\mathbf {X}}\Vert _2}\ge {}1+\frac{4}{3}\min \left\{ K,T\right\} \left( \sigma /\beta \right) \sqrt{K+8T+4\sqrt{KT}}\right) \\&\le {}2e^{-T} \end{aligned} \end{aligned}$$
(48)

The theorem is proven. \(\square \)

1.4 Proof of Lemma 3

First, we define sets \({\mathcal {D}}_i, i=1,\ldots ,R\) as

$$\begin{aligned} {\mathcal {D}}_i=\left\{ x\Biggm |\,\vert {}x-{\mathbf {R}}_P(i,i)\vert \le \sum _{j\ne {}i}^R \vert {\mathbf {R}}_P(i,j)\vert \right\} ,\,i=1,\ldots ,R \end{aligned}$$
(49)

Using the assumption, we know

$$\begin{aligned} {\mathbf {R}}_P(1,1)-\sum _{i\ne {}1}\vert {\mathbf {R}}_P(1,i)\vert >\max _{j=2,\ldots ,R}\left\{ \sum _{i=1}^R \vert {\mathbf {R}}_P(j,i)\vert \right\} \end{aligned}$$
(50)

Hence, \({\mathcal {D}}_1\) is disjoint from all of the other sets, that is,

$$\begin{aligned} {\mathcal {D}}_1\cap {\mathcal {D}}_i=\emptyset ,\,i=2,\ldots ,R \end{aligned}$$
(51)

Using Gershgorin's theorem, we can estimate the eigenvalues of \({\mathbf {R}}_P\). Because \({\mathcal {D}}_1\) is disjoint from the other discs, the largest eigenvalue of \({\mathbf {R}}_P\) must be contained in \({\mathcal {D}}_1\). Furthermore, it is known that the singular values of \({\mathbf {P}}\) are the square roots of the corresponding eigenvalues of \({\mathbf {R}}_P\). Hence, the following inequalities hold

$$\begin{aligned} \begin{aligned}&\sqrt{\vert {\mathbf {R}}_P(1,1)\vert -\sum _{i\ne {}1}^R\vert {\mathbf {R}}_P(1,i)\vert }+(R-1)\sqrt{\min _{j\ne {}1} \left\{ \vert {\mathbf {R}}_P(j,j)\vert -\sum _{i\ne {}j}^R\vert {\mathbf {R}}_P(j,i)\vert \right\} }\\&\le \Vert {\mathbf {P}}\Vert _*=\sum _{i=1}^R \sigma _i\le \sqrt{\sum _{i=1}^R\vert {\mathbf {R}}_P(1,i)\vert }+(R-1)\sqrt{\max _{j\ne {}1}\left\{ \sum _{i=1}^R\vert {\mathbf {R}}_P(j,i)\vert \right\} } \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}&\quad \sqrt{\vert {\mathbf {R}}_P(1,1)\vert -\sum _{i\ne {}1}^R\vert {\mathbf {R}}_P(1,i)\vert }\\&\le \Vert {\mathbf {P}}\Vert _2=\sigma _1\le \sqrt{\sum _{i=1}^R\vert {\mathbf {R}}_P(1,i)\vert } \end{aligned} \end{aligned}$$
(52)

By using these two inequalities, we can easily obtain the result of the lemma. \(\square \)
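The Gershgorin step can be illustrated numerically, reading \({\mathbf {R}}_P\) as the Gram matrix \({\mathbf {P}}^{{\mathrm {T}}}{\mathbf {P}}\) (our interpretation of the notation): every squared singular value of \({\mathbf {P}}\) is an eigenvalue of \({\mathbf {R}}_P\) and hence lies in the union of the Gershgorin discs.

```python
import numpy as np

# Each eigenvalue of R_P = P^T P (i.e., each squared singular value of P)
# lies in some Gershgorin disc of R_P.
rng = np.random.default_rng(5)
P = rng.standard_normal((6, 4))
R = P.T @ P
sig2 = np.linalg.svd(P, compute_uv=False) ** 2            # eigenvalues of R

radii = np.sum(np.abs(R), axis=1) - np.abs(np.diag(R))    # off-diagonal row sums
for lam in sig2:
    assert any(abs(lam - R[i, i]) <= radii[i] + 1e-10 for i in range(R.shape[0]))
```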

1.5 Proof of Theorem 5

Given the flattening index \((\bar{a},\bar{b})\), write \({\mathbf {X}}={\mathbf {X}}_{(\bar{a},\bar{b})}\) and \({\mathbf {P}}_k={\mathbf {P}}_{k,(\bar{a},\bar{b})},\,k=1,\ldots ,K\) for short. Moreover, assume that \({\mathbf {P}}_k\) admits the SVD \({\mathbf {P}}_k={\mathbf {U}}_k{\mathbf {D}}_k{\mathbf {V}}_k^{{\mathrm {T}}}\), where \({\mathbf {U}}_k\in {\mathcal {R}}^{M\times {}R_k}\) and \({\mathbf {V}}_k\in {\mathcal {R}}^{T\times {}R_k}\) denote the left and right singular matrices, respectively, and the diagonal matrix \({\mathbf {D}}_k\in {\mathcal {R}}^{R_k\times {}R_k}\) contains the singular values. By (24), \({\mathbf {X}}\) can be represented as

$$\begin{aligned} {\mathbf {X}}&= \left[ \begin{array}{ccc} {\mathbf {U}}_1&\ldots&{\mathbf {U}}_K \end{array} \right] \left[ \begin{array}{ccc} \alpha _1{\mathbf {D}}_1&{}\mathbf {0}&{}\mathbf {0}\\ \mathbf {0}&{}\ddots &{}\mathbf {0}\\ \mathbf {0}&{}\mathbf {0}&{}\alpha _K{\mathbf {D}}_K \end{array} \right] \\&\quad \cdot \left[ \begin{array}{ccc} {\mathbf {V}}_1&\ldots&{\mathbf {V}}_K \end{array} \right] ^{{\mathrm {T}}}\\& =\mathbf {UDV}^{{\mathrm {T}}} \end{aligned} $$
(53)

where \([{\mathbf {A}}_1\ldots {\mathbf {A}}_K]\) denotes the concatenation of \({\mathbf {A}}_k,\,k=1,\ldots ,K\). According to the properties of matrix rank, we know

$$\begin{aligned} rank({\mathbf {X}})\le \min \{rank({\mathbf {U}}),rank({\mathbf {V}})\} \end{aligned}$$
(54)

Meanwhile, note that

$$\begin{aligned} & {\mathbf {P}}_{(\bar{a},\bar{b}\cup {}\{L+1\})}\\&\quad= \left[ \begin{array}{ccc} {\mathbf {P}}_1&\ldots&{\mathbf {P}}_K \end{array} \right] \\&\quad= \left[ \begin{array}{ccc} {\mathbf {U}}_1&\ldots&{\mathbf {U}}_K \end{array} \right] \left[ \begin{array}{ccc} {\mathbf {D}}_1{\mathbf {V}}_1^{{\mathrm {T}}}&{}\mathbf {0}&{}\mathbf {0}\\ \mathbf {0}&{}\ddots &{}\mathbf {0}\\ \mathbf {0}&{}\mathbf {0}&{}{\mathbf {D}}_K{\mathbf {V}}_K^{{\mathrm {T}}} \end{array} \right] \end{aligned}$$
(55)

and

$$\begin{aligned} \begin{aligned}&\quad {\mathbf {P}}_{(\bar{a}\cup {}\{L+1\},\bar{b})}\\&=\left[ \begin{array}{ccc} {\mathbf {P}}_1^{{\mathrm {T}}}&\ldots&{\mathbf {P}}_K^{{\mathrm {T}}} \end{array} \right] \\&=\left[ \begin{array}{ccc} {\mathbf {V}}_1&\ldots&{\mathbf {V}}_K \end{array} \right] \left[ \begin{array}{ccc} {\mathbf {D}}_1{\mathbf {U}}_1^{{\mathrm {T}}}&{}\mathbf {0}&{}\mathbf {0}\\ \mathbf {0}&{}\ddots &{}\mathbf {0}\\ \mathbf {0}&{}\mathbf {0}&{}{\mathbf {D}}_K{\mathbf {U}}_K^{{\mathrm {T}}} \end{array} \right] \end{aligned} \end{aligned}$$
(56)

It can be determined that

$$\begin{aligned} rank({\mathbf {P}}_{(\bar{a},\bar{b}\cup {}\{L+1\})})=rank({\mathbf {U}}) \end{aligned}$$
(57)

and

$$\begin{aligned} rank({\mathbf {P}}_{(\bar{a}\cup {}\{L+1\},\bar{b})})=rank({\mathbf {V}}) \end{aligned}$$
(58)

because the right-hand block matrices have full column rank. Hence, the theorem is proven. \(\square \)
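The rank bound can be checked on a synthetic example in which the flattenings (55) and (56) are modeled as horizontal concatenations of the slices (a sketch with assumed sizes, not the paper's full tensor machinery):

```python
import numpy as np

# X = sum_k alpha_k P_k; rank(X) is bounded by the ranks of the two
# concatenations [P_1 ... P_K] and [P_1^T ... P_K^T].
rng = np.random.default_rng(6)
M, T, K = 6, 7, 3
Ps = [rng.standard_normal((M, 2)) @ rng.standard_normal((2, T)) for _ in range(K)]
alphas = rng.uniform(0.5, 1.5, K)
X = sum(a * P for a, P in zip(alphas, Ps))

row_concat = np.hstack(Ps)                    # models P_(a, b ∪ {L+1})
col_concat = np.hstack([P.T for P in Ps])     # models P_(a ∪ {L+1}, b)
r = np.linalg.matrix_rank
assert r(X) <= min(r(row_concat), r(col_concat))
```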

1.6 Proof of Theorem 7

From the assumptions used in the proof of Theorem 5, we have

$$\begin{aligned} \begin{aligned}&{\mathbf {X}}=\sum _{k=1}^K\alpha _k{\mathbf {P}}_k\\&=\left[ \begin{array}{ccc} {\mathbf {P}}_1&\ldots&{\mathbf {P}}_K \end{array} \right] \left[ \begin{array}{c} \alpha _1\mathbf {I}\\ \vdots \\ \alpha _K\mathbf {I} \end{array} \right] \\&={\mathbf {P}}_{(\bar{a},\bar{b}\cup {}\{L+1\})}{\mathbf {A}}^{{\mathrm {T}}} \end{aligned} \end{aligned}$$
(59)

It can easily be found that \({\mathbf {A}}\) has full column rank and can be decomposed by SVD such that

$$\begin{aligned} {\mathbf {A}}=\left( \left( \sum _{k=1}^K\alpha _k^2\right) ^{-\frac{1}{2}} \left[ \begin{array}{c} \alpha _1\mathbf {I}\\ \vdots \\ \alpha _K\mathbf {I} \end{array} \right] \right) \left( \left( \sum _{k=1}^K\alpha _k^2\right) ^{\frac{1}{2}}\mathbf {I}\right) \end{aligned}$$
(60)

Hence, we obtain that the condition number of \({\mathbf {A}}\) equals 1, namely, \(cond({\mathbf {A}})=1\).

Using Theorem 2, we have

$$\begin{aligned} \frac{\Vert {\mathbf {X}}\Vert _*}{\Vert {\mathbf {X}}\Vert _2}\le \frac{\Vert {\mathbf {P}}_{(\bar{a},\bar{b}\cup {}\{L+1\})}\Vert _*}{\Vert {\mathbf {P}}_{(\bar{a},\bar{b}\cup {}\{L+1\})}\Vert _2}cond({\mathbf {A}}) \end{aligned}$$
(61)

Hence,

$$\begin{aligned} \frac{\Vert {\mathbf {X}}\Vert _*}{\Vert {\mathbf {X}}\Vert _2}\le \frac{\Vert {\mathbf {P}}_{(\bar{a},\bar{b}\cup {}\{L+1\})}\Vert _*}{\Vert {\mathbf {P}}_{(\bar{a},\bar{b}\cup {}\{L+1\})}\Vert _2} \end{aligned}$$
(62)

Likewise,

$$\begin{aligned} \frac{\Vert {\mathbf {X}}\Vert _*}{\Vert {\mathbf {X}}\Vert _2}\le \frac{\Vert {\mathbf {P}}_{(\bar{a}\cup {}\{L+1\},\bar{b})}\Vert _*}{\Vert {\mathbf {P}}_{(\bar{a}\cup {}\{L+1\},\bar{b})}\Vert _2} \end{aligned}$$
(63)

Combining these two inequalities, the theorem is proven. \(\square \)
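The key fact \(cond({\mathbf {A}})=1\) used above is easy to confirm numerically: stacking the scaled identity blocks \(\alpha _k\mathbf {I}\) yields a matrix whose singular values are all equal (illustrative weights below):

```python
import numpy as np

# A^T = [alpha_1 I ... alpha_K I]  =>  A^T A = (sum_k alpha_k^2) I,
# so every singular value of A equals sqrt(sum_k alpha_k^2) and cond(A) = 1.
alphas = np.array([0.3, 1.2, 0.7, 2.0])       # illustrative weights
n = 5
A = np.vstack([a * np.eye(n) for a in alphas])
s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(s[0] / s[-1], 1.0)          # condition number is exactly 1
```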

Cite this article

Zhao, Y., Li, C., Dou, Z. et al. A Novel Framework for Wireless Digital Communication Signals via a Tensor Perspective. Wireless Pers Commun 99, 509–537 (2018). https://doi.org/10.1007/s11277-017-5124-0
