
The Large Sample Performance of a Maximum Likelihood Method for OFDM Carrier Frequency Offset Estimation

Published in: Wireless Personal Communications

Abstract

This paper is concerned with the performance analysis of a blind method for carrier frequency offset (CFO) estimation in OFDM systems. The method yields the maximum likelihood CFO estimate when the message-carrying symbols and the channel are modeled as nonrandom quantities. Simulations show that, when a large number of OFDM blocks is used in noisy scenarios, the method does not attain the corresponding Cramer-Rao lower bound (CRB); instead, its error is lower bounded by another (larger) CRB derived under the assumption of random symbols. This phenomenon is caused by the inconsistency of the nonrandom symbol estimates, and is verified by a first-order perturbation analysis and by comparison with the two CRBs.


Notes

  1. In this case, the unknowns include the parameters of the statistics of \(\mathbf{s}(k)\) rather than the actual values of \(\mathbf{s}(k)\).

  2. Collection of a large number of blocks over a time-invariant channel is possible if the channel coherence time is larger than the acquisition period. The channel coherence time is given by \(\Delta t_{c} = \frac{3}{4\sqrt{\pi }} \frac{c}{v f_{0}}\) where \(c=3\times 10^{5}\) km/s is the speed of light, \(f_{0}\) is the central carrier frequency and \(v\) the relative speed of a mobile user. For a slowly moving user with the maximum speed \(v=5\) km/h and the IEEE 802.16a wireless LAN/MAN applications where \(f_{0}=3\times 10^{9}\) Hz, the coherence time is equal to \(76.2\) ms. If the sampling frequency is chosen as \((160/7)\) MHz according to the last row of Table B.36 of [7], the duration of each block is \((256+16) \times (7/160) = 11.9 \mu s\). Thus the coherence time covers more than 6,400 blocks.

  3. This result means that the term \(z_{\psi }^{(k-1)(N+N_{g})}\) in (7) has no impact on the conditional CRB. This term arises from the initial time instant. Since the unconditional CRB is calculated from the probability density function involving only the data covariance (see (19)), this term does not affect the unconditional CRB either.
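The arithmetic in Footnote 2 can be reproduced in a few lines. The sketch below uses only the values quoted in the footnote (sampling frequency \(160/7\) MHz, block length \(256+16\) samples, coherence time \(76.2\) ms) to recover the block duration and the number of blocks covered by the coherence time.

```python
# Reproducing Footnote 2's arithmetic with the values quoted in the text.
fs_mhz = 160 / 7                    # sampling frequency in MHz (Table B.36 of [7])
block_samples = 256 + 16            # N + N_g samples per OFDM block
block_us = block_samples / fs_mhz   # block duration in microseconds
coherence_ms = 76.2                 # coherence time quoted in the footnote
blocks = coherence_ms * 1000 / block_us

print(round(block_us, 1))           # 11.9 (microseconds)
print(int(blocks))                  # 6403, i.e. more than 6,400 blocks
```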

References

  1. Bolcskei, H. (2001). Blind estimation of symbol timing and carrier frequency offset in wireless OFDM systems. IEEE Transactions on Communications, 49(6), 988–999.

  2. Chen, B. (2002). Maximum likelihood estimation of OFDM carrier frequency offset. IEEE Signal Processing Letters, 9(4), 123–126.

  3. Gao, F., & Nallanathan, A. (2006). Blind maximum likelihood CFO estimation for OFDM systems via polynomial rooting. IEEE Signal Processing Letters, 13(2), 73–76.

  4. Ghogho, M., Swami, A., & Giannakis, G. B. (2001). Optimized null-subcarrier selection for CFO estimation in OFDM over frequency-selective fading channels. In Proceedings of IEEE GLOBECOM 2001, San Antonio, USA.

  5. Haddadi, F., Nayebi, M. M., & Aref, M. R. (2009). Direction-of-arrival estimation for temporally correlated narrowband signals. IEEE Transactions on Signal Processing, 57(2), 600–609.

  6. Huang, D., & Letaief, K. B. (2006). Carrier frequency offset estimation for OFDM systems using null subcarriers. IEEE Transactions on Communications, 54(5), 813–823.

  7. IEEE. (2003). IEEE Std 802.16a-2003, IEEE Standard for Local and Metropolitan Area Networks, Part 16: Air Interface for Fixed Broadband Wireless Access Systems-Amendment 2: Medium Access Control Modifications and Additional Physical Layer Specifications for 2–11 GHz, January.

  8. Janssen, P., & Stoica, P. (1988). On the expectation of the product of four matrix-valued random variables. IEEE Transactions on Automatic Control, 33(9), 867–870.

  9. Larsson, E. G., Liu, G., Li, J., & Giannakis, G. B. (2001). Joint symbol timing and channel estimation for OFDM-based WLANs. IEEE Communications Letters, 5(8), 325–327.

  10. Liu, H., & Tureli, U. (1998). A high-efficiency carrier estimator for OFDM communications. IEEE Communications Letters, 2(4), 104–106.

  11. Ma, X., Tepedelenlioglu, C., Giannakis, G. B., & Barbarossa, S. (2001). Non-data-aided carrier offset estimators for OFDM with null subcarriers: Identifiability, algorithms, and performance. IEEE Journal on Selected Areas in Communications, 19(12), 2504–2515.

  12. Stoica, P., & Nehorai, A. (1989). MUSIC, maximum likelihood, and Cramer-Rao bound. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(5), 720–740.

  13. Stoica, P., & Nehorai, A. (1990). Performance study of conditional and unconditional direction-of-arrival estimation. IEEE Transactions on Acoustics, Speech, and Signal Processing, 38(10), 1783–1795.

  14. Stoica, P., Larsson, E. G., & Gershman, A. B. (2001). The stochastic CRB for array processing: A textbook derivation. IEEE Signal Processing Letters, 8(5), 148–150.

  15. van Trees, H. L. (1968). Detection, estimation and modulation, part I. New York: Wiley.

  16. Tureli, U., Liu, H., & Zoltowski, M. D. (2000). OFDM blind carrier offset estimation: ESPRIT. IEEE Transactions on Communications, 48(9), 1459–1461.

  17. Tureli, U., Kivanc, D., & Liu, H. (2001). Experimental and analytical studies on a high-resolution OFDM carrier offset estimator. IEEE Transactions on Vehicular Technology, 50(2), 629–643.

  18. Tureli, U., Honan, P., & Liu, H. (2004). Low-complexity nonlinear least squares carrier offset estimation for OFDM: Identifiability, diversity and performance. IEEE Transactions on Signal Processing, 52(9), 2441–2452.

Corresponding author

Correspondence to Qi Cheng.

Appendices

Appendix 1: Proof of CRB Expressions

The noisy data model (10) is similar to (1.1a) of [12], where \(\mathbf{E} \mathbf{W} z_{\psi }^{(k-1)(N+N_{g})}\) plays the role of the array manifold matrix \(A\) and \(\tilde{\mathbf{s}}(k)\) plays the role of the amplitudes \(x(t)\). However, the dependence of \(\mathbf{A}\) on \(\psi \) is different here, since all columns of \(\mathbf{A}\) are functions of \(\psi \). This difference leads to expressions different from (E.8f) and (E.8g) in [12] under the conditional model, and to a first-order partial derivative of the covariance matrix different from (16) of [14] under the unconditional model.

Proof of Lemma 2

For CFO estimation, the diagonal matrix \(X(k)\) (for amplitudes) in (E.8f) and (E.8g) should be replaced by the column vector \(\tilde{\mathbf{s}}(k)\). Then, following the derivation of \(CRB^{-1}(\theta )\) after (E.11b), the Fisher information matrix for the CFO reduces to a scalar, equal to

$$\begin{aligned} F_{c}= \frac{2}{\sigma ^{2}} \sum _{k=1}^{K} \tilde{\mathbf{s}}(k)^{H} \mathbf{Q}^{^{\prime }H} \mathbf{P}_\mathbf{Q}^{\perp } \mathbf{Q}^{^{\prime }} \tilde{\mathbf{s}}(k) = \frac{2K}{\sigma ^{2}} tr [ \mathbf{Q}^{^{\prime }H} \mathbf{P}_\mathbf{Q}^{\perp } \mathbf{Q}^{^{\prime }} \hat{\mathbf{R}}_{\tilde{\mathbf{s}}} ] \end{aligned}$$
(49)

where \(\mathbf{Q}=\mathbf{A} z_{\psi }^{(k-1)(N+N_{g})}\). Since \(\mathbf{R}_{\tilde{\mathbf{s}}}\) is assumed to be positive definite, \(\hat{\mathbf{R}}_{\tilde{\mathbf{s}}}\) can also be considered positive definite for \(K \gg 1\), and hence \(tr [ \mathbf{Q}^{^{\prime }H} \mathbf{P}_\mathbf{Q}^{\perp } \mathbf{Q}^{^{\prime }} \hat{\mathbf{R}}_{\tilde{\mathbf{s}}} ]\) is positive according to Lemma 1. Thus one can write

$$\begin{aligned} CRB_{c} = F_{c}^{-1} = \frac{\sigma ^{2}}{2K} \left( tr \left[ \mathbf{Q}^{^{\prime }H} \mathbf{P}_\mathbf{Q}^{\perp } \mathbf{Q}^{^{\prime }} \hat{\mathbf{R}}_{\tilde{\mathbf{s}}}\right]\right)^{-1}. \end{aligned}$$
(50)

Using (9), one obtains

$$\begin{aligned} \mathbf{P}_\mathbf{Q}^{\perp }&= \mathbf{I}-\mathbf{A} (\mathbf{A}^{H} \mathbf{A})^{-1} \mathbf{A}^{H} = \mathbf{I}-\mathbf{A} \mathbf{A}^{H}\end{aligned}$$
(51)
$$\begin{aligned} \mathbf{Q}^{^{\prime }}&= \mathbf{A}^{^{\prime }} -j(k-1)(N+N_{g}) \mathbf{A}. \end{aligned}$$
(52)

Using \(\mathbf{Q}^{^{\prime }H} \mathbf{P}_\mathbf{Q}^{\perp } \mathbf{Q}^{^{\prime }} = (\mathbf{A}^{^{\prime }})^{H} (\mathbf{I}_{N} - \mathbf{A} \mathbf{A}^{H}) \mathbf{A}^{^{\prime }} = \mathbf{H}\) (see Footnote 3), the conditional CRB can be simplified into

$$\begin{aligned} CRB_{c} = \frac{\sigma ^{2}}{2K} ( tr[ \mathbf{H} \hat{\mathbf{R}}_{\tilde{s}} ] )^{-1}. \end{aligned}$$
(53)
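As a numerical sanity check (not part of the original derivation), the sketch below builds a random \(\mathbf{A}\) with orthonormal columns so that (9) holds, forms \(\mathbf{H}\) with \(\mathbf{A}^{\prime } = j\varvec{\varGamma }\mathbf{A}\) as in (68), and confirms that \(tr[\mathbf{H}\hat{\mathbf{R}}_{\tilde{\mathbf{s}}}]\) is real and positive so that \(CRB_{c}\) in (53) is well defined. The matrix sizes and the random sample covariance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, K, sigma2 = 8, 3, 100, 0.1

# Random A with orthonormal columns (A^H A = I_P), standing in for E_psi W.
A, _ = np.linalg.qr(rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P)))
Gamma = np.diag(np.arange(N))             # Gamma = diag(0, 1, ..., N-1)
Ad = 1j * Gamma @ A                       # A' = j Gamma A
H = Ad.conj().T @ (np.eye(N) - A @ A.conj().T) @ Ad

# An illustrative positive-definite sample covariance R_hat.
M = rng.standard_normal((P, P)) + 1j * rng.standard_normal((P, P))
R_hat = M @ M.conj().T + np.eye(P)

t = np.trace(H @ R_hat)
assert abs(t.imag) < 1e-9 and t.real > 0  # Lemma 1: the trace is real and positive
crb_c = sigma2 / (2 * K * t.real)         # conditional CRB, Eq. (53)
print(crb_c)
```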

Proof of Lemma 3

Under the unconditional model, the Fisher information matrix (FIM) has the same form as (4) in [14] where the covariance matrix of the data should be replaced by \(\mathbf{R}\) as defined in (19). Since all the columns of \(\mathbf{A}\) depend on \(\psi \), the first-order partial derivative of the covariance matrix assumes the following expression (as opposed to (16) in [14])

$$\begin{aligned} \frac{\partial \mathbf{R}}{\partial \psi } = \mathbf{A}^{^{\prime }} \mathbf{R}_{\tilde{s}} \mathbf{A}^{H} + \mathbf{A} \mathbf{R}_{\tilde{s}} (\mathbf{A}^{^{\prime }})^{H}. \end{aligned}$$
(54)

Define the whitened first-order derivative of \(vec( \mathbf{R})\) with respect to \(\psi \) as

$$\begin{aligned} \mathbf{g}=\left[ ((\mathbf{R}^{-1})^{1/2})^{T} \otimes (\mathbf{R}^{-1})^{1/2} \right] vec\left( \frac{\partial \mathbf{R} }{\partial \psi } \right) =vec ( \mathbf{Z} + \mathbf{Z}^{H} ) \end{aligned}$$
(55)

where

$$\begin{aligned} \mathbf{Z}=(\mathbf{R}^{-1})^{1/2} \mathbf{A} \mathbf{R}_{\tilde{s}} (\mathbf{A}^{^{\prime }})^{H} (\mathbf{R}^{-1})^{1/2}. \end{aligned}$$
(56)

In a similar way to (13) of [14], one obtains the Fisher information scalar

$$\begin{aligned} F_{u} = K \mathbf{g}^{H} \mathbf{P}_{\varvec{\varDelta }}^{\perp } \mathbf{g} \end{aligned}$$
(57)

where \(\varvec{\varDelta }=[ \frac{\partial vec(\mathbf{R}) }{\partial \varvec{\rho }}, \frac{\partial vec(\mathbf{R}) }{\partial \sigma ^{2}} ]\) and \(\varvec{\rho }\) is a column vector containing free (real) parameters in the conjugate symmetric matrix \(\mathbf{R}_{\tilde{s}}\). Note that \(\varvec{\varDelta }\) has the same structure as in [14]. Thus (29) of [14] (without subscripts \(k,p\) for \(\mathbf{Z}\)) is also valid for CFO estimation and

$$\begin{aligned} F_{u} =2 K \mathfrak R \left\{ tr \left( \mathbf{P}_{ (\mathbf{R}^{-1})^{1/2} \mathbf{A} }^{\perp } \mathbf{Z}^{H} \mathbf{P}_{ (\mathbf{R}^{-1})^{1/2} \mathbf{A} }^{\perp } \mathbf{Z}^{H} \right) + tr \left( \mathbf{Z} \mathbf{P}_{ (\mathbf{R}^{-1})^{1/2} \mathbf{A} }^{\perp } \mathbf{Z}^{H} \right) \right\} . \end{aligned}$$
(58)

From \(\mathbf{Z}^{H} \mathbf{P}_{ (\mathbf{R}^{-1})^{1/2} \mathbf{A} }^{\perp }=\mathbf{0}\),

$$\begin{aligned} F_{u} = 2 K \mathfrak R \left\{ tr \left( \mathbf{Z} \mathbf{P}_{ (\mathbf{R}^{-1})^{1/2} \mathbf{A} }^{\perp } \mathbf{Z}^{H} \right) \right\} . \end{aligned}$$
(59)

Using (23), one can write \(\mathbf{Z}^{H}=(\mathbf{R}^{-1})^{1/2} \mathbf{A}^{^{\prime }} \mathbf{R}_{\tilde{s}} \mathbf{A}^{H} (\mathbf{R}^{-1})^{1/2}\). Substituting \(\mathbf{Z}\) and \(\mathbf{Z}^{H}\) into (59) gives

$$\begin{aligned} F_{u} = 2 K \mathfrak R \left\{ tr \left( (\mathbf{A}^{^{\prime }})^{H} (\mathbf{R}^{-1})^{1/2} \mathbf{P}_{ (\mathbf{R}^{-1})^{1/2} \mathbf{A} }^{\perp } (\mathbf{R}^{-1})^{1/2} \mathbf{A}^{^{\prime }} \cdot \mathbf{R}_{\tilde{s}} \mathbf{A}^{H} \mathbf{R}^{-1} \mathbf{A} \mathbf{R}_{\tilde{s}} \right) \right\} . \end{aligned}$$
(60)

Using (31) in [14],

$$\begin{aligned} (\mathbf{R}^{-1})^{1/2} \mathbf{P}_{ (\mathbf{R}^{-1})^{1/2} \mathbf{A} }^{\perp } (\mathbf{R}^{-1})^{1/2} =\mathbf{P}_{ \mathbf{A} }^{\perp } \mathbf{R}^{-1}. \end{aligned}$$
(61)

Making use of \(\mathbf{P}_\mathbf{A}^{\perp } \mathbf{R}^{-1} = (1/\sigma ^{2}) \mathbf{P}_\mathbf{A}^{\perp }\) (after (31) in [14]) and (9),

$$\begin{aligned} \mathbf{P}_{ \mathbf{A} }^{\perp } \mathbf{R}^{-1}= (1/\sigma ^{2}) \mathbf{P}_{ \mathbf{A} }^{\perp } = (1/\sigma ^{2}) (\mathbf{I}-\mathbf{A} \mathbf{A}^{H}). \end{aligned}$$
(62)

Using (61) and (62) in (60) gives that

$$\begin{aligned} F_{u} = (2K /\sigma ^{2}) \mathfrak R \left\{ tr \left( \mathbf{H} \cdot \mathbf{R}_{\tilde{s}} \mathbf{A}^{H} \mathbf{R}^{-1} \mathbf{A} \mathbf{R}_{\tilde{s}} \right) \right\} . \end{aligned}$$
(63)

Write the eigen-decomposition of \(\mathbf R\) as \(\mathbf{R}=\mathbf{U}_{s} \varvec{\varLambda }_{s} \mathbf{U}_{s}^{H} + \sigma ^{2} \mathbf{U}_{n} \mathbf{U}_{n}^{H}\), where \(\varvec{\varLambda }_{s}\) is a diagonal matrix containing the \(P\) largest eigenvalues of \(\mathbf{R}\), \(\sigma ^{2}\) is the value of the remaining (identical) \(N-P\) eigenvalues, and \(\mathbf{U}_{s}\) and \(\mathbf{U}_{n}\) contain the corresponding eigenvectors. The dimensions of \(\mathbf{U}_{s}\) and \(\mathbf{U}_{n}\) are \(N \times P\) and \(N \times (N-P)\), respectively. The eigenvectors are chosen such that \([\mathbf{U}_{s},\mathbf{U}_{n}]\) is a unitary matrix. Then \(\mathbf{R}^{-1}=\mathbf{U}_{s} \varvec{\varLambda }_{s}^{-1} \mathbf{U}_{s}^{H} + \sigma ^{-2} \mathbf{U}_{n} \mathbf{U}_{n}^{H}\). Note that \(\mathbf{A}^{H} \mathbf{U}_{s}\) is a \(P \times P\) nonsingular matrix and \(\mathbf{A}^{H} \mathbf{U}_{n}=\mathbf{0}\). Thus \(\mathbf{A}^{H} \mathbf{R}^{-1} \mathbf{A} = \mathbf{A}^{H} \mathbf{U}_{s} \cdot \varvec{\varLambda }_{s}^{-1} \cdot \mathbf{U}_{s}^{H} \mathbf{A}\), where all three factors have full rank \(P\). It is now easy to see that \(\mathbf{R}_{\tilde{s}} \mathbf{A}^{H} \mathbf{R}^{-1} \mathbf{A} \mathbf{R}_{\tilde{s}}\) has full rank as well, so the trace value in (63) is positive according to Lemma 1. The symbol \(\mathfrak R \) therein can therefore be dropped, and the lemma follows from \(CRB_{u}=F_{u}^{-1}\).
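Lemma 3's conclusion can also be checked numerically. With \(\mathbf{A}^{H}\mathbf{A}=\mathbf{I}_{P}\) and \(\mathbf{R}=\mathbf{A}\mathbf{R}_{\tilde{s}}\mathbf{A}^{H}+\sigma ^{2}\mathbf{I}\), one has \(\mathbf{A}^{H}\mathbf{R}^{-1}\mathbf{A}=(\mathbf{R}_{\tilde{s}}+\sigma ^{2}\mathbf{I})^{-1}\), so the middle factor \(\mathbf{R}_{\tilde{s}}(\mathbf{R}_{\tilde{s}}+\sigma ^{2}\mathbf{I})^{-1}\mathbf{R}_{\tilde{s}}\) never exceeds \(\mathbf{R}_{\tilde{s}}\), which suggests \(F_{u} \le F_{c}\) and hence \(CRB_{u} \ge CRB_{c}\). The sketch below (illustrative sizes, random \(\mathbf{A}\) and \(\mathbf{R}_{\tilde{s}}\)) confirms this ordering.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, K, sigma2 = 8, 3, 100, 0.1

A, _ = np.linalg.qr(rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P)))
Gamma = np.diag(np.arange(N))
Ad = 1j * Gamma @ A
H = Ad.conj().T @ (np.eye(N) - A @ A.conj().T) @ Ad

M = rng.standard_normal((P, P)) + 1j * rng.standard_normal((P, P))
Rs = M @ M.conj().T + np.eye(P)               # positive-definite R_s (illustrative)
R = A @ Rs @ A.conj().T + sigma2 * np.eye(N)  # data covariance, Eq. (19)

# Conditional Fisher scalar from (49) (with R_hat ~ R_s), unconditional from (63).
Fc = (2 * K / sigma2) * np.trace(H @ Rs).real
Fu = (2 * K / sigma2) * np.trace(H @ Rs @ A.conj().T @ np.linalg.inv(R) @ A @ Rs).real

assert 0 < Fu < Fc    # hence CRB_u = 1/Fu > CRB_c = 1/Fc
print(1 / Fu, 1 / Fc)
```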

Appendix 2: Analysis of MUSIC

1.1 The First-order and Second-order Derivatives

Since \(\tilde{\mathbf{W}} \tilde{\mathbf{W}}^{H}=\mathbf{I}_{N} - \mathbf{W} \mathbf{W}^{H}\) and \(\mathbf{E}_{\theta } \mathbf{E}_{\theta }^{*}=\mathbf{I}_{N}\), one can rewrite the MUSIC function as

$$\begin{aligned} f_{mu}(\theta ) = \frac{1}{K} \sum _{k=1}^{K} \hat{\mathbf{y}}(k)^{H} (\mathbf{I}_{N} - \mathbf{E}_{\theta } \mathbf{W} \mathbf{W}^{H} \mathbf{E}_{\theta }^{*}) \hat{\mathbf{y}}(k). \end{aligned}$$
(64)

The first-order derivative of the MUSIC function (34) is given by

$$\begin{aligned} f_{mu}^{^{\prime }}(\psi )= - \frac{1}{K} \sum _{k=1}^{K} \left\{ \hat{\mathbf{y}}(k)^{H} \mathbf{E}^{^{\prime }} \mathbf{W} \mathbf{W}^{H} \mathbf{E}^{*} \hat{\mathbf{y}}(k) + \hat{\mathbf{y}}(k)^{H} \mathbf{E} \mathbf{W} \mathbf{W}^{H} \mathbf{E}^{^{\prime }*} \hat{\mathbf{y}}(k) \right\} \end{aligned}$$
(65)

Let

$$\begin{aligned} \varvec{\varGamma } =diag[0,1,2,\cdots ,N-1]. \end{aligned}$$
(66)

One can write

$$\begin{aligned} \mathbf{E}^{^{\prime }}&= j \varvec{\varGamma } \mathbf{E} \end{aligned}$$
(67)
$$\begin{aligned} \mathbf{A}^{^{\prime }}&= \mathbf{E}^{^{\prime }} \mathbf{W}= j \varvec{\varGamma } \mathbf{E} \mathbf{W} = j \varvec{\varGamma } \mathbf{A}. \end{aligned}$$
(68)

Define \(\dot{\mathbf{A}} = [ j\varvec{\varGamma } \mathbf{A},\mathbf{A}]\) and

$$\begin{aligned} \mathbf{J}= \left[\begin{array}{cc} \mathbf{0}&\mathbf{I}_{P} \\ \mathbf{I}_{P}&\mathbf{0}\\ \end{array} \right]_{2P \times 2P}. \end{aligned}$$
(69)

The first-order derivative can be written in the following form:

$$\begin{aligned} f_{mu}^{^{\prime }}(\psi )&= - \frac{1}{K} \sum _{k=1}^{K} \hat{\mathbf{y}}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \hat{\mathbf{y}}(k) = - \frac{1}{K} \sum _{k=1}^{K} \left\{ \mathbf{y}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{y}(k) \right. \nonumber \\&\left. +\mathbf{y}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) +\mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{y}(k) +\mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \right\} . \end{aligned}$$
(70)

Note that

$$\begin{aligned} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} = j \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} - j \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma } \end{aligned}$$
(71)

and using (9)

$$\begin{aligned} \mathbf{y}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{y}(k)&= \tilde{\mathbf{s}}(k)^{H} \mathbf{A}^{H} ( j \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} - j \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma } ) \mathbf{A} \tilde{\mathbf{s}}(k) \nonumber \\&= j\tilde{\mathbf{s}}(k)^{H} ( \mathbf{A}^{H} \varvec{\varGamma } \mathbf{A} -\mathbf{A}^{H} \varvec{\varGamma } \mathbf{A} ) \tilde{\mathbf{s}}(k) =0. \end{aligned}$$
(72)

Then one has

$$\begin{aligned} f_{mu}^{^{\prime }}(\psi ) = - \frac{1}{K} \sum _{k=1}^{K} \left\{ \mathbf{y}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) +\mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{y}(k) +\mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \right\} . \end{aligned}$$
(73)

The second-order derivative is given by

$$\begin{aligned} f^{^{\prime \prime }}_{mu}(\psi )&= - \frac{2}{K} \mathfrak R \left\{ \sum _{k=1}^{K} \hat{\mathbf{y}}(k)^{H} (\mathbf{A}{^{\prime \prime }} \mathbf{A}^{H}+ \mathbf{A}^{^{\prime }} (\mathbf{A}^{^{\prime }})^{H} ) \hat{\mathbf{y}}(k) \right\} \nonumber \\&= - 2 \mathfrak R \left\{ tr [ (\mathbf{A}{^{\prime \prime }} \mathbf{A}^{H}+ \mathbf{A}^{^{\prime }} (\mathbf{A}^{^{\prime }})^{H} ) \left( \sum _{k=1}^{K} \hat{\mathbf{y}}(k) \hat{\mathbf{y}}(k)^{H}/K\right)] \right\} \end{aligned}$$
(74)

and its limiting version by

$$\begin{aligned} f^{^{\prime \prime }}_{mu,\infty }(\psi )&= -2 \mathfrak R \left\{ tr [ (\mathbf{A}{^{\prime \prime }} \mathbf{A}^{H}+ \mathbf{A}^{^{\prime }} (\mathbf{A}^{^{\prime }})^{H} ) ( \mathbf{A} \mathbf{R}_{\tilde{\mathbf{s}}} \mathbf{A}^{H} ) ] \right\} \nonumber \\&- 2 \sigma ^{2} \mathfrak R \{ tr [ \mathbf{A}{^{\prime \prime }} \mathbf{A}^{H}+ \mathbf{A}^{^{\prime }} (\mathbf{A}^{^{\prime }})^{H} ]\} \end{aligned}$$
(75)

where \(\sum _{k=1}^{K} \hat{\mathbf{y}}(k) \hat{\mathbf{y}}(k)^{H}/K\) is approximated by its asymptotic matrix as given in (19). Note that \(\mathbf{A}{^{\prime \prime }}= -\varvec{\varGamma }^{2} \mathbf{A}\) and \(\mathbf{A}{^{\prime \prime }} \mathbf{A}^{H}+ \mathbf{A}^{^{\prime }} (\mathbf{A}^{^{\prime }})^{H} =-\varvec{\varGamma }^{2} \mathbf{A} \mathbf{A}^{H} + \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma }\). Thus

$$\begin{aligned}&tr [ \mathbf{A}{^{\prime \prime }} \mathbf{A}^{H}+ \mathbf{A}^{^{\prime }} (\mathbf{A}^{^{\prime }})^{H} ] =0\end{aligned}$$
(76)
$$\begin{aligned}&tr [ (\mathbf{A}{^{\prime \prime }} \mathbf{A}^{H}+ \mathbf{A}^{^{\prime }} (\mathbf{A}^{^{\prime }})^{H} ) ( \mathbf{A} \mathbf{R}_{\tilde{\mathbf{s}}} \mathbf{A}^{H} ) ] \nonumber \\&=- tr [ (\mathbf{A}^{H} \varvec{\varGamma }\varvec{\varGamma } \mathbf{A} - \mathbf{A}^{H} \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma } \mathbf{A}) \mathbf{R}_{\tilde{\mathbf{s}}} ] =- tr [ \mathbf{H} \mathbf{R}_{\tilde{\mathbf{s}}} ] \end{aligned}$$
(77)

and thus

$$\begin{aligned}&f^{^{\prime \prime }}_{mu,\infty }(\psi ) = 2 \mathfrak R \{ tr [ \mathbf{H} \mathbf{R}_{\tilde{\mathbf{s}}} ] \} = 2 tr [ \mathbf{H} \mathbf{R}_{\tilde{\mathbf{s}}} ] \end{aligned}$$
(78)

where the symbol \(\mathfrak R \) is dropped because the trace value is real (in fact, positive) according to Lemma 1.
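The identities (76)–(78) are purely linear-algebraic and can be verified directly. The sketch below uses a random \(\mathbf{A}\) with orthonormal columns and a random positive-definite \(\mathbf{R}_{\tilde{\mathbf{s}}}\) as illustrative inputs, and checks that the pure-noise term vanishes while the limiting second derivative equals \(2\, tr[\mathbf{H}\mathbf{R}_{\tilde{\mathbf{s}}}]\).

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 8, 3
A, _ = np.linalg.qr(rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P)))
Gamma = np.diag(np.arange(N))
Ad = 1j * Gamma @ A                    # A'  = j Gamma A
Add = -Gamma @ Gamma @ A               # A'' = -Gamma^2 A
H = Ad.conj().T @ (np.eye(N) - A @ A.conj().T) @ Ad

M = rng.standard_normal((P, P)) + 1j * rng.standard_normal((P, P))
Rs = M @ M.conj().T + np.eye(P)

S = Add @ A.conj().T + Ad @ Ad.conj().T         # A''A^H + A'(A')^H
assert abs(np.trace(S)) < 1e-9                  # (76): the noise term vanishes
lhs = np.trace(S @ (A @ Rs @ A.conj().T)).real  # signal part of (75)
assert np.isclose(lhs, -np.trace(H @ Rs).real)  # (77)
f2 = -2 * lhs                                   # (78): f''_inf = 2 tr[H R_s]
assert f2 > 0
print(f2)
```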

1.2 The Mean and Variance of the First-order Derivative

The expectations of the first two terms in (73) are zero because the noise variables have zero means. The third term is

$$\begin{aligned} E \{ \mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \} = \sigma ^{2} tr [ \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} ] = \sigma ^{2} tr [ \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} - \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma } ] = 0. \end{aligned}$$

Thus

$$\begin{aligned} E \{ f_{mu}^{^{\prime }}(\psi ) \} =0. \end{aligned}$$
(79)

It is well known that the third-order moments of zero-mean Gaussian random variables are zero. Using this property and (11), one obtains

$$\begin{aligned} E (f_{mu}^{^{\prime }}(\psi ) )^{2}&= \frac{2}{K^{2} } \sum _{k=1}^{K} E \left\{ \mathbf{y}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{y}(k) \right\} \nonumber \\&+ \frac{1}{K^{2} } \sum _{k=1}^{K} E \left\{ \mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \right\} \nonumber \\&= \frac{2 \sigma ^{2}}{K } tr [ \mathbf{A}^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{A} \cdot \hat{\mathbf{R}}_{\tilde{\mathbf{s}}} ] \end{aligned}$$
(80)
$$\begin{aligned}&+ \frac{1}{K} E \left\{ \mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \right\} . \end{aligned}$$
(81)

From (71), \(\mathbf{A}^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} = j ( \mathbf{A}^{H} \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} - \mathbf{A}^{H} \varvec{\varGamma }) = - j \mathbf{A}^{H} \varvec{\varGamma } ( \mathbf{I}_{N} - \mathbf{A} \mathbf{A}^{H} )\), so that \( \mathbf{A}^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{A} = \mathbf{A}^{H} \varvec{\varGamma } ( \mathbf{I}_{N} - \mathbf{A} \mathbf{A}^{H} ) \varvec{\varGamma } \mathbf{A} \), and thus the term (80) is equal to \(\frac{2 \sigma ^{2}}{K} tr [ \mathbf{H} \hat{\mathbf{R}}_{\tilde{\mathbf{s}}} ]\).

An expression for the term (81) can be developed based on formula (2.4) in [8] for the product of four matrix-valued Gaussian random variables. Since (81) involves only vector-valued Gaussian random variables, a simplified expression is given in the following lemma.

Lemma B1

Let \(\mathbf{z}_{i}\), \(i=1,2,3,4\), be column vectors of complex Gaussian random variables, with the properties that \(E\{ \mathbf{z}_{i} \}= \mathbf{0}\) for each \(i \in \{1,2,3,4\}\), \(E\{ \mathbf{z}_{3} \otimes \mathbf{z}_{1} \}= \mathbf{0}\) and \(E\{ \mathbf{z}_{4} \otimes \mathbf{z}_{2} \}= \mathbf{0}\). Then

$$\begin{aligned} E \{ \mathbf{z}_{1}^{H} \mathbf{z}_{2} \mathbf{z}_{3}^{H} \mathbf{z}_{4}\} = E \{ \mathbf{z}_{1}^{H} \mathbf{z}_{2} \} E \{ \mathbf{z}_{3}^{H} \mathbf{z}_{4}\} + tr [ E \{ \mathbf{z}_{2} \mathbf{z}_{3}^{H} \} E \{ \mathbf{z}_{4} \mathbf{z}_{1}^{H} \} ]. \end{aligned}$$
(82)

For the term (81), \(\mathbf{z}_{1}^{H}=\mathbf{n}(k)^{H} \dot{\mathbf{A}}\), \(\mathbf{z}_{2}=\mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k)\), \(\mathbf{z}_{3}^{H}=\mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J}\) and \(\mathbf{z}_{4}=\dot{\mathbf{A}}^{H} \mathbf{n}(k)\). It is easy to verify that the four vectors satisfy the three properties required in the above lemma. Thus, using Lemma B1, that term is equal to

$$\begin{aligned}&E \{ \mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \} E \{ \mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \} \nonumber \\&+ tr [ E \{ \mathbf{J} \dot{\mathbf{A}}^{H} \mathbf{n}(k) \mathbf{n}(k)^{H} \dot{\mathbf{A}} \mathbf{J} \} E \{ \dot{\mathbf{A}}^{H} \mathbf{n}(k) \mathbf{n}(k)^{H} \dot{\mathbf{A}} \} ] \nonumber \\&= \sigma ^{4} ( tr [ \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} ] )^{2} + \sigma ^{4} tr [ \mathbf{J} \dot{\mathbf{A}}^{H} \dot{\mathbf{A}} \mathbf{J} \cdot \dot{\mathbf{A}}^{H} \dot{\mathbf{A}} ] \nonumber \\&= \sigma ^{4} ( tr [ \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} ] )^{2} + \sigma ^{4} tr [ \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \cdot \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} ]. \end{aligned}$$
(83)

It was shown in the proof of (79) that \(tr [ \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} ]=0\). Using (71) again,

$$\begin{aligned}&tr [ \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} \cdot \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} ]\nonumber \\&= - tr [ \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} - \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma } - \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma } \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} + \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma } ] \nonumber \\&= 2 tr [ \mathbf{A}^{H} \varvec{\varGamma }\varvec{\varGamma } \mathbf{A} - \mathbf{A}^{H} \varvec{\varGamma } \mathbf{A} \mathbf{A}^{H} \varvec{\varGamma } \mathbf{A}] = 2 tr [ \mathbf{H} ]. \end{aligned}$$
(84)

Hence the term (81) is equal to \(\frac{2 \sigma ^{4}}{K} tr [ \mathbf{H} ]\). Therefore

$$\begin{aligned} E (f_{mu}^{^{\prime }}(\psi ) )^{2}= \frac{2 \sigma ^{2}}{K } tr [ \mathbf{H} ( \hat{\mathbf{R}}_{\tilde{\mathbf{s}}} + \sigma ^{2} \mathbf{I}_{P} ) ]. \end{aligned}$$
(85)
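The trace facts underpinning the mean and variance above, namely (72), \(tr [ \dot{\mathbf{A}} \mathbf{J} \dot{\mathbf{A}}^{H} ]=0\) and (84), are deterministic and easy to confirm numerically. The sketch below uses a random \(\mathbf{A}\) with orthonormal columns and a random symbol vector as illustrative inputs.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 8, 3
A, _ = np.linalg.qr(rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P)))
Gamma = np.diag(np.arange(N))
Ad = 1j * Gamma @ A
H = Ad.conj().T @ (np.eye(N) - A @ A.conj().T) @ Ad

# D = Adot J Adot^H = j Gamma A A^H - j A A^H Gamma, as in (71).
D = 1j * Gamma @ A @ A.conj().T - 1j * A @ A.conj().T @ Gamma

s = rng.standard_normal(P) + 1j * rng.standard_normal(P)
y = A @ s
assert abs(y.conj() @ D @ y) < 1e-9   # (72): the signal-only term vanishes
assert abs(np.trace(D)) < 1e-9        # so the mean of the noise term is zero
assert np.isclose(np.trace(D @ D).real, 2 * np.trace(H).real)  # (84)
print(np.trace(H).real)
```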

Cite this article

Cheng, F., Cheng, Q. The Large Sample Performance of a Maximum Likelihood Method for OFDM Carrier Frequency Offset Estimation. Wireless Pers Commun 72, 227–244 (2013). https://doi.org/10.1007/s11277-013-1010-6
