
A unified analytical framework for distributed variable step size LMS algorithms in sensor networks


Abstract

The Internet of Things (IoT) is helping to create a smart world by connecting sensors in a seamless fashion. With the forthcoming fifth-generation (5G) wireless communication systems, the IoT is becoming increasingly important, as 5G will be a key enabler for it. IoT sensor networks are being deployed in increasingly diverse areas, e.g., situational and location awareness, leading to a proliferation of sensors at the edge of the physical world. Several variable step-size strategies exist in the literature to improve the performance of the diffusion-based least mean square (LMS) algorithm for estimation in wireless sensor networks. A major drawback, however, is the complexity of the theoretical analysis of the resulting algorithms, which forces researchers to adopt several assumptions in order to obtain closed-form solutions. This work presents a unified analytical framework for distributed variable step-size LMS algorithms. The analysis is then extended to diffusion-based wireless sensor networks estimating a compressible system, and a steady-state analysis is carried out. The approach is applied to several variable step-size strategies for compressible systems. Theoretical and simulation results are presented and compared with existing algorithms to demonstrate the superiority of the proposed work.


References

  1. Atzori, L., Iera, A., & Morabito, G. (2010). The Internet of Things: A survey. Computer Networks, 54(15), 2787–2805.


  2. Stankovic, J. (2014). Research directions for the Internet of Things. IEEE Internet of Things Journal, 1(1), 3–9.


  3. Ejaz, W., & Ibnkahla, M. (2015). Machine-to-machine communications in cognitive cellular systems. In Proceedings of 15th IEEE international conference on ubiquitous wireless broadband (ICUWB), Montreal, Canada (pp. 1–5).

  4. Palattella, M. R., Dohler, M., Grieco, A., Rizzo, G., Torsner, J., Engel, T., et al. (2016). Internet of Things in the 5G era: Enablers, architecture, and business models. IEEE Journal on Selected Areas in Communications, 34(3), 510–527.


  5. ul Hasan, N., Ejaz, W., Baig, I., Zghaibeh, M., & Anpalagan, A. (2016). QoS-aware channel assignment for IoT-enabled smart building in 5G systems. In Proceedings of 8th IEEE international conference on ubiquitous and future networks (ICUFN), Vienna, Austria (pp. 924–928).

  6. Ejaz, W., Naeem, M., Basharat, M., Anpalagan, A., & Kandeepan, S. (2016). Efficient wireless power transfer in software-defined wireless sensor networks. IEEE Sensors Journal, 16(20), 7409–7420.


  7. Culler, D., Estrin, D., & Srivastava, M. (2004). Overview of sensor networks. IEEE Computer, 37(8), 41–49.


  8. Olfati-Saber, R., Fax, J. A., & Murray, R. M. (2007). Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1), 215–233.


  9. Lopes, C. G., & Sayed, A. H. (2007). Incremental adaptive strategies over distributed networks. IEEE Transactions on Signal Processing, 55(8), 4064–4077.


  10. Lopes, C. G., & Sayed, A. H. (2008). Diffusion least-mean squares over adaptive networks: Formulation and performance analysis. IEEE Transactions on Signal Processing, 56(7), 3122–3136.


  11. Schizas, I. D., Mateos, G., & Giannakis, G. B. (2009). Distributed LMS for consensus-based in-network adaptive processing. IEEE Transactions on Signal Processing, 57(6), 2365–2382.


  12. Cattivelli, F., & Sayed, A. H. (2010). Diffusion LMS strategies for distributed estimation. IEEE Transactions on Signal Processing, 58(3), 1035–1048.


  13. Bin Saeed, M. O., Zerguine, A., & Zummo, S. A. (2010). Variable step-size least mean square algorithms over adaptive networks. In Proceedings of 10th international conference on information sciences signal processing and their applications (ISSPA), Kuala Lumpur, Malaysia (pp. 381–384).

  14. Bin Saeed, M. O., Zerguine, A., & Zummo, S. A. (2013). A variable step-size strategy for distributed estimation over adaptive networks. EURASIP Journal on Advances in Signal Processing, 2013(2013), 135.


  15. Almohammedi, A., & Deriche, M. (2015). Variable step-size transform domain ILMS and DLMS algorithms with system identification over adaptive networks. In Proceedings of IEEE Jordan conference on applied electrical engineering and computing technologies (AEECT), Amman, Jordan (pp. 1–6).

  16. Bin Saeed, M. O., Zerguine, A., Sohail, M. S., Rehman, S., Ejaz, W., & Anpalagan, A. (2016). A variable step-size strategy for distributed estimation of compressible systems in wireless sensor networks. In Proceedings of IEEE CAMAD, Toronto, Canada (pp. 1–5).

  17. Bin Saeed, M. O., & Zerguine, A. (2011). A new variable step-size strategy for adaptive networks. In Proceedings of the forty-fifth Asilomar conference on signals, systems and computers (ASILOMAR), Pacific Grove, CA (pp. 312–315).

  18. Bin Saeed, M. O., Zerguine, A., & Zummo, S. A. (2013). A noise-constrained algorithm for estimation over distributed networks. International Journal of Adaptive Control and Signal Processing, 27(10), 827–845.


  19. Ghazanfari-Rad, S., & Labeau, F. (2014). Optimal variable step-size diffusion LMS algorithms. In Proceedings of IEEE workshop on statistical signal processing (SSP), Gold Coast, VIC (pp. 464–467).

  20. Jung, S. M., Seo, J.-H., & Park, P. G. (2015). A variable step-size diffusion normalized least-mean-square algorithm with a combination method based on mean-square deviation. Circuits, Systems, and Signal Processing, 34(10), 3291–3304.


  21. Lee, H. S., Kim, S. E., Lee, J. W., & Song, W. J. (2015). A variable step-size diffusion LMS algorithm for distributed estimation. IEEE Transactions on Signal Processing, 63(7), 1808–1820.


  22. Sayed, A. H. (2014). Adaptive networks. Proceedings of the IEEE, 102(4), 460–497.


  23. Sayed, A. H. (2003). Fundamentals of adaptive filtering. New York: Wiley.


  24. Nagumo, J., & Noda, A. (1967). A learning method for system identification. IEEE Transactions on Automatic Control, 12(3), 282–287.


  25. Kwong, R. H., & Johnston, E. W. (1992). A variable step-size LMS algorithm. IEEE Transactions on Signal Processing, 40(7), 1633–1642.


  26. Aboulnasr, T., & Mayyas, K. (1997). A robust variable step size LMS-type algorithm: Analysis and simulations. IEEE Transactions on Signal Processing, 45(3), 631–639.


  27. Costa, M. H., & Bermudez, J. C. M. (2008). A noise resilient variable step-size LMS algorithm. Signal Processing, 88(3), 733–748.


  28. Wei, Y., Gelfand, S. B., & Krogmeier, J. V. (2001). Noise-constrained least mean squares algorithm. IEEE Transactions on Signal Processing, 49(9), 1961–1970.


  29. Sulyman, A. I., & Zerguine, A. (2003). Convergence and steady-state analysis of a variable step-size NLMS algorithm. Signal Processing, 83(6), 1255–1273.


  30. Bin Saeed, M. O., & Zerguine, A. (2013). A variable step-size strategy for sparse system identification. In Proceedings of 10th international multi-conference on systems, signals & devices (SSD), Hammamet (pp. 1–4).

  31. Al-Naffouri, T. Y., & Moinuddin, M. (2010). Exact performance analysis of the \(\epsilon \)-NLMS algorithm for colored circular Gaussian inputs. IEEE Transactions on Signal Processing, 58(10), 5080–5090.


  32. Al-Naffouri, T. Y., Moinuddin, M., & Sohail, M. S. (2011). Mean weight behavior of the NLMS algorithm for correlated Gaussian inputs. IEEE Signal Processing Letters, 18(1), 7–10.


  33. Bin Saeed, M. O. (2017). LMS-based variable step-size algorithms: A unified analysis approach. Arabian Journal for Science and Engineering, 42(7), 2809–2816.


  34. Koning, R. H., Neudecker, H., & Wansbeek, T. (1990). Block Kronecker products and the vecb operator. Economics Department, Institute of Economics Research, University of Groningen, Groningen, The Netherlands, Research Memo. No. 351.


Author information

Correspondence to Houbing Song.

Appendix A

Here, we present the detailed mean-square analysis. Applying the expectation operator to the weighting matrix of (14) gives

$$\begin{aligned} {\mathbb {E}}\left[ {{\hat{\varvec{\Sigma }}}} \right]= & {} {\mathbf{G}}^T {\varvec{\Sigma }} {\mathbf{G}} - {\mathbf{G}}^T {\varvec{\Sigma }} {\mathbb {E}}\left[ {\mathbf{Y}}(i) {\mathbf{U}}(i) \right] \nonumber \\&-\,{\mathbb {E}}\left[ {\mathbf{U}}^T(i) {\mathbf{Y}}^T(i) \right] {\varvec{\Sigma }} {\mathbf{G}}\nonumber \\&+\,{\mathbb {E}}\left[ {\mathbf{U}}^T(i) {\mathbf{Y}}^T(i) {\varvec{\Sigma }} {\mathbf{Y}}(i) {\mathbf{U}}(i) \right] \nonumber \\= & {} {\mathbf{G}}^T {\varvec{\Sigma }} {\mathbf{G}} -\,{\mathbf{G}}^T {\varvec{\Sigma }} {\mathbf{G}}{\mathbb {E}}\left[ {{\mathbf{D}}(i) } \right] {\mathbb {E}}\left[ {{\mathbf{U}}^T(i) {\mathbf{U}}(i) } \right] \nonumber \\&-\,{\mathbb {E}}\left[ {{\mathbf{U}}^T(i) {\mathbf{U}}(i) } \right] {\mathbb {E}}\left[ {{\mathbf{D}}(i) } \right] {\mathbf{G}}^T {\varvec{\Sigma }} {\mathbf{G}} \nonumber \\&+\,{\mathbb {E}}\left[ {{\mathbf{U}}^T(i) {\mathbf{Y}}^T(i) {\varvec{\Sigma }} {\mathbf{Y}}(i) {\mathbf{U}}(i) } \right] . \end{aligned}$$
(24)

For ease of notation, we denote \({\mathbb {E}}\left[ {{\hat{\varvec{\Sigma }}}} \right] = {\varvec{\Sigma }} '\) for the remaining analysis.

Next, using the Gaussian-transformed variables given in Sect. 3.2, (14) and (24) are rewritten, respectively, as

$$\begin{aligned} {\mathbb {E}}\left[ {\left\| {{{\bar{\mathbf{w}}}}\left( i+1\right) } \right\| _{\bar{\varvec{\Sigma }} }^2 } \right]= & {} {\mathbb {E}}\left[ {\left\| {{{\bar{\mathbf{w}}}}(i) } \right\| _{\bar{\varvec{\Sigma }} '}^2 } \right] \nonumber \\&+\, {\mathbb {E}}\left[ {{\mathbf{v}}^T(i) {{\bar{\mathbf{Y}}}}^T(i) \bar{\varvec{\Sigma }} {{\bar{\mathbf{Y}}}}(i) {\mathbf{v}}(i) } \right] , \end{aligned}$$
(25)

and

$$\begin{aligned} \bar{\varvec{\Sigma }} '= & {} {{\bar{\mathbf{G}}}}^T \bar{\varvec{\Sigma }} {{\bar{\mathbf{G}}}} - {{\bar{\mathbf{G}}}}^T \bar{\varvec{\Sigma }} {{\bar{\mathbf{G}}}}{\mathbb {E}}\left[ {{\mathbf{D}}(i) } \right] {\mathbb {E}}\left[ {{{\bar{\mathbf{U}}}}^T(i) {{\bar{\mathbf{U}}}}(i) } \right] \nonumber \\&-\, {\mathbb {E}}\left[ {{{\bar{\mathbf{U}}}}^T(i) {{\bar{\mathbf{U}}}}(i) } \right] {\mathbb {E}}\left[ {{\mathbf{D}}(i) } \right] {{\bar{\mathbf{G}}}}^T \bar{\varvec{\Sigma }} {{\bar{\mathbf{G}}}} \nonumber \\&+\, {\mathbb {E}}\left[ {{{\bar{\mathbf{U}}}}^T(i) {{\bar{\mathbf{Y}}}}^T(i) \bar{\varvec{\Sigma }} {{\bar{\mathbf{Y}}}}(i) {{\bar{\mathbf{U}}}}(i) } \right] \nonumber \\= & {} {{\bar{\mathbf{G}}}}^T \bar{\varvec{\Sigma }} {{\bar{\mathbf{G}}}} - {{\bar{\mathbf{G}}}}^T \bar{\varvec{\Sigma }} {{\bar{\mathbf{G}}}}{\mathbb {E}}\left[ {{\mathbf{D}}(i) } \right] {\varvec{\Lambda }}\nonumber \\&-\, {\varvec{\Lambda }} {\mathbb {E}}\left[ {{\mathbf{D}}(i) } \right] {{\bar{\mathbf{G}}}}^T \bar{\varvec{\Sigma }} {{\bar{\mathbf{G}}}} \nonumber \\&+\, {\mathbb {E}}\left[ {{{\bar{\mathbf{U}}}}^T(i) {{\bar{\mathbf{Y}}}}^T(i) \bar{\varvec{\Sigma }} {{\bar{\mathbf{Y}}}}(i) {{\bar{\mathbf{U}}}}(i) } \right] , \end{aligned}$$
(26)

where \({{\bar{\mathbf{Y}}}}(i) = {{\bar{\mathbf{G}}}}{\mathbf{D}}(i) {{\bar{\mathbf{U}}}}^T(i)\) and \({\mathbb {E}}\left[ {{{\bar{\mathbf{U}}}}^T(i) {{\bar{\mathbf{U}}}}(i) } \right] = {\varvec{\Lambda }}\).
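
To fix ideas, the following sketch assembles these block quantities for a small network, assuming the standard diffusion-LMS structure with a combination matrix \({\mathbf{G}} = {\mathbf{C}} \otimes {\mathbf{I}}_M\) and white Gaussian regressors; the dimensions and values are hypothetical examples, not taken from the paper.

```python
import numpy as np

# Sketch of the block quantities used in the analysis, under an assumed
# standard diffusion-LMS structure: N nodes, filter length M, a uniform
# combination matrix C, and per-node step sizes mu_k. All values are
# hypothetical examples.
rng = np.random.default_rng(0)
N, M = 4, 3

C = np.full((N, N), 1.0 / N)                 # assumed combination matrix
G = np.kron(C, np.eye(M))                    # G = C (x) I_M, MN x MN
D = np.kron(np.diag(0.01 * np.ones(N)),      # D(i) = diag{mu_1 I_M, ...}
            np.eye(M))

def sample_U():
    """One block-diagonal regressor matrix U(i) = diag{u_1(i),...,u_N(i)},
    each u_k(i) a 1 x M white Gaussian row regressor."""
    U = np.zeros((N, M * N))
    for k in range(N):
        U[k, k * M:(k + 1) * M] = rng.standard_normal(M)
    return U

U = sample_U()
Y = G @ D @ U.T                              # Y(i) = G D(i) U^T(i)

# Monte Carlo check: E[U^T(i) U(i)] is block diagonal and equals Lambda
# (here I_{MN}, since the regressors are white with unit variance).
Lam = np.mean([u.T @ u for u in (sample_U() for _ in range(5000))], axis=0)
```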

The two terms that remain to be evaluated are \({\mathbb {E}}\Big [ {\mathbf{v}}^T(i) {{\bar{\mathbf{Y}}}}^T(i) \bar{\varvec{\Sigma }}{{\bar{\mathbf{Y}}}}(i) {\mathbf{v}}(i) \Big ]\) and \({\mathbb {E}}\left[ {{{\bar{\mathbf{U}}}}^T(i) {{\bar{\mathbf{Y}}}}^T(i) \bar{\varvec{\Sigma }} {{\bar{\mathbf{Y}}}}(i) {{\bar{\mathbf{U}}}}(i) } \right] \). Using the \(\text{ bvec }\{\cdot \}\) operator and the block Kronecker product, denoted by \(\odot \) [34], the two terms are simplified as

$$\begin{aligned} {\mathbb {E}}\left[ {{\mathbf{v}}^T(i) {{\bar{\mathbf{Y}}}}^T(i) \bar{\varvec{\Sigma }} {{\bar{\mathbf{Y}}}}(i) {\mathbf{v}}(i) } \right] = {\mathbf{b}}^T(i) \bar{\varvec{\sigma }}, \end{aligned}$$
(27)

and

$$\begin{aligned}&\text{ bvec } \left\{ {{\mathbb {E}}\left[ {{{\bar{\mathbf{U}}}}^T(i) {{\bar{\mathbf{Y}}}}^T(i) \bar{\varvec{\Sigma }} {{\bar{\mathbf{Y}}}}(i) {{\bar{\mathbf{U}}}}(i) } \right] } \right\} \nonumber \\&\quad = \left( {{\mathbb {E}}\left[ {{\mathbf{D}}(i) \odot {\mathbf{D}}(i) } \right] } \right) \mathbf{A}\left( {{\mathbf{G}}^T \odot {\mathbf{G}}^T } \right) \bar{\varvec{\sigma }}, \end{aligned}$$
(28)

where \(\bar{\varvec{\sigma }} = \text{ bvec } \left\{ {\bar{\varvec{\Sigma }} } \right\} \), \({\mathbf{b}}(i) = \text{ bvec } \left\{ {{\mathbf{R}}_{\mathbf{v}} {\mathbb {E}}\left[ {{\mathbf{D}}^2(i) } \right] {\varvec{\Lambda }} } \right\} \), \({\mathbf{R}}_{\mathbf{v}} = {\varvec{\Lambda }} _{\mathbf{v}} \odot {\mathbf{I}}_M\), \({\varvec{\Lambda }}_{\mathbf{v}}\) is the diagonal noise variance matrix of the network, and \({\mathbf{A}} = \text{ diag } \left\{ {{\mathbf{A}}_1 ,{\mathbf{A}}_2 ,\ldots ,{\mathbf{A}}_N } \right\} \) [10], with each matrix \({\mathbf{A}}_k\) defined as

$$\begin{aligned} {\mathbf{A}}_k= & {} \text{ diag } \left\{ {\varvec{\Lambda }} _1 \otimes {\varvec{\Lambda }} _k ,\ldots ,{\varvec{\lambda }} _k {\varvec{\lambda }} _k^T \right. \nonumber \\&\left. +\, 2{\varvec{\Lambda }} _k \otimes {\varvec{\Lambda }} _k ,\ldots ,{\varvec{\Lambda }} _N \otimes {\varvec{\Lambda }} _k \right\} , \end{aligned}$$
(29)

where \({\varvec{\Lambda }}_k\) is the diagonal eigenvalue matrix and \({\varvec{\lambda }}_k\) is the corresponding eigenvalue vector for node \(k\).
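
For concreteness, the following sketch builds \({\mathbf{A}}\) from (29), assuming white Gaussian regressors so that each \({\varvec{\Lambda }}_k\) is a scaled identity; the network size, filter length, and per-node input powers are hypothetical example values, and \({\varvec{\lambda }}_k\) is taken as \(\text{ vec }\{{\varvec{\Lambda }}_k\}\) so that every block of \({\mathbf{A}}_k\) has the matching size \(M^2 \times M^2\).

```python
import numpy as np
from scipy.linalg import block_diag

# Sketch of A = diag{A_1, ..., A_N} from (29). The per-node input powers
# are hypothetical example values, not taken from the paper.
N, M = 4, 3
powers = np.linspace(0.8, 1.2, N)                # assumed input powers
Lams = [p * np.eye(M) for p in powers]           # Lambda_k, M x M
lams = [L.reshape(-1) for L in Lams]             # lambda_k = vec(Lambda_k)

def A_block(k):
    """A_k per (29): the l-th diagonal block is Lambda_l (x) Lambda_k,
    except the k-th, which is lambda_k lambda_k^T + 2 Lambda_k (x) Lambda_k."""
    blocks = [np.outer(lams[k], lams[k]) + 2 * np.kron(Lams[k], Lams[k])
              if l == k else np.kron(Lams[l], Lams[k])
              for l in range(N)]
    return block_diag(*blocks)                   # M^2 N x M^2 N

A = block_diag(*[A_block(k) for k in range(N)])  # M^2 N^2 x M^2 N^2
```

Applying the \(\text{ bvec }\{\cdot \}\) operator to (26) and simplifying gives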

$$\begin{aligned} \text{ bvec } \left\{ {\bar{\varvec{\Sigma }} '} \right\} = \bar{\varvec{\sigma }} ' = {\mathbf{F}}(i) \bar{\varvec{\sigma }}, \end{aligned}$$
(30)

where \({\mathbf{F}}(i)\) is given by (18). Thus, (14) is rewritten as

$$\begin{aligned} {\mathbb {E}}\left[ {\left\| {{{\bar{\mathbf{w}}}}\left( i+1\right) } \right\| _{\bar{\varvec{\sigma }} }^2 } \right] = {\mathbb {E}}\left[ {\left\| {{{\bar{\mathbf{w}}}}(i) } \right\| _{{\mathbf{F}}(i) \bar{\varvec{\sigma }} }^2 } \right] + {\mathbf{b}}^T(i) \bar{\varvec{\sigma }}, \end{aligned}$$
(31)

which characterizes the transient behavior of the network. Although not explicitly visible in (31), (18) clearly shows, through the presence of the diagonal step-size matrix \({\mathbf{D}}(i)\), the effect of the variable step-size (VSS) strategy on the performance of the algorithm.

Now, using (31) and (18), the analysis iterates as

$$\begin{aligned} {\mathbb {E}}\left[ {\left\| {{{\bar{\mathbf{w}}}}\left( 0\right) } \right\| _{\bar{\varvec{\sigma }} }^2 } \right]= & {} \left\| {{{\bar{\mathbf{w}}}}^{(o)} } \right\| _{\bar{\varvec{\sigma }} }^2,\\ {\mathbf{F}}(0)= & {} \left[ {\mathbf{I}}_{M^2 N^2 } - \left( {{\mathbf{I}}_{MN} \odot \varvec{\Lambda } {\mathbb {E}}\left[ {\mathbf{D}} (0) \right] } \right) \right. \nonumber \\&\left. - \,\left( {\varvec{\Lambda } {\mathbb {E}}\left[ {\mathbf{D}} (0) \right] \odot {\mathbf{I}}_{MN} } \right) \right. \nonumber \\&\left. +\, \left( {{\mathbb {E}}\left[ {{\mathbf{D}}(0) \odot {\mathbf{D}}(0)} \right] } \right) \mathbf{A} \right] \\&\cdot \left( {{\mathbf{G}}^T \odot {\mathbf{G}}^T } \right) , \end{aligned}$$

where \({{\mathbb {E}}\left[ {\mathbf{D}} (0) \right] = \text{ diag } \left\{ {\mu _{1}(0) {{\mathbf{I}}}_M ,\ldots ,\mu _{N}(0) {{\mathbf{I}}}_M } \right\} }\), as these are the initial step-size values.
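
As an illustration, the following sketch evaluates the \({\mathbf{F}}(0)\) expression above for the scalar case \(M = 1\), in which the block Kronecker product \(\odot \) reduces to the ordinary Kronecker product and \({\mathbf{G}} = {\mathbf{C}}\); the combination matrix, input powers, and initial step sizes are hypothetical example values.

```python
import numpy as np
from scipy.linalg import block_diag

# Sketch of the F(0) expression above for M = 1. All values below are
# hypothetical examples, not taken from the paper.
N = 4
C = np.full((N, N), 1.0 / N)                  # assumed combination matrix
lam = np.linspace(0.8, 1.2, N)                # lambda_k for each node
Lam = np.diag(lam)                            # Lambda
D0 = np.diag(0.01 * np.ones(N))               # E[D(0)] = diag{mu_k(0)}

# A from (29) for M = 1: the k-th block is diag over l of lambda_l*lambda_k,
# with the (k,k) entry equal to 3*lambda_k^2.
A = block_diag(*[np.diag(lam[k] * lam + 2 * lam[k] ** 2 * (np.arange(N) == k))
                 for k in range(N)])

# D(0) is deterministic, so E[D(0) (x) D(0)] = D0 (x) D0.
F0 = (np.eye(N * N)
      - np.kron(np.eye(N), Lam @ D0)
      - np.kron(Lam @ D0, np.eye(N))
      + np.kron(D0, D0) @ A) @ np.kron(C.T, C.T)
```

The first iterative update is given by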

$$\begin{aligned} {\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(1) \right\| ^2_{\bar{\varvec{\sigma }}} \right]= & {} {\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(0) \right\| ^2_{\mathbf{F}(0) \bar{\varvec{\sigma }}} \right] + {\mathbf{b}}^T(0) \bar{\varvec{\sigma }} \\= & {} \left\| {\bar{\mathbf{w}}}^{(o)} \right\| ^2_{\mathbf{F}(0) \bar{\varvec{\sigma }}} + {\mathbf{b}}^T(0) \bar{\varvec{\sigma }} \\ {\mathbf{F}}(1)= & {} \left[ {\mathbf{I}}_{M^2 N^2 } - \left( {{\mathbf{I}}_{MN} \odot \varvec{\Lambda } {\mathbb {E}}\left[ {\mathbf{D}} (1) \right] } \right) \right. \nonumber \\&\left. -\,\left( {\varvec{\Lambda } {\mathbb {E}}\left[ {\mathbf{D}} (1) \right] \odot {\mathbf{I}}_{MN} } \right) \right. \nonumber \\&\left. +\,\left( {{\mathbb {E}}\left[ {{\mathbf{D}}\left( 1\right) \odot {\mathbf{D}}(1)} \right] } \right) \mathbf{A} \right] \\&\cdot \left( {{\mathbf{G}}^T \odot {\mathbf{G}}^T } \right) , \end{aligned}$$

where \({\mathbf{b}}(0) = \text{ bvec } \left\{ {{\mathbf{R}}_{\mathbf{v}} {\mathbb {E}}\left[ {{\mathbf{D}}^2(0) } \right] {\varvec{\Lambda }} } \right\} \) and \({\mathbb {E}}\left[ {\mathbf{D}} (1) \right] \) is the first step-size update. The matrix \({\mathbf{F}}(i)\) is updated via (18) using the \(i\)th update of the step-size matrix \({\mathbb {E}}\left[ {\mathbf{D}} (i) \right] \), which in turn is updated according to the VSS strategy being applied to the algorithm.
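
As one concrete example of such a strategy (a sketch only, not the rule prescribed by this framework), the recursion of Kwong and Johnston [25] raises the step size when the error is large and shrinks it geometrically otherwise; the constants below are hypothetical example values.

```python
import numpy as np

# Kwong-Johnston VSS rule [25], applied per node k:
#   mu_k(i+1) = alpha * mu_k(i) + gamma * e_k^2(i),
# clipped to [mu_min, mu_max]. alpha, gamma, and the bounds are
# hypothetical example values, not taken from the paper.
alpha, gamma = 0.97, 1e-3
mu_min, mu_max = 1e-4, 5e-2

def vss_update(mu_k: float, err_k: float) -> float:
    """One step-size update for node k from its current error e_k(i)."""
    return float(np.clip(alpha * mu_k + gamma * err_k ** 2, mu_min, mu_max))
```

Averaging such per-node recursions over realizations yields an estimate of the \({\mathbb {E}}\left[ {\mathbf{D}} (i) \right] \) sequence that drives (18). The second iterative update is given by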

$$\begin{aligned} {\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(2) \right\| ^2_{\bar{\varvec{\sigma }}} \right]= & {} {\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(1) \right\| ^2_{\mathbf{F}(1) \bar{\varvec{\sigma }}} \right] + {\mathbf{b}}^T(1) \bar{\varvec{\sigma }} \\= & {} \left\| {\bar{\mathbf{w}}}^{(o)} \right\| ^2_{\mathbf{F}(0)\mathbf{F}(1) \bar{\varvec{\sigma }}} \nonumber \\&+\,{\mathbf{b}}^T(0) \mathbf{F}(1) \bar{\varvec{\sigma }} \nonumber \\&+\,{\mathbf{b}}^T(1) \bar{\varvec{\sigma }}. \end{aligned}$$

Continuing, the third iterative update is given by

$$\begin{aligned} {\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(3) \right\| ^2_{\bar{\varvec{\sigma }}} \right]= & {} {\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(2) \right\| ^2_{\mathbf{F}(2) \bar{\varvec{\sigma }}} \right] + {\mathbf{b}}^T\left( 2 \right) \bar{\varvec{\sigma }} \\= & {} \left\| {\bar{\mathbf{w}}}^{(o)} \right\| ^2_{\mathbf{F}(0)\mathbf{F}(1)\mathbf{F}(2) \bar{\varvec{\sigma }}} + {\mathbf{b}}^T\left( 2 \right) \bar{\varvec{\sigma }} \nonumber \\&+\,{\mathbf{b}}^T(0) \mathbf{F}(1)\mathbf{F}(2) \bar{\varvec{\sigma }} + {\mathbf{b}}^T(1) \mathbf{F}(2) \bar{\varvec{\sigma }} \\= & {} \left\| {\bar{\mathbf{w}}}^{(o)} \right\| ^2_{{{\mathcal {A}}}(2)\mathbf{F}(2) \bar{\varvec{\sigma }}} + {\mathbf{b}}^T\left( 2 \right) \bar{\varvec{\sigma }}\nonumber \\&+\, \left[ {\sum \limits _{k = 0}^1 { \left\{ {\mathbf{b}}^T\left( k \right) \prod \limits _{m = k+1}^{2} {{\mathbf{F}}(m)} \right\} } } \right] \bar{\varvec{\sigma }}, \end{aligned}$$

where the weighting matrix \({{\mathcal {A}}}(2) = \mathbf{F}(0)\mathbf{F}(1)\). Similarly, the fourth iterative update is given by

$$\begin{aligned} {\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(4) \right\| ^2_{\bar{\varvec{\sigma }}} \right]= & {} \left\| {\bar{\mathbf{w}}}^{(o)} \right\| ^2_{{{\mathcal {A}}}(3) \mathbf{F}(3) \bar{\varvec{\sigma }}} + {\mathbf{b}}^T\left( 3 \right) \bar{\varvec{\sigma }} \nonumber \\&+\,\left[ {\sum \limits _{k = 0}^2 { \left\{ {\mathbf{b}}^T\left( k \right) \prod \limits _{m = k+1}^{3} {{\mathbf{F}}(m)} \right\} } } \right] \bar{\varvec{\sigma }}, \end{aligned}$$

where the weighting matrix \({{\mathcal {A}}}(3) = {{\mathcal {A}}}(2) \mathbf{F}(2)\). Now, from the third and fourth iterative updates, we generalize the recursion for the ith update as

$$\begin{aligned} {\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(i) \right\| ^2_{\bar{\varvec{\sigma }}} \right]= & {} \left\| {\bar{\mathbf{w}}}^{(o)} \right\| ^2_{{{\mathcal {A}}}(i-1)\mathbf{F}(i-1) \bar{\varvec{\sigma }}} + {\mathbf{b}}^T\left( i-1 \right) \bar{\varvec{\sigma }} \nonumber \\&+\,\left[ {\sum \limits _{k = 0}^{i-2} { \left\{ {\mathbf{b}}^T\left( k \right) \prod \limits _{m = k+1}^{i-1} {{\mathbf{F}}(m)} \right\} } } \right] \bar{\varvec{\sigma }}, \end{aligned}$$
(32)

where \({{\mathcal {A}}}(i-1) = {{\mathcal {A}}}(i-2) \mathbf{F}(i-2)\). The recursion for the \((i+1)\)th update is given by

$$\begin{aligned}&{\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(i+1) \right\| ^2_{\bar{\varvec{\sigma }}} \right] = \left\| {\bar{\mathbf{w}}}^{(o)} \right\| ^2_{{{\mathcal {A}}}(i)\mathbf{F}(i) \bar{\varvec{\sigma }}} + {\mathbf{b}}^T(i) \bar{\varvec{\sigma }}\nonumber \\&+\,\left[ {\sum \limits _{k = 0}^{i-1} { \left\{ {\mathbf{b}}^T\left( k \right) \prod \limits _{m = k+1}^{i} {{\mathbf{F}}(m)} \right\} } } \right] \bar{\varvec{\sigma }}, \end{aligned}$$
(33)

where \({{\mathcal {A}}}(i) = {{\mathcal {A}}}(i-1) \mathbf{F}(i-1)\). Subtracting (32) from (33) and simplifying gives the overall recursive update equation

$$\begin{aligned} {\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(i+1) \right\| ^2_{\bar{\varvec{\sigma }}} \right]= & {} {\mathbb {E}} \left[ \left\| {\bar{\mathbf{w}}}(i) \right\| ^2_{\bar{\varvec{\sigma }}} \right] \nonumber \\&+\,\left\| {\bar{\mathbf{w}}}^{(o)} \right\| ^2_{{{\mathcal {A}}}(i)\mathbf{F}(i) \bar{\varvec{\sigma }}} \nonumber \\&-\,\left\| {\bar{\mathbf{w}}}^{(o)} \right\| ^2_{{{\mathcal {A}}}(i-1)\mathbf{F}(i-1) \bar{\varvec{\sigma }}} \nonumber \\&+\,{\mathbf{b}}^T(i) \bar{\varvec{\sigma }} - {\mathbf{b}}^T\left( i-1 \right) \bar{\varvec{\sigma }} \nonumber \\&+\,\left[ {\sum \limits _{k = 0}^{i-1} { \left\{ {\mathbf{b}}^T\left( k \right) \prod \limits _{m = k+1}^{i} {{\mathbf{F}}(m)} \right\} } } \right] \bar{\varvec{\sigma }} \nonumber \\&-\,\left[ {\sum \limits _{k = 0}^{i-2} { \left\{ {\mathbf{b}}^T\left( k \right) \prod \limits _{m = k+1}^{i-1} {{\mathbf{F}}(m)} \right\} } } \right] \bar{\varvec{\sigma }}.\nonumber \\ \end{aligned}$$
(34)

Simplifying (34) and rearranging the terms gives the final recursive update equation (17), where

$$\begin{aligned} {{\mathcal {B}}}(i)= & {} \sum \limits _{k = 0}^{i-2} { \left\{ {\mathbf{b}}^T\left( k \right) \prod \limits _{m = k+1}^{i-1} {{\mathbf{F}}(m)} \right\} } \nonumber \\&+\,\mathbf{b}^T (i-1) \mathbf{I}_{M^2 N^2}. \end{aligned}$$
(35)

The final set of iterative equations for the mean-square learning curve is given by (17)–(20).
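
To make the iteration concrete, the following sketch evaluates the learning curve using the running products derived above, for the scalar case \(M = 1\) (so that \(\text{ bvec }\{\cdot \}\) coincides with the ordinary \(\text{ vec }\{\cdot \}\)); the callables F_of and b_of, which evaluate (18) and \({\mathbf{b}}(i)\) for the VSS strategy under study, are assumed to be supplied by the user.

```python
import numpy as np

def learning_curve(F_of, b_of, w0, sigma_bar, n_iter):
    """Evaluate E[||w_bar(i)||^2_{sigma_bar}] for i = 0..n_iter by keeping
    the products of F(m) and the sums over b(k) in (32)-(33) as running
    quantities instead of recomputing them at every iteration."""
    w_outer = np.outer(w0, w0).reshape(-1)   # bvec{w0 w0^T} (vec for M = 1)
    msd = [w_outer @ sigma_bar]              # i = 0: ||w^(o)||^2_{sigma_bar}
    P = np.eye(len(sigma_bar))               # will hold F(0) F(1) ... F(i)
    c = np.zeros(len(sigma_bar))             # running sum over b(k), cf. (35)
    for i in range(n_iter):
        F_i, b_i = F_of(i), b_of(i)
        P = P @ F_i                          # A(i) F(i) = F(0) ... F(i)
        c = c @ F_i                          # sum_{k<i} b^T(k) prod F(m)
        msd.append(w_outer @ (P @ sigma_bar) # weighted w^(o) term
                   + (b_i + c) @ sigma_bar)  # b^T(i) sigma + accumulated sum
        c = c + b_i                          # fold b(i) in for the next step
    return np.array(msd)
```

With F(i) and b(i) held constant (i.e., fixed step sizes), this reduces to the familiar fixed-step learning-curve iteration; maintaining the running product \({\mathbf{F}}(0)\cdots {\mathbf{F}}(i)\) and the accumulated sum recursively avoids recomputing the products in (33) from scratch at every iteration.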


Cite this article

Bin Saeed, M.O., Ejaz, W., Rehman, S. et al. A unified analytical framework for distributed variable step size LMS algorithms in sensor networks. Telecommun Syst 69, 447–459 (2018). https://doi.org/10.1007/s11235-018-0447-z
