Signal Processing

Volume 113, August 2015, Pages 211-217

Fast communication
Data-selective diffusion LMS for reducing communication overhead

https://doi.org/10.1016/j.sigpro.2015.01.019

Highlights

  • We propose a new dynamic diffusion method for reducing communication.

  • We propose a criterion for detecting a decrease of the MSD in the adaptation step.

  • Only when the adaptation step is performed is the data sent to the neighbors.

  • When the current estimate is not sent, a past estimate is used in the combination.

  • The proposed algorithm outperforms related works that aim to reduce communication.

Abstract

Diffusion strategies have been widely studied for distributed estimation over adaptive networks. In this structure, communication resources are assigned to every node so that it can share its processed data with predefined neighbors. Although performance improves through this information exchange, it entails a communication cost. We present a dynamic diffusion method in which each node shares only reliable information with its neighbors. Each node evaluates its updated estimate by the contribution of the new measurements to minimizing the mean-square deviation (MSD). Only when the MSD decreases is the node allowed to transmit its estimate to its neighbors. Accordingly, the proposed algorithm reduces the amount of communication while preserving performance as much as possible. Experimental results show that the proposed algorithm achieves a more efficient reduction of communication and better performance than other related algorithms.

Introduction

We study the problem of distributed estimation over adaptive networks, in which the nodes cooperate with one another through local interactions. Consider N nodes distributed in space (Fig. 1). The set of nodes connected to node k (including k itself) is called the neighborhood of node k and is denoted by N_k. Each node k is assumed to receive a desired response d_k(i) and a 1×M regression vector u_{k,i} at successive time instants i. Each node k would like to use these data to estimate an unknown M×1 parameter vector w^o in a distributed and adaptive manner by sharing information only within N_k.

For the solution of such distributed estimation problems, several variations have been proposed, such as incremental least-mean-square (LMS) algorithms [1], [2], [3], diffusion LMS algorithms [4], [5], [6], [7], [8], [9], [10], [11], and algorithms based on consensus strategies [12], [13], [14], [15], [16]. Since each node is able to estimate the unknown system and shares data with its neighbors, refined information diffuses across all nodes of the network, which greatly improves estimation performance. In this structure, the diffusion LMS algorithms consist of an adaptation step and a combination step. According to the order of the two steps, the combine-then-adapt (CTA) and adapt-then-combine (ATC) diffusion LMS algorithms have been proposed [4]. In terms of steady-state error, the ATC structure always outperforms the CTA structure, which implies that the combination step contributes to improved estimation accuracy.
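The ATC structure described above can be sketched in a few lines of NumPy. This is an illustrative toy of generic ATC diffusion LMS, not the paper's proposed algorithm: the ring topology, uniform combination weights, step size mu, and all variable names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, mu, T = 10, 4, 0.05, 2000
w_o = rng.standard_normal(M)                      # unknown parameter vector

# Assumed ring topology: each node is linked to itself and its two
# neighbours, with uniform combination weights a_{l,k}.
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, k + 1):
        A[l % N, k] = 1.0
A /= A.sum(axis=0)                                # columns sum to one

W = np.zeros((N, M))                              # per-node estimates w_{k,i}
for i in range(T):
    U = rng.standard_normal((N, M))               # regressors u_{k,i}
    d = U @ w_o + 0.1 * rng.standard_normal(N)    # d_k(i) = u_{k,i} w^o + v_k(i)
    # Adaptation step: intermediate estimates psi_{k,i}
    err = d - np.sum(U * W, axis=1)
    Psi = W + mu * err[:, None] * U
    # Combination step: w_{k,i} = sum over l in N_k of a_{l,k} psi_{l,i}
    W = A.T @ Psi

print(np.linalg.norm(W - w_o, axis=1).max())      # small steady-state deviation
```

Swapping the order of the two loop bodies would give the CTA variant; as the text notes, ATC generally reaches a lower steady-state error.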

Although the various diffusion techniques achieve faster convergence and lower steady-state error than non-cooperative LMS, they come at a communication cost. In practical wireless sensor networks, each node often has limited power resources for communication. To reduce the communication cost without significantly degrading the estimation, various techniques have been applied to diffusion LMS, such as choosing a subset of the nodes [17], [18], [19], [20], selecting a subset of the entries of the estimates [21], [22], and reducing the dimension of the estimates [23], [24], [25]. Among these, we focus on the first approach, in which only a subset of the nodes participates in communication. Probabilistic diffusion LMS [17] considered a changing network topology in which each link between two nodes is activated with a certain probability. Takahashi and Yamada [18] proposed a method that varies the link-connection probabilities by minimizing the MSD, in order to improve the estimation performance of [17]. In [19] and [20], each node is allowed to select a subset of N_k whose data it combines, based on additional information that represents the quality of the nodes. In [19], a scaled product of the noise variance and the regression variance is used as the selection measure; each node selects the one node with the minimum measure value. In [20], each node estimates its current MSD value and exchanges it with its neighbors; each node then computes cost values for its neighbors from the transmitted data and selects one node. That is, the previous algorithms [19], [20] require additional communication to transmit the scalar values used for selecting the node whose data to receive. In contrast to this previous work, our purpose is to endow each node with the ability to decide whether or not to transmit its data. Since each node can implement this decision process using only its own information, no additional communication is required.

In this paper, we introduce a novel method for communication reduction, in which each node dynamically updates its estimate and transmits it only when it has been updated. To this end, we first derive a criterion that uses the mean-square deviation (MSD) to determine whether the updated estimate is worth sharing. Each node updates its estimate and transmits the updated data to its neighbors only when the adaptation decreases the MSD. Because non-updating nodes do not send their estimates to their neighbors, some data are missing in the combination step. We therefore also present a method that substitutes previously saved data for the missing data. In simulations, the proposed algorithm efficiently reduced the communication cost with less performance degradation than the related node-selection algorithms.

Notation: We use boldface letters for random variables and normal letters for deterministic quantities.


Conventional diffusion LMS

At every node k and time instant i, we assume that the desired response d_k(i) is related to the regression vector u_{k,i} through the following linear model:

d_k(i) = u_{k,i} w^o + v_k(i),

where v_k(i) is zero-mean measurement noise with variance σ²_{v,k}, assumed white over time and independent over space. The noise v_k(i) and the regressors u_{l,j} are assumed to be independent of each other for all k, l, i, j. All data are assumed to be complex-valued.
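A minimal sketch of generating one measurement under this model, with complex-valued data as the paper assumes; the channel length M = 2 matches the simulation section, while the noise variance and the `measure` helper are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 2                                              # channel length, as in the simulations
# Unknown complex parameter vector w^o with unit-variance entries
w_o = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

def measure(sigma_v2=0.01):
    """One (d_k(i), u_{k,i}) pair for a node with noise variance sigma_v2."""
    u = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    # Circular complex Gaussian noise with E|v|^2 = sigma_v2,
    # white over time and independent over space.
    v = np.sqrt(sigma_v2 / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
    return u @ w_o + v, u                          # d_k(i) = u_{k,i} w^o + v_k(i)

d, u = measure()
```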

The diffusion strategies [4] consist of two operations: adaptation and combination.

Dynamic diffusion LMS

Although the combination step (3) improves estimation performance, it comes at a communication cost. To decrease the amount of communication, we first derive a criterion that decides whether the current measurements contribute to a decrease of the MSD, and we propose a novel diffusion LMS algorithm in which each node shares its updated estimate with its neighbors only if the criterion is satisfied. Moreover, the combination step is modified to recover unsent estimates from neighbor nodes
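The structure of this idea can be sketched as follows: each node applies a share/no-share test after adapting, and neighbors fall back on the most recently received estimate when nothing new arrives. The test used below (update and share only when the instantaneous squared error exceeds its running average) is a hypothetical stand-in of our own, not the MSD-based criterion derived in the paper; topology, weights, and step size are also assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, mu, T = 10, 4, 0.05, 2000
w_o = rng.standard_normal(M)

A = np.zeros((N, N))                        # assumed ring topology, uniform weights
for k in range(N):
    for l in (k - 1, k, k + 1):
        A[l % N, k] = 1.0
A /= A.sum(axis=0)

W = np.zeros((N, M))                        # current per-node estimates
shared = np.zeros((N, M))                   # most recently shared estimates
ema = np.zeros(N)                           # running average of squared errors
n_tx = 0
for i in range(T):
    U = rng.standard_normal((N, M))
    d = U @ w_o + 0.1 * rng.standard_normal(N)
    e = d - np.sum(U * W, axis=1)
    send = e ** 2 > ema                     # stand-in share/no-share criterion
    # Adaptation only for nodes whose update is judged worthwhile
    Psi = np.where(send[:, None], W + mu * e[:, None] * U, W)
    ema = 0.95 * ema + 0.05 * e ** 2
    shared = np.where(send[:, None], Psi, shared)  # unsent slots keep past data
    W = A.T @ shared                        # combination with stale substitutes
    n_tx += int(send.sum())

print(n_tx / (N * T))                       # fraction of possible transmissions used
print(np.linalg.norm(W - w_o, axis=1).max())
```

Even with this crude criterion, a substantial fraction of transmissions is skipped while the estimates still converge, which illustrates why substituting past estimates for missing data keeps the combination step usable.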

Simulation results

For the simulations, we assume a channel identification scenario with an FIR model of channel length 2. We consider a network topology with N = 30 nodes with differing regressor and noise powers (Fig. 3). The regressors are zero-mean Gaussian and independent over space. All simulation results were obtained by taking the ensemble average of the network MSD,

MSD_network(i) = (1/N) Σ_{k=1}^{N} E ||w^o − w_{k,i}||²,

over 200 independent trials. We use the relative-degree variance rule [7] for the combination weights a_{l,k}
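The network-MSD metric above, with the expectation approximated by an ensemble average over independent trials, can be sketched as a small helper; the function name and array layout are our own.

```python
import numpy as np

def network_msd(w_o, W_trials):
    """Network MSD at one instant i: (1/N) * sum_k E||w^o - w_{k,i}||^2.

    W_trials: (trials, N, M) array of per-node estimates w_{k,i}, one slice
    per independent trial; the mean over trials approximates the expectation.
    """
    dev = w_o[None, None, :] - W_trials            # w^o - w_{k,i} per trial and node
    # Squared norm over the M coefficients, then average over trials and nodes
    return np.mean(np.sum(np.abs(dev) ** 2, axis=-1))

# Toy check: estimates exactly equal to w^o give zero network MSD.
w_o = np.array([1.0, -2.0])                        # M = 2, as in the simulations
W_trials = np.tile(w_o, (200, 30, 1))              # 200 trials, N = 30 nodes
print(network_msd(w_o, W_trials))                  # 0.0
```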

Conclusion

We proposed a new dynamic diffusion LMS algorithm that reduces the amount of communication through reliable node selection. Each node dynamically updates and diffuses its estimate only when it has been updated. We also presented a method in which a past estimate is substituted for the missing data in the subsequent combination step. The proposed algorithm requires less communication than the conventional diffusion LMS. Although convergence performance degradation due to reduced

References (27)

  • L. Li et al.

    A new incremental affine projection-based adaptive algorithm for distributed networks

    Signal Process.

    (2008)
  • C.G. Lopes et al.

    Incremental adaptive strategies over distributed networks

    IEEE Trans. Signal Process.

    (2007)
  • Y. Liu et al.

    Enhanced incremental LMS with norm constraints for distributed in-network estimation

    Signal Process.

    (2014)
  • F. Cattivelli et al.

    Diffusion LMS strategies for distributed estimation

    IEEE Trans. Signal Process.

    (2010)
  • N. Takahashi et al.

    Diffusion least-mean squares with adaptive combiners: formulation and performance analysis

    IEEE Trans. Signal Process.

    (2010)
  • J.-W. Lee et al.

    Spatio-temporal diffusion strategies for estimation and detection over networks

    IEEE Trans. Signal Process.

    (2012)
  • X. Zhao et al.

    Diffusion adaptation over networks under imperfect information exchange and non-stationary data

    IEEE Trans. Signal Process.

    (2012)
  • P. Di Lorenzo et al.

    Sparse distributed learning based on diffusion adaptation

    IEEE Trans. Signal Process.

    (2013)
  • P. Di Lorenzo, S. Barbarossa, Distributed least mean squares strategies for sparsity-aware estimation over Gaussian...
  • R. Abdolee et al.

    Estimation of space-time varying parameters using a diffusion LMS algorithm

    IEEE Trans. Signal Process.

    (2014)
  • J. Chen et al.

    Multitask diffusion adaptation over networks

    IEEE Trans. Signal Process.

    (2014)
  • S. Sardellitti et al.

    Fast distributed average consensus algorithms based on advection-diffusion processes

    IEEE Trans. Signal Process.

    (2010)
  • S. Kar et al.

    Convergence rate analysis of distributed gossip (linear parameter) estimation: fundamental limits and tradeoffs

    IEEE J. Sel. Top. Signal Process.

    (2011)

    This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (2012R1A2A2A01011112), and in part by the MSIP (Ministry of Science, ICT & Future Planning), Korea, under the C-ITRC (Convergence Information Technology Research Center) support program (NIPA-2014-H0401-14-1001) supervised by the NIPA (National IT Industry Promotion Agency).
