Fast communication
Data-selective diffusion LMS for reducing communication overhead☆
Introduction
We study the problem of distributed estimation over adaptive networks, in which every node performs estimation through local interaction and cooperation with its neighbors. Consider N nodes distributed in space (Fig. 1). The set of nodes connected to node k (including k itself) is called the neighborhood of node k and is denoted by $\mathcal{N}_k$. Each node k is assumed to receive a desired response $d_k(i)$ and a 1×M regression vector $u_{k,i}$ at successive time instants i. Each node k would like to use these data to estimate an unknown M×1 parameter vector $w^o$ in a distributed and adaptive manner by sharing information only within $\mathcal{N}_k$.
For the solution of such distributed estimation problems, several variations have been proposed, such as incremental least-mean-square (LMS) algorithms [1], [2], [3], diffusion LMS algorithms [4], [5], [6], [7], [8], [9], [10], [11], and algorithms based on consensus strategies [12], [13], [14], [15], [16]. Since each node has the ability to estimate the unknown system and shares data with its neighbors, refined information diffuses across all nodes of the network, which greatly improves estimation performance. Structurally, diffusion LMS algorithms consist of an adaptation step and a combination step. Depending on the order of the two steps, combine-then-adapt (CTA) and adapt-then-combine (ATC) diffusion LMS algorithms have been proposed [4]. In terms of steady-state error, the ATC structure always outperforms the CTA structure, which implies that the combination step contributes to improving estimation accuracy.
Although the various diffusion techniques achieve faster convergence and lower steady-state error than non-cooperative LMS, they incur a substantial communication cost. In practical wireless sensor networks, each node often has limited power resources for communication. To reduce the communication cost without significant degradation of the estimation, various techniques have been applied to diffusion LMS, such as choosing a subset of the nodes [17], [18], [19], [20], selecting a subset of the entries of the estimates [21], [22], and reducing the dimension of the estimates [23], [24], [25]. Among these, we focus on the first approach, in which only a subset of nodes participate in communication. Probabilistic diffusion LMS [17] considered a changing network topology in which each link between two nodes is connected with some probability. Takahashi and Yamada [18] improved the estimation performance of [17] by varying the link-connection probabilities so as to minimize the MSD. In [19] and [20], each node is allowed to select a subset of its neighborhood to combine data from, based on additional information that represents the quality of the nodes. In [19], a scaled product of the noise variance and the regression variance serves as the selection measure, and each node selects the one node with the minimum measure value. In [20], each node estimates its current MSD value and exchanges it with its neighbors; from the transmitted values, each node then computes cost values for its neighbors and selects one node. That is, the previous algorithms [19], [20] require additional communication to transmit scalar values for selecting the node from which to receive data. In contrast, our purpose is to endow each node with the ability to decide for itself whether to transmit its data. Since each node can make this decision using only its own information, no additional communication is required.
In this paper, we introduce a novel method for communication reduction, in which each node dynamically decides whether to update its estimate and transmits only when it has updated. To this end, we first derive a criterion that uses the mean-square deviation (MSD) to determine whether the updated estimate is worth sharing: each node updates its estimate and transmits the updated data to its neighbors only when the adaptation decreases the MSD. Because non-updating nodes do not send their estimates to their neighbors, some data are missing in the combination step, so we also present a method that substitutes previously saved data for the missing data. In simulations, the proposed algorithm efficiently reduced the communication cost with less performance degradation than the related node-selection algorithms.
Notation: We use boldface letters for random variables and normal letters for deterministic quantities.
Conventional diffusion LMS
At every node k and time instant i, we assume that the desired response $d_k(i)$ is related to the regression vector $u_{k,i}$ through the following linear model: $$d_k(i) = u_{k,i} w^o + v_k(i),$$ where $v_k(i)$ corresponds to zero-mean measurement noise with variance $\sigma_{v,k}^2$, which is assumed to be white over time and independent over space. The regressors $u_{k,i}$ and the noise $v_l(j)$ are assumed to be independent of each other for all $k$, $l$, $i$, and $j$. All data are assumed to be complex-valued.
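As an illustrative sketch of this data model, the snippet below generates one measurement pair per node; the dimensions, noise variance, and Gaussian regressor distribution are assumptions for the example, not values specified in the paper.

```python
import numpy as np

# Illustrative parameters (assumptions): filter length M, N nodes.
rng = np.random.default_rng(0)
M, N = 2, 30
# Unknown M x 1 parameter vector w^o (complex-valued, as in the model).
w_o = rng.standard_normal(M) + 1j * rng.standard_normal(M)

def measure(k, sigma2_v):
    """One measurement at node k: d_k(i) = u_{k,i} w^o + v_k(i)."""
    u = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    v = np.sqrt(sigma2_v / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
    return u, u @ w_o + v
```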
The diffusion strategies [4] consist of two operations: adaptation and combination.
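A minimal sketch of one ATC iteration follows; the step size and the uniform combination weights are illustrative assumptions (the paper itself uses the relative-degree variance rule for the weights).

```python
import numpy as np

def atc_diffusion_lms_step(w, neighbors, data, mu=0.01):
    """One ATC diffusion LMS iteration (sketch).

    w:         (N, M) array of current estimates, one row per node
    neighbors: list of index lists, each including the node itself
    data:      list of (u_k, d_k) pairs, one per node
    """
    N, M = w.shape
    psi = np.empty_like(w)
    # Adaptation: each node runs a local LMS update on its own data.
    for k in range(N):
        u, d = data[k]
        e = d - u @ w[k]                      # a priori error
        psi[k] = w[k] + mu * np.conj(u) * e
    # Combination: each node averages the intermediate estimates of its
    # neighborhood (uniform weights 1/|N_k| here, for simplicity).
    w_new = np.empty_like(w)
    for k in range(N):
        w_new[k] = psi[neighbors[k]].mean(axis=0)
    return w_new
```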
Dynamic diffusion LMS
Although the combination step (3) improves estimation performance, it comes at a communication cost. To decrease the amount of communication, we first derive a criterion that decides whether the current measurements contribute to a decrease of the MSD, and then propose a novel diffusion LMS algorithm in which each node shares its updated estimate with its neighbors only if the criterion is satisfied. Moreover, the combination step is modified to recover unsent estimates from neighbor nodes.
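The data-selective idea can be sketched as follows. Since the paper's MSD-based criterion is not reproduced in this snippet, a simple error-magnitude threshold serves as a hypothetical stand-in for it; the combination step substitutes each node's last received (saved) estimate for any estimate that was not transmitted.

```python
import numpy as np

def dynamic_diffusion_step(w, last_rx, neighbors, data, mu=0.05, thresh=0.1):
    """One iteration of a data-selective diffusion LMS sketch.

    w:       (N, M) current estimates
    last_rx: (N, M) most recently shared estimates, updated in place
    """
    N, M = w.shape
    for k in range(N):
        u, d = data[k]
        e = d - u @ w[k]
        if np.abs(e) > thresh:       # stand-in for the paper's MSD criterion
            # Node k adapts and "transmits" its updated estimate.
            last_rx[k] = w[k] + mu * np.conj(u) * e
        # Otherwise node k stays silent; neighbors keep its saved estimate.
    w_new = np.empty_like(w)
    for k in range(N):
        # Combination uses saved copies in place of any unsent estimates.
        w_new[k] = last_rx[neighbors[k]].mean(axis=0)
    return w_new
```

Because silent nodes send nothing, each iteration's communication load shrinks with the number of nodes whose criterion does not fire.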
Simulation results
For the simulations, we consider a channel identification scenario with an FIR model of channel length 2. We consider a network topology with N=30 nodes with differing regressor power and noise power (Fig. 3). The regressors are zero-mean Gaussian and independent over space. All simulation results were obtained by taking the ensemble average of the network MSD, $$\mathrm{MSD}(i) = \frac{1}{N}\sum_{k=1}^{N} E\,\|w^o - w_{k,i}\|^2,$$ over 200 independent trials. We use the relative-degree variance rule [7] for the combination weights.
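The network MSD used for such learning curves can be computed as below; the layout of the ensemble dimension is an implementation assumption, with the expectation approximated by the average over independent trials.

```python
import numpy as np

def network_msd_db(w_o, W):
    """Network MSD in dB at one time instant.

    w_o: (M,) true parameter vector
    W:   (trials, N, M) estimates w_{k,i} from independent trials

    Computes 10*log10 of (1/N) * sum_k E||w_o - w_{k,i}||^2, with the
    expectation replaced by the ensemble average over trials.
    """
    err = np.linalg.norm(w_o - W, axis=2) ** 2   # ||w_o - w_{k,i}||^2
    return 10 * np.log10(err.mean())             # mean over trials and nodes
```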
Conclusion
We proposed a new dynamic diffusion LMS algorithm that reduces the amount of communication by reliable node selection. Each node is allowed to dynamically update its estimate and to diffuse it only when it has been updated. We also presented a method in which the past estimate is substituted for the missing data in the subsequent combination step. The proposed algorithm requires less communication than the conventional diffusion LMS. Although some degradation of convergence performance due to the reduced communication is unavoidable, the simulations showed that it remains small relative to the communication savings.
References (27)
- et al., A new incremental affine projection-based adaptive algorithm for distributed networks, Signal Process. (2008)
- et al., Incremental adaptive strategies over distributed networks, IEEE Trans. Signal Process. (2007)
- et al., Enhanced incremental LMS with norm constraints for distributed in-network estimation, Signal Process. (2014)
- et al., Diffusion LMS strategies for distributed estimation, IEEE Trans. Signal Process. (2010)
- et al., Diffusion least-mean squares with adaptive combiners: formulation and performance analysis, IEEE Trans. Signal Process. (2010)
- et al., Spatio-temporal diffusion strategies for estimation and detection over networks, IEEE Trans. Signal Process. (2012)
- et al., Diffusion adaptation over networks under imperfect information exchange and non-stationary data, IEEE Trans. Signal Process. (2012)
- et al., Sparse distributed learning based on diffusion adaptation, IEEE Trans. Signal Process. (2013)
- P. Di Lorenzo, S. Barbarossa, Distributed least mean squares strategies for sparsity-aware estimation over gaussian...
- et al., Estimation of space-time varying parameters using a diffusion LMS algorithm, IEEE Trans. Signal Process. (2014)
- Multitask diffusion adaptation over networks, IEEE Trans. Signal Process.
- Fast distributed average consensus algorithms based on advection-diffusion processes, IEEE Trans. Signal Process.
- Convergence rate analysis of distributed gossip (linear parameter) estimation: fundamental limits and tradeoffs, IEEE J. Sel. Top. Signal Process.
- ☆
This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (2012R1A2A2A01011112), and in part by the MSIP (Ministry of Science, ICT & Future Planning), Korea, under the C-ITRC (Convergence Information Technology Research Center) support program (NIPA-2014-H0401-14-1001) supervised by the NIPA (National IT Industry Promotion Agency).