Elsevier

Signal Processing

Volume 172, July 2020, 107507
Generalized combined nonlinear adaptive filters: From the perspective of diffusion adaptation over networks

https://doi.org/10.1016/j.sigpro.2020.107507

Abstract

Combination of nonlinear adaptive filters (CNAF) is gaining popularity as an effective way to enhance filter performance by addressing compromises in parameter and structure settings. However, most existing algorithms study the optimal choice among different filters while neglecting the internal structure optimization of the filters, which limits the performance of CNAF. In this work, we propose a new CNAF scheme from the perspective of diffusion adaptation over networks. Instead of combining the filters in a parallel manner, we organize the candidate filters in a networked manner. Specifically, a network with two subnetworks is constructed, where the nodes in the two subnetworks serve as linear and nonlinear adaptive filters, respectively. A generalized CNAF (GCNAF) is obtained by linking the nodes of the network. The proposed GCNAF allows information exchange and sharing among nodes so as to enhance the performance of the combined filters. As a result, the search direction of each filter is also adjusted by combining those of the other filters via diffusion over the network. We show that some representative, state-of-the-art combined adaptive filters are special cases of the proposed framework. Simulations with an acoustic echo cancellation problem demonstrate the effectiveness of the proposed GCNAF.

Introduction

Although linear models have obvious practical advantages, there are many situations in which they are not appropriate and need to be replaced by nonlinear ones [1], [2]. As a result, many structures have been investigated to model nonlinear systems in practical applications, including trigonometric expansions [2], [3], [4], neural networks [5], [6], block-based Wiener-Hammerstein models [7], [8], [9], and polynomial models [1], [10], [11]. Each structure has its own pros and cons. To deal with the tradeoff between convergence rate and steady-state accuracy, a limitation inherent to adaptive filters, the combination of nonlinear adaptive filters (CNAF) [12], [13], [14] is gaining interest in nonlinear signal processing applications such as nonlinear echo cancellation and channel equalization. For example, collaborative functional link adaptive filters (CFLAF) were proposed in [2] based on the adaptive combination of filters in order to improve robustness against different degrees of nonlinearity. A combination of Volterra filters (CVF) and a combination of kernel (CK) filters were presented in [10]. While both approaches achieve similar performance that is superior to a single Volterra filter, the latter is significantly more efficient. An improved solution to the CK scheme, named D-NLAEC-AZK, was subsequently developed in [11]. However, the performance improvement is somewhat limited because the design method in [11] neglects the internal structure optimization and the information interaction between the linear and nonlinear filters.
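As background for these combination schemes, the basic adaptive convex combination of two filters, in the spirit of the convex-combination literature cited above, can be sketched with two LMS filters of different step sizes. This is an illustrative sketch only: the filter length, step sizes, and sigmoid-based mixing update are assumptions chosen for demonstration, not parameters taken from the cited works.

```python
import numpy as np

def convex_combination_lms(x, d, L=8, mu_fast=0.05, mu_slow=0.005, mu_a=0.5):
    """Convex combination of a fast and a slow LMS filter (illustrative sketch)."""
    w1 = np.zeros(L)   # fast filter: quick convergence, higher steady-state error
    w2 = np.zeros(L)   # slow filter: slow convergence, lower steady-state error
    a = 0.0            # auxiliary variable; the mixing parameter is sigmoid(a)
    y = np.zeros(len(d))
    for i in range(L - 1, len(d)):
        u = x[i - L + 1:i + 1][::-1]        # regressor [x(i), ..., x(i-L+1)]
        y1, y2 = w1 @ u, w2 @ u
        lam = 1.0 / (1.0 + np.exp(-a))      # mixing parameter in (0, 1)
        y[i] = lam * y1 + (1.0 - lam) * y2  # combined output
        e = d[i] - y[i]
        w1 += mu_fast * (d[i] - y1) * u     # each component adapts independently
        w2 += mu_slow * (d[i] - y2) * u
        a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)  # gradient step on a
    return y
```

Early in adaptation the mixing parameter favors the fast filter; near steady state it shifts toward the slower, more accurate one.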

Recently, distributed adaptation has emerged as an attractive and challenging research area with the advent of multi-agent networks. In adaptive networks, the interconnected nodes continually learn and adapt, and perform assigned tasks such as parameter estimation, from observations collected by the dispersed agents. There are several useful distributed strategies for data processing over networks, including consensus strategies [15], [16], [17], [18], [19], [20], [21], [22], [23], incremental strategies [24], [25], [26], [27], [28], and diffusion strategies [29], [30], [31], [32], [33], [34], [35], [36]. Distributed optimization algorithms based on diffusion strategies enjoy an enhanced convergence rate and outperform non-cooperative strategies under certain conditions [30]. Nodes in an adaptive network may approach the centralized solution through a continuous process of cooperation and information sharing with neighbors. This strategy has been applied to both linear and nonlinear collaborative parameter estimation problems [37], [38], [39].
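To make the diffusion mechanism concrete, an adapt-then-combine (ATC) diffusion LMS recursion in the spirit of [30] can be sketched as follows. The network size, step size, and uniform combination matrix in the test below are illustrative assumptions.

```python
import numpy as np

def atc_diffusion_lms(U, D, A, mu=0.02):
    """Adapt-then-combine diffusion LMS over N nodes (illustrative sketch).

    U: (T, N, L) regressors per time instant and node.
    D: (T, N) desired signals per time instant and node.
    A: (N, N) left-stochastic combination matrix; A[l, k] weights node l's
       intermediate estimate in node k's combination step.
    """
    T, N, L = U.shape
    W = np.zeros((N, L))                    # one weight vector per node
    for i in range(T):
        # Adaptation step: each node runs a local LMS update.
        psi = np.empty_like(W)
        for k in range(N):
            e = D[i, k] - U[i, k] @ W[k]
            psi[k] = W[k] + mu * e * U[i, k]
        # Combination step: each node fuses its neighbors' intermediates.
        W = A.T @ psi                       # W[k] = sum_l A[l, k] * psi[l]
    return W
```

With A equal to the identity matrix, the combination step is a no-op and each node reduces to a stand-alone LMS filter, i.e., the non-cooperative baseline.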

Upon investigating the above filter combination strategies and distributed network models, two important questions arise: Is it possible to devise a generalized CNAF scheme through optimization over networks? Does such a scheme have the capacity to unify existing CNAFs and to provide better performance in terms of convergence rate and steady-state accuracy? This paper answers these questions.

Inspired by the work in [30], we propose a new and generalized CNAF scheme from the perspective of distributed optimization based on the diffusion strategy over networks. The proposed GCNAF performs the filtering task via a linear filtering subnetwork and a nonlinear filtering subnetwork. In the linear filtering subnetwork, a combination of linear filters is obtained by optimally linking each node that is associated with a linear filter. Likewise, in the nonlinear filtering subnetwork, a combination of nonlinear filters is constructed by topologically linking each node that is associated with a nonlinear filter. A new scheme is then established by combining the two subnetworks under the guidance of a unified optimization objective function. Although several existing algorithms [40], [41], [42], [43] also consider information exchange between component filters through diffusion networks, they focus on linear system models. In contrast to existing CNAF methods, the proposed scheme makes full use of the diffusion ability of the network, and hence a more efficient nonlinear filtering algorithm is obtained by sharing information among nodes.

Theoretical performance analysis is conducted to characterize the stability condition of the proposed algorithm. Simulations with an acoustic echo cancellation problem are conducted to validate the effectiveness of the developed GCNAF.

The major contributions of this work are as follows:

  • A scheme of GCNAF is proposed, which links the filters (nodes) via the topological structure of the network. This scheme enables nodes to exchange intermediate estimates with each other in order to enhance the performance of the different filters.

  • In the proposed GCNAF, the search directions of the component filters, as well as the filters themselves, are optimally combined via the diffusion ability of the network. The GCNAF is completely characterized by the combination matrices of the network topology. Typical state-of-the-art combination filtering methods, such as those in [2], [10], [11], are particular cases of the presented scheme with different combination matrices.

  • An adaptive algorithm is derived to update the linear and nonlinear filters as well as the mixing parameters in GCNAF. Stability analysis is conducted and the stability conditions are specified.

The rest of the paper is organized as follows. Section 2 presents the data model and the motivation of this work. Section 3 presents the network structure of the proposed GCNAF and derives the GCNAF scheme through distributed optimization based on the diffusion strategy over networks. We investigate a particular case of the developed scheme, i.e., CK, in Section 4. Section 5 studies the stability condition in the mean sense via a theoretical analysis. The proposed scheme is validated in Section 6 through numerical simulations with a white Gaussian process and a speech signal as the system inputs. Finally, important conclusions are given in Section 7.

Section snippets

Data model and motivation

Notation. Italic letters (e.g., x and X) denote scalars. Boldface small letters (e.g., x) denote column vectors. Boldface capital letters (e.g., X) denote matrices. The superscript (·)^T represents the transpose of a matrix or a vector, and ‖·‖ denotes the Euclidean norm of its vector argument. The operator col{·} stacks its vector arguments on top of each other to generate a concatenated long column vector. The operator diag{·} forms a (block) diagonal matrix with its arguments.
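As a minimal NumPy illustration of the col{·} and diag{·} operators defined above (the vector and matrix values are arbitrary, and blkdiag is a hypothetical helper written here for demonstration):

```python
import numpy as np

# col{x1, x2}: stack vector arguments on top of each other
x1 = np.array([1.0, 2.0])
x2 = np.array([3.0, 4.0, 5.0])
col = np.concatenate([x1, x2])           # -> length-5 column vector

def blkdiag(*mats):
    """diag{...}: form a block-diagonal matrix from its matrix arguments."""
    rows = sum(m.shape[0] for m in mats)
    cols = sum(m.shape[1] for m in mats)
    out = np.zeros((rows, cols))
    r = c = 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

D = blkdiag(np.eye(2), 2.0 * np.eye(3))  # 5 x 5 block-diagonal matrix
```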

Network structure of the GCNAF

The block diagram of the proposed GCNAF is illustrated in Fig. 1(b), where we extend the parallel structure of filters to a networked structure. The approach consists of two concurrent adaptive layers: a diffusion network layer and a combination layer. The former provides building blocks for modeling the system, and is composed of a linear filtering subnetwork and a nonlinear filtering subnetwork. The latter aims to produce the GCNAF by combining the output of the two subnetworks.

For simplicity

GCNAF as an extension of existing CNAF

In this section, we show that some state-of-the-art filter combination algorithms are special cases of the presented GCNAF. Without loss of generality, we take CK in [10] with order P = 2 and N = 2 as an example. The associated diagram is illustrated in Fig. 1(a). The output of the CK filter is expressed as
$$y(i) = \sum_{p=1}^{P} y_p(i) = \sum_{k=1}^{N} \left[\lambda_{1,k}(i)\, y_{1,k}(i) + \lambda_{2,k}(i)\, y_{2,k}(i)\right],$$
where y_p(i) represents the combination of the two filters of the same order, and λ_{1,k}(i), λ_{2,k}(i) are the mixing parameters. The outputs y_{1,k}(
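A numerical illustration of this combination rule for P = 2 and N = 2; all component outputs and mixing parameters below are invented for demonstration:

```python
import numpy as np

# Hypothetical component outputs y_{p,k}(i) at one time instant i,
# for P = 2 kernel orders and N = 2 filters per order.
y_pk = np.array([[0.8, 0.3],    # y_{1,1}(i), y_{1,2}(i)
                 [0.1, 0.05]])  # y_{2,1}(i), y_{2,2}(i)
# Hypothetical mixing parameters lambda_{p,k}(i), convex per order.
lam = np.array([[0.7, 0.3],     # lambda_{1,1}(i), lambda_{1,2}(i)
                [0.6, 0.4]])    # lambda_{2,1}(i), lambda_{2,2}(i)

y_p = (lam * y_pk).sum(axis=1)  # per-order combined outputs y_p(i)
y_i = y_p.sum()                 # overall CK output y(i)
```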

Convergence analysis of GCNAF

In this section, we study the theoretical performance of the proposed GCNAF algorithm given in Algorithm 1, which involves (17), (18), (27), (28), and (30). We examine its stability condition in the mean.

To facilitate the analysis, we introduce the following weight error vectors:
$$\tilde{\phi}_{k,i}^{\mathrm{lin}} \triangleq h_T^{\mathrm{lin}} - \phi_{k,i}^{\mathrm{lin}}, \quad \tilde{h}_{k,i}^{\mathrm{lin}} \triangleq h_T^{\mathrm{lin}} - h_{k,i}^{\mathrm{lin}}, \quad \tilde{\phi}_{k,i}^{\mathrm{nlin}} \triangleq h_T^{\mathrm{nlin}} - \phi_{k,i}^{\mathrm{nlin}}, \quad \tilde{h}_{k,i}^{\mathrm{nlin}} \triangleq h_T^{\mathrm{nlin}} - h_{k,i}^{\mathrm{nlin}}.$$
We also collect the error vectors across all nodes and stack them on top of each other to form the following N × 1

Simulations

In this section, we conduct simulations to illustrate the performance of the proposed GCNAF scheme under different linear-to-nonlinear distortion power ratios (LNLRs) and compare the results with those of several other algorithms.

We assume that the reference signal d(i) follows the input-output relation
$$d(i) = h^T\left[x_i + \sigma(i)\, f(x_i)\right] + v(i),$$
where x_i = [x(i), x(i−1), …, x(i−L+1)]^T is the input signal vector of length L and v(i) is additive white Gaussian noise. The variance of v(i) was adjusted so that SNR = 30 dB.
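A sketch of how such a reference signal can be generated, assuming for illustration a constant distortion scale σ(i) ≡ σ and f(·) = tanh (the simulations in the paper may use a different nonlinearity and σ(i) schedule):

```python
import numpy as np

rng = np.random.default_rng(0)
L, T = 16, 10000
h = rng.standard_normal(L)      # hypothetical linear impulse response
x = rng.standard_normal(T)      # white Gaussian input signal
sigma = 0.1                     # assumed constant nonlinear distortion scale
f = np.tanh                     # assumed memoryless nonlinearity

# Noiseless part of d(i) = h^T [x_i + sigma * f(x_i)]
clean = np.empty(T - L + 1)
for i in range(L - 1, T):
    xi = x[i - L + 1:i + 1][::-1]        # x_i = [x(i), ..., x(i-L+1)]
    clean[i - L + 1] = h @ (xi + sigma * f(xi))

# Scale the additive white Gaussian noise v(i) so that SNR = 30 dB.
snr_db = 30.0
noise_var = np.mean(clean**2) / 10**(snr_db / 10)
d = clean + rng.normal(0.0, np.sqrt(noise_var), clean.shape)
```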

Conclusions

In this paper, a new and generalized CNAF (GCNAF) was proposed from the perspective of distributed optimization based on the diffusion strategy over networks. Different from the existing CNAF, the proposed GCNAF links the nodes via the topological structure of the network, in which each node represents either a linear or a nonlinear filter. The adaptation performance is significantly enhanced compared to CNAF since GCNAF takes advantage of the diffusion ability of the network to exchange

CRediT authorship contribution statement

Wenxia Lu: Methodology, Investigation, Writing - original draft, Conceptualization. Lijun Zhang: Validation, Formal analysis, Writing - review & editing. Jie Chen: Formal analysis, Writing - review & editing, Validation. Jingdong Chen: Supervision, Writing - review & editing.

Declaration of Competing Interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and that there is no professional or other personal interest of any nature in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this manuscript.

References (51)

  • M. Zeller et al.

    Coefficient pruning for higher-order diagonals of Volterra filters representing Wiener-Hammerstein models

    Proc. IWAENC, Seattle, WA, Sep.

    (2008)
  • V.J. Mathews et al.

    Polynomial Signal Processing

    (2000)
  • L.A. Azpicueta-Ruiz et al.

    Adaptive combination of Volterra kernels and its application to nonlinear acoustic echo cancellation

    IEEE Trans. Audio Speech Lang. Process.

    (2011)
  • L.A. Azpicueta-Ruiz et al.

    Enhanced adaptive Volterra filtering by automatic attenuation of memory regions and its application to acoustic echo cancellation

    IEEE Trans. Audio Speech Lang. Process.

    (2013)
  • J. Arenas-García et al.

    New algorithms for improved adaptive convex combination of LMS transversal filters

    IEEE Trans. Instrum. Meas.

    (2005)
  • J. Arenas-García et al.

    Mean-square performance of a convex combination of two adaptive filters

    IEEE Trans. Signal Process.

    (2006)
  • J. Arenas-García et al.

    Combinations of adaptive filters: performance and convergence properties

    IEEE Signal Process. Mag.

    (2016)
  • J. Tsitsiklis et al.

    Convergence and asymptotic agreement in distributed decision problems

    IEEE Trans. Autom. Control

    (1984)
  • I.D. Schizas et al.

    Consensus in ad hoc WSNs with noisy links - Part I: Distributed estimation of deterministic signals

    IEEE Trans. Signal Process.

    (2007)
  • P. Braca et al.

    Running consensus in wireless sensor networks

    Proc. Fusion, Cologne, Germany

    (2008)
  • I.D. Schizas et al.

    Distributed LMS for consensus-based in-network adaptive processing

    IEEE Trans. Signal Process.

    (2009)
  • A. Nedic et al.

    Distributed subgradient methods for multi-agent optimization

    IEEE Trans. Autom. Control

    (2009)
  • S. Kar et al.

    Distributed consensus algorithms in sensor networks: link failures and channel noise

    IEEE Trans. Signal Process.

    (2009)
  • K. Srivastava et al.

    Distributed asynchronous constrained stochastic optimization

    IEEE J. Sel. Top. Signal Process.

    (2011)
  • Y. Wang et al.

    Distributed Bayesian estimation of linear models with unknown observation covariances

    IEEE Trans. Signal Process.

    (2015)
This work was supported in part by the National Key Research and Development Program of China (No. 2018AAA0102200) and by NSFC grants 61671382, 61761146001, and 61811530283.
