Elsevier

Information Sciences

Volume 640, September 2023, 119041

Complex exponential graph convolutional networks

https://doi.org/10.1016/j.ins.2023.119041

Abstract

Graph convolutional networks based on spectral methods have recently achieved considerable success in processing non-Euclidean structured data such as graphs. In this paper, we propose the complex exponential graph convolutional network (CEGCN), a novel spectral convolutional architecture for performing deep learning on graphs. Specifically, we design a complex exponential polynomial filter with powerful expressive ability, whose zeros can be adjusted to prevent the over-smoothing issue. Furthermore, a CEGCN variant, called CEGCN-hp, combines graph Fourier transform-based and high-pass filters to capture the high-frequency components in the graph spectral domain. We perform a spectral analysis to illustrate the motivation and expressive power of the proposed model. Experimental results show that our model matches or outperforms various state-of-the-art baselines on three downstream tasks: semi-supervised node classification, community detection, and graph classification. Our implementation is available online.

Introduction

Over the past decade, convolutional neural networks (CNNs) have gained significant attention and achieved remarkable success in solving various deep learning problems. However, when it comes to data organized in non-Euclidean spaces, such as graphs, directly applying convolution operations becomes challenging. Graph-structured data, which is prevalent in many research areas, requires specialized techniques to process and analyze, such as spectral graph theory and signal processing tools [5]. In recent years, graph convolutional networks (GCNs) have emerged as powerful tools for solving problems on graph-structured data by aggregating node features from neighborhoods through convolutions [16], [25], [41], [46].

GCNs based on spectral methods perform graph convolution in the Fourier domain to obtain spectral representations of nodes or graphs [14]. Early studies compute graph convolutions via the eigendecomposition of the graph Laplacian. However, this approach has limitations, including intensive computation and non-spatially-localized filters. To overcome these limitations, Chebyshev polynomials were introduced as spectral filters for GCNs to aggregate messages from neighboring nodes [9]. CayleyNet [27] uses Cayley polynomials as spectral filters that specialize in narrow frequency bands. Cayley filters are localized in space and scale linearly with the input data size on sparsely connected graphs; however, CayleyNet requires matrix inversion, resulting in high computational complexity. Motivated by Chebyshev polynomials, the vanilla GCN [25], which aggregates neighbor representations, was derived as a localized first-order approximation of ChebyNet. However, if the features of a node are sufficiently similar to those of its neighbors, neighborhood aggregation may be redundant [45]. Therefore, extracting long-range patterns and feature differences among nodes is crucial for many downstream tasks [22], [23].
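As a concrete reference point, the Chebyshev filtering scheme of [9] applies a K-order polynomial of the rescaled Laplacian to a graph signal via a three-term recurrence. The following is a minimal NumPy sketch of that recurrence; the function name and the default `lmax=2.0` (exact for the normalized Laplacian) are our illustrative choices, not code from the paper.

```python
import numpy as np

def chebyshev_filter(L, x, theta, lmax=2.0):
    """Apply a Chebyshev polynomial filter to a graph signal x:
    g_theta(L) x = sum_k theta_k T_k(L_tilde) x,
    where T_k obeys the recurrence T_k = 2 L_tilde T_{k-1} - T_{k-2}."""
    n = L.shape[0]
    # Rescale the Laplacian so its spectrum lies in [-1, 1].
    L_tilde = (2.0 / lmax) * L - np.eye(n)
    t_prev, t_curr = x, L_tilde @ x          # T_0 x = x, T_1 x = L_tilde x
    out = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_next = 2 * L_tilde @ t_curr - t_prev
        out = out + theta[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return out
```

With coefficients (1, 0, ..., 0) the filter reduces to the identity, which serves as a quick sanity check; each additional order widens the filter's spatial support by one hop.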

Despite the success of GCNs, they have limited expressive power and modeling capability owing to the fixed coefficients of their polynomial filters [8], [21]. Although high-order polynomial filters can capture long-range patterns from high-hop neighbors, they are computationally expensive and produce unwanted non-local filters [27]. As a consequence, GCNs often face the over-smoothing issue, in which node representations converge to similar values and become indistinguishable from one another [8]. Additionally, many GCNs perform only low-pass filtering on the input signals, because they multiply graph signals by adjacency matrices. This mechanism ignores high-frequency components that may contain valuable information for various graph tasks [20]. Hence, there is a need for spectral GCNs that assign higher weights to the high-frequency content of the input signal by constructing high-pass filters.
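The low-pass and over-smoothing behavior described above is easy to reproduce numerically: repeatedly multiplying features by the symmetrically normalized adjacency matrix (a GCN layer with weights and nonlinearity stripped away) drives all node representations toward parallel vectors. A small sketch, with `propagate` as our own illustrative helper:

```python
import numpy as np

def propagate(A, X, steps):
    """Repeatedly apply the (low-pass) normalized adjacency
    A_hat = D^{-1/2} (A + I) D^{-1/2}, i.e. GCN-style propagation
    without weight matrices or nonlinearities."""
    A_loop = A + np.eye(A.shape[0])
    d = A_loop.sum(axis=1)
    A_hat = A_loop / np.sqrt(np.outer(d, d))
    for _ in range(steps):
        X = A_hat @ X
    return X
```

On a connected graph, many propagation steps collapse the rows of X toward multiples of a single vector (scaled by node degree), so nodes become indistinguishable: the over-smoothing effect.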

In this paper, we propose a novel graph convolutional architecture, the complex exponential graph convolutional network (CEGCN), which outperforms other competitive GCN baselines. The proposed architecture offers two improvements. First, we introduce a new class of parametric rational complex functions, complex exponential (CE-) polynomials, which form an orthogonal trigonometric basis. This basis provides stronger expressive power and lower computational complexity than existing methods [9], [27], because no matrix inversion is required. Second, to prevent the over-smoothing issue, we leverage the proposed polynomials to modify the zeros and frequency response of the filters by manipulating the signal phase. This allows us to use high-order spectral filters to identify long-range interactions between nodes. Additionally, we introduce a variant, the complex exponential graph convolutional network with high-pass filter (CEGCN-hp), which effectively extracts high-frequency components. CEGCN-hp leverages both the smoothness of labels or features over nodes and the high-frequency variations of the graph signals. Both CEGCN and CEGCN-hp achieve competitive or state-of-the-art performance compared with various baselines on three downstream tasks: semi-supervised node classification, community detection, and graph classification.
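The precise CE-polynomial construction is given later in the paper. As a loose illustration of the underlying idea only (not the authors' formulation), a frequency response built from a trigonometric basis can relocate its zeros through a phase parameter, with no matrix inversion involved:

```python
import numpy as np

def trig_response(lams, coeffs, phase=0.0, lmax=2.0):
    """Hypothetical trigonometric-basis frequency response
    h(lam) = Re{ sum_k c_k * exp(j * k * (pi * lam / lmax - phase)) }.
    NOT the paper's CE-polynomial; it only illustrates that shifting
    the phase moves the zeros of the response along the spectrum."""
    w = np.pi * np.asarray(lams) / lmax - phase
    ks = np.arange(len(coeffs))
    return np.real(np.exp(1j * np.outer(w, ks)) @ np.asarray(coeffs))
```

For instance, with a single first-order term the response is cos(pi * lam / lmax - phase): its zero sits at lam = 1 when phase = 0, and shifting the phase slides that zero along the spectrum, which is the kind of control a fixed real polynomial basis lacks.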

The main contributions of this study are as follows:

  • We propose two classes of parametric rational complex functions: CE- and CE-hp polynomials. Based on these, we construct a novel graph convolutional network and its variant combined with high-pass filters, i.e., CEGCN and CEGCN-hp, respectively. Unlike most existing polynomial-based spectral GCNs, these two architectures can change the frequency response and spectral phase to construct high-order filters for capturing long-range interactions between nodes and effectively preventing the over-smoothing issue.

  • We propose a novel trigonometric basis set, which offers effective expressive power and low computational complexity, to approximate even and odd functions for graph convolution operations.

  • We provide theoretical evidence of the effectiveness of the proposed model and analyze its graph frequency response for interpretability. Experiments on three downstream tasks show that the proposed model can achieve state-of-the-art accuracy compared to other baselines.

Section snippets

Related work

GCNs have been widely used to apply deep learning to graph-structured data, and they can be broadly categorized into two main approaches: spectral-based and spatial-based methods. Spectral-based GCNs generalize the convolution operation to graphs [5], [9]. The vanilla GCN [25] approximates the convolution operation with a neighborhood aggregation function. SGC [44] simplifies the GCN architecture by eliminating nonlinearities between GCN layers, reducing additional complexity. HesGCN

Notations and related concepts

Notations: For a complex number c, the real part, imaginary part, and conjugate of c are denoted by Re{c}, Im{c}, and c̄, respectively, and j = √−1 is the imaginary unit. For a connected undirected graph G = (V, E), V = {v₁, …, v_N} and E denote the sets of nodes and edges, respectively. The symmetric matrix A ∈ ℝ^(N×N) is the adjacency matrix of graph G. Each entry a_lr of A denotes the weight of the edge between nodes l and r, defined as

a_lr = 1 if (l, r) ∈ E, and a_lr = 0 if (l, r) ∉ E.

We refer to a signal (or feature) matrix X = [x₁, …, x_N] ∈ ℝ^(N×F), where x_i is the
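For concreteness, the adjacency matrix and combinatorial Laplacian L = D − A implied by this notation can be assembled as follows (a minimal sketch; `build_graph` is our own illustrative helper, with edges given as 0-indexed node pairs):

```python
import numpy as np

def build_graph(num_nodes, edges):
    """Build the symmetric adjacency matrix A (a_lr = 1 iff (l, r) is
    an edge, 0 otherwise) and the Laplacian L = D - A, where D is the
    diagonal degree matrix."""
    A = np.zeros((num_nodes, num_nodes))
    for l, r in edges:
        A[l, r] = A[r, l] = 1.0   # undirected graph: A is symmetric
    L = np.diag(A.sum(axis=1)) - A
    return A, L
```

By construction every row of L sums to zero, so the constant signal lies in its null space, corresponding to graph frequency zero.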

Proposed model

In this section, we propose a novel spectral GCN model called the CEGCN, which utilizes complex exponential polynomials (CE-polynomials) to enhance the performance for downstream tasks.

Spectral analysis

We conduct a comprehensive study of CEGCN from the perspective of graph spectrum. Firstly, we analyze the approximation error introduced by the Taylor expansion. Secondly, we elucidate the expressive power and rationality of the CE-polynomial filters which are key components of CEGCN. Finally, we compare the frequency responses of different filters, including the CE-polynomial filters and other filters used in baselines, to gain insights into the implementation process of filtering operations
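To make the low-pass/high-pass contrast concrete, the following sketch evaluates two prototypical responses over the normalized-Laplacian spectrum [0, 2]; these are textbook illustrations of the two filter families, not the specific filters compared in this section:

```python
import numpy as np

# Sample the normalized-Laplacian spectrum [0, 2].
lam = np.linspace(0.0, 2.0, 5)

# A prototypical low-pass response attenuates high graph frequencies,
# while its high-pass complement emphasizes them instead.
low_pass = 1.0 - lam / 2.0   # 1 at lam = 0, 0 at lam = 2
high_pass = lam / 2.0        # 0 at lam = 0, 1 at lam = 2
```

A GCN-style aggregation behaves like the first response, which is why signal variation between neighboring nodes (the high-frequency content) is suppressed unless a high-pass branch is added.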

Experiments

In this section, we conduct extensive experiments on three downstream tasks to verify the performance of the proposed model and its variant: semi-supervised node classification, community detection, and graph classification. The implementation is based on the PyTorch Geometric library and message passing networks. The code to reproduce our experiments is publicly available on GitHub.

Conclusions

In this paper, we propose a novel graph convolutional network architecture, called CEGCN, along with a variant incorporating high-pass filters. We also propose a new trigonometric basis set that can approximate functions without computing a matrix inversion. Our proposed model operates on feature matrices using graph spectral filters and can prevent the serious over-smoothing issue. Experiments show that the proposed model achieves comparable or superior performance compared to various

CRediT authorship contribution statement

Zichao Zhang: Conceptualization, Formal analysis, Methodology, Software, Writing – original draft. Yihao Zhang: Conceptualization, Methodology, Software, Writing – review & editing. Yu Wang: Data curation, Investigation. Mingyuan Ma: Investigation, Validation. Jin Xu: Project administration, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

The work is supported by the National Key R&D Program of China (No. 2019YFA0706401), and General Program of National Natural Science Foundation of China (No. 62272009, No. 62172014, No. 62172015).

References (50)

  • J. Chen et al.

FastGCN: fast learning with graph convolutional networks via importance sampling

  • M. Chen et al.

    Simple and deep graph convolutional networks

  • M. Defferrard et al.

    Convolutional neural networks on graphs with fast localized spectral filtering

  • X. Fan et al.

    Gated graph pooling with self-loop for graph classification

  • R. Fang et al.

    Structure-preserving graph representation learning

  • F. Feng et al.

    Graph adversarial training: dynamically regularizing based on graph structure

    IEEE Trans. Knowl. Data Eng.

    (2019)
  • W. Feng et al.

    Graph random neural networks for semi-supervised learning on graphs

  • H. Gao et al.

Graph U-Nets

  • W.L. Hamilton et al.

    Inductive representation learning on large graphs

  • M. He et al.

BernNet: learning arbitrary graph spectral filters via Bernstein approximation

  • X. He et al.

LightGCN: simplifying and powering graph convolution network for recommendation

  • Z. Hou et al.

GraphMAE: self-supervised masked graph autoencoders

  • W. Hu et al.

    Graph signal processing for geometric data and beyond: theory and applications

    IEEE Trans. Multimed.

    (2021)
  • E. Isufi et al.

    Autoregressive moving average graph filtering

    IEEE Trans. Signal Process.

    (2017)
  • W. Jie et al.

    Semi-supervised learning with mixed-order graph convolutional networks

    Inf. Sci.

    (2021)