
Invariance-Preserving Localized Activation Functions for Graph Neural Networks



Abstract:

Graph signals are signals with an irregular structure that can be described by a graph. Graph neural networks (GNNs) are information processing architectures tailored to these graph signals and made of stacked layers that compose graph convolutional filters with nonlinear activation functions. Graph convolutions endow GNNs with invariance to permutations of the graph nodes' labels. In this paper, we consider the design of trainable nonlinear activation functions that take into consideration the structure of the graph. This is accomplished by using graph median filters and graph max filters, which mimic linear graph convolutions and are shown to retain the permutation invariance of GNNs. We also discuss modifications to the backpropagation algorithm necessary to train local activation functions. The advantages of localized activation function architectures are demonstrated in four numerical experiments: source localization on synthetic graphs, authorship attribution of 19th-century novels, movie recommender systems, and scientific article classification. In all cases, localized activation functions are shown to improve model capacity.
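The core idea in the abstract, replacing pointwise nonlinearities with neighborhood-level median and max operations, can be illustrated with a minimal sketch. The functions below are hypothetical simplifications assuming a binary adjacency matrix and single-hop neighborhoods; the paper's actual operators are trainable and multi-hop, details the abstract does not specify. Permutation equivariance holds because each node's output depends only on the multiset of signal values in its neighborhood, so relabeling the nodes commutes with the activation.

```python
import numpy as np

def graph_max_activation(A, x):
    """Localized max activation (illustrative sketch, not the paper's exact operator).

    Each node outputs the maximum signal value over its one-hop
    neighborhood, including itself.
    A: (N, N) binary adjacency matrix; x: (N,) graph signal.
    """
    N = A.shape[0]
    mask = (A + np.eye(N)) > 0  # boolean neighborhood masks with self-loops
    return np.array([x[mask[i]].max() for i in range(N)])

def graph_median_activation(A, x):
    """Localized median activation over one-hop neighborhoods (same conventions)."""
    N = A.shape[0]
    mask = (A + np.eye(N)) > 0
    return np.array([np.median(x[mask[i]]) for i in range(N)])

# Sanity check: relabeling the nodes with a permutation matrix P and
# applying the activation gives the same result as activating first
# and then relabeling (permutation equivariance).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # 4-node path graph
x = np.array([3.0, 1.0, 4.0, 1.0])
P = np.eye(4)[[2, 0, 3, 1]]  # an arbitrary node relabeling
assert np.allclose(graph_max_activation(P @ A @ P.T, P @ x),
                   P @ graph_max_activation(A, x))
```

Unlike a pointwise ReLU, these activations mix information across edges, which is what makes them "localized": they respect the graph topology the same way a graph convolutional filter does.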
Published in: IEEE Transactions on Signal Processing ( Volume: 68)
Page(s): 127 - 141
Date of Publication: 25 November 2019

