Invariance-Preserving Localized Activation Functions for Graph Neural Networks
IEEE Transactions on Signal Processing (IF 5.4), Pub Date: 2020-01-01, DOI: 10.1109/tsp.2019.2955832
Luana Ruiz, Fernando Gama, Antonio Garcia Marques, Alejandro Ribeiro

Graph signals are signals with an irregular structure that can be described by a graph. Graph neural networks (GNNs) are information processing architectures tailored to these graph signals and made of stacked layers that compose graph convolutional filters with nonlinear activation functions. Graph convolutions endow GNNs with invariance to permutations of the graph nodes’ labels. In this paper, we consider the design of trainable nonlinear activation functions that take into account the structure of the graph. This is accomplished by using graph median filters and graph max filters, which mimic linear graph convolutions and are shown to retain the permutation invariance of GNNs. We also discuss the modifications to the backpropagation algorithm that are necessary to train local activation functions. The advantages of localized activation function architectures are demonstrated in four numerical experiments: source localization on synthetic graphs, authorship attribution of 19th century novels, movie recommender systems, and scientific article classification. In all cases, localized activation functions are shown to improve model capacity.
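To make the construction concrete, here is a minimal NumPy sketch of a localized max-filter activation in the spirit the abstract describes: each node takes the maximum of the signal over its 1- through K-hop neighborhoods and combines those per-hop maxima with trainable weights. The function name graph_max_activation, the 0/1 adjacency matrix A, and the weight vector theta are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def graph_max_activation(x, A, theta):
        # x: (N,) node signal; A: (N, N) 0/1 adjacency; theta: (K+1,) weights.
        # theta[0] weights the node's own value; theta[k] weights the k-hop max.
        N = x.shape[0]
        K = len(theta) - 1
        out = theta[0] * x.astype(float)
        reach = np.eye(N)        # 0-hop reachability: each node reaches itself
        for k in range(1, K + 1):
            reach = ((reach + reach @ A) > 0).astype(float)  # grow reachability by one hop
            masked = np.where(reach > 0, x, -np.inf)         # hide nodes outside each neighborhood
            out += theta[k] * masked.max(axis=1)             # per-node max over the k-hop neighborhood
        return out

    # Tiny usage example on a 3-node path graph with K = 1:
    A = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
    x = np.array([1., -2., 3.])
    theta = np.array([1.0, 0.5])
    y = graph_max_activation(x, A, theta)    # [1.5, -0.5, 4.5]

Because the max is symmetric in its arguments and the neighborhoods are determined by the graph itself, relabeling the nodes (x to Px, A to P A P^T for a permutation matrix P) simply permutes the output, which is the equivariance property that lets these activations preserve the permutation invariance of the overall GNN; taking a median over the same neighborhoods would give the median-filter counterpart.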
