Node Masking: Making Graph Neural Networks Generalize and Scale Better
arXiv - CS - Artificial Intelligence | Pub Date: 2020-01-17 | DOI: arXiv:2001.07524 | Pushkar Mishra, Aleksandra Piktus, Gerard Goossen, Fabrizio Silvestri
Graph Neural Networks (GNNs) have received a lot of interest in recent years. From the early spectral architectures, which could operate only on undirected graphs under a transductive learning paradigm, to the current state-of-the-art spatial ones, which apply inductively to arbitrary graphs, GNNs have seen significant contributions from the research community. In this paper, we discuss theoretical tools for better visualizing the operations performed by state-of-the-art spatial GNNs. We analyze the inner workings of these architectures and introduce a simple concept, node masking, that allows them to generalize and scale better. To empirically validate the theory, we perform several experiments on three widely used benchmark datasets for node classification in both transductive and inductive settings.
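The abstract does not specify how node masking is implemented; as a rough illustration of the general idea (randomly hiding a subset of nodes so they do not act as message sources during neighborhood aggregation), here is a minimal, hypothetical sketch of one masked mean-aggregation step. The function name, masking scheme, and NumPy formulation are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def masked_mean_aggregate(features, adj, mask_prob=0.2, rng=None):
    """One hypothetical message-passing step with random node masking.

    Each node is independently masked with probability `mask_prob`;
    masked nodes are excluded as message sources, so neighbors
    aggregate only over the visible subset of the graph.

    features : (n, d) node feature matrix
    adj      : (n, n) adjacency matrix (adj[i, j] = 1 if j -> i)
    """
    rng = rng or np.random.default_rng()
    n = features.shape[0]
    keep = rng.random(n) >= mask_prob            # True = node stays visible
    adj_masked = adj * keep[np.newaxis, :]       # drop edges from masked sources
    deg = adj_masked.sum(axis=1, keepdims=True)  # per-node visible in-degree
    deg = np.maximum(deg, 1.0)                   # guard against divide-by-zero
    return adj_masked @ features / deg           # mean over visible neighbors
```

With `mask_prob=0.0` this reduces to plain mean aggregation over all neighbors; during training, a nonzero `mask_prob` forces each node's representation not to over-rely on any particular neighbor.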
Updated: 2020-10-19