Higher-Order Graph Convolutional Networks With Multi-Scale Neighborhood Pooling for Semi-Supervised Node Classification
IEEE Access (IF 3.9) Pub Date: 2021-02-18, DOI: 10.1109/access.2021.3060173
Xun Liu , Guoqing Xia , Fangyuan Lei , Yikuan Zhang , Shihui Chang

Existing popular methods for semi-supervised node classification with high-order convolution improve the learning ability of graph convolutional networks (GCNs) by capturing feature information from high-order neighborhoods. However, these high-order convolution methods usually require many parameters and incur high computational complexity. To address these limitations, we propose HCNP, a new higher-order GCN for semi-supervised node learning tasks, which simultaneously aggregates information from multiple neighborhoods by constructing high-order convolutions. In HCNP, we reduce the number of parameters with a weight-sharing mechanism and combine neighborhood information via multi-scale neighborhood pooling. Further, HCNP does not require a large number of hidden units; it uses only a few parameters and exhibits low complexity. We show that HCNP matches GCN in terms of complexity and parameter count. Comprehensive evaluations on publication citation datasets (Citeseer, Pubmed, and Cora) demonstrate that the proposed methods outperform MixHop in most cases while maintaining lower complexity and fewer parameters, and achieve state-of-the-art performance in terms of accuracy and parameter count compared to other baselines.
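The abstract does not include an implementation, but its core idea (one shared weight matrix applied across several neighborhood orders, with the scales merged by pooling rather than concatenation) can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions: the class names, the choice of element-wise max pooling, the dense normalized adjacency, and the two-layer layout are hypothetical and not taken from the paper.

```python
# A minimal sketch (not the authors' code) of the idea described in the abstract:
# a higher-order graph convolution that applies one shared weight matrix to
# several neighborhood scales (A X W, A^2 X W, ...) and merges them by pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HigherOrderGraphConv(nn.Module):
    """Shared-weight convolution over multiple neighborhood orders (hypothetical layer)."""

    def __init__(self, in_dim, out_dim, max_order=2):
        super().__init__()
        # One weight matrix shared by all neighborhood orders (weight sharing).
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        self.max_order = max_order

    def forward(self, x, adj_norm):
        # adj_norm: normalized adjacency matrix (dense tensor here for simplicity).
        h = self.weight(x)                     # X W, computed once and reused
        outputs = []
        prop = h
        for _ in range(self.max_order):
            prop = adj_norm @ prop             # A^k X W for k = 1..max_order
            outputs.append(prop)
        # Multi-scale neighborhood pooling: element-wise max over the scales
        # (the exact pooling operator used by the authors may differ).
        return torch.stack(outputs, dim=0).max(dim=0).values


class HCNPSketch(nn.Module):
    """Two-layer model in the spirit of the abstract (layout is an assumption)."""

    def __init__(self, in_dim, hidden_dim, num_classes, max_order=2):
        super().__init__()
        self.conv1 = HigherOrderGraphConv(in_dim, hidden_dim, max_order)
        self.conv2 = HigherOrderGraphConv(hidden_dim, num_classes, max_order)

    def forward(self, x, adj_norm):
        h = F.relu(self.conv1(x, adj_norm))
        return self.conv2(h, adj_norm)         # logits for semi-supervised node classification
```

Because the pooled output has the same dimensionality as a single-order output, merging scales this way adds no parameters beyond the shared weight matrix, which is consistent with the abstract's claim that HCNP matches GCN in parameter count, in contrast to concatenation-based designs such as MixHop.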

Updated: 2021-03-02