Multi-hop Graph Convolutional Network with High-order Chebyshev Approximation for Text Reasoning
arXiv - CS - Computation and Language. Pub Date: 2021-06-08. DOI: arxiv-2106.05221. Shuoran Jiang, Qingcai Chen, Xin Liu, Baotian Hu, Lisai Zhang
Graph convolutional networks (GCNs) have become popular in various natural
language processing (NLP) tasks owing to their strength in modeling long-range
and non-consecutive word interactions. However, the single-hop graph reasoning
in existing GCNs may miss some important non-consecutive dependencies. In this
study, we define a spectral graph convolutional network with a high-order
dynamic Chebyshev approximation (HDGCN), which augments multi-hop graph
reasoning by fusing messages aggregated from direct and long-range dependencies
into a single convolutional layer. To alleviate over-smoothing in the
high-order Chebyshev approximation, a multi-vote-based cross-attention
mechanism (MVCAttn) with linear computational complexity is also proposed.
Empirical results on four transductive and inductive NLP tasks, together with
an ablation study, verify the efficacy of the proposed model. Our source code
is available at https://github.com/MathIsAll/HDGCN-pytorch.
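For context, the Chebyshev approximation that HDGCN builds on expresses a spectral graph filter as a polynomial in the rescaled graph Laplacian, where the K-th polynomial term aggregates information from K-hop neighborhoods. The sketch below shows only this standard K-th order Chebyshev basis computation (not the paper's dynamic high-order variant or MVCAttn); the function name and NumPy implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def chebyshev_basis(x, adj, K):
    """Illustrative sketch of the K-th order Chebyshev graph-filter basis.

    x:   (N, F) node feature matrix
    adj: (N, N) symmetric adjacency matrix (no self-loops required)
    K:   polynomial order; term k aggregates up to k-hop neighborhoods
    Returns a (K, N, F) array of the terms T_0(L~)x, ..., T_{K-1}(L~)x.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # Rescale eigenvalues into [-1, 1]; assuming lambda_max ≈ 2,
    # a common simplification, this is L~ = L - I.
    lap_tilde = lap - np.eye(n)
    # Chebyshev recurrence: T_0 = x, T_1 = L~ x, T_k = 2 L~ T_{k-1} - T_{k-2}
    terms = [x, lap_tilde @ x]
    for _ in range(2, K):
        terms.append(2 * lap_tilde @ terms[-1] - terms[-2])
    return np.stack(terms[:K])
```

In a Chebyshev-based GCN layer, each of these K basis terms is multiplied by its own learnable weight matrix and the results are summed, which is how a single layer can fuse messages from direct (1-hop) and longer-range (multi-hop) dependencies at once.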
Updated: 2021-06-10