An efficiency relation-specific graph transformation network for knowledge graph representation learning
Information Processing & Management (IF 8.6), Pub Date: 2022-09-15, DOI: 10.1016/j.ipm.2022.103076
Zhiwen Xie, Runjie Zhu, Jin Liu, Guangyou Zhou, Jimmy Xiangji Huang

Knowledge graph representation learning (KGRL) aims to infer the missing links between target entities based on existing triples. Graph neural networks (GNNs) have recently been introduced as one of the latest architectures serving the KGRL task by aggregating neighborhood information. However, current GNN-based methods have fundamental limitations in both modelling multi-hop distant neighbors and selecting relation-specific neighborhood information from vast numbers of neighbors. In this study, we propose a new relation-specific graph transformation network (RGTN) for the KGRL task. Specifically, the proposed RGTN is the first model that transforms a relation-based graph into a new path-based graph by generating useful paths that connect heterogeneous relations and multi-hop neighbors. Unlike existing GNN-based methods, our approach is able to adaptively select the most useful paths for each specific relation and to effectively build path-based connections between unconnected distant entities. The transformed graph structure opens a new way to model multi-hop neighbors of arbitrary length, which leads to more effective embedding learning. To verify the effectiveness of the proposed model, we conduct extensive experiments on three standard benchmark datasets, i.e., WN18RR, FB15k-237 and YAGO3-10-DR. Experimental results show that the proposed RGTN achieves promising results and even outperforms other state-of-the-art models on the KGRL task (e.g., compared to other state-of-the-art GNN-based methods, our model achieves a 2.5% improvement in H@10 on WN18RR, a 1.2% improvement in H@10 on FB15k-237 and a 6% improvement in H@10 on YAGO3-10-DR).
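The abstract does not include implementation details, but the core idea of composing relation-specific adjacency matrices into a new path-based graph can be sketched. The PyTorch snippet below is a minimal, hypothetical illustration rather than the authors' code: the class name RelationGraphTransformLayer, the two-hop composition, and the softmax-based relation selection are assumptions made for clarity, in the general style of graph transformation networks.

```python
import torch
import torch.nn as nn


class RelationGraphTransformLayer(nn.Module):
    """Hypothetical sketch (not the paper's implementation): learn a soft
    selection over relation-specific adjacency matrices, then compose two
    selected graphs by matrix product to form path-based (2-hop) edges."""

    def __init__(self, num_relations: int):
        super().__init__()
        # one learnable score per relation, for each of the two hops
        self.hop1_logits = nn.Parameter(torch.zeros(num_relations))
        self.hop2_logits = nn.Parameter(torch.zeros(num_relations))

    def forward(self, rel_adjs: torch.Tensor) -> torch.Tensor:
        # rel_adjs: (num_relations, N, N) dense adjacency matrix per relation
        w1 = torch.softmax(self.hop1_logits, dim=0)  # soft relation selection, hop 1
        w2 = torch.softmax(self.hop2_logits, dim=0)  # soft relation selection, hop 2
        a1 = torch.einsum("r,rij->ij", w1, rel_adjs)  # weighted 1-hop graph
        a2 = torch.einsum("r,rij->ij", w2, rel_adjs)
        # matrix product connects entities reachable via 2-hop relation paths,
        # yielding a new path-based adjacency usable by a downstream GNN encoder
        return a1 @ a2


if __name__ == "__main__":
    num_relations, num_entities = 4, 6
    adjs = torch.randint(0, 2, (num_relations, num_entities, num_entities)).float()
    layer = RelationGraphTransformLayer(num_relations)
    path_adj = layer(adjs)  # (N, N) path-based adjacency over 2-hop neighbors
    print(path_adj.shape)
```

Stacking such layers would compose longer relation paths, which is one plausible way to connect otherwise unlinked distant entities as described in the abstract.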


