Entity-Context and Relation-Context Combined Knowledge Graph Embeddings
Arabian Journal for Science and Engineering (IF 2.9), Pub Date: 2021-07-15, DOI: 10.1007/s13369-021-05977-x
Yong Wu, Xiaoming Fan, Binjun Wang, Wei Li

Hierarchical structures are very common in knowledge graphs, and semantic-hierarchy-preserving knowledge graph embeddings have achieved promising results on the link prediction task. However, handling the one-to-many, many-to-one, and many-to-many relations that carry hierarchical information is challenging and introduces entity-indistinguishability issues. To address this limitation, this paper proposes a novel knowledge graph embedding model, Entity-context and Relation-context combined Knowledge Graph Embeddings (ERKE), in which each relation is defined as a rotation with variable moduli from the source entity to the target entity in the polar coordinate system. The embedding space can be viewed as the combination of two spaces: a modulus space and a phase space. In the modulus space, modulus information is used to model semantic hierarchies, and entity-context information is adopted to make node representations more expressive. In addition, building on the propagation rule of the Graph Convolutional Network (GCN), a new GCN variant suited to processing semantic hierarchies in knowledge graphs is proposed. In the phase space, relation-context information is used to make entities easier to distinguish: the rotation operation in the polar coordinate system is transformed into an addition operation in the rectangular coordinate system, and relations between entities are mapped onto entity-specific hyperplanes. The proposed method is evaluated on three benchmark datasets, and the experimental results demonstrate that it learns the semantic hierarchies in knowledge graphs while simultaneously improving prediction accuracy for complex one-to-many, many-to-one, and many-to-many cases.
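The abstract does not give ERKE's exact scoring function, so the following is only a minimal sketch of the modulus/phase decomposition it describes, written in a HAKE-style form: the relation scales the head entity's moduli (hierarchy level) and rotates its phases, with the rotation expressed as an addition of angles modulo 2π. All names, the distance norms, and the weighting term lambda_phase are illustrative assumptions, not taken from the paper; the entity-context GCN and the entity-specific hyperplane projection are not shown here.

```python
import numpy as np

def polar_embedding_distance(h_mod, h_phase, r_mod, r_phase, t_mod, t_phase,
                             lambda_phase=1.0):
    """Illustrative distance for a polar-coordinate embedding (lower = more plausible).

    Modulus part: the relation scales the head's moduli element-wise, so entities
    at different hierarchy levels end up with different modulus magnitudes.
    Phase part: the relation rotates the head's phases; written in angle form,
    the rotation becomes an addition modulo 2*pi.
    """
    # Modulus space: element-wise scaling, compared against the tail moduli.
    d_mod = np.linalg.norm(h_mod * r_mod - t_mod, ord=2)

    # Phase space: rotation as addition of angles, wrapped to [0, 2*pi).
    phase_diff = (h_phase + r_phase - t_phase) % (2 * np.pi)
    # sin(diff / 2) treats angles near 0 and near 2*pi as equally close.
    d_phase = np.linalg.norm(np.sin(phase_diff / 2.0), ord=1)

    return d_mod + lambda_phase * d_phase


if __name__ == "__main__":
    dim = 8
    rng = np.random.default_rng(0)
    h_m, r_m, t_m = rng.uniform(0.5, 1.5, (3, dim))
    h_p, r_p, t_p = rng.uniform(0.0, 2 * np.pi, (3, dim))
    print("distance:", polar_embedding_distance(h_m, h_p, r_m, r_p, t_m, t_p))
```

In such a decomposition, the modulus term captures which level of the hierarchy an entity sits on, while the phase term separates entities that share a level, which is where the paper's relation-context information is said to help.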


