Learning Context-based Embeddings for Knowledge Graph Completion
Journal of Data and Information Science Pub Date : 2022-04-01 , DOI: 10.2478/jdis-2022-0009
Fei Pu 1 , Zhongwei Zhang 1 , Yan Feng 1 , Bailin Yang 1

Abstract

Purpose: Due to the inherent incompleteness of knowledge graphs (KGs), predicting missing links between entities is an important task. Many previous approaches are static, which poses a notable problem: all meanings of a polysemous entity share a single embedding vector. This study proposes a polysemous embedding approach for missing link prediction, named KG embedding under relational contexts (ContE for short).

Design/methodology/approach: ContE models and infers different relation patterns by considering the context of a relation, which is implicit in the relation's local neighborhood. The forward and backward impacts of the relation are mapped to two different embedding vectors, which represent the relation's contextual information. Then, according to the entity's position in the triple, its polysemous representation is obtained by adding its static embedding vector to the corresponding context vector of the relation.

Findings: ContE is fully expressive; that is, given any ground truth over the triples, there is an assignment of embeddings to entities and relations that precisely separates the true triples from the false ones. ContE can model four connectivity patterns: symmetry, antisymmetry, inversion, and composition.

Research limitations: In practice, ContE requires a grid search to find the parameters that yield the best performance, which is time-consuming. It sometimes needs longer entity vectors than some other models to achieve comparable or better performance.

Practical implications: ContE is a bilinear model, simple enough to be applied to large-scale KGs. By considering the contexts of relations, ContE can distinguish the exact meaning of an entity across different triples, so that when performing compositional reasoning it is able to infer the connectivity patterns of relations and achieves good performance on link prediction tasks.

Originality/value: ContE considers the contexts of entities in terms of their positions in triples and the relations they link to. To capture relational contexts, it decomposes a relation vector into two vectors: a forward impact vector and a backward impact vector. ContE has the same low computational complexity as TransE. It therefore provides a new approach to contextualized knowledge graph embedding.
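The core idea described above — shifting an entity's static embedding by a relation-specific forward or backward context vector, depending on the entity's position in the triple — can be sketched in a few lines. This is a minimal illustrative sketch only: the entity and relation names are hypothetical toy data, and a plain dot product stands in for the paper's actual bilinear scoring function, whose exact form is given in the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # toy embedding dimension

# Hypothetical toy vocabulary: one static embedding per entity.
entities = {name: rng.normal(size=dim) for name in ["paris", "france"]}

# Each relation is decomposed into a forward and a backward context vector.
relations = {"capital_of": {"fwd": rng.normal(size=dim),
                            "bwd": rng.normal(size=dim)}}

def contextual_score(head, rel, tail):
    """Score a triple with context-shifted (polysemous) embeddings.

    The head entity is shifted by the relation's forward impact vector and
    the tail by its backward impact vector, so the same entity receives a
    different representation under different relations. A dot product is
    used here as a stand-in for the paper's bilinear scoring function.
    """
    h = entities[head] + relations[rel]["fwd"]
    t = entities[tail] + relations[rel]["bwd"]
    return float(h @ t)

print(contextual_score("paris", "capital_of", "france"))
```

Because each triple only requires two vector additions and one inner product over the embedding dimension, scoring remains linear in the embedding size, consistent with the abstract's claim that ContE matches TransE's low computational complexity.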
