Modeling of complex internal logic for knowledge base completion
Applied Intelligence (IF 3.4), Pub Date: 2020-06-05, DOI: 10.1007/s10489-020-01734-z
Hongbin Wang, Shengchen Jiang, Zhengtao Yu

Knowledge base completion has been an active research topic for knowledge graphs. However, existing methods have limited learning and generalization abilities and neglect the rich internal logic between entities and relations. To address these problems, this paper proposes modeling of complex internal logic for knowledge base completion. The method first integrates semantic information into the knowledge representation model and strengthens the credibility scores of positive and negative triples using the semantic gap, which not only makes the model converge faster but also yields knowledge representations that fuse semantic information. We then introduce the concept of a knowledge subgraph: through a memory network and a multi-hop attention mechanism, the knowledge in the subgraph is merged with the triple to be completed. During training, we depart from the classical memory network by adding reinforcement learning, using the reciprocal of the correct reasoning knowledge information in the model output as the reward value; the trained model then completes the missing triple information. The method thus exploits the strong computational power of knowledge representation together with the learning and generalization abilities of the memory network and the multi-hop attention mechanism. Experimental results on the FB15k and WN18 datasets show that the proposed method performs well on knowledge base completion and effectively improves Hits@10 and MRR values. We also verified the practicability of the proposed method in a recommendation system and a question answering system backed by a knowledge base, with good results.
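
The abstract does not give the scoring function, so the following is only a minimal sketch of the first step: a TransE-style translational score with a hypothetical semantic-gap weight that widens the margin between a positive triple and its corrupted negative. The names score, margin_loss, and semantic_gap are illustrative assumptions, not the authors' code.

# Minimal sketch (not the authors' implementation): a TransE-style score
# with a hypothetical semantic-gap term that strengthens the separation
# between positive and corrupted (negative) triples.
import numpy as np

def score(h, r, t):
    """Translational credibility score: lower means more plausible."""
    return np.linalg.norm(h + r - t, ord=1)

def margin_loss(pos, neg, semantic_gap, base_margin=1.0):
    """Margin ranking loss; the assumed semantic_gap in [0, 1] widens the
    required margin between positive and negative triples (one illustrative
    reading of the abstract, not the paper's exact formulation)."""
    margin = base_margin * (1.0 + semantic_gap)
    return max(0.0, margin + score(*pos) - score(*neg))

# Toy usage with random 50-dimensional embeddings.
rng = np.random.default_rng(0)
h, r, t, t_corrupt = (rng.normal(size=50) for _ in range(4))
print(margin_loss((h, r, t), (h, r, t_corrupt), semantic_gap=0.3))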
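
Hits@10 and MRR are the standard link-prediction metrics reported on FB15k and WN18. The short reference computation below uses their conventional definitions from the rank of the correct entity; it is independent of the paper's model.

# Standard link-prediction metrics, computed from the rank of the correct
# entity among all candidate completions (conventional definitions, not
# code from the paper).
def hits_at_k(ranks, k=10):
    """Fraction of test triples whose correct entity ranks within the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mean_reciprocal_rank(ranks):
    """Average of 1/rank over all test triples."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Toy usage: ranks of the true entity for four test triples.
ranks = [1, 3, 12, 7]
print(hits_at_k(ranks, k=10))        # 0.75
print(mean_reciprocal_rank(ranks))   # (1 + 1/3 + 1/12 + 1/7) / 4 ≈ 0.39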


