Embedding models for episodic knowledge graphs
Journal of Web Semantics (IF 2.1), Pub Date: 2018-12-24, DOI: 10.1016/j.websem.2018.12.008
Yunpu Ma, Volker Tresp, Erik A. Daxberger

In recent years, a number of large-scale triple-oriented knowledge graphs have been generated, and various models have been proposed to perform learning in those graphs. Most knowledge graphs are static and reflect the world in its current state. In reality, of course, the state of the world is changing: a healthy person becomes diagnosed with a disease, and a new president is inaugurated. In this paper, we extend models for static knowledge graphs to temporal knowledge graphs. This enables us to store episodic data and to generalize to new facts (inductive learning). We generalize leading learning models for static knowledge graphs (i.e., Tucker, RESCAL, HolE, ComplEx, DistMult) to temporal knowledge graphs. In particular, we introduce a new tensor model, ConT, with superior generalization performance. The performance of all proposed models is analyzed on two different datasets: the Global Database of Events, Language, and Tone (GDELT) and the Integrated Crisis Early Warning System (ICEWS) database. We argue that temporal knowledge graph embeddings might also serve as models for cognitive episodic memory (facts we remember and can recollect), and that a semantic memory (current facts we know) can be generated from episodic memory by a marginalization operation. We validate this episodic-to-semantic projection hypothesis on the ICEWS dataset.
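To make the core idea concrete, the sketch below shows one plausible way to extend a static embedding model (here a DistMult-style multilinear product) to episodic facts by adding a time embedding, and how a semantic score could then be obtained by marginalizing out the time dimension. This is a minimal sketch based only on the abstract: the variable names, dimensions, and scoring functions are illustrative assumptions and do not reproduce the paper's exact ConT formulation.

# Minimal sketch (not the authors' exact formulation): episodic facts are
# quadruples (subject, predicate, object, time), each time step gets its own
# embedding, and a semantic score is obtained by summing over time.
import numpy as np

rng = np.random.default_rng(0)

n_entities, n_relations, n_times, rank = 100, 20, 30, 16

# Embedding tables (randomly initialized here; learned from data in practice).
E = rng.normal(scale=0.1, size=(n_entities, rank))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_relations, rank))  # relation embeddings
T = rng.normal(scale=0.1, size=(n_times, rank))      # time embeddings


def episodic_score(s, p, o, t):
    """Score of the quadruple (s, p, o, t): a 4-way multilinear product,
    i.e. a DistMult-style model with an additional time factor."""
    return float(np.sum(E[s] * R[p] * E[o] * T[t]))


def semantic_score(s, p, o):
    """Episodic-to-semantic projection: marginalize the time dimension by
    summing the episodic scores over all time steps."""
    return float(np.sum(E[s] * R[p] * E[o] * T.sum(axis=0)))


if __name__ == "__main__":
    print(episodic_score(s=3, p=5, o=7, t=12))  # how plausible is the event at time t?
    print(semantic_score(s=3, p=5, o=7))        # how plausible is the fact overall?

The same recipe carries over to the other factorization models named in the abstract by inserting the time embedding (or a time-indexed core tensor) into their respective scoring functions.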



