LightCAKE: A Lightweight Framework for Context-Aware Knowledge Graph Embedding
arXiv - CS - Artificial Intelligence · Pub Date: 2021-02-22 · DOI: arxiv-2102.10826
Zhiyuan Ning, Ziyue Qiao, Hao Dong, Yi Du, Yuanchun Zhou

For knowledge graphs, knowledge graph embedding (KGE) models learn to project symbolic entities and relations into a low-dimensional continuous vector space based on the observed triplets. However, existing KGE models cannot make a proper trade-off between graph context and model complexity, which leaves them far from satisfactory. In this paper, we propose a lightweight framework named LightCAKE for context-aware KGE. LightCAKE uses an iterative aggregation strategy to integrate multi-hop context information into the entity/relation embeddings, and it explicitly models the graph context without introducing any trainable parameters beyond the embeddings themselves. Extensive experiments on public benchmarks demonstrate the efficiency and effectiveness of our framework.
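To make the abstract's idea concrete, the following is a minimal sketch (not the paper's exact formulation) of a parameter-free, multi-hop context-aggregation pass: neighbor (relation, entity) pairs are weighted by a standard KGE scorer (DistMult is assumed here purely for illustration) and folded back into each entity embedding, so no trainable weights beyond the embeddings are introduced. The function names, the mixing rule, and the data layout are assumptions for illustration only.

import numpy as np

def distmult_score(h, r, t):
    # A standard KGE scoring function (DistMult); any base scorer could be plugged in.
    return float(np.sum(h * r * t))

def aggregate_context(ent_emb, rel_emb, context, hops=2):
    """Illustrative sketch: one parameter-free, multi-hop context-aggregation pass.

    ent_emb : dict entity id -> np.ndarray embedding
    rel_emb : dict relation id -> np.ndarray embedding
    context : dict entity id -> list of (relation id, neighbor entity id) pairs
    Repeating the per-hop update lets information from multi-hop neighbors reach
    each entity, using only the embeddings themselves (no extra trainable weights).
    """
    emb = dict(ent_emb)
    for _ in range(hops):
        updated = {}
        for e_id, e_vec in emb.items():
            neighbors = context.get(e_id, [])
            if not neighbors:
                updated[e_id] = e_vec
                continue
            # Weight each (relation, neighbor) pair by the base scorer, softmax-normalized.
            scores = np.array([distmult_score(e_vec, rel_emb[r], emb[n]) for r, n in neighbors])
            w = np.exp(scores - scores.max())
            w /= w.sum()
            # Aggregate neighbor messages and mix them into the entity's own embedding.
            msg = sum(wi * (emb[n] + rel_emb[r]) for wi, (r, n) in zip(w, neighbors))
            updated[e_id] = 0.5 * (e_vec + msg)
        emb = updated
    return emb

Because the weighting reuses the base scoring function rather than a learned attention module, the aggregation adds no parameters on top of the entity and relation embeddings, which is the trade-off between graph context and model complexity that the abstract emphasizes.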

Updated: 2021-02-23