Reinforcement Learning–based Collective Entity Alignment with Adaptive Features
ACM Transactions on Information Systems (IF 5.6) Pub Date: 2021-05-06, DOI: 10.1145/3446428
Weixin Zeng¹, Xiang Zhao¹, Jiuyang Tang¹, Xuemin Lin², Paul Groth³

Entity alignment (EA) is the task of identifying entities that refer to the same real-world object but reside in different knowledge graphs (KGs). Existing EA solutions treat the entities to be aligned separately and generate alignment results as ranked lists of candidate entities on the other side. This decision-making paradigm, however, fails to take into account the interdependence among entities. Although some recent efforts mitigate this issue by imposing a 1-to-1 constraint on the alignment process, they still cannot adequately model the underlying interdependence, and their results tend to be sub-optimal. To fill this gap, in this work, we delve into the dynamics of the decision-making process and offer a reinforcement learning (RL)–based model that aligns entities collectively. Under the RL framework, we devise coherence and exclusiveness constraints to characterize the interdependence and restrict collective alignment. Additionally, to generate more precise inputs to the RL framework, we employ representative features to capture different aspects of the similarity between entities in heterogeneous KGs, integrated by an adaptive feature fusion strategy. Our proposal is evaluated on both cross-lingual and mono-lingual EA benchmarks and compared against state-of-the-art solutions. The empirical results verify its effectiveness and superiority.
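The two ideas at the core of the abstract — fusing several similarity channels with adaptive weights, and enforcing an exclusiveness (1-to-1) constraint during sequential alignment — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function names, the simple weighted-sum fusion, and the greedy decision order are all assumptions; the paper learns its policy and fusion weights rather than hard-coding them.

```python
import numpy as np

def fuse_features(sim_matrices, weights):
    """Illustrative adaptive fusion: a normalized weighted sum of
    per-channel similarity matrices (e.g., structural, name, attribute).
    In the paper the weights are adapted, not fixed as here."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * s for w, s in zip(weights, sim_matrices))

def align_collectively(sim):
    """Greedy stand-in for the RL policy: align source entities one by one,
    where the exclusiveness constraint forbids reusing a target entity.
    Entities with stronger best matches decide first, so earlier decisions
    constrain later ones -- the interdependence the abstract describes."""
    taken = set()
    alignment = {}
    # Decide confident source entities first (a hypothetical ordering).
    order = np.argsort(-sim.max(axis=1))
    for i in order:
        for j in np.argsort(-sim[i]):        # candidates, best first
            if j not in taken:               # exclusiveness: 1-to-1
                alignment[int(i)] = int(j)
                taken.add(int(j))
                break
    return alignment
```

Note how the constraint changes the outcome: if two source entities both rank the same target first, independent top-1 decisions would collide, whereas the sequential, exclusiveness-constrained pass forces the weaker match onto its next-best available candidate.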

Updated: 2021-05-06