KRAN: Knowledge Refining Attention Network for Recommendation
ACM Transactions on Knowledge Discovery from Data (IF 3.6), Pub Date: 2021-09-04, DOI: 10.1145/3470783
Zhenyu Zhang, Lei Zhang, Dingqi Yang, Liu Yang
Recommender algorithms that combine knowledge graphs with graph convolutional networks have become increasingly popular. Specifically, attributes describing the items to be recommended are often used as additional information. These attributes, together with the items, are highly interconnected, intrinsically forming a Knowledge Graph (KG). Such algorithms use KGs as an auxiliary data source to alleviate the negative impact of data sparsity. However, these graph-convolutional-network-based algorithms do not distinguish the importance of an entity's different neighbors in the KG, and, following Pareto's principle, the important neighbors account for only a small proportion. These traditional algorithms therefore cannot fully mine the useful information in the KG. To fully release the power of KGs for building recommender systems, we propose in this article KRAN, a Knowledge Refining Attention Network, which can subtly capture the characteristics of the KG and thus boost recommendation performance. We first introduce a traditional attention mechanism into KG processing, making knowledge extraction more targeted, and then propose a refining mechanism that improves the traditional attention mechanism so as to extract knowledge from the KG more effectively. More precisely, KRAN uses our proposed knowledge-refining attention mechanism to aggregate and obtain the representations of the entities (both attributes and items) in the KG. The mechanism first measures the relevance between an entity and its neighbors in the KG via attention coefficients, and then refines these coefficients using a "richer-get-richer" principle, focusing on highly relevant neighbors while eliminating less relevant ones for noise reduction. In addition, for the item cold-start problem, we propose KRAN-CD, a variant of KRAN that further incorporates pre-trained KG embeddings to handle cold-start items.
Experiments show that KRAN and KRAN-CD consistently outperform state-of-the-art baselines across different settings.
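The abstract does not give the exact formulation of the knowledge-refining attention mechanism, but the idea it describes (score neighbors with attention, then amplify strong coefficients and prune weak ones before aggregating) can be sketched as follows. All names, the dot-product scoring, and the power/threshold refinement are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

def refined_attention(entity, neighbors, power=2.0, threshold=0.05):
    """Hypothetical sketch of a knowledge-refining attention step.

    entity:    (d,)  embedding of the target KG entity
    neighbors: (n, d) embeddings of its KG neighbors
    power, threshold: assumed refinement hyperparameters
    """
    # Standard attention: softmax over dot-product relevance scores.
    scores = neighbors @ entity
    coeffs = np.exp(scores - scores.max())
    coeffs /= coeffs.sum()

    # "Richer-get-richer" refinement (assumed form): amplify large
    # coefficients with a power transform, zero out near-negligible
    # ones for noise reduction, then renormalize.
    refined = coeffs ** power
    refined[refined < threshold * refined.max()] = 0.0
    refined /= refined.sum()

    # Aggregate neighbor embeddings into the entity representation.
    return refined @ neighbors
```

Under this sketch, the refined distribution is strictly more concentrated than the plain softmax, so aggregation is dominated by the few highly relevant neighbors, which is the behavior the abstract attributes to the refining mechanism.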
