Sparse Feature Factorization for Recommender Systems with Knowledge Graphs
arXiv - CS - Information Retrieval. Pub Date: 2021-07-29, DOI: arxiv-2107.14290
Vito Walter Anelli, Tommaso Di Noia, Eugenio Di Sciascio, Antonio Ferrara, Alberto Carlo Maria Mancino

Deep Learning and factorization-based collaborative filtering recommendation models have undoubtedly dominated the scene of recommender systems in recent years. However, despite their outstanding performance, these methods require a training time proportional to the size of the embeddings, and it increases further when side information is also considered in computing the recommendation list. In these cases, with a large number of high-quality features, the resulting models are more complex and difficult to train. This paper addresses this problem by presenting KGFlex: a sparse factorization approach that grants an even greater degree of expressiveness. To achieve this result, KGFlex analyzes the historical data to understand the dimensions the user decisions depend on (e.g., movie direction, musical genre, nationality of book writer). KGFlex represents each item feature as an embedding and models user-item interactions as a factorized, entropy-driven combination of the item attributes relevant to the user. KGFlex facilitates the training process by letting users update only the relevant features on which they base their decisions. In other words, the user-item prediction is mediated by the user's personal view, which considers only relevant features. An extensive experimental evaluation shows the approach's effectiveness in terms of the accuracy, diversity, and induced bias of the recommendation results. The public implementation of KGFlex is available at https://split.to/kgflex.
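As a rough illustration of the mechanism the abstract describes, the sketch below shows one way an entropy-weighted, per-user sparse feature factorization could be organized. It is a minimal sketch based only on the abstract: the data structures, weights, and names (item_features, user_relevant_features, predict, ...) are illustrative assumptions, not the authors' implementation, whose official code is at https://split.to/kgflex.

# Hedged sketch (not the authors' code) of a "factorized entropy-driven
# combination of the item attributes relevant to the user". All names and the
# exact scoring form are assumptions; the precise KGFlex model is in the paper.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10  # embedding size

# Knowledge-graph features attached to each item, e.g. <genre, Sci-Fi>.
item_features = {
    "item_a": {"genre:sci-fi", "director:scott"},
    "item_b": {"genre:drama", "director:scott"},
}

# Per-user view: only the features whose entropy over the user's history
# suggests they drive that user's decisions are retained, with a weight.
user_relevant_features = {
    "user_1": {"genre:sci-fi": 0.9, "director:scott": 0.4},
}

# Each feature has a global embedding plus a personal (per-user) embedding;
# personal parameters exist, and are updated, only for relevant features.
global_emb = {f: rng.normal(size=DIM)
              for feats in item_features.values() for f in feats}
personal_emb = {("user_1", f): rng.normal(size=DIM)
                for f in user_relevant_features["user_1"]}

def predict(user, item):
    """Entropy-weighted sum, over the item's features that the user deems
    relevant, of factorized (dot-product) feature contributions."""
    score = 0.0
    for f in item_features[item] & user_relevant_features[user].keys():
        weight = user_relevant_features[user][f]
        score += weight * float(global_emb[f] @ personal_emb[(user, f)])
    return score

print(predict("user_1", "item_a"))
print(predict("user_1", "item_b"))

Because a user's score touches only the features in their personal view, gradient updates during training remain sparse, which is the efficiency argument made in the abstract.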

Updated: 2021-08-02