DLSA: dual-learning based on self-attention for rating prediction
International Journal of Machine Learning and Cybernetics (IF 5.6), Pub Date: 2021-05-18, DOI: 10.1007/s13042-021-01288-7
Fulan Qian, Yafan Huang, Jianhong Li, Chengjun Wang, Shu Zhao, Yanping Zhang

Latent factor models (LFMs) have been widely applied in rating recommendation systems because of their rating prediction capability. Nevertheless, LFMs may not fully exploit rating information, which limits their recommendation performance. Many subsequent works therefore use auxiliary text information, such as user attributes, to improve prediction accuracy. However, these works do not fully utilize implicit information (i.e., users' preferences and items' common features), and such additional information is sometimes difficult to acquire. In this paper, we propose a new framework, dual learning based on self-attention for rating prediction (DLSA), to address these problems. Self-attention has a proven ability to learn implicit information about sentences in machine translation, and it can likewise be used to mine implicit information in recommendation systems. Dual learning has shown that a model can generate feedback information when learning from unlabeled data, which inspired us to apply it to recommendation to obtain implicit information feedback. From the user's perspective, we design a user self-attention model to learn user-user implicit information and an interactive user-item self-attention mechanism to learn user-item information. Symmetrically, from the item's perspective, an item self-attention model utilizes item-item information and an item-user self-attention model acquires item-user information. The interactive user-item and item-user structures adopt the dual-learning mechanism to learn implicit information feedback, and no auxiliary text information is used in the process. The proposed model combines the power of self-attention for implicit information with dual learning for information feedback in a new neural network architecture. Experiments on several real-world datasets demonstrate the effectiveness of DLSA over competitive algorithms for rating recommendation.
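The abstract does not specify DLSA's exact architecture, but the user-user and item-item components it describes build on scaled dot-product self-attention. A minimal sketch of that building block over a batch of latent-factor embeddings, with all shapes, weight matrices, and names being illustrative assumptions rather than the paper's implementation, might look like:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of embeddings.

    X          : (n, d) input embeddings (e.g., user latent factors)
    Wq, Wk, Wv : (d, d) learned projection matrices
    Returns    : (n, d) attended representations, where each row mixes
                 information from all other rows (user-user interactions).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])           # pairwise affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ V

# Toy example: 4 "user" embeddings of dimension 8 (values are random)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

In DLSA's setting, an analogous attention over item embeddings would capture item-item information, and cross-attention between the two sets would give the user-item and item-user interactions that the dual-learning mechanism couples.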




Updated: 2021-05-18