Self-Supervised learning for Conversational Recommendation
Information Processing & Management (IF 8.6), Pub Date: 2022-09-13, DOI: 10.1016/j.ipm.2022.103067
Shuokai Li, Ruobing Xie, Yongchun Zhu, Fuzhen Zhuang, Zhenwei Tang, Wayne Xin Zhao, Qing He

A conversational recommender system (CRS) aims to model user preferences through interactive conversations. Although several approaches exist, they still have two drawbacks: (1) they rely on large amounts of training data and suffer from the data sparsity problem; and (2) they do not fully leverage the different types of knowledge extracted from dialogues. To address these issues in CRS, we explore the intrinsic correlations among different types of knowledge via self-supervised learning, and propose SSCR, which stands for Self-Supervised learning for Conversational Recommendation. The main idea is to jointly consider both semantic and structural knowledge through three self-supervision signals in the recommendation and dialogue modules. First, we carefully design two auxiliary self-supervised objectives, a token-level task and a sentence-level task, to explore the semantic knowledge. Then, we extract structural knowledge from user-mentioned entities based on external knowledge graphs. Finally, we model the interaction between the semantic and structural knowledge by leveraging contrastive learning. As existing similarity functions fail to achieve this goal, we propose a novel similarity function based on the negative log-likelihood loss. Comprehensive experimental results on two real-world CRS datasets (one English and one Chinese, with about 10,000 dialogues in total) show the superiority of our proposed method. Concretely, in recommendation, SSCR improves on state-of-the-art baselines by about 5% to 15% in hit rate, mean reciprocal rank, and normalized discounted cumulative gain. In dialogue generation, SSCR outperforms baselines on both automatic evaluations (distinct n-gram, BLEU, and perplexity) and human evaluations (fluency and informativeness).
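The abstract does not spell out the exact form of the proposed similarity function; the sketch below is only an illustration of the general idea of contrastively aligning paired semantic and structural embeddings with a negative log-likelihood objective (InfoNCE-style). All names (`contrastive_nll_loss`, the cosine-similarity choice, the `temperature` value) are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def contrastive_nll_loss(semantic, structural, temperature=0.1):
    """Contrastive alignment of paired semantic / structural embeddings.

    semantic, structural: (batch, dim) arrays. Row i of each matrix forms
    a positive pair; every other row in the batch acts as a negative.
    Returns the mean negative log-likelihood of selecting the correct
    structural embedding for each semantic embedding.
    """
    # L2-normalize so the dot product below is cosine similarity
    s = semantic / np.linalg.norm(semantic, axis=1, keepdims=True)
    t = structural / np.linalg.norm(structural, axis=1, keepdims=True)
    logits = s @ t.T / temperature  # (batch, batch) similarity matrix
    losses = []
    for i in range(len(logits)):
        p = softmax(logits[i])
        losses.append(-np.log(p[i]))  # NLL of the positive pair on the diagonal
    return float(np.mean(losses))
```

Under this sketch, perfectly aligned pairs (identical embeddings) yield a loss near zero, while mismatched pairs are penalized heavily; the temperature controls how sharply negatives are pushed apart.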



