Explainable Recommendation: A Survey and New Perspectives
Foundations and Trends in Information Retrieval (IF 10.4) Pub Date: 2020-03-10, DOI: 10.1561/1500000066
Yongfeng Zhang, Xu Chen

Explainable recommendation attempts to develop models that generate not only high-quality recommendations but also intuitive explanations. The explanations may either be post-hoc or directly come from an explainable model (also called an interpretable or transparent model in some contexts). Explainable recommendation tries to address the problem of why: by providing explanations to users or system designers, it helps humans to understand why certain items are recommended by the algorithm, where the human can either be users or system designers. Explainable recommendation helps to improve the transparency, persuasiveness, effectiveness, trustworthiness, and satisfaction of recommendation systems. It also facilitates system designers for better system debugging. In recent years, a large number of explainable recommendation approaches – especially model-based methods – have been proposed and applied in real-world systems.
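For intuition only, below is a minimal, hypothetical sketch (not taken from the survey) of the model-intrinsic style of explanation described above: the recommendation score is computed from interpretable aspect-level affinities, so the same factors that rank the items can also fill in a template explanation. All names (ASPECTS, recommend_with_explanation, the toy data) are illustrative assumptions.

```python
import numpy as np

# Hypothetical interpretable aspects; in explicit-factor-style models these
# would typically be mined from review text rather than hard-coded.
ASPECTS = ["price", "battery", "screen"]

def recommend_with_explanation(user_pref, item_quality, item_names, top_k=1):
    """Score items by aspect-level affinity and explain via the top aspect.

    user_pref:    (n_aspects,) how much the user cares about each aspect
    item_quality: (n_items, n_aspects) how well each item performs per aspect
    """
    scores = item_quality @ user_pref                  # model-intrinsic scoring
    ranked = np.argsort(scores)[::-1][:top_k]
    results = []
    for i in ranked:
        contrib = item_quality[i] * user_pref          # per-aspect contribution
        top_aspect = ASPECTS[int(np.argmax(contrib))]
        results.append((
            item_names[i],
            f"Recommended because you care about {top_aspect} "
            f"and this item performs well on {top_aspect}.",
        ))
    return results

# Toy usage: one user and three items over the three aspects above.
user = np.array([0.9, 0.3, 0.1])
items = np.array([[0.8, 0.2, 0.5],
                  [0.1, 0.9, 0.4],
                  [0.6, 0.6, 0.2]])
print(recommend_with_explanation(user, items, ["Item A", "Item B", "Item C"]))
```

A post-hoc approach, by contrast, would first rank items with an arbitrary (possibly opaque) model and only afterwards search for a plausible explanation for the chosen item.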

In this survey, we provide a comprehensive review for the explainable recommendation research. We first highlight the position of explainable recommendation in recommender system research by categorizing recommendation problems into the 5W, i.e., what, when, who, where, and why. We then conduct a comprehensive survey of explainable recommendation on three perspectives: 1) We provide a chronological research timeline of explainable recommendation, including user study approaches in the early years and more recent model-based approaches. 2) We provide a two-dimensional taxonomy to classify existing explainable recommendation research: one dimension is the information source (or display style) of the explanations, and the other dimension is the algorithmic mechanism to generate explainable recommendations. 3) We summarize how explainable recommendation applies to different recommendation tasks, such as product recommendation, social recommendation, and POI recommendation.

We also devote a section to discuss the explanation perspectives in broader IR and AI/ML research. We end the survey by discussing potential future directions to promote the explainable recommendation research area and beyond.




Updated: 2020-03-10