Bandits Under The Influence (Extended Version)
arXiv - CS - Databases. Pub Date: 2020-09-21, DOI: arxiv-2009.10135
Silviu Maniu, Stratis Ioannidis, Bogdan Cautis

Recommender systems should adapt to user interests as the latter evolve. A prevalent cause for the evolution of user interests is the influence of their social circle. In general, when the interests are not known, online algorithms that explore the recommendation space while also exploiting observed preferences are preferable. We present online recommendation algorithms rooted in the linear multi-armed bandit literature. Our bandit algorithms are tailored precisely to recommendation scenarios where user interests evolve under social influence. In particular, we show that our adaptations of the classic LinREL and Thompson Sampling algorithms maintain the same asymptotic regret bounds as in the non-social case. We validate our approach experimentally using both synthetic and real datasets.
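The abstract describes linear bandit recommendation in which each user's interest vector drifts under the influence of their social circle. As a rough illustration only, the sketch below runs per-user linear Thompson Sampling against a synthetic row-stochastic influence matrix; the mixing model U_t = W U_{t-1}, the variable names, and the stationary posterior update are assumptions made for this example, not the paper's actual LinREL/Thompson Sampling adaptations or regret analysis.

```python
import numpy as np

# Illustrative sketch: per-user linear Thompson Sampling while hidden user
# interests evolve under a simple social-influence mixing model (assumed).
rng = np.random.default_rng(0)

d = 5            # dimension of item / interest feature vectors
n_users = 4      # number of users in the social network
n_items = 50     # size of the recommendation catalogue
T = 2000         # number of recommendation rounds
noise_sd = 0.1   # reward noise standard deviation

# Row-stochastic influence matrix W: each user's interests drift toward a
# weighted average of their neighbours' interests (assumed dynamics).
W = rng.random((n_users, n_users))
W /= W.sum(axis=1, keepdims=True)

# Hidden (unknown to the learner) interest vectors and item features.
U = rng.normal(size=(n_users, d))
items = rng.normal(size=(n_items, d))

# Gaussian posterior N(mu, A^{-1}) over each user's interest vector.
A = [np.eye(d) for _ in range(n_users)]     # precision matrices
b = [np.zeros(d) for _ in range(n_users)]   # response vectors

total_regret = 0.0
for t in range(T):
    U = W @ U  # interests evolve under social influence

    for u in range(n_users):
        mu = np.linalg.solve(A[u], b[u])
        # Sample a plausible interest vector from the posterior.
        theta = rng.multivariate_normal(mu, np.linalg.inv(A[u]))
        # Recommend the item that looks best under the sampled vector.
        choice = int(np.argmax(items @ theta))
        # Observe a noisy linear reward from the true interest vector.
        reward = items[choice] @ U[u] + rng.normal(scale=noise_sd)
        # Standard Bayesian linear-regression posterior update.
        A[u] += np.outer(items[choice], items[choice])
        b[u] += reward * items[choice]
        total_regret += np.max(items @ U[u]) - items[choice] @ U[u]

print(f"average per-round regret: {total_regret / (T * n_users):.4f}")
```

Note that this sketch keeps the vanilla stationary posterior update; the paper's contribution is precisely to adapt such algorithms so the regret bounds survive when interests drift socially.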

Updated: 2020-09-23