Incremental preference adjustment: a graph-theoretical approach
The VLDB Journal ( IF 4.2 ) Pub Date : 2020-08-03 , DOI: 10.1007/s00778-020-00623-8
Liangjun Song , Junhao Gan , Zhifeng Bao , Boyu Ruan , H. V. Jagadish , Timos Sellis

Learning users’ preferences is critical to personalized search and recommendation. Most such systems depend on lists of items rank-ordered according to the user’s preference. Ideally, we want the system to adjust its estimate of users’ preferences after every interaction, thereby becoming progressively better at giving the user what she wants. We also want these adjustments to be gradual and explainable, so that the user is not surprised by wild swings in the system’s rank ordering. In this paper, we support a \(\textit{rank-reversal}\) operation on two items \(x\) and \(y\): adjust the user’s preference such that the personalized ranks of \(x\) and \(y\) are reversed. We emphasize that this problem is orthogonal to preference learning itself, and its solutions can run on top of the learning outcome of any vector-embedding-based preference learning model. Therefore, our preference adjustment techniques enable all existing offline preference learning models to incrementally and interactively improve their responses to (indirectly specified) user preferences. Specifically, we define the Minimum Dimension Adjustment (MDA) problem, where the preference adjustments are subject to constraints imposed by a specific graph and the goal is to adjust a user’s preference by reversing the personalized ranks of two given items while minimizing the number of dimensions whose values change in the preference vector. We first prove that MDA is NP-hard, and then show that a 2.17-approximate solution can be obtained in polynomial time, provided that an optimal solution to a carefully designed subproblem is given. Finally, we propose two efficient heuristic algorithms: the first achieves an approximation guarantee, and the second is provably efficient. Experiments on five publicly available datasets show that our solutions adjust users’ preferences effectively and efficiently.
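To make the rank-reversal operation concrete, the following is a hypothetical toy sketch (not the paper's MDA algorithm): a user preference vector \(w\) scores items by dot product, and without the graph constraints that make MDA NP-hard, the rank of two items \(x\) and \(y\) can always be reversed by changing a single dimension of \(w\). The function name and the margin parameter are illustrative assumptions.

```python
import numpy as np

def rank_reversal_one_dim(w, x, y, margin=1e-6):
    """Return w' with (w'.y > w'.x), changing a single dimension of w.

    Assumes x and y differ in at least one coordinate; ignores the
    graph constraints of the MDA problem, under which minimizing the
    number of changed dimensions becomes NP-hard.
    """
    d = x - y                      # direction separating the two items
    gap = float(w @ d)             # > 0 means x currently ranks above y
    if gap <= 0:
        return w.copy()            # rank already reversed, nothing to do
    i = int(np.argmax(np.abs(d)))  # coordinate with the most leverage
    w_new = w.copy()
    w_new[i] -= (gap + margin) / d[i]  # cancels the score gap, plus margin
    return w_new

# Toy preference vector and item embeddings (made-up values).
w = np.array([0.5, 0.2, 0.8])
x = np.array([1.0, 0.0, 0.5])
y = np.array([0.2, 0.9, 0.4])

w2 = rank_reversal_one_dim(w, x, y)
print(w2 @ y > w2 @ x)   # True: y now outranks x
print(int(np.sum(w2 != w)))  # 1: a single dimension changed
```

The one-dimension fix works only because \(w\) is unconstrained here; the paper's setting restricts which adjustments are admissible via a graph, which is what the approximation and heuristic algorithms address.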

