The Framework of Personalized Ranking on Poisson Factorization
IEEE Transactions on Knowledge and Data Engineering (IF 8.9), Pub Date: 2021-01-01, DOI: 10.1109/tkde.2019.2924894
Ming-Syan Chen , Chung-Kuang Chou , Li-Yen Kuo

Matrix factorization (MF) has achieved great success in recommender systems. However, the commonly used regression-based MF is not only sensitive to outliers but also unable to guarantee that the predicted values are consistent with user preference orders, which are the basis of common evaluation measures for recommender systems, e.g., nDCG. To overcome these drawbacks, we propose a framework for personalized ranking on Poisson factorization that uses a learning-to-rank-based posterior instead of the classical regression-based one. Owing to this combination, the proposed framework not only preserves user preferences but also performs well on sparse matrices. Since the posterior that combines learning to rank with Poisson factorization does not satisfy a conjugate prior relationship, we estimate the variational parameters approximately and propose two optimization approaches based on variational inference. As long as the learning-to-rank model in use has first- and second-order partial derivatives, the proposed optimization algorithm can maximize the posterior within our framework, regardless of which learning-to-rank model is used. In experiments, we show that the proposed framework outperforms state-of-the-art methods and achieves promising results on consumption-log and rating datasets across multiple recommendation tasks.
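As a point of reference for the Poisson factorization setting the abstract builds on (not the paper's ranking-based algorithm), the sketch below fits a maximum-likelihood Poisson factorization of a toy count matrix. It uses the well-known equivalence between minimizing the generalized KL divergence in NMF and maximizing the Poisson likelihood R[u,i] ~ Poisson((U @ V.T)[u,i]); the matrix sizes, rank `k`, and iteration count are illustrative assumptions.

```python
import numpy as np

# Toy count data: rows = users, columns = items (assumed dimensions).
rng = np.random.default_rng(0)
R = rng.poisson(lam=2.0, size=(20, 30)).astype(float)

k, eps = 5, 1e-9                      # latent rank, numerical floor
U = rng.random((20, k)) + 0.1         # nonnegative user factors
V = rng.random((30, k)) + 0.1         # nonnegative item factors

def poisson_loglik(R, U, V):
    """Poisson log-likelihood up to the constant -log(R!)."""
    M = U @ V.T + eps
    return float((R * np.log(M) - M).sum())

prev = poisson_loglik(R, U, V)
for _ in range(50):
    # Multiplicative (Lee-Seung KL-NMF) updates; each step is
    # guaranteed not to decrease the Poisson likelihood.
    M = U @ V.T + eps
    U *= ((R / M) @ V) / (np.ones_like(R) @ V + eps)
    M = U @ V.T + eps
    V *= ((R / M).T @ U) / (np.ones_like(R).T @ U + eps)
cur = poisson_loglik(R, U, V)
assert cur >= prev  # likelihood improved (or stayed equal)
```

This regression-style objective is exactly what the paper argues is misaligned with ranking measures such as nDCG; the proposed framework instead replaces the likelihood term with a learning-to-rank-based posterior optimized by variational inference.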

Updated: 2021-01-01