
Incremental preference adjustment: a graph-theoretical approach

  • Regular Paper
  • Published:
The VLDB Journal

Abstract

Learning users’ preferences is critical to personalized search and recommendation. Most such systems depend on lists of items rank-ordered according to the user’s preference. Ideally, the system should adjust its estimate of a user’s preferences after every interaction, thereby becoming progressively better at giving the user what she wants. These adjustments should also be gradual and explainable, so that the user is not surprised by wild swings in the system’s rank ordering. In this paper, we support a rank-reversal operation on two items \(x\) and \(y\): adjust the user’s preference such that the personalized ranks of \(x\) and \(y\) are reversed. We emphasize that this problem is orthogonal to preference learning itself, and its solutions can run on top of the learning outcome of any vector-embedding-based preference learning model. Our preference adjustment techniques therefore enable existing offline preference learning models to incrementally and interactively improve their responses to (indirectly specified) user preferences. Specifically, we define the Minimum Dimension Adjustment (MDA) problem, where the preference adjustments are subject to constraints imposed by a specific graph, and the goal is to adjust a user’s preference so as to reverse the personalized ranks of two given items while minimizing the number of dimensions whose values change in the preference vector. We first prove that MDA is NP-hard, and then show that a 2.17-approximate solution can be obtained in polynomial time, provided that an optimal solution to a carefully designed problem is given. Finally, we propose two efficient heuristic algorithms: the first achieves an approximation guarantee, and the second is provably efficient. Experiments on five publicly available datasets show that our solutions adjust users’ preferences effectively and efficiently.
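To make the rank-reversal operation concrete, here is a minimal, self-contained sketch under the usual inner-product scoring model (score of an item = dot product of the preference vector with the item embedding). This is an illustrative greedy routine, not the paper's MDA algorithm: it ignores the graph constraints, and the per-dimension change cap `max_step` and the greedy ordering are assumptions introduced only for this example.

```python
import numpy as np

def reverse_rank(u, x, y, max_step=2.5, eps=1e-6):
    """Greedy illustration of a rank reversal: modify as few dimensions of
    the preference vector u as possible so that y's personalized score
    (inner product u . y) exceeds x's. `max_step` caps the change per
    dimension; both it and the greedy order are illustrative choices,
    not part of the paper's MDA formulation."""
    u = np.asarray(u, dtype=float).copy()
    d = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    gap = float(u @ d)                  # gap > 0 means y already outranks x
    changed = []
    for k in np.argsort(-np.abs(d)):    # most influential dimensions first
        if gap > 0 or d[k] == 0:
            break
        needed = (eps - gap) / d[k]     # exact move of u[k] that closes the gap
        delta = needed if abs(needed) < max_step else np.sign(d[k]) * max_step
        u[k] += delta
        gap += delta * d[k]
        changed.append(int(k))
    return u, changed, gap > 0
```

For instance, with `u = [2, 0]`, `x = [1, 0]`, `y = [0, 1]`, a single-dimension change to `u[0]` suffices to put `y` above `x`. MDA asks the analogous question under graph constraints with an NP-hardness guarantee attached.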

Notes

  1. http://civilcomputing.com/HomeSeeker

  2. As mentioned in Remark 2 in the previous subsection, in our application only the edges adjacent to either s or t in \(\mathcal{G}\) have nonzero costs. Therefore, it suffices to set \(\pi_s = \pi_t = \min_{i = s \vee j = t} |\alpha_{i,j}|\) and \(\pi_i = 0\) for every i other than s and t. It can be verified that \(\pi\) remains valid for any valid flow in \(\mathcal{G}\), so there is no need to update it at the end of each iteration of the While-loop in Algorithm 3. As a result, Dijkstra's algorithm can be stopped as soon as the shortest path from s to t is found, and the maintenance of \(\pi\) can be skipped in each iteration.

  3. http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/ratings_Amazon_Instant_Video.csv.

  4. https://sites.google.com/site/yangdingqi/home/foursquare-dataset.

  5. http://academictorrents.com/browse.php?search=Netflix.

  6. https://grouplens.org/datasets/movielens/.

  7. https://snap.stanford.edu/data/web-RateBeer.html.

  8. https://github.com/lyst/lightfm.

  9. https://github.com/hexiangnan/adversarial_personalized_ranking.
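The early-exit Dijkstra search described in footnote 2 can be sketched as follows. This is an illustrative fragment, not Algorithm 3 itself: the adjacency-dict graph representation and function name are assumptions, and `pi` is the (fixed) node-potential map from the footnote, under which reduced edge costs \(c(u,v) + \pi_u - \pi_v\) are nonnegative and the search may return as soon as t is settled.

```python
import heapq

def dijkstra_reduced(adj, s, t, pi):
    """Shortest path from s to t under reduced costs c(u,v) + pi[u] - pi[v],
    stopping as soon as t is settled (the footnote's early exit).
    adj: {node: [(neighbor, cost), ...]} -- an assumed representation."""
    dist = {s: 0.0}
    heap = [(0.0, s)]
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        if u == t:
            return d            # t settled: stop early, no pi update needed
        settled.add(u)
        for v, c in adj.get(u, []):
            nd = d + c + pi[u] - pi[v]   # reduced cost, nonnegative for valid pi
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float('inf')
```

Because \(\pi\) stays valid across iterations here, the usual per-iteration potential update of successive-shortest-path min-cost-flow algorithms can be skipped entirely.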

Acknowledgements

This work is supported in part by ARC under Grants DP200102611, DP180102050, DE190101118, and a Google Faculty Award.

Author information

Corresponding author

Correspondence to Junhao Gan.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 255 KB)

About this article

Cite this article

Song, L., Gan, J., Bao, Z. et al. Incremental preference adjustment: a graph-theoretical approach. The VLDB Journal 29, 1475–1500 (2020). https://doi.org/10.1007/s00778-020-00623-8
