A Survey on Adversarial Recommender Systems
ACM Computing Surveys (IF 23.8) Pub Date: 2021-03-06, DOI: 10.1145/3439729
Yashar Deldjoo, Tommaso Di Noia, Felice Antonio Merra

Latent-factor models (LFM) based on collaborative filtering (CF), such as matrix factorization (MF) and deep CF methods, are widely used in modern recommender systems (RS) due to their excellent performance and recommendation accuracy. However, this success has been accompanied by a major new challenge: many applications of machine learning (ML) are adversarial in nature [146]. In recent years, it has been shown that these methods are vulnerable to adversarial examples, i.e., subtle but non-random perturbations designed to force recommendation models to produce erroneous outputs. The goal of this survey is two-fold: (i) to present recent advances in adversarial machine learning (AML) for the security of RS (i.e., attacking and defending recommendation models) and (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative applications, thanks to their ability to learn (high-dimensional) data distributions. In this survey, we provide an exhaustive literature review of 76 articles published in major RS and ML journals and conferences. This review serves as a reference for the RS community working on the security of RS or on generative models that use GANs to improve their quality.
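To make the abstract's notion of an adversarial example concrete, here is a minimal sketch, not taken from the surveyed papers, of an FGSM-style perturbation against a toy matrix-factorization scorer. All names, dimensions, and the squared-error loss are illustrative assumptions; the point is only that a small, gradient-aligned (non-random) perturbation of an item's latent factors increases the model's loss far more than noise of the same size would.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy matrix-factorization model: score(u, i) = p_u . q_i
# (hypothetical sizes, for illustration only)
n_users, n_items, k = 4, 5, 3
P = rng.normal(size=(n_users, k))   # user latent factors
Q = rng.normal(size=(n_items, k))   # item latent factors

u, i, y = 0, 2, 1.0                 # one observed interaction

def sq_loss(P, Q):
    # squared error on the single (u, i, y) triple
    return (y - P[u] @ Q[i]) ** 2

# Closed-form gradient of the loss w.r.t. the item factors q_i
err = y - P[u] @ Q[i]
grad_qi = -2.0 * err * P[u]

# FGSM-style perturbation: epsilon * sign(gradient), i.e. the
# direction that increases the loss the most per unit of L-inf norm
eps = 0.5
delta = eps * np.sign(grad_qi)

loss_clean = sq_loss(P, Q)
Q_adv = Q.copy()
Q_adv[i] = Q[i] + delta             # subtle but non-random perturbation
loss_adv = sq_loss(P, Q_adv)

print(loss_clean, loss_adv)
```

Because the toy loss is quadratic in `q_i`, the sign-of-gradient step provably increases the error term, so `loss_adv` exceeds `loss_clean`; the attacks surveyed in the paper apply the same idea to real recommendation losses and model parameters or inputs.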

Updated: 2021-03-06