High Intrinsic Dimensionality Facilitates Adversarial Attack: Theoretical Evidence
IEEE Transactions on Information Forensics and Security (IF 6.8). Pub Date: 2020-09-10. DOI: 10.1109/tifs.2020.3023274
Laurent Amsaleg , James Bailey , Amelie Barbe , Sarah M. Erfani , Teddy Furon , Michael E. Houle , Milos Radovanovic , Xuan Vinh Nguyen

Machine learning systems are vulnerable to adversarial attack. By applying a small, carefully designed perturbation to the input object, a classifier can be tricked into making an incorrect prediction. This phenomenon has drawn wide interest, with many attempts made to explain it; however, a complete understanding is yet to emerge. In this paper we adopt a slightly different perspective, still relevant to classification. We consider retrieval, where the output is the set of objects most similar to a user-supplied query object, corresponding to its $k$-nearest neighbors. We investigate the effect of adversarial perturbation on the ranking of objects with respect to a query. Through theoretical analysis, supported by experiments, we demonstrate that as the intrinsic dimensionality of the data domain rises, the amount of perturbation required to subvert neighborhood rankings diminishes, and the vulnerability to adversarial attack rises. We examine two modes of perturbation of the query: either 'closer' to the target point, or 'farther' from it. We also consider two perspectives: 'query-centric', examining the effect of perturbation on the query's own neighborhood ranking, and 'target-centric', considering the ranking of the query point in the target's neighborhood set. All four cases correspond to practical scenarios involving classification and retrieval.
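The mechanism behind this claim can be illustrated with a small simulation (this sketch is not from the paper; the data distribution, dimensions, and sample sizes are arbitrary choices for illustration). As dimensionality grows, pairwise distances concentrate, so the relative gap between a query's nearest and second-nearest neighbors shrinks, and a correspondingly smaller relative perturbation of the query suffices to subvert their ranking:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_rank_gap(dim, n_points=2000):
    """Proxy for the relative perturbation needed to flip a neighborhood
    ranking: the gap between a random query's 1st- and 2nd-nearest
    neighbor distances, normalized by the 2nd-nearest distance."""
    data = rng.standard_normal((n_points, dim))
    query = rng.standard_normal(dim)
    d = np.sort(np.linalg.norm(data - query, axis=1))
    d1, d2 = d[0], d[1]
    # Moving the query toward neighbor 2 by roughly (d2 - d1) closes the
    # gap and swaps the two ranks; normalize to get a scale-free measure.
    return (d2 - d1) / d2

for dim in (2, 10, 100, 1000):
    gap = np.mean([relative_rank_gap(dim) for _ in range(20)])
    print(f"dim={dim:5d}  mean relative gap between top-2 neighbors: {gap:.4f}")
```

Running this shows the mean relative gap falling steadily as the dimension increases, consistent with the paper's conclusion that higher intrinsic dimensionality makes neighborhood rankings cheaper to subvert.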

Updated: 2020-10-11