A survey of practical adversarial example attacks
Cybersecurity (IF 3.9). Pub Date: 2018-09-06, DOI: 10.1186/s42400-018-0012-9
Lu Sun, Mingtian Tan, Zhe Zhou

Adversarial examples revealed the weakness of machine learning techniques in terms of robustness, which in turn inspired adversaries to exploit that weakness to attack systems employing machine learning. Existing research has covered the methodologies of adversarial example generation, the root cause of the existence of adversarial examples, and some defense schemes. However, practical attacks against real-world systems did not appear until recently, mainly because of the difficulty of injecting an artificially generated example into the model behind the hosting system without breaking its integrity. Recent case studies against face recognition systems and road sign recognition systems have finally bridged the gap between theoretical adversarial example generation methodologies and practical attack schemes against real systems. To guide future research on defending against adversarial examples in the real world, we formalize the threat model for practical attacks with adversarial examples, and also analyze the restrictions and key procedures for launching real-world adversarial example attacks.
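The generation methodologies the abstract refers to are exemplified by the fast gradient sign method (FGSM): perturb the input one step in the sign of the loss gradient. A minimal sketch, assuming a toy logistic-regression "model" with invented weights rather than any system from the surveyed papers:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """One-step FGSM: shift each input feature by eps in the direction
    that increases the cross-entropy loss.

    For logistic regression the input gradient has the closed form
    dLoss/dx = (p - y) * w, where p = sigmoid(w.x + b).
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((p - y_true) * wi) for xi, wi in zip(x, w)]

w, b = [2.0, -1.0, 0.5], 0.1   # toy model parameters (assumed for illustration)
x = [0.5, -0.3, 0.4]           # clean input, confidently predicted as class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.25)

score = lambda v: sigmoid(sum(wi * vi for wi, vi in zip(w, v)) + b)
p_clean, p_adv = score(x), score(x_adv)
print(p_clean > 0.5, p_adv < p_clean)  # the attack lowers the model's confidence
```

The practical attacks surveyed must additionally realize such perturbations physically (e.g., printed glasses frames or road-sign stickers) so the perturbed input survives the camera pipeline, which is what separates them from this purely digital sketch.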

Updated: 2018-09-06