Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples
arXiv - CS - Cryptography and Security. Pub Date: 2018-12-05. DOI: arxiv-1812.01804
Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li

Image classifiers often suffer from adversarial examples, which are generated by strategically adding a small amount of noise to input images to trick classifiers into misclassification. Over the years, many defense mechanisms have been proposed, and different researchers have made seemingly contradictory claims on their effectiveness. We present an analysis of possible adversarial models, and propose an evaluation framework for comparing different defense mechanisms. As part of the framework, we introduce a more powerful and realistic adversary strategy. Furthermore, we propose a new defense mechanism called Random Spiking (RS), which generalizes dropout and introduces random noise in the training process in a controlled manner. Evaluations under our proposed framework suggest RS delivers better protection against adversarial examples than many existing schemes.
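The abstract describes RS only at a high level: a dropout-like mechanism that injects random noise during training in a controlled manner. Below is a minimal PyTorch-style sketch of one plausible reading, in which a randomly selected subset of units has its activations replaced with random noise instead of being zeroed out as in dropout. The class name RandomSpiking, the spiking probability p, and the uniform noise range are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class RandomSpiking(nn.Module):
    """Sketch of a dropout-like "random spiking" layer.

    During training, each unit is independently selected with probability p;
    selected units have their activations replaced by random noise drawn
    uniformly from [noise_low, noise_high]. At inference time the layer is
    the identity, mirroring dropout's train/eval behavior.
    """

    def __init__(self, p: float = 0.1,
                 noise_low: float = -1.0, noise_high: float = 1.0):
        super().__init__()
        self.p = p                    # assumed fraction of units to "spike"
        self.noise_low = noise_low    # assumed noise range; not from the paper
        self.noise_high = noise_high

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # identity at inference time, like dropout
        # Bernoulli mask: 1 where a unit's activation is replaced by noise
        mask = (torch.rand_like(x) < self.p).float()
        noise = torch.empty_like(x).uniform_(self.noise_low, self.noise_high)
        return (1.0 - mask) * x + mask * noise
```

Under this reading, the layer would be placed between hidden layers of a classifier (e.g., after a convolution block), so that training proceeds under controlled activation noise while inference is deterministic.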

Last updated: 2020-01-22