Adversarial Perturbation Defense on Deep Neural Networks
ACM Computing Surveys (IF 16.6), Pub Date: 2021-10-05, DOI: 10.1145/3465397
Xingwei Zhang, Xiaolong Zheng, Wenji Mao

Deep neural networks (DNNs) have been shown to be easily fooled by well-designed adversarial perturbations. Images carrying small perturbations that are imperceptible to human eyes can induce DNN-based image classifiers to make erroneous predictions with high probability. Adversarial perturbations can also fool real-world machine learning systems and transfer across different architectures and datasets. Recently, defense against adversarial perturbations has become an active research topic. A large number of works have been proposed to defend against adversarial perturbations, enhance DNN robustness against potential attacks, or interpret the origin of adversarial perturbations. In this article, we provide a comprehensive survey of classical and state-of-the-art defense methods, illuminating their main concepts, underlying algorithms, and fundamental hypotheses regarding the origin of adversarial perturbations. In addition, we discuss potential directions of this domain for future researchers.
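
As a concrete illustration of the phenomenon described in the abstract (not code from the survey itself), the sketch below uses the well-known fast gradient sign method (FGSM) in PyTorch to craft a small, epsilon-bounded perturbation that often changes a pretrained classifier's prediction. The choice of model (ResNet-18), the epsilon value, and the assumption that inputs lie in [0, 1] are illustrative assumptions only.

```python
# Minimal FGSM-style sketch: a small perturbation that can flip a DNN classifier's
# prediction. Model, epsilon, and input format are assumptions for illustration.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()

def fgsm_perturb(image, label, epsilon=0.01):
    """Return an adversarial copy of `image` using the fast gradient sign method."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon per pixel.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()  # keep pixel values in the assumed [0, 1] range

# Usage (hypothetical): x is a 1x3x224x224 image batch, y its true class index.
# pred_clean = model(x).argmax(1)
# pred_adv   = model(fgsm_perturb(x, y)).argmax(1)  # often differs from pred_clean
```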
