Adaptive Iterative Attack towards Explainable Adversarial Robustness
Pattern Recognition (IF 7.5). Pub Date: 2020-09-01. DOI: 10.1016/j.patcog.2020.107309
Yucheng Shi, Yahong Han, Quanxin Zhang, Xiaohui Kuang

Abstract: Image classifiers based on deep neural networks are severely vulnerable to adversarial examples crafted on purpose. Designing more effective and efficient adversarial attacks attracts considerable interest because of its potential contribution to the interpretability of deep learning and to the validation of neural networks' robustness. However, current iterative attacks use a fixed step size for each noise-adding step, leaving the effect of a variable step size on model robustness largely unexplored. We prove that, when the upper bound on the noise added to the original image is fixed, the attack becomes more effective if the step size is positively correlated with the gradient obtained at each step by querying the target model. In this paper, we propose Ada-FGSM (Adaptive FGSM), a new iterative attack that adaptively allocates the step size of the noise according to the gradient information at each step. Improvements in success rate and in the induced accuracy drop, measured on ImageNet with multiple models, confirm the validity of our method. We analyze the iterative attack process by visualizing its trajectory and gradient contours, and further explain the vulnerability of deep neural networks to variable-step-size adversarial examples.
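The abstract does not give the exact Ada-FGSM update rule, but the core idea — an iterative sign-gradient attack whose per-step size grows with the gradient magnitude observed at that step, while the total perturbation stays within a fixed L-infinity budget — can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; `grad_fn`, the running-average weighting, and all parameter names are assumptions introduced here.

```python
import numpy as np

def ada_sign_attack(x, y, grad_fn, eps=0.1, steps=10):
    """Hypothetical sketch of an adaptive-step iterative sign attack.

    The step size at iteration t is a uniform share of the remaining
    budget, rescaled by the current gradient norm relative to a running
    average, so steps with larger gradients receive larger noise.
    """
    x_adv = x.astype(float).copy()
    spent = 0.0
    avg_norm = None
    for t in range(steps):
        g = grad_fn(x_adv, y)                 # one query to the target model
        gnorm = np.linalg.norm(g)
        avg_norm = gnorm if avg_norm is None else 0.9 * avg_norm + 0.1 * gnorm
        base = (eps - spent) / (steps - t)    # uniform share of what's left
        alpha = base * gnorm / (avg_norm + 1e-12)  # scale by relative gradient
        alpha = min(alpha, eps - spent)       # never exceed the total budget
        x_adv += alpha * np.sign(g)           # untargeted loss-ascent step
        spent += alpha
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the L-inf ball
    return x_adv
```

As a usage example, attacking a toy logistic "classifier" (standing in for the deep model): `grad_fn` returns the cross-entropy loss gradient with respect to the input, and the adversarial point stays within `eps` of the original while the loss increases.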

Updated: 2020-09-01