Revisiting ensemble adversarial attack
Signal Processing: Image Communication (IF 3.5), Pub Date: 2022-06-04, DOI: 10.1016/j.image.2022.116747
Ziwen He, Wei Wang, Jing Dong, Tieniu Tan

Deep neural networks have shown vulnerability to adversarial attacks. Adversarial examples generated with an ensemble of source models can effectively attack unseen target models, posing a security threat to practical applications. In this paper, we investigate the manner of ensemble adversarial attacks from the viewpoint of network gradients with respect to inputs. We observe that most ensemble adversarial attacks simply average gradients of the source models, ignoring their different contributions in the ensemble. To remedy this problem, we propose two novel ensemble strategies, the Magnitude-Agnostic Bagging Ensemble (MABE) strategy and Gradient-Grouped Bagging And Stacking Ensemble (G2BASE) strategy. The former builds on a bagging ensemble and leverages a gradient normalization module to rebalance the ensemble weights. The latter divides diverse models into different groups according to the gradient magnitudes and combines an intragroup bagging ensemble with an intergroup stacking ensemble. Experimental results show that the proposed methods enhance the success rate in white-box attacks and further boost the transferability in black-box attacks.
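For intuition, below is a minimal PyTorch sketch of the gradient-rebalancing idea described above: each source model's input gradient is normalized before averaging, instead of averaging raw gradients as most ensemble attacks do. The single FGSM-style step, the toy models, and all function names here are illustrative assumptions, not the paper's MABE or G2BASE implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ensemble_fgsm(models, x, y, eps=8 / 255, normalize=True):
    """One-step ensemble attack on inputs x with labels y.

    With normalize=True, each source model's input gradient is rescaled to
    unit L2 norm before averaging, so models with large gradient magnitudes
    do not dominate the ensemble direction (a sketch of the rebalancing idea,
    not the paper's implementation). With normalize=False this reduces to the
    plain gradient-averaging baseline.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    grads = []
    for model in models:
        model.eval()
        loss = F.cross_entropy(model(x_adv), y)
        (g,) = torch.autograd.grad(loss, x_adv)
        if normalize:
            # Per-sample L2 normalization; assumes NCHW image inputs.
            g = g / (g.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
        grads.append(g)
    avg_grad = torch.stack(grads).mean(dim=0)
    return (x + eps * avg_grad.sign()).clamp(0, 1).detach()

if __name__ == "__main__":
    # Toy demo with two randomly initialized CNNs on fake 32x32 RGB data.
    torch.manual_seed(0)
    def make_net():
        return nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    models = [make_net(), make_net()]
    x = torch.rand(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    x_adv = ensemble_fgsm(models, x, y)
    print("perturbation L-inf:", (x_adv - x).abs().max().item())
```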




Last updated: 2022-06-04