Challenging the Adversarial Robustness of DNNs Based on Error-Correcting Output Codes
Security and Communication Networks (IF 1.968), Pub Date: 2020-11-16, DOI: 10.1155/2020/8882494
Bowen Zhang, Benedetta Tondi, Xixiang Lv, Mauro Barni

The existence of adversarial examples and the ease with which they can be generated raise several security concerns with regard to deep learning systems, pushing researchers to develop suitable defence mechanisms. The use of networks adopting error-correcting output codes (ECOC) has recently been proposed to counter the creation of adversarial examples in a white-box setting. In this paper, we carry out an in-depth investigation of the adversarial robustness achieved by the ECOC approach. We do so by proposing a new adversarial attack specifically designed for multilabel classification architectures, like the ECOC-based one, and by applying two existing attacks. In contrast to previous findings, our analysis reveals that ECOC-based networks can be attacked quite easily by introducing a small adversarial perturbation. Moreover, the adversarial examples can be generated in such a way as to achieve high probabilities for the predicted target class, hence making it difficult to use the prediction confidence to detect them. Our findings are proven by means of experimental results obtained on MNIST, CIFAR-10, and GTSRB classification tasks.
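To make the classification scheme under attack concrete: in an ECOC-based network, each class is assigned a binary codeword, the network produces one output per codeword bit, and the prediction is the class whose codeword best correlates with the output vector. The following is a minimal sketch of that decoding step, with a hypothetical 4-class codebook and a tanh/softmax decoding rule chosen for illustration; it is not the exact architecture evaluated in the paper.

```python
import numpy as np

# Hypothetical codebook: 4 classes, 6-bit codewords with entries in {-1, +1}.
CODEBOOK = np.array([
    [+1, +1, +1, -1, -1, -1],
    [-1, -1, +1, +1, +1, -1],
    [+1, -1, -1, -1, +1, +1],
    [-1, +1, -1, +1, -1, +1],
], dtype=float)

def ecoc_decode(logits: np.ndarray):
    """Map per-bit network outputs to a class label.

    Each bit output is squashed to (-1, 1) with tanh, correlated with every
    codeword, and the correlations are turned into class probabilities via
    a softmax. The predicted class maximizes the correlation score.
    """
    bits = np.tanh(logits)                 # per-bit activations in (-1, 1)
    scores = CODEBOOK @ bits               # correlation with each codeword
    exp = np.exp(scores - scores.max())    # numerically stable softmax
    probs = exp / exp.sum()
    return int(np.argmax(scores)), probs

# An output vector aligned with class 0's codeword decodes to class 0.
pred, probs = ecoc_decode(np.array([2.0, 2.0, 2.0, -2.0, -2.0, -2.0]))
```

An attacker targeting such an architecture does not need to flip every bit: a perturbation that nudges the per-bit outputs toward a target codeword raises that class's correlation score, and because the probabilities come from a softmax over correlations, it can simultaneously drive the target-class confidence high — which is the weakness the abstract describes.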
