Defense against adversarial attacks by low‐level image transformations
International Journal of Intelligent Systems (IF 7) Pub Date: 2020-07-20, DOI: 10.1002/int.22258
Zhaoxia Yin, Hua Wang, Jie Wang, Jin Tang, Wenzhong Wang

Deep neural networks (DNNs) are vulnerable to adversarial examples, which fool classifiers by maliciously adding imperceptible perturbations to the original input. Much of the current research on defending against adversarial examples pays little attention to real-world applications, suffering from either high computational complexity or poor defensive effect. Motivated by this observation, we develop an efficient preprocessing module to defend against adversarial attacks. Specifically, before an adversarial example is fed into the model, we apply two low-level image transformations to the picture: WebP compression and a flip operation. The result is a de-perturbed sample that can be correctly classified by DNNs. WebP compression is used to remove small adversarial noise; because it employs loop filtering, it avoids the blocking artifacts introduced by JPEG compression, so the visual quality of the denoised image is higher. The flip operation, which mirrors the image once along one of its sides, destroys the specific structure of the adversarial perturbation. Using class activation mapping to localize the discriminative image regions, we show that flipping the image can mitigate adversarial effects. Extensive experiments demonstrate that the proposed scheme outperforms state-of-the-art defense methods: it effectively defends against adversarial attacks while causing only a slight accuracy drop on normal images.
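The abstract describes the preprocessing pipeline only at a high level. The short Python sketch below illustrates what such a WebP-plus-flip front end could look like; it is not the authors' code, and the use of Pillow, a WebP quality setting of 75, and a horizontal (rather than vertical) flip are all assumptions not specified in the paper.

import io
from PIL import Image, ImageOps

def deperturb(image: Image.Image, webp_quality: int = 75) -> Image.Image:
    # Lossy WebP round trip: re-encode and decode the image so that
    # small, high-frequency adversarial perturbations are smoothed away.
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="WEBP", quality=webp_quality)
    buf.seek(0)
    compressed = Image.open(buf).convert("RGB")
    # Flip the image once along one of its sides to break the spatial
    # structure of the remaining perturbation (horizontal flip assumed).
    return ImageOps.mirror(compressed)

# Hypothetical usage: the de-perturbed image is then fed to the classifier.
# clean = deperturb(Image.open("example.jpg"))

The appeal of this kind of front end, as the abstract argues, is that both steps are cheap and model-agnostic: the compression round trip suppresses the perturbation, while a deterministic flip preserves the label of most natural images but not the carefully aligned adversarial pattern.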

Updated: 2020-07-20