Crafting Adversarial Perturbations via Transformed Image Component Swapping
IEEE Transactions on Image Processing (IF 10.8) Pub Date: 9-12-2022, DOI: 10.1109/tip.2022.3204206
Akshay Agarwal, Nalini Ratha, Mayank Vatsa, Richa Singh

Adversarial attacks have been demonstrated to fool deep classification networks. These attacks have two key characteristics: first, the perturbations are mostly additive noise carefully crafted from the deep neural network itself; second, the noise is added to the whole image, rather than treating the image as a combination of the components from which it is made. Motivated by these observations, in this research we first study the role of various image components and their impact on image classification. These manipulations require neither knowledge of the network nor external noise to function effectively, and hence have the potential to be one of the most practical options for real-world attacks. Based on the significance of particular image components, we also propose a transferable adversarial attack against unseen deep networks. The proposed attack uses a projected gradient descent (PGD) strategy to add the adversarial perturbation to the manipulated component image. Experiments are conducted on a wide range of networks and four databases, including ImageNet and CIFAR-100, and show that the proposed attack achieves better transferability, giving an attacker the upper hand. On the ImageNet database, the success rate of the proposed attack is up to 88.5%, whereas the current state-of-the-art attack success rate on that database is 53.8%. We further test the resiliency of the attack against one of the most successful defenses, namely adversarial training, to measure its strength. Comparison with several challenging attacks shows that (i) the proposed attack has a higher transferability rate against multiple unseen networks, and (ii) its impact is hard to mitigate. We claim that, based on an understanding of image components, this research identifies a new adversarial attack, unseen so far and unresolved by current defense mechanisms.
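The abstract outlines a two-stage pipeline: manipulate a transformed image component, then refine the result with projected gradient descent. As a rough illustration only, the sketch below pairs a wavelet-domain detail-subband swap (one plausible reading of "transformed image component swapping"; the paper's actual transform and component choice may differ) with a standard L-infinity PGD step. The function names (`component_swap`, `pgd_refine`), the Haar wavelet, and all hyperparameters are assumptions for illustration, not the authors' exact construction.

```python
# Hedged sketch: transform-domain component swap followed by PGD refinement.
# Wavelet choice, swapped subband, and hyperparameters are illustrative
# assumptions; the paper's exact component construction may differ.
import numpy as np
import pywt
import torch
import torch.nn.functional as F

def component_swap(img, donor, wavelet="haar"):
    """Swap one transformed component (here: the detail subbands) of `img`
    with the corresponding component of `donor`, then reconstruct.
    Inputs: float arrays in [0, 1], shape (H, W, C), even H and W."""
    out = np.empty_like(img)
    for c in range(img.shape[-1]):              # per-channel 2-D DWT
        cA, _ = pywt.dwt2(img[..., c], wavelet)
        _, donor_details = pywt.dwt2(donor[..., c], wavelet)
        # Keep the source's approximation band; take details from the donor.
        out[..., c] = pywt.idwt2((cA, donor_details), wavelet)
    return np.clip(out, 0.0, 1.0)

def pgd_refine(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD on top of the component-swapped image `x`.
    `x`: tensor (N, C, H, W) in [0, 1]; `y`: true labels, shape (N,)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixel range
    return x_adv.detach()
```

A call such as `component_swap(img, donor)` followed by `pgd_refine(model, x, y)` mirrors the two stages the abstract describes: the network-free component manipulation first, then the gradient-based perturbation added to the manipulated image.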

Updated: 2024-08-28