Frequency-Tuned Universal Adversarial Attacks on Texture Recognition
IEEE Transactions on Image Processing (IF 10.6) Pub Date: 2022-09-02, DOI: 10.1109/tip.2022.3202366
Yingpeng Deng, Lina J. Karam

Although deep neural networks (DNNs) have been shown to be susceptible to image-agnostic adversarial attacks on natural image classification problems, the effects of such attacks on DNN-based texture recognition have yet to be explored. As part of our work, we find that limiting the perturbation’s $l_{p}$ norm in the spatial domain may not be a suitable way to restrict the perceptibility of universal adversarial perturbations for texture images. Based on the fact that human perception is affected by local visual frequency characteristics, we propose a frequency-tuned universal attack method that computes universal perturbations in the frequency domain. Our experiments indicate that our proposed method can produce less perceptible perturbations yet achieve similar or higher white-box fooling rates on various DNN texture classifiers and texture datasets as compared to existing universal attack techniques. We also demonstrate that our approach can improve the attack robustness against defended models as well as the cross-dataset transferability for texture recognition problems.
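The core idea of the abstract — parameterizing a perturbation in the frequency domain and weighting it per frequency band so that it stays below perceptual sensitivity — can be sketched as follows. This is an illustrative sketch only, not the authors' actual algorithm: the DCT parameterization, the function names, and the radial weight map (which damps low frequencies, where the eye is more sensitive) are all assumptions made for demonstration.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):
    # 2-D type-II DCT with orthonormal scaling (invertible via idct2)
    return dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(X):
    # Inverse 2-D DCT
    return idct(idct(X, axis=0, norm="ortho"), axis=1, norm="ortho")

def apply_frequency_perturbation(image, delta_freq, band_gains):
    """Add a universal perturbation defined in the DCT domain.

    image      : (H, W) grayscale image with values in [0, 1]
    delta_freq : (H, W) universal perturbation coefficients (DCT domain)
    band_gains : (H, W) per-frequency weights; here smaller at low
                 frequencies to keep the perturbation less perceptible
    All names and the weighting scheme are hypothetical.
    """
    perturbed = idct2(dct2(image) + band_gains * delta_freq)
    return np.clip(perturbed, 0.0, 1.0)

# Toy usage: a radial gain map that is 0 at DC and grows with frequency
H, W = 64, 64
rng = np.random.default_rng(0)
image = rng.random((H, W))
delta = 0.01 * rng.standard_normal((H, W))
u = np.arange(H)[:, None] / H
v = np.arange(W)[None, :] / W
gains = np.sqrt(u**2 + v**2)  # low-frequency bands get near-zero weight
adv = apply_frequency_perturbation(image, delta, gains)
```

In an actual attack, `delta_freq` would be optimized over a training set to maximize the classifier's fooling rate; the sketch only shows how a frequency-domain parameterization shapes where the perturbation energy lands.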

Updated: 2022-09-02