DiFNet: Densely High-Frequency Convolutional Neural Networks
IEEE Signal Processing Letters (IF 3.2), Pub Date: 2021-06-21, DOI: 10.1109/lsp.2021.3090944
Wenzheng Hu, Mingyang Li, Zheng Wang, Jianqiang Wang, Changshui Zhang

Deep convolutional neural networks have achieved great success in many computer vision tasks. However, they can be attacked by adversarial examples: inputs with small, intentionally crafted feature perturbations designed to fool machine learning models. This vulnerability poses a potential threat to their widespread application, especially in security-sensitive scenarios. In this paper, we revisit adversarial examples in the frequency domain, drawing on the computational theory of edge detection, and propose a novel densely high-frequency convolutional neural network (DiFNet) to effectively defend against adversarial attacks. DiFNet introduces classical edge detection operations into the network structure to enhance its ability to detect high-frequency components in the image. It defends well against imperceptible perturbation attacks even without training on adversarial examples. Experiments demonstrate that DiFNet outperforms hand-designed CNNs in prediction accuracy while improving robustness against state-of-the-art adversarial attacks (FGSM, PGD, etc.).
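The abstract does not specify the exact DiFNet architecture, but the core idea it describes (embedding classical edge detection operators in the network and densely propagating the resulting high-frequency components) can be illustrated with a minimal, hypothetical sketch. The block below is an assumption-based illustration, not the authors' implementation: the module name HighFreqBlock, the choice of a fixed 3x3 Laplacian kernel, and the concatenation scheme are all illustrative.

```python
# Hypothetical sketch (not the authors' released code): a block that applies a
# fixed Laplacian edge-detection kernel as a frozen depthwise convolution and
# densely concatenates the resulting high-frequency map with learned features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HighFreqBlock(nn.Module):
    """Learned conv features concatenated with a fixed high-frequency (edge) response."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        # Classical 3x3 Laplacian kernel, applied per channel (depthwise) and frozen.
        lap = torch.tensor([[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]])
        self.register_buffer("lap_kernel", lap.repeat(in_channels, 1, 1, 1))
        self.in_channels = in_channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = F.relu(self.conv(x))
        # High-frequency (edge) response from the fixed, non-learnable kernel.
        high = F.conv2d(x, self.lap_kernel, padding=1, groups=self.in_channels)
        # "Dense" connection: pass both the learned features and the raw
        # high-frequency components on to the next stage.
        return torch.cat([feat, high], dim=1)


if __name__ == "__main__":
    block = HighFreqBlock(in_channels=3, out_channels=16)
    y = block(torch.randn(1, 3, 32, 32))
    print(y.shape)  # torch.Size([1, 19, 32, 32]): 16 learned + 3 edge channels
```

Because the Laplacian weights are stored as a non-trainable buffer, the high-frequency pathway cannot be washed out during training, which is one plausible way to realize the enhanced sensitivity to high-frequency image content that the paper attributes to DiFNet.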

Updated: 2021-06-21