Uni-image: Universal image construction for robust neural model.
Neural Networks (IF 7.8) Pub Date: 2020-05-21, DOI: 10.1016/j.neunet.2020.05.018
Jiacang Ho, Byung-Gook Lee, Dae-Ki Kang

Deep neural networks achieve high predictive performance, but they are defenseless against adversarial examples generated by adversarial attack techniques. In image classification, these attacks typically perturb the pixels of an image to fool a deep neural network. To improve the robustness of neural networks, many researchers have introduced defense techniques against such attacks. To the best of our knowledge, adversarial training is one of the most effective defenses against adversarial examples. However, it can fail against a semantic adversarial image, which applies arbitrary perturbations to fool the neural network while the modified image still semantically represents the same object as the original. Against this background, we propose a novel defense technique, the Uni-Image Procedure (UIP) method. UIP generates a universal image (uni-image) from a given image, which may be a clean image or an image perturbed by some attack. The generated uni-image preserves its own characteristics (i.e., color) regardless of transformations applied to the original image; such transformations include inverting the pixel values of an image and modifying its saturation, hue, and value. Our experimental results on several benchmark datasets show that our method not only defends against well-known adversarial attacks and semantic adversarial attacks but also boosts the robustness of the neural network.
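
The abstract does not detail the UIP construction itself, so the sketch below is only a minimal, hypothetical illustration (in Python, using NumPy and Pillow) of the semantic transformations it names: inverting pixel values and modifying an image's saturation, hue, and value. The file name example.png and the shift parameters are placeholders, not values from the paper.

# Minimal sketch of the semantic transformations mentioned in the abstract
# (pixel-value inversion and hue/saturation/value shifts). These keep the
# depicted object recognizable to a human while changing pixel statistics,
# which is what makes semantic adversarial inputs hard for a classifier.
import numpy as np
from PIL import Image, ImageOps

def invert_pixels(img: Image.Image) -> Image.Image:
    """Invert every pixel value (255 - v per channel)."""
    return ImageOps.invert(img.convert("RGB"))

def shift_hsv(img: Image.Image, dh: int = 30, ds: float = 1.3, dv: float = 0.9) -> Image.Image:
    """Shift hue by dh (on PIL's 0-255 hue scale), scale saturation and value."""
    hsv = np.asarray(img.convert("HSV"), dtype=np.float32)
    hsv[..., 0] = (hsv[..., 0] + dh) % 256           # hue: circular shift
    hsv[..., 1] = np.clip(hsv[..., 1] * ds, 0, 255)  # saturation: scale
    hsv[..., 2] = np.clip(hsv[..., 2] * dv, 0, 255)  # value: scale
    return Image.fromarray(hsv.astype(np.uint8), mode="HSV").convert("RGB")

if __name__ == "__main__":
    img = Image.open("example.png")  # placeholder input image
    invert_pixels(img).save("example_inverted.png")
    shift_hsv(img).save("example_hsv_shifted.png")

A preprocessing defense such as UIP would aim to map all such variants of an image to a common representation before classification; the exact mapping is defined in the paper, not in this sketch.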




Updated: 2020-05-21