NaturalAE: Natural and robust physical adversarial examples for object detectors
Journal of Information Security and Applications (IF 5.6), Pub Date: 2021-01-04, DOI: 10.1016/j.jisa.2020.102694
Mingfu Xue, Chengxiang Yuan, Can He, Jian Wang, Weiqiang Liu

Recently, many studies have shown that deep neural networks (DNNs) are susceptible to adversarial examples, which are generated by adding imperceptible perturbations to the inputs of a DNN. However, to demonstrate that adversarial examples are real threats in the physical world, it is necessary to study and evaluate them in real-world scenarios. In this paper, we propose a natural and robust physical adversarial example attack method targeting object detectors under real-world conditions, which is more challenging than targeting image classifiers. The generated adversarial examples are robust to various physical constraints and visually similar to the original images, so they appear natural to humans and do not arouse suspicion. First, to ensure the robustness of the adversarial examples under real-world conditions, the proposed method exploits different image transformation functions (Distance, Angle, Illumination, and Photographing) to simulate various physical changes during the iterative optimization that generates the adversarial examples. Second, to construct natural adversarial examples, the proposed method uses an adaptive mask to constrain the area and intensity of the added perturbations, and utilizes a real-world perturbation score (RPS) to make the perturbations resemble real noise in the physical world. Compared with existing studies, our generated adversarial examples achieve a high success rate with less conspicuous perturbations. Experimental results demonstrate that the generated adversarial examples are robust under various indoor and outdoor physical conditions, including different distances, angles, illuminations, and photographing. Specifically, the attack success rate of the generated adversarial examples reaches up to 73.33% indoors and 82.22% outdoors. Meanwhile, the proposed method preserves the naturalness of the generated adversarial examples: the size of the added perturbations is as low as 29361.86, much smaller than the perturbations in existing works (95381.14 at the highest). Furthermore, the proposed physical adversarial attack method transfers from white-box models to other object detection models: adversarial examples generated against Faster R-CNN Inception v2 achieve a success rate of up to 57.78% on SSD models, while adversarial examples generated against YOLO v2 reach 77.78% on SSD models. This paper reveals that physical adversarial example attacks are real threats under real-world conditions, and it can hopefully provide guidance for designing robust object detectors and image classifiers.
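The abstract describes the robustness step only at a high level, so the following is a minimal PyTorch sketch of that idea, in the spirit of Expectation over Transformation: each optimization step averages the attack loss over several randomly sampled physical transformations. All concrete choices below (the transformation ranges, the gradient-sign update, the bounds, and the `detector_loss` callable) are illustrative assumptions, not the paper's actual settings.

```python
# Sketch only: the detector loss and all parameter ranges are assumed,
# not taken from the paper.
import random
import torch
import torchvision.transforms.functional as TF

def sample_physical_transform(img: torch.Tensor) -> torch.Tensor:
    """Randomly simulate one physical condition per call: viewing angle,
    illumination, distance (approximated by down- and up-scaling), and
    photographing (sensor) noise."""
    img = TF.rotate(img, random.uniform(-15.0, 15.0))            # Angle
    img = TF.adjust_brightness(img, random.uniform(0.7, 1.3))    # Illumination
    h, w = img.shape[-2:]
    s = random.uniform(0.6, 1.0)                                 # Distance
    img = TF.resize(TF.resize(img, [int(h * s), int(w * s)]), [h, w])
    return img + 0.02 * torch.randn_like(img)                    # Photographing

def attack_step(x, delta, mask, detector_loss, k=8, lr=0.01):
    """One iteration: average the attack loss over k sampled transformations,
    then update the perturbation, which is confined to the mask region."""
    delta.requires_grad_(True)
    x_adv = torch.clamp(x + mask * delta, 0.0, 1.0)
    loss = sum(detector_loss(sample_physical_transform(x_adv))
               for _ in range(k)) / k
    loss.backward()
    with torch.no_grad():
        delta -= lr * delta.grad.sign()   # descend on the detector's loss
        delta.clamp_(-0.3, 0.3)           # keep perturbation intensity bounded
        delta.grad = None
    return delta.detach(), loss.item()
```

In a real attack, `detector_loss` would be the white-box detector's objective, for instance the confidence the detector assigns to the true class; a runnable stand-in looks like:

```python
x = torch.rand(1, 3, 224, 224)            # placeholder input image
delta = torch.zeros_like(x)
mask = torch.ones_like(x)                 # an adaptive mask is sketched below
for _ in range(10):
    delta, loss = attack_step(x, delta, mask, lambda im: im.mean())
```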



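For the naturalness side, the abstract names two mechanisms, an adaptive mask and the real-world perturbation score (RPS), but defines neither. The sketch below therefore uses a bounding-box mask and a total-variation smoothness term purely as illustrative stand-ins for those two components; such a penalty would be weighted against the detector loss during optimization so the attack succeeds with the least conspicuous perturbation.

```python
# Illustrative stand-ins only: the paper's adaptive mask and RPS are not
# specified in the abstract, so a box mask and a total-variation term are
# used here to show where such constraints enter the objective.
import torch

def box_mask(shape, box):
    """Binary mask confining the perturbation to an object's bounding box
    (x1, y1, x2, y2): a simple, non-adaptive approximation of the mask."""
    mask = torch.zeros(shape)
    x1, y1, x2, y2 = box
    mask[..., y1:y2, x1:x2] = 1.0
    return mask

def naturalness_penalty(delta, alpha=1.0, beta=0.1):
    """Penalize large and high-frequency perturbations: an L2 size term plus
    total variation as a smoothness proxy (assumed, not the paper's RPS)."""
    size = delta.pow(2).sum()
    tv = (delta[..., 1:, :] - delta[..., :-1, :]).abs().sum() \
       + (delta[..., :, 1:] - delta[..., :, :-1]).abs().sum()
    return alpha * size + beta * tv
```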

Updated: 2021-01-04