Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation
Pattern Recognition (IF 8) Pub Date: 2021-02-20, DOI: 10.1016/j.patcog.2021.107903
Yatie Xiao , Chi-Man Pun , Bo Liu

Deep learning has shown superiority in dealing with complicated and specialized tasks (e.g., computer vision, audio, and language processing). However, research has confirmed that Deep Neural Networks (DNNs) are vulnerable to carefully crafted adversarial perturbations, which cause DNNs to fail on specific tasks. In the object detection domain, the background contributes little to object classification, so adversarial perturbations added to the background do not improve the attack's ability to fool deep neural detection models, yet they induce substantial distortions in the generated examples. Motivated by this observation, we introduce an adversarial attack algorithm named the Adaptive Object-oriented Adversarial Method (AO2AM). It aims to fool deep neural object detection networks by applying adaptive accumulation of object-based gradients and adding adaptive object-based adversarial perturbations only to the objects, rather than to the whole frame of the input image. AO2AM effectively pushes the representations of the generated adversarial samples close to the decision boundary in the latent space, and forces deep neural detection networks to yield inaccurate locations and false classifications during object detection. Compared with existing adversarial attack methods, which generate perturbations acting on the global scale of the original inputs, the adversarial examples produced by AO2AM effectively fool deep neural object detection networks while maintaining high structural similarity with the corresponding clean inputs. When attacking Faster R-CNN, AO2AM achieves an attack success rate (ASR) of over 98.00% on pre-processed Pascal VOC 2007&2012 (Val) and reaches an SSIM of over 0.870. When fooling SSD, AO2AM attains an SSIM exceeding 0.980 under the L2 norm constraint. On SSIM and Mean Attack Ratio, AO2AM outperforms adversarial attack methods based on global-scale perturbations.
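The core idea described above is to accumulate detection-loss gradients and confine the resulting perturbation to object regions instead of the full image. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation: the detector interface (detection_loss_fn), the box format, and the step/budget hyper-parameters are assumptions made for the example.

```python
# A minimal sketch (not the authors' code) of an object-oriented iterative attack:
# gradients are accumulated adaptively, but the perturbation is masked to object boxes.
import torch

def object_mask(boxes, image_shape):
    """Build a binary mask that is 1 inside every object bounding box."""
    mask = torch.zeros(image_shape)          # (C, H, W)
    for x1, y1, x2, y2 in boxes.int().tolist():
        mask[:, y1:y2, x1:x2] = 1.0
    return mask

def object_oriented_attack(image, boxes, detection_loss_fn,
                           steps=10, eps=8 / 255, alpha=2 / 255):
    """Craft an adversarial example whose perturbation lives only on the objects.

    image             : clean input tensor (C, H, W), values in [0, 1]
    boxes             : object boxes, shape (N, 4) in pixel coordinates
    detection_loss_fn : callable returning the detector's loss for an input image
    """
    mask = object_mask(boxes, image.shape)
    adv = image.clone()
    momentum = torch.zeros_like(image)       # adaptive cumulation of gradients

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = detection_loss_fn(adv)        # e.g. classification + localization loss
        grad, = torch.autograd.grad(loss, adv)

        # Accumulate normalized gradients, then restrict the step to object regions.
        momentum = momentum + grad / grad.abs().mean().clamp_min(1e-12)
        step = alpha * momentum.sign() * mask

        adv = adv.detach() + step
        # Keep the total perturbation within an L_inf budget and a valid pixel range.
        adv = image + (adv - image).clamp(-eps, eps) * mask
        adv = adv.clamp(0.0, 1.0)

    return adv
```

Because the mask zeroes out every background pixel, the clean and adversarial images differ only inside the object boxes, which is what keeps the structural similarity (SSIM) with the clean input high while the detector is still misled.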




Updated: 2021-02-26