Towards cross-task universal perturbation against black-box object detectors in autonomous driving
Computer Networks (IF 4.4) Pub Date: 2020-07-15, DOI: 10.1016/j.comnet.2020.107388
Quanxin Zhang, Yuhang Zhao, Yajie Wang, Thar Baker, Jian Zhang, Jingjing Hu

Deep neural networks are a central branch of artificial intelligence research and are well suited to many decision-making domains. Autonomous driving and unmanned vehicles often depend on deep neural networks for accurate and reliable detection, classification, and ranging of surrounding objects in real on-road environments, whether computed locally or via swarm intelligence among distributed nodes over 5G channels. However, in computer vision tasks, deep neural networks have been shown to be vulnerable to well-designed adversarial examples that are imperceptible to the human eye, and studying this vulnerability is valuable for improving network robustness. Existing adversarial examples against object detection models are image-dependent; in this paper, we instead attack object detection models with universal perturbations. We observe the cross-task, cross-model, and cross-dataset transferability of universal perturbations. We first train a universal perturbation generator and then add the universal perturbations to target images in two ways, resizing and pile-up, to overcome the problem that universal perturbations cannot be applied directly to attack object detection models. We then exploit the transferability of universal perturbations to attack black-box object detection models, which reduces the time cost of generating adversarial examples. A series of experiments on the PASCAL VOC and MS COCO datasets demonstrates the feasibility of cross-task attacks and proves the effectiveness of our attack on two representative classes of object detectors: regression-based models such as YOLOv3 and proposal-based models such as Faster R-CNN.
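To make the two application schemes concrete, below is a minimal NumPy sketch of how a fixed-size universal perturbation might be added to a detector's input image by resizing (scale the perturbation to the full image) versus pile-up (tile the perturbation across the image). The abstract only names the two schemes; the function names, the L∞ clipping bound eps, nearest-neighbour interpolation, and uint8 image handling are all illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def resize_nearest(pert: np.ndarray, h: int, w: int) -> np.ndarray:
    """Nearest-neighbour resize of a (ph, pw, c) perturbation to (h, w, c).

    Pure-NumPy so that signed float perturbations are handled safely.
    """
    ph, pw = pert.shape[:2]
    rows = np.arange(h) * ph // h
    cols = np.arange(w) * pw // w
    return pert[rows][:, cols]

def apply_resizing(image: np.ndarray, pert: np.ndarray, eps: float = 10.0) -> np.ndarray:
    """Resizing scheme: scale the universal perturbation to the image size, then add it."""
    h, w = image.shape[:2]
    delta = np.clip(resize_nearest(pert, h, w), -eps, eps)  # assumed L-inf bound
    return np.clip(image.astype(np.float32) + delta, 0, 255).astype(np.uint8)

def apply_pileup(image: np.ndarray, pert: np.ndarray, eps: float = 10.0) -> np.ndarray:
    """Pile-up scheme: tile the fixed-size perturbation across the image, cropping edges."""
    h, w = image.shape[:2]
    ph, pw = pert.shape[:2]
    reps_h, reps_w = -(-h // ph), -(-w // pw)  # ceiling division
    delta = np.clip(np.tile(pert, (reps_h, reps_w, 1))[:h, :w], -eps, eps)
    return np.clip(image.astype(np.float32) + delta, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins: a 416x416 detector input and a 224x224 generator output.
    image = rng.integers(0, 256, size=(416, 416, 3), dtype=np.uint8)
    pert = rng.uniform(-10, 10, size=(224, 224, 3)).astype(np.float32)
    adv_resized = apply_resizing(image, pert)
    adv_piled = apply_pileup(image, pert)
    print(adv_resized.shape, adv_piled.shape)  # both (416, 416, 3)
```

In a black-box setting, the resulting adversarial images would simply be fed to the target detector; no gradient access is needed because the perturbation generator was trained on a surrogate and the attack relies on transferability.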



Updated: 2020-07-21