An adversarial attack on DNN-based black-box object detectors
Journal of Network and Computer Applications (IF 7.7) Pub Date: 2020-03-29, DOI: 10.1016/j.jnca.2020.102634
Yajie Wang, Yu-an Tan, Wenjiao Zhang, Yuhang Zhao, Xiaohui Kuang

Object detection models play an essential role in various IoT devices as one of their core components. Experiments have shown that object detection models are vulnerable to adversarial examples. To date, several attack methods against object detection models have been proposed, but the existing methods can only attack white-box models or a specific type of black-box model. In this paper, we propose a novel black-box attack method called Evaporate Attack, which can successfully attack both regression-based and region-based detection models. To attack different types of object detection models effectively, we design an optimization algorithm that generates adversarial examples using only the position and label information of the model's predictions. Evaporate Attack can hide objects from detection models without any interior information about the model, a scenario much closer to what an attacker faces in the real world. Our approach achieves an 84% fooling rate on the regression-based YOLOv3 and a 48% fooling rate on the region-based Faster R-CNN, where an attack counts as successful only if all objects are hidden.
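
The paper itself ships no code, but the setting the abstract describes, an attacker who can only query the detector and read back the predicted positions and labels, can be illustrated with a short sketch. The random-rectangle search below, the `toy_detect` stand-in, and all parameter values are illustrative assumptions for a generic decision-only black-box loop, not the authors' actual Evaporate Attack optimization algorithm.

```python
import numpy as np

def evaporate_attack_sketch(image, detect, eps=16.0, steps=1000, seed=0):
    """Illustrative decision-only black-box loop: query the detector, read
    back only the predicted (box, label) pairs, and keep a random
    perturbation only if it strictly reduces the number of detected objects."""
    rng = np.random.default_rng(seed)
    adv = image.astype(np.float32).copy()
    best = len(detect(adv))                       # objects currently visible
    h, w = image.shape[:2]
    for _ in range(steps):
        if best == 0:                             # every object hidden: success
            break
        # Candidate: shift a random rectangle by a random constant, then
        # project back into an L_inf ball of radius eps around the original.
        y, x = rng.integers(0, h), rng.integers(0, w)
        hh, ww = rng.integers(1, h - y + 1), rng.integers(1, w - x + 1)
        cand = adv.copy()
        cand[y:y + hh, x:x + ww] += rng.uniform(-8.0, 8.0)
        cand = np.clip(cand, image - eps, image + eps)
        cand = np.clip(cand, 0.0, 255.0)          # keep a valid image
        n = len(detect(cand))                     # the only feedback we get
        if n < best:                              # accept improving queries
            adv, best = cand, n
    return adv, best

def toy_detect(img, thr=127.0):
    """Toy stand-in for a real black-box detector (e.g., YOLOv3 or Faster
    R-CNN behind an API): one (box, label) per quadrant brighter than thr."""
    h, w = img.shape[:2]
    out = []
    for y0 in (0, h // 2):
        for x0 in (0, w // 2):
            if img[y0:y0 + h // 2, x0:x0 + w // 2].mean() > thr:
                out.append(((x0, y0, x0 + w // 2, y0 + h // 2), "object"))
    return out

if __name__ == "__main__":
    img = np.full((64, 64, 3), 128.0, dtype=np.float32)
    adv, remaining = evaporate_attack_sketch(img, toy_detect)
    print("objects still detected:", remaining)   # 0 once all are hidden
```

Under the paper's success criterion, a run would count toward the fooling rate only when `remaining` reaches zero, i.e., when every object is hidden from the detector.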

Updated: 2020-03-29