Physically Realizable Adversarial Examples for LiDAR Object Detection
arXiv - CS - Robotics. Pub Date: 2020-04-01. arXiv: 2004.00543
James Tu, Mengye Ren, Siva Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, Raquel Urtasun

Modern autonomous driving systems rely heavily on deep learning models to process point cloud sensory data; meanwhile, deep models have been shown to be susceptible to adversarial attacks with visually imperceptible perturbations. Although this poses a security concern for the self-driving industry, there has been very little exploration of 3D perception, as most adversarial attacks have only been applied to 2D flat images. In this paper, we address this issue and present a method to generate universal 3D adversarial objects to fool LiDAR detectors. In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%. We report attack results on a suite of detectors using various input representations of point clouds. We also conduct a pilot study on adversarial defense using data augmentation. This is one step closer to safer self-driving under unseen conditions with limited training data.
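To make the attack idea concrete, below is a minimal, self-contained sketch of the underlying optimization: treat the adversarial object's geometry as learnable parameters, place it on the target vehicle's roof, and run gradient descent to suppress the detector's confidence. Everything here is an illustrative stand-in, not the authors' code: the `ToyDetector`, the random background scene, the fixed rooftop `offset`, and the raw point-cloud parameterization all substitute for the paper's actual pipeline, which optimizes a mesh rendered through a LiDAR simulator against real detectors under physical-realizability constraints.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyDetector(nn.Module):
    """Stand-in detector: pools a point set into one detection confidence."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, points):              # points: (N, 3) LiDAR returns
        return self.net(points).mean()      # pooled scalar confidence

detector = ToyDetector()
for p in detector.parameters():             # the attack never updates the model
    p.requires_grad_(False)

scene = torch.randn(256, 3)                 # toy background LiDAR returns
adv_points = torch.randn(64, 3, requires_grad=True)  # adversarial object shape
offset = torch.tensor([0.0, 0.0, 1.6])      # place the object on the rooftop

opt = torch.optim.Adam([adv_points], lr=0.01)
for step in range(200):
    # Merge the adversarial object's returns with the scene and detect.
    inputs = torch.cat([scene, adv_points + offset], dim=0)
    loss = detector(inputs)                  # minimize detection confidence
    opt.zero_grad()
    loss.backward()
    opt.step()
    # The paper additionally constrains the shape to remain physically
    # realizable (e.g., bounded size, smooth mesh); omitted in this sketch.
```

A "universal" object, as in the paper, would be obtained by running such an optimization jointly over many scenes and vehicles rather than a single fixed scene; the defense pilot study correspondingly augments training data with such objects.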

Updated: 2020-04-03