Attacking vision-based perception in end-to-end autonomous driving models
Journal of Systems Architecture (IF 3.7) Pub Date: 2020-04-04, DOI: 10.1016/j.sysarc.2020.101766
Adith Boloor, Karthik Garimella, Xin He, Christopher Gill, Yevgeniy Vorobeychik, Xuan Zhang

Recent advances in machine learning, especially techniques such as deep neural networks, are enabling a range of emerging applications. One such example is autonomous driving, which often relies on deep learning for perception. However, deep learning-based perception has been shown to be vulnerable to a host of subtle adversarial manipulations of images. Nevertheless, the vast majority of such demonstrations focus on perception that is disembodied from end-to-end control. We present novel end-to-end attacks on autonomous driving in simulation, using a simple, physically realizable perturbation: painting black lines on the road. These attacks target deep neural network models for end-to-end autonomous driving control. A systematic investigation shows that such attacks are easy to engineer, and we describe scenarios (e.g., right turns) in which they are highly effective. We define several objective functions that quantify the success of an attack and develop techniques based on Bayesian Optimization to efficiently traverse the search space of higher-dimensional attacks. Additionally, we define a novel class of hijacking attacks, in which painted lines on the road cause the driverless car to follow a target path. Through the use of network deconvolution, we provide insights into the successful attacks, which appear to work by mimicking activations of entirely different scenarios. Our code is available at https://github.com/xz-group/AdverseDrive.
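The search strategy named in the abstract can be sketched in miniature: a Gaussian-process surrogate with an Expected Improvement acquisition, iteratively choosing which attack configuration to try next. This is an illustrative sketch only — the one-dimensional "lateral offset of the painted line" parameter, the Gaussian-shaped stand-in objective, and the kernel length-scale are all assumptions; in the paper the attack objectives are evaluated by rolling out the driving model inside a simulator.

```python
import math
import numpy as np

def attack_objective(x):
    # Placeholder for a simulator rollout: lower = stronger attack.
    # We pretend (as an assumption) that line offsets near 0.7 push the
    # car furthest off its intended path.
    return -np.exp(-0.5 * ((x - 0.7) / 0.15) ** 2)

def rbf(a, b, length_scale=0.2):
    # Squared-exponential kernel between two 1-D point sets (prior var 1).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, jitter=1e-6):
    # Gaussian-process posterior mean and variance at the query points.
    K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_train
    var = 1.0 - np.einsum("ij,ji->i", Ks.T @ K_inv, Ks)
    return mu, np.clip(var, 1e-12, None)

def expected_improvement(mu, var, best_y):
    # EI acquisition for minimization: reward candidates whose predicted
    # value beats the best observation, weighted by uncertainty.
    sigma = np.sqrt(var)
    z = (best_y - mu) / sigma
    cdf = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2.0)) for v in z]))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best_y - mu) * cdf + sigma * pdf

candidates = np.linspace(0.0, 1.0, 201)  # discretized attack space
xs = np.array([0.1, 0.5, 0.9])           # initial probes
ys = attack_objective(xs)

for _ in range(10):                      # 10 Bayesian Optimization steps
    mu, var = gp_posterior(xs, ys, candidates)
    ei = expected_improvement(mu, var, ys.min())
    x_next = candidates[np.argmax(ei)]
    xs = np.append(xs, x_next)
    ys = np.append(ys, attack_objective(x_next))

best_x = xs[np.argmin(ys)]
print(f"strongest attack offset ~= {best_x:.2f}, objective = {ys.min():.3f}")
```

The point of the surrogate model is exactly what the abstract claims: each objective evaluation (a full simulated drive) is expensive, so the optimizer concentrates the few rollouts it can afford on the most promising regions of the attack space.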




Updated: 2020-04-04