Data Poisoning Attacks in Internet-of-Vehicle Networks: Taxonomy, State-of-The-Art, and Future Directions
IEEE Transactions on Industrial Informatics ( IF 12.3 ) Pub Date : 2022-08-15 , DOI: 10.1109/tii.2022.3198481
Yanjiao Chen 1 , Xiaotian Zhu 1 , Xueluan Gong 1 , Xinjing Yi 1 , Shuyang Li 1

With the unprecedented development of deep learning, autonomous vehicles (AVs) have made tremendous progress. However, AVs supported by deep neural network (DNN) models are vulnerable to data poisoning attacks, which hinders the large-scale deployment of autonomous driving. For example, by injecting carefully designed poisoned samples into the training dataset of the DNN model in a traffic sign recognition system, an attacker can mislead the system into targeted misclassification or degrade the model's classification accuracy. In this article, we conduct a thorough investigation of state-of-the-art data poisoning attacks and defenses for AVs. According to whether the attacker needs to manipulate the data labeling process, we divide existing attack approaches into two categories: dirty-label attacks and clean-label attacks. We likewise divide existing defense methods into two categories based on whether they modify the training data or the models: data-based defenses and model-based defenses. In addition to a detailed review of the attacks and defenses in each category, we give a qualitative comparison of existing attacks and defenses, as well as a quantitative comparison through experiments. Finally, we pinpoint several future directions for data poisoning attacks and defenses in AVs, suggesting avenues for further research.
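The dirty-label attack the abstract mentions can be illustrated with a minimal sketch. The snippet below is not from the paper; it is a generic illustration, under the assumption of a stamped-trigger backdoor: the attacker stamps a small trigger patch onto a fraction of training images and relabels those samples as a chosen target class (all function and parameter names here are hypothetical):

```python
import numpy as np

def dirty_label_poison(images, labels, target_class,
                       poison_rate=0.1, patch_size=3,
                       patch_value=1.0, seed=0):
    """Illustrative dirty-label backdoor: stamp a trigger patch on a
    random fraction of training images and flip their labels to the
    attacker's target class. The attacker controls both the inputs and
    the labeling of the poisoned samples."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a bright square in the bottom-right corner of each image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    # Dirty label: relabel the poisoned samples as the target class.
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 100 blank 8x8 "traffic sign" images, 4 classes.
clean_x = np.zeros((100, 8, 8))
clean_y = np.arange(100) % 4
pois_x, pois_y, idx = dirty_label_poison(clean_x, clean_y, target_class=2)
```

A clean-label attack, by contrast, would perturb the images while leaving their correct labels untouched, which is what makes it harder for human label auditors to spot.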

Updated: 2022-08-15