Adversarial examples: attacks and defenses in the physical world
International Journal of Machine Learning and Cybernetics (IF 5.6), Pub Date: 2021-01-04, DOI: 10.1007/s13042-020-01242-z
Huali Ren , Teng Huang , Hongyang Yan

Deep learning has become an important branch of artificial intelligence. However, researchers have found that deep neural networks, the core algorithms of deep learning, are vulnerable to adversarial examples: inputs modified with small, carefully crafted perturbations that cause a model to produce erroneous results with high confidence. They therefore pose serious security risks to deep-learning-based systems. Moreover, adversarial examples exist not only in the digital world but also in the physical world. This paper presents a comprehensive overview of adversarial attacks and defenses in the physical world. First, we review work that successfully generates adversarial examples in the digital world and analyze the challenges these attacks face in real environments. Then, we compare and summarize work on adversarial examples for image classification, object detection, and speech recognition tasks. In addition, we summarize the relevant feasible defense strategies. Finally, building on the reviewed work, we propose potential research directions for attacks on, and defenses against, adversarial examples in the physical world.
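For concreteness, the following is a minimal sketch of one standard digital-world attack of the kind the abstract describes, the fast gradient sign method; it illustrates a small, loss-maximizing perturbation and is not a method proposed by this survey. PyTorch is assumed, and model, image, and label are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        # Compute the loss gradient with respect to the input pixels.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Take one epsilon-sized step in the direction that increases
        # the loss, then clamp back to the valid pixel range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

As the abstract notes, carrying such an attack into the physical world adds further challenges (printing, camera capture, lighting, and viewpoint changes), which is why the digital-world formulation above is not sufficient on its own.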


