CIS Publication Spotlight [Publication Spotlight]
IEEE Computational Intelligence Magazine (IF 10.3), Pub Date: 2020-05-01, DOI: 10.1109/mci.2020.2976181
Haibo He, Jon Garibaldi, Kay Chen Tan, Julian Togelius, Yaochu Jin, Yew Soon Ong

Digital Object Identifier: 10.1109/TNNLS.2018.2886017 “With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks (DNNs) have been recently found vulnerable to well-designed input samples called adversarial examples. Adversarial perturbations are imperceptible to humans but can easily fool DNNs in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying DNNs in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.”
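
The spotlighted survey itself does not include code; as a rough illustration of how such adversarial examples can be generated, the sketch below implements the fast gradient sign method (FGSM), one widely cited one-step attack of the kind the survey categorizes. It is a minimal sketch only, assuming PyTorch; the names model, x, y, and epsilon are hypothetical placeholders, not anything defined in the paper.

```python
# Illustrative sketch of FGSM, a one-step gradient-sign attack.
# All names below (model, x, y, epsilon) are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb x by a small step in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # A small signed step that increases the loss while keeping the
    # perturbation visually imperceptible; clamp to the valid image range.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: classifier is any image classifier with inputs in [0, 1].
# x_adv = fgsm_attack(classifier, images, labels, epsilon=8 / 255)
```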

Updated: 2020-05-01