Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks
arXiv - CS - Cryptography and Security. Pub Date: 2021-07-22, DOI: arxiv-2107.10599
Ramin Barati, Reza Safabakhsh, Mohammad Rahmati

In this paper, we study the existence of adversarial examples and adversarial training from the standpoint of convergence, and provide evidence that pointwise convergence in artificial neural networks (ANNs) can explain these observations. The main contribution of our proposal is that it relates the objectives of evasion attacks and adversarial training to concepts already defined in learning theory. We also extend and unify several other proposals in the literature and offer alternative explanations for the observations made in them. Through a range of experiments, we demonstrate that the framework is valuable for studying the phenomenon and is applicable to real-world problems.
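As a brief illustration of the learning-theory notion invoked above (a standard textbook example, not taken from the paper), recall how pointwise convergence, unlike uniform convergence, permits a sequence of smooth functions to converge to a discontinuous limit:

% f_n(x) = x^n on [0,1]: the pointwise limit is discontinuous at x = 1
\[
  f_n(x) = x^n, \qquad
  f(x) = \lim_{n\to\infty} f_n(x) =
  \begin{cases}
    0, & 0 \le x < 1,\\
    1, & x = 1,
  \end{cases}
\]
% yet the convergence is not uniform, since for every n
\[
  \sup_{x \in [0,1]} \lvert f_n(x) - f(x) \rvert = 1.
\]

For any fixed n there are inputs arbitrarily close to x = 1 where f_n is near 1 while the limit f equals 0, so a vanishingly small input perturbation flips the limiting output. This is only a loose analogy to the role the abstract assigns to pointwise convergence in ANNs, but it shows why the distinction from uniform convergence matters.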

Updated: 2021-07-23