On the Relationship between Generalization and Robustness to Adversarial Examples
Symmetry (IF 2.940) Pub Date: 2021-05-07, DOI: 10.3390/sym13050817
Anibal Pedraza, Oscar Deniz, Gloria Bueno

One of the most intriguing phenomena related to deep learning is the so-called adversarial example. These samples are visually equivalent to normal inputs, with perturbations imperceptible to humans, yet they cause the network to output wrong results. The phenomenon can be framed as a symmetry/asymmetry problem: inputs that are similar/symmetric in appearance to regular images produce an opposite/asymmetric output. Some researchers focus on developing methods for generating adversarial examples, while others propose defense methods. In parallel, there is growing interest in characterizing the phenomenon, which is also the focus of this paper. On well-known image datasets, such as CIFAR-10 and STL-10, a neural network architecture is first trained in a normal regime, in which training and validation performance both increase and the model generalizes. The same architectures and datasets are then trained in an overfitting regime, in which the gap between training and validation performance grows. The behaviour of these two regimes against adversarial examples is then compared. From the results, we observe greater robustness to adversarial examples in the overfitting regime. We explain this simultaneous loss of generalization and gain in robustness to adversarial examples as another manifestation of the well-known fitting-generalization trade-off.
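
As an illustration of the protocol described above, the following is a minimal sketch (not the paper's code, whose framework the abstract does not name) of how the comparison could be run in PyTorch. FGSM is used purely as a common example attack; the toy CNN, the random stand-in data, and the epsilon value are illustrative assumptions rather than the paper's configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """FGSM (Goodfellow et al.): x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_accuracy(model, x, y, eps):
    """Fraction of FGSM-perturbed inputs still classified correctly."""
    model.eval()
    x_adv = fgsm_attack(model, x, y, eps)
    with torch.no_grad():
        return (model(x_adv).argmax(1) == y).float().mean().item()

# Toy stand-ins: a small CNN and random CIFAR-10-sized inputs in [0, 1].
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
x = torch.rand(32, 3, 32, 32)
y = torch.randint(0, 10, (32,))

# In the paper's protocol, one copy of the architecture would be trained
# with early stopping (generalizing regime) and another far past the point
# where validation accuracy diverges (overfitting regime); both copies
# would then be scored with the same call and their robustness compared.
print(adversarial_accuracy(model, x, y, eps=8 / 255))

Per the abstract's finding, the expectation is that the overfitted copy retains higher adversarial accuracy than the generalizing copy.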

Updated: 2021-05-07