Robustness to adversarial examples can be improved with overfitting
International Journal of Machine Learning and Cybernetics (IF 3.1), Pub Date: 2020-02-26, DOI: 10.1007/s13042-020-01097-4
Oscar Deniz, Anibal Pedraza, Noelia Vallez, Jesus Salido, Gloria Bueno

Deep learning (henceforth DL) has become the most powerful machine learning methodology. Under specific circumstances, recognition rates even surpass those obtained by humans. Despite this, several works have shown that deep learning produces outputs that are very far from human responses when confronted with the same task. This is the case of the so-called "adversarial examples" (henceforth AE). The fact that such implausible misclassifications exist points to a fundamental difference between machine and human learning. This paper focuses on the possible causes of this intriguing phenomenon. We first argue that the error in adversarial examples is caused by high bias, i.e. by regularization that has negative local effects. This idea is supported by our experiments, in which robustness to adversarial examples is measured with respect to the level of fitting to the training samples: higher fitting was associated with higher robustness to adversarial examples. This ties the phenomenon to the trade-off that exists in machine learning between fitting and generalization.
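The abstract describes tracking adversarial robustness against the degree of fit to the training data. The sketch below illustrates that kind of measurement under stated assumptions, not the authors' exact protocol: it assumes MNIST, a small fully connected network, and the one-step FGSM attack with an illustrative epsilon; the paper's actual models, datasets, and attacks may differ.

```python
# Minimal sketch: train to increasing levels of fit and, after each epoch,
# measure accuracy on FGSM adversarial examples. Architecture, dataset, and
# epsilon are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(          # deliberately small, assumed architecture
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 10),
).to(device)

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
test_set = datasets.MNIST("data", train=False, download=True,
                          transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = DataLoader(test_set, batch_size=256)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm_accuracy(eps: float) -> float:
    """Accuracy on one-step FGSM adversarial examples of the test set."""
    model.eval()
    correct, total = 0, 0
    for x, y in test_loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        x_adv = (x + eps * grad.sign()).clamp(0, 1)   # FGSM perturbation
        correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

for epoch in range(20):   # more epochs -> tighter fit to the training samples
    model.train()
    train_correct, train_total = 0, 0
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        logits = model(x)
        loss = F.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        train_correct += (logits.argmax(1) == y).sum().item()
        train_total += y.numel()
    # The paper's claim predicts adversarial accuracy rising with training fit.
    print(f"epoch {epoch}: train acc {train_correct / train_total:.3f}, "
          f"FGSM acc {fgsm_accuracy(eps=0.1):.3f}")
```

Plotting training accuracy against FGSM accuracy over epochs makes the claimed association visible: if the hypothesis holds, the two curves should rise together rather than trade off.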

Updated: 2020-02-26