On Connections between Regularizations for Improving DNN Robustness
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-07-04. DOI: arxiv-2007.02209
Yiwen Guo, Long Chen, Yurong Chen, and Changshui Zhang

This paper analyzes, from a theoretical point of view, regularization terms recently proposed for improving the adversarial robustness of deep neural networks (DNNs). Specifically, we study possible connections between several effective methods, including input-gradient regularization, Jacobian regularization, curvature regularization, and a cross-Lipschitz functional. We investigate them on DNNs with general rectified linear activations, which constitute one of the most prevalent families of models for image classification and a host of other machine learning applications. We shed light on the essential ingredients of these regularizations and re-interpret their functionality. Through the lens of our study, more principled and efficient regularizations may be devised in the near future.
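For concreteness, below is a minimal sketch, not code from the paper, of the first of these terms, input-gradient regularization, in PyTorch. The model handle, the weight lam, and the function name are illustrative assumptions; the penalty is the squared L2 norm of the loss gradient with respect to the input.

    import torch
    import torch.nn.functional as F

    def loss_with_input_gradient_penalty(model, x, y, lam=0.1):
        """Cross-entropy loss plus an input-gradient penalty (illustrative).

        Penalizes ||d loss / d x||_2^2 per example, the core quantity
        behind input-gradient regularization.
        """
        x = x.clone().requires_grad_(True)
        ce = F.cross_entropy(model(x), y)
        # create_graph=True keeps the gradient differentiable, so the
        # penalty also back-propagates into the model parameters.
        (grad_x,) = torch.autograd.grad(ce, x, create_graph=True)
        penalty = grad_x.flatten(start_dim=1).pow(2).sum(dim=1).mean()
        return ce + lam * penalty

The other terms named above operate on closely related quantities: Jacobian regularization penalizes the Frobenius norm of the logit Jacobian with respect to the input, and the cross-Lipschitz functional penalizes differences between per-class input gradients; the paper studies how such terms relate on networks with rectified linear activations.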

Updated: 2020-07-07