Practical Convex Formulation of Robust One-hidden-layer Neural Network Training
arXiv - CS - Computational Complexity. Pub Date: 2021-05-25. arXiv: 2105.12237
Yatong Bai, Tanmay Gautam, Yu Gai, Somayeh Sojoudi

Recent work has shown that the training of a one-hidden-layer, scalar-output, fully-connected ReLU neural network can be reformulated as a finite-dimensional convex program. Unfortunately, the scale of such a convex program grows exponentially in the data size. In this work, we prove that a stochastic procedure with linear complexity approximates the exact formulation well. Moreover, we derive a convex optimization approach that efficiently solves the "adversarial training" problem, which trains neural networks to be robust to adversarial input perturbations. Our method applies to binary classification and regression, and provides an alternative to current adversarial training methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). We demonstrate in experiments that the proposed method achieves noticeably better adversarial robustness and performance than the existing methods.
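To make the sampling idea concrete, below is a minimal sketch of the sampled (non-robust) convex program that this line of work builds on, solved with cvxpy. ReLU activation patterns D_i = diag(1[X u_i >= 0]) are drawn from random Gaussian directions u_i instead of being enumerated, which is what keeps the problem size linear rather than exponential in the data. The data, pattern count P, and regularization weight beta are hypothetical, and the robust (adversarial) variant developed in the paper is not shown.

```python
import numpy as np
import cvxpy as cp

# Hypothetical toy problem: n samples, d features, scalar regression targets.
rng = np.random.default_rng(0)
n, d, P = 20, 3, 8            # P: number of sampled ReLU activation patterns
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
beta = 1e-3                   # group-sparsity regularization weight (assumed)

# Sample activation patterns D_i = diag(1[X u_i >= 0]) from random Gaussian
# directions u_i, instead of enumerating the exponentially many hyperplane
# arrangements required by the exact formulation.
U = rng.standard_normal((d, P))
D = (X @ U >= 0).astype(float)          # n x P matrix of 0/1 patterns

# Each sampled pattern i gets a pair of convex weight vectors (v_i, w_i).
V = cp.Variable((d, P))
W = cp.Variable((d, P))

pred = 0
constraints = []
for i in range(P):
    Di = D[:, i]                        # activation indicator for pattern i
    pred = pred + cp.multiply(Di, X @ (V[:, i] - W[:, i]))
    # Cone constraints keep (v_i, w_i) consistent with pattern i:
    # (2 D_i - I) X v_i >= 0 and (2 D_i - I) X w_i >= 0.
    s = 2.0 * Di - 1.0
    constraints += [cp.multiply(s, X @ V[:, i]) >= 0,
                    cp.multiply(s, X @ W[:, i]) >= 0]

reg = cp.sum(cp.norm(V, axis=0) + cp.norm(W, axis=0))
problem = cp.Problem(cp.Minimize(cp.sum_squares(pred - y) + beta * reg),
                     constraints)
problem.solve()
print("optimal objective:", problem.value)
```

The group-norm regularizer plays the role of weight decay on the original network, and an optimal solution of the convex program can be mapped back to hidden-layer weights of the ReLU network.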

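For contrast with the convex approach, the gradient-based baselines named in the abstract perturb inputs on the nonconvex model directly. Here is a minimal FGSM sketch in PyTorch; the one-hidden-layer, scalar-output ReLU architecture matches the paper's setting, but the layer sizes, eps, and loss function are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps, loss_fn=nn.MSELoss()):
    """One-step FGSM: shift each input by eps along the sign of its gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv).squeeze(-1), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Hypothetical one-hidden-layer, scalar-output ReLU network.
model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
x, y = torch.randn(20, 3), torch.randn(20)
x_adv = fgsm_perturb(model, x, y, eps=0.1)  # eps: assumed perturbation budget
# Adversarial training would minimize the loss on (x_adv, y) instead of (x, y);
# PGD iterates this perturbation step with projection onto the eps-ball.
```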
Updated: 2021-05-27