Gradient-Guided Dynamic Efficient Adversarial Training
arXiv - CS - Cryptography and Security. Pub Date: 2021-03-04, DOI: arxiv-2103.03076
Fu Wang, Yanghao Zhang, Yanbin Zheng, Wenjie Ruan

Adversarial training is arguably an effective but time-consuming way to train robust deep neural networks that can withstand strong adversarial attacks. In response to this inefficiency, we propose Dynamic Efficient Adversarial Training (DEAT), which gradually increases the number of adversarial iterations during training. Moreover, we theoretically reveal a connection between the lower bound of the Lipschitz constant of a given network and the magnitude of its partial derivative with respect to adversarial examples. Supported by this theoretical finding, we use the gradient's magnitude to quantify the effectiveness of adversarial training and to determine when to adjust the training procedure. This magnitude-based strategy is computationally friendly and easy to implement. It is especially suited to DEAT and can also be transplanted into a wide range of adversarial training methods. Our post-hoc investigation suggests that maintaining the quality of the training adversarial examples at a certain level is essential to achieving efficient adversarial training, which may shed some light on future studies.
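The scheduling idea described above — increasing the number of adversarial (PGD) iterations whenever the gradient magnitude signals that the current adversarial examples have become too weak — can be sketched roughly as follows. This is a minimal illustration, not the paper's exact algorithm: the function name `update_attack_steps`, the threshold ratio `gamma`, and the use of a per-epoch mean gradient magnitude as the trigger are all assumptions made for exposition.

```python
import numpy as np

def update_attack_steps(grad_mags, ref_mag, steps, gamma=0.5, max_steps=10):
    """Hypothetical magnitude-guided schedule for DEAT-style training.

    grad_mags : per-batch input-gradient magnitudes from the last epoch
    ref_mag   : reference magnitude recorded when `steps` was last raised
    steps     : current number of adversarial (PGD) iterations
    gamma     : assumed threshold ratio; when the mean magnitude drops
                below gamma * ref_mag, the adversarial examples are
                deemed too weak and one more iteration is added
    """
    mean_mag = float(np.mean(grad_mags))
    if mean_mag < gamma * ref_mag and steps < max_steps:
        # Strengthen the attack and adopt the current magnitude
        # as the new reference for future comparisons.
        return steps + 1, mean_mag
    return steps, ref_mag
```

In an actual training loop, `grad_mags` would come from the norms of the loss gradient with respect to the inputs, computed as a by-product of crafting the adversarial examples, so the check adds essentially no extra cost — which is the "computationally friendly" property the abstract claims.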

Last updated: 2021-03-05