Adversarial parameter defense by multi-step risk minimization
Neural Networks (IF 7.8), Pub Date: 2021-08-25, DOI: 10.1016/j.neunet.2021.08.022
Zhiyuan Zhang, Ruixuan Luo, Xuancheng Ren, Qi Su, Liangyou Li, Xu Sun

Previous studies demonstrate that deep neural networks (DNNs) are vulnerable to adversarial examples and that adversarial training can establish a defense against them. In addition, recent studies show that DNNs also exhibit vulnerability to parameter corruptions. The vulnerability of model parameters is of crucial value to the study of model robustness and generalization. In this work, we introduce the concept of parameter corruption and propose to leverage loss-change indicators to measure the flatness of the loss basin and the robustness of neural network parameters. On this basis, we analyze parameter corruptions and propose a multi-step adversarial corruption algorithm. To strengthen neural networks, we propose an adversarial parameter defense algorithm that minimizes the average risk over multiple adversarial parameter corruptions. Experimental results show that the proposed algorithm improves both the parameter robustness and the accuracy of neural networks.
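The two algorithms named in the abstract can be illustrated with a minimal sketch. The code below is not the paper's implementation: it uses a toy quadratic loss in place of a network's training loss, a finite-difference gradient, and illustrative hyperparameters (`step`, `n_steps`, `budget`, `n_corruptions`). It shows the general pattern: the corruption takes several gradient-ascent steps on the parameters within a norm budget, and the defense descends on the average gradient taken at several corrupted parameter points.

```python
import numpy as np

# Toy quadratic stands in for a network's training loss.
def loss(w):
    return float(np.sum(w ** 2))

def grad(w, eps=1e-5):
    # Central finite-difference gradient of the loss w.r.t. parameters.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

def multi_step_corruption(w, step=0.1, n_steps=3, budget=0.5):
    # Multi-step adversarial corruption: repeatedly perturb the parameters
    # in the gradient-ascent direction (increasing the loss), projecting the
    # accumulated perturbation back onto an L2 ball of radius `budget`.
    delta = np.zeros_like(w)
    for _ in range(n_steps):
        g = grad(w + delta)
        delta = delta + step * g / (np.linalg.norm(g) + 1e-12)
        norm = np.linalg.norm(delta)
        if norm > budget:
            delta = delta * (budget / norm)
    return delta

def defense_step(w, lr=0.05, n_corruptions=4):
    # One defense update: average the gradients evaluated at several
    # adversarially corrupted parameter points, then descend. Minimizing
    # this averaged risk pushes the parameters toward flatter loss basins.
    gs = [grad(w + multi_step_corruption(w)) for _ in range(n_corruptions)]
    return w - lr * np.mean(gs, axis=0)

w = np.array([1.0, -2.0])
delta = multi_step_corruption(w)
print(loss(w), loss(w + delta))  # the corruption raises the loss
```

A flatter loss basin means that such worst-case parameter perturbations raise the loss less, which is the intuition behind using loss change under corruption as a robustness indicator.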




Updated: 2021-09-06