Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2021-01-16 , DOI: arxiv-2101.06507
Jia Liu, Yaochu Jin

Many existing deep learning models are vulnerable to adversarial examples that are imperceptible to humans. To address this issue, various methods have been proposed to design network architectures that are robust to one particular type of adversarial attack. In practice, however, it is impossible to predict beforehand which type of attack a machine learning model will suffer. To address this challenge, we propose to search for deep neural architectures that are robust to five well-known types of adversarial attacks using a multi-objective evolutionary algorithm. To reduce the computational cost, the robustness of each newly generated neural architecture at each generation is estimated by the normalized error rate under a single randomly chosen attack. All non-dominated network architectures obtained by the proposed method are then fully trained against randomly chosen adversarial attacks and tested on two widely used datasets. Our experimental results demonstrate that the optimized neural architectures found by the proposed approach outperform state-of-the-art networks widely used in the literature in terms of classification accuracy under different adversarial attacks.
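The search loop described above can be sketched as a bi-objective minimization: clean error rate plus a cheap robustness estimate (the normalized error rate under one randomly chosen attack per candidate per generation), with the non-dominated candidates kept for full training. The following is a minimal, illustrative sketch only; the attack names, the `MAX_ERROR` normalizers, the stand-in error-rate functions, and the dict-based architecture encoding are all assumptions for illustration, not the authors' implementation.

```python
import random

# Hypothetical names for the five attack types considered in the paper.
ATTACKS = ["fgsm", "pgd", "cw", "deepfool", "spsa"]
# Per-attack worst-case error used for normalization (illustrative values).
MAX_ERROR = {a: 1.0 for a in ATTACKS}

def clean_error_rate(arch):
    """Stand-in for the validation error of a trained architecture."""
    return 1.0 / (1.0 + arch["depth"] * arch["width"])

def error_rate_under(arch, attack):
    """Stand-in for the error rate of the architecture under one attack."""
    return min(1.0, clean_error_rate(arch) + 0.5 / arch["width"])

def evaluate(arch, rng):
    """Two objectives to minimize: clean error, and the normalized error
    rate under a single randomly chosen attack (the cheap robustness
    estimate computed once per candidate per generation)."""
    attack = rng.choice(ATTACKS)
    return (clean_error_rate(arch),
            error_rate_under(arch, attack) / MAX_ERROR[attack])

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(pop, objs):
    """Keep candidates whose objective vector no other candidate dominates."""
    return [p for p, f in zip(pop, objs)
            if not any(dominates(g, f) for g in objs)]

rng = random.Random(0)
population = [{"depth": rng.randint(2, 20), "width": rng.randint(8, 128)}
              for _ in range(10)]
objectives = [evaluate(arch, rng) for arch in population]
front = nondominated(population, objectives)  # kept for full adversarial training
```

In a real run, the two stand-in error functions would be replaced by actual training and attack evaluation, and the non-dominated selection would sit inside an evolutionary loop (e.g. NSGA-II-style selection, crossover, and mutation over the architecture encoding).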

Updated: 2021-01-19