Multitask Learning Strengthens Adversarial Robustness
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2020-07-14, DOI: arxiv-2007.07236
Chengzhi Mao, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, and Carl Vondrick

Although deep networks achieve strong accuracy on a range of computer vision benchmarks, they remain vulnerable to adversarial attacks, where imperceptible input perturbations fool the network. We present both theoretical and empirical analyses that connect the adversarial robustness of a model to the number of tasks that it is trained on. Experiments on two datasets show that attack difficulty increases as the number of target tasks increases. Moreover, our results suggest that when models are trained on multiple tasks at once, they become more robust to adversarial attacks on individual tasks. While adversarial defense remains an open challenge, our results suggest that deep networks are vulnerable partly because they are trained on too few tasks.
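To make the setting concrete, the sketch below (a hypothetical illustration, not the authors' code) builds a small multitask network with a shared backbone and two task heads, then crafts a one-step FGSM perturbation against a single task's loss — the kind of per-task attack whose difficulty the abstract says grows with the number of tasks. All layer sizes and names here are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class MultiTaskNet(nn.Module):
    """Shared backbone feeding two task-specific heads (illustrative sizes)."""
    def __init__(self, in_dim=8, hidden=16, n_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, n_classes)  # e.g. a classification task
        self.head_b = nn.Linear(hidden, 1)          # e.g. a regression task

    def forward(self, x):
        h = self.backbone(x)
        return self.head_a(h), self.head_b(h)

def fgsm_attack(model, x, y_a, eps=0.1):
    """One-step FGSM against task A's loss only: x + eps * sign(grad_x loss)."""
    x_adv = x.clone().requires_grad_(True)
    logits_a, _ = model(x_adv)
    loss = nn.functional.cross_entropy(logits_a, y_a)
    loss.backward()
    return (x + eps * x_adv.grad.sign()).detach()

model = MultiTaskNet()
x = torch.randn(4, 8)
y_a = torch.randint(0, 3, (4,))
x_adv = fgsm_attack(model, x, y_a)
# The perturbation stays inside the L-infinity ball of radius eps.
print(float((x_adv - x).abs().max()))
```

Because the backbone is shared, a perturbation that raises task A's loss must also contend with the gradients shaping the representation for task B — one intuition for why attacking an individual task gets harder as tasks are added.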

Updated: 2020-09-14