Shift Invariance Can Reduce Adversarial Robustness
arXiv - CS - Machine Learning Pub Date : 2021-03-03 , DOI: arxiv-2103.02695
Songwei Ge, Vasu Singla, Ronen Basri, David Jacobs

Shift invariance is a critical property of CNNs that improves performance on classification. However, we show that invariance to circular shifts can also lead to greater sensitivity to adversarial attacks. We first characterize the margin between classes when a shift-invariant linear classifier is used. We show that the margin can only depend on the DC component of the signals. Then, using results about infinitely wide networks, we show that in some simple cases, fully connected and shift-invariant neural networks produce linear decision boundaries. Using this, we prove that shift invariance in neural networks produces adversarial examples for the simple case of two classes, each consisting of a single image with a black or white dot on a gray background. This is more than a curiosity; we show empirically that with real datasets and realistic architectures, shift invariance reduces adversarial robustness. Finally, we describe initial experiments using synthetic data to probe the source of this connection.
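The two central claims — that a shift-invariant linear classifier can depend only on the DC component of its input, and that this leaves a vanishing margin in the two-class dot example — can be illustrated with a minimal numpy sketch. This is not the paper's code; the one-dimensional "image", the classifier, and the perturbation size are illustrative assumptions.

```python
import numpy as np

# Toy version of the paper's two-class example: each class is a single
# n-pixel "image" with one white (1.0) or black (0.0) dot on a gray (0.5)
# background.
n = 64
white_dot = np.full(n, 0.5); white_dot[0] = 1.0
black_dot = np.full(n, 0.5); black_dot[0] = 0.0

# A linear classifier w @ x is invariant to all circular shifts only if
# w is constant across positions, so its score can depend only on the
# DC component (the mean) of the input.
def score(x):
    return x.mean() - 0.5   # > 0 -> white-dot class, < 0 -> black-dot class

assert score(white_dot) > 0 and score(black_dot) < 0

# Shift invariance: circularly rolling the image never changes the score.
assert all(np.isclose(score(np.roll(white_dot, k)), score(white_dot))
           for k in range(n))

# Adversarial fragility: the two class means differ by only 1/n, so a
# uniform L-infinity perturbation slightly larger than 0.5/n flips the
# prediction -- a vanishingly small perturbation as n grows.
eps = 0.5 / n + 1e-6
assert score(white_dot - eps) < 0   # now classified as the black-dot class
```

The perturbation here shrinks like 1/n, which mirrors the paper's point: the more the classifier is forced to rely on the DC component alone, the smaller the adversarial perturbation needed to cross the decision boundary.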

Updated: 2021-03-05