Enhancing Adversarial Robustness via Test-time Transformation Ensembling
arXiv - CS - Cryptography and Security. Pub Date: 2021-07-29, DOI: arxiv-2107.14110. Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Laura Rueda, Ali Thabet, Bernard Ghanem, Pablo Arbeláez
Deep learning models are prone to being fooled by imperceptible perturbations
known as adversarial attacks. In this work, we study how equipping models with
Test-time Transformation Ensembling (TTE) can work as a reliable defense
against such attacks. While transforming the input data, both at train and test
times, is known to enhance model performance, its effects on adversarial
robustness have not been studied. Here, we present a comprehensive empirical
study of the impact of TTE, in the form of widely-used image transforms, on
adversarial robustness. We show that TTE consistently improves model robustness
against a variety of powerful attacks without any need for re-training, and
that this improvement comes at virtually no trade-off with accuracy on clean
samples. Finally, we show that the benefits of TTE transfer even to the
certified robustness domain, in which TTE provides sizable and consistent
improvements.
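The core idea described above, ensembling a model's predictions over several transformed copies of the input at test time, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy model and the choice of transforms (identity and horizontal flip) are assumptions for demonstration only.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def tte_predict(model, x, transforms):
    # Test-time Transformation Ensembling: apply each transform,
    # run the model, and average the resulting softmax distributions.
    probs = [softmax(model(t(x))) for t in transforms]
    return np.mean(probs, axis=0)

# Toy stand-in "classifier": logits derived from mean intensity
# (illustrative only; a real setting would use a trained network).
def toy_model(img):
    m = img.mean()
    return np.array([m, 1.0 - m, 0.5 * m])

# Two transforms commonly used in test-time ensembling.
identity = lambda img: img
hflip = lambda img: img[:, ::-1]

img = np.arange(12, dtype=float).reshape(3, 4) / 11.0
p = tte_predict(toy_model, img, [identity, hflip])
print(p.shape)          # per-class probabilities for 3 classes
print(p.sum())          # averaged softmax outputs still sum to 1
```

Because the averaging happens purely at inference, the defense requires no re-training of the underlying model, which matches the paper's claim that TTE improves robustness "without any need for re-training."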
Updated: 2021-07-30