Adversarial robustness and attacks for multi-view deep models
Engineering Applications of Artificial Intelligence (IF 8), Pub Date: 2020-11-12, DOI: 10.1016/j.engappai.2020.104085
Xuli Sun , Shiliang Sun

Recent work has highlighted the vulnerability of many deep machine learning models to adversarial examples, drawing increasing attention to adversarial attacks, which can be used to evaluate the security and robustness of models before they are deployed. However, to the best of our knowledge, there is no specific research on adversarial robustness and attacks for multi-view deep models. Based on the fact that adversarial examples generalize well across different models, this paper takes the adversarial attack on the multi-view convolutional neural network as an example to investigate the adversarial robustness of multi-view deep models, and further proposes effective multi-view adversarial attacks. Two strategies are proposed, the two-stage attack (TSA) and the end-to-end attack (ETEA), for attacking well-trained multi-view models. Under the mild assumption that the single-view model on which the target multi-view model is based is known, we first propose the TSA strategy. The main idea of TSA is to attack the multi-view model with adversarial examples generated by attacking the associated single-view model, by which state-of-the-art single-view attack methods are directly extended to the multi-view scenario. We then propose the ETEA strategy for the case where the multi-view model itself is publicly available. ETEA mounts direct attacks on the target multi-view model, for which we develop three effective multi-view attack methods. Extensive experimental results show that multi-view models are more robust than single-view models, and demonstrate the effectiveness of the proposed multi-view adversarial attacks.
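The two strategies can be illustrated on a toy example. The sketch below is not the paper's method: it uses hypothetical linear per-view scorers and an FGSM-style sign perturbation to contrast a TSA-style transfer attack (perturb one view by attacking its single-view model, then feed the result to the multi-view model) with an ETEA-style direct attack (perturb all views using the multi-view model's own gradient). All weights, data, and the epsilon value are made up for illustration.

```python
import numpy as np

def fgsm_linear(x, y, w, eps):
    """FGSM-style perturbation for a linear scorer f(x) = w @ x with label y in {+1, -1}.
    The sign of the logistic-loss gradient w.r.t. x is -y * sign(w)."""
    return x + eps * (-y) * np.sign(w)

# Hypothetical two-view model: average of per-view linear scores.
w1 = np.array([0.5, -0.3, 0.8])   # single-view model for view 1 (assumed known, as in TSA)
w2 = np.array([0.2, 0.4, -0.1])   # single-view model for view 2

def multi_view_score(x1, x2):
    return 0.5 * (w1 @ x1 + w2 @ x2)

# A clean two-view example with label y = +1.
x1 = np.array([1.0, -1.0, 1.0])
x2 = np.array([1.0, 1.0, -1.0])
y, eps = +1, 1.5

clean = multi_view_score(x1, x2)              # positive: correctly classified

# TSA-style: attack only the single-view model for view 1, transfer to the multi-view model.
x1_adv = fgsm_linear(x1, y, w1, eps)
adv_tsa = multi_view_score(x1_adv, x2)

# ETEA-style: use the multi-view model's gradient directly; for the averaged linear
# score this perturbs every view with its own weight sign.
adv_etea = multi_view_score(fgsm_linear(x1, y, w1, eps),
                            fgsm_linear(x2, y, w2, eps))

print(clean, adv_tsa, adv_etea)
```

In this toy setting the transferred perturbation already flips the multi-view prediction, and the direct (ETEA-style) attack pushes the score lower still, mirroring the abstract's point that direct access to the multi-view model enables stronger attacks.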




Updated: 2020-11-13