Project Gradient Descent Adversarial Attack against Multisource Remote Sensing Image Scene Classification
Security and Communication Networks Pub Date: 2021-06-12, DOI: 10.1155/2021/6663028
Yan Jiang, Guisheng Yin, Ye Yuan, Qingan Da

Deep learning technology (with deeper and better-optimized network structures) and remote sensing imaging (with increasingly multisource and multicategory data) have developed rapidly. Although deep convolutional neural networks (CNNs) have achieved state-of-the-art performance on remote sensing image (RSI) scene classification, the existence of adversarial attacks poses a potential security threat to CNN-based RSI scene classification. Adversarial samples can be generated by adding a small perturbation to the original images, and feeding these samples to a CNN-based classifier causes it to misclassify them with high confidence. To achieve a higher attack success rate against CNN-based scene classification, we introduce the projected gradient descent (PGD) method to generate adversarial remote sensing images. We then select several mainstream CNN-based classifiers as the attacked models to demonstrate the effectiveness of our method. The experimental results show that the proposed method can dramatically reduce classification accuracy under both untargeted and targeted attacks. Furthermore, we evaluate the quality of the generated adversarial images through visual and quantitative comparisons. The results show that our method can generate imperceptible adversarial samples and has a stronger attack ability for RSI scene classification.
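The abstract gives no implementation details, but the attack it describes follows the standard projected gradient descent formulation. Below is a minimal PyTorch sketch of an L∞ PGD attack supporting both untargeted and targeted modes; the function name `pgd_attack` and the values of the perturbation budget `eps`, step size `alpha`, and iteration count are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10, targeted=False):
    """L_inf projected gradient descent attack (sketch).

    images: batch of RSI scenes scaled to [0, 1], shape (N, C, H, W)
    labels: ground-truth classes (untargeted) or desired target classes (targeted)
    """
    images = images.clone().detach()
    # Random start inside the eps-ball, as in the standard PGD formulation.
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = torch.clamp(adv, 0, 1).detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            # Ascend the loss for untargeted attacks; descend it toward the target class otherwise.
            step = alpha * grad.sign()
            adv = adv - step if targeted else adv + step
            # Project back into the eps-ball around the clean images and the valid pixel range.
            adv = torch.clamp(adv, images - eps, images + eps).clamp(0, 1)
        adv = adv.detach()
    return adv
```

Evaluating such an attack amounts to feeding the returned adversarial batch to each CNN classifier and comparing its accuracy (or, for targeted attacks, the target-class hit rate) with that on the clean images; the perceptual quality of the adversarial images can be compared quantitatively with standard metrics such as PSNR or SSIM.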

Updated: 2021-06-13