Adversarial examples for CNN-based SAR image classification: An experience study
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (IF 5.5). Pub Date: 2021-01-01. DOI: 10.1109/jstars.2020.3038683
Haifeng Li , Haikuo Huang , Li Chen , Jian Peng , Haozhe Huang , Zhenqi Cui , Xiaoming Mei , Guohua Wu

Synthetic aperture radar (SAR) operates day and night in all weather conditions and plays an extremely important role in the military field. Breakthroughs in deep learning methods, represented by convolutional neural network (CNN) models, have greatly improved SAR image recognition accuracy. CNN-based classification models achieve high-precision classification but are vulnerable to adversarial examples (AEs). However, research on AEs has mostly been limited to natural images; remote sensing images (SAR, multispectral, etc.) have not been extensively studied. To explore the basic characteristics of AEs of SAR images (ASIs), we use two classic white-box attack methods to generate ASIs from two SAR image classification datasets and then evaluate the vulnerability of six commonly used CNNs. The results show that ASIs are quite effective at fooling CNNs trained on SAR images, as indicated by the high attack success rates obtained. Owing to their structural differences, different CNNs exhibit different vulnerabilities when facing ASIs. We find that ASIs generated by nontargeted attack algorithms exhibit attack selectivity, which is related to the feature space distribution of the original SAR images and the decision boundary of the classification model. We propose the sample-boundary-based AE selectivity distance, which successfully explains the attack selectivity of ASIs. We also analyze the effects of image parameters, such as image size and number of channels, on the attack success rate of ASIs through a parameter sensitivity analysis. The experimental results of this study provide data support and an effective reference for both attacks on and the defense capabilities of various CNNs with regard to AEs in SAR image classification models.
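The abstract does not name the two white-box attacks used; the fast gradient sign method (FGSM) is a classic example of such an attack, and the core idea can be sketched in a few lines. The sketch below is a minimal, hypothetical illustration using NumPy and a toy logistic-regression "model" standing in for a CNN: it perturbs the input in the direction of the sign of the loss gradient, which is exactly how white-box AEs exploit access to model gradients.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, eps):
    """FGSM on a toy logistic-regression model p = sigmoid(w @ x).

    The gradient of the binary cross-entropy loss with respect to the
    input x is (p - y) * w; FGSM steps eps in the sign of that gradient,
    so the perturbation is bounded by eps in the L-infinity norm.
    """
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy "pixel vector" correctly classified as class 1 (w @ x = 1.7 > 0).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.8, -0.3, 0.6])
x_adv = fgsm_attack(x, y=1.0, w=w, eps=0.9)
# The perturbed input crosses the decision boundary (w @ x_adv < 0),
# even though each component of x moved by at most eps.
```

In the paper's setting the model is a CNN and the gradient is obtained by backpropagation, but the attack structure — one signed gradient step bounded by eps — is the same.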

Updated: 2021-01-01