Adversarial attacks on deep-learning-based SAR image target recognition
Journal of Network and Computer Applications (IF 8.7), Pub Date: 2020-03-27, DOI: 10.1016/j.jnca.2020.102632
Teng Huang, Qixiang Zhang, Jiabao Liu, Ruitao Hou, Xianmin Wang, Ya Li

Synthetic aperture radar (SAR) image target recognition has long been a research hotspot in the field of radar image interpretation. Compared with traditional target recognition algorithms, SAR target recognition algorithms based on deep learning offer end-to-end feature learning, which can effectively improve the target recognition rate, making them an important method for radar target recognition. However, recent research shows that optical image recognition methods based on deep learning are vulnerable to adversarial examples. In SAR image target recognition, whether adversarial examples exist for deep learning algorithms is still an open question. This paper uses three mainstream algorithms to generate adversarial examples that attack three classical deep learning algorithms for SAR image target recognition. The experiments mount white-box and black-box attacks on publicly available real SAR images. The results show that SAR target recognition algorithms based on deep learning are potentially vulnerable to adversarial examples.
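The abstract does not name the three attack algorithms here, so the sketch below is only an illustration of the general idea: a one-step white-box attack (FGSM, a commonly used adversarial-example generator) applied to a hypothetical PyTorch SAR target classifier. The model, input scaling, and epsilon value are assumptions for the example, not the paper's actual setup.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, image, label, epsilon=0.03):
    """One-step FGSM: perturb the input along the sign of the loss gradient.

    Assumptions for this sketch: `model` is any differentiable SAR target
    classifier returning class logits, `image` is a (1, C, H, W) tensor
    scaled to [0, 1], and `label` is the ground-truth class index as a
    (1,)-shaped long tensor.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss,
    # then clip back to the valid intensity range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Comparing the classifier's prediction on the returned tensor with its prediction on the clean image shows whether the perturbation flips the recognized target class, which mirrors the white-box setting; a black-box attack would instead craft the perturbation on a substitute model and rely on its transferability.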



