The technology of adversarial attacks in signal recognition
Physical Communication (IF 2.0), Pub Date: 2020-09-08, DOI: 10.1016/j.phycom.2020.101199
Haojun Zhao, Qiao Tian, Lei Pan, Yun Lin

The wide application of contour stellar images has allowed researchers to transform signal classification problems into image classification problems, enabling signal recognition based on deep learning. However, deep neural networks (DNNs) are highly vulnerable to adversarial examples, so evaluating adversarial attack performance only on the signal-sequence recognition model cannot meet current security requirements. From an attacker's perspective, this study converts individual signals into contour stellar images and then generates adversarial examples to evaluate the impact of adversarial attacks. The results show that whether the input sample is a signal sequence or a converted image, the DNN is vulnerable to the threat of adversarial examples. Among the selected methods, across different perturbation budgets and signal-to-noise ratios (SNRs), the momentum iterative method performs best; under a perturbation of 0.01, its attack performance is more than 10% higher than that of the fast gradient sign method. In addition, to measure the invisibility of the adversarial examples, the contour stellar images before and after the attack were compared so as to balance attack success rate against attack concealment.
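The two attacks compared in the abstract can be sketched briefly. The following is a minimal illustration, not the authors' code: it uses a toy logistic model with an analytic input gradient (the weights, function names, and step schedule are assumptions for demonstration), showing how the fast gradient sign method (FGSM) takes one signed-gradient step of size ε, while the momentum iterative method (MIM) accumulates a decayed gradient over several smaller steps while staying inside the same ε-ball.

```python
import numpy as np

# Toy logistic "classifier" standing in for the DNN: its input
# gradient is analytic, so no deep-learning framework is needed.
w = np.array([0.8, -0.5, 0.3])

def loss_grad(x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x, label y in {0, 1}.
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

def fgsm(x, y, eps):
    # Fast gradient sign method: a single step of size eps
    # along the sign of the loss gradient.
    return x + eps * np.sign(loss_grad(x, y))

def mim(x, y, eps, steps=10, mu=1.0):
    # Momentum iterative method: accumulate an L1-normalized,
    # mu-decayed running gradient; step by eps/steps each iteration
    # and clip back into the eps L-infinity ball around x.
    x_adv, g = x.copy(), np.zeros_like(x)
    alpha = eps / steps
    for _ in range(steps):
        grad = loss_grad(x_adv, y)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv

x = np.array([0.2, 0.4, -0.1])          # stand-in for one image pixel vector
x_fgsm = fgsm(x, 1, eps=0.01)           # perturbation budget 0.01, as in the paper
x_mim = mim(x, 1, eps=0.01)
```

Both perturbations are imperceptibly small by construction (bounded by ε = 0.01 per component), which is the invisibility constraint the abstract weighs against attack success rate; the paper's reported gap between the two methods comes from evaluating them on the actual contour stellar image classifier, not from a toy model like this.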




Updated: 2020-09-08