Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training
IEEE Transactions on Cognitive Communications and Networking (IF 8.6), Pub Date: 2022-01-28, DOI: 10.1109/tccn.2022.3147203
B. R. Manoj, Meysam Sadeghi, Erik G. Larsson

The successful emergence of deep learning (DL) in wireless system applications has raised concerns about new security-related challenges. One such security challenge is adversarial attacks. Although there has been much work demonstrating the susceptibility of DL-based classification tasks to adversarial attacks, regression-based problems in the context of wireless systems have not so far been studied from an attack perspective. The aim of this paper is twofold: (i) we consider a regression problem in a wireless setting and show that adversarial attacks can break the DL-based approach, and (ii) we analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly. Specifically, the wireless application considered in this paper is DL-based power allocation in the downlink of a multicell massive multiple-input multiple-output system, where the goal of the attack is to cause the DL model to yield an infeasible solution. We extend gradient-based adversarial attacks, namely the fast gradient sign method (FGSM), momentum iterative FGSM, and the projected gradient descent method, to analyze the susceptibility of the considered wireless application with and without adversarial training. We analyze the performance of the deep neural network (DNN) models against these attacks, where the adversarial perturbations are crafted under both white-box and black-box attack settings.
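
For readers unfamiliar with how a gradient-based attack applies to a regression DNN, the following is a minimal FGSM-style sketch in PyTorch. The model, input features, and the sum-power adversarial objective are illustrative assumptions, not the authors' implementation; the paper's actual attack targets infeasibility of the per-cell power budget in the massive MIMO downlink.

    import torch
    import torch.nn as nn

    def fgsm_attack(model: nn.Module, x: torch.Tensor, epsilon: float) -> torch.Tensor:
        # One-step FGSM: perturb the input within an L-infinity ball of radius
        # epsilon so that the regression output moves toward an infeasible
        # (budget-violating) power allocation.
        x_adv = x.clone().detach().requires_grad_(True)
        p_hat = model(x_adv)              # predicted per-user transmit powers
        # Hypothetical adversarial objective: push up the total allocated power
        # so that the per-cell power constraint is violated.
        loss = p_hat.sum()
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.detach()

The momentum iterative FGSM and projected gradient descent variants studied in the paper iterate a step of this kind, adding momentum accumulation or a projection back onto the epsilon-ball, and adversarial training (the defense analyzed here) mixes such perturbed samples into the training batches.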
