Linear MIMO Precoders With Finite Alphabet Inputs via Stochastic Optimization and Deep Neural Networks (DNNs)
IEEE Transactions on Signal Processing ( IF 5.4 ) Pub Date : 2021-07-14 , DOI: 10.1109/tsp.2021.3096466
Shusen Jing , Chengshan Xiao

In this paper, we investigate the design of linear precoders for vector Gaussian channels via stochastic optimization and deep neural networks (DNNs). We assume that the channel inputs are drawn from practical finite alphabets, and we search for precoders that maximize the mutual information between the channel inputs and outputs. Although the problem is generally non-convex, we prove that when the right singular matrix of the precoder is fixed, any local optimum of the problem is a global optimum. Based on this fact, an efficient projected stochastic gradient descent (PSGD) algorithm is designed to search for the optimal precoders. Moreover, to reduce the complexity of computing the a posteriori means involved in the gradient calculation, the K-best algorithm is adopted to approximate these means with negligible loss of accuracy. Furthermore, to avoid explicit calculation of the mutual information and its gradients, DNN-based autoencoders (AEs) are constructed for the precoding task, and an efficient training algorithm is proposed. We also prove that the AEs, with the ‘softmax’ activation function and ‘categorical cross entropy’ loss, maximize the mutual information under reasonable assumptions. Then, to extend the AE methods to large-scale systems, the ‘sigmoid’ activation function and ‘binary cross entropy’ loss are used so that the size of the AEs does not grow prohibitively large; we prove that this maximizes a lower bound on the mutual information under reasonable assumptions. Finally, to make the precoders practical for high-speed wireless scenarios, we propose an offline training paradigm that trains DNNs to infer the optimal precoder from channel state information, instead of training online for every different channel. Simulation results show that all the proposed methods work well in maximizing mutual information and improving bit error rate (BER) performance.
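The PSGD idea described above can be sketched in a few lines: take a stochastic gradient-ascent step on the objective, then project the precoder back onto the transmit-power constraint. The sketch below is illustrative only — the Frobenius-norm power constraint tr(PPᴴ) ≤ P_T, the function names, and the abstracted `grad_fn` (which stands in for the paper's stochastic estimate of the mutual-information gradient) are assumptions, not the paper's exact formulation.

```python
import numpy as np

def project_power(P, p_total):
    """Project precoder P onto the (assumed) power constraint
    tr(P P^H) <= p_total, i.e. scale P down if its squared
    Frobenius norm exceeds the budget."""
    norm_sq = np.sum(np.abs(P) ** 2)
    if norm_sq <= p_total:
        return P
    return P * np.sqrt(p_total / norm_sq)

def psgd_precoder(grad_fn, P0, p_total, lr=0.05, iters=200):
    """Projected stochastic gradient ascent: step along a
    (stochastic) gradient estimate of the objective, then
    project back onto the feasible set after every step."""
    P = project_power(P0, p_total)
    for _ in range(iters):
        P = project_power(P + lr * grad_fn(P), p_total)
    return P
```

As a sanity check with a toy concave surrogate (the gradient of −½‖P − T‖² pulls P toward a target T outside the feasible set), the iterates converge to the projection of T onto the power ball, illustrating that the projection step is what enforces feasibility.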

Updated: 2021-08-27