Machine Learning for MU-MIMO Receive Processing in OFDM Systems
IEEE Journal on Selected Areas in Communications (IF 16.4) Pub Date: 2021-06-18, DOI: 10.1109/jsac.2021.3087224
Mathieu Goutay, Faycal Ait Aoudia, Jakob Hoydis, Jean-Marie Gorce

Machine learning (ML) is starting to be widely used to enhance the performance of multi-user multiple-input multiple-output (MU-MIMO) receivers. However, it is still unclear whether such methods are truly competitive with conventional methods in realistic scenarios and under practical constraints. In addition to enabling accurate signal reconstruction on realistic channel models, MU-MIMO receive algorithms must allow for easy adaptation to a varying number of users without the need for retraining. In contrast to existing work, we propose an ML-enhanced MU-MIMO receiver that builds on top of a conventional linear minimum mean squared error (LMMSE) architecture. It preserves the interpretability and scalability of the LMMSE receiver, while improving its accuracy in two ways. First, convolutional neural networks (CNNs) are used to compute an approximation of the second-order statistics of the channel estimation error, which are required for accurate equalization. Second, a CNN-based demapper jointly processes a large number of orthogonal frequency-division multiplexing (OFDM) symbols and subcarriers, which allows it to compute better log-likelihood ratios (LLRs) by compensating for channel aging. The resulting architecture can be used in the up- and downlink and is trained in an end-to-end manner, removing the need for hard-to-get perfect channel state information (CSI) during the training phase. Simulation results demonstrate consistent performance improvements over the baseline, which are especially pronounced in high-mobility scenarios.
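The abstract describes two ML additions on top of a classical LMMSE receive chain: a CNN that approximates the second-order statistics of the channel estimation error, and a CNN demapper that produces the LLRs. The sketch below is a minimal NumPy illustration of where those two quantities enter a conventional LMMSE receiver on a single subcarrier; the function names, array shapes, and the Gray-mapped QPSK max-log demapper are illustrative assumptions, not the paper's CNN-based implementation.

```python
import numpy as np

def lmmse_equalize(y, h_hat, err_cov, n0):
    """LMMSE equalization with imperfect CSI on one subcarrier.

    y       : (M,)   received vector at the M receive antennas
    h_hat   : (M, K) estimated channel matrix for the K users
    err_cov : (M, M) second-order statistics of the channel estimation
              error -- the quantity the paper approximates with a CNN
    n0      : noise power (unit-power transmit symbols assumed)
    """
    m = y.shape[0]
    # Effective noise covariance: thermal noise plus the residual
    # uncertainty of the channel estimate.
    c = n0 * np.eye(m) + err_cov
    # LMMSE filter W = (H_hat H_hat^H + C)^{-1} H_hat, applied as W^H y.
    w = np.linalg.solve(h_hat @ h_hat.conj().T + c, h_hat)
    return w.conj().T @ y  # one soft symbol estimate per user

def qpsk_max_log_llrs(x_hat, noise_var):
    """Max-log LLRs for Gray-mapped QPSK under a Gaussian residual;
    this is the role the paper's CNN-based demapper plays, except the
    CNN also exploits correlations across OFDM symbols and subcarriers
    to compensate for channel aging."""
    scale = 2.0 * np.sqrt(2.0) / noise_var
    # Bit 0 maps to the real part, bit 1 to the imaginary part.
    return np.stack([scale * x_hat.real, scale * x_hat.imag], axis=-1)
```

In the proposed receiver, `err_cov` would be produced by a CNN rather than supplied analytically, and `noise_var` would be the per-user post-equalization error variance; both names are placeholders for illustration.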

Updated: 2021-07-16