Model-Driven DNN Decoder for Turbo Codes: Design, Simulation and Experimental Results
IEEE Transactions on Communications (IF 8.3), Pub Date: 2020-10-01, DOI: 10.1109/tcomm.2020.3010964
Yunfeng He, Jing Zhang, Shi Jin, Chao-Kai Wen, Geoffrey Ye Li

This paper presents a novel model-driven deep learning (DL) architecture, called TurboNet, for turbo decoding that integrates DL into the traditional max-log maximum a posteriori (max-log-MAP) algorithm. TurboNet inherits the advantages of both the max-log-MAP algorithm and DL tools, and thus offers excellent error-correction capability at a low training cost. To design TurboNet, the original iterative structure is unfolded into deep neural network (DNN) decoding units, where trainable weights are introduced into the max-log-MAP algorithm and optimized through supervised learning. To train TurboNet efficiently, a loss function is carefully designed to avoid the vanishing-gradient problem. To further reduce the computational complexity and training cost of TurboNet, it can be pruned into TurboNet+. Compared with existing black-box DL approaches, TurboNet+ has a considerable advantage in computational complexity and significantly reduces the decoding overhead. Furthermore, we present a simple training strategy to address the overfitting issue, which enables efficient training of the proposed TurboNet+. Simulation results demonstrate TurboNet+'s superiority in error-correction ability, signal-to-noise ratio generalization, and computational overhead. In addition, an experimental system for over-the-air (OTA) testing is built on a 5G rapid prototyping platform, and the results demonstrate TurboNet's strong learning ability and robustness across various scenarios.
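
As a concrete illustration of the deep-unfolding idea described in the abstract, the sketch below shows how an iterative turbo decoder can be rewritten as a stack of DNN decoding units whose exchanged extrinsic log-likelihood ratios (LLRs) are scaled by trainable weights. This is a minimal PyTorch-style sketch, not the paper's implementation: the soft-in/soft-out (SISO) max-log-MAP routine and the (de)interleaver are assumed to be supplied as callables, and the number and placement of the trainable weights are illustrative.

```python
import torch
import torch.nn as nn

class UnfoldedTurboIteration(nn.Module):
    """One unfolded turbo iteration (two SISO half-iterations).

    The trainable scalars w1 and w2 weight the extrinsic LLRs exchanged
    between the two constituent max-log-MAP decoders; the classical
    max-log-MAP decoder corresponds to both weights fixed at 1.
    """
    def __init__(self, siso):
        super().__init__()
        self.siso = siso  # callable: (systematic, parity, a priori) -> extrinsic LLRs
        self.w1 = nn.Parameter(torch.tensor(1.0))
        self.w2 = nn.Parameter(torch.tensor(1.0))

    def forward(self, llr_sys, llr_par1, llr_par2, apriori, interleave, deinterleave):
        # First constituent decoder: systematic + first parity stream + a priori LLRs.
        ext1 = self.siso(llr_sys, llr_par1, apriori)
        # Second constituent decoder operates on the interleaved sequence.
        ext2 = self.siso(interleave(llr_sys), llr_par2, interleave(self.w1 * ext1))
        # Deinterleaved, weighted extrinsic LLRs become the next a priori input.
        return deinterleave(self.w2 * ext2)

class TurboNetSketch(nn.Module):
    """Fixed number of unfolded iterations trained end-to-end (deep unfolding)."""
    def __init__(self, siso, num_iterations=6):
        super().__init__()
        self.units = nn.ModuleList(
            UnfoldedTurboIteration(siso) for _ in range(num_iterations)
        )

    def forward(self, llr_sys, llr_par1, llr_par2, interleave, deinterleave):
        apriori = torch.zeros_like(llr_sys)
        for unit in self.units:
            apriori = unit(llr_sys, llr_par1, llr_par2, apriori, interleave, deinterleave)
        # Simplified final soft decision: channel LLRs plus the last a priori term.
        return llr_sys + apriori
```

Because the weights are ordinary learnable parameters, the whole stack can be trained end-to-end with supervised learning on known transmitted bits, which is the sense in which the unfolded decoder retains the structure, and hence the low training cost, of the max-log-MAP algorithm.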

Updated: 2020-10-01