Deep Actor-Critic Learning for Distributed Power Control in Wireless Mobile Networks
arXiv - CS - Information Theory. Pub Date: 2020-09-14, DOI: arXiv:2009.06681
Yasar Sinan Nasir and Dongning Guo

Deep reinforcement learning offers a model-free alternative to supervised deep learning and classical optimization for solving the transmit power control problem in wireless networks. The multi-agent deep reinforcement learning approach treats each transmitter as an individual learning agent that determines its transmit power level by observing the local wireless environment. Following a common policy, these agents learn to collaboratively maximize a global objective, e.g., a sum-rate utility function. This multi-agent scheme scales easily and is practically applicable to large-scale cellular networks. In this work, we present a distributively executed continuous power control algorithm based on deep actor-critic learning, and more specifically, on an adaptation of the deep deterministic policy gradient (DDPG) algorithm. Furthermore, we integrate the proposed power control algorithm into a time-slotted system where devices are mobile and channel conditions change rapidly. We demonstrate the functionality of the proposed algorithm through simulation results.
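To make the actor-critic idea in the abstract concrete, the following is a minimal, hypothetical single-agent sketch of DDPG-style continuous power control. It is not the authors' implementation: the state features, reward shape, linear function approximators, and all hyperparameters are illustrative assumptions. Each agent observes a local state (e.g., channel gain, interference, last power), a deterministic actor maps the state to a continuous power in [0, P_MAX], and a critic's gradient with respect to the action drives the actor update, as in the deterministic policy gradient theorem.

```python
import numpy as np

# Minimal DDPG-style sketch for continuous power control (illustrative only;
# state features, reward, and linear approximators are assumptions, not the paper's).

rng = np.random.default_rng(0)

P_MAX = 1.0       # maximum transmit power (normalized)
STATE_DIM = 3     # e.g., local channel gain, measured interference, last power

# Linear-in-features actor: deterministic policy mu(s) = P_MAX * sigmoid(w_a . s)
w_actor = rng.normal(scale=0.1, size=STATE_DIM)
# Linear critic: Q(s, a) = w_c . [s, a]
w_critic = rng.normal(scale=0.1, size=STATE_DIM + 1)

def actor(s):
    return P_MAX / (1.0 + np.exp(-(w_actor @ s)))

def critic(s, a):
    return w_critic @ np.concatenate([s, [a]])

def reward(s, a):
    # Toy stand-in for a local utility: rate gain minus an interference penalty.
    return a * s[0] - 0.8 * a ** 2

LR_A, LR_C = 1e-2, 1e-2   # one-step toy problem, no bootstrapping

for step in range(2000):
    s = rng.uniform(0.5, 1.5, size=STATE_DIM)
    a = actor(s) + rng.normal(scale=0.05)       # exploration noise on the action
    a = float(np.clip(a, 0.0, P_MAX))
    r = reward(s, a)

    # Critic update: regress Q(s, a) toward the observed reward.
    td_err = r - critic(s, a)
    w_critic += LR_C * td_err * np.concatenate([s, [a]])

    # Actor update via the deterministic policy gradient:
    # grad_w J ≈ grad_a Q(s, a)|_{a = mu(s)} * grad_w mu(s).
    a_det = actor(s)
    dq_da = w_critic[-1]                              # linear critic: constant in a
    dmu_dw = a_det * (1.0 - a_det / P_MAX) * s        # sigmoid chain rule
    w_actor += LR_A * dq_da * dmu_dw
```

In the paper's distributed setting, each transmitter would run such an agent on its own local observations; here a single agent on a toy reward illustrates the update structure only, and the actor output stays within the feasible power range by construction.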

Updated: 2020-09-16