Buffer transference strategy for power control in B5G-ultra-dense wireless cellular networks
Wireless Networks (IF 2.1) Pub Date: 2022-08-06, DOI: 10.1007/s11276-022-03087-6
Alexis Anzaldo , Ángel G. Andrade

Beyond-fifth-generation (B5G) systems will face strict and heterogeneous service requirements from emerging applications. One solution to meet these demands is the dense deployment of small base stations to provide greater capacity and coverage. However, this densification leads to high power consumption and greenhouse gas emissions. Resource control policies therefore need to adapt to network fluctuations in order to balance power consumption while meeting these demanding requirements. One approach is to implement intelligent algorithms for resource management, such as deep reinforcement learning models, which can adapt to network changes and unknown conditions. However, while these models adjust to new requirements, performance degrades due to state-space exploration, so the learning process must be accelerated to minimize this degradation in dynamic environments. One way to do so is to transfer the knowledge of other models to improve the learning process. This paper implements a training strategy for power control in an ultra-dense network. The method reuses the previous experiences of trained models to train new models in more complex environments, such as environments with more agents. We evaluate our proposal via simulation. The numerical results demonstrate that adding these experiences to the replay buffer accelerates power-allocation decisions and increases the network’s performance.
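The buffer transference idea can be illustrated with a minimal sketch. The Python snippet below is an illustrative assumption, not the authors' implementation: it seeds a new agent's experience-replay buffer with transitions collected by a previously trained model, so training in the denser environment starts from informative samples rather than pure random exploration. All names here (ReplayBuffer, transfer_buffer, the transition fields) are hypothetical.

import random
from collections import deque

class ReplayBuffer:
    # FIFO experience-replay buffer with uniform random sampling.
    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)

    def push(self, transition):
        # transition: (state, action, reward, next_state, done)
        self.memory.append(transition)

    def sample(self, batch_size):
        return random.sample(list(self.memory), batch_size)

    def __len__(self):
        return len(self.memory)

def transfer_buffer(source, target, fraction=0.5):
    # Copy a random fraction of the source agent's experiences into the
    # target agent's buffer before training, so the new model learns
    # from informative transitions instead of exploring blindly.
    n = int(fraction * len(source))
    for transition in random.sample(list(source.memory), n):
        target.push(transition)

# Hypothetical usage: an agent trained in a sparser network donates
# experience to an agent deployed in a denser (more agents) scenario.
source_buffer = ReplayBuffer(capacity=10000)
for _ in range(5000):  # stand-ins for real power-control transitions
    source_buffer.push((random.random(), random.randrange(4),
                        random.random(), random.random(), False))

target_buffer = ReplayBuffer(capacity=20000)
transfer_buffer(source_buffer, target_buffer, fraction=0.5)
print(len(target_buffer))  # 2500 seeded transitions before any exploration

Under this sketch's assumptions, the seeded transitions shorten the costly exploration phase that the abstract identifies as the source of performance degradation when a model faces a new, more complex environment.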



