Abstract
Beyond fifth generation (B5G) systems will have to meet the strict and heterogeneous service requirements of emerging applications. One solution is the dense deployment of small base stations to provide more capacity and coverage. However, densification leads to high power consumption and greenhouse gas emissions, so resource control policies must adapt to network fluctuations to balance power consumption against these demanding requirements. One approach is to use intelligent algorithms for resource management, such as deep reinforcement learning models, which can adapt to network changes and unknown conditions. However, while these models adjust to new requirements, performance degrades during state-space exploration. Accelerating the learning process is therefore needed to minimize this performance degradation in dynamic environments. One way to do so is to transfer knowledge from previously trained models. This paper implements a training strategy for power control in an ultra-dense network: the past experiences of trained models are reused to train new models in more complex environments, such as environments with more agents. We evaluate our proposal via simulation. The numerical results demonstrate that adding these experiences to the replay buffer accelerates power-allocation decisions and increases the network’s performance.
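The core idea of the abstract, reusing a trained agent's replay buffer to warm-start a new agent in a denser network, can be sketched as below. This is an illustrative sketch only, not the paper's implementation; the names `ReplayBuffer` and `transfer_experiences` and the transfer fraction are assumptions.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-size experience replay buffer for a DRL power-control agent."""

    def __init__(self, capacity):
        # deque with maxlen discards the oldest transition when full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)


def transfer_experiences(source, target, fraction=0.5):
    """Seed `target` with a random fraction of transitions from `source`.

    The new agent trains on these pre-collected experiences instead of
    starting from a purely random exploration phase, which is the
    acceleration mechanism the paper's strategy relies on.
    """
    n = int(len(source) * fraction)
    for transition in random.sample(list(source.buffer), n):
        target.push(*transition)
    return n


# Toy usage: states/actions are scalars standing in for channel
# states and discrete power levels.
old_buffer = ReplayBuffer(capacity=1000)
for i in range(100):
    old_buffer.push(i, i % 4, float(i), i + 1)

new_buffer = ReplayBuffer(capacity=1000)
moved = transfer_experiences(old_buffer, new_buffer, fraction=0.5)
print(moved, len(new_buffer))  # 50 50
```

In a multi-agent setting, the same seeding step would simply be repeated for each newly added agent before its own interaction with the environment begins.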
Data availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Code availability
Not applicable.
Funding
This research was funded in part by the National Council of Science and Technology (CONACyT, México) through the Fondo Sectorial de Investigación para la Educación under Grant Number 288670-Y.
Author information
Contributions
All authors participated in the project conceptualization and formal analysis. AA: methodology, investigation, software, and writing – original draft. ÁGA: supervision, conceptualization, writing – review and editing, and funding acquisition. All authors approved the final manuscript.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare relevant to this article’s content.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Anzaldo, A., Andrade, Á.G. Buffer transference strategy for power control in B5G-ultra-dense wireless cellular networks. Wireless Netw 28, 3613–3620 (2022). https://doi.org/10.1007/s11276-022-03087-6