
Buffer transference strategy for power control in B5G-ultra-dense wireless cellular networks

  • Original Paper
  • Published in: Wireless Networks

Abstract

Beyond fifth generation (B5G) systems will have to meet the strict and heterogeneous service requirements of emerging applications. One solution is the dense deployment of small base stations, which provides additional capacity and coverage but leads to high power consumption and greenhouse gas emissions. Resource control policies therefore need to adapt to network fluctuations to balance power consumption against these demanding requirements. One approach is to use intelligent resource-management algorithms, such as deep reinforcement learning models, which can adapt to network changes and unknown conditions. However, while these models adjust to new requirements, performance degrades due to state-space exploration, so accelerating the learning process is necessary to minimize this degradation in dynamic environments. One way to do so is to transfer knowledge from previously trained models. This paper implements a training strategy for power control in an ultra-dense network: the experiences of previously trained models are reused to train new models in more complex environments, such as environments with more agents. We evaluate our proposal via simulation. The numerical results show that adding these experiences to the replay buffer accelerates power-allocation decisions and increases the network's performance.
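The core idea of the buffer transference strategy — warm-starting a new agent's replay buffer with experiences from an already-trained model — can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the buffer class, the `transfer_experiences` helper, and the choice to copy the most recent fraction of experiences are all illustrative assumptions.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-size experience replay buffer for a DRL agent."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)


def transfer_experiences(source, target, fraction=0.5):
    """Seed `target` with a fraction of the experiences stored in `source`.

    Copies the most recent experiences, on the (illustrative) assumption
    that later transitions come from a better-trained policy.
    """
    n = int(len(source) * fraction)
    for exp in list(source.buffer)[-n:]:
        target.buffer.append(exp)
    return n


# Buffer of a model trained in a simpler environment (e.g., fewer agents)
donor = ReplayBuffer(capacity=1000)
for t in range(100):
    donor.add(state=t, action=t % 4, reward=-0.1 * t, next_state=t + 1)

# New agent in a denser environment starts from transferred experiences
# instead of a cold, empty buffer
new_agent_buffer = ReplayBuffer(capacity=1000)
moved = transfer_experiences(donor, new_agent_buffer, fraction=0.5)
print(moved, len(new_agent_buffer))  # 50 50
```

With the buffer pre-filled, the new agent can begin sampling mini-batches for training immediately, which is the mechanism by which transferred experience shortens the costly exploration phase described above.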


Data availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Code availability

Not applicable.


Funding

This research was funded in part by the National Council of Science and Technology (CONACyT, México) through the "Fondo Sectorial de Investigación para la Educación" under Grant Number 288670-Y.

Author information

Contributions

All authors participated in the project conceptualization and formal analysis; AA: carried out the methodology, investigation, software, and writing-original draft; ÁGA: carried out supervision, conceptualization, writing-reviewing and editing, and funding acquisition; All authors approved the final manuscript.

Corresponding author

Correspondence to Ángel G. Andrade.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare relevant to this article’s content.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Anzaldo, A., Andrade, Á.G. Buffer transference strategy for power control in B5G-ultra-dense wireless cellular networks. Wireless Netw 28, 3613–3620 (2022). https://doi.org/10.1007/s11276-022-03087-6
