Deep Reinforcement Learning-based resource allocation strategy for Energy Harvesting-Powered Cognitive Machine-to-Machine Networks
Computer Communications (IF 6) Pub Date: 2020-07-11, DOI: 10.1016/j.comcom.2020.07.015
Yi-Han Xu , Yong-Bo Tian , Prosper Komla Searyoh , Gang Yu , Yueh-Tiam Yong

Machine-to-Machine (M2M) communication is a promising technology for realizing the Internet of Things (IoT) in future networks. However, the massive number of devices and their concurrent access requirements can cause performance degradation and enormous energy consumption. Energy Harvesting-Powered Cognitive M2M Networks (EH-CMNs) are an attractive solution: they can alleviate the escalating spectrum scarcity to guarantee Quality of Service (QoS) while reducing energy consumption to achieve Green Communication (GC), and have therefore become an important research topic. In this paper, we investigate the resource allocation problem for EH-CMNs underlaying cellular uplinks. We aim to maximize the energy efficiency of EH-CMNs while accounting for the QoS of Human-to-Human (H2H) networks and the available energy at EH-devices. In view of the characteristics of EH-CMNs, we formulate the problem as a decentralized Discrete-time and Finite-state Markov Decision Process (DFMDP), in which each device acts as an agent and learns effectively from the environment to make allocation decisions without complete, global network information. Owing to the complexity of the problem, we propose a Deep Reinforcement Learning (DRL)-based algorithm to solve it. Numerical results validate that the proposed scheme outperforms other schemes in terms of average energy efficiency, with an acceptable convergence speed.
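To make the decentralized DRL formulation concrete, the sketch below shows a minimal DQN-style agent that selects a (channel, power-level) action from local observations only, roughly in the spirit of the paper's per-device agents. It is not the authors' algorithm: the environment (`toy_step`), the state layout (per-channel gains plus residual battery energy), the action space sizes, and the energy-efficiency-like reward are all illustrative assumptions.

```python
# Minimal sketch of a DQN-style agent for decentralized channel/power selection.
# The environment, state, action and reward definitions are illustrative stand-ins,
# not the paper's exact DFMDP formulation.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N_CHANNELS, N_POWER_LEVELS = 4, 3      # hypothetical action space: channel x power level
STATE_DIM = N_CHANNELS + 1             # per-channel gain + residual battery energy
N_ACTIONS = N_CHANNELS * N_POWER_LEVELS

class QNet(nn.Module):
    """Maps a locally observed state to Q-values over (channel, power) actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )
    def forward(self, x):
        return self.net(x)

def toy_step(state, action):
    """Illustrative environment: reward loosely mimics energy efficiency
    (throughput per unit of transmit energy), with a penalty on energy outage."""
    channel, power_level = divmod(action, N_POWER_LEVELS)
    gain, battery = state[channel], state[-1]
    power = 0.1 * (power_level + 1)
    rate = np.log2(1.0 + gain * power / 0.01)            # toy SINR -> rate
    reward = rate / power if battery >= power else -1.0   # penalize draining the battery
    new_battery = min(1.0, max(0.0, battery - power) + 0.05 * np.random.rand())  # harvest
    next_state = np.append(np.random.rand(N_CHANNELS), new_battery)
    return float(reward), next_state.astype(np.float32)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, epsilon = 0.95, 0.1

state = np.append(np.random.rand(N_CHANNELS), 1.0).astype(np.float32)
for step in range(2000):
    # epsilon-greedy action selection from local observations only
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.from_numpy(state)).argmax())
    reward, next_state = toy_step(state, action)
    replay.append((state, action, reward, next_state))
    state = next_state

    if len(replay) >= 64:
        batch = random.sample(replay, 64)
        s, a, r, s2 = map(np.array, zip(*batch))
        s, s2 = torch.from_numpy(s), torch.from_numpy(s2)
        a = torch.from_numpy(a).long().unsqueeze(1)
        r = torch.from_numpy(r.astype(np.float32))
        q = q_net(s).gather(1, a).squeeze(1)
        with torch.no_grad():                              # fixed target network
            target = r + gamma * target_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    if step % 200 == 0:
        target_net.load_state_dict(q_net.state_dict())
```

In a multi-device setting along the lines of the paper, each EH-device would run such an agent independently, observing only its own channel gains and battery state, which is what makes the scheme decentralized.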




Updated: 2020-07-15