Cooperative content caching and power allocation strategy for fog radio access networks
Physical Communication (IF 2.0), Pub Date: 2021-03-17, DOI: 10.1016/j.phycom.2021.101327
Fan Jiang, Xiaoli Zhang, Changyin Sun, Junxuan Wang

The fog radio access network (F-RAN) architecture is regarded as a promising solution for deploying caching and computing functions at the edge nodes of the network. However, the ever-increasing data traffic and the latency requirements of emerging applications pose new challenges to F-RANs. To minimize content fetching latency, we propose a cooperative content caching and power allocation scheme, which proactively caches content at the network edge and enables users to dynamically obtain the desired content either from fog access points (F-APs) or from proximate user equipments (UEs) through device-to-device (D2D) communication. Furthermore, the proposed scheme allocates appropriate transmit power to D2D user equipments (DUEs) so that the transmission rate is maximized. Specifically, the cooperative content caching problem is first formulated within a probability-triggered combinatorial multi-armed bandit (CMAB) framework. By incorporating user preference and content popularity prediction, an enhanced multi-agent reinforcement learning algorithm is proposed to obtain an optimal caching strategy. In addition, to minimize content fetching latency and guarantee that each UE can retrieve the desired content, the power allocation problem is modeled as maximizing the sum data rate of the users, and a Q-learning based power allocation strategy is derived. Simulation results based on the MovieLens dataset reveal that, compared with baseline methods, the proposed cooperative caching and power allocation scheme not only reduces content fetching latency but also increases the cache hit rate.
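The abstract does not include implementation details. As a rough illustration of the combinatorial multi-armed bandit (CMAB) caching idea it describes, the following Python sketch treats each content item as an arm and selects a cache placement (a super-arm) with a UCB-style index. The catalogue size, cache capacity, exploration weight and hit-based reward are assumptions for illustration only; they do not reproduce the paper's probability-triggered CMAB or its multi-agent extension.

# Illustrative sketch of a CMAB-style edge-caching policy (assumed reward model).
import numpy as np

class CMABCache:
    def __init__(self, num_contents, cache_size, exploration=2.0):
        self.num_contents = num_contents       # size of the content catalogue (assumed)
        self.cache_size = cache_size           # how many items the edge node can hold (assumed)
        self.c = exploration                   # UCB exploration weight
        self.counts = np.zeros(num_contents)   # times each content has been cached
        self.rewards = np.zeros(num_contents)  # cumulative observed hit reward
        self.t = 0                             # global round counter

    def select_cache(self):
        """Pick a super-arm: the cache_size contents with the highest UCB index."""
        self.t += 1
        mean = self.rewards / np.maximum(self.counts, 1)
        bonus = self.c * np.sqrt(np.log(self.t + 1) / np.maximum(self.counts, 1))
        ucb = np.where(self.counts > 0, mean + bonus, np.inf)  # force trying unseen contents
        return np.argsort(-ucb)[: self.cache_size]

    def update(self, cached_ids, hits):
        """hits[i] = 1 if cached content cached_ids[i] was requested this round, else 0."""
        for cid, hit in zip(cached_ids, hits):
            self.counts[cid] += 1
            self.rewards[cid] += hit

In a cooperative, multi-agent setting, one would expect each F-AP or DUE to run such a learner and exchange observed request statistics, but that coordination logic is beyond this sketch.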


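The Q-learning based power allocation mentioned in the abstract can likewise be pictured with a small tabular sketch: transmit power is discretized into a few levels, the state abstracts the interference level, and the reward is an assumed Shannon-rate term. All numbers, the channel model and the environment transitions below are illustrative assumptions, not the paper's formulation.

# Minimal tabular Q-learning sketch for choosing a transmit-power level on one D2D link.
import numpy as np

rng = np.random.default_rng(0)
power_levels = np.linspace(0.01, 0.1, 5)   # candidate transmit powers in W (assumed)
num_states = 4                             # quantised interference levels (assumed)
Q = np.zeros((num_states, len(power_levels)))
alpha, gamma, eps = 0.1, 0.9, 0.1          # learning rate, discount, exploration
noise, bandwidth = 1e-9, 1e6               # illustrative noise power (W) and bandwidth (Hz)

def reward(state, p):
    """Assumed reward: achievable rate in Mbit/s under a toy channel and interference model."""
    interference = state * 1e-9
    gain = rng.rayleigh(scale=1e-3) ** 2
    return bandwidth * np.log2(1 + p * gain / (noise + interference)) / 1e6

state = 0
for step in range(5000):
    # epsilon-greedy choice of a power level
    if rng.random() < eps:
        action = rng.integers(len(power_levels))
    else:
        action = int(np.argmax(Q[state]))
    r = reward(state, power_levels[action])
    next_state = rng.integers(num_states)   # toy environment transition
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("Learned power per interference state:", power_levels[np.argmax(Q, axis=1)])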


Updated: 2021-03-17