Collaborative Caching Strategy for RL-Based Content Downloading Algorithm in Clustered Vehicular Networks
IEEE Internet of Things Journal (IF 8.2) Pub Date: 1-9-2023, DOI: 10.1109/jiot.2023.3235661
Xiaodan Bi, Lian Zhao

With the explosive growth of content request services in vehicular networks, there is an urgent need to speed up the response to content requests and reduce the backhaul burden on base stations (BSs). However, most traditional caching strategies consider either content popularity or cluster-based caching alone, and their access paths are fixed. This article proposes a collaborative caching strategy for reinforcement learning (RL)-based content downloading. Specifically, the vehicles are first clustered by the K-means algorithm, and the content transmission distance is reduced by caching contents with high popularity at the cluster head (CH). Then, based on historical content request information, a long short-term memory (LSTM) network is used to predict content popularity. Contents with high predicted popularity are collaboratively cached at the BS and the CHs. Finally, the content downloading problem is formulated as a Markov decision process and solved with a deep RL algorithm, the deep Q network (DQN), to minimize a weighted cost comprising the downloading delay and the failure cost. With the DQN algorithm, the CH makes the access decision for each content request. The proposed collaborative caching strategy for RL-based content downloading can greatly shorten the response process and relieve the burden on the BS. Simulation results show that the proposed RL-based method achieves outstanding performance, improving the access hit ratio and reducing the content downloading delay.
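The clustering step described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): vehicles are grouped by K-means on their 2-D positions, and the vehicle nearest each cluster centroid is taken as the cluster head (CH), which keeps intra-cluster transmission distances short. All function and variable names are illustrative assumptions.

```python
def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def mean(pts):
    """Centroid of a non-empty list of 2-D points."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def kmeans(points, k, iters=20):
    """Plain K-means; farthest-point initialisation keeps the sketch deterministic."""
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        # Assign each vehicle to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        # Recompute centroids; keep the old one if a cluster goes empty.
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters

def cluster_heads(centroids, clusters):
    """CH = the vehicle nearest its cluster centroid."""
    return [min(c, key=lambda p: dist2(p, cen))
            for cen, c in zip(centroids, clusters) if c]

vehicles = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
cents, cls = kmeans(vehicles, k=2)
print(cluster_heads(cents, cls))  # two CHs, one per spatial group
```

In the paper's pipeline these CHs would then cache the contents the LSTM predicts to be popular, and the DQN chooses, per request, whether to fetch from the CH, a neighboring CH, or the BS.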

Updated: 2024-08-26