Deep Q-network based fog node offloading strategy for 5G vehicular ad hoc networks
Ad Hoc Networks (IF 4.8) Pub Date: 2021-06-03, DOI: 10.1016/j.adhoc.2021.102565
Ujjawal Maan , Yogesh Chaba

Research on vehicular ad hoc networks (VANETs) has been accelerated by 5G technology. Software-defined networking and fog nodes placed near the vehicles have improved throughput and latency in processing vehicle requests. However, fog nodes have limited computational resources, such as memory and processing capacity, and these must be managed optimally. Estimating vehicles' future locations can aid the optimal offloading of their processing requests. This paper introduces a Kalman filter prediction scheme to estimate a vehicle's next location, so that the future availability of fog resources can inform the offloading decision. Deep Q-network based reinforcement learning is used to select a resource-rich fog node in the VANET. The Long Short-Term Memory (LSTM) based Deep Q-Network offloads tasks across fog nodes according to their available resources, giving much better performance. The proposed Deep Q-Network algorithm is an efficient solution for optimally offloading requests and improves the overall performance of the network. The average reward of the proposed Deep Q-Network is found to be 56.889% higher than SARSA learning and 44.727% higher than Q-learning.
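The abstract's next-location estimation can be illustrated with a minimal constant-velocity Kalman filter. This is a sketch only: the paper does not publish its filter parameters, so the time step, noise variances, and scalar (1-D road position) state model below are illustrative assumptions.

```python
# Minimal sketch of a constant-velocity Kalman filter predicting a
# vehicle's next position along a road. All parameters (dt, q, r) are
# illustrative assumptions, not values from the paper.

def kalman_step(x, v, p, z, dt=1.0, q=0.1, r=1.0):
    """One predict/update cycle for a scalar position.

    x: position estimate, v: velocity estimate, p: estimate variance,
    z: new noisy position measurement, q/r: process/measurement noise.
    Returns the updated state and a one-step-ahead position prediction.
    """
    # Predict: propagate the state under the constant-velocity model.
    x_pred = x + v * dt
    p_pred = p + q

    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred

    # Re-estimate velocity from consecutive position estimates.
    v_new = (x_new - x) / dt

    # The offloading decision can use this *next* predicted location
    # to check which fog node the vehicle will be near.
    next_location = x_new + v_new * dt
    return x_new, v_new, p_new, next_location

# Usage: a vehicle moving ~10 m/s observed through noisy readings.
x, v, p = 0.0, 10.0, 1.0
for z in [10.4, 19.7, 30.2, 39.9]:
    x, v, p, nxt = kalman_step(x, v, p, z)
```

After a few measurements, the variance `p` shrinks and the predicted next location settles close to the true trajectory, which is what makes the prediction usable for pre-selecting a fog node.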
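The value-based selection of a resource-rich fog node can be sketched with a toy epsilon-greedy loop. The paper's method is an LSTM-based Deep Q-Network; here a small Q-table over a fixed set of fog nodes stands in for the network, purely to show the select/reward/update cycle. The node count, reward model, and learning rates are illustrative assumptions.

```python
import random

# Toy sketch of value-based fog node selection for task offloading.
# A Q-table stands in for the paper's LSTM-based Deep Q-Network; the
# reward model (more free resources -> higher reward, i.e. lower
# expected latency) is an illustrative assumption.

random.seed(0)

N_NODES = 3                 # candidate fog nodes near the vehicle
ALPHA, EPS = 0.5, 0.1       # learning rate, exploration rate

# Q[a] ~ expected reward of offloading the next task to fog node a.
Q = [0.0] * N_NODES

# Simulated free-resource fraction per fog node (node 2 is resource-rich).
free_cpu = [0.2, 0.5, 0.9]

def select(Q, eps):
    # Epsilon-greedy: usually exploit the best-valued node, sometimes explore.
    if random.random() < eps:
        return random.randrange(len(Q))
    return max(range(len(Q)), key=lambda a: Q[a])

for _ in range(500):
    a = select(Q, EPS)
    r = free_cpu[a]                 # reward: free resources on chosen node
    Q[a] += ALPHA * (r - Q[a])      # one-step value update (no successor term)
```

After a few hundred offloading decisions, the highest Q-value concentrates on the resource-rich node, mirroring the abstract's claim that the learned policy routes tasks to fog nodes with available resources.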




Updated: 2021-06-13