Mobile Networks and Applications (IF 3.8) | Pub Date: 2022-07-15 | DOI: 10.1007/s11036-022-02010-9 | Manoj Kumar Somesula, Anusha Kotte, Sudarshan Chakravarthy Annadanam, Sai Krishna Mothku
Caching the content most likely to be requested at mobile devices in a cooperative manner enables direct content delivery without fetching content from the remote content server, thus alleviating user-perceived latency, reducing the burden on the backhaul, and minimizing duplicate content transmissions. In addition to content popularity, it is essential to consider users' dynamic behaviour for real-time applications, which can further improve communication opportunities between user devices and shorten content service time. The majority of previous studies consider stationary network topologies, in which all users remain stationary during data transmission and receive desired content from the corresponding base station. In this work, we study an essential issue: caching content by exploiting user mobility and the randomness of user interaction time. We consider a realistic cooperative caching scenario in which user devices move at various velocities. We formulate the cache placement problem as the maximization of saved delay under capacity and deadline constraints, taking into account the contact duration and inter-contact time among user devices. To cope with the high dimensionality of the resulting integer linear programming problem, we design an on-policy-learning caching scheme integrated with fuzzy logic. The proposed scheme achieves a higher long-term reward and faster convergence than the Q-learning mechanism. Extensive simulation results demonstrate that the proposed cooperative caching mechanism significantly outperforms existing mechanisms in terms of reward, acceleration ratio, hit ratio, and offloading ratio.
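The saved-delay maximization described in the abstract can be sketched as an integer linear program. The notation below ($\mathcal{U}$, $\mathcal{F}$, $D_{u,f}$, $s_f$, $C_u$, $T_{u,f}$, $\tau_f$) is illustrative and not taken from the paper:

```latex
% x_{u,f} = 1 if content f is cached at device u (binary placement variable)
\max_{x}\; \sum_{u \in \mathcal{U}} \sum_{f \in \mathcal{F}} D_{u,f}\, x_{u,f}
    \qquad \text{(total saved delivery delay)}
\quad \text{s.t.} \quad
\sum_{f \in \mathcal{F}} s_f\, x_{u,f} \le C_u \quad \forall u \in \mathcal{U}
    \quad \text{(cache capacity)},
\qquad
T_{u,f}\, x_{u,f} \le \tau_f \quad \forall u, f
    \quad \text{(delivery before deadline)},
\qquad
x_{u,f} \in \{0, 1\}.
```

Here $D_{u,f}$ would aggregate the delay saved when device $u$ serves content $f$ over device-to-device contacts, which is where the contact duration and inter-contact time statistics enter the formulation.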
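The on-policy learning integrated with fuzzy logic can be illustrated with a minimal tabular SARSA sketch, in which fuzzy membership functions map continuous contact durations to linguistic states. All names, membership shapes, and the reward definition below are illustrative assumptions for a toy setting, not the paper's actual scheme:

```python
import random

# Hypothetical triangular/ramp membership functions over contact duration
# (in arbitrary time units); the shapes and breakpoints are assumptions.
def fuzzy_label(contact_time):
    """Map a continuous contact duration to a fuzzy linguistic state."""
    short = max(0.0, 1.0 - contact_time / 5.0)
    medium = max(0.0, 1.0 - abs(contact_time - 7.5) / 5.0)
    long_ = min(1.0, contact_time / 10.0)
    degrees = {"short": short, "medium": medium, "long": long_}
    return max(degrees, key=degrees.get)  # defuzzify by maximum membership

ACTIONS = ["cache", "skip"]

def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.9):
    """On-policy SARSA update: bootstraps on the action actually taken next."""
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
        r + gamma * Q.get((s2, a2), 0.0) - Q.get((s, a), 0.0)
    )

def epsilon_greedy(Q, s, eps=0.1):
    """Behaviour policy: explore with probability eps, else act greedily."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))

# Toy training loop: reward is a proxy for saved delay, earned only when
# caching during a contact (longer contacts save more delay).
random.seed(0)
Q = {}
for _ in range(2000):
    t = random.uniform(0, 15)          # sampled contact duration
    s = fuzzy_label(t)
    a = epsilon_greedy(Q, s)
    r = t if a == "cache" else 0.0     # saved-delay proxy reward
    t2 = random.uniform(0, 15)         # next contact
    s2 = fuzzy_label(t2)
    a2 = epsilon_greedy(Q, s2)         # on-policy: next action from same policy
    sarsa_update(Q, s, a, r, s2, a2)

# Learned greedy action for "long" contacts (caching is preferred).
print(max(ACTIONS, key=lambda a: Q.get(("long", a), 0.0)))
```

The fuzzy discretization keeps the state space tiny, which is one plausible way such a scheme sidesteps the dimensionality of the underlying integer program; the paper's actual state, action, and reward design will differ.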
Title: Deadline-Aware Cache Placement Scheme Using Fuzzy Reinforcement Learning in Device-to-Device Mobile Edge Networks