Cache-Enabled Multicast Content Pushing With Structured Deep Learning
IEEE Journal on Selected Areas in Communications (IF 13.8), Pub Date: 2021-05-10, DOI: 10.1109/jsac.2021.3078493
Qi Chen, Wei Wang, Wei Chen, F. Richard Yu, Zhaoyang Zhang

Cache-enabled multicast content pushing, which multicasts content items to multiple users and caches them until they are requested, is a promising technique for alleviating heavy network load by enhancing traffic offloading. This, in turn, calls for optimizing the content pushing strategy while considering both the transmission and caching resources, which jointly introduce complicated coupling among pushing decisions and lead to high computational complexity. Unlike most existing approaches, which simplify the pushing problem by bypassing this coupling, in this paper we propose a multicast content pushing strategy based on structured deep learning that maximizes the offloaded traffic while accounting for the cost of content caching. Specifically, we design a convolution stage to extract the spatio-temporal correlations among the pushing decisions of a single content item, and construct a fully-connected stage to capture the spatial coupling among the decisions of pushing different content items to different user devices. Moreover, to address the absence of ground truth for multicast content pushing, we relax the transmission constraint to derive a performance upper bound that guides the training direction. The relaxed problem is solved by dynamic programming in a bottom-up manner. Compared to state-of-the-art baselines, including both traditional model-based and general neural-network-based strategies, the proposed pushing strategy achieves significant performance gains on both a randomly generated dataset and the real LastFM dataset. In addition, the proposed strategy is shown to be robust to uncertainty in user request information.
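To make the two-stage structure concrete, below is a minimal PyTorch-style sketch of such a structured network. The input encoding (per-item request statistics over users and time slots), the class name StructuredPushingNet, and all layer sizes are illustrative assumptions rather than the authors' exact design; the sketch only mirrors the described split into a per-item convolution stage followed by a fully-connected stage over all items and users.

```python
# Hypothetical sketch of the convolution + fully-connected structure described
# in the abstract; shapes and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn


class StructuredPushingNet(nn.Module):
    """Per-item convolution stage, then a fully-connected stage that couples
    the pushing decisions across content items and user devices."""

    def __init__(self, num_items: int, num_users: int, num_slots: int,
                 conv_channels: int = 16, hidden: int = 256):
        super().__init__()
        # Convolution stage: applied to each item's (users x time-slots) map
        # to extract spatio-temporal correlations of its pushing decisions.
        self.conv = nn.Sequential(
            nn.Conv2d(1, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(conv_channels, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        feat_dim = num_items * conv_channels * num_users * num_slots
        # Fully-connected stage: mixes features across items and users to
        # capture the spatial coupling among different pushing decisions.
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_items * num_users),
            nn.Sigmoid(),  # probability of pushing item f to user k
        )
        self.num_items, self.num_users = num_items, num_users

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_items, num_users, num_slots) request/state statistics
        b, f, k, t = x.shape
        h = self.conv(x.reshape(b * f, 1, k, t))   # convolve each item separately
        h = h.reshape(b, -1)                       # flatten features of all items
        return self.fc(h).reshape(b, self.num_items, self.num_users)


# Example usage with hypothetical dimensions.
net = StructuredPushingNet(num_items=20, num_users=8, num_slots=12)
probs = net(torch.rand(4, 20, 8, 12))  # (4, 20, 8) pushing probabilities
```

Sharing the convolution stage across items reflects the idea of first exploiting each item's spatio-temporal structure in isolation, leaving the cross-item, cross-user coupling to the fully-connected stage.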

Updated: 2021-05-10