Fundamental Limits of Cache-Aided Broadcast Networks with User Cooperation
arXiv - CS - Information Theory. Pub Date: 2020-06-28, DOI: arxiv-2006.16818
Jiahui Chen, Xiaowen You, Youlong Wu, Haoyu Yin

We consider cache-aided broadcast networks with user cooperation, where a server connects to multiple users and the users can cooperate with each other through a cooperation network. A new definition of transmission delay is introduced to characterize the latency cost during the delivery phase for arbitrary cache-aided networks. We investigate the deterministic caching and decentralized random caching setups, respectively. For the deterministic caching setup, we propose a new coded caching scheme that fully exploits the time resource by allowing parallel transmission between the server and the users. A constant multiplicative gap is derived between the information-theoretic lower and upper bounds on the transmission delay. For the decentralized random caching setup, we show that if the cache size of each user is larger than a small threshold, which tends to zero as the number of users goes to infinity, then the proposed decentralized coded caching scheme approaches an upper bound to within a constant multiplicative factor. For both the centralized and decentralized scenarios, we characterize the cooperation gain (offered by cooperation among the users) and the parallel gain (offered by parallel transmission between the server and multiple users), both of which greatly reduce the transmission delay. Furthermore, we show that, due to a tradeoff among the parallel gain, the cooperation gain, and the multicast gain, the number of users that send information in parallel should be chosen dynamically according to the system parameters, and letting more users send information in parallel can in fact increase the transmission delay.
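For context, the "multicast gain" mentioned above is the load reduction already achieved by classic coded caching without user cooperation. The short Python sketch below is only an illustration of that well-known baseline (the Maddah-Ali–Niesen centralized delivery load), not the scheme proposed in this paper; the function name man_delivery_load and the parameters K (users), N (files), and M (cache size in files) are our own notation for this example.

def man_delivery_load(K: int, N: int, M: float) -> float:
    """Classic centralized coded caching delivery load K(1 - M/N) / (1 + K*M/N).

    Strictly valid when t = K*M/N is an integer; other points are reached by
    memory sharing. This serves only as the no-cooperation baseline against
    which cooperation and parallel gains are typically measured.
    """
    t = K * M / N  # average number of users caching each file part
    return K * (1 - M / N) / (1 + t)

if __name__ == "__main__":
    # Example: 10 users, 10 files, each user caches 2 files.
    # Uncoded local caching would still require K*(1 - M/N) = 8 file loads;
    # coded multicasting divides this by the multicast gain 1 + K*M/N = 3.
    print(man_delivery_load(K=10, N=10, M=2))  # ~2.67

The paper's point about the gain tradeoff can be read against this baseline: adding cooperation and parallel server-user transmission reduces delay further, but only up to the point where splitting transmissions among too many parallel senders erodes the multicast gain.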

Updated: 2020-07-01