Improving Learning Efficiency for Wireless Resource Allocation with Symmetric Prior
IEEE Wireless Communications ( IF 10.9 ) Pub Date : 2022-03-28 , DOI: 10.1109/mwc.003.21003437
Chengjian Sun 1 , Jiajun Wu 1 , Chenyang Yang 1
Improving learning efficiency is paramount for learning resource allocation with deep neural networks (DNNs) in wireless communications over highly dynamic environments. Incorporating domain knowledge into learning is a promising approach to dealing with this issue, and is an emerging topic in the wireless community. In this article, we briefly summarize two approaches for using domain knowledge: introducing mathematical models and introducing prior knowledge into deep learning. Then, we consider a type of symmetric prior, permutation equivariance, which widely exists in wireless tasks. To explain how such a generic prior is harnessed to improve learning efficiency, we resort to ranking, which jointly sorts the input and output of a DNN. We use power allocation among subcarriers, probabilistic content caching, and interference coordination to illustrate how exploiting this property improves learning efficiency. From the case study, we find that the number of training samples required to achieve a given system performance decreases with the number of subcarriers or contents, owing to an interesting phenomenon called "sample hardening." Simulation results show that the training samples, the free parameters in DNNs, and the training time can be reduced dramatically by harnessing the prior knowledge. The samples required to train a DNN after ranking can be reduced by a factor of 15∼2,400 while achieving the same system performance as the counterpart that does not use the prior.
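The ranking idea the abstract describes can be sketched in a few lines: sort the input into a canonical order before the DNN sees it, then undo the sort on the output, so the DNN only ever learns on one representative of each permutation class. The sketch below is not the paper's implementation; `toy_policy` is a hypothetical stand-in for a trained DNN, used only to show the sort/unsort mechanics.

```python
import numpy as np

def equivariant_wrapper(policy, x):
    """Make any per-vector policy permutation-equivariant via ranking:
    sort the input into canonical (descending) order, apply the policy
    to the sorted form, then scatter the outputs back to the original
    positions."""
    order = np.argsort(x)[::-1]      # permutation that sorts x descending
    y_sorted = policy(x[order])      # the policy only ever sees sorted inputs
    y = np.empty_like(y_sorted)
    y[order] = y_sorted              # undo the sort on the output side
    return y

def toy_policy(h_sorted):
    """Hypothetical stand-in for a DNN: allocate power in proportion
    to channel gain (a crude surrogate for, e.g., water-filling)."""
    return h_sorted / h_sorted.sum()

h = np.array([0.2, 1.5, 0.7, 0.9])   # per-subcarrier channel gains
p = equivariant_wrapper(toy_policy, h)

# Equivariance check: permuting the input permutes the output identically.
perm = np.random.permutation(len(h))
p_perm = equivariant_wrapper(toy_policy, h[perm])
assert np.allclose(p[perm], p_perm)
```

Because every permutation of a given channel vector collapses to the same sorted input, the DNN's effective input space shrinks by up to a factor of K! for K subcarriers, which is the intuition behind the reported reduction in required training samples.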

Updated: 2022-03-28