Thompson Sampling for Combinatorial Network Optimization in Unknown Environments
IEEE/ACM Transactions on Networking (IF 3.7), Pub Date: 2020-10-02, DOI: 10.1109/tnet.2020.3025904
Alihan Huyuk, Cem Tekin

Influence maximization, adaptive routing, and dynamic spectrum allocation all require choosing the right action from a large set of alternatives. Thanks to advances in combinatorial optimization, these and many similar problems can be solved efficiently given an environment with known stochasticity. In this paper, we take this one step further and focus on combinatorial optimization in unknown environments. We consider a very general learning framework called combinatorial multi-armed bandit with probabilistically triggered arms and a very powerful Bayesian algorithm called Combinatorial Thompson Sampling (CTS). Under the semi-bandit feedback model and assuming access to an oracle but without knowing the expected base arm outcomes beforehand, we show that when the expected reward is Lipschitz continuous in the expected base arm outcomes, CTS achieves $O\left(\sum_{i=1}^{m}\log T/(p_{i}\Delta_{i})\right)$ regret and $O(\max\{\mathbb{E}[m\sqrt{T\log T/p^{*}}], \mathbb{E}[m^{2}/p^{*}]\})$ Bayesian regret, where $m$ denotes the number of base arms, $p_{i}$ and $\Delta_{i}$ denote the minimum non-zero triggering probability and the minimum suboptimality gap of base arm $i$, respectively, $T$ denotes the time horizon, and $p^{*}$ denotes the overall minimum non-zero triggering probability. We also show that when the expected reward satisfies the triggering probability modulated Lipschitz continuity, CTS achieves $O(\max\{m\sqrt{T\log T}, m^{2}\})$ Bayesian regret, and that when the triggering probabilities are non-zero for all base arms, CTS achieves $O((1/p^{*})\log(1/p^{*}))$ regret independent of the time horizon. Finally, we numerically compare CTS with algorithms based on upper confidence bounds in several networking problems and show that CTS outperforms these algorithms by at least an order of magnitude in the majority of cases.
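The abstract does not include the paper's pseudocode, so the following is only a minimal sketch of the CTS loop it describes, under added assumptions: base arm outcomes are Bernoulli (so each unknown mean gets an independent Beta posterior), and the problem-specific pieces are represented by hypothetical interfaces `oracle` (maps a sampled mean vector to a super arm) and `env.play` (returns the indices and outcomes of the base arms triggered under semi-bandit feedback).

```python
import numpy as np

def combinatorial_thompson_sampling(oracle, env, m, T, rng=None):
    """Sketch of CTS for combinatorial bandits with probabilistically triggered arms.

    Assumes Bernoulli base arm outcomes, so each base arm i keeps an independent
    Beta(a[i], b[i]) posterior over its expected outcome. `oracle` and `env` are
    hypothetical stand-ins for the problem-specific components.
    """
    rng = rng or np.random.default_rng()
    a = np.ones(m)  # Beta posterior parameters: pseudo-counts of successes + 1
    b = np.ones(m)  # Beta posterior parameters: pseudo-counts of failures + 1

    for t in range(T):
        # Sample a plausible mean-outcome vector from the current posterior.
        theta = rng.beta(a, b)
        # The (approximation) oracle selects a super arm as if theta were the truth.
        super_arm = oracle(theta)
        # Semi-bandit feedback: only the triggered base arms reveal their outcomes.
        triggered, outcomes = env.play(super_arm)
        for i, x in zip(triggered, outcomes):
            a[i] += x        # outcome 1: count a success
            b[i] += 1 - x    # outcome 0: count a failure
    return a / (a + b)       # posterior mean estimates of the base arm outcomes
```

As a usage example in the spirit of the abstract, for influence maximization the oracle could be a greedy seed-set selector applied to the sampled edge probabilities, and the triggered base arms would be the edges reached by the resulting diffusion.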

Updated: 2020-10-02