ε-BMC: A Bayesian Ensemble Approach to Epsilon-Greedy Exploration in Model-Free Reinforcement Learning
arXiv - CS - Robotics. Pub Date: 2020-07-02, DOI: arxiv-2007.00869
Michael Gimelfarb, Scott Sanner, Chi-Guhn Lee

Resolving the exploration-exploitation trade-off remains a fundamental problem in the design and implementation of reinforcement learning (RL) algorithms. In this paper, we focus on model-free RL using the epsilon-greedy exploration policy, which, despite its simplicity, remains one of the most frequently used forms of exploration. However, a key limitation of this policy is the need to specify $\varepsilon$. In this paper, we provide a novel Bayesian perspective on $\varepsilon$ as a measure of the uniformity of the Q-value function. Based on this new perspective, we introduce a closed-form Bayesian model update derived from Bayesian model combination (BMC), which allows us to adapt $\varepsilon$ using experience from the environment in constant time, with monotone convergence guarantees. We demonstrate that our proposed algorithm, $\varepsilon$-BMC, efficiently balances exploration and exploitation on different problems, performing comparably to or outperforming the best tuned fixed annealing schedules and an alternative data-dependent $\varepsilon$ adaptation scheme proposed in the literature.
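To make the adaptation idea concrete, below is a minimal Python sketch of a BMC-style treatment of $\varepsilon$: the observed Q-value targets are explained by two candidate models, a uniform one (all actions look equally good, so explore) and a greedy one (the max-Q action is right, so exploit), and $\varepsilon$ is read off as the posterior weight of the uniform model, updated in constant time per observation. The Gaussian likelihoods, the prior, and the names AdaptiveEpsilon and epsilon_greedy_action are illustrative assumptions for this sketch, not the exact closed-form update derived in the paper.

import numpy as np

def epsilon_greedy_action(q_values, epsilon, rng):
    # With probability epsilon pick a uniform random action, else the greedy one.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

class AdaptiveEpsilon:
    # Hypothetical BMC-style adapter: epsilon is the posterior probability of a
    # "uniform" model of the Q-values versus a "greedy" model, updated by Bayes'
    # rule with Gaussian likelihoods (an assumption made for this sketch).
    def __init__(self, prior_uniform=0.5, sigma=1.0):
        self.log_w_uniform = np.log(prior_uniform)
        self.log_w_greedy = np.log(1.0 - prior_uniform)
        self.sigma = sigma

    def update(self, q_values, q_target):
        # Each model's prediction for the observed target:
        mu_uniform = float(np.mean(q_values))  # uniform model: average Q
        mu_greedy = float(np.max(q_values))    # greedy model: best Q
        log_gauss = lambda mu: -0.5 * ((q_target - mu) / self.sigma) ** 2
        self.log_w_uniform += log_gauss(mu_uniform)
        self.log_w_greedy += log_gauss(mu_greedy)
        # Renormalize the two weights in log space for numerical stability.
        m = max(self.log_w_uniform, self.log_w_greedy)
        z = m + np.log(np.exp(self.log_w_uniform - m)
                       + np.exp(self.log_w_greedy - m))
        self.log_w_uniform -= z
        self.log_w_greedy -= z

    @property
    def epsilon(self):
        return float(np.exp(self.log_w_uniform))

In a Q-learning loop, one would call update(Q[s], td_target) after each transition and pass the current epsilon property to epsilon_greedy_action at the next action selection, so that exploration shrinks automatically as the greedy model explains the observed targets better.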

Updated: 2020-07-03