Distributed Bayesian optimization of deep reinforcement learning algorithms
Journal of Parallel and Distributed Computing (IF 3.4), Pub Date: 2020-02-04, DOI: 10.1016/j.jpdc.2019.07.008
M. Todd Young, Jacob D. Hinkle, Ramakrishnan Kannan, Arvind Ramanathan

Significant strides have been made in supervised learning thanks to the successful application of deep learning. Recent work has brought these techniques to bear on sequential decision processes in the area of deep reinforcement learning (DRL). Currently, little is known about hyperparameter optimization for DRL algorithms. Because DRL algorithms are computationally intensive to train and are known to be sample inefficient, optimizing their model hyperparameters poses significant challenges for established techniques. We provide an open-source, distributed Bayesian model-based optimization algorithm, HyperSpace, and show that it consistently outperforms standard hyperparameter optimization techniques across three DRL algorithms.
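To illustrate the general idea of Bayesian model-based hyperparameter optimization for a DRL agent, here is a minimal sketch using scikit-optimize's Gaussian-process optimizer. It is not the authors' HyperSpace library or its API; the search-space names (learning_rate, discount_gamma, batch_size) and the synthetic objective are illustrative assumptions standing in for a full, distributed DRL training run.

```python
# Illustrative sketch only: Bayesian model-based hyperparameter search,
# NOT the HyperSpace implementation described in the paper.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real, Integer

# Hyperparameter search space (names and ranges are illustrative assumptions).
space = [
    Real(1e-5, 1e-2, prior="log-uniform", name="learning_rate"),
    Real(0.9, 0.999, name="discount_gamma"),
    Integer(32, 512, name="batch_size"),
]

def objective(params):
    """Stand-in for training a DRL agent with the given hyperparameters.

    In practice this would launch a full training run (e.g., DQN or PPO) and
    return the negative mean episode reward; the expense of each such run is
    exactly why the paper distributes the search across compute nodes.
    """
    lr, gamma, batch_size = params
    # Synthetic, smooth response surface used only to keep the sketch runnable.
    score = (-(np.log10(lr) + 3.5) ** 2
             - 100.0 * (gamma - 0.99) ** 2
             - abs(batch_size - 128) / 256.0)
    return -score  # gp_minimize minimizes, so negate the (synthetic) reward.

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best hyperparameters:", result.x)
print("best objective value:", result.fun)
```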



