Momentum-Based Policy Gradient Methods
arXiv - CS - Systems and Control. Pub Date: 2020-07-13, DOI: arxiv-2007.06680
Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang

In this paper, we propose a class of efficient momentum-based policy gradient methods for model-free reinforcement learning, which use adaptive learning rates and do not require any large batches. Specifically, we propose a fast importance-sampling momentum-based policy gradient (IS-MBPG) method built on a new momentum-based variance-reduction technique and the importance sampling technique. We also propose a fast Hessian-aided momentum-based policy gradient (HA-MBPG) method built on the momentum-based variance-reduction technique and the Hessian-aided technique. Moreover, we prove that both the IS-MBPG and HA-MBPG methods reach the best-known sample complexity of $O(\epsilon^{-3})$ for finding an $\epsilon$-stationary point of the non-concave performance function, while requiring only one trajectory per iteration. In particular, we present a non-adaptive version of the IS-MBPG method, i.e., IS-MBPG*, which also reaches the best-known sample complexity of $O(\epsilon^{-3})$ without any large batches. In the experiments, we use four benchmark tasks to demonstrate the effectiveness of our algorithms.
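Below is a minimal illustrative sketch (not the authors' implementation) of the kind of single-trajectory, momentum-based variance-reduced policy gradient update with importance sampling that the abstract describes. The toy MDP, the softmax policy, the helper names (sample_trajectory, policy_gradient, is_weight), and the beta/eta schedules are all assumptions introduced here for illustration.

import numpy as np

# A minimal sketch of an IS-MBPG-style (STORM-like) update; everything below
# (environment, policy, schedules) is an illustrative assumption, not the paper's code.
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, HORIZON = 4, 2, 10

def softmax_probs(theta, s):
    z = theta[s] - theta[s].max()
    p = np.exp(z)
    return p / p.sum()

def sample_trajectory(theta):
    """Roll out one episode in a tiny random-walk MDP (toy assumption)."""
    s, traj = 0, []
    for _ in range(HORIZON):
        p = softmax_probs(theta, s)
        a = rng.choice(N_ACTIONS, p=p)
        r = 1.0 if (s == N_STATES - 1 and a == 1) else 0.0
        traj.append((s, a, r))
        s = min(N_STATES - 1, max(0, s + (1 if a == 1 else -1)))
    return traj

def grad_log_pi(theta, s, a):
    """Gradient of log pi_theta(a|s) for a tabular softmax policy."""
    g = np.zeros_like(theta)
    g[s] = -softmax_probs(theta, s)
    g[s, a] += 1.0
    return g

def policy_gradient(theta, traj):
    """REINFORCE-style single-trajectory estimator g(tau, theta)."""
    G = sum(r for _, _, r in traj)
    return sum(grad_log_pi(theta, s, a) for s, a, _ in traj) * G

def is_weight(theta_old, theta_new, traj):
    """Importance weight pi_{theta_old}(tau) / pi_{theta_new}(tau) for a trajectory sampled under theta_new."""
    log_w = 0.0
    for s, a, _ in traj:
        log_w += np.log(softmax_probs(theta_old, s)[a]) - np.log(softmax_probs(theta_new, s)[a])
    return np.exp(log_w)

theta = np.zeros((N_STATES, N_ACTIONS))
theta_prev, d = theta.copy(), np.zeros_like(theta)
for t in range(1, 201):
    beta = min(1.0, 2.0 / t ** (2.0 / 3.0))   # momentum weight (illustrative schedule)
    eta = 0.5 / t ** (1.0 / 3.0)              # step size (illustrative schedule)
    traj = sample_trajectory(theta)           # one trajectory per iteration, no large batch
    g_new = policy_gradient(theta, traj)
    g_old = policy_gradient(theta_prev, traj)
    w = is_weight(theta_prev, theta, traj)
    # STORM-like recursion: d_t = beta*g_t + (1 - beta)*(d_{t-1} + g_t - w*g_{t-1})
    d = beta * g_new + (1 - beta) * (d + g_new - w * g_old)
    theta_prev = theta.copy()
    theta = theta + eta * d                   # gradient ascent on the expected return
    if t % 50 == 0:
        avg = np.mean([sum(r for _, _, r in sample_trajectory(theta)) for _ in range(20)])
        print(f"iter {t:3d}  avg return {avg:.2f}")

The key step is the recursion for d, which evaluates the gradient estimator at both the current and previous parameters on the same trajectory and corrects the distribution mismatch with the importance weight; this variance-reduced momentum term is what allows a single trajectory per iteration instead of large batches.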

Updated: 2020-08-07