Control as Hybrid Inference
arXiv - CS - Artificial Intelligence Pub Date : 2020-07-11 , DOI: arxiv-2007.05838
Alexander Tschantz, Beren Millidge, Anil K. Seth, Christopher L. Buckley

The field of reinforcement learning can be split into model-based and model-free methods. Here, we unify these approaches by casting model-free policy optimisation as amortised variational inference, and model-based planning as iterative variational inference, within a `control as hybrid inference' (CHI) framework. We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference. Using a didactic experiment, we demonstrate that the proposed algorithm operates in a model-based manner at the onset of learning, before converging to a model-free algorithm once sufficient data have been collected. We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines. CHI thus provides a principled framework for harnessing the sample efficiency of model-based planning while retaining the asymptotic performance of model-free policy optimisation.
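The interplay the abstract describes (an amortised policy proposing actions, refined by iterative model-based inference, and the refinements distilled back into the policy) can be illustrated with a toy sketch. This is not the authors' implementation: the 1-D dynamics, the CEM-style refinement loop, and the least-squares distillation are all illustrative assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic dynamics: x' = x + a, with reward -(x')^2,
# so the optimal action is a = -x.
def step(x, a):
    return x + a

def reward(x):
    return -x ** 2

# Amortised "policy": a(x) = w * x, fit by regression on refined actions.
w = 0.0

def amortised_action(x):
    return w * x

def iterative_refine(x, a0, iters=20, pop=64, sigma=0.5):
    """CEM-style iterative inference over a single action, warm-started
    at the amortised proposal a0."""
    mu = a0
    for _ in range(iters):
        cand = mu + sigma * rng.standard_normal(pop)
        scores = reward(step(x, cand))
        elites = cand[np.argsort(scores)[-8:]]   # keep the top 8 candidates
        mu, sigma = elites.mean(), elites.std() + 1e-3
    return mu

# Interaction loop: plan with iterative inference, then distil the
# refined actions back into the amortised policy (least squares on w).
X, A = [], []
for _ in range(200):
    x = rng.uniform(-2, 2)
    a = iterative_refine(x, amortised_action(x))
    X.append(x)
    A.append(a)
    w = np.dot(X, A) / (np.dot(X, X) + 1e-8)

print(w)  # close to the optimal gain -1.0
```

Early on, `w` is uninformative and the CEM refinement does all the work (model-based regime); as data accumulate, the amortised policy's proposals are already near-optimal and refinement contributes little (model-free regime), mirroring the transition the abstract reports.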

Updated: 2020-07-14