Discriminator Soft Actor Critic without Extrinsic Rewards
arXiv - CS - Machine Learning. Pub Date: 2020-01-19, DOI: arxiv-2001.06808
Daichi Nishio, Daiki Kuyoshi, Toi Tsuneda and Satoshi Yamane

Imitating well in unknown states from a small amount of expert data and sampled data is difficult. Supervised learning methods such as Behavioral Cloning do not require sampled data, but they usually suffer from distribution shift. Methods based on reinforcement learning, such as inverse reinforcement learning and generative adversarial imitation learning (GAIL), can learn from only a few expert demonstrations, but they typically require many interactions with the environment. Soft Q imitation learning (SQIL) addressed these problems; it was shown to learn efficiently by combining Behavioral Cloning with soft Q-learning using constant rewards. To make this algorithm more robust to distribution shift, we propose Discriminator Soft Actor Critic (DSAC), which replaces the constant rewards with a reward function based on adversarial inverse reinforcement learning (AIRL). We evaluate DSAC on PyBullet environments with only four expert trajectories.
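To make the core change concrete: SQIL assigns a constant reward of 1 to expert transitions and 0 to transitions sampled by the agent, whereas DSAC derives the reward from a learned discriminator, in the style of AIRL. The sketch below is a rough illustration, not the authors' code; the network shape, names, and the choice of (state, action) as discriminator input are assumptions. It shows such a discriminator and the AIRL-style reward log D(s,a) - log(1 - D(s,a)), which for a sigmoid classifier equals the raw logit.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Binary classifier over (state, action) pairs: expert vs. policy data.
    Architecture and hidden size are illustrative assumptions, not from the paper."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: high => transition looks like expert data
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def discriminator_reward(disc, state, action):
    """AIRL-style reward log D - log(1 - D), computed stably from the logit:
    with D = sigmoid(l), log D - log(1 - D) simplifies to l itself."""
    logit = disc(state, action)
    return logit.detach()  # reward is a fixed target for the actor-critic update

def discriminator_loss(disc, expert_s, expert_a, policy_s, policy_a):
    """Standard GAN-style binary cross-entropy: expert labeled 1, policy 0."""
    bce = nn.BCEWithLogitsLoss()
    expert_logit = disc(expert_s, expert_a)
    policy_logit = disc(policy_s, policy_a)
    return (bce(expert_logit, torch.ones_like(expert_logit)) +
            bce(policy_logit, torch.zeros_like(policy_logit)))
```

The `.detach()` prevents gradients from flowing back into the discriminator when the reward is used in the soft actor-critic update; the discriminator itself is trained separately with binary cross-entropy on batches drawn from the expert demonstrations and the agent's replay buffer.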

Updated: 2020-02-03