Learning more skills through optimistic exploration
arXiv - CS - Artificial Intelligence. Pub Date: 2021-07-29. DOI: arXiv:2107.14226. DJ Strouse, Kate Baumli, David Warde-Farley, Vlad Mnih, Steven Hansen
Unsupervised skill learning objectives (Gregor et al., 2016, Eysenbach et
al., 2018) allow agents to learn rich repertoires of behavior in the absence of
extrinsic rewards. They work by simultaneously training a policy to produce
distinguishable latent-conditioned trajectories, and a discriminator to
evaluate distinguishability by trying to infer latents from trajectories. The
hope is for the agent to explore and master the environment by encouraging each
skill (latent) to reliably reach different states. However, an inherent
exploration problem lingers: when a novel state is actually encountered, the
discriminator will necessarily not have seen enough training data to produce
accurate and confident skill classifications, leading to low intrinsic reward
for the agent and effective penalization of the sort of exploration needed to
actually maximize the objective. To combat this inherent pessimism towards
exploration, we derive an information gain auxiliary objective that involves
training an ensemble of discriminators and rewarding the policy for their
disagreement. Our objective directly estimates the epistemic uncertainty that
comes from the discriminator not having seen enough training examples, thus
providing an intrinsic reward more tailored to the true objective compared to
pseudocount-based methods (Burda et al., 2019). We call this exploration bonus
discriminator disagreement intrinsic reward, or DISDAIN. We demonstrate
empirically that DISDAIN improves skill learning both in a tabular grid world
(Four Rooms) and the 57 games of the Atari Suite (from pixels). Thus, we
encourage researchers to treat pessimism with DISDAIN.
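The disagreement bonus the abstract describes can be written as an information-gain (Jensen-gap) quantity: the entropy of the ensemble-averaged skill posterior minus the average entropy of the individual members' posteriors. The snippet below is a minimal illustrative sketch of that quantity, not the paper's implementation; the function name is hypothetical, and it assumes each ensemble member outputs a categorical posterior q_k(z|s) over skills for the current state.

```python
import math

def disdain_bonus(posteriors):
    """Ensemble-disagreement bonus for one state.

    posteriors: list of K categorical distributions over skills,
    one per discriminator in the ensemble. Returns the entropy of
    the mean posterior minus the mean per-member entropy, which is
    ~0 when the members agree and positive when they disagree.
    """
    def entropy(p):
        return -sum(x * math.log(x) for x in p if x > 0.0)

    k = len(posteriors)
    n_skills = len(posteriors[0])
    mean_p = [sum(q[i] for q in posteriors) / k for i in range(n_skills)]
    return entropy(mean_p) - sum(entropy(q) for q in posteriors) / k
```

When all K discriminators agree the bonus vanishes, and when they split completely it approaches log K, so states the ensemble has rarely seen (where epistemic uncertainty is highest) earn the largest intrinsic reward, counteracting the pessimism described above.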
Updated: 2021-07-30