Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research
arXiv - CS - Machine Learning. Pub Date: 2020-11-20. arXiv: 2011.14826
Johan S. Obando-Ceron, Pablo Samuel Castro

Since the introduction of DQN, a vast majority of reinforcement learning research has focused on reinforcement learning with deep neural networks as function approximators. New methods are typically evaluated on a set of environments that have now become standard, such as Atari 2600 games. While these benchmarks help standardize evaluation, their computational cost has the unfortunate side effect of widening the gap between those with ample access to computational resources, and those without. In this work we argue that, despite the community's emphasis on large-scale environments, the traditional small-scale environments can still yield valuable scientific insights and can help reduce the barriers to entry for underprivileged communities. To substantiate our claims, we empirically revisit the paper which introduced the Rainbow algorithm [Hessel et al., 2018] and present some new insights into the algorithms used by Rainbow.
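
To make concrete the kind of small-scale environment the authors advocate, the sketch below runs a random agent on CartPole, a classic-control task that trains in minutes on a laptop rather than the GPU-days Atari benchmarks require. This is an illustration only, not the paper's experimental setup; it assumes the gymnasium package, and the random action is a stand-in for a learned policy.

# Minimal sketch: a random agent on a small-scale classic-control task.
# Illustrative only -- not taken from the paper. Assumes `gymnasium`
# is installed (pip install gymnasium).
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
episode_return = 0.0

for _ in range(500):
    action = env.action_space.sample()  # stand-in for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    if terminated or truncated:
        break

env.close()
print(f"Episode return: {episode_return}")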

Updated: 2020-12-01