Pre-trained Language Models as Prior Knowledge for Playing Text-based Games
arXiv - CS - Multiagent Systems. Pub Date: 2021-07-18, DOI: arXiv-2107.08408
Ishika Singh, Gargi Singh, Ashutosh Modi

Recently, text-based games have been proposed as a way for artificial agents to learn to understand and reason about real-world scenarios. These games are challenging for artificial agents because they require understanding of, and interaction through, natural language in a partially observable environment. In this paper, we improve the agent's semantic understanding by proposing a simple RL-with-LM framework that couples transformer-based language models with deep RL models. A detailed study of the framework shows that our model outperforms all existing agents on the popular game Zork1, achieving a score of 44.7, which is 1.6 points higher than the previous state of the art. The proposed approach also performs comparably to state-of-the-art models on a set of other text-based games.
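The abstract does not spell out the architecture, but the core idea (a pretrained transformer LM supplying prior semantic knowledge to a deep RL agent) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the choice of DistilBERT, the mean-pooled embeddings, the two-layer Q-head, and the candidate-action scoring scheme are all assumptions made for the sake of a runnable example.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class LMQNetwork(nn.Module):
    """Q-network that scores candidate actions for a text-game state,
    using a pretrained transformer encoder as prior knowledge."""

    def __init__(self, lm_name="distilbert-base-uncased", hidden=128):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(lm_name)
        self.encoder = AutoModel.from_pretrained(lm_name)
        dim = self.encoder.config.hidden_size
        # Small trainable Q-head over LM embeddings of (state, action) pairs.
        self.q_head = nn.Sequential(
            nn.Linear(2 * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def embed(self, texts):
        # Mean-pool the LM's final hidden states into one vector per text.
        # (A real agent would mask padding tokens before pooling.)
        batch = self.tokenizer(texts, padding=True, truncation=True,
                               return_tensors="pt")
        hidden_states = self.encoder(**batch).last_hidden_state
        return hidden_states.mean(dim=1)

    def forward(self, observation, actions):
        # Score each candidate action against the current observation.
        obs = self.embed([observation]).expand(len(actions), -1)
        act = self.embed(actions)
        return self.q_head(torch.cat([obs, act], dim=-1)).squeeze(-1)


if __name__ == "__main__":
    net = LMQNetwork()
    candidates = ["take lantern", "go north", "open mailbox"]
    q_values = net("You are standing in front of a white house. "
                   "There is a small mailbox here.", candidates)
    print(candidates[q_values.argmax().item()])
```

In a full agent, these Q-values would drive the action-selection policy, trained with standard deep RL updates against the game's score signal; the point of the sketch is that the pretrained LM contributes the semantic understanding of observations and actions that the Q-head alone would lack.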

Updated: 2021-07-20