AIBPO: Combine the Intrinsic Reward and Auxiliary Task for 3D Strategy Game
Complexity (IF 1.7) Pub Date: 2021-07-14, DOI: 10.1155/2021/6698231
Huale Li, Rui Cao, Xuan Wang, Xiaohan Hou, Tao Qian, Fengwei Jia, Jiajia Zhang, Shuhan Qi
In recent years, deep reinforcement learning (DRL) has achieved great success in many fields, especially in games, such as AlphaGo, AlphaZero, and AlphaStar. However, due to the reward sparsity problem, traditional DRL-based methods show limited performance in 3D games, which have a much higher-dimensional state space. To address this problem, we propose an intrinsic-based policy optimization (IBPO) algorithm for reward sparsity. In IBPO, a novel intrinsic reward is integrated into the value network, providing an additional reward signal in sparse-reward environments and thereby accelerating training. Moreover, to address the problem of value estimation bias, we further design three types of auxiliary tasks that evaluate the state value and the action more accurately in 3D scenes. Finally, we propose an auxiliary intrinsic-based policy optimization (AIBPO) framework, which improves the performance of IBPO. The experimental results show that the method deals with the reward sparsity problem effectively. Therefore, the proposed method may be applied to real-world scenarios, such as 3D navigation and autonomous driving, where it can improve sample utilization and thus reduce the cost of collecting interaction samples with real equipment.
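To illustrate the general idea of augmenting a sparse extrinsic reward with an intrinsic bonus during policy optimization, the following Python sketch shows one common pattern. It is not the paper's IBPO algorithm: the count-based novelty bonus, the `IntrinsicRewardShaper` class, and all parameter values below are illustrative assumptions only.

```python
# Hypothetical sketch: combining a sparse extrinsic (environment) reward with
# an intrinsic bonus before it is fed to the value network / policy update.
# NOT the paper's IBPO/AIBPO implementation; all names and the count-based
# novelty bonus are assumptions for illustration.

import numpy as np
from collections import defaultdict


class IntrinsicRewardShaper:
    """Adds a simple count-based novelty bonus to the sparse extrinsic reward."""

    def __init__(self, beta=0.1, n_bins=10):
        self.beta = beta          # weight of the intrinsic term
        self.n_bins = n_bins      # discretization granularity for state visits
        self.visit_counts = defaultdict(int)

    def _discretize(self, state):
        # Coarse binning so continuous observations can be counted.
        return tuple(np.floor(np.asarray(state) * self.n_bins).astype(int))

    def shaped_reward(self, state, extrinsic_reward):
        key = self._discretize(state)
        self.visit_counts[key] += 1
        # Rarely visited states receive a larger bonus (1 / sqrt(count)),
        # which provides a learning signal even when the environment reward is zero.
        intrinsic = 1.0 / np.sqrt(self.visit_counts[key])
        return extrinsic_reward + self.beta * intrinsic


# Usage inside a training loop: the shaped reward replaces the raw sparse reward
# seen by the value estimate and the policy-gradient update.
shaper = IntrinsicRewardShaper(beta=0.1)
state = np.array([0.42, 0.13, 0.88])   # toy low-dimensional observation
r_env = 0.0                            # sparse: usually zero
r_total = shaper.shaped_reward(state, r_env)
print(f"shaped reward: {r_total:.4f}")
```

In practice the intrinsic term is typically annealed or kept small relative to the extrinsic reward so that, once meaningful environment rewards appear, they dominate the learning signal.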
