Hierarchical Reinforcement Learning
ACM Computing Surveys (IF 16.6). Pub Date: 2021-06-05. DOI: 10.1145/3453160
Shubham Pateria, Budhitama Subagdja, Ah-hwee Tan, Chai Quek

Hierarchical Reinforcement Learning (HRL) enables the autonomous decomposition of challenging long-horizon decision-making tasks into simpler subtasks. In recent years, the landscape of HRL research has grown considerably, producing a wide variety of approaches. A comprehensive overview of this vast landscape is necessary to study HRL in an organized manner. We provide a survey of the diverse HRL approaches concerning the challenges of learning hierarchical policies, subtask discovery, transfer learning, and multi-agent learning using HRL. The survey is organized according to a novel taxonomy of the approaches. Based on the survey, a set of important open problems is proposed to motivate future research in HRL. Furthermore, we outline a few suitable task domains for evaluating HRL approaches, along with several interesting examples of practical applications of HRL, in the Supplementary Material.
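The decomposition described above can be illustrated with a minimal two-level sketch (purely illustrative and not taken from the survey; the function names, the 1-D corridor task, and the fixed subgoal spacing are all hypothetical assumptions). A high-level policy selects a subgoal a few cells away, and a low-level policy issues primitive one-step moves until that subtask is completed:

```python
# Illustrative two-level hierarchical policy on a 1-D corridor.
# All names and the task itself are hypothetical, chosen only to show
# how a long-horizon task decomposes into short subtasks.

def high_level_policy(state, goal, subgoal_step=3):
    """Pick the next subgoal: at most `subgoal_step` cells toward the goal."""
    if goal > state:
        return min(state + subgoal_step, goal)
    return max(state - subgoal_step, goal)

def low_level_policy(state, subgoal):
    """Primitive action: a single +1/-1 step toward the current subgoal."""
    return 1 if subgoal > state else -1

def run_episode(start, goal):
    """Alternate: high level proposes a subgoal, low level achieves it."""
    state, trajectory = start, [start]
    while state != goal:
        subgoal = high_level_policy(state, goal)
        while state != subgoal:          # subtask: reach the subgoal
            state += low_level_policy(state, subgoal)
            trajectory.append(state)
    return trajectory

# A 10-step task is solved as a sequence of subtasks of at most 3 steps.
print(run_episode(0, 10))  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

In learned HRL systems both levels are trained policies rather than hand-written rules, but the control flow, with a slow high-level decision loop wrapped around a fast low-level action loop, follows this same pattern.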

Updated: 2021-06-05