From Pixels to Legs: Hierarchical Learning of Quadruped Locomotion
arXiv - CS - Artificial Intelligence. Pub Date: 2020-11-23. DOI: arxiv-2011.11722. Authors: Deepali Jain, Atil Iscen, Ken Caluwaerts
Legged robots navigating crowded scenes and complex terrains in the real
world are required to execute dynamic leg movements while processing visual
input for obstacle avoidance and path planning. We show that a quadruped robot
can acquire both of these skills by means of hierarchical reinforcement
learning (HRL). By virtue of their hierarchical structure, our policies learn
to implicitly break down this joint problem by concurrently learning High Level
(HL) and Low Level (LL) neural network policies. These two levels are connected
by a low-dimensional hidden layer, which we call the latent command. HL receives a
first-person camera view, whereas LL receives the latent command from HL together with
the robot's on-board sensor readings to control its actuators. We train policies to
walk in two different environments: a curved cliff and a maze. We show that
hierarchical policies can concurrently learn to locomote and navigate in these
environments, and show they are more efficient than non-hierarchical neural
network policies. This architecture also allows for knowledge reuse across
tasks. LL networks trained on one task can be transferred to a new task in a
new environment. Finally, HL, which processes camera images, can be evaluated at
much lower and varying frequencies than LL, thus reducing computation
time and bandwidth requirements.
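The two-level control loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dimensions, the use of random linear maps in place of trained neural networks, and the HL update period of 5 steps are all hypothetical stand-ins chosen to show the structure (HL maps an image to a latent command at a low rate; LL combines the latent command with proprioceptive readings at every control step).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper does not specify these.
IMG_DIM = 64 * 64     # flattened first-person camera view
LATENT_DIM = 8        # low-dimensional latent command
SENSOR_DIM = 12       # on-board proprioceptive sensor readings
ACTION_DIM = 12       # actuator targets
HL_PERIOD = 5         # HL is evaluated once per 5 LL steps

# Random linear maps stand in for the trained HL/LL neural networks.
W_hl = rng.normal(size=(LATENT_DIM, IMG_DIM)) * 0.01
W_ll = rng.normal(size=(ACTION_DIM, LATENT_DIM + SENSOR_DIM)) * 0.1

def high_level(image):
    """Map a camera image to a latent command."""
    return np.tanh(W_hl @ image)

def low_level(latent, sensors):
    """Map latent command + proprioception to actuator commands."""
    return np.tanh(W_ll @ np.concatenate([latent, sensors]))

def rollout(steps=20):
    """Run the hierarchical loop: HL at a low rate, LL every step."""
    latent = np.zeros(LATENT_DIM)
    actions = []
    for t in range(steps):
        if t % HL_PERIOD == 0:
            # Only here does the (expensive) vision policy run.
            image = rng.normal(size=IMG_DIM)
            latent = high_level(image)
        sensors = rng.normal(size=SENSOR_DIM)  # fresh proprioception each step
        actions.append(low_level(latent, sensors))
    return np.stack(actions)

acts = rollout()
print(acts.shape)  # (20, 12)
```

In this sketch the image-processing `high_level` call runs only once every `HL_PERIOD` control steps, which is the mechanism behind the reduced computation and bandwidth requirements the abstract mentions: the latent command is held constant between HL updates while LL continues to react to sensor readings at full rate.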
Updated: 2020-11-25