Decentralized Deep Reinforcement Learning for a Distributed and Adaptive Locomotion Controller of a Hexapod Robot
arXiv - CS - Robotics Pub Date : 2020-05-21 , DOI: arxiv-2005.11164
Malte Schilling, Kai Konen, Frank W. Ohl, Timo Korthals

Locomotion is a prime example of adaptive behavior in animals, and biological control principles have inspired control architectures for legged robots. While machine learning has been applied successfully to many tasks in recent years, Deep Reinforcement Learning approaches still struggle when applied to real-world robots in continuous control tasks and, in particular, do not yet provide robust solutions that handle uncertainty well. There is therefore renewed interest in incorporating biological principles into such learning architectures. While inducing a hierarchical organization, as found in motor control, has already shown some success, we here propose a decentralized organization, as found in insect motor control, for coordinating the different legs. A decentralized and distributed architecture is introduced on a simulated hexapod robot, and the details of the controller are learned through Deep Reinforcement Learning. We first show that such a concurrent local structure is able to learn better walking behavior. Second, we show that this simpler organization is learned faster than holistic approaches.
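The core idea of the abstract can be illustrated with a minimal sketch: instead of one holistic policy over the full robot state, each of the six legs gets its own small policy that maps only local observations to that leg's joint commands. All names and dimensions below (`OBS_PER_LEG`, `ACT_PER_LEG`, the linear stand-in policy) are illustrative assumptions, not details taken from the paper; in the actual work each local controller would be a learned Deep RL policy.

```python
import random

LEGS = 6
OBS_PER_LEG = 4    # e.g. 3 joint angles + 1 ground-contact flag (assumed)
ACT_PER_LEG = 3    # one command per leg joint (assumed)

class LocalLegPolicy:
    """Stand-in for one leg's learned policy: a fixed random linear map."""
    def __init__(self, obs_dim=OBS_PER_LEG, act_dim=ACT_PER_LEG, seed=0):
        rng = random.Random(seed)
        self.weights = [[rng.uniform(-1, 1) for _ in range(obs_dim)]
                        for _ in range(act_dim)]

    def act(self, local_obs):
        # Linear policy: each output is a weighted sum of local inputs.
        return [sum(w * o for w, o in zip(row, local_obs))
                for row in self.weights]

class DecentralizedController:
    """Six concurrent local policies instead of one holistic network."""
    def __init__(self):
        self.legs = [LocalLegPolicy(seed=i) for i in range(LEGS)]

    def act(self, full_obs):
        # Each local policy only ever sees its own slice of the observation,
        # which is what makes the organization decentralized.
        actions = []
        for i, leg in enumerate(self.legs):
            local = full_obs[i * OBS_PER_LEG:(i + 1) * OBS_PER_LEG]
            actions.extend(leg.act(local))
        return actions

controller = DecentralizedController()
obs = [0.1] * (LEGS * OBS_PER_LEG)
action = controller.act(obs)
print(len(action))  # 18 joint commands in total, 3 per leg
```

Because each policy's input is a local slice, perturbing one leg's observations changes only that leg's commands; inter-leg coordination then has to emerge through the shared environment and training signal rather than through a monolithic network.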

Updated: 2020-05-25