Quadrotor navigation in dynamic environments with deep reinforcement learning
Robotic Intelligence and Automation (IF 1.9), Pub Date: 2021-04-07, DOI: 10.1108/aa-11-2020-0183
Jinbao Fang, Qiyu Sun, Yukun Chen, Yang Tang

Purpose

This work aims to combine cloud robotics technologies with deep reinforcement learning to build a distributed training architecture and accelerate the learning procedure of autonomous systems. In particular, a distributed training architecture for navigating unmanned aerial vehicles (UAVs) in complex dynamic environments is proposed.

Design/methodology/approach

Inspired by cloud-based techniques, this study proposes a distributed training architecture named experience-sharing learner-worker (ESLW) for deep reinforcement learning to navigate UAVs in dynamic environments. With the ESLW architecture, multiple worker nodes operating in different environments generate training data in parallel, and the learner node then trains a policy on the data collected by the worker nodes. In addition, this study proposes an extended experience replay (EER) strategy so that the method can operate on experience sequences, improving training efficiency. To capture the dynamics of the environment, convolutional long short-term memory (ConvLSTM) modules are adopted to extract spatiotemporal information from training sequences.
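The worker/learner data flow described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: threads and a shared queue stand in for distributed cloud nodes, and the function names, the toy environment, and the end-of-stream convention are all illustrative assumptions.

```python
import queue
import random
import threading

def run_worker(worker_id, out_q, n_steps):
    # Each worker interacts with its own (simulated) environment copy
    # and streams transitions to the shared queue.
    state = 0.0
    for _ in range(n_steps):
        action = random.choice([-1, 1])      # stand-in for the policy
        next_state = state + 0.1 * action    # stand-in for env dynamics
        reward = -abs(next_state)            # stand-in reward
        out_q.put((worker_id, state, action, reward, next_state))
        state = next_state
    out_q.put((worker_id, None, None, None, None))  # end-of-stream marker

def run_learner(n_workers, steps_per_worker):
    # The learner drains the shared queue into a replay store; a real
    # learner would also sample minibatches and update the policy here.
    out_q = queue.Queue()
    workers = [
        threading.Thread(target=run_worker, args=(i, out_q, steps_per_worker))
        for i in range(n_workers)
    ]
    for w in workers:
        w.start()
    replay, finished = [], 0
    while finished < n_workers:
        item = out_q.get()
        if item[1] is None:
            finished += 1
        else:
            replay.append(item)
    for w in workers:
        w.join()
    return replay

if __name__ == "__main__":
    replay = run_learner(n_workers=3, steps_per_worker=5)
    print(len(replay))  # 3 workers x 5 transitions = 15
```

Because workers only touch their own environment copies and communicate through the queue, adding worker nodes scales data generation without contention at the learner.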

Findings

Experimental results demonstrate that the ESLW architecture and the EER strategy accelerate convergence, and that the ConvLSTM modules are effective at extracting sequential information when navigating UAVs in dynamic environments.
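The abstract does not give the details of the EER strategy; the sketch below shows one plausible form of sequence-based replay, storing whole episodes and sampling contiguous subsequences, as a recurrent (e.g. ConvLSTM) policy requires. The class name and the sampling scheme are assumptions, not the paper's method.

```python
import random
from collections import deque

class SequenceReplayBuffer:
    """Stores whole episodes and samples contiguous transition
    subsequences (illustrative sketch, not the paper's EER)."""

    def __init__(self, capacity, seq_len):
        self.episodes = deque(maxlen=capacity)  # oldest episodes evicted first
        self.seq_len = seq_len

    def add_episode(self, transitions):
        # Keep only episodes long enough to yield a full training sequence.
        if len(transitions) >= self.seq_len:
            self.episodes.append(list(transitions))

    def sample(self, batch_size):
        # Each sample is a contiguous slice of one episode, preserving
        # the temporal order a recurrent policy needs.
        batch = []
        for _ in range(batch_size):
            ep = random.choice(self.episodes)
            start = random.randrange(len(ep) - self.seq_len + 1)
            batch.append(ep[start:start + self.seq_len])
        return batch
```

For example, after `buf.add_episode(list(range(10)))`, each element of `buf.sample(3)` is a length-`seq_len` run of consecutive transitions from that episode.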

Originality/value

Inspired by cloud robotics technologies, this study proposes a distributed ESLW architecture for navigating UAVs in dynamic environments. In addition, the EER strategy is proposed to speed up training on experience sequences, and ConvLSTM modules are added to the networks to make full use of sequential experiences.




Updated: 2021-04-08