How to Train Your HERON
IEEE Robotics and Automation Letters ( IF 4.6 ) Pub Date : 2021-03-11 , DOI: 10.1109/lra.2021.3065278
Antoine Richard , Stephanie Aravecchia , Thomas Schillaci , Matthieu Geist , Cedric Pradalier

In this letter, we apply Deep Reinforcement Learning (Deep RL) and Domain Randomization to solve a navigation task in a natural environment relying solely on a 2D laser scanner. We train a model-based RL agent in simulation to follow lake and river shores and apply it to a real Unmanned Surface Vehicle in a zero-shot setup. We demonstrate that even though the agent has not been trained in the real world, it can fulfill its task successfully and adapt to changes in the robot's environment and dynamics. Finally, we show that the RL agent is more robust, faster, and more accurate than a state-aware Model Predictive Controller. Code, simulation environments, pre-trained models, and datasets are available at https://github.com/AntoineRichard/Heron-RL-ICRA.git.
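The zero-shot transfer described above relies on Domain Randomization: at each training episode, the simulator's dynamics and sensor parameters are resampled so the policy learns to cope with a distribution of environments rather than one fixed simulation. The sketch below illustrates the general idea only; the function names, parameter names, and value ranges are illustrative assumptions, not taken from the paper or its codebase.

```python
import random

def sample_randomized_params(rng: random.Random) -> dict:
    """Sample one set of simulation parameters for a training episode.

    Per-episode resampling of vehicle dynamics and 2D lidar noise is the
    core of domain randomization; all ranges here are hypothetical.
    """
    return {
        # Hull / actuation parameters (illustrative ranges)
        "mass_kg": rng.uniform(20.0, 40.0),
        "linear_drag": rng.uniform(0.5, 2.0),
        "thruster_gain": rng.uniform(0.8, 1.2),
        # 2D laser scanner noise model (illustrative ranges)
        "lidar_noise_std_m": rng.uniform(0.0, 0.05),
        "lidar_dropout_prob": rng.uniform(0.0, 0.1),
    }

def reset_episode(env, rng: random.Random):
    """Apply freshly sampled parameters before each episode.

    `env.configure` stands in for whatever reconfiguration hook the
    simulator exposes; it is a hypothetical API, not the paper's.
    """
    env.configure(sample_randomized_params(rng))
    return env.reset()
```

Because every episode sees different dynamics, the learned policy cannot overfit to one simulator instance, which is what makes deployment on the real vehicle possible without fine-tuning.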

Updated: 2021-03-11