Towards Multi-Modal Perception-Based Navigation: A Deep Reinforcement Learning Method
IEEE Robotics and Automation Letters (IF 4.6), Pub Date: 2021-03-08, DOI: 10.1109/lra.2021.3064461
Xueqin Huang, Han Deng, Wei Zhang, Ran Song, Yibin Li

In this letter, we present a novel navigation system for an unmanned ground vehicle (UGV) that performs local path planning based on deep reinforcement learning. The navigation system decouples perception from control and takes advantage of multi-modal perception for reliable online interaction with the UGV's surrounding environment, which enables direct policy learning that generates flexible actions to avoid collisions with obstacles during navigation. By replacing the raw RGB images with their semantic segmentation maps as the input and applying a multi-modal fusion scheme, our system, trained only in simulation, can handle real-world scenes containing dynamic obstacles such as vehicles and pedestrians. We also introduce modal separation learning to accelerate the training and further boost the performance. Extensive experiments demonstrate that our method closes the gap between simulated and real environments, exhibiting superiority over state-of-the-art approaches. Please refer to https://vsislab.github.io/mmpbnv1/ for the supplementary video demonstration of UGV navigation in both simulated and real-world environments.
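The abstract describes an architecture in which perception is decoupled from control: pre-computed semantic segmentation maps (in place of raw RGB) and at least one other sensing modality are fused and fed to a learned policy that outputs navigation actions. The following is a minimal PyTorch sketch of that kind of fusion policy, not the authors' implementation. Everything beyond what the abstract states is an assumption: the second modality is taken to be a depth image, inputs are 84x84, fusion is feature concatenation, the action is a continuous (linear, angular) velocity pair, and all layer sizes are illustrative.

    # Hedged sketch of a multi-modal fusion navigation policy (not the paper's code).
    import torch
    import torch.nn as nn

    class ModalityEncoder(nn.Module):
        """Per-modality CNN encoder. Perception is decoupled from control, so the
        input is a pre-computed semantic segmentation map or a depth image,
        not raw RGB. Layer sizes are illustrative assumptions."""
        def __init__(self, in_channels: int, feat_dim: int = 256):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
            with torch.no_grad():  # infer the flattened size from a dummy 84x84 input
                n = self.conv(torch.zeros(1, in_channels, 84, 84)).shape[1]
            self.fc = nn.Linear(n, feat_dim)

        def forward(self, x):
            return torch.relu(self.fc(self.conv(x)))

    class FusionPolicy(nn.Module):
        """Concatenation-based multi-modal fusion feeding an action head.
        Assumed action space: (linear, angular) velocity, each in [-1, 1]."""
        def __init__(self, feat_dim: int = 256):
            super().__init__()
            self.seg_enc = ModalityEncoder(in_channels=1, feat_dim=feat_dim)
            self.depth_enc = ModalityEncoder(in_channels=1, feat_dim=feat_dim)
            self.head = nn.Sequential(
                nn.Linear(2 * feat_dim, 256), nn.ReLU(),
                nn.Linear(256, 2), nn.Tanh(),
            )

        def forward(self, seg, depth):
            fused = torch.cat([self.seg_enc(seg), self.depth_enc(depth)], dim=1)
            return self.head(fused)

    policy = FusionPolicy()
    action = policy(torch.rand(1, 1, 84, 84), torch.rand(1, 1, 84, 84))
    print(action)  # e.g. tensor([[ 0.03, -0.11]], grad_fn=<TanhBackward0>)

The separate per-modality encoders are also where a scheme like the paper's modal separation learning would plausibly attach (e.g., training each branch with its own auxiliary objective before fusion), though the abstract does not specify that mechanism and it is not implemented in this sketch.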

Updated: 2021-03-08