Stereo camera visual SLAM with hierarchical masking and motion-state classification at outdoor construction sites containing large dynamic objects
Advanced Robotics ( IF 1.4 ) Pub Date : 2021-01-11 , DOI: 10.1080/01691864.2020.1869586
Runqiu Bao¹, Ren Komatsu¹, Renato Miyagusuku², Masaki Chino³, Atsushi Yamashita¹, Hajime Asama¹

At modern construction sites, it is common to use GNSS (Global Navigation Satellite System) to measure the real-time location and orientation (i.e. pose) of construction machines and to navigate them. However, GNSS is not always available. Replacing GNSS with on-board cameras and visual simultaneous localization and mapping (visual SLAM) to navigate the machines is a cost-effective solution. Nevertheless, at construction sites multiple construction machines usually work together, side by side, causing large dynamic occlusions in the cameras' view, which standard visual SLAM cannot handle well. In this work, we propose a motion segmentation method that efficiently extracts the static parts of crowded dynamic scenes to enable robust tracking of camera ego-motion. Our method combines semantic information with object-level geometric constraints to quickly detect the static parts of the scene, and then performs a two-step, coarse-to-fine ego-motion tracking with reference to those static parts. This leads to a novel dynamic visual SLAM formulation. We test our proposals through a real implementation based on ORB-SLAM2 and datasets we collected at real construction sites. The results show that when standard visual SLAM fails, our method still retains accurate camera ego-motion tracking in real time. Compared to state-of-the-art dynamic visual SLAM methods, ours shows outstanding efficiency and competitive trajectory accuracy.

Code available at: https://github.com/RunqiuBao/kenki-positioning-vSLAM
Corresponding author: bao@robot.t.u-tokyo.ac.jp
arXiv:2101.06563v1 [cs.RO] 17 Jan 2021
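The abstract's key idea — classifying each detected object as static or dynamic by combining a semantic prior (is this a movable class?) with an object-level geometric check, then masking dynamic regions out of ego-motion tracking — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the class names, the `mean_reproj_err` field (mean reprojection error of an object's features under the coarse camera pose), and the 2-pixel threshold are all assumptions chosen for the example.

```python
# Hypothetical sketch of object-level motion-state classification:
# semantic labels flag potentially movable classes, and a geometric
# consistency check decides whether a movable object is actually moving.

MOVABLE_CLASSES = {"excavator", "dump_truck", "person"}  # assumed label set
REPROJ_THRESH_PX = 2.0  # assumed reprojection-error threshold (pixels)

def classify_motion_state(objects):
    """objects: list of dicts with 'label' and 'mean_reproj_err' (pixels).
    Returns {object index: 'static' or 'dynamic'}."""
    states = {}
    for i, obj in enumerate(objects):
        if obj["label"] not in MOVABLE_CLASSES:
            states[i] = "static"      # background classes are always kept
        elif obj["mean_reproj_err"] <= REPROJ_THRESH_PX:
            states[i] = "static"      # movable class, but currently parked
        else:
            states[i] = "dynamic"     # inconsistent with camera motion: mask out
    return states

def static_features(features, objects, states):
    """Keep only feature points outside the bounding boxes of dynamic objects.
    features: list of (x, y); each object carries 'bbox' = (x0, y0, x1, y1)."""
    def inside(pt, box):
        x, y = pt
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    dynamic_boxes = [o["bbox"] for i, o in enumerate(objects)
                     if states[i] == "dynamic"]
    return [f for f in features
            if not any(inside(f, b) for b in dynamic_boxes)]

objs = [
    {"label": "excavator", "bbox": (0, 0, 100, 100), "mean_reproj_err": 5.1},
    {"label": "dump_truck", "bbox": (200, 0, 300, 100), "mean_reproj_err": 0.8},
]
states = classify_motion_state(objs)  # excavator dynamic, truck static
kept = static_features([(50, 50), (250, 50), (400, 50)], objs, states)
```

In this toy scene, the excavator's high reprojection error marks it dynamic, so the feature at (50, 50) inside its box is discarded, while the parked dump truck's region and the background features survive — the surviving "static parts" are what a coarse-to-fine ego-motion tracker would then reference.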
