Scale‐aware camera localization in 3D LiDAR maps with a monocular visual odometry
Computer Animation and Virtual Worlds (IF 1.1) | Pub Date: 2019-05-01 | DOI: 10.1002/cav.1879
Manhui Sun, Shaowu Yang, Henzhu Liu

Localization information is essential for mobile robot systems in navigation tasks. Many vision-based approaches focus on localizing a robot within prior maps acquired with cameras, which is critical where the Global Positioning System signal is unreliable. In contrast to conventional methods that localize a camera in an image-based map, we propose a novel approach that localizes a monocular camera within a given three-dimensional (3D) light detection and ranging (LiDAR) map. We employ visual odometry to reconstruct a semidense set of 3D points from the monocular camera images. These points are continuously matched against the prior 3D LiDAR map by a modified feature-based point cloud registration method to track the full six-degree-of-freedom camera pose. Since a monocular camera suffers from scale drift due to the lack of depth information, the proposed method solves this problem by adopting an updatable scale estimation. Experiments on a public large-scale data set demonstrate that the cross-modal camera-to-LiDAR data matching problem is solved and that the localization accuracy of our method is comparable to that of state-of-the-art approaches.
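The core difficulty the abstract describes — aligning scale-drifting monocular VO points to a metric LiDAR map — amounts to estimating a similarity transform (scale, rotation, translation) between corresponding 3D point sets. The paper's own registration pipeline is not reproduced here; as a minimal sketch, the closed-form Umeyama alignment below shows how an updatable scale estimate can be recovered once correspondences between VO points and map points are available (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Estimate the similarity transform (scale s, rotation R, translation t)
    that maps src points onto dst points in the least-squares sense, i.e.
    dst_i ~= s * R @ src_i + t.  In the paper's setting, src would be the
    (scale-ambiguous) monocular VO points and dst the metric LiDAR map points.
    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3."""
    mu_src = src.mean(axis=0)
    mu_dst = dst.mean(axis=0)
    src_c = src - mu_src          # centered source points
    dst_c = dst - mu_dst          # centered destination points

    # Cross-covariance between the two centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)

    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)   # source variance
    s = np.trace(np.diag(D) @ S) / var_src    # optimal scale factor
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

In a tracking loop, the recovered scale `s` could be refreshed each time a new batch of VO-to-map correspondences is accepted, which is one plausible reading of "updatable scale estimation" in the abstract.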

Updated: 2019-05-01