Global localization of 3D point clouds in building outline maps of urban outdoor environments.
International Journal of Intelligent Robotics and Applications (IF 2.1). Pub Date: 2017-11-22. DOI: 10.1007/s41315-017-0038-2
Christian Landsiedel, Dirk Wollherr

This paper presents a method to localize a robot in a global coordinate frame using a sparse 2D map that contains building outlines and road network information, without any prior information on the robot's location. Its input is a single 3D laser scan of the robot's surroundings. The approach extends the generic chamfer matching template-matching technique from image processing by incorporating a visibility analysis into the cost function. The observed building planes are thus matched against the expected view of the corresponding map section rather than against the entire map, which enables more accurate matching. Since the formulation operates on generic edge maps from visual sensors, it can be expected to generalize to other input data, e.g., from monocular or stereo cameras. The method is evaluated on two large datasets collected in different real-world urban settings and compared to a baseline method from the literature and to the standard chamfer matching approach; it shows considerable performance benefits and demonstrates the feasibility of global localization based on sparse building outline data.
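To make the underlying matching step concrete, the sketch below shows a plain chamfer-matching cost of the kind the paper extends. It assumes (purely for illustration, not taken from the paper) that the building-outline map has been rasterized into a binary edge image and that the 3D scan has already been reduced to 2D points on detected building planes; the function name, parameters, and resolution value are hypothetical. The paper's contribution, the visibility analysis, is only indicated in a comment.

```python
# Minimal chamfer-matching cost sketch (assumed setup, not the authors' code).
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_cost(map_edges, scan_points_xy, pose, resolution=0.5):
    """Average distance from transformed scan points to the nearest map edge.

    map_edges      : 2D bool array, True where a building-outline pixel lies.
    scan_points_xy : (N, 2) scan points on building planes, robot frame [m].
    pose           : (x, y, theta) candidate pose in map coordinates.
    resolution     : map resolution in metres per pixel (assumed value).
    """
    # Distance (in pixels) from every cell to the closest edge pixel.
    dist = distance_transform_edt(~map_edges)

    # Rigidly transform the scan points into the map frame for this pose.
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    pts = scan_points_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])

    # Convert to pixel indices and discard points that fall outside the map.
    px = np.round(pts / resolution).astype(int)
    inside = ((px >= 0) & (px < np.array(map_edges.shape)[::-1])).all(axis=1)
    if not inside.any():
        return np.inf
    cols, rows = px[inside, 0], px[inside, 1]

    # Standard chamfer cost: mean edge distance over all matched points.
    # The paper's extension additionally accounts for which map segments are
    # actually visible from the candidate pose, so points are not matched
    # against occluded or far-side building outlines.
    return dist[rows, cols].mean() * resolution
```

In a global-localization setting with no pose prior, such a cost would typically be evaluated over a coarse grid of candidate poses (x, y, theta) covering the whole map, and the minima refined locally.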
