On-the-Fly Camera and Lidar Calibration
Remote Sensing (IF 5) Pub Date: 2020-04-02, DOI: 10.3390/rs12071137
Balázs Nagy , Csaba Benedek

Sensor fusion is one of the main challenges in self-driving and robotics applications. In this paper we propose an automatic, online and target-less camera-Lidar extrinsic calibration approach. We adopt a structure from motion (SfM) method to generate 3D point clouds from the camera data which can be matched to the Lidar point clouds; thus, we address the extrinsic calibration problem as a registration task in the 3D domain. The core step of the approach is a two-stage transformation estimation: first, we introduce an object-level coarse alignment algorithm operating in the Hough space to transform the SfM-based and the Lidar point clouds into a common coordinate system. Thereafter, we apply a control-point-based nonrigid transformation refinement step to register the point clouds more precisely. Finally, we calculate the correspondences between the 3D Lidar points and the pixels in the 2D camera domain. We evaluated the method in various real-life traffic scenarios in Budapest, Hungary. The results show that the proposed extrinsic calibration approach provides accurate and robust parameter settings on-the-fly.
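
The final step of the pipeline described above maps 3D Lidar points onto 2D camera pixels using the estimated extrinsic parameters. The sketch below illustrates that projection step only, assuming a standard pinhole camera model; the intrinsic matrix K and the extrinsic rotation R and translation t are illustrative placeholders, not values or code from the paper.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project 3D Lidar points into the 2D camera image plane.

    points_lidar : (N, 3) array of points in the Lidar frame.
    R, t         : extrinsic rotation (3x3) and translation (3,) mapping
                   Lidar coordinates into the camera frame (placeholders here).
    K            : (3, 3) camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    # Transform points from the Lidar frame into the camera frame.
    points_cam = points_lidar @ R.T + t

    # Keep only points with positive depth (in front of the camera).
    in_front = points_cam[:, 2] > 0

    # Pinhole projection: multiply by K, then divide by depth.
    uvw = points_cam @ K.T
    pixels = uvw[:, :2] / uvw[:, 2:3]
    return pixels, in_front

if __name__ == "__main__":
    # Toy calibration values for demonstration only.
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                     # identity rotation for the toy example
    t = np.array([0.1, -0.05, 0.2])   # small Lidar-to-camera offset in meters
    pts = np.array([[2.0, 0.5, 10.0],
                    [-1.0, 0.2, 5.0]])
    px, mask = project_lidar_to_image(pts, R, t, K)
    print(px[mask])
```

In the paper's setting, R and t would come from the two-stage alignment (Hough-space coarse registration followed by control-point-based refinement), after which this kind of projection yields the Lidar-point-to-pixel correspondences.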
