A robust iterative pose tracking method assisted by modified visual odometer
Sensor Review (IF 1.6) Pub Date: 2020-11-30, DOI: 10.1108/sr-01-2020-0005
Rupeng Yuan, Fuhai Zhang, Yili Fu, Shuguo Wang

Purpose

The purpose of this paper is to propose a robust iterative LIDAR-based pose tracking method, assisted by a modified visual odometer, that resists initial-value disturbance and locates a robot in environments with a certain degree of occlusion.

Design/methodology/approach

First, an iterative LIDAR-based pose tracking method is proposed. The LIDAR information is filtered and the occupancy grid map is pre-processed. Sample generation and scoring are iterated until the result converges to a stable value. To improve the efficiency of sample processing, the integer-valued map indices of the rotational samples are preserved and then translated. All generated samples are analyzed to determine the maximum error direction. Then, a modified visual odometer is introduced for error compensation. Oriented FAST and rotated BRIEF (ORB) features are uniformly sampled across the image, and a local map containing key frames is maintained for reference. These two measures ensure that the modified visual odometer returns a robust result that compensates the error of the LIDAR-based pose tracking method along the maximum error direction.
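To illustrate the iterative sampling step, the following is a minimal Python sketch, not the authors' implementation. It assumes a pre-processed occupancy grid `grid` indexed as `grid[ix, iy]` holding scan-matching likelihood values, a 2-D scan `scan_xy` in the sensor frame, an initial pose guess `pose0 = (x, y, theta)` and a map `resolution` in metres; the search windows, sample counts and shrink factor are hypothetical parameters.

```python
import numpy as np

def score_indices(grid, idx):
    # Sum scan-matching likelihoods at integer map indices, ignoring out-of-map hits.
    ix, iy = idx[:, 0], idx[:, 1]
    ok = (ix >= 0) & (ix < grid.shape[0]) & (iy >= 0) & (iy < grid.shape[1])
    return float(grid[ix[ok], iy[ok]].sum())

def iterative_pose_tracking(grid, scan_xy, pose0, resolution,
                            rot_win=0.05, trans_win=0.10,
                            n_rot=11, n_trans=5, n_iter=10, tol=1e-4):
    # Refine (x, y, theta) by repeatedly generating and scoring rotational and
    # translational samples until the estimate converges to a stable value.
    x, y, th = pose0
    for _ in range(n_iter):
        best_score, best_pose = -np.inf, (x, y, th)
        for dth in np.linspace(-rot_win, rot_win, n_rot):
            c, s = np.cos(th + dth), np.sin(th + dth)
            pts = scan_xy @ np.array([[c, s], [-s, c]])   # rotate the scan once per angle
            base = np.floor((pts + np.array([x, y])) / resolution).astype(int)
            # Preserve the integer-valued map indices of the rotational sample;
            # each translational sample is then just an index shift.
            for dx in np.linspace(-trans_win, trans_win, n_trans):
                for dy in np.linspace(-trans_win, trans_win, n_trans):
                    shift = np.array([int(round(dx / resolution)),
                                      int(round(dy / resolution))])
                    sc = score_indices(grid, base + shift)
                    if sc > best_score:
                        best_score = sc
                        best_pose = (x + dx, y + dy, th + dth)
        # Shrink the search windows each iteration and stop once the pose is stable.
        moved = np.hypot(best_pose[0] - x, best_pose[1] - y) + abs(best_pose[2] - th)
        x, y, th = best_pose
        rot_win, trans_win = 0.5 * rot_win, 0.5 * trans_win
        if moved < tol:
            break
    return x, y, th
```

In such a sketch the occupancy grid would typically be pre-processed into a smoothed likelihood field so that near-misses still score, which is consistent with the filtering and map pre-processing described above.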

Findings

Three experiments are conducted to demonstrate the advantages of the proposed method. The method resists initial-value disturbance with high computational efficiency, returns credible real-time results in feature-rich environments and locates a robot in environments with a certain degree of occlusion.

Originality/value

The proposed method returns robust, real-time pose tracking results. The iterative sample generation enables the robot to resist initial-value disturbance. In each iteration, rotational and translational samples are generated separately to enhance computational efficiency. The maximum error direction of the LIDAR-based pose tracking method is determined by principal component analysis and compensated with the result of the modified visual odometer, returning a correct pose in environments with a certain degree of occlusion.
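As a rough illustration of the compensation idea (a sketch under assumed interfaces, not the paper's exact formulation): principal component analysis over the 2-D positions of the generated samples gives the direction of greatest spread, taken here as the maximum error direction, and the LIDAR estimate's component along that direction is replaced by the corresponding component of the visual-odometer estimate.

```python
import numpy as np

def compensate_max_error_direction(sample_xy, lidar_xy, vo_xy):
    # PCA over the generated sample positions: the eigenvector of the covariance
    # matrix with the largest eigenvalue is the maximum error direction.
    centered = sample_xy - sample_xy.mean(axis=0)
    cov = centered.T @ centered / len(sample_xy)
    eigvals, eigvecs = np.linalg.eigh(cov)
    u = eigvecs[:, np.argmax(eigvals)]          # unit vector of maximum error
    # Replace the LIDAR estimate's component along u with the visual odometer's.
    correction = np.dot(vo_xy - lidar_xy, u) * u
    return lidar_xy + correction
```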



