Pose estimation of a fast tumbling space noncooperative target using the time-of-flight camera
Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering (IF 1.0), Pub Date: 2021-03-26, DOI: 10.1177/09544100211000229
Qi-shuai Wang, Guo-ping Cai

This article proposes a pose estimation method for a fast tumbling space noncooperative target. The core idea is to extract the target's body-fixed coordinate system from the geometric characteristics of the target's point cloud and then use this coordinate system to perform pose initialization and pose tracking of the target. To extract the body-fixed coordinate system, a point cloud of the target, acquired by a time-of-flight camera, is first divided into small planar point clouds; the geometric information of these planar point clouds is then used to extract the target's descriptive structures, such as the target surfaces and the solar panel supports; finally, the body-fixed coordinate system is determined from the geometric characteristics of these structures. The body-fixed coordinate system obtained in this way is used to determine the pose of consecutive point clouds of the target, that is, to perform pose initialization and pose tracking; however, accumulated bias often emerges during pose tracking. To mitigate this accumulated bias, a pose graph optimization method is adopted. At the end of the article, the performance of the proposed method is evaluated by numerical simulations. Simulation results show that when the distance between the target and the chaser is 10 m, the estimation errors of the target's attitude and position are 0.025° and 0.026 m, respectively, which indicates that the proposed method can achieve high-precision pose estimation of a noncooperative target.
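The following is a minimal sketch (pure NumPy) of the pipeline described in the abstract: fit planes to segmented patches of the point cloud, build a body-fixed coordinate system from two of their normals, and compute the incremental pose between consecutive frames. It assumes the planar patches (e.g. a target surface and a solar panel support) have already been segmented; the function names and the synthetic data are illustrative only and do not reproduce the authors' implementation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal / np.linalg.norm(normal), centroid

def body_fixed_frame(plane_a, plane_b):
    """Build a right-handed body-fixed frame from two planar point clouds."""
    n_a, origin = fit_plane(plane_a)       # x-axis from the first plane normal
    n_b, _ = fit_plane(plane_b)            # second normal fixes the x-y plane
    x = n_a
    z = np.cross(x, n_b)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    rotation = np.column_stack([x, y, z])  # camera-to-body rotation matrix
    return rotation, origin

def relative_pose(frame_prev, frame_curr):
    """Incremental pose of the target between two consecutive point clouds."""
    (r_prev, t_prev), (r_curr, t_curr) = frame_prev, frame_curr
    delta_r = r_curr @ r_prev.T            # incremental rotation
    delta_t = t_curr - delta_r @ t_prev    # incremental translation
    return delta_r, delta_t

# Example with two synthetic, roughly orthogonal planar patches plus noise.
rng = np.random.default_rng(0)
plane_a = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 1e-3, 200)]
plane_b = np.c_[rng.uniform(-1, 1, 200), rng.normal(0, 1e-3, 200),
                rng.uniform(-1, 1, 200)]
R0, t0 = body_fixed_frame(plane_a, plane_b)
print(np.round(R0, 3), t0)
```

Chaining `relative_pose` over many frames accumulates drift, which is why the article adds a pose graph optimization step; that back-end is not reproduced in this sketch.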




Updated: 2021-03-27