Neural Network-Based Pose Estimation for Noncooperative Spacecraft Rendezvous
IEEE Transactions on Aerospace and Electronic Systems (IF 5.1). Pub Date: 2020-12-01. DOI: 10.1109/taes.2020.2999148
Sumant Sharma, Simone D'Amico

This article presents the Spacecraft Pose Network (SPN), the first neural network-based method for on-board estimation of the pose, i.e., the relative position and attitude, of a known noncooperative spacecraft using monocular vision. In contrast to other state-of-the-art pose estimation approaches for spaceborne applications, the SPN method does not require the formulation of hand-engineered features and only requires a single grayscale image to determine the pose of the spacecraft relative to the camera. The SPN method uses a convolutional neural network (CNN) with three branches to solve the problem of relative attitude estimation. The first branch of the CNN bootstraps a state-of-the-art object detection algorithm to detect a 2-D bounding box around the target spacecraft in the input image. The region inside the 2-D bounding box is then used by the other two branches of the CNN to determine the relative attitude by initially classifying the input region into discrete coarse attitude labels before regressing to a finer estimate. The SPN method then estimates the relative position by using the constraints imposed by the detected 2-D bounding box and the estimated relative attitude. Further, with the detection of 2-D bounding boxes of subcomponents of the target spacecraft, the SPN method is easily generalizable to estimate the pose of multiple target geometries. Finally, to facilitate integration with navigation filters and perform continuous pose tracking, the SPN method estimates the uncertainty associated with the estimated pose. The secondary contribution of this article is the generation of the Spacecraft PosE Estimation Dataset (SPEED), which is used to train and evaluate the performance of the SPN method. SPEED consists of synthetic as well as actual camera images of a mock-up of the Tango spacecraft from the PRISMA mission. The synthetic images are created by fusing OpenGL-based renderings of the spacecraft's 3-D model with actual images of the Earth captured by the Himawari-8 meteorological satellite. The actual camera images are created using a seven degrees-of-freedom robotic arm, which positions and orients a vision-based sensor with respect to a full-scale mock-up of the Tango spacecraft with submillimeter and submillidegree accuracy. The SPN method, trained only on synthetic images, produces degree-level relative attitude error and cm-level relative position errors when evaluated on the actual camera images with a different distribution not used during training.
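As a rough illustration of the architecture the abstract describes, the sketch below wires a single shared convolutional backbone to three heads: a 2-D bounding-box regressor, a classifier over discrete coarse attitude labels, and a finer attitude regressor. The backbone choice (ResNet-18), the number of coarse labels, the quaternion parameterization of the fine estimate, and the names `SpacecraftPoseNetSketch` and `N_COARSE` are illustrative assumptions, not details from the paper; in particular, the SPN feeds the region inside the detected bounding box to the two attitude branches, which this simplified single-pass sketch omits. The relative position would then be recovered separately from the constraints imposed by the detected box and the estimated attitude, as the abstract states.

```python
# Minimal PyTorch-style sketch of a three-branch pose network, loosely following
# the abstract. Backbone, head sizes, and N_COARSE are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

N_COARSE = 512  # hypothetical number of discrete coarse attitude labels


class SpacecraftPoseNetSketch(nn.Module):
    def __init__(self, n_coarse: int = N_COARSE):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        # Accept single-channel (grayscale) input instead of RGB.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        feat_dim = backbone.fc.in_features
        # Shared convolutional feature extractor (everything up to the FC layer).
        self.features = nn.Sequential(*list(backbone.children())[:-1])

        # Branch 1: 2-D bounding box around the target (x_min, y_min, x_max, y_max).
        self.bbox_head = nn.Linear(feat_dim, 4)
        # Branch 2: classification into discrete coarse attitude labels.
        self.coarse_head = nn.Linear(feat_dim, n_coarse)
        # Branch 3: regression to a finer attitude estimate (unit quaternion here).
        self.fine_head = nn.Linear(feat_dim, 4)

    def forward(self, image: torch.Tensor):
        # image: (batch, 1, H, W) grayscale input.
        f = self.features(image).flatten(1)
        bbox = self.bbox_head(f)
        coarse_logits = self.coarse_head(f)
        quat = F.normalize(self.fine_head(f), dim=-1)
        return bbox, coarse_logits, quat


if __name__ == "__main__":
    net = SpacecraftPoseNetSketch()
    dummy = torch.randn(1, 1, 224, 224)
    bbox, coarse_logits, quat = net(dummy)
    print(bbox.shape, coarse_logits.shape, quat.shape)  # (1, 4) (1, 512) (1, 4)
```

In this hedged reading of the abstract, the coarse classification narrows the attitude search to a discrete label before the regression head refines it, which is what allows the network to avoid hand-engineered features while still producing a continuous pose estimate.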

Updated: 2020-12-01