Robust deep learning LiDAR-based pose estimation for autonomous space landers
Acta Astronautica (IF 3.1), Pub Date: 2022-09-05, DOI: 10.1016/j.actaastro.2022.08.049
Zakaria Chekakta, Abdelhafid Zenati, Nabil Aouf, Olivier Dubois-Matra

Accurate relative pose estimation of a spacecraft during a space landing operation is critical to ensure a safe and successful landing. This paper presents a 3D Light Detection and Ranging (LiDAR) based AI relative navigation architecture for autonomous space landing. The proposed architecture is a hybrid Deep Recurrent Convolutional Neural Network (DRCNN) combining a Convolutional Neural Network (CNN) with a Recurrent Neural Network (RNN) based on a Long Short-Term Memory (LSTM) network. The acquired 3D LiDAR data is converted into multi-projected images, and the DRCNN is fed with depth and other multi-projected imagery. The CNN module of the architecture provides an efficient representation of features, while the RNN module, as an LSTM, provides robust navigation motion estimates. A variety of landing scenarios are simulated and experimentally tested to evaluate the efficiency of the proposed architecture. LiDAR-based imagery data (Range, Slope, and Elevation) is initially created using the PANGU (Planet and Asteroid Natural Scene Generation Utility) software, and the proposed solution is first evaluated on this data. Tests using an instrumented aerial robot in the Gazebo simulator are then conducted to reproduce landing scenarios on a synthetic but representative lunar terrain (a 3D digital elevation model). Finally, real experiments are conducted using a flying drone equipped with a Velodyne VLP16 3D LiDAR sensor to generate real 3D scene point clouds while landing on a purpose-built, down-scaled lunar landing surface. All test results show that the suggested architecture delivers good 6 Degree of Freedom (DoF) pose precision at a reasonable computational cost.
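The conversion of raw LiDAR point clouds into 2D projection images can be illustrated with a spherical (range-image) projection, a standard technique for this kind of pipeline. The sketch below is an assumption about the general approach, not the authors' exact method: it produces range and elevation images, while the paper's slope image would additionally require surface normals (omitted here). Resolution and field-of-view values are illustrative.

```python
import numpy as np

def lidar_to_projection_images(points, h=64, w=256, fov_up=15.0, fov_down=-15.0):
    """Spherically project a LiDAR point cloud (N, 3) onto 2D images.

    Illustrative sketch only: returns a range image and an elevation
    image; the paper also uses a slope image (needs normals, omitted).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)            # range of each return
    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))

    fov_up_r = np.radians(fov_up)
    fov = fov_up_r - np.radians(fov_down)         # total vertical FOV

    # Map angles to integer pixel coordinates.
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    v = ((fov_up_r - pitch) / fov * h).astype(int).clip(0, h - 1)

    range_img = np.zeros((h, w), dtype=np.float32)
    elev_img = np.zeros((h, w), dtype=np.float32)
    # Write far points first so the nearest return wins pixel collisions.
    order = np.argsort(-r)
    range_img[v[order], u[order]] = r[order]
    elev_img[v[order], u[order]] = z[order]
    return range_img, elev_img
```

Stacking such per-frame images channel-wise (range, slope, elevation) yields the multi-projected imagery that feeds the network.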
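The hybrid DRCNN described above can be sketched as a per-frame CNN encoder followed by an LSTM that regresses a 6-DoF pose from the feature sequence. This is a minimal PyTorch sketch under assumed, illustrative layer sizes; the actual channel counts, depths, and pose parameterization in the paper may differ.

```python
import torch
import torch.nn as nn

class DRCNN(nn.Module):
    """Hedged sketch of a Deep Recurrent CNN: a small CNN encodes each
    multi-channel projection image, and an LSTM regresses a 6-DoF pose
    (3 translation + 3 rotation) per frame. Sizes are illustrative."""

    def __init__(self, in_channels=3, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)      # 6-DoF pose output

    def forward(self, seq):                   # seq: (B, T, C, H, W)
        b, t = seq.shape[:2]
        feats = self.cnn(seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)             # temporal motion modeling
        return self.head(out)                 # per-frame pose: (B, T, 6)
```

The CNN plays the feature-representation role and the LSTM the motion-estimation role described in the abstract.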




Updated: 2022-09-10