Cooperative visual-inertial sensor fusion: fundamental equations and state determination in closed-form
Autonomous Robots (IF 3.5), Pub Date: 2019-03-04, DOI: 10.1007/s10514-019-09841-8
Agostino Martinelli , Alessandro Renzaglia , Alexander Oliva

This paper investigates the visual-inertial sensor fusion problem in the cooperative case and provides new fundamental theoretical results. Specifically, the case of two agents is investigated. Each agent is equipped with inertial sensors (accelerometer and gyroscope) and a monocular camera. Through its monocular camera, each agent can observe the other agent. No additional camera observations (e.g., of external point features in the environment) are considered. First, the entire observable state is analytically derived. This state contains the relative position between the two agents (which includes the absolute scale), the relative velocity, the three Euler angles that express the rotation between the two local frames, and all the accelerometer and gyroscope biases. Then, the fundamental equations that describe this system are analytically obtained. The last part of the paper uses these equations to obtain a closed-form solution that expresses the observable state in terms of the visual and inertial measurements collected over a short time interval. This last contribution extends the results presented in Kaiser et al. (IEEE Robot Autom Lett 2(1):18–25, 2017) and Martinelli (IEEE Trans Robot 28(1):44–60, 2012; Int J Comput Vis 106(2):138–152, 2014) to the cooperative case. The impact of the biases on the performance of this closed-form solution is also investigated, and a simple and effective method to estimate the gyroscope bias is proposed. Extensive simulations clearly show that the proposed method is successful. Notably, it is possible to automatically retrieve the absolute scale and simultaneously calibrate the gyroscopes not only without any prior knowledge (as in Kaiser et al. IEEE Robot Autom Lett 2(1):18–25, 2017), but also without external point features in the environment.
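To make the system model concrete, the following minimal Python sketch propagates the relative state described in the abstract (relative position, relative velocity, and the relative orientation between the two local frames) from biased accelerometer and gyroscope readings of the two agents. This is an illustration only, not the paper's closed-form solution or its derivation: the world-frame formulation, the first-order Euler integration, and all names (p_rel, R1, ba1, ...) are assumptions made for this sketch.

    import numpy as np

    def skew(w):
        # Skew-symmetric matrix such that skew(w) @ x == np.cross(w, x)
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def propagate_relative_state(p_rel, v_rel, R1, R2,
                                 acc1, acc2, gyro1, gyro2,
                                 ba1, ba2, bg1, bg2, dt):
        """One Euler-integration step of the relative kinematics of two agents.

        p_rel, v_rel : position/velocity of agent 2 relative to agent 1,
                       expressed in a common inertial frame
        R1, R2       : rotation matrices (body -> inertial) of the two agents
        acc*, gyro*  : raw accelerometer / gyroscope readings (body frames)
        ba*, bg*     : accelerometer / gyroscope biases
        """
        # Bias-compensated specific force and angular rate
        a1, a2 = acc1 - ba1, acc2 - ba2
        w1, w2 = gyro1 - bg1, gyro2 - bg2

        # Gravity cancels in the dynamics of the *relative* velocity
        v_rel_dot = R2 @ a2 - R1 @ a1

        p_rel = p_rel + v_rel * dt
        v_rel = v_rel + v_rel_dot * dt

        # First-order attitude update (not re-orthonormalized; fine for a
        # short illustrative integration window)
        R1 = R1 @ (np.eye(3) + skew(w1) * dt)
        R2 = R2 @ (np.eye(3) + skew(w2) * dt)

        # Relative orientation of agent 2 expressed in agent 1's local frame
        R_rel = R1.T @ R2
        return p_rel, v_rel, R1, R2, R_rel

In the paper the observable state is expressed in the agents' local frames and the biases themselves are among the unknowns to be determined; the sketch only shows how the relative kinematics evolve once the biases are compensated, which is the part of the model one can state with certainty from the abstract.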

Updated: 2019-03-04