A seasonally invariant deep transform for visual terrain-relative navigation
Science Robotics ( IF 26.1 ) Pub Date : 2021-06-23 , DOI: 10.1126/scirobotics.abf3320
Anthony T. Fragoso, Connor T. Lee, Austin S. McCoy, Soon-Jo Chung

Visual terrain-relative navigation (VTRN) is a localization method based on registering a source image taken from a robotic vehicle against a georeferenced target image. With high-resolution imagery databases of Earth and other planets now available, VTRN offers accurate, drift-free navigation for air and space robots even in the absence of external positioning signals. Despite its potential for high accuracy, however, VTRN remains extremely fragile to common and predictable seasonal effects, such as lighting, vegetation changes, and snow cover. Engineered registration algorithms are mature and have provable geometric advantages but cannot accommodate the content changes caused by seasonal effects, leading to poor matching performance. Approaches based on deep learning can accommodate image content changes but produce opaque position estimates that either lack an interpretable uncertainty or require tedious human annotation. In this work, we address these issues with targeted use of deep learning within an image transform architecture, which converts seasonal imagery to a stable, invariant domain that can be used by conventional algorithms without modification. Our transform preserves the geometric structure and uncertainty estimates of legacy approaches and demonstrates superior performance under extreme seasonal changes while also being easy to train and highly generalizable. We show that classical registration methods perform exceptionally well for robotic visual navigation when stabilized with the proposed architecture and are able to consistently anticipate reliable imagery. Gross mismatches were nearly eliminated in challenging and realistic visual navigation tasks that also included topographic and perspective effects.
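The pipeline described in the abstract can be sketched in two stages: a transform maps both the source and target images into an appearance-invariant domain, and an unmodified classical registration algorithm then matches them, with the correlation peak serving as an interpretable confidence value. The sketch below is illustrative only: the `seasonal_transform` here is a simple intensity normalization standing in for the paper's learned deep transform, and all names are assumptions, not the authors' implementation.

```python
import numpy as np

def seasonal_transform(img):
    """Stand-in for the learned deep transform (hypothetical).

    Maps an image into an appearance-stable domain; here a simple
    zero-mean, unit-variance normalization is used for illustration.
    """
    return (img - img.mean()) / (img.std() + 1e-8)

def ncc_register(source, target):
    """Classical registration: exhaustive normalized cross-correlation.

    Returns the best (row, col) offset of `source` inside `target` and
    the peak correlation score, which acts as an interpretable
    confidence measure, unlike an opaque end-to-end position estimate.
    """
    sh, sw = source.shape
    th, tw = target.shape
    s = source - source.mean()
    s_energy = np.sqrt((s ** 2).sum())
    scores = np.zeros((th - sh + 1, tw - sw + 1))
    for i in range(th - sh + 1):
        for j in range(tw - sw + 1):
            w = target[i:i + sh, j:j + sw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * s_energy
            scores[i, j] = (s * wz).sum() / denom if denom > 0 else 0.0
    best = np.unravel_index(np.argmax(scores), scores.shape)
    return best, scores[best]

# Synthetic demo: crop a 16x16 source patch from a random 64x64 target,
# then simulate a seasonal appearance change with a brightness/contrast shift.
rng = np.random.default_rng(0)
target = rng.random((64, 64))
true_offset = (20, 11)
source = 0.6 * target[20:36, 11:27] + 0.3  # simulated seasonal change

est_offset, peak = ncc_register(seasonal_transform(source),
                                seasonal_transform(target))
print(est_offset, peak)  # recovers (20, 11) with peak near 1.0
```

Because the registration stage is conventional, its geometric guarantees and uncertainty behavior carry over unchanged; the transform's only job is to make the two imaging seasons look alike before matching.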




Updated: 2021-06-24