Change detection using weighted features for image-based localization
Robotics and Autonomous Systems (IF 4.3), Pub Date: 2021-01-01, DOI: 10.1016/j.robot.2020.103676
Erik Derner, Clara Gomez, Alejandra C. Hernandez, Ramon Barber, Robert Babuška

Abstract: Autonomous mobile robots are becoming increasingly important in many industrial and domestic environments. Dealing with unforeseen situations is a difficult problem that must be tackled to achieve long-term robot autonomy. In vision-based localization and navigation methods, one of the major issues is scene dynamics. The autonomous operation of the robot may become unreliable if the changes occurring in dynamic environments are not detected and managed. Moving chairs, opening and closing doors or windows, replacing objects, and other changes make many conventional methods fail. To deal with these challenges, we present a novel method for change detection based on weighted local visual features. The core idea of the algorithm is to distinguish the valuable information in stable regions of the scene from the potentially misleading information in the regions that are changing. We evaluate the change detection algorithm in a visual localization framework based on feature matching by performing a series of long-term localization experiments in various real-world environments. The results show that the change detection method yields an improvement in localization accuracy compared to the baseline method without change detection. In addition, an experimental evaluation on a public long-term localization data set with more than 10 000 images reveals that the proposed method outperforms two alternative localization methods on images recorded several months after the initial mapping.
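The core idea of the paper — down-weighting map features in changing regions so that stable regions dominate localization — can be illustrated with a minimal sketch. Note that the update rule, gain/decay constants, and function names below are illustrative assumptions for exposition, not the authors' exact formulation:

```python
# Hypothetical sketch: each map feature carries a weight reflecting how
# consistently it has been re-observed. Re-observed (stable) features gain
# weight; unmatched (changed) features decay, so they contribute less to
# the localization score. Constants and names are assumptions, not the
# paper's exact algorithm.

def update_weights(weights, matched_ids, gain=0.1, decay=0.1,
                   w_min=0.0, w_max=1.0):
    """Reinforce re-observed features, decay the rest."""
    new_weights = {}
    for fid, w in weights.items():
        if fid in matched_ids:
            w = min(w_max, w + gain)   # stable region: trust it more
        else:
            w = max(w_min, w - decay)  # changing region: trust it less
        new_weights[fid] = w
    return new_weights

def localization_score(weights, matched_ids):
    """Weighted fraction of map features matched in the current image."""
    total = sum(weights.values())
    if total == 0:
        return 0.0
    return sum(weights[fid] for fid in matched_ids) / total

# Example: a feature on a wall is re-observed every visit, while a feature
# on a frequently moved door is not. Over repeated visits the wall feature's
# weight approaches 1.0 and the door feature's weight decays toward 0.0.
weights = {"wall": 0.5, "door": 0.5}
for _ in range(5):
    weights = update_weights(weights, matched_ids={"wall"})
```

In a matching-based localizer, such weights would then scale each feature's vote when comparing the current image against candidate map images, so that transient scene changes perturb the pose estimate less.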

Updated: 2021-01-01