Spatio-temporal Data Association for Object-augmented Mapping
Journal of Intelligent & Robotic Systems (IF 3.1), Pub Date: 2021-08-03, DOI: 10.1007/s10846-021-01445-8
Felipe D. B. de Oliveira, Marcondes R. da Silva Jr., Aluizio F. R. Araújo

Traditionally, visual SLAM methods rely on visual features for mapping and localization. However, the resulting map may lack important semantic information, such as the objects present in the environment and their locations. Since the same objects may be detected several times during the mapping phase, data association becomes a critical issue: objects viewed from different angles and at different time instants must be fused into a single instance on the map. In this paper, we propose Spatio-temporal Data Association (STDA) for object-augmented mapping. It is based on expected similarities between consecutive frames (temporal association) and between similar non-consecutive frames (spatial association). The experiments suggest that our system correctly fuses multiple views of several objects, producing only one false-positive association among more than 130 objects detected across several datasets. The results are competitive with the state of the art. We also generated object-location ground-truth annotations for 3 simulated environments to foster further comparison. Finally, the annotated map was used for an object-fetching task.
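
To illustrate the kind of object-level data association described in the abstract, the sketch below fuses a new detection into an existing map object when it shares the object's class label and its estimated 3D position lies within a distance threshold, and otherwise creates a new map instance. This is only a minimal illustrative sketch, not the authors' STDA method; all class names, fields, and the threshold value are hypothetical assumptions.

```python
# Minimal sketch of object-level data association for an object-augmented map.
# NOT the authors' STDA implementation; names, fields, and thresholds are hypothetical.
from dataclasses import dataclass, field
import math


@dataclass
class Detection:
    label: str        # semantic class predicted by an object detector
    position: tuple   # estimated (x, y, z) of the object in the map frame
    frame_id: int     # index of the frame the detection came from


@dataclass
class MapObject:
    label: str
    position: tuple                               # running mean of fused detection positions
    observations: list = field(default_factory=list)

    def fuse(self, det: Detection) -> None:
        # Incrementally average the positions of all views of this object.
        self.observations.append(det)
        n = len(self.observations)
        self.position = tuple(((n - 1) * p + q) / n
                              for p, q in zip(self.position, det.position))


def associate(objects: list, det: Detection, max_dist: float = 0.5) -> None:
    """Fuse `det` into the nearest compatible map object, or create a new one."""
    best, best_dist = None, max_dist
    for obj in objects:
        if obj.label != det.label:
            continue
        dist = math.dist(obj.position, det.position)
        if dist < best_dist:
            best, best_dist = obj, dist
    if best is not None:
        best.fuse(det)   # same physical object seen again from another viewpoint
    else:
        objects.append(MapObject(det.label, det.position, [det]))


if __name__ == "__main__":
    map_objects = []
    # Two views of the same chair from different frames, plus a distinct table.
    for d in (Detection("chair", (1.0, 2.0, 0.0), 10),
              Detection("chair", (1.1, 2.05, 0.0), 42),
              Detection("table", (4.0, 0.0, 0.0), 43)):
        associate(map_objects, d)
    print(len(map_objects), "objects on the map")  # -> 2
```

In this toy setting, the distance test plays the role of the spatial association (recognizing that a non-consecutive view falls on an already-mapped object), while in practice consecutive-frame (temporal) cues would also constrain the matching.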


