Appearance-invariant place recognition by adversarially learning disentangled representation
Robotics and Autonomous Systems (IF 4.3), Pub Date: 2020-09-01, DOI: 10.1016/j.robot.2020.103561
Cao Qin, Yunzhou Zhang, Yan Liu, Sonya Coleman, Dermot Kerr, Guanghao Lv

Abstract Place recognition is an essential component in addressing visual navigation and SLAM. Long-term place recognition is challenging because the environment exhibits significant variations across different times of day, months, and seasons. In this paper, we view appearance changes as multiple domains and propose a Feature Disentanglement Network (FDNet), based on a convolutional auto-encoder and adversarial learning, to extract two independent deep features: content and appearance. In our network, the content feature is learned to retain only the content information of the images, through competition between the discriminators and the content encoder. In addition, we use a triplet loss so that the appearance feature encodes the appearance information. The generated content features are used directly to measure image similarity, without any dimensionality reduction. Experiments on datasets containing extreme appearance changes show that our method achieves meaningful recall at 100% precision, where existing state-of-the-art approaches often perform worse.
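The abstract outlines an architecture with two encoders (content and appearance), a domain discriminator trained adversarially against the content encoder, a triplet loss on the appearance feature, and retrieval by directly comparing content features. The PyTorch sketch below is a minimal illustration of that idea under our own assumptions: module names, layer sizes, and the exact adversarial term are hypothetical and not taken from the paper, and the auto-encoder's decoder with its reconstruction loss is omitted for brevity.

# Minimal sketch of the disentanglement idea summarized in the abstract.
# All names, sizes, and loss terms are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentEncoder(nn.Module):
    # Maps an image to a content feature intended to be appearance-invariant.
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)


class AppearanceEncoder(nn.Module):
    # Maps an image to an appearance (condition) feature, e.g. day vs. night.
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return self.net(x)


class DomainDiscriminator(nn.Module):
    # Predicts the appearance domain from the content feature; the content
    # encoder is trained adversarially so that this prediction fails.
    def __init__(self, dim=128, n_domains=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, n_domains))

    def forward(self, c):
        return self.net(c)


def place_similarity(content_a, content_b):
    # Content features are compared directly (cosine similarity here),
    # without any dimensionality reduction, as stated in the abstract.
    return F.cosine_similarity(content_a, content_b, dim=-1)


if __name__ == "__main__":
    enc_c, enc_a, disc = ContentEncoder(), AppearanceEncoder(), DomainDiscriminator()
    triplet = nn.TripletMarginLoss(margin=1.0)

    # Toy batch: anchor and positive share an appearance domain, negative differs.
    anchor, positive, negative = (torch.randn(4, 3, 64, 64) for _ in range(3))
    domain_labels = torch.randint(0, 2, (4,))

    c = enc_c(anchor)
    # Encoder-side adversarial term: make the domain unpredictable from the
    # content feature (the discriminator itself would be trained separately
    # to minimize this same cross-entropy on detached features).
    adv_loss = -F.cross_entropy(disc(c), domain_labels)
    # Triplet loss pushes the appearance feature to encode appearance information.
    app_loss = triplet(enc_a(anchor), enc_a(positive), enc_a(negative))
    total = adv_loss + app_loss

    # Place recognition at query time: rank database images by content similarity.
    scores = place_similarity(enc_c(anchor), enc_c(positive))
    print(float(total), scores.shape)

In this reading, the discriminator and the content encoder play a standard adversarial game, so the content feature carries no usable appearance cues, while the triplet loss gives the appearance feature something to encode; retrieval then needs only a nearest-neighbour search over raw content features.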

Updated: 2020-09-01