Spatially-dependent Bayesian semantic perception under model and localization uncertainty
Autonomous Robots (IF 3.5) Pub Date: 2020-06-25, DOI: 10.1007/s10514-020-09921-0
Yuri Feldman, Vadim Indelman

Semantic perception can provide autonomous robots operating under uncertainty with a more efficient representation of their environment and a better ability to establish correct loop closures than geometric features alone. However, accurate inference of semantics requires measurement models that correctly capture properties of semantic detections such as viewpoint dependence, spatial correlations, and intra- and inter-class variations. Such models should also gracefully handle open-set conditions when they are encountered, keeping track of the resulting model uncertainty. We propose a method for robust visual classification of an object of interest observed from multiple views in the presence of significant localization uncertainty, classifier noise, and possible dataset shift. We use a viewpoint-dependent measurement model to capture viewpoint dependence and spatial correlations in classifier scores, and show how to use it in the presence of localization uncertainty. Assuming a Bayesian classifier that provides a measure of uncertainty, we show how its outputs can be fused in the context of the above model, allowing robust classification under model uncertainty when novel scenes are encountered. We present a statistical evaluation of our method both in synthetic simulation and in a 3D environment where rendered images are fed into a Deep Neural Network classifier. We compare against baseline methods in scenarios of varying difficulty, showing improved robustness of our method to localization uncertainty and dataset shift. Finally, we validate our contribution with respect to localization uncertainty on a dataset of real-world images.
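To make the fusion step concrete, the following Python sketch is illustrative only: it assumes a per-class Gaussian measurement model over classifier score vectors whose mean and covariance depend on viewpoint, with Monte Carlo marginalization over pose samples to account for localization uncertainty. The names fuse_class_posterior, mean_fn, and cov_fn are hypothetical and do not reflect the authors' actual implementation.

import numpy as np

def fuse_class_posterior(scores, poses, mean_fn, cov_fn, prior):
    """Fuse per-view classifier scores into a posterior over classes.

    scores  : (K, C) array, one C-dimensional classifier score vector per view.
    poses   : (K, M, D) array, M pose samples per view drawn from the
              localization posterior (marginalizes pose uncertainty).
    mean_fn : (class c, pose x) -> expected (C,) score vector at that viewpoint.
    cov_fn  : (class c, pose x) -> (C, C) score covariance at that viewpoint.
    prior   : (C,) prior probability of each candidate class.
    """
    K, C = scores.shape
    log_post = np.log(prior).astype(float)
    for c in range(C):
        for k in range(K):
            # Monte Carlo marginalization: average the measurement
            # likelihood p(z_k | c, x) over pose samples x ~ p(x_k | history).
            log_liks = []
            for x in poses[k]:
                mu, Sigma = mean_fn(c, x), cov_fn(c, x)
                diff = scores[k] - mu
                _, logdet = np.linalg.slogdet(Sigma)
                log_liks.append(-0.5 * (diff @ np.linalg.solve(Sigma, diff)
                                        + logdet + C * np.log(2.0 * np.pi)))
            log_post[c] += np.logaddexp.reduce(log_liks) - np.log(len(log_liks))
    return np.exp(log_post - np.logaddexp.reduce(log_post))  # normalize

# Toy usage with made-up numbers (for illustration only):
rng = np.random.default_rng(0)
scores = rng.dirichlet(np.ones(3), size=4)   # 4 views, 3 candidate classes
poses = rng.normal(size=(4, 10, 3))          # 10 pose samples per view
mean_fn = lambda c, x: 0.7 * np.eye(3)[c] + 0.1
cov_fn = lambda c, x: 0.05 * np.eye(3)
print(fuse_class_posterior(scores, poses, mean_fn, cov_fn, np.ones(3) / 3))

Conditioning the mean and covariance on the sampled relative viewpoint is what lets such a model express viewpoint dependence and spatial correlation in the scores; substituting the paper's actual measurement model would change only mean_fn and cov_fn.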
