Defence Technology (IF 5.1). Pub Date: 2020-09-24. DOI: 10.1016/j.dt.2020.09.012. Authors: Yong-bao Ai, Ting Rui, Xiao-qiang Yang, Jia-lin He, Lei Fu, Jian-bin Li, Ming Lu
A great number of visual simultaneous localization and mapping (VSLAM) systems assume that features in the environment are static. However, moving objects can vastly impair the performance of a VSLAM system that relies on this static-world assumption. To address this challenge, a real-time, robust VSLAM system for dynamic environments, based on ORB-SLAM2, is proposed. To reduce the influence of dynamic content, we incorporate a deep-learning-based object detection method into the visual odometry; a dynamic object probability model is then added to raise the efficiency of the object-detection deep neural network and enhance the real-time performance of our system. Experiments on both the TUM and KITTI benchmark datasets, as well as in a real-world environment, show that our method significantly reduces tracking error and drift and enhances the robustness, accuracy, and stability of the VSLAM system in dynamic scenes.
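The core idea the abstract describes — using object detection to discount features on likely-dynamic objects before visual odometry — can be sketched as below. This is a minimal illustration, not the authors' implementation: the class names, the `DYNAMIC_PRIOR` table standing in for their dynamic object probability model, and the simple point-in-box test are all assumptions for the example.

```python
# Sketch: reject keypoints that fall inside detection boxes whose class
# has a high prior probability of being dynamic (hypothetical values).
from dataclasses import dataclass
from typing import List, Tuple

# Assumed per-class dynamic priors; the paper's probability model is not
# specified here, so these numbers are illustrative only.
DYNAMIC_PRIOR = {"person": 0.9, "car": 0.8, "chair": 0.1}

@dataclass
class Detection:
    label: str
    box: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def is_inside(pt: Tuple[float, float], box: Tuple[float, float, float, float]) -> bool:
    x, y = pt
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def filter_static_keypoints(keypoints: List[Tuple[float, float]],
                            detections: List[Detection],
                            threshold: float = 0.5) -> List[Tuple[float, float]]:
    """Keep only keypoints not covered by a likely-dynamic detection."""
    static = []
    for pt in keypoints:
        covered = any(
            is_inside(pt, d.box) and DYNAMIC_PRIOR.get(d.label, 0.0) >= threshold
            for d in detections
        )
        if not covered:
            static.append(pt)
    return static

keypoints = [(10, 10), (50, 50), (200, 120)]
detections = [Detection("person", (40, 40, 80, 90)),
              Detection("chair", (190, 100, 220, 140))]
print(filter_static_keypoints(keypoints, detections))
# → [(10, 10), (200, 120)]: the point inside the "person" box is dropped,
#   while the one inside the low-prior "chair" box is kept.
```

The surviving keypoints would then feed the ORB-SLAM2 tracking stage, so that pose estimation is dominated by static scene structure.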
Title: Visual SLAM in dynamic environments based on object detection