EagerMOT: 3D Multi-Object Tracking via Sensor Fusion
arXiv - CS - Robotics. Pub Date: 2021-04-29, DOI: arxiv-2104.14682
Aleksandr Kim, Aljoša Ošep, Laura Leal-Taixé

Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time. Existing methods rely on depth sensors (e.g., LiDAR) to detect and track targets in 3D space, but only up to a limited sensing range due to the sparsity of the signal. On the other hand, cameras provide a dense and rich visual signal that helps to localize even distant objects, but only in the image domain. In this paper, we propose EagerMOT, a simple tracking formulation that eagerly integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics. Using images, we can identify distant incoming objects, while depth estimates allow for precise trajectory localization as soon as objects are within the depth-sensing range. With EagerMOT, we achieve state-of-the-art results across several MOT tasks on the KITTI and NuScenes datasets. Our code is available at https://github.com/aleksandrkim61/EagerMOT.
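To make the fusion idea in the abstract concrete, below is a minimal, hypothetical sketch of a two-stage association step: detections that carry a 3D box (within LiDAR range) are matched to tracks by 3D center distance first, and remaining image-only detections (e.g., distant objects) fall back to 2D IoU matching. The data structures, thresholds, and greedy matching here are illustrative assumptions, not the authors' actual EagerMOT implementation (see the linked repository for that).

```python
from dataclasses import dataclass
from typing import Optional, List, Tuple

@dataclass
class Detection:
    bbox_2d: Tuple[float, float, float, float]        # (x1, y1, x2, y2) in image coordinates
    bbox_3d: Optional[Tuple[float, ...]] = None       # (x, y, z, l, w, h, yaw) if within LiDAR range

@dataclass
class Track:
    track_id: int
    last_2d: Tuple[float, float, float, float]
    last_3d: Optional[Tuple[float, ...]] = None

def iou_2d(a, b):
    """Intersection-over-union of two axis-aligned 2D boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def center_dist_3d(a, b):
    """Euclidean distance between two 3D box centers."""
    return sum((a[i] - b[i]) ** 2 for i in range(3)) ** 0.5

def associate(tracks: List[Track], detections: List[Detection],
              dist_thresh: float = 2.0, iou_thresh: float = 0.3):
    """Greedy two-stage matching: 3D center distance first, 2D IoU as fallback."""
    matches, unmatched, used = [], [], set()
    # Stage 1: detections with a 3D box are matched to tracks with 3D state.
    for d_idx, det in enumerate(detections):
        best, best_cost = None, dist_thresh
        if det.bbox_3d is not None:
            for t_idx, trk in enumerate(tracks):
                if t_idx in used or trk.last_3d is None:
                    continue
                cost = center_dist_3d(det.bbox_3d, trk.last_3d)
                if cost < best_cost:
                    best, best_cost = t_idx, cost
        if best is not None:
            matches.append((best, d_idx))
            used.add(best)
        else:
            unmatched.append(d_idx)
    # Stage 2: remaining (e.g., image-only, distant) detections matched by 2D IoU.
    still_unmatched = []
    for d_idx in unmatched:
        det = detections[d_idx]
        best, best_iou = None, iou_thresh
        for t_idx, trk in enumerate(tracks):
            if t_idx in used:
                continue
            overlap = iou_2d(det.bbox_2d, trk.last_2d)
            if overlap > best_iou:
                best, best_iou = t_idx, overlap
        if best is not None:
            matches.append((best, d_idx))
            used.add(best)
        else:
            still_unmatched.append(d_idx)
    return matches, still_unmatched
```

In this sketch, a track updated only from a 2D match keeps an image-domain state until a 3D detection becomes available, which mirrors the abstract's point that distant objects can be identified early and localized precisely once they enter depth-sensing range.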

Updated: 2021-05-03