RODNet: A Real-Time Radar Object Detection Network Cross-Supervised by Camera-Radar Fused Object 3D Localization
IEEE Journal of Selected Topics in Signal Processing ( IF 7.5 ) Pub Date : 2021-02-11 , DOI: 10.1109/jstsp.2021.3058895
Yizhou Wang , Zhongyu Jiang , Yudong Li , Jenq-Neng Hwang , Guanbin Xing , Hui Liu

Various autonomous and assisted driving strategies rely on accurate and reliable perception of the environment around a vehicle. Among the commonly used sensors, radar is usually considered a robust and cost-effective solution even in adverse driving scenarios, e.g., weak/strong lighting or bad weather. Instead of fusing the often unreliable information from all available sensors, perception from pure radar data becomes a valuable alternative worth exploring. In this paper, we propose a deep radar object detection network, named RODNet, which is cross-supervised by a camera-radar fused algorithm without laborious annotation efforts, to effectively detect objects from radio frequency (RF) images in real time. First, the raw signals captured by millimeter-wave radars are transformed into RF images in range-azimuth coordinates. Second, our proposed RODNet takes a snippet of RF images as the input to predict the likelihood of objects in the radar field of view (FoV). Two customized modules are also added to handle multi-chirp information and object relative motion. The proposed RODNet is cross-supervised in the training stage by a novel 3D localization of detected objects using a camera-radar fusion (CRF) strategy. Since no existing public dataset is available for our task, we create a new dataset, named CRUW, 1

The dataset and code are available at https://www.cruwdataset.org/.

which contains synchronized RGB and RF image sequences in various driving scenarios. Extensive experiments show that our proposed cross-supervised RODNet achieves 86% average precision and 88% average recall for object detection, demonstrating its robustness in various driving conditions.
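The first step of the pipeline above, transforming raw millimeter-wave radar signals into RF images in range-azimuth coordinates, is conventionally done with FFTs along the fast-time samples and across the receive antenna array. The sketch below illustrates that idea only; the array shapes, function name, and parameters are illustrative assumptions, not the paper's exact processing chain.

```python
import numpy as np

def raw_to_range_azimuth(adc, n_range=128, n_azimuth=128):
    """Hypothetical sketch: one chirp of complex ADC samples, shaped
    (num_fast_time_samples, num_antennas), to a range-azimuth magnitude map."""
    # Range FFT along fast-time samples (axis 0): beat frequency -> range bins
    range_profile = np.fft.fft(adc, n=n_range, axis=0)
    # Angle FFT across the antenna array (axis 1): phase across antennas -> azimuth
    ra = np.fft.fftshift(np.fft.fft(range_profile, n=n_azimuth, axis=1), axes=1)
    # Magnitude forms one channel of the RF image fed to the detector
    return np.abs(ra)

# Example: 128 fast-time samples from an assumed 8-antenna receive array
adc = np.random.randn(128, 8) + 1j * np.random.randn(128, 8)
rf_image = raw_to_range_azimuth(adc)
print(rf_image.shape)  # (128, 128) range x azimuth bins
```

A snippet of such RF images over consecutive chirps/frames would then be stacked along a time axis as the network input.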

