RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW Radar
arXiv - CS - Robotics | Pub Date: 2020-04-02, DOI: arxiv-2004.03451
Prannay Kaul, Daniele De Martini, Matthew Gadd, Paul Newman

This paper presents an efficient annotation procedure and an application thereof to end-to-end, rich semantic segmentation of the sensed environment using FMCW scanning radar. We advocate radar over the traditional sensors used for this task as it operates at longer ranges and is substantially more robust to adverse weather and illumination conditions. We avoid laborious manual labelling by exploiting the largest radar-focused urban autonomy dataset collected to date, correlating radar scans with RGB cameras and LiDAR sensors, for which semantic segmentation is an already consolidated procedure. The training procedure leverages a state-of-the-art natural image segmentation system which is publicly available and, as such, in contrast to previous approaches, allows for the production of copious labels for the radar stream by incorporating four camera and two LiDAR streams. Additionally, the losses are computed with labels that extend to the radar sensor's horizon by accumulating LiDAR returns along a pose-chain ahead of and behind the current vehicle position. Finally, we present the network with multi-channel radar scan inputs in order to deal with ephemeral and dynamic scene objects.
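The label-generation step described above can be pictured with a short sketch: labelled LiDAR returns (their classes obtained beforehand from an image-segmentation network) are accumulated along a pose-chain around the current radar scan and rasterised into a label grid aligned with the radar's Cartesian image. This is a minimal illustration only; the function name, grid size, resolution, and data layout are assumptions, not the authors' actual code.

```python
import numpy as np

def accumulate_labels(radar_pose, poses, lidar_points, lidar_labels,
                      grid_size=512, metres_per_pixel=0.25):
    """Project labelled LiDAR returns from several poses into the radar frame.

    radar_pose   : (4, 4) SE(3) pose of the radar at the current scan.
    poses        : list of (4, 4) SE(3) poses along the chain (ahead & behind).
    lidar_points : list of (N_i, 3) point clouds, one per pose.
    lidar_labels : list of (N_i,) integer class labels per point.
    Returns a (grid_size, grid_size) label image; 0 means "unlabelled".
    """
    label_grid = np.zeros((grid_size, grid_size), dtype=np.int64)
    radar_from_world = np.linalg.inv(radar_pose)
    half = grid_size // 2
    for pose, pts, labels in zip(poses, lidar_points, lidar_labels):
        # Move points from their capture frame into the current radar frame.
        pts_h = np.c_[pts, np.ones(len(pts))]               # homogeneous coords
        pts_radar = (radar_from_world @ pose @ pts_h.T).T[:, :2]
        # Rasterise into pixel coordinates centred on the radar.
        px = (pts_radar / metres_per_pixel + half).astype(int)
        keep = (px >= 0).all(axis=1) & (px < grid_size).all(axis=1)
        label_grid[px[keep, 1], px[keep, 0]] = labels[keep]
    return label_grid
```

In the same spirit, the multi-channel input mentioned at the end of the abstract could simply be a stack of consecutive radar scans along the channel axis, giving the network a short temporal context with which to handle ephemeral and dynamic objects.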

Updated: 2020-04-08