Fast intent prediction of multi-cyclists in 3D point cloud data using deep neural networks
Neurocomputing (IF 5.5) Pub Date: 2021-09-08, DOI: 10.1016/j.neucom.2021.09.008
Khaled Saleh, Ahmed Abobakr, Mohammed Hossny, Darius Nahavandi, Julie Iskander, Mohammed Attia, Saeid Nahavandi

Inferring the intended actions of road users who share the road with autonomous ground vehicles, particularly vulnerable ones such as cyclists, is considered one of the challenging tasks facing the widespread deployment of autonomous ground vehicles. One of the main reasons for this is the scarcity of datasets for the task, owing to the difficulty of collecting them in real environments. In this work, we first propose a pipeline that synthetically produces 3D LiDAR data of cyclists hand-signalling a set of intended actions commonly performed in real environments. Given the synthetically produced, labelled 3D LiDAR data sequences, we trained a framework that simultaneously detects, tracks and predicts the intended actions of multiple cyclists in the scene in real time. The proposed framework was evaluated using both synthetic data and real data from a physical 3D LiDAR sensor. It achieved competitive and robust results in both synthetic and real environments, scoring 88% in F1 measure while running at a higher frame rate (12.9 FPS) than the 3D LiDAR sensor's frame rate (10 Hz).
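As context for the reported numbers, the F1 measure is the harmonic mean of precision and recall, and "on time" here means the framework's throughput exceeds the sensor's frame rate. A minimal sketch of both computations (the per-class counts below are illustrative placeholders, not the paper's actual confusion-matrix values):

```python
# Sketch: F1 measure from per-class detection counts, plus a real-time
# check against the LiDAR sensor frame rate. Counts are illustrative.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative counts for one intended-action class.
f1 = f1_score(tp=88, fp=12, fn=12)
print(f"F1 = {f1:.2f}")  # 0.88 for these example counts

# Real-time check: predictions keep up with the sensor only if the
# framework processes frames at least as fast as they arrive.
framework_fps = 12.9
sensor_hz = 10.0
print("real-time:", framework_fps >= sensor_hz)  # True
```

With symmetric false positives and false negatives, as in this example, precision and recall coincide and F1 equals both.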




Updated: 2021-09-16