3-D Object Tracking in Panoramic Video and LiDAR for Radiological Source–Object Attribution and Improved Source Detection
IEEE Transactions on Nuclear Science (IF 1.8). Pub Date: 2020-12-28. DOI: 10.1109/tns.2020.3047646
M. R. Marshall 1, D. Hellfeld 2, T. H. Y. Joshi 2, M. Salathe 2, M. S. Bandstra 2, K. J. Bilton 1, R. J. Cooper 2, J. C. Curtis 2, V. Negut 2, A. J. Shurley 1, K. Vetter 1
Networked detector systems can be deployed in urban environments to aid in the detection and localization of radiological and/or nuclear material. However, effectively responding to and interpreting a radiological alarm using spectroscopic data alone may be hampered by a lack of situational awareness, particularly in complex environments. This study investigates the use of Light Detection and Ranging (LiDAR) and streaming video to enable real-time object detection and tracking, and the fusion of this tracking information with radiological data for the purposes of enhanced situational awareness and increased detection sensitivity. This work presents an object detection, tracking, and novel source–object attribution analysis that is capable of operating in real time. By implementing this analysis pipeline on a custom-developed system that comprises a static 2 in. $\times$ 4 in. $\times$ 16 in. NaI(Tl) detector colocated with a 64-beam LiDAR and four monocular cameras, we demonstrate the ability to accurately correlate trajectories from tracked objects to spectroscopic gamma-ray data in real time and use physics-based models to reliably discriminate between source-carrying and nonsource-carrying objects. In this work, we describe our approach in detail and present a quantitative performance assessment that characterizes the source–object attribution capabilities of both video and LiDAR. Additionally, we demonstrate the ability to simultaneously track pedestrians and vehicles in a mock urban environment and use this tracking information to improve both detection sensitivity and situational awareness using our contextual-radiological data fusion methodology.
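The abstract's physics-based discrimination between source-carrying and nonsource-carrying objects can be illustrated as a Poisson likelihood-ratio test: given an object's tracked trajectory relative to the detector, compare the measured count-rate time series against a source-carrying hypothesis (inverse-square count-rate model along the trajectory) versus a background-only hypothesis. The following is a minimal sketch of that idea only, not the paper's actual pipeline; it assumes a bare $1/r^2$ model with no attenuation or detector response, and all function names and parameters are illustrative.

```python
import numpy as np

def expected_counts(positions, activity, background, dt=1.0):
    """Expected counts per time bin for a point source moving along
    `positions` (N x 3 array, meters, detector at the origin), under a
    simple inverse-square model with constant background (counts/s)."""
    r2 = np.sum(positions**2, axis=1)           # squared range to detector
    return activity * dt / (4.0 * np.pi * r2) + background * dt

def attribution_score(measured, positions, activity, background, dt=1.0):
    """Poisson log-likelihood ratio: source-carrying vs background-only.
    Positive scores favor attributing the source to this trajectory.
    The constant log(k!) terms cancel in the ratio and are omitted."""
    lam_src = expected_counts(positions, activity, background, dt)
    lam_bkg = np.full_like(lam_src, background * dt)
    return float(np.sum(measured * np.log(lam_src) - lam_src
                        - (measured * np.log(lam_bkg) - lam_bkg)))
```

In use, each tracked object (pedestrian or vehicle) would contribute its own trajectory, and the object whose score clears a threshold is attributed as the source carrier; this is only a sketch of the attribution concept the abstract describes.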

Updated: 2021-02-16