Domain adaptation from daytime to nighttime: A situation-sensitive vehicle detection and traffic flow parameter estimation framework
Transportation Research Part C: Emerging Technologies (IF 8.3), Pub Date: 2021-01-16, DOI: 10.1016/j.trc.2020.102946
Jinlong Li, Zhigang Xu, Lan Fu, Xuesong Zhou, Hongkai Yu

Vehicle detection in traffic surveillance images is an important approach to obtaining vehicle data and rich traffic flow parameters. Recently, deep learning based methods have been widely used for vehicle detection with high accuracy and efficiency. However, deep learning based methods require a large number of manually labeled ground truths (a bounding box for each vehicle in each image) to train Convolutional Neural Networks (CNNs). For modern urban surveillance cameras, many manually labeled ground truths already exist for daytime images to train CNNs, while few or far fewer manually labeled ground truths are available for nighttime images. In this paper, we focus on making maximum use of labeled daytime images (Source Domain) to help vehicle detection in unlabeled nighttime images (Target Domain). For this purpose, we propose a new situation-sensitive method based on Faster R-CNN with Domain Adaptation (DA) to improve vehicle detection at nighttime. Furthermore, a situation-sensitive traffic flow parameter estimation method is developed based on traffic flow theory. We collected a new dataset of 2,200 traffic images (1,200 daytime and 1,000 nighttime) containing 57,059 vehicles to evaluate the proposed vehicle detection method. Another new dataset with three 1,800-frame daytime videos and one 1,800-frame nighttime video, containing about 260 K vehicles, was collected to evaluate and demonstrate the estimated traffic flow parameters in different situations. The experimental results show the accuracy and effectiveness of the proposed method.
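The abstract does not spell out how the Domain Adaptation is wired into Faster R-CNN. The following is a minimal PyTorch sketch of one common way to do it: an image-level domain classifier attached to the detector backbone through a gradient reversal layer, so that the backbone is pushed toward features that cannot distinguish daytime from nighttime images. All class names, tensor sizes, and the placeholder detection loss are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the paper's implementation) of image-level adversarial
# domain adaptation from labeled daytime images to unlabeled nighttime images.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class TinyBackbone(nn.Module):
    """Stand-in for the detector's convolutional feature extractor."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.conv(x)


class DomainClassifier(nn.Module):
    """Predicts whether a feature map came from the daytime or nighttime domain."""

    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1)
        )

    def forward(self, feats, lam=1.0):
        return self.head(GradReverse.apply(feats, lam))


backbone, dom_clf = TinyBackbone(), DomainClassifier()
opt = torch.optim.SGD(list(backbone.parameters()) + list(dom_clf.parameters()), lr=1e-3)

day_imgs = torch.randn(4, 3, 128, 128)    # labeled source-domain batch
night_imgs = torch.randn(4, 3, 128, 128)  # unlabeled target-domain batch

day_feats, night_feats = backbone(day_imgs), backbone(night_imgs)

# The detection loss would come from the Faster R-CNN heads on day_feats;
# a zero placeholder keeps this sketch self-contained and runnable.
det_loss = day_feats.mean() * 0.0

# Adversarial domain loss: the classifier tries to tell the domains apart,
# while the reversed gradient pushes the backbone toward domain-invariant features.
logits = torch.cat([dom_clf(day_feats), dom_clf(night_feats)])
labels = torch.cat([torch.zeros(4, 1), torch.ones(4, 1)])
dom_loss = F.binary_cross_entropy_with_logits(logits, labels)

(det_loss + dom_loss).backward()
opt.step()
```

In this formulation, only the daytime batch contributes supervised detection loss, while both batches feed the domain classifier, which is the usual recipe when the target domain has no bounding-box labels.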

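On the traffic flow side, the sketch below illustrates how per-frame detections over a calibrated camera view can be turned into macroscopic parameters via the fundamental relation q = k·v from traffic flow theory. The segment length, frame data, and helper names are hypothetical and not taken from the paper.

```python
# Minimal sketch (assumptions, not the paper's exact formulation) of estimating
# density, space-mean speed, and flow from per-frame vehicle detections.
from dataclasses import dataclass
from typing import List


@dataclass
class FrameDetections:
    count: int                 # vehicles detected in the frame
    speeds_mps: List[float]    # per-vehicle speeds estimated from tracking (m/s)


def traffic_flow_parameters(frames: List[FrameDetections], segment_len_m: float):
    """Estimate density (veh/km), space-mean speed (km/h), and flow (veh/h)."""
    # Density k: average number of vehicles on the observed segment, per kilometre.
    avg_count = sum(f.count for f in frames) / len(frames)
    density_veh_per_km = avg_count / (segment_len_m / 1000.0)

    # Space-mean speed v: harmonic mean of the observed vehicle speeds.
    speeds = [s for f in frames for s in f.speeds_mps if s > 0]
    space_mean_speed_kmh = len(speeds) / sum(1.0 / s for s in speeds) * 3.6

    # Flow q follows from the fundamental relation q = k * v.
    flow_veh_per_h = density_veh_per_km * space_mean_speed_kmh
    return density_veh_per_km, space_mean_speed_kmh, flow_veh_per_h


# Toy usage with made-up numbers: 3 frames over a 100 m camera view.
frames = [
    FrameDetections(5, [12.0, 11.5, 13.0, 12.5, 11.0]),
    FrameDetections(6, [10.0, 12.0, 11.0, 13.5, 12.0, 11.5]),
    FrameDetections(4, [12.5, 13.0, 11.0, 12.0]),
]
k, v, q = traffic_flow_parameters(frames, segment_len_m=100.0)
print(f"density={k:.1f} veh/km, speed={v:.1f} km/h, flow={q:.0f} veh/h")
```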
Updated: 2021-01-18