Improved YOLO v3 network-based object detection for blind zones of heavy trucks
Journal of Electronic Imaging (IF 1.0), Pub Date: 2020-09-08, DOI: 10.1117/1.jei.29.5.053002
Renwei Tu, Zhongjie Zhu, Yongqiang Bai, Gangyi Jiang, Qingqing Zhang

Abstract. Object detection in blind zones is critical to ensuring the driving safety of heavy trucks. We propose a scheme to realize object detection in the blind zones of heavy trucks based on an improved you-only-look-once (YOLO) v3 network. First, according to the actual detection requirements, the target classes are determined and a new data set of persons, cars, and fallen pedestrians is established, with a focus on small and medium objects. Subsequently, the network structure is optimized, and the features are enhanced by combining the shallow and deep convolution information of the Darknet platform. In this way, feature propagation is effectively enhanced, feature reuse is promoted, and the network's performance on small-object detection is improved. Furthermore, new anchors are obtained by clustering the data set with the K-means technique to improve the accuracy of detection-box positioning. In the test stage, detection is performed using the trained model. The test results demonstrate that the proposed improved YOLO v3 network is superior to the original YOLO v3 model for blind-zone detection and satisfies the accuracy and real-time requirements, with an accuracy of 94% and a runtime of 13.792 ms/frame. Moreover, the mean average precision of the improved model is 87.82%, which is 2.79% higher than that of the original YOLO v3 network.
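The anchor-refitting step mentioned in the abstract follows the usual YOLO practice of clustering ground-truth box sizes with K-means under a 1 − IoU distance, so that the anchors match the size distribution of the new blind-zone data set. Below is a minimal NumPy sketch of that procedure; the function names (`iou_wh`, `kmeans_anchors`), the choice of nine anchors, and the iteration settings are illustrative assumptions, since the abstract does not state the clustering parameters used in the paper.

```python
import numpy as np

def iou_wh(boxes, clusters):
    """IoU between (w, h) boxes and cluster centroids, with all boxes
    anchored at the origin so only width and height matter."""
    w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = w * h
    box_area = boxes[:, 0] * boxes[:, 1]
    cluster_area = clusters[:, 0] * clusters[:, 1]
    return inter / (box_area[:, None] + cluster_area[None, :] - inter)

def kmeans_anchors(boxes, k=9, iters=300, seed=0):
    """Cluster ground-truth (w, h) pairs using 1 - IoU as the distance
    and return k anchor sizes sorted by area."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the centroid with the highest IoU (lowest 1 - IoU).
        assignment = np.argmax(iou_wh(boxes, clusters), axis=1)
        new_clusters = np.array([
            boxes[assignment == i].mean(axis=0) if np.any(assignment == i) else clusters[i]
            for i in range(k)
        ])
        if np.allclose(new_clusters, clusters):
            break
        clusters = new_clusters
    return clusters[np.argsort(clusters[:, 0] * clusters[:, 1])]

# Usage: boxes is an (N, 2) array of annotated box widths/heights in pixels,
# e.g. anchors = kmeans_anchors(boxes, k=9), then the resulting sizes are
# written into the YOLO v3 configuration as the new anchor priors.
```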
