A Novel Neural Network Model for Traffic Sign Detection and Recognition under Extreme Conditions
Journal of Sensors (IF 1.9), Pub Date: 2021-07-10, DOI: 10.1155/2021/9984787
Haifeng Wan, Lei Gao, Manman Su, Qinglong You, Hui Qu, Qirun Sun

Traffic sign detection is extremely important in autonomous driving and transportation safety systems. However, accurate detection of traffic signs remains challenging, especially under extreme conditions. This paper proposes a novel convolutional neural network model, Traffic Sign Yolo (TS-Yolo), to improve the detection and recognition accuracy of traffic signs, particularly under low visibility and severely restricted vision conditions. A copy-and-paste data augmentation method was used to build a large number of new samples from existing traffic-sign datasets. Building on You Only Look Once (YoloV5), mixed depth-wise convolution (MixConv) was employed to mix different kernel sizes in a single convolution operation, so that patterns at various resolutions can be captured. Furthermore, the attentional feature fusion (AFF) module was integrated to fuse features using attention in both same-layer and cross-layer scenarios, including short and long skip connections, and even the initial fusion of a feature map with itself. The experimental results demonstrated that, with the augmented dataset, YoloV5 achieved a precision of 71.92, an increase of 34.56 over training without augmentation, and a mean average precision (mAP_0.5) of 80.05, an increase of 33.11 over training without augmentation. When MixConv and AFF were added to form the TS-Yolo model, the precision reached 74.53, which was 2.61 higher than with data augmentation alone, and mAP_0.5 reached 83.73, which was 3.68 higher than YoloV5 with augmentation alone. Overall, the performance of the proposed method was competitive with the latest traffic sign detection approaches.
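
To make the MixConv idea concrete: it splits the channels of one convolution into groups and gives each group its own depth-wise kernel size. The sketch below is a minimal PyTorch-style illustration of that idea, not the authors' implementation; the class name MixConv, the 3/5/7 kernel choices, and the even channel split are assumptions made for the example.

import torch
import torch.nn as nn

class MixConv(nn.Module):
    # Mixed depth-wise convolution: each channel group uses a different
    # kernel size, so one layer sees several receptive fields at once.
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)  # absorb the remainder in the first group
        self.splits = splits
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2, groups=c)  # groups=c makes it depth-wise
            for c, k in zip(splits, kernel_sizes)
        )

    def forward(self, x):
        chunks = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(chunk) for conv, chunk in zip(self.convs, chunks)], dim=1)

# Usage: a 64-channel feature map processed with mixed 3x3, 5x5, and 7x7 kernels.
x = torch.randn(1, 64, 80, 80)
print(MixConv(64)(x).shape)  # torch.Size([1, 64, 80, 80])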

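Similarly, the attentional feature fusion step can be pictured as computing per-element weights from the initial sum of two feature maps and blending them. The following is a hedged sketch based on the published AFF formulation (multi-scale channel attention); the reduction ratio, layer layout, and the class names MSCAM and AFF are illustrative assumptions rather than the exact module used in TS-Yolo.

import torch
import torch.nn as nn

class MSCAM(nn.Module):
    # Multi-scale channel attention: a global (pooled) branch plus a local
    # (point-wise) branch produce per-element fusion weights in [0, 1].
    def __init__(self, channels, reduction=4):
        super().__init__()
        mid = max(channels // reduction, 1)
        self.local_att = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels),
        )
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.sigmoid(self.local_att(x) + self.global_att(x))

class AFF(nn.Module):
    # Fuse two feature maps (a same-layer or skip-connection pair) with
    # attention weights computed from their initial fusion (the sum).
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.attention = MSCAM(channels, reduction)

    def forward(self, x, residual):
        w = self.attention(x + residual)
        return w * x + (1.0 - w) * residual

# Usage: fuse two 64-channel maps (batch of 2) from a short skip connection.
a, b = torch.randn(2, 64, 40, 40), torch.randn(2, 64, 40, 40)
print(AFF(64)(a, b).shape)  # torch.Size([2, 64, 40, 40])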
Updated: 2021-07-12