Lightweight target detection algorithm based on YOLOv4
Journal of Real-Time Image Processing (IF 2.9), Pub Date: 2022-09-10, DOI: 10.1007/s11554-022-01251-x
Chuan Liu, Xianchao Wang, Qilin Wu, Jiabao Jiang

To address the problem that the YOLOv4 model has a large number of parameters and is therefore difficult to deploy on edge computing devices, a lightweight target detection algorithm (Light-YOLOv4) based on YOLOv4 is proposed. The algorithm replaces the backbone feature extraction network of YOLOv4 with the GhostNet structure and introduces depthwise separable convolution in place of vanilla convolution, which greatly reduces the number of parameters of the original network. Light-YOLOv4 also replaces the ReLU activation function in the deep layers of GhostNet with an improved lightweight activation function, H-MetaACON, which improves detection accuracy while leaving the parameter count and computational cost essentially unchanged. Finally, a coordinate attention module is added to the effective feature layers and the PANet upsampling module, so that the model captures cross-channel information together with direction-aware and position-aware information, further improving detection accuracy. Experimental results show that, compared with the original YOLOv4 model, the optimized model improves detection accuracy by 0.89% and reduces the model size to 17.48% of the original. Light-YOLOv4 effectively reduces the inference computation of the original model while maintaining high detection accuracy, and significantly improves detection speed on devices with limited computing power.
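As a rough illustration of the building blocks named in the abstract, the PyTorch sketch below shows a depthwise separable convolution, a Ghost module, and a coordinate attention block. It is a minimal sketch assembled from the published descriptions of these components, not the authors' implementation; the class names, channel counts, and the use of ReLU (standing in as a placeholder for the paper's H-MetaACON activation, whose exact form is not given in the abstract) are illustrative assumptions.

```python
# Minimal sketch of the lightweight building blocks mentioned in the abstract.
# Not the authors' code: layer names, hyperparameters, and the ReLU placeholder
# for the paper's H-MetaACON activation are assumptions for illustration only.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv followed by a 1x1 pointwise conv, replacing a vanilla conv."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)  # placeholder for H-MetaACON

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class GhostModule(nn.Module):
    """GhostNet block: a cheap depthwise conv generates 'ghost' feature maps
    from the intrinsic features produced by a small primary convolution."""

    def __init__(self, in_ch, out_ch, ratio=2, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio          # intrinsic feature channels
        ghost_ch = out_ch - init_ch        # cheaply generated channels
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),         # placeholder for H-MetaACON
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, ghost_ch, dw_size, 1, dw_size // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(ghost_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        return torch.cat([intrinsic, ghost], dim=1)


class CoordinateAttention(nn.Module):
    """Coordinate attention: pools along H and W separately so the attention
    weights capture cross-channel interactions plus position along each axis."""

    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                            # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)        # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w


if __name__ == "__main__":
    x = torch.randn(1, 64, 52, 52)                   # a typical mid-level feature map
    print(GhostModule(64, 128)(x).shape)             # torch.Size([1, 128, 52, 52])
    print(DepthwiseSeparableConv(64, 128)(x).shape)  # torch.Size([1, 128, 52, 52])
    print(CoordinateAttention(64)(x).shape)          # torch.Size([1, 64, 52, 52])
```

In the paper's design, blocks of this kind would take the place of the original backbone convolutions and sit on the effective feature layers feeding the PANet upsampling path; the exact placement and the H-MetaACON formulation follow the full paper rather than this sketch.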


