A Low Coupling and Lightweight Algorithm for Ship Detection in Optical Remote Sensing Images
IEEE Geoscience and Remote Sensing Letters (IF 4.8), Pub Date: 2022-07-06, DOI: 10.1109/lgrs.2022.3188850
Guochao Deng, Qin Wang, Jianfei Jiang, Qirun Hong, Naifeng Jing, Weiguang Sheng, Zhigang Mao

In recent years, many ship detection algorithms based on convolutional neural networks (CNNs) have been proposed to improve detection performance. However, as model complexity and size grow, deploying these models on resource-constrained edge platforms becomes challenging. In this letter, a low-coupling, anchor-free ship detection algorithm is proposed that reduces model complexity while retaining competitive accuracy. The proposed low-coupling network (LCNet) is easy to deploy, speeds up inference, and improves memory utilization. In addition, we propose a model compression process that combines quantization-aware training (QAT) with Taylor-expansion-based structural pruning, which can effectively reduce the model size to meet hardware resource constraints. Comparative experiments demonstrate that LCNet outperforms state-of-the-art ship detection and natural object detection algorithms, reaching 95.27% mAP and an 88.91% F1 score on the HRSC2016 dataset. Our model compression method also achieves a compression ratio of at least 80% with negligible performance loss.
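
The abstract names the two compression steps without their mechanics. Below is a minimal PyTorch-style sketch of first-order Taylor-expansion channel scoring for structural pruning, with a commented QAT setup; the model, loss, layer names, and 20% pruning ratio are illustrative assumptions, not details taken from the letter.

```python
# Minimal sketch (assumption, not the letter's code): first-order Taylor
# importance for structural (channel-level) pruning of a convolutional layer.
import torch
import torch.nn as nn

def taylor_channel_importance(conv: nn.Conv2d) -> torch.Tensor:
    """Estimate the loss change caused by removing each output channel.

    First-order Taylor approximation: score each filter by |sum(grad * weight)|,
    computed after a backward pass on a calibration batch.
    """
    w = conv.weight          # shape: (out_channels, in_channels, kH, kW)
    g = conv.weight.grad     # same shape; requires loss.backward() beforehand
    return (g * w).sum(dim=(1, 2, 3)).abs()   # one score per output channel

# Hypothetical usage: score filters, then mark the lowest-ranked 20% for removal.
# loss = criterion(model(images), targets)
# loss.backward()
# scores = taylor_channel_importance(model.backbone.conv1)
# prune_idx = scores.argsort()[: int(0.2 * scores.numel())]

# QAT setup sketch (PyTorch's standard workflow, not necessarily the authors'):
# model.train()
# model.qconfig = torch.ao.quantization.get_default_qat_qconfig("fbgemm")
# torch.ao.quantization.prepare_qat(model, inplace=True)
# ...fine-tune, then torch.ao.quantization.convert(model.eval()) for int8 inference.
```

Channels with the smallest scores are assumed to contribute least to the loss and are removed together with their downstream connections, which is what makes this pruning structural rather than unstructured.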

Updated: 2022-07-06