Association Loss for Visual Object Detection
IEEE Signal Processing Letters (IF 3.9) Pub Date: 2020-01-01, DOI: 10.1109/lsp.2020.3013160
Dongli Xu , Jian Guan , Pengming Feng , Wenwu Wang

Convolutional neural networks (CNNs) are a popular choice for visual object detection, where two sub-nets are often used to perform object classification and localization separately. However, the intrinsic relation between the localization and classification sub-nets has not been exploited explicitly for object detection. In this letter, we propose a novel association loss, namely the proxy squared error (PSE) loss, to entangle the two sub-nets, and thus use the dependency between the classification and localization scores obtained from these two sub-nets to improve detection performance. We evaluate the proposed loss on the MS-COCO dataset and compare it with the loss used in a recent baseline, i.e., the fully convolutional one-stage (FCOS) detector. The results show that our method improves $\mathrm{AP}$ from 33.8 to 35.4 and $\mathrm{AP}_{75}$ from 35.4 to 37.8 over the FCOS baseline.
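The abstract does not give the exact form of the PSE loss, but the idea of associating the two heads can be sketched. Below is a minimal PyTorch sketch, assuming the association penalizes the squared gap between each positive sample's classification confidence and its localization quality (e.g., the IoU of the predicted box with its matched ground truth); the function name pse_loss and its signature are hypothetical illustrations, not the authors' implementation.

import torch

def pse_loss(cls_scores: torch.Tensor, loc_scores: torch.Tensor) -> torch.Tensor:
    """Hypothetical proxy-squared-error-style association loss.

    Penalizes the squared gap between the classification confidence and
    the localization quality of each positive sample, so gradients flow
    through both sub-nets and tie their outputs together.

    cls_scores: sigmoid classification confidences, shape (N,)
    loc_scores: localization quality scores, e.g. IoU of each predicted
                box with its matched ground truth, shape (N,)
    """
    return ((cls_scores - loc_scores) ** 2).mean()

# Toy usage: scores for four positive anchors/points.
cls = torch.tensor([0.9, 0.6, 0.8, 0.3], requires_grad=True)
iou = torch.tensor([0.85, 0.70, 0.40, 0.35])
loss = pse_loss(cls, iou)
loss.backward()  # gradients couple classification to localization quality
print(loss.item())

In a detector such as FCOS, a term like this would be added to the usual classification and regression losses for positive samples only, so that a confidently classified box is also pushed to be well localized, and vice versa.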
