Arbitrary-oriented object detection via dense feature fusion and attention model for remote sensing super-resolution image
Neural Computing and Applications (IF 6) Pub Date: 2020-05-07, DOI: 10.1007/s00521-020-04893-9
Fuhao Zou , Wei Xiao , Wanting Ji , Kunkun He , Zhixiang Yang , Jingkuan Song , Helen Zhou , Kai Li

In this paper, we develop a new arbitrary-oriented, end-to-end object detection method to further push the frontier of object detection for remote sensing images. The proposed method comprehensively combines multiple strategies, including an attention mechanism, feature fusion, rotated region proposals, and super-resolution pre-processing, to boost localization and classification performance within a Faster R-CNN-like framework. Specifically, a channel attention network is integrated to selectively enhance useful features and suppress useless ones. Next, a dense feature fusion network is designed on top of a multi-scale detection framework; it fuses features from multiple layers to improve sensitivity to small objects. In addition, since the objects to be detected are often densely arranged and appear in various orientations, we design a rotated anchor strategy to reduce redundant detection regions. Extensive experiments on two public remote sensing datasets, DOTA and NWPU VHR-10, and the scene text dataset ICDAR2015 demonstrate that the proposed method is competitive with, or even superior to, state-of-the-art methods such as R2CNN and R2CNN++.
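The abstract does not specify the exact form of the channel attention network; a common construction for this purpose is squeeze-and-excitation-style gating, where each channel of a feature map is rescaled by a learned scalar in (0, 1). The sketch below illustrates that general idea in NumPy with randomly initialized (untrained) weights — the layer sizes, `reduction` ratio, and weight initialization are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def channel_attention(features, reduction=4):
    """SE-style channel attention sketch (illustrative, untrained).

    features: array of shape (C, H, W).
    Squeeze: global average pooling per channel -> vector of length C.
    Excite:  two-layer bottleneck MLP + sigmoid -> one gate per channel.
    Rescale: multiply each channel by its gate.
    """
    c = features.shape[0]
    # Squeeze: per-channel global average pooling -> shape (C,)
    z = features.mean(axis=(1, 2))
    # Excite: bottleneck MLP; random weights stand in for learned ones
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) / np.sqrt(c)
    w2 = rng.standard_normal((c, c // reduction)) / np.sqrt(c // reduction)
    hidden = np.maximum(w1 @ z, 0.0)                # ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid, each in (0, 1)
    # Rescale: broadcast one gate over each channel's H x W plane
    return features * gates[:, None, None]

feats = np.random.default_rng(1).standard_normal((8, 16, 16))
out = channel_attention(feats)
print(out.shape)  # (8, 16, 16)
```

In a trained detector the two weight matrices would be learned end-to-end, so channels carrying useful evidence receive gates near 1 while uninformative channels are suppressed toward 0 — matching the "enhance useful features and suppress useless ones" role described above.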




Updated: 2020-05-07