FSANet: Feature-and-Spatial-Aligned Network for Tiny Object Detection in Remote Sensing Images
IEEE Transactions on Geoscience and Remote Sensing (IF 8.2), Pub Date: 2022-09-08, DOI: 10.1109/tgrs.2022.3205052
Jixiang Wu, Zongxu Pan, Bin Lei, Yuxin Hu
Recently, many studies have successfully exploited convolutional neural networks to improve object detection performance in remote sensing images. However, detecting tiny objects remains challenging because of two largely neglected problems: i) the features of tiny objects are insufficient and prone to aliasing during multiresolution aggregation; ii) tiny objects are position-sensitive, which leads to poor localization. In this article, we propose a feature-and-spatial-aligned network (FSANet) to alleviate these issues. FSANet is an anchor-free detector that uses an alignment mechanism and a progressive optimization strategy to obtain more discriminative features and more accurate localization results. Concretely, we first present a feature-aware alignment module ($\text{FA}^{2}\text{M}$) to align features of different resolutions during fusion. By learning transformation offsets, $\text{FA}^{2}\text{M}$ re-encodes pixel-spatial information between feature maps of adjacent levels and adaptively adjusts the regular feature interpolation. In addition, a spatial-aware guidance head (SAGH) is introduced to iteratively optimize network predictions in a coarse-to-fine fashion. SAGH first predicts the object shape at each spatial location on the feature maps; for more precise predictions, it then captures geometry-aware convolutional features accordingly to update the coarse localization estimates. Extensive experiments on three tiny object detection datasets, i.e., AI-TOD, GF1-LRSD, and TinyPerson, demonstrate the effectiveness of our approach.
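The abstract does not give implementation details, but the $\text{FA}^{2}\text{M}$ idea of learning transformation offsets to adjust the regular interpolation during cross-level fusion can be illustrated with a minimal PyTorch sketch. The module name `FeatureAlignFusion`, the single `offset_conv` layer, and the per-pixel (dx, dy) parameterization below are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAlignFusion(nn.Module):
    """Sketch of an offset-guided aligned fusion step (FA2M-like, assumed).

    The coarser map is upsampled to the finer resolution, a small conv
    predicts per-pixel (dx, dy) offsets from the concatenated pair, and the
    upsampled map is re-sampled at the offset positions before the
    element-wise sum, instead of relying on plain bilinear interpolation.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Predict a 2-channel offset field (dx, dy) from both levels.
        self.offset_conv = nn.Conv2d(channels * 2, 2, kernel_size=3, padding=1)

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        n, _, h, w = fine.shape
        up = F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)
        offset = self.offset_conv(torch.cat([fine, up], dim=1))  # (N, 2, H, W)

        # Base sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=fine.device),
            torch.linspace(-1, 1, w, device=fine.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)

        # Convert pixel offsets to normalized coordinates and warp the
        # upsampled map before fusing it with the finer level.
        delta = offset.permute(0, 2, 3, 1)
        delta = torch.stack(
            (delta[..., 0] * 2 / max(w - 1, 1), delta[..., 1] * 2 / max(h - 1, 1)),
            dim=-1,
        )
        aligned = F.grid_sample(up, base + delta, mode="bilinear", align_corners=False)
        return fine + aligned
```

In this reading, the learned offsets play the role of the "transformation offsets" mentioned in the abstract: they decide where each fine-resolution pixel samples the coarser feature map, so that misaligned responses from adjacent pyramid levels are not simply averaged together.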

Updated: 2022-09-08