Coordinate Attention for Efficient Mobile Network Design
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2021-03-04, DOI: arxiv-2103.02907
Qibin Hou, Daquan Zhou, Jiashi Feng

Recent studies on mobile network design have demonstrated the remarkable effectiveness of channel attention (e.g., the Squeeze-and-Excitation attention) for lifting model performance, but they generally neglect positional information, which is important for generating spatially selective attention maps. In this paper, we propose a novel attention mechanism for mobile networks by embedding positional information into channel attention, which we call "coordinate attention". Unlike channel attention, which transforms a feature tensor into a single feature vector via 2D global pooling, coordinate attention factorizes channel attention into two 1D feature encoding processes that aggregate features along the two spatial directions, respectively. In this way, long-range dependencies can be captured along one spatial direction while precise positional information is preserved along the other. The resulting feature maps are then encoded separately into a pair of direction-aware and position-sensitive attention maps that can be complementarily applied to the input feature map to augment the representations of the objects of interest. Our coordinate attention is simple and can be flexibly plugged into classic mobile networks, such as MobileNetV2, MobileNeXt, and EfficientNet, with nearly no computational overhead. Extensive experiments demonstrate that our coordinate attention is not only beneficial to ImageNet classification but, more interestingly, behaves better in downstream tasks such as object detection and semantic segmentation. Code is available at https://github.com/Andrew-Qibin/CoordAttention.

Updated: 2021-03-05