Person Re-Identification via Attention Pyramid
IEEE Transactions on Image Processing (IF 10.8), Pub Date: 2021-09-02, DOI: 10.1109/tip.2021.3107211
Guangyi Chen , Tianpei Gu , Jiwen Lu , Jin-An Bao , Jie Zhou

In this paper, we propose an attention pyramid method for person re-identification. Unlike conventional attention-based methods, which learn only a global attention map, our attention pyramid exploits attention regions in a multi-scale manner, because human attention varies with scale. The attention pyramid imitates human visual perception, which tends to notice the foreground person against a cluttered background and, on closer observation, further focuses on specifics such as the color of a shirt. Concretely, we describe our attention pyramid with a "split-attend-merge-stack" principle. We first split the features into multiple local parts and learn the corresponding attentions. We then merge the local attentions and stack the merged attentions across levels with a residual connection to form the attention pyramid. The proposed attention pyramid is a lightweight, plug-and-play module that can be applied to off-the-shelf models. We implement it with two different attention mechanisms: channel-wise attention and spatial attention. We evaluate our method on four large-scale person re-identification benchmarks: Market-1501, DukeMTMC, CUHK03, and MSMT17. Experimental results demonstrate the superiority of our method, which outperforms state-of-the-art methods by a large margin at limited computational cost. Code is available at https://github.com/CHENGY12/APNet .
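The "split-attend-merge-stack" principle described above can be illustrated with a minimal sketch. The following NumPy code is an assumption-laden toy version (not the authors' APNet implementation): it uses a squeeze-and-excitation-style channel attention as the per-part "attend" step, and the level sizes in `num_splits` are illustrative choices.

```python
import numpy as np

def channel_attention(feat):
    """Toy channel attention: sigmoid gate over globally pooled channel means.
    feat: (C, H, W) feature map; returns per-channel weights of shape (C,)."""
    squeezed = feat.mean(axis=(1, 2))           # global average pool -> (C,)
    return 1.0 / (1.0 + np.exp(-squeezed))      # sigmoid gate per channel

def attention_pyramid(feat, num_splits=(4, 2, 1)):
    """Sketch of "split-attend-merge-stack" (illustrative, not APNet itself):
    at each pyramid level, split the channels into parts, learn a local
    attention per part, merge the local attentions, and stack levels with a
    residual connection."""
    out = feat
    for parts in num_splits:                               # coarse-to-fine levels
        chunks = np.array_split(out, parts, axis=0)        # split
        local = [channel_attention(c) for c in chunks]     # attend (per part)
        merged = np.concatenate(local)                     # merge -> (C,)
        out = out + out * merged[:, None, None]            # stack with residual
    return out

x = np.random.rand(8, 4, 4)   # (channels, height, width)
y = attention_pyramid(x)
print(y.shape)                # same shape as the input: (8, 4, 4)
```

Because each level reweights features rather than replacing them (the residual connection), the module preserves the input shape and can be dropped into an existing backbone between convolutional stages, which is what makes the real module plug-and-play.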
