Diverse part attentive network for video-based person re-identification
Pattern Recognition Letters ( IF 3.9 ) Pub Date : 2021-06-15 , DOI: 10.1016/j.patrec.2021.05.020
Xiujun Shu , Ge Li , Longhui Wei , Jia-Xing Zhong , Xianghao Zang , Shiliang Zhang , Yaowei Wang , Yongsheng Liang , Qi Tian

Attention mechanisms have achieved success in video-based person re-identification (re-ID). However, current global attention mechanisms tend to focus on the most salient parts, e.g., clothes, and ignore other subtle but valuable cues, e.g., hair, bags, and shoes. They therefore do not make full use of the valuable information carried by diverse parts of the human body. To tackle this issue, we propose a Diverse Part Attentive Network (DPAN) to exploit discriminative and diverse body cues. The framework consists of two modules: spatial diverse part attention and temporal diverse part attention. The spatial module utilizes channel grouping to exploit diverse parts of human bodies, including both salient and subtle parts. The temporal module learns diverse weights for fusing the learned frame-level features. Moreover, the framework is lightweight, introducing only marginal additional parameters and computational cost. Extensive experiments were conducted on three popular benchmarks, i.e., iLIDS-VID, PRID2011, and MARS. Our method achieves competitive performance on these datasets compared with state-of-the-art methods.
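The two modules described above can be sketched in a minimal, hypothetical form: channel grouping splits the feature channels so that each group attends to a different body part spatially, and a temporal weighting then fuses the per-frame features. This is not the authors' implementation; the group count, the mean-based spatial attention, and the norm-based temporal scoring are illustrative stand-ins for the learned components of DPAN.

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def diverse_part_attention(feats, num_groups=4):
    """Hypothetical sketch of DPAN-style spatial + temporal attention.

    feats: (T, C, H, W) frame-level feature maps for one video clip.
    Returns a single (C,) clip-level descriptor.
    """
    T, C, H, W = feats.shape
    assert C % num_groups == 0, "channels must divide evenly into groups"

    # --- spatial diverse part attention via channel grouping ---
    # each of the G channel groups gets its own spatial attention map,
    # encouraging different groups to focus on different body parts
    groups = feats.reshape(T, num_groups, C // num_groups, H * W)
    attn = softmax(groups.mean(axis=2), axis=-1)            # (T, G, H*W)
    pooled = (groups * attn[:, :, None, :]).sum(axis=-1)    # (T, G, C/G)
    frame_feats = pooled.reshape(T, C)                      # (T, C)

    # --- temporal diverse part attention ---
    # score each frame (here by feature norm, standing in for a
    # learned scorer) and fuse frames with softmax-normalized weights
    scores = np.linalg.norm(frame_feats, axis=1)            # (T,)
    w = softmax(scores, axis=0)
    return (frame_feats * w[:, None]).sum(axis=0)           # (C,)
```

For example, a clip of 8 frames with 64-channel, 16x8 feature maps (`np.random.rand(8, 64, 16, 8)`) is reduced to a single 64-dimensional descriptor, with the per-group spatial maps and per-frame weights computed internally.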



Updated: 2021-06-28