Infrared-visible cross-modal person re-identification via dual-attention collaborative learning
Signal Processing: Image Communication (IF 3.4) Pub Date: 2022-09-13, DOI: 10.1016/j.image.2022.116868
Yunshang Li, Ying Chen

Person re-identification is a retrieval task that searches for the same person across different cameras. Infrared-visible cross-modal re-identification (VI-ReID) is particularly challenging because the cross-modality discrepancy often makes the intra-class distance larger than the inter-class distance. In this paper, a dual-attention collaborative (DAC) learning method is proposed, which unites channel- and spatial-attentive deep features and obtains supplementary information for multiple classifiers via a cross-modal consistency constraint. Channel attention and part-wise spatial pooling are adopted for discriminative feature learning, and a multiple-classifier strategy with a cross-modal consistency constraint is presented for cross-modal identification. In this way, complementary information between the modality-sharable classifier and the modality-specific classifiers can be better utilized. Experimental results show that the proposed method clearly outperforms the baseline method by margins of 9.83% Rank-1 and 6.84% mAP on SYSU-MM01.
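
The abstract names two concrete components: a dual-attention feature extractor (channel attention plus part-wise spatial pooling) and a consistency constraint between a modality-sharable classifier and modality-specific classifiers. The PyTorch sketch below shows one plausible realization of those pieces; it is illustrative only, and the module names, number of parts, reduction ratio, and the use of KL divergence for the consistency term are assumptions, not details taken from the paper.

```python
# Illustrative sketch only: names and hyperparameters are assumptions,
# not the authors' actual DAC implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # global average pool -> (B, C)
        return x * w.unsqueeze(-1).unsqueeze(-1)


class PartWisePooling(nn.Module):
    """Split the feature map into horizontal stripes and pool each part."""
    def __init__(self, num_parts=6):
        super().__init__()
        self.num_parts = num_parts

    def forward(self, x):                       # x: (B, C, H, W)
        parts = F.adaptive_avg_pool2d(x, (self.num_parts, 1))  # (B, C, P, 1)
        return parts.squeeze(-1).permute(0, 2, 1)               # (B, P, C)


def consistency_loss(shared_logits, specific_logits):
    """Cross-modal consistency: pull the modality-sharable classifier's
    predictions toward those of a modality-specific classifier.
    KL divergence is one plausible choice of distance."""
    p_shared = F.log_softmax(shared_logits, dim=1)
    p_specific = F.softmax(specific_logits, dim=1)
    return F.kl_div(p_shared, p_specific, reduction="batchmean")
```

In a full training pipeline such a consistency term would typically be added, with a weighting hyperparameter, to the identity-classification losses of each classifier; the abstract does not specify the backbone, the exact loss form, or the weights.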




Updated: 2022-09-13