Global-Local Multiple Granularity Learning for Cross-Modality Visible-Infrared Person Reidentification.
IEEE Transactions on Neural Networks and Learning Systems ( IF 10.4 ) Pub Date : 2021-06-17 , DOI: 10.1109/tnnls.2021.3085978
Liyan Zhang , Guodong Du , Fan Liu , Huawei Tu , Xiangbo Shu

Cross-modality visible-infrared person reidentification (VI-ReID), which aims to retrieve pedestrian images captured by both visible and infrared cameras, is a challenging but essential task for smart surveillance systems. The large gap between visible and infrared images leads to a substantial cross-modality discrepancy as well as intraclass variations. Most existing VI-ReID methods learn discriminative modality-sharable features from either global or part-based representations alone, and lack effective optimization objectives. In this article, we propose a novel global-local multichannel (GLMC) network for VI-ReID that learns multigranularity representations from both global and local features. The coarse- and fine-grained information complement each other to form a more discriminative feature descriptor. In addition, we propose a novel center loss function that simultaneously improves intraclass cross-modality similarity and enlarges interclass discrepancy, explicitly handling the cross-modality discrepancy and avoiding model fluctuation. Experimental results on two public datasets demonstrate that the proposed method outperforms state-of-the-art approaches.
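The center loss described above pulls the visible and infrared feature centers of the same identity together while pushing the centers of different identities apart. The following is a minimal illustrative sketch of that idea in plain Python, not the paper's exact formulation; the margin-based push term and the class-center dictionaries are assumptions made for the example.

```python
import math

def center(features):
    # Mean vector of a list of equal-length feature vectors.
    dim = len(features[0])
    return [sum(f[i] for f in features) / len(features) for i in range(dim)]

def dist(a, b):
    # Euclidean distance between two vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cross_modality_center_loss(vis, ir, margin=1.0):
    """Illustrative center loss for VI-ReID (sketch, not the GLMC paper's
    exact objective).

    vis, ir: dicts mapping class id -> list of feature vectors extracted
    from the visible and infrared modalities, respectively.

    The 'pull' term shrinks the distance between a class's visible and
    infrared centers (intraclass cross-modality similarity); the 'push'
    term penalizes pairs of class centers closer than `margin`
    (interclass discrepancy).
    """
    pull = 0.0
    centers = {}
    for c in vis:
        cv, ci = center(vis[c]), center(ir[c])
        pull += dist(cv, ci)                     # intraclass cross-modality term
        centers[c] = center(vis[c] + ir[c])      # class center over both modalities
    push = 0.0
    ids = sorted(centers)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            # hinge: only penalize centers closer than the margin
            push += max(0.0, margin - dist(centers[ids[i]], centers[ids[j]]))
    return pull + push
```

In a real training loop the features would come from the network's global and local branches and the loss would be minimized jointly with an identity-classification objective; here the point is only the pull/push structure of the center loss.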

Updated: 2021-06-17