Modality Adversarial Neural Network for Visible-Thermal Person Re-identification
Pattern Recognition (IF 7.5), Pub Date: 2020-11-01, DOI: 10.1016/j.patcog.2020.107533
Yi Hao, Jie Li, Nannan Wang, Xinbo Gao

Abstract Existing Visible-Thermal Person Re-identification (VT-REID) methods usually adopt two-stream networks for cross-modality images, where the two streams are trained to extract features from the two modalities separately. In contrast, we design a Modality Adversarial Neural Network (MANN) to solve the VT-REID problem. The proposed MANN consists of a one-stream feature extractor and a modality discriminator. The feature extractor processes the heterogeneous images to produce modality-invariant features, while the modality discriminator aims to distinguish whether an extracted feature comes from the visible or the thermal modality. In addition, a dual-constrained triplet loss is introduced for better cross-modality matching performance. Experiments on two cross-modality person re-identification datasets show that MANN effectively learns modality-invariant features and outperforms state-of-the-art methods by a large margin.
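A minimal PyTorch-style sketch of the adversarial idea described in the abstract: a single (one-stream) feature extractor shared by visible and thermal images, plus a modality discriminator trained to tell the two modalities apart while the extractor is pushed, here via a gradient-reversal layer, to make them indistinguishable. All module names, layer sizes, and the use of gradient reversal are illustrative assumptions, not the authors' released code, and the dual-constrained triplet loss is not reproduced because the abstract gives no details of it.

import torch
import torch.nn as nn
from torchvision import models


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class OneStreamExtractor(nn.Module):
    """Single backbone shared by both modalities (ResNet-50 used here as a stand-in)."""

    def __init__(self, feat_dim=512):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)


class ModalityDiscriminator(nn.Module):
    """Predicts whether a feature came from a visible or a thermal image."""

    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True), nn.Linear(256, 2)
        )

    def forward(self, feat, lambd=1.0):
        # Gradient reversal makes the extractor work against the discriminator.
        return self.net(GradReverse.apply(feat, lambd))


if __name__ == "__main__":
    extractor, discriminator = OneStreamExtractor(), ModalityDiscriminator()
    vis = torch.randn(4, 3, 256, 128)      # visible batch
    thr = torch.randn(4, 3, 256, 128)      # thermal batch (replicated to 3 channels)
    feats = extractor(torch.cat([vis, thr]))
    modality_labels = torch.tensor([0] * 4 + [1] * 4)   # 0 = visible, 1 = thermal
    adv_loss = nn.CrossEntropyLoss()(discriminator(feats), modality_labels)
    adv_loss.backward()   # discriminator learns modality; extractor receives reversed gradients

In practice this adversarial term would be combined with identity and triplet objectives on the extracted features; the sketch only shows how a one-stream extractor and a modality discriminator can be trained against each other.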

Updated: 2020-11-01