Learning with noisy labels method for unsupervised domain adaptive person re-identification
Neurocomputing (IF 6) Pub Date: 2021-05-04, DOI: 10.1016/j.neucom.2021.04.120
Xiaodi Zhu, Yanfeng Li, Jia Sun, Houjin Chen, Jinlei Zhu

Unsupervised domain adaptive (UDA) person re-identification (re-ID) aims to adapt a model trained on a labeled source domain to an unlabeled target domain. For pseudo-label-based UDA methods, pseudo-label noise is the main cause of model degradation, and the factors that produce this noise are complex. In this paper, a novel learning with noisy labels (LNL) method for UDA person re-ID is proposed to address this problem by analyzing the noisy data itself. LNL handles noisy data from two aspects: noise correction and noise resistance. Following the idea of neighbor consistency, pseudo-label correction (PLC) based on sample similarity is designed to correct noisy pseudo labels before training. To address the problem of noisy labels in deep learning, noise recognition based on the similarity and confidence relationship (SACR) is designed. Then, an easy-to-hard model collaborative training (MCT) strategy is developed, which resists noise during the training process and yields a more robust model. To further avoid overfitting to noisy samples, a re-weighting (RW) method is employed in MCT. The proposed LNL model achieves considerable results of 75.2%/88.9% and 62.5%/77.4% mAP/Rank-1 on the DukeMTMC-reID-to-Market-1501 and Market-1501-to-DukeMTMC-reID UDA tasks, respectively.
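To make the neighbor-consistency idea behind PLC concrete, below is a minimal illustrative sketch, not the paper's actual algorithm: each sample's pseudo label is replaced by the majority label among its k nearest neighbors in feature space when the neighborhood is sufficiently consistent. The function name `correct_pseudo_labels` and the parameters `k` and `agree_ratio` are assumptions introduced here for illustration.

```python
import numpy as np

def correct_pseudo_labels(features, pseudo_labels, k=10, agree_ratio=0.5):
    """Hypothetical neighbor-consistency correction (not the paper's exact PLC).

    features:      (N, D) embeddings from the current model.
    pseudo_labels: (N,) cluster ids produced by, e.g., DBSCAN or k-means.
    Returns corrected labels of shape (N,).
    """
    # Cosine similarity between all pairs of L2-normalised features.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T
    np.fill_diagonal(sim, -np.inf)                 # exclude the sample itself
    neighbours = np.argsort(-sim, axis=1)[:, :k]   # indices of k nearest neighbours

    corrected = pseudo_labels.copy()
    for i, nbr in enumerate(neighbours):
        votes = pseudo_labels[nbr]
        labels, counts = np.unique(votes, return_counts=True)
        j = np.argmax(counts)
        # Overwrite the label only when the neighbourhood votes consistently enough.
        if counts[j] / k >= agree_ratio and labels[j] != pseudo_labels[i]:
            corrected[i] = labels[j]
    return corrected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 64)).astype(np.float32)
    labels = rng.integers(0, 10, size=200)
    print(correct_pseudo_labels(feats, labels)[:20])
```

In this sketch the `agree_ratio` threshold plays the role of a conservatism knob: labels are only changed when the evidence from neighbors is strong, which mirrors the general motivation of correcting pseudo labels before training rather than training on them as-is.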




Updated: 2021-05-13