Beyond Triplet Loss: Person Re-identification with Fine-grained Difference-aware Pairwise Loss
arXiv - CS - Information Retrieval Pub Date : 2020-09-22 , DOI: arxiv-2009.10295
Cheng Yan, Guansong Pang, Xiao Bai, Jun Zhou, Lin Gu

Person Re-IDentification (ReID) aims to re-identify persons from different viewpoints across multiple cameras. Capturing fine-grained appearance differences is often the key to accurate person ReID, because many identities can be distinguished only by these fine-grained differences. However, most state-of-the-art person ReID approaches, typically driven by a triplet loss, fail to effectively learn fine-grained features, as they focus more on differentiating large appearance differences. To address this issue, we introduce a novel pairwise loss function that enables ReID models to learn fine-grained features by adaptively enforcing an exponential penalization on image pairs with small differences and a bounded penalization on image pairs with large differences. The proposed loss is generic and can be used as a plug-in replacement for the triplet loss to significantly enhance different types of state-of-the-art approaches. Experimental results on four benchmark datasets show that the proposed loss substantially outperforms a number of popular loss functions by large margins; it also significantly improves data efficiency.
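The abstract does not give the exact form of the loss, but the penalization scheme it describes can be sketched as follows. This is a hypothetical reading, not the paper's actual formula: the function name, the exponential/saturating terms, and the constants `alpha` and `beta` are all assumptions chosen only to illustrate "exponential penalty on small differences, bounded penalty on large differences".

```python
import numpy as np

def difference_aware_pairwise_loss(dist, same_id, alpha=4.0, beta=1.0):
    """Hypothetical sketch of a difference-aware pairwise loss.

    dist:    1-D array of embedding distances, one per image pair
    same_id: boolean array, True where the pair shares an identity
    """
    dist = np.asarray(dist, dtype=float)
    same_id = np.asarray(same_id, dtype=bool)
    # Negative pairs with a small embedding distance (fine-grained
    # look-alikes) receive an exponentially growing penalty as dist -> 0,
    # while well-separated negatives contribute almost nothing.
    neg_loss = np.exp(-alpha * dist)
    # Positive pairs are pulled together with a saturating (bounded)
    # penalty, so already-distant positives cannot dominate the gradient.
    pos_loss = np.tanh(beta * dist)
    return float(np.where(same_id, pos_loss, neg_loss).mean())
```

Under this sketch, a hard negative pair at distance 0.1 is penalized far more heavily than one at distance 2.0, whereas the positive-pair penalty is capped at 1 regardless of distance, which matches the bounded behavior the abstract describes.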

Updated: 2020-09-23