Semantic-Guided Pixel Sampling for Cloth-Changing Person Re-Identification
IEEE Signal Processing Letters (IF 3.2), Pub Date: 2021-06-23, DOI: 10.1109/lsp.2021.3091924
Xiujun Shu, Ge Li, Xiao Wang, Weijian Ruan, Qi Tian

Cloth-changing person re-identification (re-ID) is a newly emerging research topic that aims at retrieving pedestrians who have changed their clothes. This task is quite challenging and has not been fully studied to date. Existing works mainly focus on body shape or contour sketches, but they are not robust enough to view and posture variations. The key to this task is to exploit cloth-irrelevant cues. This paper proposes a semantic-guided pixel sampling approach for the cloth-changing person re-ID task. We do not explicitly define which features to extract but force the model to automatically learn cloth-irrelevant cues. Specifically, we first recognize the pedestrian's upper clothes and pants, then randomly change them by sampling pixels from other pedestrians. The changed samples retain the identity labels but exchange the pixels of clothes or pants among different pedestrians. Besides, we adopt a loss function to constrain the learned features to remain consistent before and after the changes. In this way, the model is forced to learn cues that are irrelevant to upper clothes and pants. We conduct extensive experiments on the recently released PRCC dataset. Our method achieves 65.8% Rank-1 accuracy, outperforming previous methods by a large margin. The code is available at https://github.com/shuxjweb/pixel_sampling.git.
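
The core augmentation can be illustrated with a short sketch. The snippet below is a minimal, hypothetical implementation of the pixel-sampling idea described in the abstract, assuming per-pixel human-parsing masks (with labels for upper clothes and pants) are already available; the label ids, the function name pixel_sampling, and the NumPy-based implementation are illustrative assumptions, not the authors' released code.

import numpy as np

# Illustrative parsing-label ids; the actual ids depend on the human parser used.
UPPER_CLOTHES = 1
PANTS = 2

def pixel_sampling(img, mask, donor_img, donor_mask, parts=(UPPER_CLOTHES, PANTS)):
    """Fill the clothing regions of `img` with pixels sampled from `donor_img`.

    img, donor_img : H x W x 3 uint8 images of two different pedestrians.
    mask, donor_mask : H x W arrays of per-pixel semantic labels from a human parser.
    The returned image keeps the identity label of `img`.
    """
    out = img.copy()
    for part in parts:
        target = (mask == part)                       # pixels to overwrite in the anchor image
        donor_pixels = donor_img[donor_mask == part]  # clothing pixels of the other pedestrian
        if donor_pixels.shape[0] == 0 or not target.any():
            continue  # nothing to sample from or nothing to replace; leave this part as-is
        # Randomly sample donor pixels (with replacement) to cover the target region.
        idx = np.random.randint(0, donor_pixels.shape[0], size=int(target.sum()))
        out[target] = donor_pixels[idx]
    return out

The changed image keeps the original identity label; the paper additionally applies a consistency constraint between features of the original and changed samples, which is not shown in this sketch.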

Updated: 2021-06-23