Pose-guided feature region-based fusion network for occluded person re-identification
Multimedia Systems ( IF 3.5 ) Pub Date : 2021-02-14 , DOI: 10.1007/s00530-021-00752-2
Gengsheng Xie , Xianbin Wen , Liming Yuan , Jianchen Wang , Changlun Guo , Yansong Jia , Minghao Li

Learning discriminative features from training data while filtering out features arising from occlusions is critical in person-retrieval scenarios. Most current person re-identification (Re-ID) methods based on classification or deep metric representation learning tend to overlook occlusions in the training set. Representations derived from obstacles are easily over-fitted and misleading, since the obstacle is treated as part of the human body. To alleviate the occlusion problem, we propose a pose-guided feature region-based fusion network (PFRFN) that uses pose landmarks to guide local feature learning, so that each local feature retains good properties, and the representation-learning risk is evaluated separately through a per-part loss. Compared with using only a global classification loss, jointly considering the local losses and the results of robust pose estimation enables the deep network to learn representations of the body parts that are prominently displayed in the image and to remain discriminative in occluded scenes. Experimental results on multiple datasets, i.e., Market-1501, DukeMTMC, and CUHK03, demonstrate the effectiveness of our method in a variety of scenarios.
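The abstract describes combining a global classification loss with per-part losses, where pose estimation decides which body parts actually contribute. The paper's exact formulation is not given here, so the snippet below is only a minimal NumPy sketch of that idea under stated assumptions: `pfrfn_style_loss` and its visibility gating are hypothetical names and logic, not the authors' implementation.

```python
import numpy as np

def softmax_ce(logits, label):
    """Cross-entropy of a single sample's logits against an integer label."""
    z = logits - logits.max()              # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def pfrfn_style_loss(global_logits, part_logits, part_visible, label):
    """Hypothetical sketch of a PFRFN-style objective: a global
    classification loss plus the average of per-part losses, where a
    part contributes only if pose landmarks mark it as visible
    (i.e., occluded parts are excluded from the risk estimate)."""
    loss = softmax_ce(global_logits, label)
    visible = [softmax_ce(pl, label)
               for pl, vis in zip(part_logits, part_visible) if vis]
    if visible:                            # at least one unoccluded part
        loss += sum(visible) / len(visible)
    return loss
```

Gating the part losses this way illustrates the abstract's point: a part whose region is covered by an obstacle would otherwise push misleading gradients into the network, so it is simply dropped from the per-part term.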




Updated: 2021-02-15