Learning Sparse and Identity-Preserved Hidden Attributes for Person Re-Identification.
IEEE Transactions on Image Processing ( IF 10.8 ) Pub Date : 2019-10-17 , DOI: 10.1109/tip.2019.2946975
Zheng Wang , Junjun Jiang , Yang Wu , Mang Ye , Xiang Bai , Shin'ichi Satoh

Person re-identification (Re-ID) aims at matching person images captured in non-overlapping camera views. To represent person appearance, low-level visual features are sensitive to environmental changes, while high-level semantic attributes, such as "short-hair" or "long-hair", are relatively stable. Hence, researchers have started to design semantic attributes to reduce visual ambiguity. However, training a prediction model for semantic attributes requires plenty of annotations, which are hard to obtain in practical large-scale applications. To alleviate the reliance on annotation efforts, we propose to incrementally generate Deep Hidden Attributes (DHAs) based on a baseline deep network, without requiring new annotations. In particular, we propose an auto-encoder model that can be plugged into any deep network to mine latent information in an unsupervised manner. To optimize the effectiveness of DHAs, we extend the auto-encoder model with an additional orthogonal generation module, along with identity-preserving and sparsity constraints. 1) Orthogonal generation: to make DHAs different from each other, Singular Value Decomposition (SVD) is introduced to generate DHAs orthogonally. 2) Identity-preserving constraint: the generated DHAs should be distinctive enough to tell different persons apart, so we associate DHAs with person identities. 3) Sparsity constraint: to enhance the discriminability of DHAs, we also introduce a sparsity constraint that restricts the number of effective DHAs for each person. Experiments conducted on public datasets validate the effectiveness of the proposed network. On two large-scale datasets, Market-1501 and DukeMTMC-reID, the proposed method outperforms state-of-the-art methods.
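The three constraints listed above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the feature/attribute dimensions, the L1 form of the sparsity penalty, the softmax identity loss, and all function names are assumptions chosen to show the general idea of each term.

```python
import numpy as np

def orthogonalize(W):
    """Replace W by its nearest orthogonal factor via SVD.

    With W = U S V^T, the product U V^T has orthonormal columns, so each
    attribute direction (column) is orthogonal to the others -- a common way
    to realize an SVD-based orthogonal generation step.
    """
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

def sparsity_penalty(codes, lam=0.01):
    """L1 penalty (assumed form) restricting how many DHAs fire per person."""
    return lam * np.abs(codes).sum()

def identity_loss(codes, labels, W_cls):
    """Softmax cross-entropy tying DHA codes to person identities (sketch)."""
    logits = codes @ W_cls
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 16))          # hypothetical projection: 64-d features -> 16 DHAs
W_orth = orthogonalize(W)              # columns now mutually orthonormal

feats = rng.normal(size=(8, 64))       # a batch of 8 person features
codes = feats @ W_orth                 # DHA codes for the batch
labels = rng.integers(0, 4, size=8)    # 4 hypothetical person identities
W_cls = rng.normal(size=(16, 4))       # identity classifier over DHA codes

total = identity_loss(codes, labels, W_cls) + sparsity_penalty(codes)
```

In a real training loop, `total` would be combined with the auto-encoder reconstruction loss and minimized jointly; the orthogonalization would be applied to the generation module's weights rather than to a random matrix.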

Updated: 2020-04-22