Unsupervised Person Re-Identification by Deep Asymmetric Metric Embedding.
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 23.6) Pub Date: 2018-12-14, DOI: 10.1109/tpami.2018.2886878
Hong-Xing Yu, Ancong Wu, Wei-Shi Zheng

Person re-identification (Re-ID) aims to match identities across non-overlapping camera views. Researchers have proposed many supervised Re-ID models, which require large quantities of cross-view pairwise labelled data. This limits their scalability in many applications where a large amount of data from multiple disjoint camera views is available but unlabelled. Although some unsupervised Re-ID models have been proposed to address the scalability problem, they often suffer from the view-specific bias problem, which is caused by dramatic variations across different camera views, e.g., different illumination, viewpoints and occlusion. These dramatic variations induce specific feature distortions in different camera views, which severely hinder the search for cross-view discriminative information in unsupervised scenarios, since no label information is available to help alleviate the bias. We propose to explicitly address this problem by learning an unsupervised asymmetric distance metric based on cross-view clustering. The asymmetric distance metric allows specific feature transformations for each camera view to tackle the view-specific feature distortions. We then design a novel unsupervised loss function to embed the asymmetric metric into a deep neural network, and thereby develop a novel unsupervised deep framework named DEep Clustering-based Asymmetric MEtric Learning (DECAMEL). In this way, DECAMEL jointly learns the feature representation and the unsupervised asymmetric metric. DECAMEL learns a compact cross-view cluster structure of Re-ID data, which helps alleviate the view-specific bias and facilitates mining the potential cross-view discriminative information for unsupervised Re-ID. Extensive experiments on seven benchmark datasets whose sizes span several orders of magnitude show the effectiveness of our framework.
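The asymmetric metric described above can be read as giving each camera view its own linear transformation, so that projected features from different views become comparable and can be clustered jointly without labels. Below is a minimal, hedged sketch of that idea in PyTorch; it is illustrative only and not the authors' released implementation, and all names (ViewSpecificEmbedding, clustering_loss, embed_dim, the consistency weight lam) as well as the shared-backbone design are assumptions made for this example.

# Illustrative sketch only (assumption): view-specific projections plus a
# cross-view clustering pull, in the spirit of the asymmetric metric described
# in the abstract; not the authors' implementation.
import torch
import torch.nn as nn


class ViewSpecificEmbedding(nn.Module):
    # Shared feature extractor followed by one linear transform per camera view,
    # so each view gets its own projection to absorb view-specific distortion.
    def __init__(self, feat_dim: int, embed_dim: int, num_views: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, embed_dim), nn.ReLU())
        self.view_proj = nn.ModuleList(
            nn.Linear(embed_dim, embed_dim, bias=False) for _ in range(num_views)
        )

    def forward(self, x: torch.Tensor, view_ids: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)
        # Apply the projection belonging to each sample's camera view.
        return torch.stack(
            [self.view_proj[v](h[i]) for i, v in enumerate(view_ids.tolist())]
        )


def clustering_loss(embeddings, cluster_ids, centroids, view_projections, lam=1e-3):
    # Pull each projected sample toward its (cross-view) cluster centroid and
    # softly tie the view-specific projections together so they stay comparable.
    pull = ((embeddings - centroids[cluster_ids]) ** 2).sum(dim=1).mean()
    consistency = embeddings.new_zeros(())
    projections = list(view_projections)
    for i in range(len(projections)):
        for j in range(i + 1, len(projections)):
            consistency = consistency + (
                (projections[i].weight - projections[j].weight) ** 2
            ).sum()
    return pull + lam * consistency


# Hypothetical usage: 128-d input features from 2 camera views.
model = ViewSpecificEmbedding(feat_dim=128, embed_dim=64, num_views=2)
x = torch.randn(8, 128)                      # a batch of 8 feature vectors
views = torch.randint(0, 2, (8,))            # which camera each sample came from
emb = model(x, views)
centroids = torch.randn(4, 64)               # e.g. 4 cluster centroids (from k-means)
clusters = torch.randint(0, 4, (8,))         # cluster assignment per sample
loss = clustering_loss(emb, clusters, centroids, model.view_proj)
loss.backward()

In such a sketch, cross-view distances are simply Euclidean distances between the projected embeddings, so samples of the same identity seen from different cameras can fall into the same cluster despite view-specific distortion.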

Updated: 2020-03-06