Unsupervised Domain Adaptation with Background Shift Mitigating for Person Re-Identification
International Journal of Computer Vision (IF 11.6). Pub Date: 2021-05-12. DOI: 10.1007/s11263-021-01474-8
Yan Huang, Qiang Wu, Jingsong Xu, Yi Zhong, Zhaoxiang Zhang

Unsupervised domain adaptation has become a popular approach for cross-domain person re-identification (re-ID). Two solutions follow this approach. The first builds a model for data transformation across two different domains, so that source-domain data can be transferred to the target domain, where a re-ID model can then be trained on the abundant source-domain data. The second uses target-domain data together with corresponding virtual labels to train a re-ID model. The constraints of both solutions are clear. The first solution relies heavily on the quality of the data transformation model; moreover, the resulting re-ID model is trained on source-domain data and therefore lacks knowledge of the target domain. The second solution in effect mixes target-domain data carrying virtual labels with source-domain data carrying true annotations, but such a simple mixture does not properly account for the raw information gap between the data of the two domains. This gap is largely attributable to background differences between domains. In this paper, a Suppression of Background Shift Generative Adversarial Network (SBSGAN) is proposed to mitigate the data gap between the two domains. To tackle the constraints of the first solution mentioned above, this paper further proposes a Densely Associated 2-Stream (DA-2S) network with an update strategy to learn discriminative ID features from the generated data, considering both human body information and useful ID-related cues in the environment. The built re-ID model is further updated using target-domain data with corresponding virtual labels. Extensive evaluations on three large benchmark datasets show the effectiveness of the proposed method.
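
As a rough illustration of the training pipeline sketched above, the following is a minimal PyTorch sketch of a two-stream re-ID backbone that fuses features from an original image and its background-suppressed counterpart (e.g., the output of a GAN such as SBSGAN), followed by a clustering-based virtual-label assignment on target-domain data. The layer choices, the simple feature concatenation, and the `assign_virtual_labels` helper are illustrative assumptions, not the authors' exact DA-2S architecture or update strategy.

```python
# Minimal sketch (assumptions, not the authors' exact SBSGAN / DA-2S pipeline):
# - each stream is a plain ResNet-50 trunk; features are fused by concatenation,
#   whereas DA-2S uses dense cross-stream associations
# - virtual labels for target-domain data are obtained by k-means clustering
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.cluster import KMeans


class TwoStreamReID(nn.Module):
    """Two-stream feature extractor: original image + background-suppressed image."""

    def __init__(self, num_ids: int, feat_dim: int = 2048):
        super().__init__()
        # Trunks up to (and including) global average pooling.
        self.stream_orig = nn.Sequential(*list(resnet50().children())[:-1])
        self.stream_bgsup = nn.Sequential(*list(resnet50().children())[:-1])
        self.bottleneck = nn.BatchNorm1d(2 * feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_ids, bias=False)

    def forward(self, x_orig, x_bgsup):
        f1 = self.stream_orig(x_orig).flatten(1)    # feature of the original image
        f2 = self.stream_bgsup(x_bgsup).flatten(1)  # feature of the background-suppressed image
        feat = self.bottleneck(torch.cat([f1, f2], dim=1))
        return feat, self.classifier(feat)


@torch.no_grad()
def assign_virtual_labels(model, target_loader, num_clusters, device="cpu"):
    """Cluster target-domain features to produce virtual labels for the update step."""
    model.eval()
    feats = []
    for x_orig, x_bgsup in target_loader:  # loader yields (original, background-suppressed) pairs
        feat, _ = model(x_orig.to(device), x_bgsup.to(device))
        feats.append(feat.cpu())
    feats = torch.cat(feats).numpy()
    return KMeans(n_clusters=num_clusters, n_init=10).fit_predict(feats)
```

In spirit, the model would first be trained on the generated, background-suppressed data with source-domain annotations, and then iteratively updated on the target domain using virtual labels such as the cluster assignments above.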


