Maximum Structural Generation Discrepancy for Unsupervised Domain Adaptation
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8), Pub Date: 5-11-2022, DOI: 10.1109/tpami.2022.3174526
Haifeng Xia, Taotao Jing, Zhengming Ding

Unsupervised domain adaptation (UDA) has recently become an appealing research topic in visual recognition, since it exploits all accessible well-labeled source data to train a model that generalizes well to an unannotated target domain. However, due to the significant domain discrepancy, the bottleneck of UDA is learning effective domain-invariant feature representations. To overcome this obstacle, we propose a novel cross-domain learning framework named Maximum Structural Generation Discrepancy (MSGD) that accurately estimates and mitigates domain shift by introducing an intermediate domain. First, the cross-domain topological structure is explored to propagate target samples and generate a novel intermediate domain paired with specific source instances. The intermediate domain serves as a bridge that gradually reduces the distribution divergence between the source and target domains. Concretely, the shared category semantics of source and intermediate features naturally support class-level alignment to eliminate their domain shift, while, in the absence of target annotations, domain-level alignment is better suited to narrowing the distance between the intermediate and target domains. Moreover, to produce high-quality generated instances, we develop a class-driven collaborative translation (CDCT) module that generates class-consistent cross-domain samples in each mini-batch with the assistance of pseudo-labels. Extensive experimental analyses on five domain adaptation benchmarks demonstrate the effectiveness of MSGD in solving the UDA problem.
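The abstract outlines a three-part training signal: generate an intermediate domain from paired source/target structure, align source and intermediate features at the class level, and align intermediate and target features at the domain level. The sketch below is a minimal, hypothetical PyTorch illustration of that loss structure only; the nearest-neighbor interpolation, centroid alignment, and linear-kernel MMD used here are stand-in assumptions, not the authors' actual MSGD or CDCT implementation.

```python
# Hypothetical sketch of the loss structure described in the abstract (PyTorch).
# The mixing scheme, loss choices, and all function names are illustrative
# assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F


def intermediate_domain(src_feat, tgt_feat, alpha=0.5):
    """Pair each source instance with its nearest target instance and
    interpolate, as a stand-in for the topology-based generation step."""
    dist = torch.cdist(src_feat, tgt_feat)   # pairwise distances
    nn_idx = dist.argmin(dim=1)              # nearest target per source sample
    return alpha * src_feat + (1 - alpha) * tgt_feat[nn_idx]


def class_level_alignment(src_feat, mid_feat, labels, num_classes):
    """Pull class centroids of source and intermediate features together."""
    loss = src_feat.new_zeros(())
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            loss = loss + F.mse_loss(src_feat[mask].mean(0), mid_feat[mask].mean(0))
    return loss / num_classes


def domain_level_alignment(mid_feat, tgt_feat):
    """Linear-kernel MMD between intermediate and target batches."""
    return (mid_feat.mean(0) - tgt_feat.mean(0)).pow(2).sum()


if __name__ == "__main__":
    # Random tensors stand in for encoder outputs of one mini-batch.
    B, D, C = 32, 256, 10
    src, tgt = torch.randn(B, D), torch.randn(B, D)
    y_src = torch.randint(0, C, (B,))
    mid = intermediate_domain(src, tgt)
    loss = class_level_alignment(src, mid, y_src, C) + domain_level_alignment(mid, tgt)
    print(float(loss))
```

In the actual method, the target-side labels would come from the pseudo-labels produced by the CDCT module rather than being assumed, and the generation step operates on cross-domain topological structure rather than simple interpolation.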

Updated: 2024-08-28