Joint Clustering and Discriminative Feature Alignment for Unsupervised Domain Adaptation
IEEE Transactions on Image Processing (IF 10.8). Pub Date: 2021-09-10, DOI: 10.1109/tip.2021.3109530
Wanxia Deng, Qing Liao, Lingjun Zhao, Deke Guo, Gangyao Kuang, Dewen Hu, Li Liu

Unsupervised Domain Adaptation (UDA) aims to learn a classifier for an unlabeled target domain by leveraging knowledge from a labeled source domain with a different but related distribution. Many existing approaches learn a domain-invariant representation space by directly matching the marginal distributions of the two domains. However, they neglect both to explore the underlying discriminative features of the target data and to align the cross-domain discriminative features, which may lead to suboptimal performance. To tackle these two issues simultaneously, this paper presents a Joint Clustering and Discriminative Feature Alignment (JCDFA) approach for UDA, which naturally unifies the mining of discriminative features and the alignment of class-discriminative features in a single framework. Specifically, to mine the intrinsic discriminative information of the unlabeled target data, JCDFA jointly learns a shared encoding representation for two tasks: supervised classification of the labeled source data, and discriminative clustering of the unlabeled target data, where the source-domain classification guides the target-domain clustering to locate the object categories. We then conduct cross-domain discriminative feature alignment by separately optimizing two new metrics: 1) an extended supervised contrastive learning, i.e., semi-supervised contrastive learning; and 2) an extended Maximum Mean Discrepancy (MMD), i.e., conditional MMD, which explicitly minimizes intra-class dispersion and maximizes inter-class separation. When these two procedures, i.e., discriminative feature mining and alignment, are integrated into one framework, they benefit from each other and enhance the final performance from a cooperative learning perspective. Experiments are conducted on four real-world benchmarks, i.e., Office-31, ImageCLEF-DA, Office-Home, and VisDA-C. All the results demonstrate that our JCDFA obtains remarkable margins over state-of-the-art domain adaptation methods. Comprehensive ablation studies further verify the importance of each key component of the proposed algorithm and the effectiveness of combining the two learning strategies into one framework.
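The first alignment metric extends supervised contrastive learning to the semi-supervised setting by treating target pseudo-labels like source labels when forming positive pairs. The paper does not publish its exact formulation here, so the following is only a minimal NumPy sketch of that idea: a supervised contrastive loss over L2-normalized features, where `labels` is assumed to mix ground-truth source labels with target pseudo-labels.

```python
import numpy as np

def semi_supervised_contrastive(Z, labels, tau=0.1):
    """Sketch of a semi-supervised contrastive loss.

    Z      : (n, d) feature matrix (source and target samples mixed)
    labels : (n,) ground-truth labels for source rows, pseudo-labels
             for target rows -- the "semi-supervised" extension
    tau    : softmax temperature (hypothetical default)
    """
    # L2-normalize features so similarities are cosine similarities.
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = Z @ Z.T / tau
    n = len(Z)
    self_mask = np.eye(n, dtype=bool)
    # Positives: other samples sharing the same (pseudo-)label.
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # Log-softmax over all samples except the anchor itself.
    sim = np.where(self_mask, -np.inf, sim)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Average negative log-probability of positives, per anchor.
    per_anchor = [-log_prob[i][pos[i]].mean() for i in range(n) if pos[i].any()]
    return float(np.mean(per_anchor))
```

Anchors whose positives are already nearby in feature space incur low loss, so minimizing it pulls same-class source and target features together across domains.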
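The second metric, conditional MMD, conditions the usual MMD on class membership: instead of matching the two marginal feature distributions as a whole, it matches source and target features class by class, using target pseudo-labels to group the unlabeled data. A minimal NumPy sketch of this class-conditional variant (RBF kernel and `gamma` are illustrative choices, not the paper's exact configuration):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2).
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(Xs, Xt, gamma=1.0):
    # Biased estimator of squared MMD between two samples (always >= 0).
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma).mean())

def conditional_mmd(Xs, ys, Xt, yt_pseudo, num_classes, gamma=1.0):
    """Class-conditional MMD: average per-class MMD between source
    features (true labels ys) and target features (pseudo-labels
    yt_pseudo), so alignment happens within each class rather than
    between the marginal distributions."""
    losses = []
    for c in range(num_classes):
        s, t = Xs[ys == c], Xt[yt_pseudo == c]
        if len(s) and len(t):  # skip classes absent from either domain
            losses.append(mmd2(s, t, gamma))
    return float(np.mean(losses)) if losses else 0.0
```

Because each term compares only same-class subsets, minimizing this loss shrinks intra-class dispersion across domains without forcing different classes onto each other, in contrast to marginal MMD.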
