Expert Systems with Applications (IF 8.5) Pub Date: 2020-11-27, DOI: 10.1016/j.eswa.2020.114381
Guanqun Wei, Zhiqiang Wei, Lei Huang, Jie Nie, Xiaojing Li
For a target task where labeled data are unavailable, unsupervised domain adaptive learning performs transfer learning from labeled source data to unlabeled target data. Previous deep domain adaptation methods mainly learned the global domain shift between domains: the global distributions are aligned without considering the correspondence between same-class data across domains. Recently, researchers have paid increasing attention to semantic alignment, which focuses on accurately aligning the distributions of same-class data from different domains. However, most of these methods ignore two points: the learning of the global distribution of the target domain data, and the compactness of intra-class data together with the discrimination of inter-class data, which leads to unsatisfying transfer learning performance. To resolve this problem, we propose a Center-aligned Domain Adaptation Network (CenterDA) to facilitate semantic alignment. In this study, for each class in the label space, we learn a common class center for all data with that class label in the source and target domains, which allows us to learn the global distribution of the target domain data under the supervised learning of the source domain data. Furthermore, we minimize the distance between the deep features and their common class center to compact the feature representations of the data. In this manner, we achieve the desired goals: First, the global distribution of the target domain data is learned via the common class centers. Second, the source and target domain data of the same class are aligned near the common center. Third, we model both intra-class compactness and inter-class separability. Extensive experiments on three datasets show that our method achieves remarkable results on image classification and has performance comparable with the latest methods.
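The core mechanism described above, pulling deep features of both domains toward a shared per-class center, can be sketched numerically. The following is an illustrative NumPy sketch, not the authors' implementation: `center_alignment_loss`, `update_centers`, the learning rate, and the moving-average center update are all assumptions made for the example; in the paper's setting, target labels would be pseudo-labels predicted by the network.

```python
import numpy as np

def center_alignment_loss(features, labels, centers):
    """Mean squared distance from each feature to its class center.

    Pulling source features (true labels) and target features
    (pseudo-labels) toward the same shared centers compacts each
    class and aligns the two domains around common points.
    """
    diffs = features - centers[labels]          # (N, D) residuals
    return np.mean(np.sum(diffs ** 2, axis=1))  # scalar loss

def update_centers(features, labels, centers, lr=0.5):
    """Move each class center toward the mean of its assigned features
    (a moving-average update, assumed here for illustration)."""
    new_centers = centers.copy()
    for c in range(centers.shape[0]):
        mask = labels == c
        if mask.any():
            new_centers[c] += lr * (features[mask].mean(axis=0) - centers[c])
    return new_centers

# Toy example: 2 classes in 2-D, features pooled from both domains.
rng = np.random.default_rng(0)
centers = np.zeros((2, 2))
feats = np.vstack([rng.normal(-1.0, 0.1, (5, 2)),
                   rng.normal(1.0, 0.1, (5, 2))])
labels = np.array([0] * 5 + [1] * 5)

before = center_alignment_loss(feats, labels, centers)
for _ in range(10):
    centers = update_centers(feats, labels, centers)
after = center_alignment_loss(feats, labels, centers)
print(before > after)  # centers drift to the class means, loss drops
```

In a full training loop this loss would be added to the usual classification loss on source data, so that minimizing the joint objective yields intra-class compactness and cross-domain alignment simultaneously.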
Title: Center-aligned domain adaptation network for image classification