Learning explicitly transferable representations for domain adaptation.
Neural Networks (IF 7.8) Pub Date: 2020-06-25, DOI: 10.1016/j.neunet.2020.06.016
Mengmeng Jing, Jingjing Li, Ke Lu, Lei Zhu, Yang Yang

Domain adaptation tackles the problem where the training source domain and the test target domain have distinct data distributions, thereby improving the generalization ability of deep models. A popular mechanism for domain adaptation is to learn a new feature representation that is supposed to be domain-invariant, so that classifiers trained on the source domain can be applied directly to the target domain. However, recent work reveals that learning new feature representations may deteriorate the adaptability of the original features and increase the expected error bound on the target domain. To address this, we propose to adapt classifiers rather than features. Specifically, we fill in the distribution gaps between domains with additional transferable representations that are explicitly learned from the original features, while keeping the original features unchanged. In addition, we argue that transferable representations should be translatable from one domain to the other with appropriate mappings. At the same time, we introduce conditional entropy to mitigate semantic confusion during mapping. Experiments on both standard and large-scale datasets verify that our method achieves new state-of-the-art results on unsupervised domain adaptation.
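The two ingredients named in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): the original target features are left untouched, a learned additive representation `delta` stands in for the "explicitly transferable representations" that bridge the distribution gap, and a conditional-entropy term measures prediction confusion on unlabeled target data. The variable names (`W`, `delta`, `target_feats`) are assumptions for the sketch.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def conditional_entropy(logits):
    """Mean entropy of the predicted class distributions.
    Minimizing this on unlabeled target samples pushes the model
    toward confident, low-confusion predictions during mapping."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))                  # frozen source-trained classifier weights
target_feats = rng.normal(size=(8, 4))       # original target features (kept unchanged)
delta = rng.normal(scale=0.1, size=(8, 4))   # transferable representation; learned in practice

# Classifier adaptation: apply the fixed source classifier to the
# original features plus the learned transferable representation.
logits = (target_feats + delta) @ W
h = conditional_entropy(logits)              # quantity to minimize over delta
```

In the paper's setting `delta` would be optimized (together with the translation mappings between domains) so that the conditional entropy and the domain gap shrink, while `target_feats` and the source classifier stay fixed.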




Updated: 2020-06-30