Adversarial Entropy Optimization for Unsupervised Domain Adaptation.
IEEE Transactions on Neural Networks and Learning Systems (IF 10.4). Pub Date: 2021-05-03. DOI: 10.1109/tnnls.2021.3073119
Ao Ma, Jingjing Li, Ke Lu, Lei Zhu, Heng Tao Shen

Domain adaptation addresses the challenging setting in which the probability distribution of the training (source) data differs from that of the testing (target) data. Recently, adversarial learning has become the dominant technique for domain adaptation. Adversarial domain adaptation methods typically train a feature learner and a domain discriminator simultaneously to learn domain-invariant features. Accordingly, how to effectively train the domain-adversarial model to learn domain-invariant features has become a key challenge in the community. To this end, we propose in this article a novel domain adaptation scheme named adversarial entropy optimization (AEO) to address this challenge. Specifically, we minimize the entropy when samples come from the independent distributions of the source domain or the target domain, which improves the discriminability of the model. At the same time, we maximize the entropy when features come from the combined distribution of the source and target domains, so that the domain discriminator is confused and the transferability of the representations is promoted. This minimax regime matches the core idea of adversarial learning well, endowing our model with both transferability and discriminability for domain adaptation tasks. Moreover, AEO is flexible and compatible with different deep networks and domain adaptation frameworks. Experiments on five data sets show that our method achieves state-of-the-art performance across diverse domain adaptation tasks.
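The following is a minimal PyTorch-style sketch of the minimax entropy idea described in the abstract, written only for illustration. The module shapes, the `aeo_step` function, the optimizers, and the alternating update scheme are assumptions, not the authors' released implementation, and the supervised task classifier on source labels is omitted.

```python
# Illustrative sketch of adversarial entropy optimization on a domain discriminator.
# All names and hyperparameters below are hypothetical; only the abstract-level idea
# (minimize entropy on independent source/target batches, maximize it on the combined
# batch) is taken from the paper's description.
import torch
import torch.nn as nn
import torch.nn.functional as F

def entropy(probs, eps=1e-8):
    """Shannon entropy of per-sample probability vectors, averaged over the batch."""
    return -(probs * (probs + eps).log()).sum(dim=1).mean()

feature_net = nn.Sequential(nn.Linear(2048, 256), nn.ReLU())  # hypothetical feature learner
domain_disc = nn.Sequential(nn.Linear(256, 2))                # hypothetical 2-way domain discriminator

opt_f = torch.optim.SGD(feature_net.parameters(), lr=1e-3)
opt_d = torch.optim.SGD(domain_disc.parameters(), lr=1e-3)

def aeo_step(x_src, x_tgt):
    # (1) Discriminator step: samples come from the *independent* source/target
    #     distributions, so minimize the entropy of its domain predictions
    #     (make it confident about which domain each sample belongs to).
    with torch.no_grad():
        f_src, f_tgt = feature_net(x_src), feature_net(x_tgt)
    loss_d = entropy(F.softmax(domain_disc(f_src), dim=1)) + \
             entropy(F.softmax(domain_disc(f_tgt), dim=1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # (2) Feature-learner step: features come from the *combined* distribution,
    #     so maximize the discriminator's entropy (confuse it) to encourage
    #     domain-invariant, transferable representations.
    f_all = feature_net(torch.cat([x_src, x_tgt], dim=0))
    loss_f = -entropy(F.softmax(domain_disc(f_all), dim=1))
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```

In this sketch the two entropy terms play the adversarial roles described above: the discriminator gains discriminability by becoming confident on per-domain batches, while the feature learner gains transferability by driving the discriminator toward maximum uncertainty on the mixed batch.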

Updated: 2021-05-03