Transferable attention networks for adversarial domain adaptation
Information Sciences, Pub Date: 2020-06-18, DOI: 10.1016/j.ins.2020.06.016
Changchun Zhang , Qingjie Zhao , Yu Wang

Domain adaptation is one of the fundamental challenges in transfer learning. Effectively transferring knowledge from a labeled source domain to an unlabeled target domain is critical for domain adaptation, as it helps reduce the considerable performance gap caused by domain shift. Existing domain adaptation methods address this issue by matching global features across domains. However, not all features are transferable for domain adaptation, and forcefully matching untransferable features may lead to negative transfer. In this paper, we propose a novel method dubbed transferable attention networks (TAN) to address this issue. The proposed TAN focuses on feature alignment through adversarial optimization. Specifically, we utilize a self-attention mechanism to weight the extracted features, so that the influence of untransferable features can be effectively eliminated. Meanwhile, to exploit the complex multi-modal structures involved in domain adaptation, we use the learned features and classifier predictions as the condition for training the adversarial networks. Furthermore, we propose that accurately transferable features should minimize the domain discrepancy. Three loss functions are introduced into the adversarial networks: a classification loss, an attention transfer loss, and a condition transfer loss. Extensive experiments on the Office-31, ImageCLEF-DA, Office-Home, and VisDA-2017 datasets demonstrate that the proposed approach yields state-of-the-art results.
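The abstract names three ingredients: self-attention weighting of extracted features, a discriminator conditioned on features and classifier predictions, and a combination of classification, attention transfer, and condition transfer losses. The sketch below is only an illustration of how these pieces could fit together in PyTorch; the module names, the channel-wise gating form of the attention, the multilinear conditioning, and the mean-attention-alignment form of the attention transfer loss are assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionWeightedFeatures(nn.Module):
    """Hypothetical self-attention module that re-weights extracted features
    so that less transferable feature channels are suppressed."""

    def __init__(self, feat_dim):
        super().__init__()
        # One attention score per feature channel (a simplifying assumption).
        self.score = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 4, feat_dim),
            nn.Sigmoid(),
        )

    def forward(self, feats):
        attn = self.score(feats)      # attention weights in (0, 1)
        return feats * attn, attn     # down-weight untransferable channels


class ConditionalDomainDiscriminator(nn.Module):
    """Domain discriminator conditioned on the outer product of features and
    classifier predictions, in the spirit of conditional adversarial DA."""

    def __init__(self, feat_dim, num_classes, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * num_classes, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats, probs):
        # Multilinear conditioning: outer product of predictions and features.
        cond = torch.bmm(probs.unsqueeze(2), feats.unsqueeze(1))
        return self.net(cond.view(cond.size(0), -1))


def tan_losses(classifier_logits_s, labels_s,
               domain_logits_s, domain_logits_t,
               attn_s, attn_t):
    """Combine the three losses named in the abstract: classification loss,
    condition transfer (domain-adversarial) loss, and attention transfer loss.
    The exact attention-alignment form used here is an assumption."""
    cls_loss = F.cross_entropy(classifier_logits_s, labels_s)

    ones = torch.ones_like(domain_logits_s)
    zeros = torch.zeros_like(domain_logits_t)
    cond_loss = (F.binary_cross_entropy_with_logits(domain_logits_s, ones) +
                 F.binary_cross_entropy_with_logits(domain_logits_t, zeros))

    # Align mean attention statistics across domains (illustrative choice).
    attn_loss = F.mse_loss(attn_s.mean(dim=0), attn_t.mean(dim=0))

    return cls_loss, cond_loss, attn_loss
```

In an adversarial training loop, the discriminator would maximize the condition transfer loss while the feature extractor minimizes it (e.g. via a gradient reversal layer), alongside the classification and attention transfer terms; the relative weighting of the three losses is a tunable choice not specified in the abstract.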




Updated: 2020-06-18