Divergence-Agnostic Unsupervised Domain Adaptation by Adversarial Attacks.
IEEE Transactions on Pattern Analysis and Machine Intelligence ( IF 20.8 ) Pub Date : 2022-10-04 , DOI: 10.1109/tpami.2021.3109287
Jingjing Li 1 , Zhekai Du 1 , Lei Zhu 2 , Zhengming Ding 3 , Ke Lu 1 , Heng Tao Shen 1
Conventional machine learning algorithms suffer from the problem that a model trained on existing data fails to generalize well to data sampled from other distributions. To tackle this issue, unsupervised domain adaptation (UDA) transfers the knowledge learned from a well-labeled source domain to a different but related target domain where labeled data is unavailable. The majority of existing UDA methods assume that data from both the source domain and the target domain are available and complete during training, so the divergence between the two domains can be formulated and minimized. In this paper, we consider a more practical yet challenging UDA setting where either the source domain data or the target domain data are unknown. Conventional UDA methods fail in this setting because the domain divergence is agnostic in the absence of the source data or the target data. Technically, we investigate UDA from a novel perspective, adversarial attacks, and tackle the divergence-agnostic adaptive learning problem in a unified framework. Specifically, we first motivate our approach by investigating the inherent relationship between UDA and adversarial attacks. We then carefully design adversarial examples to attack the training model and harness these adversarial examples. We argue that if the model can defend against our attack, its generalization ability is significantly improved, which in turn improves performance on the target domain. Theoretically, we analyze the generalization bound of our method based on domain adaptation theories. Extensive experimental results on multiple UDA benchmarks under the conventional, source-absent, and target-absent UDA settings verify that our method achieves favorable performance compared with previous approaches. Notably, this work extends the scope of both domain adaptation and adversarial attacks, and is expected to inspire more ideas in the community.
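The abstract does not specify how the authors construct their adversarial examples. As a rough, illustrative sketch of what "adversarial examples that attack the training model" means in general, the classic fast gradient sign method (FGSM, a standard attack and not necessarily the authors' design) perturbs an input in the direction that maximally increases the loss under an L-infinity budget. The toy logistic-regression model and numbers below are hypothetical, chosen only to make the idea concrete:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM attack on a binary logistic-regression model.

    The gradient of the binary cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM shifts x by eps in the sign of that
    gradient, the loss-maximizing step under an L-infinity constraint.
    """
    p = sigmoid(np.dot(w, x) + b)     # model's predicted probability
    grad_x = (p - y) * w              # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)  # worst-case L-inf perturbation

# Toy demo: a fixed linear model and a correctly classified point.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])              # w.x + b = 1.5 > 0 -> class 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.9)
margin = np.dot(w, x) + b             # positive: correctly classified
margin_adv = np.dot(w, x_adv) + b     # the attack shrinks the margin
```

A model that remains correct on such perturbed inputs is, in the spirit of the abstract's argument, one whose decision boundary lies far from the training points, which is the intuition connecting attack defense to better generalization on the target domain.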

Updated: 2021-09-03