Tackling unsupervised multi-source domain adaptation with optimism and consistency
arXiv - CS - Machine Learning. Pub Date: 2020-09-29. DOI: arXiv:2009.13939. Diogo Pernes and Jaime S. Cardoso.
It has been known for a while that the problem of multi-source domain
adaptation can be regarded as a single-source domain adaptation task where
the source domain corresponds to a mixture of the original source domains.
Nonetheless, how to adjust the mixture distribution weights remains an open
question. Moreover, most existing work on this topic focuses only on minimizing
the error on the source domains and achieving domain-invariant representations,
which is insufficient to ensure low error on the target domain. In this work,
we present a novel framework that addresses both problems and beats the current
state of the art by using a mildly optimistic objective function and
consistency regularization on the target samples.
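The two ingredients named above can be sketched concretely. A minimal sketch, assuming one common form of each component (the paper's exact losses and the names `consistency_loss`, `mixture_source_loss`, and the weights `alphas` are illustrative, not taken from the paper): consistency regularization penalizes disagreement between a model's predictions on a target sample and on a perturbed copy of it, while the mixture formulation combines per-source losses with adjustable weights.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_clean, logits_perturbed):
    # Mean squared difference between the predicted class distributions
    # for a target sample and a perturbed copy of it. This is one common
    # instantiation of consistency regularization; the paper's exact
    # objective may differ.
    p = softmax(logits_clean)
    q = softmax(logits_perturbed)
    return float(np.mean(np.sum((p - q) ** 2, axis=-1)))

def mixture_source_loss(per_source_losses, alphas):
    # Weighted combination of per-source losses with mixture weights
    # `alphas` (hypothetical name). How to set these weights is the open
    # question the abstract refers to.
    alphas = np.asarray(alphas, dtype=float)
    alphas = alphas / alphas.sum()  # normalize to a distribution
    return float(np.dot(alphas, np.asarray(per_source_losses, dtype=float)))

# Example: identical predictions incur zero consistency penalty, and the
# mixture loss reduces to a convex combination of the source losses.
logits = np.array([[2.0, 0.5, -1.0]])
zero_penalty = consistency_loss(logits, logits)
combined = mixture_source_loss([1.0, 3.0], [0.5, 0.5])
```

In practice the perturbed copy would come from data augmentation or a stochastic forward pass, and the mixture weights would be learned or scheduled rather than fixed.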
Updated: 2020-09-30