Domain Adaptation by Joint Distribution Invariant Projections
IEEE Transactions on Image Processing (IF 10.8), Pub Date: 2020-08-05, DOI: 10.1109/tip.2020.3013167
Sentao Chen, Mehrtash Harandi, Xiaona Jin, Xiaowei Yang

Domain adaptation addresses the learning problem in which the training data are sampled from a source joint distribution (source domain), while the test data are sampled from a different target joint distribution (target domain). Because of this joint distribution mismatch, a discriminative classifier naively trained on the source domain often generalizes poorly to the target domain. In this article, we therefore present a Joint Distribution Invariant Projections (JDIP) approach to address this problem. The proposed approach exploits linear projections to directly match the source and target joint distributions under the L^{2}-distance. Since traditional kernel density estimators for distribution estimation tend to become unreliable as the dimensionality increases, we propose a least squares method that estimates the L^{2}-distance without estimating the two joint distributions, leading to a quadratic problem with an analytic solution. Furthermore, we introduce a kernel version of JDIP to account for inherent nonlinearity in the data. We show that the proposed learning problems can be naturally cast as optimization problems defined on a product of Riemannian manifolds. To be comprehensive, we also establish an error bound that theoretically explains how our method works and contributes to reducing the target-domain generalization error. Extensive empirical evidence demonstrates the benefits of our approach over state-of-the-art domain adaptation methods on several visual data sets.
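The abstract describes matching projected source and target distributions under the L^{2}-distance. As a rough illustration only (not the paper's least-squares estimator, which avoids density estimation), the sketch below computes the standard Gaussian kernel plug-in estimate of the squared L^{2}-distance between two projected sample sets; the function names, bandwidth choice, and toy data are hypothetical, and for simplicity it matches projected feature distributions rather than the full joint distributions over features and labels.

```python
import numpy as np


def gaussian_cross_sum(A, B, sigma):
    """Sum over all pairs (a, b) of the Gaussian N(a - b; 0, 2*sigma^2 I).

    This equals n*m times the integral of the product of two Gaussian kernel
    density estimates with common bandwidth sigma (convolution of Gaussians).
    """
    d = A.shape[1]
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    norm = (4.0 * np.pi * sigma ** 2) ** (-d / 2.0)
    return norm * np.exp(-sq_dists / (4.0 * sigma ** 2)).sum()


def l2_distance_sq(Zs, Zt, sigma=1.0):
    """Plug-in estimate of the squared L^2-distance between the densities
    underlying the (projected) source samples Zs and target samples Zt."""
    n, m = len(Zs), len(Zt)
    return (gaussian_cross_sum(Zs, Zs, sigma) / n ** 2
            - 2.0 * gaussian_cross_sum(Zs, Zt, sigma) / (n * m)
            + gaussian_cross_sum(Zt, Zt, sigma) / m ** 2)


# Toy usage: project 10-D source/target features onto 2-D with a random
# orthonormal matrix W and evaluate the mismatch of the projected densities.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 10))       # source features
Xt = rng.normal(0.5, 1.0, size=(150, 10))       # shifted target features
W, _ = np.linalg.qr(rng.normal(size=(10, 2)))   # projection with orthonormal columns
print(l2_distance_sq(Xs @ W, Xt @ W, sigma=1.0))
```

Minimizing such a mismatch over the projection W, subject to an orthonormality constraint, is the flavor of objective the abstract refers to when it mentions optimization on a product of Riemannian manifolds (e.g., the Stiefel manifold of orthonormal projections).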

Updated: 2020-08-05