Rethinking Maximum Mean Discrepancy for Visual Domain Adaptation
IEEE Transactions on Neural Networks and Learning Systems ( IF 10.2 ) Pub Date : 2021-07-09 , DOI: 10.1109/tnnls.2021.3093468
Wei Wang 1 , Haojie Li 1 , Zhengming Ding 2 , Feiping Nie 3 , Junyang Chen 4 , Xiao Dong 5 , Zhihui Wang 1
Existing domain adaptation approaches often try to reduce the distribution difference between the source and target domains while respecting domain-specific discriminative structures, using a distribution distance [e.g., the maximum mean discrepancy (MMD)] together with discriminative distances (e.g., intra-class and inter-class distances). However, they usually combine these losses and trade off their relative importance with empirically estimated parameters. The relationships among these losses remain insufficiently explored, so they cannot be manipulated correctly and the model's performance degrades. To this end, this article theoretically proves two essential facts: 1) minimizing the MMD is equivalent to jointly minimizing the data variance with certain implicit weights while, respectively, maximizing the source and target intra-class distances, so that feature discriminability degrades; and 2) the intra-class and inter-class distances vary inversely: as one falls, the other rises. Based on this, we propose a novel discriminative MMD with two parallel strategies to correctly restrain the degradation of feature discriminability, i.e., the expansion of the intra-class distance. Specifically: 1) following fact 1), we directly impose a tradeoff parameter on the intra-class distance that is implicit in the MMD; and 2) we reformulate the inter-class distance with special weights analogous to the implicit ones in the MMD, so that maximizing it also causes the intra-class distance to fall, by fact 2). Notably, because of fact 2), we do not combine the two strategies in a single model. Experiments on several benchmark datasets not only validate the revealed theoretical results but also demonstrate that the proposed approach substantially outperforms several compared state-of-the-art methods. Our preliminary MATLAB code will be available at https://github.com/WWLoveTransfer/ .
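To make the quantities in the abstract concrete, the following is a minimal Python sketch (not the authors' MATLAB implementation) of the biased empirical MMD with a linear kernel and of a simple intra-class distance; the decomposition the paper proves relates exactly these kinds of statistics:

```python
import numpy as np

def mmd_linear(Xs, Xt):
    # Biased empirical MMD^2 with a linear kernel: the squared Euclidean
    # distance between the source and target feature means.
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def intra_class_distance(X, y):
    # Mean squared distance of each sample to its own class mean;
    # this is the quantity that fact 1) says minimizing MMD implicitly inflates.
    total = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        total += np.sum((Xc - Xc.mean(axis=0)) ** 2)
    return total / len(X)
```

For example, with source samples `[[0, 0], [2, 0]]` and target samples `[[1, 1], [1, 3]]`, the two means are `(1, 0)` and `(1, 2)`, so the linear-kernel MMD^2 is 4. Kernel choices other than linear (e.g., Gaussian) are used in practice; this sketch only fixes intuition for the mean-matching view.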

Updated: 2021-07-09