Robust adaptation regularization based on within-class scatter for domain adaptation.
Neural Networks (IF 6.0), Pub Date: 2020-01-17, DOI: 10.1016/j.neunet.2020.01.009
Liran Yang, Ping Zhong

In many practical applications, the assumption that the data used for training and testing follow identical distributions rarely holds, which results in a rapid decline in performance. To address this problem, domain adaptation strategies have been developed in recent years. In this paper, we propose a novel unsupervised domain adaptation method, referred to as Robust Adaptation Regularization based on Within-Class Scatter (WCS-RAR), which simultaneously optimizes the regularized loss, the within-class scatter, the joint distribution between domains, and the manifold consistency. On the one hand, to make the model robust against outliers, we adopt an l2,1-norm based loss function, by virtue of its row sparsity, instead of the widely used l2-norm based squared loss or hinge loss to measure the residual. On the other hand, to preserve the structural knowledge of the source data within each class and strengthen the discriminative ability of the classifier, we incorporate the minimization of the within-class scatter into the domain adaptation process. Lastly, to efficiently solve the resulting optimization problem, we extend the form of the Representer Theorem through the kernel trick and thus derive an elegant closed-form solution for the proposed model. Extensive comparison experiments with state-of-the-art methods on multiple benchmark data sets demonstrate the superiority of the proposed method.
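Two of the ingredients named above, the l2,1-norm loss and the within-class scatter, have standard textbook definitions that can be sketched briefly. The following is a minimal illustration of those two generic quantities only, not the authors' WCS-RAR implementation; the function names and the NumPy-based formulation are assumptions for exposition:

```python
import numpy as np

def l21_norm(R):
    """l2,1-norm of a residual matrix R: the sum of the l2 norms of its rows.
    Because each row contributes its (un-squared) l2 norm, an outlier sample
    inflates the loss linearly rather than quadratically, and the induced
    row sparsity is what gives robustness against outliers."""
    return np.sum(np.linalg.norm(R, axis=1))

def within_class_scatter(X, y):
    """Within-class scatter matrix S_w = sum_c sum_{i in class c}
    (x_i - m_c)(x_i - m_c)^T, where m_c is the mean of class c.
    X: (n_samples, n_features) source data; y: class labels."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        diff = Xc - Xc.mean(axis=0)  # center each class at its own mean
        Sw += diff.T @ diff
    return Sw
```

Minimizing a term like trace(S_w) (in the projected feature space) keeps same-class source samples compact, which is the discriminative role the within-class scatter plays in the objective described above.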

Updated: 2020-01-17