A transductive transfer learning approach for image classification
International Journal of Machine Learning and Cybernetics (IF 5.6), Pub Date: 2020-09-20, DOI: 10.1007/s13042-020-01200-9
Samaneh Rezaei, Jafar Tahmoresnezhad, Vahid Solouk

Among machine learning paradigms, unsupervised transductive transfer learning is useful when no labeled data from the target domain are available at training time, but unlabeled target data are accessible during the training phase. This paper proposes a novel unsupervised transductive transfer learning method that identifies domain-specific and shared features across the source and target domains. The method then maps both domains into their respective subspaces with minimal marginal and conditional distribution divergence. It is shown that discriminative learning across domains boosts model performance; hence, the proposed method discriminates the classes of both domains by maximizing the distance between pairs of samples with different labels and minimizing the distance between pairs of instances of the same class. We verified the approach on standard visual benchmarks: the average accuracy over 46 experiments is 76.5%, which compares favorably with other state-of-the-art transfer learning methods across various cross-domain tasks.
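The abstract does not include an implementation, but the distribution-matching step it describes (projecting both domains into subspaces with small marginal and conditional divergence) can be illustrated with a TCA/JDA-style linear MMD sketch. Everything below is an assumption for illustration only: the function names mmd_matrix and learn_projection, the linear-MMD formulation, and the generalized-eigenproblem solver are not taken from the paper, and the pairwise between-class/within-class discriminative terms are omitted for brevity.

import numpy as np

def mmd_matrix(ns, nt, ys=None, yt_pseudo=None, num_classes=None):
    # Coefficient matrix M for the (linear) MMD between ns source and nt target
    # samples; adding per-class blocks approximates the conditional divergence.
    # ys and yt_pseudo are expected to be NumPy integer arrays.
    n = ns + nt
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    M = e @ e.T  # marginal term
    if ys is not None and yt_pseudo is not None and num_classes is not None:
        for c in range(num_classes):
            ec = np.zeros((n, 1))
            src_c = np.where(ys == c)[0]
            tgt_c = ns + np.where(yt_pseudo == c)[0]
            if len(src_c) > 0 and len(tgt_c) > 0:
                ec[src_c] = 1.0 / len(src_c)
                ec[tgt_c] = -1.0 / len(tgt_c)
                M = M + ec @ ec.T  # conditional term for class c
    return M / np.linalg.norm(M, "fro")

def learn_projection(Xs, Xt, dim=30, reg=1.0, ys=None, yt_pseudo=None, num_classes=None):
    # Minimize tr(W^T X M X^T W) subject to W^T X H X^T W = I, i.e. align the
    # domains while preserving variance; solved as a generalized eigenproblem.
    X = np.vstack([Xs, Xt]).T                       # features as columns, (d, n)
    d, n = X.shape
    M = mmd_matrix(Xs.shape[0], Xt.shape[0], ys, yt_pseudo, num_classes)
    H = np.eye(n) - np.ones((n, n)) / n             # centering matrix
    A = X @ M @ X.T + reg * np.eye(d)
    B = X @ H @ X.T
    vals, vecs = np.linalg.eig(np.linalg.solve(A, B))
    W = np.real(vecs[:, np.argsort(-np.real(vals))[:dim]])  # top eigenvectors
    return Xs @ W, Xt @ W                           # projected source / target

In the framework the abstract outlines, such a projection would presumably be refined iteratively with pseudo-labels for the target domain and combined with the pairwise discriminative distance terms; in this sketch, a plain classifier (e.g. 1-NN) trained on the projected source features and applied to the projected target features stands in for that step.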



