Class-specific Reconstruction Transfer Learning for Visual Recognition Across Domains.
IEEE Transactions on Image Processing (IF 10.6) | Pub Date: 2019-11-05 | DOI: 10.1109/tip.2019.2948480
Shanshan Wang, Lei Zhang, Wangmeng Zuo, Bob Zhang

Subspace learning and reconstruction have been widely explored in recent transfer learning work. Generally, specially designed projection and reconstruction transfer functions that bridge multiple domains for heterogeneous knowledge sharing are desired. However, we argue that existing subspace-reconstruction-based domain adaptation algorithms neglect the class prior, so that the learned transfer function is biased, especially when some classes suffer from data scarcity. Different from those previous methods, in this paper we propose a novel class-wise reconstruction-based adaptation method called Class-specific Reconstruction Transfer Learning (CRTL), which optimizes a well-modeled transfer loss function by fully exploiting intra-class dependency and inter-class independency. The merits of CRTL are three-fold. 1) Using a class-specific reconstruction matrix to align the source domain with the target domain fully exploits the class prior in modeling domain distribution consistency, which benefits cross-domain classification. 2) Furthermore, to preserve the intrinsic relationship between data and labels after feature augmentation, a projected Hilbert-Schmidt Independence Criterion (pHSIC), which measures the dependency between data and labels by mapping the data from the raw space to an RKHS, is proposed for the first time in the transfer learning community. 3) In addition, by imposing low-rank and sparse constraints on the class-specific reconstruction coefficient matrix, the global and local data structures that contribute to domain correlation can be effectively preserved. Extensive experiments on challenging benchmark datasets demonstrate the superiority of the proposed method over state-of-the-art representation-based domain adaptation methods. The demo code is available at https://github.com/wangshanshanCQU/CRTL.
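For readers unfamiliar with the dependency measure behind point 2, the standard empirical HSIC estimate can be sketched as follows. Note this is the plain HSIC, not the paper's pHSIC (which additionally involves a learned projection and feature augmentation); the kernels, data, and function names below are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, sigma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def hsic(K, L):
    """Biased empirical HSIC: tr(K H L H) / (n - 1)^2, H = I - (1/n) 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy data: labels deterministically depend on the sign of the first feature.
n = 200
X = rng.standard_normal((n, 2))
y = (X[:, 0] > 0).astype(int)

K = rbf_kernel(X)
# Delta kernel on labels: L[i, j] = 1 iff samples i and j share a label.
L_dep = (y[:, None] == y[None, :]).astype(float)
# Permuted labels break the data-label dependency while keeping class sizes.
y_perm = rng.permutation(y)
L_perm = (y_perm[:, None] == y_perm[None, :]).astype(float)

hsic_dep = hsic(K, L_dep)    # large: features and labels are dependent
hsic_perm = hsic(K, L_perm)  # near zero: dependency destroyed by shuffling
```

A larger HSIC value indicates stronger statistical dependence between the data kernel and the label kernel, which is why maximizing it preserves the data-label relationship during adaptation.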

Updated: 2020-04-22