Discrepant collaborative training by Sinkhorn divergences
Image and Vision Computing (IF 4.2), Pub Date: 2021-05-21, DOI: 10.1016/j.imavis.2021.104213
Yan Han , Soumava Kumar Roy , Lars Petersson , Mehrtash Harandi

Deep Co-Training algorithms typically comprise two distinct and diverse feature extractors that simultaneously attempt to learn task-specific features from the same inputs. Achieving this objective is, however, not trivial, despite its seemingly simple appearance: homogeneous networks tend to mimic each other under the collaborative training setup. With this difficulty in mind, we make use of the recently proposed Sinkhorn divergence (S_ε) to encourage diversity between homogeneous networks. The Sinkhorn divergence encapsulates popular measures such as the maximum mean discrepancy and the Wasserstein distance under the same umbrella, and provides us with a principled, yet simple and straightforward, mechanism. Our empirical results in two domains, classification in the presence of noisy labels and semi-supervised image classification, clearly demonstrate the benefits of the proposed framework in learning distinct and diverse features. In both settings, we improve upon competing approaches by a notable margin.
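The Sinkhorn divergence referenced in the abstract is the debiased entropic optimal-transport cost, S_ε(α, β) = OT_ε(α, β) − ½·OT_ε(α, α) − ½·OT_ε(β, β); as ε → 0 it recovers the (squared) Wasserstein distance, and as ε → ∞ it tends toward a maximum mean discrepancy. A minimal NumPy sketch of this quantity between two empirical feature batches is given below — this is an illustration, not the authors' implementation; the function names, the squared-Euclidean ground cost, and the choice ε = 1.0 are assumptions:

```python
import numpy as np

def sinkhorn_cost(x, y, eps=1.0, n_iters=200):
    """Entropic OT cost OT_eps between uniform empirical measures on x and y.

    Note: for very small eps this naive (non-log-domain) implementation can
    underflow; eps = 1.0 keeps the kernel well scaled for moderate costs.
    """
    # Squared-Euclidean ground cost matrix C[i, j] = ||x_i - y_j||^2.
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    n, m = C.shape
    a = np.full(n, 1.0 / n)          # uniform weights on the x samples
    b = np.full(m, 1.0 / m)          # uniform weights on the y samples
    K = np.exp(-C / eps)             # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):         # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]  # entropic transport plan
    return float(np.sum(P * C))

def sinkhorn_divergence(x, y, eps=1.0):
    """Debiased Sinkhorn divergence S_eps(x, y) >= 0, zero when x == y."""
    return (sinkhorn_cost(x, y, eps)
            - 0.5 * sinkhorn_cost(x, x, eps)
            - 0.5 * sinkhorn_cost(y, y, eps))
```

In a co-training setup of the kind the paper describes, a term like `-sinkhorn_divergence(features_net1, features_net2)` could be added to the joint loss so that minimizing it pushes the two networks' feature distributions apart; the debiasing terms remove the entropic bias that would otherwise make S_ε(x, x) nonzero.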




Updated: 2021-06-17