Lower-bounded proper losses for weakly supervised classification
arXiv - CS - Machine Learning. Pub Date: 2021-03-04, DOI: arxiv-2103.02893
Shuhei M. Yoshida, Takashi Takenouchi, Masashi Sugiyama

This paper discusses the problem of weakly supervised classification, in which instances are given weak labels produced by some label-corruption process. The goal is to derive conditions under which loss functions for weak-label learning are proper and lower-bounded -- two essential requirements for losses used in class-probability estimation. To this end, we derive a representation theorem for proper losses in supervised learning that dualizes the Savage representation. We use this theorem to characterize proper weak-label losses and find a condition under which they are lower-bounded. Based on these theoretical findings, we derive a novel regularization scheme called generalized logit squeezing, which makes any proper weak-label loss bounded from below without losing properness. Furthermore, we experimentally demonstrate the effectiveness of our proposed approach compared to improper or unbounded losses. These results highlight the importance of properness and lower-boundedness. The code is publicly available at https://github.com/yoshum/lower-bounded-proper-losses.
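For intuition, below is a minimal PyTorch sketch of logit squeezing as a regularizer added on top of an ordinary cross-entropy loss. The function name logit_squeezed_loss, the hyperparameters alpha and p, and the use of plain cross-entropy as the stand-in proper loss are illustrative assumptions; the paper's generalized logit squeezing and its weak-label losses are defined in the full text and the linked repository.

```python
import torch
import torch.nn.functional as F

def logit_squeezed_loss(logits, weak_labels, alpha=0.1, p=2.0):
    """Cross-entropy on weak labels plus a logit-norm penalty.

    A minimal sketch of logit squeezing: penalizing the L^p norm of the
    logits keeps the total objective bounded from below even if the base
    loss alone is not. This is only an illustration of the basic idea,
    not the paper's generalized scheme.
    """
    base = F.cross_entropy(logits, weak_labels)       # stand-in proper loss
    penalty = logits.norm(p=p, dim=1).pow(p).mean()   # squeeze logits toward 0
    return base + alpha * penalty

# Example usage with random data (hypothetical shapes):
logits = torch.randn(8, 5, requires_grad=True)   # batch of 8, 5 classes
weak_labels = torch.randint(0, 5, (8,))          # weak (possibly corrupted) labels
loss = logit_squeezed_loss(logits, weak_labels)
loss.backward()
```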

Updated: 2021-03-05