P-DIFF+: Improving learning classifier with noisy labels by Noisy Negative Learning loss
Neural Networks (IF 6.0) Pub Date: 2021-08-02, DOI: 10.1016/j.neunet.2021.07.024
QiHao Zhao, Wei Hu, Yangyu Huang, Fan Zhang

Learning a deep neural network (DNN) classifier with noisy labels is a challenging task because a DNN, with its high capacity, can easily over-fit the noisy labels. In this paper, we present a very simple but effective training paradigm called P-DIFF+, which trains DNN classifiers while markedly alleviating the adverse impact of noisy labels. Our proposed probability difference distribution implicitly reflects the probability that a training sample is clean, and this probability is then used to re-weight the corresponding sample during training. Moreover, a Noisy Negative Learning (NNL) loss can be further employed to re-weight samples. P-DIFF+ achieves good performance even without prior knowledge of the noise rate of the training samples. Experiments on benchmark datasets demonstrate that P-DIFF+ is superior to state-of-the-art sample selection methods.
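The abstract describes two ingredients: a per-sample probability difference that signals whether a label is likely clean, and a negative-learning loss term for samples flagged as noisy. The sketch below is our reading of that idea, not the paper's exact formulation: `probability_difference` (taking the gap between the predicted probability of the given label and the largest other-class probability), the hard threshold, and the `-log(1 - p)` negative-learning term are all assumptions for illustration.

```python
import numpy as np

def probability_difference(probs, labels):
    """Gap between p(labeled class) and the best competing class.

    A large positive gap suggests the label agrees with the network's
    prediction, i.e. the sample is likely clean. (Assumed definition;
    the paper's probability difference may be defined differently.)
    """
    n = probs.shape[0]
    p_label = probs[np.arange(n), labels]
    others = probs.copy()
    others[np.arange(n), labels] = -np.inf   # mask out the labeled class
    return p_label - others.max(axis=1)      # value in [-1, 1]

def reweighted_loss(probs, labels, threshold=0.0):
    """Combine positive and negative learning via per-sample weights.

    Samples whose probability difference exceeds `threshold` are treated
    as clean and trained with ordinary cross-entropy; the rest get a
    negative-learning term -log(1 - p_label), which pushes probability
    mass away from the (presumably wrong) given label. The hard 0/1
    weighting and fixed threshold are simplifications.
    """
    n = probs.shape[0]
    delta = probability_difference(probs, labels)
    w = (delta > threshold).astype(float)              # 1 = likely clean
    p_label = probs[np.arange(n), labels]
    ce = -np.log(np.clip(p_label, 1e-12, 1.0))         # positive learning
    nnl = -np.log(np.clip(1.0 - p_label, 1e-12, 1.0))  # negative learning
    return np.mean(w * ce + (1.0 - w) * nnl)
```

In the paper, the threshold is derived from the evolving distribution of probability differences across the training set rather than fixed in advance, which is how P-DIFF+ avoids needing the noise rate as prior knowledge.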




Updated: 2021-08-19