Collaborative learning with corrupted labels.
Neural Networks (IF 6.0), Pub Date: 2020-02-26, DOI: 10.1016/j.neunet.2020.02.010
Yulin Wang, Rui Huang, Gao Huang, Shiji Song, Cheng Wu

Deep neural networks (DNNs) have been very successful for supervised learning. However, their high generalization performance often comes at the high cost of annotating data manually. Collecting low-quality labeled datasets is relatively cheap, e.g., using web search engines, but DNNs tend to overfit corrupted labels easily. In this paper, we propose a collaborative learning (co-learning) approach to improve the robustness and generalization performance of DNNs on datasets with corrupted labels. This is achieved by designing a deep network with two separate branches, coupled with a relabelling mechanism. Co-learning can safely recover the true labels of most mislabeled samples, not only preventing the model from overfitting the noise but also exploiting useful information from all the samples. Despite its simplicity, the proposed algorithm achieves high generalization performance even when a large portion of the labels is corrupted. Experiments show that co-learning consistently outperforms existing state-of-the-art methods on three widely used benchmark datasets.
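The abstract describes a network with a shared trunk, two separate classifier branches, and a relabelling mechanism, without giving implementation details. The PyTorch sketch below is only an illustration of that idea under assumptions: the branch sizes, the agreement-plus-confidence relabelling rule, and the threshold value are hypothetical choices, not the authors' actual method.

```python
# Minimal sketch (assumptions, not the paper's implementation): a shared backbone
# with two classifier branches, and a relabelling step that overwrites a possibly
# corrupted label only when both branches confidently agree on the same class.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoBranchNet(nn.Module):
    """Shared feature extractor followed by two independent classifier branches."""

    def __init__(self, in_dim: int, num_classes: int, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.branch_a = nn.Linear(hidden, num_classes)
        self.branch_b = nn.Linear(hidden, num_classes)

    def forward(self, x):
        h = self.backbone(x)
        return self.branch_a(h), self.branch_b(h)


@torch.no_grad()
def relabel(model, x, y, threshold: float = 0.9):
    """Replace a label when both branches agree with confidence above `threshold`
    (the threshold is an assumed hyperparameter for this sketch)."""
    logits_a, logits_b = model(x)
    conf_a, pred_a = F.softmax(logits_a, dim=1).max(dim=1)
    conf_b, pred_b = F.softmax(logits_b, dim=1).max(dim=1)
    agree = (pred_a == pred_b) & (conf_a > threshold) & (conf_b > threshold)
    y = y.clone()
    y[agree] = pred_a[agree]  # recovered label for samples the branches agree on
    return y


# Toy usage: one training step on random data with relabelled targets.
model = TwoBranchNet(in_dim=32, num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))

y_hat = relabel(model, x, y)
logits_a, logits_b = model(x)
loss = F.cross_entropy(logits_a, y_hat) + F.cross_entropy(logits_b, y_hat)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```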




Updated: 2020-02-26