Meta joint optimization: a holistic framework for noisy-labeled visual recognition
Applied Intelligence ( IF 3.4 ) Pub Date : 2021-05-12 , DOI: 10.1007/s10489-021-02392-5
Jialin Shi , Zheng Cao , Ji Wu

Collecting large-scale data with clean labels for supervised training is practically challenging. It is easier to collect a dataset with noisy labels, but such noise may degrade the performance of deep neural networks (DNNs). This paper addresses this challenge by judiciously leveraging both relatively clean and relatively noisy data. We propose meta-joint optimization (MJO), a novel and holistic framework for learning with noisy labels that jointly learns DNN parameters and corrects noisy labels. We first estimate label quality and dynamically divide the training data into relatively clean and relatively noisy subsets. To better optimize the DNN parameters, we treat the relatively noisy data as an unlabeled set and apply interpolation consistency training in a semi-supervised manner, reusing the information contained in the relatively noisy data. For better label optimization, we propose centroid-induced label updating to correct the noisy labels themselves: concretely, we compute the centroid of each class from the relatively clean data and use these centroids to update the labels of the relatively noisy data. Finally, experiments on both synthetic and real-world datasets demonstrate the advantageous performance of the proposed method compared to state-of-the-art baselines.
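The two label-centric steps of the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a common small-loss heuristic for the clean/noisy division (the paper's label-quality estimate may differ), operates on precomputed feature vectors, and the function names `divide_by_loss` and `centroid_label_update` are invented for this sketch. The interpolation consistency training step is omitted.

```python
import numpy as np

def divide_by_loss(losses, clean_fraction=0.5):
    # Heuristic data division: treat the lowest-loss samples as
    # "relatively clean" and the rest as "relatively noisy".
    order = np.argsort(losses)
    n_clean = int(len(losses) * clean_fraction)
    return order[:n_clean], order[n_clean:]

def centroid_label_update(features, labels, clean_idx, noisy_idx, num_classes):
    # Class centroids are computed from the relatively clean data only
    # (assumes every class has at least one clean sample).
    centroids = np.stack([
        features[clean_idx][labels[clean_idx] == c].mean(axis=0)
        for c in range(num_classes)
    ])
    # Relabel each relatively noisy sample with its nearest centroid's class.
    new_labels = labels.copy()
    for i in noisy_idx:
        dists = np.linalg.norm(centroids - features[i], axis=1)
        new_labels[i] = int(np.argmin(dists))
    return new_labels
```

In the full framework these steps would run inside the training loop, so the division and the corrected labels are refreshed as the network's loss estimates improve.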




Updated: 2021-05-12