When Deep Learners Change Their Mind: Learning Dynamics for Active Learning
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2021-07-30, DOI: arxiv-2107.14707
Javad Zolfaghari Bengar, Bogdan Raducanu, Joost van de Weijer

Active learning aims to select samples to be annotated that yield the largest performance improvement for the learning algorithm. Many methods approach this problem by measuring the informativeness of samples, typically based on the certainty of the network's predictions for those samples. However, it is well known that neural networks are overly confident in their predictions and are therefore an untrustworthy source for assessing sample informativeness. In this paper, we propose a new informativeness-based active learning method. Our measure is derived from the learning dynamics of a neural network. More precisely, we track the label assignments of the unlabeled data pool during training. We capture the learning dynamics with a metric called label-dispersion, which is low when the network consistently assigns the same label to a sample throughout training and high when the assigned label changes frequently. We show that label-dispersion is a promising predictor of network uncertainty, and demonstrate on two benchmark datasets that an active learning algorithm based on label-dispersion obtains excellent results.
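The abstract describes label-dispersion only qualitatively (low when the per-epoch label assignment is stable, high when it changes frequently). A minimal sketch of one natural formalization, assuming dispersion is computed as one minus the fraction of recorded epochs agreeing with the most frequent label; the function names and the selection loop are hypothetical, not taken from the paper:

```python
from collections import Counter

def label_dispersion(predicted_labels):
    """Dispersion of the per-epoch label assignments for one unlabeled sample.

    predicted_labels: sequence of class labels the network assigned to this
    sample at each recorded training epoch. Returns 0.0 when the label never
    changes; approaches 1.0 as the assignments become more inconsistent.
    """
    counts = Counter(predicted_labels)
    modal_count = counts.most_common(1)[0][1]  # occurrences of the most frequent label
    return 1.0 - modal_count / len(predicted_labels)

def select_for_annotation(pool_predictions, budget):
    """Pick the `budget` pool samples with the highest dispersion scores."""
    scores = {i: label_dispersion(preds) for i, preds in enumerate(pool_predictions)}
    return sorted(scores, key=scores.get, reverse=True)[:budget]
```

For example, a sample labeled `[1, 1, 1, 1]` across four epochs scores 0.0, while `[0, 1, 0, 1]` scores 0.5; an active learning loop would then send the highest-scoring samples to the annotator.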

Updated: 2021-08-02