Batch mode active learning via adaptive criteria weights
Applied Intelligence ( IF 3.4 ) Pub Date : 2020-11-14 , DOI: 10.1007/s10489-020-01953-4
Hao Li , Yongli Wang , Yanchao Li , Gang Xiao , Peng Hu , Ruxin Zhao

Batch mode active learning (BMAL) aims to train a reliable classifier from scarce labeled examples by efficiently querying the most valuable unlabeled examples for supervision. In particular, BMAL typically selects examples based on well-designed criteria such as (un)certainty and representativeness. However, existing BMAL approaches make a naive trade-off between the criteria, simply combining them with fixed weights. This may yield suboptimal batch selection, since the criterion values of unlabeled examples fluctuate after the classifier is retrained on the newly augmented training set as learning progresses; instead, the criteria weights should be assigned adaptively. To overcome this problem, this paper proposes a novel Adaptive Criteria Weights active learning method, abbreviated ACW, which dynamically combines the example selection criteria to choose critical examples for semi-supervised classification. Concretely, we first assign an initial value to each criterion weight, then pick the current optimal batch from the unlabeled pool. Thereafter, the criteria weights are learned and adjusted adaptively by minimizing an objective function over the selected batch at each round. To the best of our knowledge, this work is the first attempt to explore adaptive criteria weights in the context of active learning. The superiority of ACW over existing state-of-the-art BMAL approaches is validated by extensive experimental results on widely used datasets.
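The loop the abstract describes (initialize criteria weights, select a batch by a weighted combination of criteria, then adjust the weights each round) can be sketched as follows. This is a minimal illustration, not the paper's method: the abstract does not specify ACW's objective function, so the `update_weights` rule below (nudging each weight toward its criterion's normalized mean score on the selected batch) is a hypothetical stand-in, and the uncertainty/representativeness scores are random placeholders.

```python
import numpy as np

def select_batch(uncertainty, representativeness, weights, batch_size):
    # Combine the two criteria with the current weights (convex combination)
    combined = weights[0] * uncertainty + weights[1] * representativeness
    # Query the top-scoring unlabeled examples as the batch
    return np.argsort(combined)[::-1][:batch_size]

def update_weights(weights, uncertainty, representativeness, batch, lr=0.1):
    # Hypothetical update rule (the paper minimizes an objective instead):
    # move each weight toward the normalized mean score its criterion
    # assigned to the selected batch, then re-normalize to sum to 1.
    scores = np.array([uncertainty[batch].mean(),
                       representativeness[batch].mean()])
    new_w = (1 - lr) * weights + lr * scores / scores.sum()
    return new_w / new_w.sum()

rng = np.random.default_rng(0)
n = 100
uncertainty = rng.random(n)          # placeholder criterion scores
representativeness = rng.random(n)   # placeholder criterion scores
weights = np.array([0.5, 0.5])       # initial criteria weights

for _ in range(3):  # active learning rounds
    batch = select_batch(uncertainty, representativeness, weights, batch_size=5)
    weights = update_weights(weights, uncertainty, representativeness, batch)
    # In a real loop, the batch would be labeled, the classifier retrained,
    # and the criterion scores recomputed here.
```

The key point the sketch captures is that the weights are not fixed across rounds: each selected batch feeds back into the weight update before the next selection.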




Updated: 2020-11-15