Few-shot learning with adaptively initialized task optimizer: a practical meta-learning approach
Machine Learning (IF 7.5), Pub Date: 2019-10-10, DOI: 10.1007/s10994-019-05838-7
Han-Jia Ye, Xiang-Rong Sheng, De-Chuan Zhan

Considering the data collection and labeling costs in real-world applications, training a model with limited examples is an essential problem in machine learning, visual recognition, etc. Directly training a model on such few-shot learning (FSL) tasks falls into the over-fitting dilemma, which makes an effective task-level inductive bias a key form of supervision. By treating the few-shot task as an entirety, extracting task-level patterns, and learning a task-agnostic model initialization, the model-agnostic meta-learning (MAML) framework enables the application of various models to FSL tasks. Given a training set with a few examples, MAML optimizes a model via a fixed number of gradient descent steps from an initial point chosen beforehand. Although this general framework achieves empirically satisfactory results, its initialization neglects task-specific characteristics and aggravates the computational burden as well. In this manuscript, we propose our AdaptiVely InitiAlized Task OptimizeR (Aviator) approach for few-shot learning, which incorporates task context into the determination of the model initialization. This task-specific initialization facilitates the model optimization process, so that high-quality model solutions are obtained efficiently. To this end, we decouple the model and apply a set transformation over the training set to determine the initial top-layer classifier. A re-parameterization of the first-order gradient descent approximation facilitates gradient back-propagation. Experiments on synthetic and benchmark data sets validate that our Aviator approach achieves state-of-the-art performance, and visualization results demonstrate the task-adaptive features of our proposed method.
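The core idea — initializing the top-layer classifier from a set transformation over the support set, then running a fixed number of gradient descent steps as in MAML — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: it approximates the learned set transformation by per-class averaging of support embeddings, and all function names (`adaptive_init`, `inner_loop`) are hypothetical.

```python
import numpy as np

def adaptive_init(support_x, support_y, n_classes):
    """Task-adaptive initialization of the top-layer classifier.

    The paper applies a learned set transformation over the training
    (support) set; here, for illustration only, that transformation is
    approximated by averaging the support embeddings of each class.
    """
    d = support_x.shape[1]
    W = np.zeros((n_classes, d))
    for c in range(n_classes):
        W[c] = support_x[support_y == c].mean(axis=0)
    return W

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def inner_loop(W, x, y, n_steps=5, lr=0.1):
    """MAML-style task optimizer: a fixed number of gradient descent
    steps on the support-set cross-entropy, starting from W."""
    n = x.shape[0]
    Y = np.eye(W.shape[0])[y]           # one-hot labels, (n, C)
    for _ in range(n_steps):
        p = softmax(x @ W.T)            # class probabilities, (n, C)
        grad = (p - Y).T @ x / n        # dL/dW for mean cross-entropy
        W = W - lr * grad
    return W

# Toy 2-way, 5-shot task: two Gaussian clusters in a 4-D feature space.
rng = np.random.default_rng(0)
support_x = np.vstack([
    rng.normal(0.0, 0.3, (5, 4)) + np.eye(4)[0],
    rng.normal(0.0, 0.3, (5, 4)) + np.eye(4)[1],
])
support_y = np.array([0] * 5 + [1] * 5)

W_init = adaptive_init(support_x, support_y, n_classes=2)
W_adapted = inner_loop(W_init, support_x, support_y)
```

Because the initialization already encodes task context, the inner loop starts close to a good solution, which is the efficiency argument made in the abstract.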

Updated: 2019-10-10