Unsupervised meta-learning for few-shot learning
Pattern Recognition (IF 7.5), Pub Date: 2021-03-19, DOI: 10.1016/j.patcog.2021.107951
Hui Xu, Jiaxing Wang, Hao Li, Deqiang Ouyang, Jie Shao

Meta-learning is an effective tool for the few-shot learning problem, in which new data must be classified from only a few training examples. However, when used for classification, meta-learning requires large labeled datasets, which are not always available in practice. In this paper, we propose an unsupervised meta-learning algorithm that learns from an unlabeled dataset and adapts to downstream human-specified tasks with only a few labeled examples. The proposed algorithm constructs tasks using clustering embedding methods and data augmentation functions so as to satisfy two critical class-distinction requirements. To alleviate the biases and the weak-diversity problem introduced by data augmentation functions, the proposed algorithm uses two methods: shifting the fed data between the inner and outer loops, and a novel data augmentation function. We further provide a theoretical analysis of the effect of augmented data in the inner/outer loop. Experiments on the MiniImagenet and Omniglot datasets demonstrate that the proposed unsupervised meta-learning approach outperforms other tested unsupervised representation learning approaches and two recent unsupervised meta-learning baselines. Compared with supervised meta-learning approaches, certain results produced by our method come quite close to those of such methods trained on human-designed labeled tasks.
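The task-construction step described above can be illustrated with a minimal sketch: cluster the embeddings of unlabeled data to obtain pseudo-labels, then sample N-way K-shot episodes treating each cluster as a pseudo-class. This is not the paper's implementation; the function names (`kmeans`, `sample_task`) and all parameter choices are illustrative assumptions, and the clustering here runs on raw features rather than learned embeddings.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means (illustrative stand-in for a clustering-embedding
    method); returns a pseudo-label per unlabeled example."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels

def sample_task(X, pseudo_labels, n_way=5, k_shot=1, q_query=5, seed=0):
    """Build one N-way K-shot episode using cluster ids as pseudo-classes.
    Returns support (n_way, k_shot, d) and query (n_way, q_query, d) arrays."""
    rng = np.random.default_rng(seed)
    classes = rng.choice(np.unique(pseudo_labels), n_way, replace=False)
    support, query = [], []
    for c in classes:
        pool = np.where(pseudo_labels == c)[0]
        # Sample with replacement only if the cluster is too small.
        idx = rng.choice(pool, k_shot + q_query,
                         replace=len(pool) < k_shot + q_query)
        support.append(X[idx[:k_shot]])
        query.append(X[idx[k_shot:]])
    return np.stack(support), np.stack(query)

# Example: 200 unlabeled 8-d feature vectors -> a 3-way 1-shot episode.
X = np.random.default_rng(0).normal(size=(200, 8))
support, query = sample_task(X, kmeans(X, 4), n_way=3, k_shot=1, q_query=2)
```

The paper additionally applies data augmentation when forming episodes and controls where augmented data enters the inner versus outer loop; that step is omitted here for brevity.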




Updated: 2021-03-27