Generalizing from a Few Examples: A Survey on Few-shot Learning
ACM Computing Surveys (IF 23.8). Pub Date: 2020-06-12. DOI: 10.1145/3386252
Yaqing Wang, Quanming Yao, James T. Kwok, Lionel M. Ni

Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small. Recently, Few-shot Learning (FSL) has been proposed to tackle this problem. Using prior knowledge, FSL can rapidly generalize to new tasks containing only a few samples with supervised information. In this article, we conduct a thorough survey to fully understand FSL. Starting from a formal definition of FSL, we distinguish FSL from several relevant machine learning problems. We then point out that the core issue in FSL is that the empirical risk minimizer is unreliable. Based on how prior knowledge can be used to handle this core issue, we categorize FSL methods from three perspectives: (i) data, which uses prior knowledge to augment the supervised experience; (ii) model, which uses prior knowledge to reduce the size of the hypothesis space; and (iii) algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space. With this taxonomy, we review and discuss the pros and cons of each category. Promising directions, in the aspects of the FSL problem setups, techniques, applications, and theories, are also proposed to provide insights for future research.
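To make the setting concrete, below is a minimal, hypothetical sketch of the standard N-way K-shot evaluation protocol that FSL methods are typically tested on, paired with a nearest-prototype classifier in the spirit of the embedding-based methods the survey groups under the "model" perspective. All function names and the synthetic data are illustrative, not from the paper.

```python
import numpy as np

def make_episode(data_by_class, n_way=3, k_shot=2, q_queries=2, rng=None):
    """Sample an N-way K-shot episode: pick n_way classes, then split each
    class's examples into a small labelled support set (k_shot points) and
    a query set (q_queries points) to be classified."""
    rng = rng or np.random.default_rng(0)
    classes = rng.choice(sorted(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, cls in enumerate(classes):
        pts = np.asarray(data_by_class[cls])
        idx = rng.permutation(len(pts))
        support.append((pts[idx[:k_shot]], label))
        query.append((pts[idx[k_shot:k_shot + q_queries]], label))
    return support, query

def nearest_prototype(support, x):
    """Classify x by Euclidean distance to each class's support-set mean
    (its 'prototype') -- one simple way prior structure shrinks the
    hypothesis space to distance-to-prototype decision rules."""
    protos = [(pts.mean(axis=0), label) for pts, label in support]
    dists = [np.linalg.norm(x - proto) for proto, _ in protos]
    return protos[int(np.argmin(dists))][1]
```

With well-separated synthetic clusters, every query point lands nearest its own class prototype, illustrating how a fixed, prior-driven decision rule can generalize from only K labelled examples per class.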

Updated: 2020-06-12