Learning Adaptive Classifiers Synthesis for Generalized Few-Shot Learning
International Journal of Computer Vision (IF 11.6) Pub Date: 2021-04-19, DOI: 10.1007/s11263-020-01381-4
Han-Jia Ye, Hexiang Hu, De-Chuan Zhan

Object recognition in the real world requires handling long-tailed or even open-ended data. An ideal visual system needs to recognize the populated head visual concepts reliably while efficiently learning about emerging new tail categories from a few training instances. Class-balanced many-shot learning and few-shot learning each tackle one side of this problem, by either learning strong classifiers for the head or learning to learn few-shot classifiers for the tail. In this paper, we investigate the problem of generalized few-shot learning (GFSL), in which a deployed model is required to learn tail categories from few shots while simultaneously classifying the head classes. We propose ClAssifier SynThesis LEarning (Castle), a learning framework that learns how to synthesize calibrated few-shot classifiers, in addition to the multi-class classifiers of the head classes, with a shared neural dictionary, shedding light on inductive GFSL. Furthermore, we propose an adaptive version of Castle (aCastle) that adapts the head classifiers conditioned on the incoming tail training examples, yielding a framework that allows effective backward knowledge transfer. As a consequence, aCastle can handle GFSL with classes from heterogeneous domains effectively. Castle and aCastle demonstrate superior performance over existing GFSL algorithms and strong baselines on the MiniImageNet and TieredImageNet datasets. More interestingly, they outperform previous state-of-the-art methods when evaluated with the standard few-shot learning criteria.
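To make the framework concrete, below is a minimal, hypothetical sketch (in PyTorch) of the classifier-synthesis idea described in the abstract: few-shot class prototypes attend over a shared, learnable neural dictionary to compose tail classifiers, which are then concatenated with the head classifiers for joint generalized classification. The module names, dimensions, and the specific attention formulation are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; all names, dimensions, and the attention form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralDictionarySynthesizer(nn.Module):
    """Synthesizes classifier weights for novel (tail) classes by attending
    over a learned, shared neural dictionary, as sketched from the abstract."""

    def __init__(self, feat_dim: int = 64, dict_size: int = 32):
        super().__init__()
        # Shared neural dictionary: learnable keys and values.
        self.keys = nn.Parameter(torch.randn(dict_size, feat_dim))
        self.values = nn.Parameter(torch.randn(dict_size, feat_dim))

    def forward(self, prototypes: torch.Tensor) -> torch.Tensor:
        # prototypes: (num_tail_classes, feat_dim), e.g. mean embeddings of the few shots.
        attn = F.softmax(prototypes @ self.keys.t() / prototypes.shape[-1] ** 0.5, dim=-1)
        residual = attn @ self.values                # dictionary-composed component
        synthesized = F.normalize(prototypes + residual, dim=-1)
        return synthesized                           # (num_tail_classes, feat_dim)


def generalized_fewshot_logits(features, head_weights, tail_prototypes, synthesizer):
    """Joint classification over head (many-shot) and synthesized tail classifiers."""
    tail_weights = synthesizer(tail_prototypes)
    all_weights = torch.cat([F.normalize(head_weights, dim=-1), tail_weights], dim=0)
    return F.normalize(features, dim=-1) @ all_weights.t()  # cosine logits over head + tail


if __name__ == "__main__":
    feat_dim, n_head, n_tail = 64, 20, 5
    synth = NeuralDictionarySynthesizer(feat_dim)
    feats = torch.randn(8, feat_dim)                 # embedded query images
    head_w = torch.randn(n_head, feat_dim)           # pretrained many-shot head classifiers
    protos = torch.randn(n_tail, feat_dim)           # few-shot tail class prototypes
    print(generalized_fewshot_logits(feats, head_w, protos, synth).shape)  # torch.Size([8, 25])
```

Cosine logits over the concatenated head and tail weights are used here simply to keep the two classifier sets on a comparable scale; the paper's actual calibration strategy may differ.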




Updated: 2021-04-19