Meta-Learned Attribute Self-Gating for Continual Generalized Zero-Shot Learning
arXiv - CS - Artificial Intelligence. Pub Date: 2021-02-23. arXiv:2102.11856. Vinay Kumar Verma, Kevin Liang, Nikhil Mehta, Lawrence Carin
Zero-shot learning (ZSL) has been shown to be a promising approach to
generalizing a model to categories unseen during training by leveraging class
attributes, but challenges still remain. Recently, methods using generative
models to combat bias towards classes seen during training have pushed the
state of the art of ZSL, but these generative models can be slow or
computationally expensive to train. Additionally, while many previous ZSL
methods assume a one-time adaptation to unseen classes, in reality, the world
is always changing, necessitating a constant adjustment for deployed models.
Models unprepared to handle a sequential stream of data are likely to
experience catastrophic forgetting. We propose a meta-continual zero-shot
learning (MCZSL) approach to address both these issues. In particular, by
pairing self-gating of attributes and scaled class normalization with
meta-learning based training, we outperform state-of-the-art results while
training our models substantially faster ($>100\times$) than expensive
generative-based approaches. We demonstrate this
by performing experiments on five standard ZSL datasets (CUB, aPY, AWA1, AWA2
and SUN) in both generalized zero-shot learning and generalized continual
zero-shot learning settings.
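The abstract names its two architectural ingredients only briefly. As a rough illustration (not the authors' implementation), the following minimal PyTorch sketch shows what self-gating of class attributes and a scaled, normalized class classifier could look like; `AttributeSelfGating`, `scaled_class_logits`, the `scale` value, and the random inputs are assumptions for illustration, though CUB does have 200 classes with 312-dimensional attribute vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeSelfGating(nn.Module):
    """Sketch: a learned sigmoid gate re-weights each class-attribute
    dimension before the attributes are projected into visual space."""
    def __init__(self, attr_dim: int, feat_dim: int):
        super().__init__()
        self.gate = nn.Linear(attr_dim, attr_dim)  # per-attribute gate logits
        self.proj = nn.Linear(attr_dim, feat_dim)  # attributes -> visual space

    def forward(self, attrs: torch.Tensor) -> torch.Tensor:
        # attrs: (num_classes, attr_dim) semantic attribute vectors
        gated = attrs * torch.sigmoid(self.gate(attrs))  # self-gating
        return self.proj(gated)                          # class prototypes

def scaled_class_logits(features, prototypes, scale=10.0):
    # Cosine similarity between image features and class prototypes,
    # scaled by a temperature-like factor; class normalization is
    # approximated here by L2-normalizing both sides.
    f = F.normalize(features, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    return scale * f @ p.t()

# Usage: score a batch of image features against all class prototypes.
model = AttributeSelfGating(attr_dim=312, feat_dim=2048)  # CUB: 312 attributes
attrs = torch.rand(200, 312)   # 200 CUB classes, placeholder attribute vectors
feats = torch.randn(8, 2048)   # e.g. ResNet-101 image features
logits = scaled_class_logits(feats, model(attrs))
print(logits.shape)            # torch.Size([8, 200])
```

In a continual setting, prototypes for newly arriving classes would be produced by the same gating module from their attribute vectors alone, which is what allows adaptation without retraining a generative model.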
Updated: 2021-02-24