ACAE-REMIND for online continual learning with compressed feature replay
Pattern Recognition Letters (IF 5.1), Pub Date: 2021-07-15, DOI: 10.1016/j.patrec.2021.06.025
Kai Wang, Joost van de Weijer, Luis Herranz

Online continual learning aims to learn from a non-IID stream of data drawn from a number of different tasks, where the learner is only allowed to consider each data point once. Methods are typically allowed to use a limited buffer to store some of the images in the stream. Recently, it was found that feature replay, in which an intermediate-layer representation of the image is stored (or generated), leads to results superior to image replay while requiring less memory. Quantized exemplars can further reduce memory usage. However, a drawback of these methods is that they use a fixed (or very intransigent) backbone network, which significantly limits the learning of representations that can discriminate between all tasks. To address this problem, we propose an auxiliary classifier auto-encoder (ACAE) module for feature replay at intermediate layers with high compression rates. The reduced memory footprint per image allows us to save more exemplars for replay. In our experiments, we conduct task-agnostic evaluation under the online continual learning setting and achieve state-of-the-art performance on the ImageNet-Subset, CIFAR100, and CIFAR10 datasets.
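The abstract does not include implementation details, but the following minimal PyTorch sketch illustrates the general idea: an auto-encoder inserted at an intermediate backbone layer compresses feature maps into small codes, an auxiliary classifier keeps those codes class-discriminative, and a fixed-capacity buffer of stored codes is replayed later. All names and sizes here (`ACAE`, `FeatureReplayBuffer`, the 2x2 strided convolutions, the split point) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a backbone split at an intermediate layer.
# Names and layer sizes are illustrative, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ACAE(nn.Module):
    """Auxiliary classifier auto-encoder: compresses intermediate feature
    maps into small codes for replay, with an auxiliary classifier that
    keeps the compressed codes class-discriminative."""

    def __init__(self, feat_channels: int, code_channels: int, num_classes: int):
        super().__init__()
        # Encoder halves the spatial resolution and reduces channels,
        # shrinking the per-exemplar memory footprint.
        self.encoder = nn.Conv2d(feat_channels, code_channels, kernel_size=2, stride=2)
        # Decoder restores the original feature shape so replayed codes can
        # be fed through the remaining (upper) layers of the backbone.
        self.decoder = nn.ConvTranspose2d(code_channels, feat_channels, kernel_size=2, stride=2)
        # Auxiliary classifier operating directly on the compressed code.
        self.aux_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(code_channels, num_classes)
        )

    def forward(self, feats: torch.Tensor):
        code = self.encoder(feats)
        recon = self.decoder(code)
        aux_logits = self.aux_head(code)
        return code, recon, aux_logits

    def loss(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Reconstruction keeps decoded features faithful; the auxiliary
        # cross-entropy keeps the code discriminative across tasks.
        code, recon, aux_logits = self(feats)
        return F.mse_loss(recon, feats) + F.cross_entropy(aux_logits, labels)


class FeatureReplayBuffer:
    """Fixed-capacity buffer of compressed codes. Because codes are much
    smaller than images, more exemplars fit in the same memory budget."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.codes, self.labels = [], []

    def add(self, code: torch.Tensor, label: int):
        # A simple fill-until-full policy; real methods use smarter sampling.
        if len(self.codes) < self.capacity:
            self.codes.append(code.detach().cpu())
            self.labels.append(label)

    def sample(self, batch_size: int):
        idx = torch.randint(len(self.codes), (batch_size,))
        codes = torch.stack([self.codes[i] for i in idx])
        labels = torch.tensor([self.labels[i] for i in idx])
        return codes, labels
```

During replay, stored codes would be decoded back into intermediate features and passed through the remaining upper layers of the backbone alongside features from the current stream batch; the auxiliary cross-entropy on the code is what keeps the compressed representation discriminative across tasks.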



Updated: 2021-07-29