BinPlay: A Binary Latent Autoencoder for Generative Replay Continual Learning
arXiv - CS - Machine Learning. Pub Date: 2020-11-25, DOI: arxiv-2011.14960
Kamil Deja, Paweł Wawrzyński, Daniel Marczak, Wojciech Masarczyk, Tomasz Trzciński

We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks. The ability to extend the knowledge of a model with new data without forgetting previously learned samples is a fundamental requirement in continual learning. Existing solutions address it by either replaying past data from memory, which is unsustainable with growing training data, or by reconstructing past samples with generative models that are trained to generalize beyond training data and, hence, miss important details of individual samples. In this paper, we take the best of both worlds and introduce a novel generative rehearsal approach called BinPlay. Its main objective is to find a quality-preserving encoding of past samples into precomputed binary codes living in the autoencoder's binary latent space. Since we parametrize the formula for precomputing the codes only on the chronological indices of the training samples, the autoencoder is able to compute the binary embeddings of rehearsed samples on the fly without the need to keep them in memory. Evaluation on three benchmark datasets shows up to a twofold accuracy improvement of BinPlay versus competing generative replay methods.
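The abstract's key mechanism is that each past sample's binary latent code is a deterministic function of its chronological index, so codes never need to be stored. Below is a minimal Python sketch of that idea; the exact parametrization is not given in the abstract, so the index-seeded PRNG mapping, the code length `LATENT_BITS`, and the `decoder` callable are all illustrative assumptions, not the paper's actual formula.

```python
import numpy as np

LATENT_BITS = 64  # hypothetical code length; the paper's choice may differ


def binary_code(sample_index: int, bits: int = LATENT_BITS) -> np.ndarray:
    """Derive a binary latent code deterministically from a sample's
    chronological index. Because the index alone determines the code,
    past codes can be regenerated on the fly instead of kept in memory.
    (Assumed scheme: seed a PRNG with the index; BinPlay's formula may differ.)"""
    rng = np.random.default_rng(seed=sample_index)
    return rng.integers(0, 2, size=bits).astype(np.float32)


def rehearse(decoder, num_past_samples: int, batch_size: int = 32) -> np.ndarray:
    """Generative-rehearsal step: sample past indices, regenerate their
    binary codes, and decode them into replay samples.
    `decoder` stands in for the autoencoder's decoder network (hypothetical API)."""
    idx = np.random.default_rng().choice(num_past_samples, size=batch_size)
    codes = np.stack([binary_code(int(i)) for i in idx])
    return decoder(codes)  # reconstructed past samples for replay training
```

In a continual-learning loop, these decoded reconstructions would be mixed into each new task's training batches, giving replay without a growing memory buffer.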

Updated: 2020-12-01