Lifelong Dual Generative Adversarial Nets Learning in Tandem
IEEE Transactions on Cybernetics (IF 9.4) Pub Date: 2023-06-01, DOI: 10.1109/tcyb.2023.3271388
Fei Ye, Adrian G. Bors

Continually capturing novel concepts without forgetting is one of the most critical capabilities sought in artificial intelligence systems. However, even the most advanced deep learning networks are prone to quickly forgetting previously learned knowledge after training on new data. The proposed lifelong dual generative adversarial networks (LD-GANs) consist of two generative adversarial networks (GANs), a Teacher and an Assistant, that teach each other in tandem while successively learning a series of tasks. A single discriminator decides the realism of the images generated by the dual GANs. A new training algorithm, called lifelong self-knowledge distillation (LSKD), is proposed for training the LD-GAN on each new task during lifelong learning (LLL). Within an adversarial game setting, LSKD transfers knowledge from the more knowledgeable player to the other while jointly learning the information in a newly given dataset. In contrast to other LLL models, LD-GANs are memory efficient and do not require freezing any parameters after learning each given task. Furthermore, we extend the LD-GANs to serve as the Teacher module in a Teacher–Student network for assimilating data representations across several domains during LLL. Experimental results indicate better performance for the proposed framework in unsupervised lifelong representation learning when compared with other methods.
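
The abstract describes two generators sharing a single discriminator, with knowledge flowing from the currently more knowledgeable player to the other while both learn the new task. The PyTorch sketch below illustrates one plausible reading of that setup; the network sizes, the MSE distillation term, and the rule that uses the shared discriminator's scores to pick the "more knowledgeable" player are all assumptions for illustration, not the paper's actual LSKD objective.

```python
# Minimal sketch of the LD-GAN training idea from the abstract (assumptions
# noted in comments; the paper's actual LSKD loss may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT = 64

def make_generator():
    # Hypothetical generator producing flattened 28x28 images.
    return nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                         nn.Linear(256, 784), nn.Tanh())

teacher = make_generator()    # "Teacher" generator
assistant = make_generator()  # "Assistant" generator
# A single discriminator shared by both generators, as in the abstract.
disc = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                     nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(
    list(teacher.parameters()) + list(assistant.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

def train_step(real, distill_weight=1.0):
    """One training step on a batch from the current task."""
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    z = torch.randn(n, LATENT)
    fake_t, fake_a = teacher(z), assistant(z)

    # Discriminator update: real data vs. samples from both generators.
    opt_d.zero_grad()
    d_loss = (bce(disc(real), ones)
              + bce(disc(fake_t.detach()), zeros)
              + bce(disc(fake_a.detach()), zeros))
    d_loss.backward()
    opt_d.step()

    # Generator update: adversarial loss on the new data plus a mutual
    # distillation term. Here the discriminator's scores decide which
    # player currently "knows more", and the other generator is pulled
    # toward its outputs on the same latent codes (an assumed criterion).
    opt_g.zero_grad()
    adv = bce(disc(fake_t), ones) + bce(disc(fake_a), ones)
    with torch.no_grad():
        teacher_leads = disc(fake_t).mean() > disc(fake_a).mean()
    if teacher_leads:
        distill = F.mse_loss(fake_a, fake_t.detach())
    else:
        distill = F.mse_loss(fake_t, fake_a.detach())
    (adv + distill_weight * distill).backward()
    opt_g.step()

# Example: one step on a dummy batch of flattened images in [-1, 1].
train_step(torch.rand(32, 784) * 2 - 1)
```

Because both generators share one discriminator and one optimizer state, this layout stays memory-light compared with keeping a frozen copy of the model per task, which matches the memory-efficiency claim in the abstract.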

Updated: 2023-06-01