Frosting Weights for Better Continual Training
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2020-01-07, DOI: arxiv-2001.01829
Xiaofeng Zhu, Feng Liu, Goce Trajcevski, Dingding Wang

Training a neural network model can be a lifelong learning process, and it is a computationally intensive one. A severe adverse effect in deep neural network models is that they can suffer from catastrophic forgetting when retrained on new data. To avoid such disruptions in continual learning, one appealing property is the additive nature of ensemble models. In this paper, we propose two generic ensemble approaches, gradient boosting and meta-learning, to solve the catastrophic forgetting problem when tuning pre-trained neural network models.
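To make the "additive nature of ensemble models" concrete, below is a minimal sketch (not the paper's exact algorithm) of a gradient-boosting-style continual-training setup: the pre-trained base network is frozen and a small booster network is fit on the new data, so the ensemble prediction is the sum of the two outputs and the old weights are never overwritten. The module sizes, the `new_loader` placeholder, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AdditiveEnsemble(nn.Module):
    """Frozen pre-trained base model plus a trainable booster (illustrative sketch)."""
    def __init__(self, base: nn.Module, booster: nn.Module):
        super().__init__()
        self.base = base
        self.booster = booster
        for p in self.base.parameters():
            p.requires_grad = False  # freeze old knowledge; only the booster adapts

    def forward(self, x):
        # Additive ensemble: sum the logits of the frozen base and the new booster.
        return self.base(x) + self.booster(x)

# Hypothetical usage on a new task (replace [] with a real DataLoader `new_loader`).
base = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))     # stands in for a pre-trained net
booster = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))  # small corrective model
model = AdditiveEnsemble(base, booster)
optimizer = torch.optim.Adam(booster.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for x, y in []:  # new_loader
    optimizer.zero_grad()
    loss = criterion(model(x), y)  # gradients flow only into the booster's weights
    loss.backward()
    optimizer.step()
```

Because the base model's parameters are excluded from the optimizer and frozen, its behavior on previously learned data is preserved by construction, which is the property the abstract appeals to.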

Updated: 2020-01-10