Continual Learning Using Bayesian Neural Networks
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2) Pub Date: 2020-08-31, DOI: 10.1109/tnnls.2020.3017292
Honglin Li , Payam Barnaghi , Shirin Enshaeifar , Frieder Ganz

Continual learning allows models to learn and adapt to new changes and tasks over time. However, in continual and sequential learning scenarios, in which models are trained on different data with various distributions, neural networks (NNs) tend to forget previously learned knowledge. This phenomenon is often referred to as catastrophic forgetting. Catastrophic forgetting is an inevitable problem for continual learning models in dynamic environments. To address this issue, we propose a method, called continual Bayesian learning networks (CBLNs), which enables the networks to allocate additional resources to adapt to new tasks without forgetting the previously learned tasks. Using a Bayesian NN, CBLN maintains a mixture of Gaussian posterior distributions associated with different tasks. The proposed method optimizes the number of resources needed to learn each task and avoids an exponential increase in the number of resources involved in learning multiple tasks. The method does not need access to the past training data and can automatically choose suitable weights to classify data points at test time based on an uncertainty criterion. We have evaluated the method on the MNIST and UCR time-series data sets. The evaluation results show that the method addresses the catastrophic forgetting problem at a promising rate compared with state-of-the-art models.
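The abstract only sketches the test-time mechanism, so the following is a minimal, hypothetical Python/NumPy illustration of the idea of keeping one Gaussian weight posterior per task and selecting weights by an uncertainty criterion. It assumes a mean-field Gaussian posterior over the weights of a tiny linear classifier and uses Monte Carlo predictive entropy as the uncertainty measure; all names (GaussianPosterior, predictive_entropy, predict_with_task_selection) are illustrative and not taken from the paper.

```python
# Hypothetical sketch: per-task Gaussian weight posteriors with
# uncertainty-based task selection at test time. Not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

class GaussianPosterior:
    """Mean-field Gaussian posterior over the weights of a tiny linear model."""
    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.1, size=(n_in, n_out))
        self.log_sigma = np.full((n_in, n_out), -2.0)

    def sample_weights(self):
        # Reparameterization: w = mu + sigma * eps
        eps = rng.normal(size=self.mu.shape)
        return self.mu + np.exp(self.log_sigma) * eps

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(posterior, x, n_samples=20):
    """Monte Carlo estimate of the entropy of the mean predictive distribution."""
    probs = np.mean(
        [softmax(x @ posterior.sample_weights()) for _ in range(n_samples)],
        axis=0,
    )
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def predict_with_task_selection(task_posteriors, x):
    """For each input, use the task posterior whose predictions are least uncertain."""
    entropies = np.stack([predictive_entropy(p, x) for p in task_posteriors])
    chosen = entropies.argmin(axis=0)  # uncertainty criterion: lowest entropy wins
    outputs = np.stack([softmax(x @ p.mu) for p in task_posteriors])
    return outputs[chosen, np.arange(x.shape[0])], chosen

# Usage: two task posteriors over a 4-feature, 3-class toy problem.
tasks = [GaussianPosterior(4, 3) for _ in range(2)]
x = rng.normal(size=(5, 4))
probs, chosen_task = predict_with_task_selection(tasks, x)
print(chosen_task, probs.round(3))
```

In this reading, no past training data is needed at test time: each stored posterior is queried for its predictive uncertainty on the new input, and the least uncertain one is trusted, which matches the selection behavior the abstract describes.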

Updated: 2020-08-31