Reduce the Difficulty of Incremental Learning With Self-Supervised Learning
IEEE Access (IF 3.9) Pub Date: 2021-09-14, DOI: 10.1109/access.2021.3112745
Linting Guan , Yan Wu

Incremental learning requires a model to continually learn new tasks without forgetting those it has already learned. However, when a deep learning model learns new tasks, it catastrophically forgets the tasks it learned before. Researchers have proposed methods to alleviate catastrophic forgetting, but these methods only encourage the model to extract features relevant to previously learned tasks while suppressing the extraction of features useful for tasks it has not yet learned. As a result, when the model learns a new task incrementally, it must quickly learn to extract the features relevant to that task; this requires a significant change in the model's feature-extraction behavior, which increases the difficulty of learning. The model is therefore caught in a dilemma: reduce the learning rate to retain existing knowledge, or increase it to learn new knowledge quickly. We present a study that aims to alleviate this problem by introducing self-supervised learning into incremental learning methods. We believe that a task-independent self-supervised learning signal helps the model extract features that are not only effective for the task currently being learned but also suitable for tasks that have not yet been learned. We give a detailed algorithm combining self-supervised learning signals with incremental learning methods. Extensive experiments on several datasets show that the self-supervised signal significantly improves the accuracy of most incremental learning methods without requiring additional labeled data. We found that the self-supervised learning signal works best for replay-based incremental learning methods.
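
This excerpt does not reproduce the paper's algorithm. As a rough illustration of the idea in the abstract, the PyTorch sketch below adds a task-independent self-supervised loss to a replay-based training step. Rotation prediction is used here only because it is a common task-agnostic signal; the paper's actual signal, architecture, and loss weighting may differ, and all identifiers (Net, rotate_batch, train_step, ssl_weight) are illustrative assumptions rather than names from the paper.

# Minimal sketch, not the authors' exact algorithm: a shared feature
# extractor trained with (1) a supervised loss on new + replayed data and
# (2) a task-independent rotation-prediction loss on the same inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """Shared backbone with two heads: class logits and a 4-way head
    predicting which multiple of 90 degrees the input was rotated by."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(32, num_classes)
        self.rot_head = nn.Linear(32, 4)  # 0 / 90 / 180 / 270 degrees

    def forward(self, x):
        h = self.features(x)
        return self.cls_head(h), self.rot_head(h)

def rotate_batch(x):
    """Rotate each image by a random multiple of 90 degrees and return
    the rotation indices as self-supervised labels."""
    k = torch.randint(0, 4, (x.size(0),), device=x.device)
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(x, k)])
    return rotated, k

def train_step(model, opt, x_new, y_new, x_replay, y_replay, ssl_weight=1.0):
    """One replay-based step: supervised loss on new and replayed samples,
    plus a task-independent rotation-prediction loss on both."""
    x = torch.cat([x_new, x_replay])
    y = torch.cat([y_new, y_replay])
    x_rot, rot_labels = rotate_batch(x)

    logits, _ = model(x)           # supervised head on original inputs
    _, rot_logits = model(x_rot)   # self-supervised head on rotated inputs

    loss = (F.cross_entropy(logits, y)
            + ssl_weight * F.cross_entropy(rot_logits, rot_labels))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random tensors standing in for real data and a
# hypothetical replay buffer's samples:
model = Net(num_classes=10)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x_new, y_new = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_old, y_old = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(train_step(model, opt, x_new, y_new, x_old, y_old))

Because the rotation labels are generated from the data itself, this auxiliary loss needs no extra annotation, which matches the abstract's claim that the improvement comes "without the need for additional labeled data".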

Updated: 2021-09-24