Knowledge distillation for incremental learning in semantic segmentation
Computer Vision and Image Understanding (IF 4.3), Pub Date: 2021-01-23, DOI: 10.1016/j.cviu.2021.103167
Umberto Michieli, Pietro Zanuttigh

Deep learning architectures have shown remarkable results in scene understanding problems; however, they exhibit a critical drop in performance when required to incrementally learn new tasks without forgetting old ones. This catastrophic forgetting phenomenon impacts the deployment of artificial intelligence in real-world scenarios where systems need to learn new and different representations over time. Current approaches to incremental learning deal only with image classification and object detection tasks, while in this work we formally introduce incremental learning for semantic segmentation. We tackle the problem by applying various knowledge distillation techniques on the previous model: in this way we retain the information about previously learned classes whilst updating the current model to learn the new ones. We developed four main knowledge distillation methodologies working on both the output layer and the internal feature representations. We do not store any images belonging to previous training stages, and only the last model is used to preserve high accuracy on the previously learned classes. Extensive experimental results on the Pascal VOC2012 and MSRC-v2 datasets show the effectiveness of the proposed approaches in several incremental learning scenarios.
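
The abstract describes distillation applied both to the output layer and to the internal feature representations of the frozen model from the previous incremental step. Below is a minimal PyTorch sketch of this general idea, not the authors' implementation: the model interface (returning logits and features), the temperature, the loss weights, and the helper names are illustrative assumptions.

    # Sketch of output- and feature-level distillation for incremental
    # semantic segmentation. `old_model` is frozen from the previous step;
    # `new_model` is trained on the new classes. No old images are stored.
    import torch
    import torch.nn.functional as F

    def distillation_losses(new_logits, old_logits, new_feats, old_feats,
                            num_old_classes, T=2.0):
        """Distillation terms that preserve knowledge of old classes."""
        # Output-level distillation: match the softened probabilities the
        # previous model assigns to the first `num_old_classes` channels.
        p_old = F.softmax(old_logits[:, :num_old_classes] / T, dim=1)
        log_p_new = F.log_softmax(new_logits[:, :num_old_classes] / T, dim=1)
        loss_out = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

        # Feature-level distillation: keep the new model's internal
        # representation close to the previous model's (shapes assumed equal).
        loss_feat = F.mse_loss(new_feats, old_feats)
        return loss_out, loss_feat

    def training_step(new_model, old_model, images, labels,
                      num_old_classes, lambda_out=1.0, lambda_feat=0.1):
        # The frozen previous model only provides distillation targets.
        with torch.no_grad():
            old_logits, old_feats = old_model(images)  # assumed (logits, features)
        new_logits, new_feats = new_model(images)

        # Standard supervised loss on the new classes.
        loss_ce = F.cross_entropy(new_logits, labels, ignore_index=255)
        loss_out, loss_feat = distillation_losses(new_logits, old_logits,
                                                  new_feats, old_feats,
                                                  num_old_classes)
        return loss_ce + lambda_out * loss_out + lambda_feat * loss_feat

The weights lambda_out and lambda_feat balance plasticity on the new classes against retention of the old ones; the paper evaluates several such distillation variants, of which this combined loss is only one possible instance.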



Updated: 2021-02-05