SID: Incremental learning for anchor-free object detection via Selective and Inter-related Distillation
Computer Vision and Image Understanding (IF 4.5), Pub Date: 2021-05-29, DOI: 10.1016/j.cviu.2021.103229
Can Peng, Kun Zhao, Sam Maksoud, Meng Li, Brian C. Lovell

Incremental learning requires a model to continually learn new tasks from streaming data. However, traditional fine-tuning of a well-trained deep neural network on a new task dramatically degrades performance on the old task — a problem known as catastrophic forgetting. In this paper, we address this issue in the context of anchor-free object detection, a recent trend in computer vision prized for being simple, fast, and flexible. Naively applying existing incremental learning strategies fails on these anchor-free detectors because those strategies do not account for the detectors' specific model structures. To deal with the challenges of incremental learning on anchor-free object detectors, we propose a novel incremental learning paradigm called Selective and Inter-related Distillation (SID). In addition, we propose a novel evaluation metric to better assess the performance of detectors under incremental learning conditions. By selectively distilling at the proper locations and further transferring additional instance-relation knowledge, our method demonstrates significant advantages on the benchmark datasets PASCAL VOC and COCO.
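The abstract's two ingredients — distilling only at selected locations and transferring instance-relation knowledge — can be illustrated with a minimal NumPy sketch. This is not the paper's actual formulation: the selection criterion (keeping the locations where teacher and student already agree most, controlled by a hypothetical `keep_ratio`), the function names, and the use of pairwise-distance matrices for the relation term are all illustrative assumptions.

```python
import numpy as np

def selective_distill_loss(teacher_feat, student_feat, keep_ratio=0.5):
    """Distill only at a selected subset of spatial locations.

    teacher_feat, student_feat: feature maps of shape (C, H, W).
    keep_ratio is a hypothetical hyperparameter; the paper's actual
    selection rule differs.
    """
    # Per-location squared discrepancy, averaged over channels -> (H, W).
    err = ((teacher_feat - student_feat) ** 2).mean(axis=0)
    # Keep the locations with the smallest discrepancy (illustrative criterion).
    k = max(1, int(keep_ratio * err.size))
    idx = np.argsort(err, axis=None)[:k]
    return err.ravel()[idx].mean()

def relation_distill_loss(teacher_emb, student_emb):
    """Transfer inter-instance relations via pairwise-distance matrices.

    teacher_emb, student_emb: instance embeddings of shape (N, D).
    """
    def pdist(x):
        # Full N x N Euclidean distance matrix.
        diff = x[:, None, :] - x[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))
    # Penalize differences between the two relation structures.
    return np.abs(pdist(teacher_emb) - pdist(student_emb)).mean()
```

Both terms would typically be added, with weighting coefficients, to the detector's standard loss on the new task; when the student matches the teacher exactly, both terms are zero.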




Updated: 2021-06-02