Memory-Efficient Class-Incremental Learning for Image Classification
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2), Pub Date: 2021-05-03, DOI: 10.1109/tnnls.2021.3072041
Hanbin Zhao, Hui Wang, Yongjian Fu, Fei Wu, Xi Li

Under memory-resource-limited constraints, class-incremental learning (CIL) usually suffers from the “catastrophic forgetting” problem when the joint classification model is updated on the arrival of newly added classes. To cope with forgetting, many CIL methods transfer the knowledge of old classes by preserving a few exemplar samples in a size-constrained memory buffer. To utilize the memory buffer more efficiently, we propose to keep more auxiliary low-fidelity exemplar samples rather than the original high-fidelity ones. This memory-efficient exemplar-preserving scheme makes old-class knowledge transfer more effective. However, the low-fidelity exemplar samples are often distributed in a domain different from that of the original exemplar samples, that is, a domain shift arises. To alleviate this problem, we propose a duplet learning scheme that constructs domain-compatible feature extractors and classifiers, which greatly narrows the above domain gap. As a result, these low-fidelity auxiliary exemplar samples can moderately replace the original exemplar samples at a lower memory cost. In addition, we present a robust classifier adaptation scheme, which further refines the biased classifier (learned with samples carrying distillation label knowledge about old classes) using samples with pure true class labels. Experimental results demonstrate the effectiveness of this work against state-of-the-art approaches. We will release the code, baselines, and training statistics for all models to facilitate future research.
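To make the two core ideas concrete, below is a minimal PyTorch sketch of (a) storing downsampled, low-fidelity exemplars to stretch a fixed memory budget, and (b) one plausible reading of a duplet-style alignment loss that pulls the features of a low-fidelity exemplar toward those of its original counterpart. All names, resolutions, and the loss form here are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of memory-efficient exemplar storage and a
# duplet-style feature alignment loss. Assumes PyTorch; the resolutions
# and function names are assumptions for illustration only.
import torch
import torch.nn.functional as F

FULL_RES = 32   # assumed original input resolution (e.g., CIFAR-style 32x32)
LOW_RES = 16    # assumed low-fidelity resolution for stored exemplars

def store_exemplar(image: torch.Tensor) -> torch.Tensor:
    """Downsample an exemplar (C, H, W) before it enters the memory buffer.

    A 16x16 copy costs one quarter of the memory of a 32x32 original,
    so the same size-constrained buffer holds roughly 4x as many
    old-class exemplars.
    """
    return F.interpolate(image.unsqueeze(0), size=(LOW_RES, LOW_RES),
                         mode="bilinear", align_corners=False).squeeze(0)

def duplet_alignment_loss(feat_extractor, original: torch.Tensor,
                          low_fidelity: torch.Tensor) -> torch.Tensor:
    """Push the features of a low-fidelity exemplar toward those of its
    original counterpart, so the feature extractor (and hence the
    classifier) becomes compatible with both domains."""
    # Upsample the stored low-fidelity copy back to the network input size.
    restored = F.interpolate(low_fidelity.unsqueeze(0),
                             size=(FULL_RES, FULL_RES),
                             mode="bilinear", align_corners=False)
    f_orig = feat_extractor(original.unsqueeze(0))
    f_low = feat_extractor(restored)
    return F.mse_loss(f_low, f_orig)
```

In a full CIL pipeline such an alignment term would be combined with the usual classification and distillation losses at each incremental step; the sketch above only isolates the storage and domain-alignment mechanics described in the abstract.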
