Cuepervision: self-supervised learning for continuous domain adaptation without catastrophic forgetting
Image and Vision Computing ( IF 4.2 ) Pub Date : 2020-12-05 , DOI: 10.1016/j.imavis.2020.104079
Mark Schutera , Frank M. Hafner , Jochen Abhau , Veit Hagenmeyer , Ralf Mikut , Markus Reischl

Perception systems rely, to a large extent, on neural networks. Commonly, neural networks are trained on a finite amount of data, under the usual assumption that an appropriate training dataset is available which covers all relevant domains. This abstract follows the example of changing lighting conditions in autonomous driving scenarios. In real-world datasets, a single source domain, such as day images, often dominates the dataset composition. This poses the risk of overfitting to specific source-domain features within the dataset and implicitly breaches the assumption of full or relevant domain coverage. When the model is applied to data outside the source domain, performance drops, posing a significant challenge for data-driven methods. A common approach is supervised retraining of the model on additional data. Supervised training requires the laborious acquisition and labeling of an adequate amount of data and often becomes infeasible when data augmentation strategies are not applicable. Furthermore, retraining on additional data often causes a performance drop in the source domain, so-called catastrophic forgetting. In this paper, we present a self-supervised continuous domain adaptation method. A model trained with supervision on the source domain (day) is used to generate pseudo labels for the samples of an adjacent intermediate domain (dawn). The pseudo labels and samples enable fine-tuning of the existing model, which is thereby adapted to the intermediate domain. By iteratively repeating these steps, the model reaches the target domain (night). On the MNIST dataset and its modification, the continuous rotatedMNIST dataset, the novel method demonstrates a domain adaptation of 86.2% and catastrophic forgetting of only 1.6% in the target domain. The work contributes a hyperparameter ablation study, an analysis, and a discussion of the new learning strategy.
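The iterative pseudo-labelling loop described above can be sketched on a toy problem. The sketch below is an illustration only, not the paper's implementation: it replaces the neural network with a nearest-centroid classifier, emulates the continuously shifting domain (cf. rotatedMNIST) by rotating a two-class Gaussian dataset, and adds a replay of labelled source samples as one plausible way to curb forgetting; all names, angles, and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(angle, n=200):
    # Two Gaussian classes on the x-axis, rotated by `angle` (radians)
    # to emulate a continuously shifting domain (cf. rotatedMNIST).
    X = np.vstack([rng.normal([-2.0, 0.0], 0.4, (n, 2)),
                   rng.normal([+2.0, 0.0], 0.4, (n, 2))])
    y = np.array([0] * n + [1] * n)
    c, s = np.cos(angle), np.sin(angle)
    return X @ np.array([[c, -s], [s, c]]).T, y

def fit_centroids(X, y):
    # Stand-in for supervised training / fine-tuning of a model.
    return np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def predict(centroids, X):
    # Nearest-centroid classification; stands in for model inference.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# 1. Supervised training on the source domain ("day", angle 0).
Xs, ys = make_domain(0.0)
centroids = fit_centroids(Xs, ys)

# 2. Iterative self-supervised adaptation through adjacent domains,
#    10 degrees at a time, up to the target domain at 90 degrees.
for step in range(1, 10):
    Xt, _ = make_domain(step * np.pi / 18)
    pseudo = predict(centroids, Xt)          # pseudo labels from current model
    # Fine-tune on pseudo-labelled target samples; replaying the labelled
    # source samples is an illustrative measure against forgetting.
    centroids = fit_centroids(np.vstack([Xt, Xs]), np.concatenate([pseudo, ys]))

# 3. Evaluate in the target domain ("night") and the source domain.
Xt, yt = make_domain(np.pi / 2)
target_acc = (predict(centroids, Xt) == yt).mean()
source_acc = (predict(centroids, Xs) == ys).mean()
print(f"target accuracy: {target_acc:.2f}, source accuracy: {source_acc:.2f}")
```

Because each intermediate domain differs only slightly from the previous one, the pseudo labels remain reliable at every step, so the toy model reaches the 90-degree target domain with high accuracy while retaining source-domain performance; skipping directly from 0 to 90 degrees would break the pseudo labels.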



Updated: 2020-12-25