Unsupervised Multi-Target Domain Adaptation Through Knowledge Distillation
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2020-07-14 , DOI: arxiv-2007.07077
Le Thanh Nguyen-Meidine, Atif Belal, Madhu Kiran, Jose Dolz, Louis-Antoine Blais-Morin, Eric Granger

Unsupervised domain adaptation (UDA) seeks to alleviate the problem of domain shift between the distribution of unlabeled data from the target domain and that of labeled data from the source domain. While the single-target UDA scenario is well studied in the literature, Multi-Target Domain Adaptation (MTDA) remains largely unexplored despite its practical importance, e.g., in multi-camera video-surveillance applications. The MTDA problem can be addressed by adapting one specialized model per target domain, although this solution is too costly in many real-world applications. Blending multiple targets into one for MTDA has also been proposed, yet this solution may reduce model specificity and accuracy. In this paper, we propose a novel unsupervised MTDA approach to train a CNN that generalizes well across multiple target domains. Our Multi-Teacher MTDA (MT-MTDA) method relies on multi-teacher knowledge distillation (KD) to iteratively distill target-domain knowledge from multiple teachers to a common student. The KD process is performed in a progressive manner, where the student is trained by each teacher on how to perform UDA for a specific target, instead of directly learning domain-adapted features. Finally, instead of combining the knowledge from all teachers at once, MT-MTDA alternates between the teachers that distill knowledge, thereby preserving the specificity of each target (teacher) when transferring knowledge to the student. MT-MTDA is compared against state-of-the-art methods on several challenging UDA benchmarks, and empirical results show that our proposed model can provide a considerably higher level of accuracy across multiple target domains. Our code is available at: https://github.com/LIVIAETS/MT-MTDA
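The two ideas the abstract emphasizes — soft-target distillation from a teacher to a student, and alternating between target-specific teachers rather than blending them — can be illustrated with a minimal sketch. This is not the authors' implementation (which is at the GitHub link above); the function names and the round-robin schedule here are illustrative assumptions, and the temperature-scaled KL loss is the standard KD formulation.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target distillation loss: KL(teacher || student),
    with both distributions softened by temperature T."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's soft predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

def alternating_schedule(teachers, num_steps):
    """Round-robin over target-specific teachers: at each step the
    student distills from exactly one teacher, so no blending of
    target domains occurs and per-target specificity is preserved."""
    return [teachers[step % len(teachers)] for step in range(num_steps)]
```

For identical student and teacher logits the KD loss is zero, and `alternating_schedule(["target_A", "target_B"], 4)` cycles `A, B, A, B` — each target domain's teacher gets its own distillation steps.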

Updated: 2020-11-11