Uniform Priors for Data-Efficient Transfer
arXiv - CS - Machine Learning Pub Date : 2020-06-30 , DOI: arxiv-2006.16524 Samarth Sinha, Karsten Roth, Anirudh Goyal, Marzyeh Ghassemi, Hugo Larochelle, Animesh Garg
Deep Neural Networks have shown great promise on a variety of downstream
applications, but their ability to adapt and generalize to new data and tasks
remains a challenge. Yet the ability to perform few- or zero-shot
adaptation to novel tasks is important for the scalability and deployment of
machine learning models. It is therefore crucial to understand what makes for
good, transferable features in deep networks that best allow for such
adaptation. In this paper, we shed light on this by showing that features that
are most transferable have high uniformity in the embedding space and propose a
uniformity regularization scheme that encourages better transfer and feature
reuse. We evaluate the regularization on its ability to facilitate adaptation
to unseen tasks and data, for which we conduct a thorough experimental study
covering four relevant, and distinct domains: few-shot Meta-Learning, Deep
Metric Learning, Zero-Shot Domain Adaptation, as well as Out-of-Distribution
classification. Across all experiments, we show that uniformity regularization
consistently offers benefits over baseline methods and is able to achieve
state-of-the-art performance in Deep Metric Learning and Meta-Learning.
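The abstract does not spell out the exact form of the uniformity regularizer, so the following is only a minimal sketch of one common way to measure uniformity of embeddings on the unit hypersphere: the average pairwise Gaussian potential (as popularized by Wang & Isola, 2020). The function name `uniformity_loss` and the temperature `t` are illustrative choices, not taken from the paper; lower values indicate features spread more uniformly over the sphere.

```python
import numpy as np

def uniformity_loss(embeddings, t=2.0):
    """Average pairwise Gaussian potential of L2-normalized embeddings.

    A hypothetical sketch of a uniformity objective: lower values mean
    the embeddings are spread more uniformly on the unit hypersphere.
    """
    # Project embeddings onto the unit hypersphere.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Squared pairwise Euclidean distances between all embeddings.
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    # Exclude self-distances (the diagonal) from the average.
    n = z.shape[0]
    mask = ~np.eye(n, dtype=bool)
    return np.log(np.mean(np.exp(-t * sq_dists[mask])))
```

Added to a task loss with a small weight, a term like this penalizes embeddings that collapse to a few points and pushes them to cover the sphere, which is the kind of pressure toward uniformity the paper argues aids transfer.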
Updated: 2020-10-14