Generalization Bottleneck in Deep Metric Learning
Information Sciences Pub Date : 2021-09-14 , DOI: 10.1016/j.ins.2021.09.023
Zhanxuan Hu 1, 2, 3 , Danyang Wu 4 , Feiping Nie 4 , Rong Wang 4
Deep metric learning aims to learn a non-linear function that maps raw data to a discriminative lower-dimensional embedding space, where semantically similar samples are assigned higher similarity than dissimilar ones. Most existing approaches process each sample in two steps: a fixed backbone first maps the raw data to a higher-dimensional feature space, and a linear layer then maps that feature space to a lower-dimensional embedding space. This paradigm, however, inevitably leads to a Generalization Bottleneck (GB) problem. Specifically, GB refers to the limitation that, at test time, the generalization capacity of the lower-dimensional embedding space is inferior to that of the higher-dimensional feature space. To mitigate the capacity gap between the feature space and the embedding space, we propose a fully-learnable module, dubbed Relational Knowledge Preserving (RKP), that improves the generalization capacity of the lower-dimensional embedding space by transferring the mutual similarity of instances. The proposed RKP module can be integrated into general deep metric learning approaches, and experiments conducted on different benchmarks show that it significantly improves the performance of the original models.
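The abstract does not spell out the RKP objective, but "transferring the mutual similarity of instances" suggests a relational-transfer loss that penalizes discrepancies between the pairwise similarity structure of the higher-dimensional feature space and that of the lower-dimensional embedding space. The sketch below is a minimal illustration of that idea under this assumption; the function names (`rkp_loss`, `cosine_sim_matrix`) are hypothetical and not from the paper, and the actual RKP module is fully learnable rather than a fixed loss.

```python
import math


def cosine_sim_matrix(X):
    """Pairwise cosine similarities between the rows of X (list of vectors)."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v)) or 1.0

    n = len(X)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            dot = sum(a * b for a, b in zip(X[i], X[j]))
            S[i][j] = dot / (norm(X[i]) * norm(X[j]))
    return S


def rkp_loss(features, embeddings):
    """Illustrative relational-transfer loss (assumption, not the paper's exact
    formulation): mean squared gap between the similarity structure of the
    higher-dimensional feature space and the lower-dimensional embedding space."""
    S_feat = cosine_sim_matrix(features)   # relational knowledge of the feature space
    S_emb = cosine_sim_matrix(embeddings)  # relational structure of the embedding space
    n = len(features)
    return sum(
        (S_feat[i][j] - S_emb[i][j]) ** 2
        for i in range(n)
        for j in range(n)
    ) / (n * n)
```

A loss of zero means the embedding space reproduces the feature space's pairwise similarity structure exactly; in training, minimizing such a term alongside the metric-learning loss would push the linear head to preserve relational knowledge rather than discard it.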



