Learning deep discriminative embeddings via joint rescaled features and log-probability centers
Pattern Recognition (IF 8), Pub Date: 2021-01-27, DOI: 10.1016/j.patcog.2021.107852
Huayue Cai, Xiang Zhang, Long Lan, Guohua Dong, Chuanfu Xu, Xinwang Liu, Zhigang Luo

Softmax-based loss functions have recently surged to advance image classification and face verification. Most efforts boost the discrimination of the softmax loss by introducing angular margins in varying ways, but few analyze where that discrimination truly comes from, or consider relieving overfitting as a means of enhancing the softmax loss. In this paper, we first examine this mainstream of softmax-based loss functions in theory and recognize the importance of easing overfitting in the softmax loss. Building on this analysis, we aim to bring the softmax loss up to a competitive level with current well-behaved loss functions. We do this in two ways: (1) regularizing the softmax to relieve overfitting by learning log-probability centers, and (2) rescaling the deep embeddings of the softmax by a constant scale to further enhance inter-class separability in Euclidean space. We call the resulting loss function the rLogCenter loss for short. Simple and interpretable as it is, our loss guides CNNs to performance gains in experiments on both image classification and face verification.
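As a rough illustration of the two ingredients above, the following PyTorch sketch combines (1) a center-style regularizer on log-probability vectors and (2) constant-norm rescaling of embeddings. It is a reading of the abstract under stated assumptions, not the authors' reference implementation: the class name RLogCenterLoss, the values scale=16.0 and lam=0.01, and the squared-distance form of the center term are all hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RLogCenterLoss(nn.Module):
    """Sketch of the rLogCenter idea from the abstract (not official code):
    softmax cross-entropy on embeddings rescaled to a constant norm, plus a
    regularizer pulling each sample's log-probability vector toward a
    learnable per-class center."""

    def __init__(self, num_classes, feat_dim, scale=16.0, lam=0.01):
        super().__init__()
        self.scale = scale  # constant rescaling factor (assumed hyperparameter)
        self.lam = lam      # weight of the log-probability center term (assumed)
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        # learnable per-class centers in log-probability space
        self.centers = nn.Parameter(torch.zeros(num_classes, num_classes))

    def forward(self, feats, labels):
        # (1) rescale deep embeddings to a constant norm in Euclidean space
        feats = self.scale * F.normalize(feats, dim=1)
        logits = F.linear(feats, self.weight)
        log_probs = F.log_softmax(logits, dim=1)
        ce = F.nll_loss(log_probs, labels)
        # (2) pull each log-probability vector toward its class center
        center_term = (log_probs - self.centers[labels]).pow(2).sum(dim=1).mean()
        return ce + self.lam * center_term

# usage sketch: feats would be a CNN's penultimate-layer embeddings
loss_fn = RLogCenterLoss(num_classes=10, feat_dim=128)
feats = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
loss = loss_fn(feats, labels)
loss.backward()
```

In this reading, the centers are ordinary parameters updated jointly with the network by the optimizer, so the regularizer both shapes the embeddings and adapts the centers during training.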



Updated: 2021-02-15