More Trainable Inception-ResNet for Face Recognition
Neurocomputing (IF 6), Pub Date: 2020-10-01, DOI: 10.1016/j.neucom.2020.05.022
Shuai Peng , Hongbo Huang , Weijun Chen , Liang Zhang , Weiwei Fang

Abstract In recent years, applications of face recognition have increased significantly. Despite the successful application of deep convolutional neural networks (DCNNs), training such networks remains a challenging task that requires considerable experience and careful tuning. Based on the Inception-ResNet network, we propose a novel method that mitigates the difficulty of training such deep convolutional neural networks and simultaneously improves their performance. The residual scaling factor used in the Inception-ResNet module is a manually set fixed value. We believe that making this value a trainable parameter and initializing it to a small value can improve the stability of model training. We further adopt a small trick: alternating the ReLU activation function with Leaky ReLU and PReLU. The proposed model slightly increases the number of training parameters but improves training stability and performance significantly. Extensive experiments are conducted on the VGGFace2, MS1MV2, IJB-B and LFW datasets. The results show that the proposed trainable residual scaling factor (TRSF) and PReLU can notably improve accuracy while stabilizing the training process.
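The core idea of TRSF can be illustrated with a short sketch: the residual branch of an Inception-ResNet-style block is multiplied by a learnable scalar instead of the fixed constant used in the original architecture, and PReLU replaces ReLU. The PyTorch module below is a minimal illustration based only on the abstract, not the authors' implementation; the layer composition of the residual branch and the initial scale value of 0.1 are assumptions.

```python
import torch
import torch.nn as nn

class TrainableScaledResidualBlock(nn.Module):
    """Illustrative residual block with a trainable residual scaling factor (TRSF).

    The original Inception-ResNet scales the residual branch by a fixed,
    manually chosen constant; here that constant is a learnable parameter
    initialized to a small value, and PReLU is used instead of ReLU.
    """

    def __init__(self, channels: int, init_scale: float = 0.1):
        super().__init__()
        # Stand-in for the Inception-style residual branch (details assumed).
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.PReLU(channels),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Trainable residual scaling factor, initialized to a small value.
        self.scale = nn.Parameter(torch.tensor(init_scale))
        self.act = nn.PReLU(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # identity + (learned scale) * residual branch
        return self.act(x + self.scale * self.branch(x))


if __name__ == "__main__":
    block = TrainableScaledResidualBlock(channels=64)
    out = block(torch.randn(2, 64, 56, 56))
    print(out.shape)           # torch.Size([2, 64, 56, 56])
    print(block.scale.item())  # starts near 0.1, adapted during training
```

Note that each block adds only a single scalar parameter (plus the PReLU slopes), which is consistent with the abstract's claim that the model only slightly increases the number of trainable parameters.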

Updated: 2020-10-01