Surrogate Network-based Sparseness Hyper-parameter Optimization for Deep Expression Recognition
Pattern Recognition (IF 7.5), Pub Date: 2021-03-01, DOI: 10.1016/j.patcog.2020.107701
Weicheng Xie , Wenting Chen , Linlin Shen , Jinming Duan , Meng Yang

Abstract: For facial expression recognition, sparseness constraints on the features or weights can improve the generalization ability of a deep network. However, optimizing the hyper-parameters that fuse different sparseness strategies demands heavy computation when traditional gradient-based algorithms are used. In this work, an iterative framework with a surrogate network is proposed for optimizing the hyper-parameters used to fuse different sparseness strategies. In each iteration, a network with significantly smaller model complexity is fitted to the original large network via four Euclidean losses, and the hyper-parameters are optimized on it with heuristic optimizers. Since the surrogate network uses the same deep metrics and embeds the same hyper-parameters as the original network, the optimized hyper-parameters are then used to train the original deep network in the next iteration. While the behavior of the proposed algorithm is validated with a tiny model, i.e., LeNet on the FER2013 database, our approach achieves competitive performance on six publicly available expression datasets: FER2013, CK+, Oulu-CASIA, MMI, AFEW and AffectNet.
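The outer loop described in the abstract (fit a cheap surrogate to the expensive network, search the sparseness hyper-parameters on the surrogate, then feed them back) can be sketched as below. This is a minimal toy illustration, not the authors' implementation: the quadratic `surrogate_loss`, the random-search optimizer, and the two hyper-parameters `lmbda1`/`lmbda2` are all hypothetical stand-ins (the paper fits the surrogate with four Euclidean losses and uses heuristic optimizers whose specifics are not reproduced here).

```python
import random

def surrogate_loss(lmbda1, lmbda2, target):
    # Cheap proxy for the surrogate network's validation loss as a function
    # of two sparseness hyper-parameters; a quadratic bowl around the
    # (unknown) best setting stands in for the fitted surrogate.
    t1, t2 = target
    return (lmbda1 - t1) ** 2 + (lmbda2 - t2) ** 2

def heuristic_search(loss_fn, n_samples=200, seed=0):
    # Random search as a simple heuristic optimizer over the unit square of
    # hyper-parameter values; the paper's choice of heuristic optimizer may differ.
    rng = random.Random(seed)
    best_val, best_hp = None, None
    for _ in range(n_samples):
        cand = (rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0))
        val = loss_fn(*cand)
        if best_val is None or val < best_val:
            best_val, best_hp = val, cand
    return best_hp

def optimize(n_iters=3):
    # Pretend optimum of the surrogate's loss surface; in the real method this
    # would shift as the original network is retrained each iteration.
    target = (0.3, 0.7)
    hp = (0.5, 0.5)
    for _ in range(n_iters):
        # 1) Fit the small surrogate to the large network (omitted here;
        #    the paper uses four Euclidean losses for this step).
        # 2) Optimize the sparseness hyper-parameters on the cheap surrogate.
        hp = heuristic_search(lambda a, b: surrogate_loss(a, b, target))
        # 3) The optimized hyper-parameters would now retrain the original
        #    deep network before the next iteration.
    return hp

print(optimize())
```

The point of the sketch is the cost structure: each heuristic-search evaluation touches only the surrogate, so the expensive original network is trained once per outer iteration rather than once per candidate hyper-parameter setting.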
