A weight initialization based on the linear product structure for neural networks
Applied Mathematics and Computation (IF 3.5), Pub Date: 2021-10-19, DOI: 10.1016/j.amc.2021.126722
Qipin Chen, Wenrui Hao, Juncai He

Weight initialization plays an important role in training neural networks and affects a wide range of deep learning applications. Various weight initialization strategies have been developed for different activation functions and network architectures. These algorithms are typically based on minimizing the variance of the parameters between layers and may still fail when networks are deep, e.g., due to the dying ReLU problem. To address this challenge, we study neural networks from a nonlinear computation point of view and propose a novel weight initialization strategy based on the linear product structure (LPS) of neural networks. The proposed strategy is derived from polynomial approximations of activation functions, using theories of numerical algebraic geometry to guarantee finding all local minima. We also provide a theoretical analysis showing that the LPS initialization has a lower probability of dying ReLU than other existing initialization strategies. Finally, we test the LPS initialization algorithm on both fully connected and convolutional neural networks to demonstrate its feasibility, efficiency, and robustness on public datasets.
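The abstract does not spell out the LPS algorithm itself, but the failure mode it targets is easy to demonstrate empirically. The sketch below is a minimal, hypothetical NumPy illustration: it initializes a deep ReLU network with the standard variance-based He scheme (one of the "existing initialization strategies" the abstract alludes to) and measures how many hidden units are dead, i.e., output zero for every input, already at initialization. All function names and parameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def he_init(fan_in, fan_out, rng):
    # Standard He (variance-scaling) initialization for ReLU layers:
    # weights ~ N(0, 2 / fan_in); biases are omitted (zero) for simplicity.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def dead_relu_fraction(depth, width, n_samples=1000, seed=0):
    # Forward random inputs through a deep ReLU MLP and report the
    # fraction of units in the last hidden layer that are inactive
    # (output exactly zero) for every input, i.e., "dead" at init.
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_samples, width))
    for _ in range(depth):
        w = he_init(width, width, rng)
        x = np.maximum(x @ w, 0.0)  # ReLU activation
    return np.mean((x == 0.0).all(axis=0))

if __name__ == "__main__":
    # The dead fraction tends to grow with depth, which is the
    # phenomenon the LPS initialization is designed to mitigate.
    for depth in (5, 20, 50):
        print(f"depth={depth}: dead fraction={dead_relu_fraction(depth, width=64):.3f}")
```

A drop-in replacement of `he_init` with the paper's LPS scheme would be the natural way to reproduce its comparison, but the algorithm itself is not given in the abstract and is therefore not sketched here.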



Updated: 2021-10-19