High-dimensional neural feature design for layer-wise reduction of training cost
EURASIP Journal on Advances in Signal Processing (IF 1.9), Pub Date: 2020-09-10, DOI: 10.1186/s13634-020-00695-2
Alireza M. Javid, Arun Venkitaraman, Mikael Skoglund, Saikat Chatterjee

We design a rectified linear unit (ReLU)-based multilayer neural network by mapping the feature vectors to a higher-dimensional space in every layer. The weight matrices in every layer are designed to ensure a reduction of the training cost as the number of layers increases. Linear projection to the target in the higher-dimensional space leads to a lower training cost when a convex cost is minimized. An ℓ2-norm convex constraint is used in the minimization to reduce the generalization error and avoid overfitting. The regularization hyperparameters of the network are derived analytically to guarantee a monotonic decrease of the training cost, thereby eliminating the need for cross-validation to find the regularization hyperparameter in each layer. We show that the proposed architecture is norm-preserving and provides an invertible feature vector, and it can therefore be used to reduce the training cost of any other learning method that employs linear projection to estimate the target.
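The following is a minimal NumPy sketch (not the authors' code) of the layer-wise construction the abstract describes. The key identity is that ReLU(a) − ReLU(−a) = a, so a ReLU block fed with [a; −a] keeps a linearly recoverable; the same identity gives ReLU(a)² + ReLU(−a)² = a² elementwise, which is the norm-preserving, invertible property claimed in the abstract. Each layer therefore stacks the current target estimate and its negation with extra random rows that raise the dimension, and a convex regularized least-squares projection to the target is solved per layer. The regularization weight lam, the random-row sizing, and the helper names (ridge, relu) are illustrative assumptions; in the paper the regularization is derived analytically rather than fixed as it is here.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def ridge(z, t, lam):
    # Optimal O for min_O ||t - O z||_F^2 + lam ||O||_F^2:
    # the regularized linear projection to the target at a given layer.
    d = z.shape[0]
    return t @ z.T @ np.linalg.inv(z @ z.T + lam * np.eye(d))

# Toy regression data; columns are samples.
n, d_in, q = 500, 10, 3                           # q: target dimension
x = rng.standard_normal((d_in, n))
t = np.tanh(rng.standard_normal((q, d_in)) @ x)   # an arbitrary nonlinear target

lam = 1e-1            # placeholder; the paper derives this hyperparameter analytically
z = x
for layer in range(5):
    o = ridge(z, t, lam)                          # convex fit per layer
    cost = np.linalg.norm(t - o @ z) ** 2
    print(f"layer {layer}: feature dim {z.shape[0]:3d}, training cost {cost:.4f}")

    # Next, higher-dimensional feature vector. Stacking [t_hat; -t_hat] before
    # the ReLU keeps the current estimate linearly recoverable (via [I, -I, 0]),
    # so the next layer's best linear projection can never do worse than this
    # layer's. The random rows raise the dimension and add new nonlinear features.
    t_hat = o @ z
    r = rng.standard_normal((2 * z.shape[0], z.shape[0])) / np.sqrt(z.shape[0])
    z = relu(np.vstack([t_hat, -t_hat, r @ z]))
```

With a fixed lam, as here, the monotone decrease holds only approximately; the analytic per-layer regularization is what guarantees it in the paper.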



