Convergence of a modified gradient-based learning algorithm with penalty for single-hidden-layer feed-forward networks
Neural Computing and Applications (IF 4.5) Pub Date: 2018-09-29, DOI: 10.1007/s00521-018-3748-y
Jian Wang, Bingjie Zhang, Zhaoyang Sang, Yusong Liu, Shujun Wu, Quan Miao

Abstract

Building on the upper-layer-solution-aware (USA) algorithm, this paper studies a new algorithm for training single-hidden-layer feed-forward neural networks in which a penalty term is added to the empirical risk; we call it USA with penalty. Both theoretical analysis and numerical results show that the penalty controls the magnitude of the network weights. A deterministic convergence analysis of the new algorithm is given: the empirical risk with the penalty term decreases monotonically during training, and the weak and strong convergence results show that the gradient of the total error function with respect to the weights tends to zero, and that the weight sequence converges to a fixed point, as the number of iterations tends to infinity. Numerical experiments have been carried out and verify the theoretical results.
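The training scheme the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the L2 penalty form (λ‖W‖²/2), the sigmoid hidden layer, the toy problem sizes, and the ridge-regularized closed-form solve for the upper (output) layer at every step are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch of a USA-style scheme with penalty (assumptions noted above).
rng = np.random.default_rng(0)

N, d, H = 200, 3, 10                      # samples, inputs, hidden units
X = rng.normal(size=(N, d))
y = np.sin(X.sum(axis=1, keepdims=True))  # toy regression target

W = rng.normal(scale=0.1, size=(d, H))    # hidden-layer weights
lam, eta = 1e-3, 0.05                     # penalty coefficient, learning rate


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


losses = []
for step in range(500):
    A = sigmoid(X @ W)                    # hidden activations, N x H
    # "Upper-layer solution": output weights v solved in closed form
    # (ridge-regularized least squares given the current hidden layer).
    v = np.linalg.solve(A.T @ A + lam * np.eye(H), A.T @ y)
    r = A @ v - y                         # residual, N x 1
    # Penalized empirical risk: mean squared error plus L2 weight penalty.
    losses.append(0.5 * np.mean(r ** 2) + 0.5 * lam * np.sum(W ** 2))
    # Gradient of the penalized risk w.r.t. the hidden weights W.
    G = X.T @ ((r @ v.T) * A * (1.0 - A)) / N + lam * W
    W -= eta * G
```

The penalty term `lam * W` in the gradient is what keeps the weight magnitudes bounded during training, mirroring the boundedness property the abstract claims for the penalized algorithm.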




Updated: 2020-03-30