Convergence of Batch Gradient Method Based on the Entropy Error Function for Feedforward Neural Networks
Neural Processing Letters (IF 3.1) Pub Date: 2020-10-23, DOI: 10.1007/s11063-020-10374-w
Yan Xiong, Xin Tong

The gradient method is widely used for training feedforward neural networks, and most studies to date have focused on the square error function. In this paper, a novel entropy error function is proposed for feedforward neural network training. Weak and strong convergence of the gradient method based on the entropy error function with batch input training patterns is rigorously proved. Numerical examples are given at the end of the paper to verify the effectiveness and correctness of the analysis. Compared with the square error function, our method provides both faster learning speed and better generalization on the given test problems.
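
The abstract does not give the exact form of the proposed entropy error function, so the sketch below uses the standard cross-entropy loss as a stand-in; the network size, the names train_batch_gradient and sigmoid, and the XOR toy data are illustrative assumptions, not the paper's experiments. It illustrates the batch setting the convergence results address: the weights are updated once per full pass over all training patterns, and the entropy-style loss cancels the sigmoid derivative at the output, one commonly cited reason such losses can train faster than the square error.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_batch_gradient(X, d, hidden=8, eta=1.0, epochs=5000):
    # Batch mode: one weight update per full pass over all training
    # patterns, matching the batch-input setting of the paper.
    n, p = X.shape
    V = rng.normal(scale=0.5, size=(p, hidden))   # input-to-hidden weights
    w = rng.normal(scale=0.5, size=hidden)        # hidden-to-output weights
    for _ in range(epochs):
        H = sigmoid(X @ V)            # hidden activations, shape (n, hidden)
        y = sigmoid(H @ w)            # network outputs, shape (n,)
        # Cross-entropy error E = -(1/n) * sum(d*log(y) + (1-d)*log(1-y)).
        # Its gradient w.r.t. the output pre-activation is just (y - d),
        # with no vanishing sigmoid'(z) factor as in the square error.
        delta_out = (y - d) / n
        grad_w = H.T @ delta_out
        delta_hid = np.outer(delta_out, w) * H * (1.0 - H)
        grad_V = X.T @ delta_hid
        w -= eta * grad_w             # one batch update per epoch
        V -= eta * grad_V
    return V, w

# Toy usage: learn XOR (a bias column is appended to the two inputs).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
d = np.array([0.0, 1.0, 1.0, 0.0])
V, w = train_batch_gradient(X, d)
print(np.round(sigmoid(sigmoid(X @ V) @ w), 2))  # should approach [0, 1, 1, 0]

Accumulating the error over all patterns before each update, as above, is exactly the regime in which the weak and strong convergence results are stated.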



Updated: 2020-10-30