Fractional-order global optimal backpropagation machine trained by an improved fractional-order steepest descent method
Frontiers of Information Technology & Electronic Engineering (IF 2.7). Pub Date: 2020-07-03. DOI: 10.1631/fitee.1900593
Yi-fei Pu, Jian Wang

We introduce the fractional-order global optimal backpropagation machine, which is trained by an improved fractional-order steepest descent method (FSDM). This is a fractional-order backpropagation neural network (FBPNN), a state-of-the-art fractional-order branch of the family of backpropagation neural networks (BPNNs), unlike the majority of previous classic first-order BPNNs, which are trained by the traditional first-order steepest descent method. The reverse incremental search of the proposed FBPNN proceeds in the negative directions of the approximate fractional-order partial derivatives of the square error. First, the theoretical concept of an FBPNN trained by an improved FSDM is described mathematically. Then, the mathematical proof of fractional-order global optimal convergence, an assumption about the structure, and the fractional-order multi-scale global optimization of the FBPNN are analyzed in detail. Finally, we perform three types of experiments to compare the performance of an FBPNN with that of a classic first-order BPNN: example function approximation, fractional-order multi-scale global optimization, and a comparison of global search and error-fitting abilities on real data. The stronger ability of an FBPNN to search for the global optimal solution is the major advantage that makes it superior to a classic first-order BPNN.
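The abstract does not give the exact update rule of the improved FSDM, so the following Python sketch only illustrates the general idea: it applies a commonly used Caputo-style first-order approximation of a fractional-order partial derivative to a toy square-error objective and steps in its negative direction. The function names (frac_partial, descend), the use of the previous iterate as the lower terminal, and the choice alpha = 0.9 are assumptions for illustration and are not taken from the paper.

import math

def frac_partial(grad, w, w_ref, alpha, eps=1e-8):
    # Commonly used first-order (Caputo-style) approximation of a
    # fractional-order partial derivative of order alpha in (0, 2):
    # the ordinary gradient scaled by |w - w_ref|^(1 - alpha) / Gamma(2 - alpha).
    # With alpha = 1 this reduces to the ordinary first-order gradient.
    return grad * (abs(w - w_ref) + eps) ** (1.0 - alpha) / math.gamma(2.0 - alpha)

def descend(alpha=0.9, lr=0.05, steps=200):
    # Minimize the toy square error E(w) = (w - 3)^2 by stepping in the
    # negative direction of the approximate fractional-order derivative,
    # mirroring the reverse incremental search described in the abstract.
    w = w_ref = -5.0                 # start far from the optimum w* = 3
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)       # ordinary partial derivative dE/dw
        w_new = w - lr * frac_partial(grad, w, w_ref, alpha)
        w_ref, w = w, w_new          # previous iterate serves as the lower terminal
    return w

print("alpha = 1.0 (classic first-order):", descend(alpha=1.0))
print("alpha = 0.9 (fractional-order):   ", descend(alpha=0.9))

With alpha = 1 the update collapses to the classic first-order steepest descent step, which is the baseline against which the FBPNN is compared in the paper's experiments.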




Updated: 2020-07-03