Political Optimizer Based Feedforward Neural Network for Classification and Function Approximation
Neural Processing Letters (IF 2.6), Pub Date: 2021-01-02, DOI: 10.1007/s11063-020-10406-5
Qamar Askari, Irfan Younas

Political optimizer (PO) is a recently proposed human-behavior-inspired meta-heuristic that has shown tremendous performance on complex multimodal functions as well as engineering optimization problems. The good convergence speed and well-balanced exploratory and exploitative behavior of PO motivated us to employ it for training feedforward neural networks (FNNs). The FNN-training problem is formulated as an optimization problem in which the objective is to minimize the mean squared error (MSE) or cross-entropy (CE). The weights and biases of the FNN are arranged in a vector called a candidate solution. The performance of the proposed trainer is evaluated on 5 classification datasets and 5 function-approximation datasets that have already been used in the literature. In recent years, the grey wolf optimizer, moth flame optimization, multi-verse optimizer, sine-cosine algorithm, whale optimization algorithm, ant lion optimizer, and salp swarm algorithm have been applied successfully to neural network training. In this paper, we compare the performance of PO with these algorithms and show that PO either outperforms them or performs equivalently. The MSE, CE, training-set accuracy, and test-set accuracy are used as metrics for the comparative analysis, and the non-parametric Wilcoxon rank-sum test is used to assess the statistical significance of the results. Based on this performance, we highly recommend PO for training artificial neural networks to solve classification and regression problems.
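
The abstract describes encoding the FNN's weights and biases as a single flat candidate-solution vector whose fitness is the MSE (or CE) on the training data. The following is a minimal Python sketch of that encoding for a one-hidden-layer network; the layer sizes, data, and function names are illustrative assumptions, and the PO search loop itself is not shown.

```python
# Minimal sketch (not the authors' code): flatten an FNN's weights and biases
# into one candidate vector and score it by MSE, as described in the abstract.
import numpy as np

def unpack(candidate, n_in, n_hidden, n_out):
    """Split a flat candidate vector into weight matrices and bias vectors."""
    i = 0
    W1 = candidate[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = candidate[i:i + n_hidden]; i += n_hidden
    W2 = candidate[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = candidate[i:i + n_out]
    return W1, b1, W2, b2

def mse_fitness(candidate, X, y, n_in, n_hidden, n_out):
    """Forward pass of a one-hidden-layer FNN, then mean squared error."""
    W1, b1, W2, b2 = unpack(candidate, n_in, n_hidden, n_out)
    hidden = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # sigmoid hidden layer
    output = hidden @ W2 + b2                        # linear output layer
    return np.mean((output - y) ** 2)

# Example: evaluate a random candidate on toy data; a meta-heuristic such as PO
# would minimize this fitness over the candidate vector.
n_in, n_hidden, n_out = 3, 5, 1
dim = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, n_in)), rng.normal(size=(20, n_out))
print(mse_fitness(rng.uniform(-1, 1, dim), X, y, n_in, n_hidden, n_out))
```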




Updated: 2021-01-02