Performance comparison of wavelet neural network and adaptive neuro-fuzzy inference system with small data sets.
Journal of Molecular Graphics and Modelling (IF 2.9) Pub Date: 2020-07-24, DOI: 10.1016/j.jmgm.2020.107698
Reza Tabaraki, Mina Khodabakhshi

In this work, the performance of wavelet neural network (WNN) and adaptive neuro-fuzzy inference system (ANFIS) models was compared on small data sets using several criteria: the second-order corrected Akaike information criterion (AICc), Bayesian information criterion (BIC), root mean squared error (RMSE), mean absolute relative error (MARE), coefficient of determination (R²), external Q² function (Q²F3), and concordance correlation coefficient (CCC). Over-fitting was assessed as an additional criterion. Ten data sets were selected from the literature, and each was divided into training, test, and validation sets. Network parameters were optimized for the WNN and ANFIS models, and the architecture with the lowest error was selected for each data set. A careful survey of the number of permitted adjustable parameters (NPAP) and the total number of adjustable parameters (TNAP) in the WNN and ANFIS models showed that 60% of the ANFIS models and 30% of the WNN models were over-fitted. As a rule of thumb, to avoid over-fitting it is suggested that the ratio of the number of observations in the training set to the number of input neurons should be greater than 10 for WNN and 20 for ANFIS. The smaller ratio required by WNN indicates its greater flexibility relative to ANFIS, which stems from differences in the structure and connections of the two networks.
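The comparison criteria listed above are all standard closed-form statistics. As a minimal sketch (not the authors' code), the following hypothetical helper computes RMSE, MARE, R², AICc, BIC, and CCC from observed and predicted values, given the number of adjustable parameters `k`; the AICc small-sample correction assumes n > k + 1.

```python
import math

def model_criteria(y, yhat, k):
    """Illustrative small-data comparison criteria (assumed formulas, not the paper's code).

    y, yhat -- observed and predicted values (equal-length sequences)
    k       -- number of adjustable model parameters
    """
    n = len(y)
    resid = [a - b for a, b in zip(y, yhat)]
    rss = sum(r * r for r in resid)                      # residual sum of squares
    rmse = math.sqrt(rss / n)
    mare = sum(abs(r) / abs(a) for r, a in zip(resid, y)) / n
    ybar = sum(y) / n
    sst = sum((a - ybar) ** 2 for a in y)                # total sum of squares
    r2 = 1.0 - rss / sst
    # AICc adds a second-order correction to AIC, important when n is small.
    aicc = n * math.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)
    bic = n * math.log(rss / n) + k * math.log(n)
    # Concordance correlation coefficient: agreement of yhat with y around the 45-degree line.
    phat = sum(yhat) / n
    sy = sum((a - ybar) ** 2 for a in y) / n
    sp = sum((b - phat) ** 2 for b in yhat) / n
    cov = sum((a - ybar) * (b - phat) for a, b in zip(y, yhat)) / n
    ccc = 2 * cov / (sy + sp + (ybar - phat) ** 2)
    return {"RMSE": rmse, "MARE": mare, "R2": r2, "AICc": aicc, "BIC": bic, "CCC": ccc}

# Hypothetical usage on a toy 5-point data set with a 2-parameter model:
m = model_criteria([1.0, 2.0, 3.0, 4.0, 5.0], [1.1, 1.9, 3.2, 3.8, 5.1], k=2)
```

With criteria like these in hand, the over-fitting rule of thumb from the abstract reduces to checking that `n_train / n_inputs` exceeds 10 (WNN) or 20 (ANFIS) before trusting the fitted model.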




Updated: 2020-07-24