Parameter-Free Extreme Learning Machine for Imbalanced Classification
Neural Processing Letters ( IF 3.1 ) Pub Date : 2020-06-17 , DOI: 10.1007/s11063-020-10282-z
Li Li , Kaiyi Zhao , Ruizhi Sun , Jiangzhang Gan , Gang Yuan , Tong Liu

Imbalanced data distribution is a common problem in classification: the number of samples varies greatly across categories, which increases the difficulty of classification. Although many methods have been applied to imbalanced data classification, they still suffer from low classification accuracy on the minority class and from requiring additional parameter settings. To improve minority-class accuracy on imbalanced problems, this paper proposes a parameter-free weighted learning mechanism, based on the extreme learning machine and sample loss values, that balances the contribution of samples at each training step. The proposed method comprises two main parts: a sample-weight learning process based on sample losses, and a sample selection and weight update process governed by a constraint function and the iteration count. Experimental results on twelve datasets from the KEEL repository show that the proposed method achieves more balanced and more accurate results than the other methods compared in this work.
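The abstract outlines the two parts of the method (loss-based sample weighting, then iterative selection/update) without giving the exact constraint function. The following is a minimal sketch of a weighted extreme learning machine with a generic loss-based reweighting loop; the `elm_weighted` function, the `tanh` activation, the ridge term, and the normalized-loss weight update are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def elm_weighted(X, y, n_hidden=50, n_iter=5, reg=1e-3, seed=0):
    """Sketch of a loss-weighted Extreme Learning Machine.

    The loss-based reweighting below is a generic stand-in for the
    paper's (unspecified) constraint function: samples with larger
    loss -- typically minority-class samples -- get larger weights.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    classes = np.unique(y)
    T = (y[:, None] == classes[None, :]).astype(float)   # one-hot targets
    W = rng.standard_normal((d, n_hidden))               # random input weights
    b = rng.standard_normal(n_hidden)                    # random biases
    H = np.tanh(X @ W + b)                               # fixed hidden layer
    s = np.ones(n)                                       # per-sample weights
    for _ in range(n_iter):
        # Weighted ridge least squares: (H'SH + reg*I) beta = H'ST
        Hw = H * s[:, None]
        beta = np.linalg.solve(H.T @ Hw + reg * np.eye(n_hidden), Hw.T @ T)
        loss = ((H @ beta - T) ** 2).sum(axis=1)         # per-sample loss
        s = loss / (loss.mean() + 1e-12)                 # upweight hard samples
    return lambda Xn: classes[np.argmax(np.tanh(Xn @ W + b) @ beta, axis=1)]
```

On a toy imbalanced two-class problem (e.g. a 90/10 split of two Gaussian clusters), the returned predictor can be called as `predict = elm_weighted(X, y); predict(X)`.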




Updated: 2020-06-17