Robust regularized extreme learning machine for regression with non-convex loss function via DC program
Journal of the Franklin Institute (IF 3.7), Pub Date: 2020-05-29, DOI: 10.1016/j.jfranklin.2020.05.027
Kuaini Wang, Huimin Pei, Jinde Cao, Ping Zhong

Extreme learning machine (ELM) is considered a powerful data-driven modeling method and has been widely applied in various practical fields. It relies on the assumption that samples are completely clean, free of noise or outliers. However, this is often not the case in real-world applications, which results in poor robustness. In this paper, we focus on addressing a key weakness of ELM: its inefficiency when confronted with outliers. By introducing a non-convex loss function, we propose a robust regularized extreme learning machine for regression via the difference of convex functions (DC) program, denoted RRELM. The proposed non-convex loss function places a constant penalty on any large outlier to suppress its negative effect, and can be decomposed into the difference of two convex functions. RRELM can therefore be solved by DC optimization. Numerical experiments were conducted on various datasets to examine the validity of RRELM; in each experiment the training samples were randomly contaminated with outlier levels of 0%, 10%, 20%, 30% and 40%. We also applied RRELM to financial time series prediction. The experimental results verify that the proposed RRELM yields superior generalization performance. Moreover, it is less affected by increasing proportions of outliers than the competing method.
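As a rough illustration of the idea (not the authors' formulation), the sketch below uses a truncated squared loss min(r^2, tau^2), which decomposes into the difference of two convex functions r^2 - max(r^2 - tau^2, 0), and fits a random-feature ELM by a simple DCA-style iteration. All names and parameter values (tau, C, n_hidden) are illustrative assumptions, not taken from the paper.

# Minimal sketch: ELM regression with a bounded (truncated squared) loss
# solved by a DC / CCCP-style iteration. Illustrative only.
import numpy as np

def elm_features(X, W, b):
    # Random-weight hidden layer with a sigmoid activation.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def robust_elm_fit(X, y, n_hidden=50, C=10.0, tau=1.0, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = elm_features(X, W, b)
    I = np.eye(n_hidden)

    # Start from the ordinary regularized ELM solution.
    beta = np.linalg.solve(H.T @ H + I / (2 * C), H.T @ y)
    for _ in range(n_iter):
        r = y - H @ beta                      # residuals
        # Subgradient of the concave part h(r) = max(r^2 - tau^2, 0):
        # zero for small residuals, 2r for residuals beyond tau.
        s = np.where(np.abs(r) > tau, 2 * r, 0.0)
        # Convex subproblem of the DC iteration: a regularized least-squares
        # solve in which large-residual targets are pulled back toward the
        # current prediction, capping their influence.
        beta = np.linalg.solve(H.T @ H + I / (2 * C), H.T @ (y - s / 2))
    return W, b, beta

# Toy usage: a noisy sine with a few gross outliers injected.
X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(1).normal(size=200)
y[::25] += 5.0                                # simulated outliers
W, b, beta = robust_elm_fit(X, y)
pred = elm_features(X, W, b) @ beta

In each iteration the subgradient term cancels the pull of residuals beyond tau, which is how a bounded loss of this form keeps large outliers from dominating the fit; the paper's own loss and solver may differ in detail.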




Updated: 2020-07-14