Knowledge-Based Systems (IF 8.8), Pub Date: 2021-06-12, DOI: 10.1016/j.knosys.2021.107226. Authors: Liang-Rui Ren, Jin-Xing Liu, Ying-Lian Gao, Xiang-Zhen Kong, Chun-Hou Zheng
Kernel Risk-Sensitive Loss (KRSL) is a nonlinear similarity measure defined in kernel space, which enables gradient-based methods to achieve higher accuracy while effectively weakening the negative effects of noise and outliers. Defined as a kernel-function expectation between two random variables, KRSL has been successfully applied in robust machine learning and signal processing. The Extreme Learning Machine (ELM), one of the most popular machine learning methods, has attracted great attention in both supervised and semi-supervised learning. However, the performance of traditional ELM methods declines when the data contain noise and outliers, when the manifold structure of the data is not considered, or when the neural network structure is too complex. Therefore, based on KRSL, hyper-graph regularization, and the -norm, we first propose a more robust ELM method named the Kernel Risk-Sensitive Loss Based Hyper-graph Regularized Robust Extreme Learning Machine (KRSL-HRELM). In KRSL-HRELM, KRSL is introduced into ELM to enhance its ability to handle noise and outliers. Moreover, hyper-graph regularization is integrated into the method to learn higher-order geometric structure information among the data. In addition, the -norm is introduced to constrain the output weight matrix so as to obtain a sparse network model. Inspired by other semi-supervised ELM methods, we extend KRSL-HRELM to semi-supervised learning and propose its semi-supervised version, SS-KRSL-HRELM. Empirical studies on a large number of real-world datasets show that the proposed methods are competitive with other advanced supervised and semi-supervised learning methods in terms of robustness and efficiency.
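The abstract describes KRSL as a kernel-function expectation between two random variables. As a sketch only, following the commonly used form of KRSL with a Gaussian kernel (the risk-sensitive parameter `lam` and kernel width `sigma` are illustrative names, not taken from this paper), an empirical estimate over paired samples can be written as:

```python
import numpy as np

def krsl(x, y, sigma=1.0, lam=2.0):
    """Empirical Kernel Risk-Sensitive Loss between sample vectors x and y.

    Sketch of the common KRSL form:
        L(X, Y) = (1/lam) * E[ exp( lam * (1 - kappa_sigma(X - Y)) ) ]
    with Gaussian kernel kappa_sigma(e) = exp(-e^2 / (2 sigma^2)).
    Parameter names and defaults are illustrative.
    """
    e = np.asarray(x) - np.asarray(y)
    kappa = np.exp(-e**2 / (2.0 * sigma**2))          # Gaussian kernel of the error
    return np.mean(np.exp(lam * (1.0 - kappa))) / lam  # risk-sensitive expectation
```

When the errors are zero the kernel equals 1 and the loss attains its minimum `1/lam`; as errors grow, the exponential weighting rises but saturates, which is what bounds the influence of large outliers compared with a squared-error loss.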
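For readers unfamiliar with the base model the paper builds on: ELM's appeal is that the hidden-layer weights are drawn at random and kept fixed, so only the output weights are solved in closed form. A minimal ridge-regularized ELM sketch follows (this is the standard ELM, not the proposed KRSL-HRELM; function names, the tanh activation, and defaults are illustrative):

```python
import numpy as np

def train_elm(X, T, n_hidden=50, reg=1e-3, rng=None):
    """Fit a basic ELM: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                # random biases (fixed)
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    # Ridge-regularized least squares for the output weights beta:
    # beta = (H^T H + reg * I)^(-1) H^T T
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The proposed KRSL-HRELM replaces the squared-error objective implicit in this least-squares step with the KRSL criterion and adds hyper-graph and sparsity regularizers on `beta`, which is why it tolerates noisy labels and outliers better than this plain variant.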
Title: Kernel Risk-Sensitive Loss Based Hyper-graph Regularized Robust Extreme Learning Machine and Its Semi-Supervised Classification Extension