Hierarchical extreme learning machine with L21-norm loss and regularization
International Journal of Machine Learning and Cybernetics (IF 5.6), Pub Date: 2020-11-23, DOI: 10.1007/s13042-020-01234-z
Rui Li, Xiaodan Wang, Yafei Song, Lei Lei

Recently, multilayer extreme learning machine (ELM) algorithms have been extensively studied in the ELM community for hierarchical abstract representation learning. In this paper, we investigate a specific combination of an \(L_{21}\)-norm loss function and \(L_{21}\)-norm regularization to improve the robustness and sparsity of multilayer ELM. As is well known, the mean square error (MSE) cost function (i.e., the squared \(L_{2}\)-norm cost function) is commonly used as the optimization objective for ELM, but it is sensitive to the outliers and impulsive noise that are pervasive in real-world data. The \(L_{21}\)-norm loss function lessens the harmful influence of noise and outliers and enhances the robustness and stability of the learned model. Additionally, the row-sparsity-inducing \(L_{21}\)-norm regularization learns the most relevant sparse representation and reduces the intrinsic complexity of the learning model. We propose an ELM auto-encoder built on this combination of \(L_{21}\)-norm loss and regularization (LR21-ELM-AE), and then stack LR21-ELM-AE modules hierarchically to construct the hierarchical extreme learning machine (H-LR21-ELM). Experiments on several well-known benchmark datasets show that the proposed H-LR21-ELM generates a more robust, more discriminative, and sparser model than other state-of-the-art multilayer ELM algorithms.
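To make the objective concrete: for hidden activations \(\mathbf{H}\) and output weights \(\mathbf{B}\), a natural reading of the abstract is that each auto-encoder layer solves \(\min_{\mathbf{B}} \|\mathbf{H}\mathbf{B}-\mathbf{X}\|_{2,1} + \lambda\|\mathbf{B}\|_{2,1}\), where \(\|\mathbf{W}\|_{2,1}=\sum_i \|\mathbf{w}_i\|_2\) sums the \(L_2\) norms of the rows, so whole samples (loss term) and whole hidden neurons (regularizer) are down-weighted or zeroed together. Below is a minimal NumPy sketch of that idea using an iteratively reweighted least squares (IRLS) solver; the function names l21_elm_ae and h_lr21_elm_features, the tanh activation, and all hyperparameter values are illustrative assumptions, not the authors' released implementation.

import numpy as np

def l21_elm_ae(X, n_hidden, lam=1e-2, n_iter=30, eps=1e-8, seed=0):
    # Fit one LR21-ELM-AE layer: fixed random input weights, then solve
    #   min_B ||H B - X||_{2,1} + lam * ||B||_{2,1},  H = tanh(X A + b),
    # by IRLS (a common solver for joint L21 objectives; an assumption here).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    A = rng.uniform(-1.0, 1.0, size=(d, n_hidden))   # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)        # random biases
    H = np.tanh(X @ A + b)                           # hidden activations
    B = np.linalg.lstsq(H, X, rcond=None)[0]         # plain L2 warm start
    for _ in range(n_iter):
        E = H @ B - X                                # per-sample residual rows
        # Row weights 1/(2||e_i||) for the L21 loss: noisy samples count less.
        d_w = 1.0 / (2.0 * np.maximum(np.linalg.norm(E, axis=1), eps))
        # Row weights 1/(2||b_j||) for the L21 regularizer: row sparsity in B.
        u_w = 1.0 / (2.0 * np.maximum(np.linalg.norm(B, axis=1), eps))
        lhs = H.T @ (H * d_w[:, None]) + lam * np.diag(u_w)
        B = np.linalg.solve(lhs, H.T @ (X * d_w[:, None]))
    return B

def h_lr21_elm_features(X, layer_sizes, lam=1e-2):
    # Stack LR21-ELM-AE layers: each learned B.T maps the data forward,
    # yielding the hierarchical representation fed to a final classifier.
    Z = X
    for k, n_hidden in enumerate(layer_sizes):
        B = l21_elm_ae(Z, n_hidden, lam=lam, seed=k)
        Z = np.tanh(Z @ B.T)
    return Z

For example, h_lr21_elm_features(X, [256, 128]) would produce a two-layer 128-dimensional representation; training any standard regularized ELM classifier on top would complete an H-LR21-ELM-style pipeline under these assumptions.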




Updated: 2020-11-25