k-Sparse extreme learning machine
Electronics Letters (IF 1.1), Pub Date: 2020-11-03, DOI: 10.1049/el.2020.1840
N. Raza¹, M. Tahir¹, K. Ali²

Extreme learning machine (ELM) is a single-layer feed-forward neural network with the advantages of fast training and good generalisation. However, when the size of the hidden layer is increased, both advantages are lost, as the redundant information may cause overfitting. The traditional way to deal with this issue is to introduce regularisation that promotes sparsity, but only in the output-layer weight matrix. In this Letter, we propose instead enforcing sparsity on the output of the hidden layer, so that it serves as the only non-linearity in the hidden layer. In the proposed formulation, we use a linear activation function in the hidden layer and keep only the k highest-activity neurons as the measure of sparsity. Using principal component analysis, we project the resulting hidden-layer output matrix onto a low-dimensional space to further remove redundant and irrelevant information and to speed up training. To verify the feasibility and effectiveness of the proposed method, we test it against a number of ELM variants on benchmark datasets. Compared with these methods, the proposed method consistently achieves better accuracy across many different benchmark datasets.
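To make the pipeline in the abstract concrete, below is a minimal NumPy sketch of the described sequence: a fixed random linear hidden layer, a top-k sparsity step as the only non-linearity, a PCA projection of the hidden-layer output, and a least-squares solve for the output weights. The function names (fit_k_sparse_elm, keep_top_k, predict) and the hyper-parameter values (n_hidden=200, k=50, n_components=30) are illustrative assumptions, not details taken from the Letter.

```python
import numpy as np

rng = np.random.default_rng(0)

def keep_top_k(H, k):
    """Zero all but the k largest-magnitude activations in each row."""
    drop = np.argsort(np.abs(H), axis=1)[:, :-k]    # columns of the smallest entries
    H = H.copy()
    np.put_along_axis(H, drop, 0.0, axis=1)
    return H

def fit_k_sparse_elm(X, Y, n_hidden=200, k=50, n_components=30):
    """Random linear hidden layer -> k-sparse selection -> PCA -> least squares."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
    b = rng.standard_normal(n_hidden)                # fixed random biases
    H = keep_top_k(X @ W + b, k)                     # k-sparsity is the only non-linearity
    mu = H.mean(axis=0)
    _, _, Vt = np.linalg.svd(H - mu, full_matrices=False)
    P = Vt[:n_components].T                          # PCA projection matrix
    beta = np.linalg.pinv((H - mu) @ P) @ Y          # output weights via least squares
    return {"W": W, "b": b, "k": k, "mu": mu, "P": P, "beta": beta}

def predict(model, X):
    H = keep_top_k(X @ model["W"] + model["b"], model["k"])
    return (H - model["mu"]) @ model["P"] @ model["beta"]

# Toy usage with synthetic data and one-hot labels (for illustration only):
X = rng.standard_normal((500, 20))
Y = np.eye(3)[rng.integers(0, 3, 500)]
model = fit_k_sparse_elm(X, Y)
labels = predict(model, X).argmax(axis=1)
```

Because the hidden activations are linear, the top-k selection is what gives the network its representational power here; the PCA step then shrinks the design matrix before the pseudo-inverse, which is what speeds up training in this sketch.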

Updated: 2020-11-06