Densely Connected Deep Extreme Learning Machine Algorithm
Cognitive Computation ( IF 5.4 ) Pub Date : 2020-08-08 , DOI: 10.1007/s12559-020-09752-2
X. W. Jiang , T. H. Yan , J. J. Zhu , B. He , W. H. Li , H. P. Du , S. S. Sun

As a single-hidden-layer feed-forward neural network, the extreme learning machine (ELM) has been studied extensively for its short training time and good generalization ability. Recently, as deep learning has become a research hotspot, deep ELM variants such as the multi-layer extreme learning machine (ML-ELM) and the hierarchical extreme learning machine (H-ELM) have also been proposed. However, deep ELM algorithms still have shortcomings: (1) when the model is shallow, the random feature mapping prevents the sample features from being fully learned and exploited; (2) when the model is deep, the usefulness of the sample features degrades after repeated abstraction and generalization. To address these problems, this paper proposes a densely connected deep ELM algorithm: dense-HELM (D-HELM). Benchmark datasets of different sizes were used to evaluate the D-HELM algorithm. Compared with the H-ELM algorithm on these benchmarks, the average test accuracy increases by 5.34% and the average training time decreases by 21.15%. On the NORB dataset, the proposed D-HELM algorithm still achieves the best classification results and the fastest training speed. By using a densely connected network structure, the D-HELM algorithm makes full use of the features learned by the hidden layers and effectively reduces the number of parameters. Compared with the H-ELM algorithm, D-HELM significantly improves recognition accuracy and accelerates training.
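To make the idea concrete, the following is a minimal sketch of an ELM-style pipeline with dense connectivity: each random hidden layer receives the concatenation of the raw input and all previous layers' outputs, and only the final output weights are solved in closed form via the pseudoinverse. This is an illustrative toy under stated assumptions (sigmoid activations, no regularization, a synthetic two-cluster dataset), not the authors' exact D-HELM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def random_layer(X, n_hidden):
    """One ELM-style layer: random input weights and biases, never trained."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return sigmoid(X @ W + b)

def dense_elm_features(X, n_layers=3, n_hidden=32):
    """Densely connected stacking: layer k sees the concatenation of the
    raw input and the outputs of all layers before it (DenseNet-style)."""
    feats = [X]
    for _ in range(n_layers):
        H = random_layer(np.hstack(feats), n_hidden)
        feats.append(H)
    # All hidden-layer features jointly feed the output layer.
    return np.hstack(feats[1:])

# Synthetic two-class toy data: clusters around (-2,-2) and (2,2).
X = np.vstack([rng.normal(-2, 0.5, size=(50, 2)),
               rng.normal( 2, 0.5, size=(50, 2))])
T = np.vstack([np.tile([1.0, 0.0], (50, 1)),   # one-hot targets
               np.tile([0.0, 1.0], (50, 1))])

H = dense_elm_features(X)
beta = np.linalg.pinv(H) @ T                   # closed-form output weights
acc = ((H @ beta).argmax(axis=1) == T.argmax(axis=1)).mean()
```

Because every hidden layer's output is reused downstream, later layers can stay narrow, which is the mechanism behind the parameter reduction claimed in the abstract.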

Updated: 2020-08-08