Downsizing and enhancing broad learning systems by feature augmentation and residuals boosting
Complex & Intelligent Systems ( IF 5.8 ) Pub Date : 2020-04-09 , DOI: 10.1007/s40747-020-00139-2
Runshan Xie , Shitong Wang

Recently, the broad learning system (BLS) has been confirmed, both theoretically and experimentally, to be an efficient incremental learning system. To avoid a deep architecture, BLS shares the architecture and learning mechanism of the well-known functional-link neural network (FLNN), but learns in a broad fashion on both randomly mapped features of the original data and randomly generated enhancement nodes. As a result, BLS often requires a very large number of hidden nodes to reach the prescribed or satisfactory performance, which inevitably causes both overwhelming storage requirements and overfitting. In this study, a stacked architecture of broad learning systems, called D&BLS, is proposed to enhance performance while simultaneously downsizing the system architecture. By boosting the residuals between the previous and current layers, and by augmenting the original input space with the outputs of the previous layer to form the inputs of the current layer, D&BLS stacks several lightweight BLS sub-systems to guarantee stronger feature-representation capability and better classification/regression performance. Three fast incremental learning algorithms for D&BLS are also developed, none of which requires retraining the whole system. Experimental results on several popular datasets demonstrate the effectiveness of D&BLS in terms of both enhanced performance and reduced system architecture.
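The stacking scheme described above can be illustrated with a minimal sketch. The following is not the authors' implementation: it assumes each lightweight BLS sub-system consists of random mapped-feature nodes, random enhancement nodes, and ridge-regression output weights, and that each new layer fits the residual of the cumulative prediction while its input is the original features augmented with the previous layer's output. The class and function names (`BLSLayer`, `fit_dbls`) and all hyperparameters are hypothetical.

```python
import numpy as np

class BLSLayer:
    """One lightweight BLS sub-system: random mapped-feature nodes,
    random enhancement nodes, ridge-regression output weights
    (a simplified stand-in, not the paper's exact formulation)."""
    def __init__(self, n_feat=20, n_enh=20, reg=1e-2, seed=0):
        self.n_feat, self.n_enh, self.reg = n_feat, n_enh, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        Z = np.tanh(X @ self.Wf + self.bf)   # randomly mapped feature nodes
        H = np.tanh(Z @ self.We + self.be)   # randomly generated enhancement nodes
        return np.hstack([Z, H])

    def fit(self, X, Y):
        d = X.shape[1]
        self.Wf = self.rng.standard_normal((d, self.n_feat))
        self.bf = self.rng.standard_normal(self.n_feat)
        self.We = self.rng.standard_normal((self.n_feat, self.n_enh))
        self.be = self.rng.standard_normal(self.n_enh)
        A = self._hidden(X)
        # ridge (regularized pseudo-inverse) solution for the output weights
        self.W = np.linalg.solve(A.T @ A + self.reg * np.eye(A.shape[1]), A.T @ Y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.W


def fit_dbls(X, Y, n_layers=3):
    """Stack BLS layers: layer k fits the residual Y - (cumulative prediction),
    and its input is the original X augmented with the running prediction."""
    layers, pred, inp = [], np.zeros_like(Y), X
    for k in range(n_layers):
        layer = BLSLayer(seed=k).fit(inp, Y - pred)  # boost the residual
        pred = pred + layer.predict(inp)
        inp = np.hstack([X, pred])                   # augment the original input space
        layers.append(layer)
    return layers, pred
```

Because each layer's ridge solution can at worst leave the residual unchanged (the zero-weight solution is always available), the training error is non-increasing as sub-systems are stacked, which is one way to see why a few lightweight layers can replace a single BLS with a huge number of hidden nodes.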




Updated: 2020-04-09