Non-iterative and Fast Deep Learning: Multilayer Extreme Learning Machines
Journal of the Franklin Institute (IF 4.1), Pub Date: 2020-07-08, DOI: 10.1016/j.jfranklin.2020.04.033
Jie Zhang, Yanjiao Li, Wendong Xiao, Zhiqiang Zhang

In the past decade, deep learning techniques have powered many aspects of our daily life and drawn ever-increasing research interest. However, conventional deep learning approaches, such as the deep belief network (DBN), restricted Boltzmann machine (RBM), and convolutional neural network (CNN), suffer from a time-consuming training process due to the fine-tuning of a large number of parameters and their complicated hierarchical structures. Furthermore, this complexity makes it difficult to theoretically analyze and prove the universal approximation capability of these conventional deep learning approaches. To tackle these issues, multilayer extreme learning machines (ML-ELMs) were proposed, which have accelerated the development of deep learning. Compared with conventional deep learning, ML-ELMs are non-iterative and fast due to the random feature mapping mechanism. In this paper, we provide a thorough review of the development of ML-ELMs, including the stacked ELM autoencoder (ELM-AE), residual ELM, and local receptive field based ELM (ELM-LRF), and address their applications. In addition, we discuss the connection between random neural networks and conventional deep learning.
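To make the non-iterative mechanism referenced above concrete, the following is a minimal NumPy sketch (not taken from the paper) of a stacked ELM-AE in the spirit of ML-ELM: each layer draws random input weights, solves its reconstruction weights in closed form by least squares, and reuses their transpose as a fixed feature map, so no backpropagation or gradient iteration is involved. The function names, activation choice, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def elm_ae_layer(X, n_hidden, rng):
    """One ELM autoencoder (ELM-AE) layer.

    Random weights map X to a hidden representation; the output weights
    beta that reconstruct X are solved in closed form, and beta.T is then
    reused as this layer's fixed transformation.
    """
    A = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                 # random biases
    H = np.tanh(X @ A + b)                                     # random feature mapping
    beta = np.linalg.pinv(H) @ X                               # least squares: H @ beta ~ X
    return np.tanh(X @ beta.T), beta                           # layer output, learned weights

def ml_elm_fit(X, T, hidden_sizes, seed=0):
    """Stack ELM-AE layers, then solve the output layer by least squares.

    No gradient descent anywhere: every layer is obtained in a single pass,
    which is why training is non-iterative and fast.
    """
    rng = np.random.default_rng(seed)
    H, betas = X, []
    for n_hidden in hidden_sizes:
        H, beta = elm_ae_layer(H, n_hidden, rng)
        betas.append(beta)
    W_out = np.linalg.pinv(H) @ T                              # closed-form output weights
    return betas, W_out

def ml_elm_predict(X, betas, W_out):
    H = X
    for beta in betas:
        H = np.tanh(H @ beta.T)
    return H @ W_out

# Toy usage on random data (shapes only, for illustration).
X = np.random.default_rng(1).normal(size=(200, 10))
T = np.sin(X[:, :1])
betas, W_out = ml_elm_fit(X, T, hidden_sizes=[64, 32])
print(ml_elm_predict(X, betas, W_out).shape)  # (200, 1)
```

The sketch omits details used in practice (e.g., orthogonalization of the random weights and ridge regularization of the pseudo-inverse), but it captures why ML-ELM training avoids the iterative fine-tuning that slows down DBN, RBM, and CNN training.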



Updated: 2020-09-02