SAOSA: Stable Adaptive Optimization for Stacked Auto-encoders
Neural Processing Letters ( IF 2.6 ) Pub Date : 2020-06-22 , DOI: 10.1007/s11063-020-10277-w
Ali Moradi Vartouni , Mohammad Teshnehlab , Saeed Sedighian Kashi

Stacked auto-encoders are deep learning models that automatically extract meaningful unsupervised features from input data through a hierarchical learning process, with parameters learnt layer by layer in each auto-encoder (AE). Since optimization is one of the main components of neural networks and auto-encoders, the learning rate is among their most crucial hyper-parameters; this issue is even more important for large-scale and, especially, sparse data sets. In this paper, we adapt the learning rate for each AE, corresponding to the various components of the AE network, in every stochastic gradient computation, and we analyze the theoretical convergence of back-propagation learning under the proposed method. We also extend our methodology to online adaptive optimization suitable for deep learning. Compared to constant learning rates, we obtain promising results on four classification tasks using a single machine: (1) MNIST digits, (2) blogs-Gender-100 text, (3) smartphone-based recognition of human activities and postural transitions (time series), and (4) EEG brainwave feeling-emotions time series.
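To illustrate the general idea of adapting the step size for each component of an AE in every stochastic gradient update, the following is a minimal sketch of a tied-weight auto-encoder trained with an Adagrad-style per-parameter accumulator. This is an assumption-laden stand-in for intuition only, not the paper's actual SAOSA update rule or its stability analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class AdaptiveAutoEncoder:
    """Tied-weight AE with an Adagrad-style per-component learning rate.
    Illustrative only; NOT the SAOSA rule from the paper."""

    def __init__(self, n_in, n_hidden, eta=0.3, eps=1e-8):
        self.W = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias
        self.eta, self.eps = eta, eps
        # One squared-gradient accumulator per parameter component,
        # so every weight gets its own effective learning rate.
        self.gW = np.zeros_like(self.W)
        self.gb = np.zeros_like(self.b)
        self.gc = np.zeros_like(self.c)

    def step(self, x):
        # Forward pass: encode, then decode with the tied weight matrix.
        h = sigmoid(self.W @ x + self.b)
        xr = sigmoid(self.W.T @ h + self.c)
        loss = 0.5 * np.sum((xr - x) ** 2)

        # Backward pass: gradients of the reconstruction loss.
        d_out = (xr - x) * xr * (1.0 - xr)            # decoder pre-activation
        d_hid = (self.W @ d_out) * h * (1.0 - h)      # encoder pre-activation
        dW = np.outer(h, d_out) + np.outer(d_hid, x)  # tied weights: two terms
        db, dc = d_hid, d_out

        # Adagrad-style update: the step size adapts per component.
        for p, g, acc in ((self.W, dW, self.gW),
                          (self.b, db, self.gb),
                          (self.c, dc, self.gc)):
            acc += g ** 2
            p -= self.eta * g / np.sqrt(acc + self.eps)
        return loss

# Toy usage: reconstruct a few sparse binary vectors (sparse data is
# where per-component rates matter most, per the abstract).
data = (rng.random((20, 16)) < 0.2).astype(float)
ae = AdaptiveAutoEncoder(n_in=16, n_hidden=8)
first = np.mean([ae.step(x) for x in data])
for _ in range(50):
    last = np.mean([ae.step(x) for x in data])
```

In a stacked setting, one such AE would be trained per layer, with each layer's hidden codes feeding the next AE's input.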

Updated: 2020-06-22