A Comparison of Optimization Algorithms for Deep Learning
International Journal of Pattern Recognition and Artificial Intelligence (IF 0.9), Pub Date: 2020-02-06, DOI: 10.1142/s0218001420520138
Derya Soydaner
In recent years, we have witnessed the rise of deep learning. Deep neural networks have proved their success in many areas. However, optimizing these networks has become more difficult as neural networks grow deeper and datasets become larger. Therefore, more advanced optimization algorithms have been proposed over the past years. In this study, widely used optimization algorithms for deep learning are examined in detail. To this end, these algorithms, called adaptive gradient methods, are implemented for both supervised and unsupervised tasks. The behavior of the algorithms during training, and their results on four image datasets, namely MNIST, CIFAR-10, Kaggle Flowers and Labeled Faces in the Wild, are compared, highlighting their differences from basic optimization algorithms.
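To make the contrast concrete, the sketch below implements one such adaptive gradient method, Adam, on a toy quadratic. The function, learning rate, and step count are illustrative choices, not taken from the paper; unlike plain gradient descent, which uses one global learning rate, Adam rescales each parameter's step by running estimates of the gradient's first and second moments.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. Hyperparameters are the commonly used defaults."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for the warm-up phase
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step
    return w, m, v

# Toy objective f(w) = (w - 3)^2, so grad f = 2 * (w - 3); minimum at w = 3.
w = np.array([0.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 2001):
    grad = 2 * (w - 3)
    w, m, v = adam_step(w, grad, m, v, t, lr=0.05)
```

In a real experiment, `grad` would be the backpropagated gradient of the network loss, and `w`, `m`, `v` would be maintained per parameter tensor; libraries such as TensorFlow and PyTorch ship these optimizers ready-made.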
