An Adaptive Optimization Method Based on Learning Rate Schedule for Neural Networks
Applied Sciences (IF 2.838) Pub Date: 2021-01-18, DOI: 10.3390/app11020850
Dokkyun Yi, Sangmin Ji, Jieun Park

Artificial intelligence (AI) is achieved by optimizing a cost function constructed from learning data. Changing the parameters of the cost function constitutes the AI learning process (AI learning for short). If AI learning is performed well, the value of the cost function reaches the global minimum. For AI to be well learned, the parameters should stop changing once the cost function attains its global minimum. One useful optimization method is the momentum method; however, the momentum method has difficulty stopping the parameters when the cost function reaches the global minimum (the non-stop problem). The proposed method is based on the momentum method. To solve the non-stop problem, we incorporate the value of the cost function into the update rule. Consequently, as learning proceeds, the mechanism in our method reduces the size of the parameter update according to the value of the cost function. We verify the method through a proof of convergence and through numerical experiments against existing methods, confirming that learning proceeds well.
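The abstract does not give the exact update rule, so the following is a minimal Python sketch of the idea on a toy quadratic cost: a classic momentum update whose step is additionally damped by a bounded function of the current cost value (c / (1 + c) here, an assumed form, not the paper's formula), so the parameter change vanishes as the cost approaches its assumed global minimum of zero.

import numpy as np

TARGET = np.array([1.0, -2.0])

def cost(w):
    # Toy quadratic cost; global minimum value 0 at w = TARGET.
    return 0.5 * np.sum((w - TARGET) ** 2)

def grad(w):
    return w - TARGET

def momentum_with_cost_damping(w0, lr=0.1, beta=0.9, steps=500):
    # Classic momentum, with each step damped by c / (1 + c),
    # where c is the current cost value. The damping factor tends
    # to 0 as the cost approaches its minimum, so the parameters
    # stop moving there (the abstract's fix for the non-stop
    # problem); the exact damping used in the paper may differ.
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        c = cost(w)
        v = beta * v + grad(w)
        w = w - lr * (c / (1.0 + c)) * v
    return w

w_star = momentum_with_cost_damping([5.0, 5.0])
print(w_star, cost(w_star))  # w_star ≈ [1, -2], cost ≈ 0

The bounded scaling keeps early steps stable while still driving the update to zero at the minimum; scaling by the raw cost value instead could make the effective step too large, and hence diverge, when the initial cost is high.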
