Analysis of stochastic gradient descent in continuous time
Statistics and Computing (IF 1.6) | Pub Date: 2021-05-09 | DOI: 10.1007/s11222-021-10016-8
Jonas Latz

Stochastic gradient descent is an optimisation method that combines classical gradient descent with random subsampling within the target functional. In this work, we introduce the stochastic gradient process as a continuous-time representation of stochastic gradient descent. The stochastic gradient process is a dynamical system coupled with a continuous-time Markov process living on a finite state space. The dynamical system — a gradient flow — represents the gradient descent part; the process on the finite state space represents the random subsampling. Processes of this type are, for instance, used to model clonal populations in fluctuating environments. After introducing it, we study theoretical properties of the stochastic gradient process: we show that it converges weakly to the gradient flow with respect to the full target function as the learning rate approaches zero. We give conditions under which the stochastic gradient process with constant learning rate is exponentially ergodic in the Wasserstein sense. Then we study the case where the learning rate goes to zero sufficiently slowly and the single target functions are strongly convex. In this case, the process converges weakly to the point mass concentrated at the global minimum of the full target function, indicating consistency of the method. We conclude after a discussion of discretisation strategies for the stochastic gradient process and numerical experiments.
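The dynamics described above — a gradient flow whose active data index jumps at random times — can be illustrated with a minimal simulation sketch. This is not the paper's implementation; the toy target, the uniform index-switching law, and the Euler step size are assumptions chosen for illustration. Each component `f_i(x) = 0.5 * (x - a_i)**2` is strongly convex, and the full target `f(x) = mean_i f_i(x)` has its global minimum at `mean(a)`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: component minima a_i, so the full target's
# global minimiser is mean(a) = 0.5.
a = np.array([-2.0, 0.5, 3.0])

def grad(x, i):
    # Gradient of the single target f_i(x) = 0.5 * (x - a[i])**2.
    return x - a[i]

def stochastic_gradient_process(x0, eps, t_end, dt=1e-3):
    """Euler discretisation of the piecewise gradient flow
    dx/dt = -grad f_{i(t)}(x), where the index process i(t) jumps to a
    uniformly random index at the arrivals of a Poisson process with
    rate 1/eps (eps playing the role of the learning rate)."""
    x, t = x0, 0.0
    i = rng.integers(len(a))
    next_jump = rng.exponential(eps)  # exponential waiting time, mean eps
    while t < t_end:
        if t >= next_jump:            # Markov switch of the active data index
            i = rng.integers(len(a))
            next_jump += rng.exponential(eps)
        x -= dt * grad(x, i)          # gradient-flow step for the current f_i
        t += dt
    return x

# Smaller eps means faster switching, so the process tracks the full
# gradient flow more closely and settles near the global minimiser 0.5.
print(stochastic_gradient_process(x0=5.0, eps=0.01, t_end=20.0))
```

With a constant `eps` the trajectory fluctuates around the minimiser rather than converging to it, consistent with the ergodicity result stated in the abstract; letting `eps` decay slowly over time is what yields convergence to the point mass at the minimum.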




Updated: 2021-05-09