On the Discrepancy Principle for Stochastic Gradient Descent
Inverse Problems (IF 2.0), Pub Date: 2020-09-01, DOI: 10.1088/1361-6420/abaa58
Tim Jahn, Bangti Jin

Stochastic gradient descent (SGD) is a promising numerical method for solving large-scale inverse problems. However, its theoretical properties remain largely unexplored through the lens of classical regularization theory. In this note, we study the classical discrepancy principle, one of the most popular a posteriori choice rules, as the stopping criterion for SGD, and prove that the method terminates after a finite number of iterations and that the iterate converges in probability as the noise level tends to zero. The theoretical results are complemented with extensive numerical experiments.
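For readers unfamiliar with the setup, the stopping rule works as follows: given noisy data y^delta with ||y^delta - y|| <= delta, SGD is terminated at the first iterate x_k whose residual satisfies ||A x_k - y^delta|| <= tau * delta for a fixed tau > 1. The sketch below is purely illustrative and not the authors' code: the test problem, the choice tau = 1.5, and the 1/sqrt(k) step-size schedule are assumptions made for this example, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test problem (not from the paper): an overdetermined linear
# system A x = y with unit-norm rows, perturbed by noise of level delta.
n, d = 200, 20
A = rng.standard_normal((n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # normalise each row a_i
x_true = rng.standard_normal(d)
y_exact = A @ x_true

delta = 0.05                                    # noise level ||y_delta - y_exact||
noise = rng.standard_normal(n)
y_delta = y_exact + delta * noise / np.linalg.norm(noise)

tau = 1.5    # discrepancy-principle constant tau > 1 (assumed value)
eta0 = 1.0   # base step size (assumed; the paper uses polynomially decaying steps)
x = np.zeros(d)

for k in range(1, 50_001):
    i = rng.integers(n)                          # draw one row index uniformly
    grad = (A[i] @ x - y_delta[i]) * A[i]        # gradient of 0.5 * (a_i^T x - y_i)^2
    x -= eta0 / np.sqrt(k) * grad                # decaying step size eta_k = eta0 / sqrt(k)
    # Discrepancy principle: stop at the first k with ||A x_k - y_delta|| <= tau * delta.
    if np.linalg.norm(A @ x - y_delta) <= tau * delta:
        print(f"stopped at k = {k}, reconstruction error {np.linalg.norm(x - x_true):.3e}")
        break
else:
    print("discrepancy principle not reached within the iteration budget")
```

Checking the full residual at every iteration is done here only for clarity; in large-scale settings one would monitor it every few epochs or estimate it from mini-batches.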
