On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
SIAM Journal on Optimization (IF 2.6) Pub Date: 2020-05-19, DOI: 10.1137/19m1271798
Bangti Jin , Zehui Zhou , Jun Zou

SIAM Journal on Optimization, Volume 30, Issue 2, Page 1421-1450, January 2020.
In this work, we analyze the regularizing property of stochastic gradient descent for the numerical solution of a class of nonlinear ill-posed inverse problems in Hilbert spaces. At each iteration, the method randomly selects one equation from the nonlinear system to obtain an unbiased stochastic estimate of the gradient, and then takes a descent step along that estimate. It is a randomized version of the classical Landweber method for nonlinear inverse problems; it scales well with the problem size and holds significant potential for solving large-scale inverse problems. Under the canonical tangential cone condition, we prove the regularizing property for a priori stopping rules, and we then establish convergence rates under a suitable sourcewise condition and a range invariance condition.
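The per-iteration update described in the abstract can be sketched as follows. This is a minimal illustration only: the toy forward maps F_i(x) = ⟨a_i, x⟩ + 0.5·tanh(⟨a_i, x⟩), the finite dimensions, the step size, and the noise level are assumptions for the sketch, not the paper's Hilbert-space setting.

```python
import numpy as np

# Hypothetical toy system F_i(x) = y_i, i = 1..n, with a mildly nonlinear
# forward map; rows a_i of A define the individual equations.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)

def forward(i, x):
    s = A[i] @ x
    return s + 0.5 * np.tanh(s)

def stoch_grad(i, x, yi):
    # Gradient of the single-equation loss 0.5 * (F_i(x) - y_i)^2 via the
    # chain rule: (F_i(x) - y_i) * F_i'(x), an unbiased estimate of the
    # full gradient when i is sampled uniformly.
    s = A[i] @ x
    return (forward(i, x) - yi) * (1.0 + 0.5 * (1.0 - np.tanh(s) ** 2)) * A[i]

delta = 1e-3                              # assumed noise level in the data
y = np.array([forward(i, x_true) for i in range(n)])
y += delta * rng.standard_normal(n)

def sgd(x0, steps, eta):
    """Randomly pick one equation per step and descend along its gradient."""
    x = x0.copy()
    for _ in range(steps):
        i = rng.integers(n)               # choose one equation uniformly
        x -= eta * stoch_grad(i, x, y[i]) # Landweber-type descent step
    return x                              # a priori rule: stop after `steps`

x0 = np.zeros(d)
x_hat = sgd(x0, steps=5000, eta=0.01)
```

Stopping after a fixed number of steps mirrors an a priori stopping rule, where the stopping index is chosen in advance as a function of the noise level δ rather than monitored at run time.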


Updated: 2020-07-23