Stochastic proximal gradient methods for nonconvex problems in Hilbert spaces
Computational Optimization and Applications (IF 1.6). Pub Date: 2021-01-12. DOI: 10.1007/s10589-020-00259-y
Caroline Geiersbach , Teresa Scarinci

For finite-dimensional problems, stochastic approximation methods have long been used to solve stochastic optimization problems. Their application to infinite-dimensional problems is less understood, particularly for nonconvex objectives. This paper presents convergence results for the stochastic proximal gradient method in Hilbert spaces, motivated by optimization problems with partial differential equation (PDE) constraints whose inputs and coefficients are random. We study stochastic algorithms for nonconvex and nonsmooth problems, where the nonsmooth part is convex and the nonconvex part is an expectation, which is assumed to have a Lipschitz continuous gradient. The optimization variable is an element of a Hilbert space. We show almost sure convergence of strong limit points of the random sequence generated by the algorithm to stationary points. We demonstrate the stochastic proximal gradient algorithm on a tracking-type functional with an \(L^1\)-penalty term, constrained by a semilinear PDE and box constraints, where input terms and coefficients are subject to uncertainty. We verify conditions ensuring convergence of the algorithm and show a simulation.
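The iteration the abstract describes alternates a stochastic gradient step on the smooth (expectation) part with a proximal step on the convex nonsmooth part. Below is a minimal finite-dimensional sketch of that scheme on a toy problem with an \(L^1\)-penalty, whose proximal operator is soft-thresholding; the paper's actual setting is infinite-dimensional and PDE-constrained, and all names and the toy objective here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def stochastic_prox_grad(grad_sample, prox, x0, steps, stepsize):
    # x_{n+1} = prox_{t_n}( x_n - t_n * G(x_n, xi_n) ), with G a stochastic
    # gradient of the smooth part and t_n a decreasing (Robbins-Monro) step.
    x = x0.copy()
    for n in range(steps):
        t = stepsize(n)
        x = prox(x - t * grad_sample(x), t)
    return x

# Toy problem: minimize E[ 0.5*||x - (c + xi)||^2 ] + lam*||x||_1,
# with xi ~ N(0, sigma^2 I). The minimizer is soft_threshold(c, lam).
rng = np.random.default_rng(0)
c = np.array([2.0, -0.5, 0.05])
lam, sigma = 0.3, 0.1

grad_sample = lambda x: x - (c + sigma * rng.standard_normal(c.shape))
prox = lambda v, t: soft_threshold(v, t * lam)

x_star = soft_threshold(c, lam)   # exact minimizer: [1.7, -0.2, 0.0]
x_hat = stochastic_prox_grad(grad_sample, prox, np.zeros(3),
                             steps=5000, stepsize=lambda n: 1.0 / (n + 10))
```

With the decreasing step sizes, the noise in the sampled gradients averages out and `x_hat` settles near the exact minimizer `x_star`, including the zero component produced by the penalty.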




Updated: 2021-01-13