Stochastic Three Points Method for Unconstrained Smooth Minimization
SIAM Journal on Optimization (IF 2.6), Pub Date: 2020-10-01, DOI: 10.1137/19m1244378
El Houcine Bergou, Eduard Gorbunov, Peter Richtárik

SIAM Journal on Optimization, Volume 30, Issue 4, Page 2726-2749, January 2020.
In this paper we consider the unconstrained minimization problem of a smooth function in $\mathbb{R}^n$ in a setting where only function evaluations are possible. We design a novel randomized derivative-free algorithm---the stochastic three points (STP) method---and analyze its iteration complexity. At each iteration, STP generates a random search direction according to a certain fixed probability law. Our assumptions on this law are very mild: roughly speaking, any law which does not concentrate all measure on a halfspace passing through the origin will work. For instance, we allow the uniform distribution on the sphere and also distributions that concentrate all measure on a positive spanning set. Although our approach is designed not to use derivatives explicitly, it covers some first-order methods. For instance, if the probability law is chosen to be the Dirac distribution concentrated on the sign of the gradient, then STP recovers the signed gradient descent method. If the probability law is the uniform distribution on the coordinates of the gradient, then STP recovers the randomized coordinate descent method. The complexity of STP depends on the probability law via a simple characteristic closely related to the cosine measure, which is used in the analysis of deterministic direct search (DDS) methods. Unlike in DDS, where $O(n)$ function evaluations ($n$ is the dimension of $x$) must be performed in each iteration in the worst case, our method requires only two new function evaluations per iteration. Consequently, while the complexity of DDS depends quadratically on $n$, that of our method depends only linearly on $n$.
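
To make the iteration described in the abstract concrete, the following is a minimal Python sketch (not taken from the paper). The function name stp, the step-size schedule alpha0/sqrt(k+1), and the uniform-on-the-sphere direction law are illustrative assumptions; the abstract only specifies that each iteration draws a random direction from a fixed law, evaluates the objective at two trial points, and keeps the best of the three candidates.

import numpy as np

def stp(f, x0, alpha0=0.1, iters=1000, seed=0):
    # Sketch of a stochastic-three-points-style iteration: sample a random
    # direction, evaluate f at x + alpha*s and x - alpha*s, and keep the best
    # of the three points. The step-size schedule and the sampling law below
    # are assumptions, not the paper's prescribed choices.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)  # cached value, so each iteration adds only two new evaluations
    for k in range(iters):
        s = rng.standard_normal(x.shape)
        s /= np.linalg.norm(s)  # uniform direction on the unit sphere (one allowed law)
        alpha = alpha0 / np.sqrt(k + 1)
        candidates = [(fx, x),
                      (f(x + alpha * s), x + alpha * s),
                      (f(x - alpha * s), x - alpha * s)]
        fx, x = min(candidates, key=lambda t: t[0])
    return x, fx

For example, stp(lambda x: float(np.sum(x**2)), np.ones(10)) drives a simple quadratic toward its minimizer at the origin using only function values.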


Updated: 2020-11-13