A derivative-free optimization algorithm for the efficient minimization of functions obtained via statistical averaging
Computational Optimization and Applications (IF 1.6) Pub Date: 2020-02-04, DOI: 10.1007/s10589-020-00172-4
Pooriya Beyhaghi , Ryan Alimo , Thomas Bewley

This paper considers the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of design parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in engineering applications. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. The present paper proposes a new optimization algorithm to adjust the amount of sampling associated with each function evaluation, making function evaluations more accurate (and, thus, more expensive), as required, as convergence is approached. The work builds on our algorithm for Delaunay-based Derivative-free Optimization via Global Surrogates (\({\varDelta}\)-DOGS, see JOGO https://doi.org/10.1007/s10898-015-0384-2). The new algorithm, dubbed \(\alpha\)-DOGS, substantially reduces the overall cost of the optimization process for problems of this important class. Further, under certain well-defined conditions, rigorous proof of convergence to the global minimum of the problem considered is established.
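The abstract's central idea, that each function evaluation carries a quantifiable uncertainty which shrinks with additional sampling, can be illustrated with a minimal sketch. This is not the \(\alpha\)-DOGS algorithm itself; it only demonstrates the underlying mechanism of adaptively refining a sample average until its standard error falls below a tolerance. The quadratic objective, the Gaussian noise model, and all names below are hypothetical stand-ins for the expensive stationary ergodic process considered in the paper.

```python
import math
import random


def sample_objective(x, n, rng):
    """Draw n noisy observations of the objective at design point x.
    Hypothetical stand-in process: true objective (x - 1)^2 corrupted
    by zero-mean Gaussian noise of standard deviation 0.5."""
    return [(x - 1.0) ** 2 + rng.gauss(0.0, 0.5) for _ in range(n)]


def estimate_with_uncertainty(x, tol, rng, n0=16, n_max=4096):
    """Refine the sample average at x, doubling the sample count until
    the standard error of the mean drops below tol (or n_max is hit).
    Returns (estimate, standard error, samples used)."""
    samples = sample_objective(x, n0, rng)
    while True:
        n = len(samples)
        mean = sum(samples) / n
        var = sum((s - mean) ** 2 for s in samples) / (n - 1)
        stderr = math.sqrt(var / n)
        if stderr <= tol or n >= n_max:
            return mean, stderr, n
        # Evaluation is still too uncertain: double the sampling effort.
        samples += sample_objective(x, n, rng)


rng = random.Random(0)
mean, stderr, n = estimate_with_uncertainty(0.0, tol=0.02, rng=rng)
print(f"estimate {mean:.3f} +/- {stderr:.3f} using {n} samples")
```

In an optimization loop of the kind the paper describes, the tolerance `tol` would be tightened only for promising design points as convergence is approached, so that cheap, coarse estimates suffice far from the optimum and expensive, accurate ones are reserved for candidates near it.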
