Modelling Evolutionary Algorithms with Stochastic Differential Equations
Evolutionary Computation ( IF 4.6 ) Pub Date : 2018-12-01 , DOI: 10.1162/evco_a_00216
Jorge Pérez Heredia 1
There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) with more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis is greatly facilitated by well-established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains, which yield rigorous but often unwieldy expressions. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that SDEs are especially suitable for the analysis of fixed-budget scenarios, and we present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new, more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics, namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
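To illustrate the fixed-budget perspective the abstract describes, the following sketch (not taken from the paper; parameter choices and the RLS-on-OneMax setting are illustrative assumptions) compares a simulated RLS trajectory against the deterministic prediction obtained from multiplicative drift. For RLS on OneMax, the number of zero-bits X_t decreases by one with probability X_t/n per step, so E[X_t] = n(1 − 1/n)^t ≈ n·e^(−t/n), which is exactly the solution of the corresponding ODE dx/dt = −x/n; the SDE approach additionally models the fluctuations around this curve.

```python
import math
import random


def rls_onemax_zeros(n, steps, seed=0):
    """Random Local Search on OneMax, tracking X_t = number of zero bits.

    RLS flips one uniformly random bit per step and accepts the flip only
    if fitness does not decrease (i.e., only 0 -> 1 flips are kept).
    """
    rng = random.Random(seed)
    bits = [0] * n          # start at the all-zeros string: X_0 = n
    zeros = n
    trajectory = [zeros]
    for _ in range(steps):
        i = rng.randrange(n)
        if bits[i] == 0:    # flipping a 0 improves OneMax; accept
            bits[i] = 1
            zeros -= 1
        trajectory.append(zeros)
    return trajectory


n, budget = 100, 400
traj = rls_onemax_zeros(n, budget)

# ODE / multiplicative-drift prediction for the same fixed budget:
# E[X_t] ~ n * exp(-t / n)
prediction = [n * math.exp(-t / n) for t in range(budget + 1)]

print("simulated X_T:", traj[-1])
print("predicted E[X_T]: %.2f" % prediction[-1])
```

A single run will scatter around the predicted curve; averaging many independent runs makes the agreement with n·e^(−t/n) apparent, while the run-to-run spread is what the SDE's diffusion term is meant to capture.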

Updated: 2018-12-01