Self-Adjusting Evolutionary Algorithms for Multimodal Optimization
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-04-07, DOI: arxiv-2004.03266
Amirhossein Rajabi and Carsten Witt

Recent theoretical research has shown that self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms for binary search spaces. However, the vast majority of these studies focus on unimodal functions, which do not require the algorithm to flip several bits simultaneously to make progress. In fact, existing self-adjusting algorithms are not designed to detect local optima and offer no obvious benefit for crossing large Hamming gaps. We propose a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms (both with and without prior self-adjusting mechanisms). Added to a simple (1+1) EA, we prove an expected runtime on the well-known Jump benchmark that corresponds to an asymptotically optimal parameter setting and outperforms other mechanisms for multimodal optimization such as heavy-tailed mutation. We also investigate the module in the context of a self-adjusting (1+$\lambda$) EA and show that it combines the known benefits of this algorithm on unimodal problems with more efficient multimodal optimization. To explore the limitations of the approach, we additionally present an example where both self-adjusting mechanisms, including stagnation detection, fail to find a beneficial setting of the mutation rate. Finally, we investigate our stagnation-detection module experimentally.
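To make the idea concrete, the following is a minimal sketch of a (1+1) EA with a stagnation-detection module on the Jump benchmark, written from the abstract's description alone: after too many unsuccessful steps at the current mutation strength, the strength is increased so that larger Hamming gaps can be crossed. The threshold $2(en/r)^r \ln n$ and the strict-improvement acceptance rule are plausible assumptions for illustration, not details taken from the paper.

```python
import math
import random

def jump(x, k):
    """Jump_k benchmark: OneMax shifted up by k, with a fitness valley
    of width k just below the all-ones optimum (optimum value n + k)."""
    n = len(x)
    ones = sum(x)
    if ones == n or ones <= n - k:
        return k + ones
    return n - ones  # inside the valley: fitness decreases toward the optimum

def one_plus_one_ea_sd(n, k, max_iters=200_000, seed=0):
    """(1+1) EA with a stagnation-detection module (illustrative sketch).

    r is the mutation strength: standard bit mutation flips each bit with
    probability r/n. If no improvement occurs for roughly 2*(e*n/r)^r*ln(n)
    steps (an assumed threshold), the current strength is deemed exhausted
    and r is increased; any improvement resets r to 1.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = jump(x, k)
    r = 1  # current mutation strength
    u = 0  # unsuccessful steps since the last improvement or rate change
    for _ in range(max_iters):
        # standard bit mutation with rate r/n
        y = [b ^ (rng.random() < r / n) for b in x]
        fy = jump(y, k)
        if fy > fx:          # strict improvement: accept and reset detection
            x, fx = y, fy
            r, u = 1, 0
        else:
            u += 1
            if u > 2 * (math.e * n / r) ** r * math.log(n):
                r = min(r + 1, n // 2)  # stagnation detected: raise strength
                u = 0
        if fx == n + k:      # global optimum reached
            break
    return fx, r

best_fitness, final_r = one_plus_one_ea_sd(n=10, k=3)
```

With static strength r = 1 the valley of width k is crossed only with probability about n^{-k} per step, whereas the module raises r until a k-bit flip becomes likely, which is the behavior the abstract's runtime bound on Jump captures.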

Updated: 2020-06-03