Parallel Black-Box Complexity with Tail Bounds
IEEE Transactions on Evolutionary Computation (IF 11.7) Pub Date: 2020-12-01, DOI: 10.1109/tevc.2019.2954234
Per Kristian Lehre, Dirk Sudholt

We propose a new black-box complexity model for search algorithms evaluating $\lambda$ search points in parallel. The parallel unary unbiased black-box complexity gives lower bounds on the number of function evaluations every parallel unary unbiased black-box algorithm needs to optimize a given problem. It captures the inertia caused by offspring populations in evolutionary algorithms and the total computational effort in parallel metaheuristics. We present complexity results for LeadingOnes and OneMax. Our main result is a general performance limit: we prove that, on every function, every $\lambda$-parallel unary unbiased algorithm needs at least a certain number of evaluations (a function of the problem size and $\lambda$) to find any desired target set of up to exponential size, with overwhelming probability. This yields lower bounds for the typical optimization time on unimodal and multimodal problems, for the time to find any local optimum, and for the time to even get close to any optimum. The power and versatility of this approach are shown for a wide range of illustrative problems from combinatorial optimization. Our performance limits can guide parameter choice and algorithm design; we demonstrate the latter by presenting an optimal $\lambda$-parallel algorithm for OneMax that uses parallelism most effectively.

This article significantly extends preliminary results which appeared in [1].
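To make the model concrete, the following is a minimal sketch (not taken from the article) of one algorithm in the class it covers: a (1+λ) evolutionary algorithm on OneMax whose only variation operator is standard bit mutation, a unary unbiased operator. The model charges λ function evaluations per generation, and the total evaluation count returned below is the quantity that the parallel unary unbiased black-box complexity bounds from below; all function names and parameters are illustrative.

# Illustrative sketch (not from the article): a (1+lambda) EA on OneMax.
# Standard bit mutation is its only variation operator, and it is unary
# (uses one parent) and unbiased (treats bit values and positions
# symmetrically), so this algorithm is lambda-parallel unary unbiased.

import random

def one_max(x):
    """Number of 1-bits; the fitness function to be maximized."""
    return sum(x)

def one_plus_lambda_ea(n, lam, seed=0):
    """Run a (1+lambda) EA on OneMax; return the total number of evaluations."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    parent_fitness = one_max(parent)
    evaluations = 1
    while parent_fitness < n:
        # Create lambda offspring by standard bit mutation: flip each bit
        # independently with probability 1/n.
        best_child, best_fitness = None, -1
        for _ in range(lam):
            child = [1 - b if rng.random() < 1.0 / n else b for b in parent]
            f = one_max(child)
            if f > best_fitness:
                best_child, best_fitness = child, f
        evaluations += lam  # the complexity model counts every evaluation
        if best_fitness >= parent_fitness:
            parent, parent_fitness = best_child, best_fitness
    return evaluations

if __name__ == "__main__":
    print(one_plus_lambda_ea(n=100, lam=8))

Any algorithm of this form, however it selects parents or chooses its mutation strength, is subject to the lower bounds described in the abstract.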



