The Complex Parameter Landscape of the Compact Genetic Algorithm
Algorithmica (IF 1.1), Pub Date: 2020-11-04, DOI: 10.1007/s00453-020-00778-4
Johannes Lengler , Dirk Sudholt , Carsten Witt

The compact Genetic Algorithm (cGA) evolves a probability distribution favoring optimal solutions in the underlying search space by repeatedly sampling from the distribution and updating it according to promising samples. We study the intricate dynamics of the cGA on the test function OneMax, and how its performance depends on the hypothetical population size $K$, which determines how quickly decisions about promising bit values are fixated in the probabilistic model. It is known that the cGA and the Univariate Marginal Distribution Algorithm (UMDA), a related algorithm whose population size is called $\lambda$, run in expected time $O(n \log n)$ when the population size is just large enough ($K = \Theta(\sqrt{n}\log n)$ and $\lambda = \Theta(\sqrt{n}\log n)$, respectively) to avoid wrong decisions being fixated. The UMDA also shows the same performance in a very different regime ($\lambda = \Theta(\log n)$, equivalent to $K = \Theta(\log n)$ in the cGA) with much smaller population size, but for very different reasons: many wrong decisions are fixated initially, but then reverted efficiently. If the population size is even smaller ($o(\log n)$), the time is exponential. We show that population sizes in between the two optimal regimes are worse as they yield larger runtimes: we prove a lower bound of $\Omega(K^{1/3}n + n \log n)$ for the cGA on OneMax for $K = O(\sqrt{n}/\log^2 n)$. For $K = \Omega(\log^3 n)$ the runtime increases with growing $K$ before dropping again to $O(K\sqrt{n} + n \log n)$ for $K = \Omega(\sqrt{n}\log n)$. This suggests that the expected runtime for the cGA is a bimodal function in $K$ with two very different optimal regions and worse performance in between.
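To make the sampling-and-update loop concrete, below is a minimal Python sketch of the cGA on OneMax; it is an illustration, not code from the paper. In each iteration two bit strings are sampled from the current product distribution, and every frequency is shifted by 1/K towards the fitter sample at positions where the two samples differ, capped at the usual borders 1/n and 1 - 1/n. The stopping rule (all frequencies at the upper border) and the max_iters safeguard are simplifications added here for illustration.

import random

def onemax(x):
    # OneMax fitness: number of 1-bits in the bit string.
    return sum(x)

def cga(n, K, max_iters=10**6):
    # Sketch of the compact Genetic Algorithm on OneMax.
    # n: number of bits; K: hypothetical population size (update step size 1/K).
    p = [0.5] * n  # marginal probability of sampling a 1 at each position
    for t in range(max_iters):
        # Sample two offspring from the current product distribution.
        x = [1 if random.random() < p[i] else 0 for i in range(n)]
        y = [1 if random.random() < p[i] else 0 for i in range(n)]
        # Let x be the fitter of the two samples (the "winner").
        if onemax(x) < onemax(y):
            x, y = y, x
        # Shift each frequency by 1/K towards the winner where the samples differ,
        # restricted to the borders [1/n, 1 - 1/n].
        for i in range(n):
            if x[i] != y[i]:
                p[i] += (1.0 / K) if x[i] == 1 else (-1.0 / K)
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))
        # Simplified stopping rule: all frequencies at the upper border.
        if all(pi >= 1 - 1 / n for pi in p):
            return t + 1
    return max_iters

if __name__ == "__main__":
    n = 100
    K = int(10 * n ** 0.5)  # roughly the sqrt(n) log n regime for this n
    print("iterations until all frequencies reach the upper border:", cga(n, K))

The step size 1/K is why K governs the speed of fixation: larger K means smaller updates, so decisions about bit values are fixated more slowly. This is exactly the parameter whose runtime landscape the paper analyses.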

Updated: 2020-11-04