Simple Hyper-heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes*
Evolutionary Computation (IF 4.6), Pub Date: 2020-09-01, DOI: 10.1162/evco_a_00258
Andrei Lissovoi, Pietro S. Oliveto, John Alasdair Warwicker

Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics from a set of low-level heuristics during the optimisation process. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the first three HHs do not attempt to learn from the past performance of the low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it embeds a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the "simple" Random Gradient HH so that success is measured over a fixed period of τ iterations, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to k low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the k heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to n = 10^8) and shed some light on the best choices for the parameter τ in various situations.
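As a rough illustration of the setup described in the abstract, the Python sketch below puts together LeadingOnes, k-bit-flip Randomised Local Search heuristics and a Generalised Random Gradient style selection rule that keeps the chosen heuristic as long as it produces an improvement within the last τ iterations. This is a minimal reading of the abstract, not the authors' implementation: the function names, the acceptance of equal-fitness moves and the iteration budget max_iters are illustrative assumptions.

    import random

    def leading_ones(x):
        # LeadingOnes: number of consecutive 1-bits counted from the left.
        count = 0
        for bit in x:
            if bit != 1:
                break
            count += 1
        return count

    def flip_k_bits(x, k):
        # Low-level heuristic: Randomised Local Search variant flipping exactly k distinct bits.
        y = list(x)
        for i in random.sample(range(len(y)), k):
            y[i] = 1 - y[i]
        return y

    def generalised_random_gradient(n, k_max, tau, max_iters=1_000_000):
        # Sketch of the Generalised Random Gradient selection rule: a heuristic chosen
        # uniformly at random is kept as long as it achieves a fitness improvement
        # within the last tau iterations; otherwise a new heuristic is drawn.
        x = [random.randint(0, 1) for _ in range(n)]
        fx = leading_ones(x)
        iters = 0
        while fx < n and iters < max_iters:
            k = random.randint(1, k_max)          # choose a low-level heuristic at random
            steps_since_improvement = 0
            while steps_since_improvement < tau and fx < n and iters < max_iters:
                y = flip_k_bits(x, k)
                fy = leading_ones(y)
                iters += 1
                if fy > fx:
                    x, fx = y, fy
                    steps_since_improvement = 0   # success: keep the current heuristic
                else:
                    if fy == fx:
                        x = y                     # assumed RLS-style acceptance of equal fitness
                    steps_since_improvement += 1
        return x, fx, iters

    if __name__ == "__main__":
        best, fitness, used = generalised_random_gradient(n=100, k_max=2, tau=50)
        print(fitness, used)

Setting tau = 1 roughly recovers the plain Random Gradient behaviour of keeping a heuristic only while its most recent step was successful, which is the mechanism the abstract argues is too short-sighted to learn on LeadingOnes.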

Updated: 2020-09-01