Tackling Large-Scale and Combinatorial Bi-level Problems with a Genetic Programming Hyper-heuristic
IEEE Transactions on Evolutionary Computation (IF 11.7), Pub Date: 2020-02-01, DOI: 10.1109/tevc.2019.2906581
Emmanuel Kieffer, Gregoire Danoy, Matthias R. Brust, Pascal Bouvry, Anass Nagih

Combinatorial bi-level optimization remains a challenging topic, especially when the lower level is an ${\mathcal {NP}}$-hard problem. In this paper, we tackle large-scale and combinatorial bi-level problems using genetic programming (GP) hyper-heuristics, i.e., an approach that makes it possible to train heuristics in the manner of a machine learning model. Our contribution targets the intensive and complex lower-level optimizations that arise when solving a large-scale and combinatorial bi-level problem. For this purpose, we consider hyper-heuristics based on heuristic generation. Using a GP hyper-heuristic approach, we train greedy heuristics so that they remain reliable on unseen lower-level instances that may be generated during bi-level optimization. To validate our approach, referred to as GA+AGH, we tackle instances of the bi-level cloud pricing optimization problem (BCPOP), which models the trading interactions between a cloud service provider and cloud service customers. Numerical results demonstrate the ability of the trained heuristics to cope with the inherent nested structure that makes bi-level optimization problems so hard. Furthermore, the results show that training heuristics for lower-level optimization makes it possible to outperform human-designed heuristics and metaheuristics, which is an excellent outcome for bi-level optimization.
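The sketch below is a minimal, self-contained illustration of the heuristic-generation idea described in the abstract, not the paper's GA+AGH implementation: a greedy lower-level solver is parameterized by a scoring rule, and "training" simply means selecting the rule that performs best across a set of training instances. The toy knapsack-style lower level, the hand-written candidate scoring rules (stand-ins for GP-evolved expression trees), and all function names are illustrative assumptions; in the paper the lower level would be BCPOP and the rules would be evolved by genetic programming.

```python
import random
from typing import Callable, List, Tuple

# Toy lower-level instance: items described by (value, cost) features.
Instance = List[Tuple[float, float]]
# A scoring rule maps item features to a priority score (the "generated heuristic").
ScoreFn = Callable[[float, float], float]


def greedy_lower_level(instance: Instance, budget: float, score: ScoreFn) -> float:
    """Constructive greedy heuristic: take items by descending score while the budget allows."""
    chosen_value, remaining = 0.0, budget
    for value, cost in sorted(instance, key=lambda item: score(*item), reverse=True):
        if cost <= remaining:
            chosen_value += value
            remaining -= cost
    return chosen_value


def train_heuristic(candidates: List[ScoreFn],
                    training_set: List[Tuple[Instance, float]]) -> ScoreFn:
    """Keep the scoring rule with the best average lower-level objective on the training set."""
    def fitness(score: ScoreFn) -> float:
        return sum(greedy_lower_level(inst, budget, score)
                   for inst, budget in training_set) / len(training_set)
    return max(candidates, key=fitness)


if __name__ == "__main__":
    random.seed(0)
    training = [([(random.random(), random.random() + 0.1) for _ in range(50)], 5.0)
                for _ in range(20)]
    # Hand-written stand-ins for GP-generated scoring expressions over item features.
    candidate_rules = [
        lambda v, c: v,            # highest value first
        lambda v, c: -c,           # cheapest first
        lambda v, c: v / c,        # value density
        lambda v, c: v - 0.5 * c,  # linear value/cost trade-off
    ]
    best = train_heuristic(candidate_rules, training)
    avg = sum(greedy_lower_level(inst, b, best) for inst, b in training) / len(training)
    print("average objective of selected rule:", avg)
```

In the paper's setting, the candidate pool would be an evolving GP population rather than a fixed list, and the trained rule would then be embedded inside the upper-level search to replace repeated exact lower-level optimization.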

Updated: 2020-02-01