A Large-Scale Experimental Evaluation of High-Performing Multi- and Many-Objective Evolutionary Algorithms
Evolutionary Computation (IF 4.6). Pub Date: 2018-12-01. DOI: 10.1162/evco_a_00217
Leonardo C. T. Bezerra, Manuel López-Ibáñez, Thomas Stützle
Research on multi-objective evolutionary algorithms (MOEAs) has produced over the past decades a large number of algorithms and a rich literature on performance assessment tools to evaluate and compare them. Yet, newly proposed MOEAs are typically compared against only a few MOEAs, often ones proposed a decade earlier. One reason for this apparent contradiction is the lack of a common baseline for comparison, with each subsequent study often devising its own experimental scenario, slightly different from those of other studies. As a result, the state of the art in MOEAs is a disputed topic. This article reports a systematic, comprehensive evaluation of a large number of MOEAs that covers a wide range of experimental scenarios. A novelty of this study is the separation between the higher-level algorithmic components related to multi-objective optimization (MO), which characterize each particular MOEA, and the underlying parameters (such as evolutionary operators and population size) whose configuration may be tuned for each scenario. Instead of relying on a common or "default" parameter configuration that may be low-performing for particular MOEAs or scenarios and unintentionally biased, we tune the parameters of each MOEA for each scenario using automatic algorithm configuration methods. Our results confirm some of the assumed knowledge in the field, while at the same time providing new insights into the relative performance of MOEAs on many-objective problems. For example, under certain conditions, indicator-based MOEAs are more competitive on such problems than previously assumed. We also analyze problem-specific features affecting performance, the agreement between performance metrics, and the improvement of tuned configurations over the default configurations used in the literature. Finally, the data produced are made publicly available to motivate further analysis and to serve as a baseline for future comparisons.
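The performance assessment the abstract refers to typically relies on quality indicators; a commonly used one for indicator-based comparisons is the hypervolume. As an illustration only (this code is not from the paper), a minimal sketch of the hypervolume for a two-objective minimization problem, computed as the area dominated by the front relative to a reference point:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective (minimization) front w.r.t. a reference point.

    front: iterable of (f1, f2) objective vectors
    ref:   reference point (r1, r2); only points that strictly dominate it count
    """
    # Keep points that strictly dominate the reference point, sorted by f1.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip points dominated by an earlier (smaller-f1) point
            hv += (ref[0] - f1) * (prev_f2 - f2)  # add the new rectangular slab
            prev_f2 = f2
    return hv

# Example: three mutually non-dominated points against reference (4, 4)
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))  # 6.0
```

The sweep over f1-sorted points generalizes poorly beyond two objectives; for many-objective problems, as studied in the article, dedicated algorithms or approximations are used instead.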

Updated: 2018-12-01