Interactive Decomposition Multiobjective Optimization Via Progressively Learned Value Functions
IEEE Transactions on Fuzzy Systems (IF 10.7), Pub Date: 11-12-2018, DOI: 10.1109/tfuzz.2018.2880700
Ke Li , Renzhi Chen , Dragan Savic , Xin Yao

Decomposition has become an increasingly popular technique for evolutionary multiobjective optimization (EMO). A decomposition-based EMO algorithm is usually designed to approximate the whole Pareto-optimal front (PF). However, in practice, a decision maker (DM) might only be interested in her/his region of interest (ROI), i.e., a part of the PF. Solutions outside the ROI might be useless, or might even act as noise in the decision-making procedure. Furthermore, there is no guarantee that the preferred solutions will be found when solving many-objective problems. This paper develops an interactive framework for decomposition-based EMO algorithms that leads a DM to the preferred solutions of her/his choice. It consists of three modules: consultation, preference elicitation, and optimization. Specifically, every few generations the DM is asked to score a few candidate solutions in a consultation session. Thereafter, an approximated value function, which models the DM's preference information, is progressively learned from the DM's behavior. In the preference elicitation session, the preference information learned in the consultation module is translated into a form that can be used by a decomposition-based EMO algorithm, i.e., a set of reference points biased toward the ROI. The optimization module, which in principle can be any decomposition-based EMO algorithm, uses the biased reference points to guide its search process. Extensive experiments on benchmark problems with three to ten objectives demonstrate the effectiveness of the proposed method for finding the DM's preferred solutions.
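The three-module loop described in the abstract (consultation → preference elicitation → biased reference points) can be sketched in simplified form. This is an illustrative toy, not the paper's actual algorithm: the simulated DM, the single-best-solution preference estimate, and the linear pull toward the ROI are all placeholder assumptions standing in for the progressively learned value function and the paper's reference-point adjustment.

```python
import random

def consult_dm(candidates, hidden_weights):
    """Consultation: the DM scores a few candidate objective vectors.
    Here a simulated DM prefers small weighted sums; hidden_weights
    stand in for preferences unknown to the algorithm."""
    return [(c, -sum(w * f for w, f in zip(hidden_weights, c)))
            for c in candidates]

def elicit_preference(scored):
    """Preference elicitation (toy version): derive a rough ROI
    direction from the best-scored solution, a crude stand-in for the
    paper's progressively learned value function."""
    best, _ = max(scored, key=lambda s: s[1])
    total = sum(best) or 1.0
    return [f / total for f in best]

def bias_reference_points(ref_points, roi, pull=0.5):
    """Translate the preference into biased reference points that a
    decomposition-based EMO algorithm could use to focus its search."""
    return [[(1 - pull) * r + pull * t for r, t in zip(pt, roi)]
            for pt in ref_points]

random.seed(0)
# A few candidate solutions in a 3-objective space (minimization).
candidates = [[random.random() for _ in range(3)] for _ in range(5)]
hidden = [0.7, 0.2, 0.1]                      # the DM's latent priorities
scored = consult_dm(candidates, hidden)       # consultation session
roi = elicit_preference(scored)               # learned ROI direction
refs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
biased = bias_reference_points(refs, roi)     # input to the optimizer
```

With `pull=0.5` each uniformly spread reference point is moved halfway toward the estimated ROI direction; in the paper's framework this biasing happens repeatedly, so the search progressively concentrates on the DM's preferred region.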

Updated: 2024-08-22