Regularized Evolutionary Multitask Optimization: Learning to Intertask Transfer in Aligned Subspace
IEEE Transactions on Evolutionary Computation ( IF 11.7 ) Pub Date : 2020-09-11 , DOI: 10.1109/tevc.2020.3023480
Zedong Tang , Maoguo Gong , Yue Wu , Wenfeng Liu , Yu Xie

This article proposes a novel and computationally efficient explicit intertask information-transfer strategy between optimization tasks based on subspace alignment. In evolutionary multitasking, tasks may carry biases in their function landscapes and decision spaces, which often raises the threat of predominantly negative transfer. When properly harnessed, however, the complementary information among different tasks can enhance performance on complicated problems. In this article, we distill this insight into an intertask knowledge-transfer strategy implemented in low-dimensional subspaces via a learnable alignment matrix. Specifically, to unveil the significant features of the function landscapes, a task-specific low-dimensional subspace is established for each task from the distribution of its subpopulation. Next, the alignment matrix between each pair of subspaces is learned by minimizing the discrepancy between them. Given the aligned subspaces obtained by applying the alignment matrix to the subspaces' basis vectors, individuals from different tasks are projected into the aligned subspaces, and reproduction is carried out therein. Moreover, since this method considers only the leading eigenvectors, it is intrinsically regularized and insensitive to noise. Comprehensive experiments are conducted on synthetic and practical benchmark problems to assess the efficacy of the proposed method. The experimental results show that the proposed method outperforms existing evolutionary multitask optimization algorithms.
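The transfer pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes the task-specific subspaces are the leading principal components of each subpopulation, and that the alignment matrix minimizing the Frobenius-norm discrepancy between two orthonormal bases has the standard closed form M = B_s^T B_t (as in classical subspace alignment). All function names and the dimension choice are hypothetical.

```python
import numpy as np

def pca_basis(pop, d):
    """Leading-d eigenvectors (columns) of the subpopulation covariance.

    Keeping only the leading eigenvectors is what gives the method its
    regularized, noise-insensitive character.
    """
    centered = pop - pop.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # sort descending
    return eigvecs[:, order[:d]]

def align_and_transfer(pop_s, pop_t, d):
    """Project source individuals through aligned subspaces into the
    target task's decision space (illustrative sketch)."""
    B_s = pca_basis(pop_s, d)                    # source-task subspace
    B_t = pca_basis(pop_t, d)                    # target-task subspace
    # Closed-form alignment matrix minimizing ||B_s M - B_t||_F;
    # since B_s has orthonormal columns, M = B_s^T B_t.
    M = B_s.T @ B_t
    # Project centered source individuals into their subspace, align
    # to the target subspace, then reconstruct in the target space.
    z = (pop_s - pop_s.mean(axis=0)) @ B_s @ M
    return z @ B_t.T + pop_t.mean(axis=0)

# Toy usage: two 10-D subpopulations of 50 individuals, 3-D subspaces.
rng = np.random.default_rng(0)
pop_s = rng.normal(size=(50, 10))
pop_t = rng.normal(size=(50, 10)) + 2.0
transferred = align_and_transfer(pop_s, pop_t, d=3)
```

The transferred individuals land in the target task's decision space centered on the target subpopulation's mean, so they can participate directly in reproduction there; in an evolutionary multitasking loop this step would be applied per generation between each pair of tasks.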

Updated: 2020-09-11