Block-Level Knowledge Transfer for Evolutionary Multitask Optimization.
IEEE Transactions on Cybernetics (IF 11.8), Pub Date: 2023-05-22, DOI: 10.1109/tcyb.2023.3273625
Yi Jiang, Zhi-Hui Zhan, Kay Chen Tan, Jun Zhang

Evolutionary multitask optimization is an emerging research topic that aims to solve multiple tasks simultaneously. A general challenge in solving multitask optimization problems (MTOPs) is how to effectively transfer common knowledge between or among tasks. However, knowledge transfer in existing algorithms generally has two limitations. First, knowledge is transferred only between the aligned dimensions of different tasks rather than between similar or related dimensions. Second, knowledge transfer among related dimensions belonging to the same task is ignored. To overcome these two limitations, this article proposes an interesting and efficient idea that divides individuals into multiple blocks and transfers knowledge at the block level, called the block-level knowledge transfer (BLKT) framework. BLKT divides the individuals of all tasks into multiple blocks to obtain a block-based population, where each block corresponds to several consecutive dimensions. Similar blocks, whether they come from the same task or from different tasks, are grouped into the same cluster to evolve. In this way, BLKT enables knowledge transfer between similar dimensions regardless of whether they are originally aligned or unaligned, or belong to the same task or different tasks, which is more rational. Extensive experiments on the CEC17 and CEC22 MTOP benchmarks, a new and more challenging composite MTOP test suite, and real-world MTOPs all show that BLKT-based differential evolution (BLKT-DE) outperforms the compared state-of-the-art algorithms. In addition, another interesting finding is that BLKT-DE is also promising for solving single-task global optimization problems, achieving performance competitive with some state-of-the-art algorithms.
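The abstract describes BLKT procedurally: slice every individual into blocks of consecutive dimensions, pool and cluster similar blocks across all tasks, and let each cluster evolve together so knowledge moves between related dimensions even when they are unaligned or belong to different tasks. The Python sketch below illustrates that flow under stated assumptions only: a unified [0, 1] search space, equal task dimensionality, a fixed block size, k-means grouping, and a DE/rand/1-style operator are illustrative choices rather than the authors' exact design, and fitness evaluation and selection are omitted.

```python
# Minimal sketch of block-level knowledge transfer (BLKT), based on the abstract only.
# block_size, n_clusters, the k-means grouping, and the DE/rand/1 operator are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def split_into_blocks(pop, block_size):
    """Slice every individual into blocks of consecutive dimensions."""
    n_ind, dim = pop.shape
    n_blocks = dim // block_size
    return pop[:, :n_blocks * block_size].reshape(n_ind * n_blocks, block_size)

def de_rand_1(blocks, F=0.5):
    """DE/rand/1-style mutation applied within one cluster of blocks."""
    n = len(blocks)
    if n < 4:                       # too few members to form difference vectors
        return blocks.copy()
    idx = np.array([rng.choice(n, size=3, replace=False) for _ in range(n)])
    mutant = blocks[idx[:, 0]] + F * (blocks[idx[:, 1]] - blocks[idx[:, 2]])
    return np.clip(mutant, 0.0, 1.0)

def blkt_step(task_pops, block_size=5, n_clusters=4):
    """One block-level transfer step over the populations of all tasks."""
    # 1) Pool all individuals from all tasks and cut them into blocks.
    pooled = np.vstack(task_pops)                   # (total individuals, dim)
    blocks = split_into_blocks(pooled, block_size)  # block-based population
    # 2) Group similar blocks (from any task, any dimension range) into clusters.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(blocks)
    # 3) Evolve each cluster independently, so knowledge flows between similar blocks.
    new_blocks = blocks.copy()
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        new_blocks[members] = de_rand_1(blocks[members])
    # 4) Reassemble blocks into individuals and split the pool back by task.
    n_blocks = pooled.shape[1] // block_size
    pooled[:, :n_blocks * block_size] = new_blocks.reshape(len(pooled), n_blocks * block_size)
    return np.split(pooled, np.cumsum([len(p) for p in task_pops])[:-1])

# Two tasks, 20 individuals each, in a 20-dimensional unified space.
tasks = [rng.random((20, 20)), rng.random((20, 20))]
tasks = blkt_step(tasks)
print([p.shape for p in tasks])
```

Because each block is returned to its original position after evolving with its cluster, the step needs no explicit mapping between dimensions or tasks; similarity of block contents alone decides which parts of which individuals exchange information.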

Updated: 2023-05-22