Improving Intelligence of Evolutionary Algorithms Using Experience Share and Replay
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-08-10, DOI: arxiv-2009.08936
Majdi I. Radaideh, Koroush Shirvan

We propose PESA, a novel hybrid algorithm combining Particle Swarm Optimisation (PSO), Evolution Strategy (ES), and Simulated Annealing (SA), inspired by reinforcement learning. PESA hybridizes the three algorithms by storing their solutions in a shared replay memory. PESA then applies prioritized replay to redistribute the stored data among the three algorithms at frequent intervals, based on fitness and priority values, which significantly enhances sample diversity and algorithm exploration. Additionally, greedy replay is used implicitly within SA to improve PESA's exploitation near the end of the evolution. Validation against 12 high-dimensional continuous benchmark functions shows that PESA outperforms standalone ES, PSO, and SA under similar initial starting points, hyperparameters, and numbers of generations. PESA exhibits much better exploration behaviour, faster convergence, and a stronger ability to find the global optimum than its standalone counterparts. Given this promising performance, PESA can offer an efficient optimisation option, especially once it undergoes additional multiprocessing improvements to handle complex and expensive fitness functions.
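The abstract describes the replay mechanism only at a high level. The following is a minimal Python sketch of the shared replay-memory idea it outlines, assuming a minimization problem and rank-based priorities; the class and method names (ReplayMemory, store, prioritized_sample, greedy_sample) and the rank-weighting scheme are illustrative assumptions, not the authors' actual implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class ReplayMemory:
    """Shared buffer into which PSO, ES, and SA deposit (solution, fitness) pairs."""
    capacity: int = 500
    buffer: list = field(default_factory=list)

    def store(self, solution, fitness):
        """Add one evaluated solution; keep only the best `capacity` entries."""
        self.buffer.append((list(solution), fitness))
        # Minimization: sort ascending by fitness so index 0 is the best solution.
        self.buffer.sort(key=lambda pair: pair[1])
        del self.buffer[self.capacity:]

    def prioritized_sample(self, k, alpha=1.0):
        """Sample k entries with probability weighted toward fitter solutions.

        The buffer is kept sorted, so an entry's index is its fitness rank.
        Weighting by (1/rank)^alpha is an illustrative stand-in for the
        fitness/priority values described in the abstract.
        """
        if not self.buffer:
            return []
        weights = [(1.0 / (rank + 1)) ** alpha for rank in range(len(self.buffer))]
        return random.choices(self.buffer, weights=weights, k=min(k, len(self.buffer)))

    def greedy_sample(self, k):
        """Return the k best solutions seen so far (greedy replay)."""
        return self.buffer[:k]
```

In such a scheme, each generation PSO, ES, and SA would deposit their evaluated populations via store() and rebuild part of their next populations from prioritized_sample(), while SA would switch to greedy_sample() late in the run to sharpen exploitation, mirroring the prioritized/greedy replay split described above.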

Last updated: 2020-09-21