A fast and efficient stochastic opposition-based learning for differential evolution in numerical optimization
Swarm and Evolutionary Computation (IF 10) Pub Date: 2020-09-08, DOI: 10.1016/j.swevo.2020.100768
Tae Jong Choi, Julian Togelius, Yun-Gyung Cheong

A fast and efficient stochastic opposition-based learning (OBL) variant is proposed in this paper. OBL is a machine learning concept for accelerating the convergence of soft computing algorithms; it consists of evaluating an original solution and its opposite simultaneously. Recently, a stochastic OBL variant called BetaCOBL was proposed, which is capable of controlling the degree of opposite solutions, preserving useful information held by original solutions, and preventing wasted fitness evaluations. While it has shown outstanding performance compared to several state-of-the-art OBL variants, the high computational cost of BetaCOBL may hinder its application to cost-sensitive optimization problems. Also, because it assumes that the decision variables of a given problem are independent, BetaCOBL may be ineffective for optimizing non-separable problems. In this paper, we propose an improved BetaCOBL that mitigates these limitations. The proposed algorithm, called iBetaCOBL, reduces the computational cost from O(NP² · D) to O(NP · D), where NP and D denote the population size and the dimensionality, respectively, by using a linear-time diversity measure. In addition, the proposed algorithm preserves strongly dependent variables that are adjacent to each other by using a multiple exponential crossover. We used differential evolution (DE) variants to evaluate the performance of the proposed algorithm. The results of performance evaluations on a set of 58 test functions show the excellent performance of iBetaCOBL compared to ten state-of-the-art OBL variants, including BetaCOBL.
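In classic OBL, the opposite of a candidate x within bounds [a, b] is computed elementwise as x̃ = a + b − x, and the better of the pair is kept. The sketch below shows this classic computation in NumPy, plus a beta-weighted variant in the spirit of BetaCOBL's "degree of opposition"; the exact BetaCOBL formulation is not given in this abstract, so `beta_opposite` is an illustrative assumption, not the paper's method.

```python
import numpy as np

def opposite(x, lower, upper):
    """Classic opposition-based learning: reflect x within [lower, upper]."""
    return lower + upper - x

def beta_opposite(x, lower, upper, rng, a=2.0, b=2.0):
    """Hypothetical beta-weighted opposite (assumption, not BetaCOBL itself):
    per-dimension beta weights blend the candidate with its reflection,
    controlling how far the opposite solution moves from the original."""
    w = rng.beta(a, b, size=x.shape)
    return w * (lower + upper - x) + (1.0 - w) * x
```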
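The abstract attributes the O(NP² · D) to O(NP · D) reduction to a linear-time diversity measure but does not specify it. The generic sketch below contrasts a pairwise-distance measure, which costs O(NP² · D), with a centroid-based measure that needs only one pass over the population; neither is claimed to be the paper's exact measure.

```python
import numpy as np

def pairwise_diversity(pop):
    """O(NP^2 * D): mean Euclidean distance over all pairs of individuals."""
    diffs = pop[:, None, :] - pop[None, :, :]
    d = np.sqrt((diffs ** 2).sum(axis=-1))
    n = len(pop)
    return d.sum() / (n * (n - 1))  # diagonal entries are zero

def centroid_diversity(pop):
    """O(NP * D): mean distance to the population centroid."""
    centroid = pop.mean(axis=0)
    return np.linalg.norm(pop - centroid, axis=1).mean()
```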
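Exponential crossover, unlike DE's binomial crossover, copies one contiguous (circular) run of genes from the mutant into the trial vector, which is why it preserves interactions among adjacent variables. The "multiple" exponential crossover of iBetaCOBL presumably applies such segments more than once; the sketch below shows only the standard single-segment DE operator.

```python
import numpy as np

def exponential_crossover(target, mutant, cr, rng):
    """Standard DE exponential crossover: copies one contiguous (circular)
    block of genes from the mutant, so adjacent variables stay together."""
    d = len(target)
    trial = target.copy()
    j = rng.integers(d)              # random start position
    length = 0
    while True:                      # block length is geometric, at least 1
        trial[j] = mutant[j]
        j = (j + 1) % d
        length += 1
        if length >= d or rng.random() >= cr:
            break
    return trial

# Usage: trial = exponential_crossover(x, v, cr=0.9, rng=np.random.default_rng())
```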



Updated: 2020-09-08