An improved opposition based learning firefly algorithm with dragonfly algorithm for solving continuous optimization problems
Intelligent Data Analysis (IF 0.9), Pub Date: 2020-03-27, DOI: 10.3233/ida-194485
Mehdi Abedi , Farhad Soleimanian Gharehchopogh

Nowadays, the prevalence of continuous optimization problems has led researchers to develop a variety of methods for solving them. Metaheuristic algorithms are among the most popular and widely used of these methods. The Firefly Algorithm (FA) is a successful metaheuristic for solving continuous optimization problems; however, although it performs very well in local search, it is weak at finding solutions in global search. As a result, the algorithm tends to become trapped in local optima, and the balance between exploration and exploitation cannot be well maintained. In this paper, three approaches based on the processes of the Dragonfly Algorithm (DA) and the Opposition-Based Learning (OBL) method are proposed to improve the exploration, performance, efficiency, and information sharing of the FA and to prevent it from getting stuck in local optima. The first proposed method (FADA) uses the robust processes of the DA to improve the exploration, performance, and efficiency of the FA; the second proposed method (OFA) applies OBL to accelerate the convergence and exploration of the FA. Finally, the third approach, referred to in this paper as OFADA, hybridizes FADA with the OBL method to improve the convergence and accuracy of the FA. The three proposed methods were evaluated on benchmark functions with 2, 4, 10, and 30 dimensions. The results showed that the OFADA approach outperformed the other two proposed methods and the compared metaheuristic algorithms across the different dimensions. In addition, all three proposed methods produced better results than the other metaheuristic algorithms on low-dimensional functions. Although the performance of many metaheuristic algorithms degrades as the dimensionality of the functions increases, the three proposed methods, and the OFADA approach in particular, converged better toward the target on the higher-dimensional optimization functions than the other metaheuristic algorithms and showed high performance.
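The abstract builds on two standard components: the classical firefly attraction move, where firefly i drifts toward every brighter firefly j with attractiveness beta0 * exp(-gamma * r^2), and opposition-based learning, where the opposite of a point x in [lo, hi] is lo + hi - x and the better of each pair is retained. The sketch below is only an illustrative combination of these two textbook ingredients, not the authors' exact FADA/OFADA procedure; the shifted-sphere objective, bounds, population size, and parameter values are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Shifted sphere benchmark (illustrative stand-in for the paper's test
    # functions): minimum 0 at x = (1, ..., 1).
    return float(np.sum((x - 1.0) ** 2))

def opposite(pop, lo, hi):
    # Opposition-based learning (OBL): the opposite of x in [lo, hi] is lo + hi - x.
    return lo + hi - pop

def firefly_step(pop, fit, beta0=1.0, gamma=0.1, alpha=0.2):
    # Classical FA move: each firefly drifts toward every brighter one, with
    # attractiveness decaying as exp(-gamma * r^2), plus a small random kick.
    n, d = pop.shape
    new = pop.copy()
    for i in range(n):
        for j in range(n):
            if fit[j] < fit[i]:  # firefly j is brighter (lower objective)
                r2 = float(np.sum((pop[i] - pop[j]) ** 2))
                beta = beta0 * np.exp(-gamma * r2)
                new[i] += beta * (pop[j] - new[i]) + alpha * (rng.random(d) - 0.5)
    return new

lo, hi, n, dim = -5.0, 5.0, 20, 4
pop = rng.uniform(lo, hi, size=(n, dim))
history = []
for _ in range(50):
    fit = np.array([objective(x) for x in pop])
    history.append(fit.min())
    pop = np.clip(firefly_step(pop, fit), lo, hi)
    # OBL jump: pool the moved population with its opposite and keep the best n.
    both = np.vstack([pop, opposite(pop, lo, hi)])
    both_fit = np.array([objective(x) for x in both])
    pop = both[np.argsort(both_fit)[:n]]

best = min(objective(x) for x in pop)
```

Because the brightest firefly has no brighter neighbor it never moves, and the greedy OBL selection always retains it, so the best objective value is non-increasing over iterations. Note that with symmetric bounds and an unshifted sphere the opposite of x would be -x with the same objective value, which is why the demonstration uses a shifted optimum.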

Updated: 2020-03-27