Abstract

Particle swarm optimization (PSO) is a widely used optimization algorithm, yet it suffers from trapping in local optima and premature convergence. Many studies have proposed improvements to address these drawbacks, but most implement a single strategy for one problem or a fixed neighborhood structure during the whole search process. To further improve PSO performance, we introduce a simple but effective method, adaptive particle swarm optimization with Gaussian perturbation and mutation (AGMPSO), which consists of three strategies. Gaussian perturbation and mutation are incorporated to promote exploration and exploitation capability, while the adaptive strategy is introduced to ensure the dynamic application of the former two strategies, which guarantees a balance between searching ability and accuracy. Comparison experiments between the proposed AGMPSO and existing PSO variants on the 29 benchmark functions of the CEC 2017 test suite suggest that, despite its simple architecture, AGMPSO attains high convergence accuracy and significant robustness, as confirmed by the Wilcoxon rank sum test.

1. Introduction

Particle swarm optimization (PSO) is an evolutionary computing technique proposed by Kennedy and Eberhart in 1995 [1], originating from the simulation of predation and other behaviors of bird flocks and fish schools. Each candidate solution of an optimization problem is treated as a "particle" in the search space. The particle swarm algorithm randomly generates an initial swarm and gives each particle a random velocity. During the optimization process, the particles adjust their velocities and trajectories according to their own experience and that of their companions, so that the whole swarm gains the ability to fly toward better search areas. With few parameters and easy implementation, PSO has been widely used in many fields such as function optimization, neural network training, fuzzy system control, pattern recognition, and engineering applications. However, PSO still suffers from premature convergence and easily falls into local optima when tackling complex multimodal problems. To improve the solving ability of particle swarm optimization, researchers have proposed methods such as adjusting the parameters of the algorithm, including dynamic policies and adaptive methods for the inertia weight, learning factors, and social factors [2], neighborhood searching strategies to strengthen exploration of the neighborhood of the current population [3], information-sharing mechanisms to enhance population diversity and avoid premature convergence [4], and integrations with other algorithms, such as combinations of particle swarm optimization with immune algorithms, genetic algorithms, and artificial bee colony algorithms [5].

A variety of improved methods have been proposed to address the existing problems of PSO. The inertia weight and velocity parameters were dynamically adjusted according to the particle swarm's convergence state to accelerate convergence and balance the global and local search capabilities [6]. Alatas et al. [7] proposed a strategy of learning from outstanding individuals other than the optimal particles to adapt to high-dimensional problems. Zhao et al. [8] introduced biological principles to give the particles multi-crossover and swarm colonization behaviors. Liang et al. [9] implemented a neighborhood development strategy to improve the algorithm's search ability. Chen et al. [10] leveraged the optimal information of all other particles to update the velocity of each particle in different dimensions. A particle swarm algorithm with lifecycle and challenging behavior was proposed to preserve the particles' activity during evolution, which is beneficial to global-range search [11]. Frans and Engelbrecht [12] incorporated chaos into the particle movement process, so that the particle swarm alternates between chaos and stability and gradually approaches the best point. Tian [13] initialized the particle swarm in a chaotic manner to ensure that the particles are evenly distributed in the solution space, achieving better global search capability. Du et al. [14] and Munlin and Anantathanavit [15] proposed using multiple methods to realize the particles' evolution and improve the search capability of the algorithm. Kiran [16] improved the efficiency of the evolution with a new evolutionary mechanism, which increases the computational speed and is particularly advantageous for multimodal problems.

These methods improve the performance of PSO to a certain extent, but many flaws remain, such as high architectural complexity and low convergence speed. Solutions such as adaptive parameter control, perturbation, and mutation have been incorporated to address these problems.

Aiming to attain prominent PSO performance, prior studies introduced adaptive strategies to dynamically update the parameters of the algorithm, i.e., the inertia weight [17–21], velocity and position [22, 23], and ω, c1, and c2 of each particle [5, 24]. Wang et al. [25] and Li and Cheng [26] introduced mixed adaptive strategies to adjust the parameters in order to balance the search and convergence capabilities. Beyond adaptively updating the parameters, more complex adaptive strategies have been proposed. The particles are re-randomized based on detected changes of the objective value [27]. In order to adaptively maintain the social attribution of the swarm, inactive particles are removed based on the difference in fitness between the current particle and its best historical experience [28].

To help particles jump out of local optima and further improve global searching ability, multiple perturbation strategies have been introduced. A chaotic perturbation was incorporated into the PSO algorithm, improving particle diversity [29]. Mahmoodabadi et al. [30] utilized Cauchy perturbation and reverse learning to accelerate the particle swarm's convergence and escape local optimal solutions. Wang et al. [31] proposed nonuniform mutation and multistage perturbation of particles, which perturbs the optimal solution at different stages of evolution, thereby increasing swarm diversity and the probability of jumping out of local extreme points.

To increase the vitality and diversity of particles, mutation strategies have been implemented in many PSO variants. Pehlivanoglu [32] applied a mutation strategy combining global random diversity and local controlled diversity. Undesired particles are replaced following a mutation strategy to accelerate the convergence speed [33]. Large-scale and small-scale mutations are conducted to prevent premature convergence while maintaining convergence speed [34].

In contrast to prior studies, we propose an adaptive principle according to which the perturbation and mutation are conducted, balancing convergence accuracy and rate. Our main contributions are summarized as follows:

(1) An adaptive adjustment rule following the cosine law is incorporated, so that particles are perturbed with a larger amplitude in the early stage to improve the global search ability and with a smaller amplitude later to improve the convergence accuracy.

(2) Following the adaptive strategy, Gaussian perturbation is incorporated to push the optimal particle out of local optima.

(3) Likewise, according to the adaptive strategy, mutation is implemented to improve the diversity of particles whose evolution has stagnated and to balance the ratio of inheritance and mutation, ensuring the population's searching ability.

2. Adaptive Mutation Particle Swarm Optimization Algorithm with Gaussian Perturbation

2.1. Basic Particle Swarm Optimization

PSO first initializes a swarm. Each particle in the swarm represents a candidate solution in the search space, and each particle has two attributes: position and velocity. Assuming that the size of the current swarm P(t) is N, the position and velocity of the i-th particle are expressed as X_i(t) = (x_i1(t), x_i2(t), ..., x_iD(t)) and V_i(t) = (v_i1(t), v_i2(t), ..., v_iD(t)), where D is the dimension of the problem and t is the evolutionary generation. The particles are evaluated by a previously designed fitness function. Each particle updates its velocity and position through the swarm optimum and its individual historical optimum in the iterative process. The equations for updating the velocity and position of a particle are as follows:

v_id(t + 1) = ω·v_id(t) + c1·r1·(p_id(t) − x_id(t)) + c2·r2·(p_gd(t) − x_id(t)),   (1)
x_id(t + 1) = x_id(t) + v_id(t + 1),   (2)

where ω is the particle's inertia weight, which determines the degree of influence of the particle's previous velocity on the current velocity, c1 is the self-cognition learning coefficient, c2 is the social cognitive learning coefficient, p_id and p_gd are the d-th components of the personal best and global best positions, and r1 and r2 are random numbers between 0 and 1.
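For readers who prefer code, a minimal NumPy sketch of the update rule in equations (1) and (2) is given below; the parameter values and variable names are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One velocity/position update for a swarm of shape (N, D)."""
    n, d = x.shape
    r1 = np.random.rand(n, d)
    r2 = np.random.rand(n, d)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # equation (1)
    x_new = x + v_new                                              # equation (2)
    return x_new, v_new
```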

2.2. Task Definition

As the number of iterations increases in the standard PSO algorithm, the particles gradually approach the optimal solution, and the evolutionary rate and swarm diversity gradually decrease. Once the optimal particle falls into a local optimum, it can hardly escape. To address this problem, we incorporate the Gaussian perturbation and mutation strategies, where a threshold stop_num is set to decide whether the particles are in an evolutionary stagnation state. When the fitness of a particle ceases to improve, the number of consecutive stagnant iterations is recorded as a tag: tag(i) for the i-th particle and tag_g for the global best. If tag_g ≥ stop_num, the evolution of the population has stagnated and the Gaussian perturbation is applied to push the population out of the local optimum. If tag(i) ≥ stop_num, the evolution of this particle has stagnated, and the mutation is conducted to update the particle. Both perturbation and mutation are conducted following the proposed adaptive strategy, guaranteeing the balance of searching ability and accuracy. The process of AGMPSO is shown in Figure 1.
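As a rough sketch, the stagnation bookkeeping described above might be organized as follows; the counter names tag and tag_g and the helper function are our illustrative choices, not the authors' code.

```python
def update_stagnation(tag, tag_g, improved_particle, improved_global):
    """Track consecutive non-improving iterations per particle and for the swarm."""
    for i, improved in enumerate(improved_particle):
        tag[i] = 0 if improved else tag[i] + 1
    tag_g = 0 if improved_global else tag_g + 1
    return tag, tag_g

# Inside the main loop (STOP_NUM is the stagnation threshold):
#   if tag_g >= STOP_NUM:   apply the Gaussian perturbation to the global best
#   if tag[i] >= STOP_NUM:  apply the mutation to particle i
```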

2.2.1. Adaptive Strategy

In many previous studies, the amplitude of interference with the PSO parameters remains the same throughout the iterations, which is not beneficial for convergence in the later period. To reach the ideal state of PSO, a larger interference amplitude is required in the early iterative stage to ensure better global searching ability, and a smaller one in the late iterative stage to guarantee convergence. Hence, a dynamically adaptive strategy is necessary. In this study, we introduce a probability Pc that adapts following the cosine law as the iterations increase; in its defining equation, t is the current evolutionary iteration, Pc(t) (shown in Figure 2) is the probability of applying the Gaussian perturbation or mutation at iteration t, c3 is the adaptive coefficient, and Max_Gen is the maximum number of iterations.
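Since the exact expression is not reproduced here, the sketch below assumes one plausible cosine-law schedule, Pc(t) = c3·cos(πt / (2·Max_Gen)), purely for illustration; the authors' actual formula may differ in its constants.

```python
import math

def perturbation_probability(t, max_gen, c3=1.0):
    """Assumed cosine-law schedule: large early (global search), small late (convergence)."""
    return c3 * math.cos(math.pi * t / (2 * max_gen))
```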

2.2.2. Gaussian Perturbation Strategy

Gaussian perturbation is applied to the global best particle when it is in the evolutionary stagnation state, to improve its ability to jump out of the local optimum. To ensure that every dimension of the global best has a chance of escaping the local optimum, each dimension is updated by Gaussian perturbation with probability Pc(t), which makes the perturbation adaptive: it provides a better ability to escape local optima in the early stages and better convergence during the later stages.

In addition to adapting to the algorithm's stages, the Gaussian perturbation strategy, via the adaptive variance δ, adapts the proposed PSO algorithm to different functions according to their value spaces (as shown in equation (4)).

When the evolution of the global best particle is stagnant, that is, tag_g ≥ stop_num, the perturbation is applied to each dimension gbest_d of the optimal particle using a random number r between 0 and 1 and the adaptive variance δ_d of the d-th dimension.
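A minimal sketch of such a per-dimension Gaussian perturbation of the global best is given below; the exact update formula is not reproduced in this text, so the additive form gbest_d + r·N(0, δ_d) and the scaling of δ to the search range are our assumptions.

```python
import numpy as np

def gaussian_perturb(gbest, delta, pc_t):
    """Perturb each dimension of the global best with probability Pc(t).

    Assumed additive form: gbest_d + r * N(0, delta_d), where delta[d] is an
    adaptive spread typically tied to the search range of dimension d.
    """
    perturbed = gbest.copy()
    for d in range(gbest.size):
        if np.random.rand() < pc_t:                  # adaptive trigger via Pc(t)
            r = np.random.rand()                     # random scaling in [0, 1]
            perturbed[d] += r * np.random.normal(0.0, np.sqrt(delta[d]))  # delta[d] treated as a variance
    return perturbed
```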

2.2.3. Mutation Strategy

The mutation strategy is utilized to improve the particle diversity of the algorithm and to balance the ratio of mutation to inheritance so as to ensure convergence. Similar to the Gaussian perturbation strategy, the mutation strategy adapts our algorithm to different functions through an adaptive mutation coefficient, a random number between 0 and 1, and the adaptive mutation degree of the d-th dimension.

When a particle falls into evolutionary stagnation, the mutation operator is introduced into some dimensions of the velocity update equation (6), which increases the diversity of the population by freeing particles from their constraints, especially improves the search ability of particles whose velocity has become low from converging near the global best, and promotes particle utilization. Since each dimension of a particle also mutates with probability Pc(t), the mutated range of the particles is large in the initial period, which is favorable for global search. In the later period, the mutated range and the ratio of mutation to inheritance are small, which is beneficial for algorithm convergence.
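The velocity-mutation step might be sketched as below; since equation (6) is not reproduced here, the specific mutation form (re-drawing the affected velocity components within an adaptive range) is our assumption, and all names are illustrative.

```python
import numpy as np

def mutate_velocity(v_i, pc_t, mut_range):
    """Mutate some dimensions of a stagnant particle's velocity.

    Each dimension mutates with probability Pc(t); mut_range[d] plays the role
    of the adaptive mutation degree of dimension d (assumed proportional to
    the search range of that dimension).
    """
    v_new = v_i.copy()
    for d in range(v_i.size):
        if np.random.rand() < pc_t:
            r = np.random.rand()
            v_new[d] = (2 * r - 1) * mut_range[d]   # replace with a random velocity in [-range, +range]
    return v_new
```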

2.2.4. Algorithm Complexity Analysis

The computational costs of the standard PSO include the initialization O(mn), fitness evaluation O(mn), and velocity and position update O(2mn), where m and n are the swarm size and dimension, respectively. Thus, the time complexity of PSO is O(mn). Compared with the standard PSO, AGMPSO involves two additional operators. However, the Gaussian perturbation operator O(n) and the mutation operator O(mn) are conducted only when the global best position or a personal best position, respectively, has stagnated for several iterations. The worst-case time complexity of AGMPSO is still O(mn), including the initialization O(mn), evaluation O(mn + n), and update O(2mn + mn). From the above component analyses, AGMPSO has the same order of time complexity as the standard PSO algorithm.

3. Experiments and Discussions

3.1. Algorithm Aggregation Degree Analysis

The ideal behavior of PSO is that, in the early stage, the particles explore the solution space dispersedly, while in the later stage they aggregate to obtain higher convergence accuracy. We introduce an aggregation degree to analyze the convergence status of the standard PSO and AGMPSO. The larger the deviation of particle fitness from the swarm average, the larger the aggregation degree, the better the particle diversity, and the stronger the algorithm's search ability. The aggregation degree of the t-th generation swarm is expressed in terms of f_avg(t), the average fitness value of the t-th generation, f_max(t), the maximum fitness value of the t-th generation, and f_min(t), the minimum fitness value of the t-th generation.
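Because the defining equation is not reproduced here, the sketch below uses one plausible normalized dispersion measure built from the same three quantities; it should be read as an illustration of the idea, not the authors' exact formula.

```python
import numpy as np

def aggregation_degree(fitness):
    """Illustrative aggregation measure: mean deviation from the swarm average,
    normalized by the fitness spread of the generation."""
    f_avg, f_max, f_min = fitness.mean(), fitness.max(), fitness.min()
    spread = f_max - f_min
    if spread == 0:
        return 0.0                      # fully converged swarm
    return float(np.mean(np.abs(fitness - f_avg)) / spread)
```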

Figure 3 shows the comparison of aggregation degree between the standard PSO and AGMPSO when solving the Rastrigin function. (A) and (B) are the aggregation curves of the standard PSO and AGMPSO over 3500 iterations, and (C) and (D) are enlarged views of the last 20 iterations. It can be seen that the standard PSO keeps a higher particle aggregation even in later iterations, which indicates high diversity, that is, poor convergence; the aggregation degree remains at a high level, meaning the swarm does not converge to a satisfactory extent by the end. In contrast, AGMPSO maintains higher diversity throughout the early period and lower diversity in the later period, which ensures both global search ability and convergence.

3.2. Comparison with PSO Variants

In order to evaluate the performance of the proposed algorithm, comparison experiments of AGMPSO with PSO, TSLPSO, HFPSO, and MPEPSO are conducted in this section; the parameters of each algorithm are listed in Table 1. All experiments were performed on a Windows 10 system with an eight-core Intel(R) Core(TM) i7-10700K CPU @ 3.80 GHz and 16 GB of memory, using MATLAB R2018a.

3.2.1. Benchmark Functions

The CEC 2017 test suite [38] is used in the experiments, comprising 29 benchmark functions divided into four categories: unimodal functions, simple multimodal functions, hybrid functions, and composition functions. According to the definitions of the CEC 2017 test suite, F2 has been excluded because of its unstable behavior, especially in higher dimensions.

Two series of experiments are performed, with the dimension of each test function set to 10 and 30, respectively; the population size is 30, and each algorithm is run independently 30 times on each benchmark function. According to the definitions of the CEC 2017 test suite, the stop condition of each run is that the maximum number of function evaluations (MaxFES) reaches 10,000 × D, that is, MaxFES = 100,000 for 10D and MaxFES = 300,000 for 30D. Table 2 shows the search range and the global optimum of the benchmark functions.

3.2.2. Comparison of Simulation Results for Benchmark Functions

The comparison results of mean values (mean), standard deviations (std), and the Wilcoxon rank sum test (h) produced by all compared PSO variants are presented in Table 3 for the 10-dimensional test functions and in Table 4 for the 30-dimensional ones, where the optimal results are marked in bold. The comparison of computation time per run is shown in Figures 4 and 5.

As shown in Table 3, AGMPSO performs well on the unimodal functions, achieving the optimal mean fitness value on F3, although TSLPSO obtains the same result on F3. In solving the multimodal functions F4 to F10, AGMPSO shows a clear advantage, achieving better results on 5 out of 7 functions. However, AGMPSO does not gain an advantage on the hybrid functions (F11–F20), where HFPSO matches the proposed algorithm with 4 best results out of 10. It is still worth noting that the standard deviations of AGMPSO are smaller than those of HFPSO, indicating more stability. For the composition functions (F21–F30), AGMPSO shows a better ability to search for the global optimum on 6 test functions than the other competitors. In general, AGMPSO is top ranked on 14 of all the functions. Although AGMPSO does not have a statistically significant advantage on F3, F28, and F29, its mean fitness values equal those of the winning algorithms, which places AGMPSO second on these functions.

For the 30-dimensional experiments in Table 4, since the number of iterations greatly increases, the results of all algorithms improve to varying degrees. AGMPSO achieves the optimal mean fitness value on F3 and F20. It matches its 10-dimensional performance on the unimodal and multimodal functions, obtains 7 out of 10 best results on the hybrid functions, and 6 out of 10 best results on the composition functions. In general, AGMPSO outperforms its peers on 19 benchmark functions.

In short, AGMPSO is top ranked in both 10-dimensional and 30-dimensional experiments.

The standard deviations of AGMPSO on the different benchmark functions are generally smaller than those of the compared PSO variants, indicating better robustness of our algorithm.

To analyze the computational efficiency of the compared peers, the average computation time of each algorithm over all benchmark functions is depicted in Figures 4 and 5. It can be concluded that AGMPSO incurs lower computational overhead than its peers, and the advantage becomes more obvious as the number of iterations increases, as shown in Figure 5. It is worth noting that, despite sharing the same time complexity as the standard PSO, AGMPSO's time consumption is significantly lower, indicating higher computational efficiency.

In summary, the outstanding results of AGMPSO in terms of average computation time and fitness values demonstrate that the proposed algorithm attains higher search accuracy and a faster convergence rate than its peers, together with significant robustness.

3.2.3. Wilcoxon’s Rank Sum Test Results

The Wilcoxon rank sum test at a significance level of α = 0.05 is performed between AGMPSO and the other PSO variants to analyze the statistical significance of their rankings. Tables 3 and 4 list the results of the Wilcoxon rank sum test on the fitness values of all 29 functions. In both tables, the symbol (+/−/∼) indicates that AGMPSO performs significantly better than, significantly worse than, or not statistically differently from its competitor.
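As a reference for how such a test can be carried out on the per-run fitness values, a minimal sketch using SciPy is shown below; the array names and the sign convention are illustrative, as the paper does not specify its implementation.

```python
from scipy.stats import ranksums

def compare_runs(fitness_agmpso, fitness_peer, alpha=0.05):
    """Wilcoxon rank sum test on the 30 independent run results of two algorithms."""
    stat, p_value = ranksums(fitness_agmpso, fitness_peer)
    if p_value >= alpha:
        return "~"                     # no statistically significant difference
    # Lower fitness is better on these minimization benchmarks.
    return "+" if sum(fitness_agmpso) < sum(fitness_peer) else "-"
```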

It can be observed from the results that AGMPSO is significantly better than the compared PSO variants on most test functions. AGMPSO obtains the same mean values as TSLPSO, standard PSO, and HFPSO on the 10-dimensional F3, F28, and F29, and as TSLPSO on the 30-dimensional F3. However, according to the rank sum test results, AGMPSO performs significantly worse than these peers on the above functions. Although the proposed algorithm does not gain the superior rank on these functions, it still takes the second rank in mean fitness value on each of them, and the difference is slight.

3.2.4. Convergence Progresses

In order to observe the convergence speed of all the peer algorithms, the convergence processes in random runs of the comparison algorithms on benchmark functions of 10 dimensions are depicted in Figures 6 and 7.

In Figure 6, AGMPSO does not show rapid convergence on F1 and F3 in the initial period because of the strong particle diversity yielded by the proposed mutation strategy; nevertheless, high solution accuracy is achieved by our algorithm in the later convergence stage.

From F4 to F10, we can observe that the other comparison algorithms fall into local minima to different extents, while AGMPSO attains favorable performance on all multimodal functions. However, it is noteworthy that the effect of the proposed adaptive strategies is not obvious on F4, F6, and F9, which may cause low diversity in the early period and failure to find the global optima; this is demonstrated by the result of F6 in Table 3.

On F11 to F20, similar problems of rapid convergence in the initial period can be observed on F11, F14, and F15, and AGMPSO fails to achieve satisfactory results on F18. Despite the 4 best results and the improvement in the 30-dimensional experiments, the exploration capability of the proposed algorithm on hybrid functions should be further improved, owing to its rapid convergence at the early stage.

The results on F21 to F30 show that the exploitation capability of AGMPSO is higher than that of most peers. A gradual convergence process can be seen on F21, F24, F26, F28, and F30, which reflects the balance between exploration and exploitation in our algorithm.

In summary, AGMPSO performs well on most functions. Meanwhile, the proposed algorithm does not yield satisfactory performance on the hybrid functions, and neither do the other PSO variants, which suggests considerable room for improvement on these functions. The 30-dimensional comparison results indicate that, despite the lower efficiency, increasing the number of iterations could be a worthwhile direction for improvement.

Compared with the other competitors, AGMPSO presents a relatively slow and gradual convergence curve on many functions, which reflects the algorithm's intent of exploring more promising solutions.

4. Conclusion

Adaptive particle swarm optimization with Gaussian perturbation and mutation is proposed to address the existing drawbacks of the standard PSO. To prevent trapping in local optima, Gaussian perturbation is applied to the global best, further increasing the exploitation capability. For the nonoptimal particles that fall into evolutionary stagnation, mutation is leveraged to promote particle diversity and utilization and to improve the exploration ability. Simultaneously, the adaptive strategy regulates the interference level of the Gaussian perturbation and mutation at different evolutionary stages in order to balance searching ability and accuracy; the visual result of the aggregation analysis validates this dynamic process. The performance on the benchmark functions of the CEC 2017 test suite shows that AGMPSO outperforms its competitors by a large margin in terms of search accuracy, reliability, and efficiency.

In future work, considering the powerful global search ability of the particle swarm algorithm, PSO could be used to optimize the topology, connection weights, and thresholds of neural networks, or its global optimization ability could be combined with the local optimization ability of a backpropagation neural network (BPNN) to improve the generalization and learning performance of the network.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under grants 71371181 and 71672193 and by the Research Foundation of Xi'an International Studies University under grant BSZA2019003.