Abstract

The particle swarm optimization algorithm (PSO) is a meta-heuristic algorithm with swarm intelligence. It has the advantages of easy implementation, high convergence accuracy, and fast convergence speed. However, PSO suffers from falling into a local optimum or premature convergence, and a better performance of PSO is desired. Some methods adopt improvements in PSO parameters, particle initialization, or topological structure to enhance the global search ability and performance of PSO. These methods contribute to solving the problems above. Inspired by them, this paper proposes a variant of PSO with competitive performance called UCPSO. UCPSO combines three effective improvements: a cosine inertia weight, uniform initialization, and a rank-based strategy. The cosine inertia weight is an inertia weight in the form of a variable-period cosine function. It adopts a multistage strategy to balance exploration and exploitation. Uniform initialization can prevent the aggregation of initial particles. It distributes initial particles uniformly to avoid being trapped in a local optimum. A rank-based strategy is employed to adjust an individual particle’s inertia weight. It enhances the swarm’s capabilities of exploration and exploitation at the same time. Comparative experiments are conducted to validate the effectiveness of the three improvements. Experiments show that the UCPSO improvements can effectively improve global search ability and performance.

1. Introduction

Since the particle swarm optimization algorithm (PSO) was proposed by Kennedy and Eberhart in 1995 [1], it has achieved great success in finding the optimal values of continuous nonlinear functions [2]. PSO is a special branch of evolutionary algorithms. It adopts social learning among swarms and the self-cognition of individuals to replace the common principle of evolutionary algorithms (EAs): survival of the fittest [3]. The bionic mechanism of PSO gives it swarm intelligence [4] and enables it to imitate the complex search behaviour of swarms, such as bird flocks, fish schools, and ant colonies. Different from other EAs, PSO has a simpler iteration mechanism and fewer control parameters [5]. Therefore, it is widely used in practical engineering areas. For example, PSO is often applied to image processing [6], parameter optimization [7, 8], scheduling optimization [9], clustering [10], and price forecasting [11].

One of the main advantages of PSO is its easy implementation [12]. A large number of numerical experiments also prove that PSO has high convergence accuracy and a fast convergence speed [13]. Consequently, researchers have carried out numerous studies on PSO. However, some limitations of PSO have been found during long-term research work. When facing complex functions, PSO is troubled by falling into a local optimum or premature convergence [14]. Meanwhile, the performance of the original PSO is inadequate and must be improved. To solve these problems, many researchers have proposed improvements. These improvements mainly focus on PSO parameters, particle initialization, and population topology.

Inertia weight is a very important parameter in the PSO algorithm. It controls the balance between the two critical behaviours of PSO: global and local search. Researchers have created various forms of inertia weight. These forms of inertia weight enhance the performance of PSO. Shi and Eberhart proposed a parameter called inertia weight ω in PSO to balance exploration and exploitation [15]. The appearance of ω created a new way to improve the performance of PSO. Then, Eberhart and Shi introduced a linear decreasing inertia weight [16]. This linear decrease in inertia weight greatly improves the comprehensive performance of PSO. It relieves the problem of falling into a local optimum. Recently, Tian et al. adopted a multistage strategy to refine the change process of inertia weight. They split the curve of inertia weight into two stages to satisfy specific requirements. The variant proposed by them achieves excellent performance in the experiment [17].

The quality of the initial particles is related to the PSO results. Many works have been performed to distribute initial particles more dispersedly and make initial particles closer to the global optimum. Tian [18], Zhang [19], and Xu [20] adopted chaotic sequences for particle initialization to increase the diversity of initial particles. Chaotic initialization achieves certain success compared to random initialization under the same conditions. Rahnamayan [21] employed the symmetry strategy in swarm initialization. Symmetry initialization can prevent initial particles from being distant from the global optimum. DMPSO [22] combines chaotic initialization with opposition-based initialization. Experiments validate that the hybrid initialization can recognize the search area better. In addition, MCJPSO [23] randomly divides the entire search space and distributed particles over a search space in independent slots. This semirandom initialization can overcome the limitation of the original PSO. Rauf et al. used the Weibull probability sequence to generate numbers at random locations for swarm initialization. This method is able to enhance the diversity of swarms [24].

To enhance the global search ability and the comprehensive performance of PSO, a uniform initialized particle swarm optimization algorithm with cosine inertia weight (UCPSO) is proposed in this paper. UCPSO combines three effective improvements: an inertia weight in the form of a variable-period cosine function, uniform initialization, and a rank-based strategy for individual particle inertia weights. The cosine inertia weight introduced in this paper adopts the multistage strategy. It divides the change process of inertia weight into three stages. It can balance exploration and exploitation more specifically, help particles transition from global search to local search smoothly, and improve the convergence accuracy. Uniform initialization initializes one particle randomly as the base point and then generates the other initial particles from this base point. The initial particles are evenly distributed in each dimension, and the position of each particle in each dimension is random. This mechanism can prevent the aggregation of initial particles. It distributes particles uniformly to recognize the search area more comprehensively. Uniform initialization is able to avoid falling into a local optimum and improve the search efficiency. In addition, this paper employs a rank-based strategy to adjust individual particle inertia weights. It makes the particles that are close to the swarm's best position focus on exploitation and makes the particles that are far away from the swarm's best position keep exploring. It can enhance the global and local search ability of the swarm at the same time.

In recent years, researchers have proposed some effective variants of PSO. Ye et al. proposed an improved multiswarm particle swarm optimization with dynamic learning strategy (PSO-DLS). It classifies the particles of each subswarm into ordinary particles and communication particles [25]. Lynn and Suganthan proposed the ensemble particle swarm optimizer (EPSO), which combines the characteristics of several PSO variants [26]. In EPSO, the best-performing algorithm for each generation can be determined by a self-adaptive scheme. Heterogeneous comprehensive learning particle swarm optimization (HCLPSO) divides the whole swarm into an exploration subpopulation and an exploitation subpopulation [27]. The CL strategy is used to breed learning exemplars for both of them. Gong et al. proposed genetic learning particle swarm optimization (GLPSO), which adopts selection, crossover, and mutation [28]. By performing these operators on the historical information of particles, GLPSO is able to construct diversified and high-quality learning exemplars to guide the swarm.

The purpose of designing UCPSO is to obtain a variant of PSO that has a good comprehensive performance and the ability to escape from a local optimum. In addition, three improvements in UCPSO should be easy to use. They are introduced to help researchers improve the global search ability and performance of PSO. A large number of comparative experiments based on benchmark functions were used to validate the effectiveness of the UCPSO improvements.

This paper is organized as follows. Section 2 introduces the standard PSO and related research on inertia weight and particle initialization. Section 3 describes UCPSO and the three improvements in detail. Experiments are presented in Section 4. The conclusion is given in Section 5.

2. Particle Swarm Optimization Algorithms

2.1. Standard PSO

PSO is a stochastic population-based algorithm [15]. It finds the optimal solution in a given range by mimicking the behaviour of bird flocks. The particles in the swarm are potential solutions, and n is the total number of particles in the swarm. Every particle remembers its current position Xi, its current velocity Vi, and the best position it has ever visited, Pbesti. The swarm also remembers the swarm's best position Gbest. Xi, Vi, Pbesti, and Gbest are all D-dimensional vectors, where D is the dimension of the search space. Particles find the optimal solution through iterations. The position and velocity of particles are updated as follows:

vid(t + 1) = ω·vid(t) + c1·r1·(pbestid(t) − xid(t)) + c2·r2·(gbestd(t) − xid(t)), (1)

xid(t + 1) = xid(t) + vid(t + 1), (2)

where t denotes the current iteration, i = 1, 2, …, n, and d = 1, 2, …, D. ω is the inertia weight. c1 and c2 are acceleration coefficients, which control the influence of Pbesti and Gbest in the iteration. r1 and r2 are random numbers in the range [0, 1]. PSO is usually terminated after reaching the allowed maximum number of iterations tmax or meeting the stopping criterion. The best solution of a problem is the final Gbest. Figure 1 shows the concept of a particle's iteration in a graphical way.

In equation (1), ω·vid(t) represents the effect of inertia, c1·r1·(pbestid(t) − xid(t)) represents the effect of self-cognition, and c2·r2·(gbestd(t) − xid(t)) represents the effect of social learning. The cooperation between them contributes to finding the optimal solution. The function used to evaluate the position of a particle is usually called the fitness function f(X). To prevent particles from exceeding the search area, the position and velocity of a particle are always limited to the allowed range. When a particle reaches the boundary of the search area, its velocity should be reversed to improve the search efficiency. The pseudocode of the standard PSO is shown as follows (Algorithm 1):

(1)Initialize n, c1, c2, ω, particles’ position X and velocity V, then find Pbest and Gbest, t = 1;
(2)while (t ≤ tmax or the precision is not met)
(3)for i = 1 : n
(4)  for d = 1 : D
(5)   Update velocity vid by equation (1);
(6)   Update position xid by equation (2);
(7)  End
(8)  if f(Xi) < f(Pbesti)
(9)   Pbesti = Xi;
(10)   if f(Pbesti) < f(Gbest)
(11)    Gbest = Pbesti;
(12)   End
(13)  End
(14)End
(15)t = t + 1;
(16)End
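As a concrete illustration, the loop above can be sketched in Python. This is a minimal sketch, not the authors' implementation; the sphere objective, the parameter values, and the velocity clamp are illustrative choices.

```python
import random

def pso(f, dim, n=30, t_max=200, w=0.7, c1=1.5, c2=1.5,
        x_min=-5.0, x_max=5.0, seed=0):
    """Minimal standard PSO following equations (1) and (2)."""
    rng = random.Random(seed)
    v_max = 0.2 * (x_max - x_min)  # illustrative velocity limit
    X = [[rng.uniform(x_min, x_max) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [row[:] for row in X]
    pbest_val = [f(p) for p in pbest]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(t_max):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # equation (1): inertia + self-cognition + social learning
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                V[i][d] = max(-v_max, min(v_max, V[i][d]))
                # equation (2): move the particle
                X[i][d] += V[i][d]
                # reverse the velocity at the boundary, as described above
                if X[i][d] < x_min or X[i][d] > x_max:
                    X[i][d] = max(x_min, min(x_max, X[i][d]))
                    V[i][d] = -V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(c * c for c in x)
best, best_val = pso(sphere, dim=2)
```

On a smooth unimodal function such as the sphere, this sketch converges to a near-zero fitness value within a few hundred iterations.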
2.2. Different Forms of Inertia Weight

The inertia weight ω reflects the influence of the previous velocity vid(t) on the new velocity vid(t + 1). A large inertia weight can prevent particles from going to the region of interest (hereafter called the ROI) immediately. This makes particles continue to search outside the ROI for a period of time. A small inertia weight can make particles go to the ROI immediately and search within the ROI. That is, a large inertia weight enhances the global search capability (hereafter called exploration), and a small inertia weight enhances the local search capability (hereafter called exploitation). Exploration can prevent particles from falling into a local optimum, but it also leads to low convergence accuracy and a slow convergence speed. Exploitation can accelerate the convergence speed and improve the convergence accuracy, but it makes the algorithm converge prematurely or become trapped in a local optimum easily [29]. These two functions greatly influence the performance of PSO. Therefore, it is very important to choose an appropriate inertia weight.

Since inertia weight was proposed, many researchers have made contributions in this field. Some classical forms of inertia weight have been proposed, such as time invariant [15], linear time variant [16, 30], nonlinear time variant [17, 31], and other forms of inertia weight [32–34]. The famous forms of inertia weight mentioned above are described in detail in the subsections below.

2.2.1. Time Invariant Inertia Weight

To improve the performance of the original PSO, Shi and Eberhart proposed a parameter called inertia weight ω to balance exploration and exploitation in 1998 [15]. At first, the inertia weight appeared in the form of a constant. They found that a large inertia weight facilitates exploration, while a small inertia weight facilitates exploitation. The recommended range of inertia weight is [0.9, 1.2]. The computational results showed that the overall performance of PSO was improved empirically. The constant inertia weight is easy to implement, so it has been widely used.

2.2.2. Linear Time Variant Inertia Weight

After the concept of inertia weight was proposed, a linear time variant inertia weight was introduced in [16] to further improve the performance of the PSO algorithm. The mechanism of the linear time variant inertia weight is shown in the following equation:

ω(t) = (ωini − ωend)·(tmax − t)/tmax + ωend.

The inertia weight decreases linearly from the initial value ωini to the final value ωend as the number of iterations increases. The linear time variant inertia weight takes the demands of particles in different periods into account. The recommended values of ωini and ωend are 0.9 and 0.4.

There are many other mechanisms of the linear time variant inertia weight. Specifically, Zheng proposed an increasing linear time variant inertia weight. It is proven that the increasing mechanism performs better than the decreasing mechanism in some test functions [30]. The mechanism of this inertia weight is

ω(t) = (ωend − ωini)·t/tmax + ωini,

where the inertia weight increases from a small initial value ωini to a large final value ωend.
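Both linear schedules can be written in a few lines. The decreasing form below assumes the recommended ωini = 0.9 and ωend = 0.4; the increasing form is shown with the roles of the endpoints reversed.

```python
def linear_decreasing_w(t, t_max, w_ini=0.9, w_end=0.4):
    # w falls linearly from w_ini at t = 0 to w_end at t = t_max
    return (w_ini - w_end) * (t_max - t) / t_max + w_end

def linear_increasing_w(t, t_max, w_ini=0.4, w_end=0.9):
    # Zheng's variant: w rises linearly from w_ini to w_end
    return (w_end - w_ini) * t / t_max + w_ini
```

At the midpoint of the run, both schedules pass through the average of the two endpoints.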

2.2.3. Nonlinear Time Variant Inertia Weight

Based on the linear time variant inertia weight, some researchers think that a nonlinear mechanism is more suitable for the demands of particles. Therefore, many nonlinear time variant mechanisms have been proposed. Chatterjee [31] introduced a nonlinear time variant inertia weight combined with a quadratic function, and its mechanism is

ω(t) = (ωini − ωend)·((tmax − t)/tmax)² + ωend.

For the sake of a better inertia weight, some researchers abandoned continuous functions and began to research multistage inertia weights. Tian [17] proposed a sigmoid increasing inertia weight and obtained an algorithm with satisfactory performance.

2.2.4. Other Forms of Inertia Weight

Some researchers also proposed other effective strategies to adjust the inertia weight, such as random strategy, chaotic strategy, and self-adaptive strategy.

Randomness is a natural property of PSO, and it is also the reason why PSO can be applied to almost all optimization problems. It is difficult to predict whether exploration or exploitation would be better during the iteration. To address this problem, researchers thought of using random strategies to adjust the inertia weight. A random inertia weight was introduced in [32]. The mechanism of this inertia weight is

ω = 0.5 + rand/2,

where rand is a random number in the range [0, 1], so ω ∈ [0.5, 1].

Feng [33] used a chaotic strategy to adjust the inertia weight and obtained a chaotic inertia weight. He added a chaotic term to the linearly decreasing inertia weight. The mechanism of this inertia weight is

z(t + 1) = 4·z(t)·(1 − z(t)),
ω(t) = (ωini − ωend)·(tmax − t)/tmax + ωend·z(t),

where the initial value of z is a random number in the range [0, 1].

Different from the random strategy and the chaotic strategy, some researchers chose an index to adjust the value of inertia weight in real time. This kind of index can provide feedback on the state of the swarm. Zhang et al. [34] adopted an index φi to monitor the state of each particle, and they proposed a self-adjusted inertia weight with μ = 100. The index φi is the ratio of two Euclidean distances: the numerator is the distance from the position of the ith particle to Gbest, and the denominator is the distance from the position of the ith particle to Pbesti. Therefore, the index φi can reflect the state of the ith particle dynamically. The performance of the self-adaptive inertia weight on some test functions is excellent, but the mechanism of the self-adaptive inertia weight is usually very complicated. It is difficult to design a widely applicable index.

2.3. Particle Initialization

In addition, the quality of particle initialization is also critical to the performance of the PSO algorithm. The volatility of particle initialization is the primary cause of the volatility of convergence speed and accuracy.

2.3.1. Random Initialization

The original method of particle initialization is random initialization. The position of each particle in each dimension is distributed in the allowed range independently and randomly, as shown in the following equation:

xid = xmin,d + rand·(xmax,d − xmin,d), (13)

where rand is a random number in the range [0, 1] and [xmin,d, xmax,d] is the allowed range of the dth dimension.

After initialization, if particles are close to the global optimum, PSO tends to have good performance; if the particles are concentrated near the local optimum, PSO tends to fail. The random strategy of particle initialization inevitably leads to volatility of initialization. However, if particle initialization abandons randomness, it is very difficult for PSO to solve various optimization problems without prior knowledge. Fixed initialization can only solve specific problems.

2.3.2. Chaotic Initialization

Chaotic initialization employs chaotic sequences to make particles more scattered. A common chaotic sequence called the logistic map is widely used because of its simple employment [35]. Its mechanism is as follows:

Z(k + 1) = a·Z(k)·(1 − Z(k)), (14)

where Z(k) is the chaotic variable, Z1 is a random value in the range (0, 1), and the other chaotic variables are obtained by equation (14). To prevent chaotic variables from falling into a cycle, Z1 ≠ 0, 0.25, 0.5, 0.75, and 1. a is a constant that controls the level of chaos, and the recommended range for a is [3.5699, 4]. In this paper, Z1 is first iterated by equation (14) for 10 iterations to make the initial swarm more chaotic. After n × D chaotic variables are obtained, the chaotic initialization can be completed by replacing the random matrix with the chaotic variables during the process of initialization.
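The procedure above can be sketched as follows, assuming a = 4 and the 10-step warm-up described in the text; the bounds and the seed handling are illustrative choices.

```python
import random

def chaotic_init(n, dim, x_min, x_max, a=4.0, warmup=10, seed=0):
    """Initialize an n-by-dim swarm with logistic-map variables
    in place of uniform random numbers."""
    rng = random.Random(seed)
    z = rng.uniform(0.01, 0.99)
    while z in (0.25, 0.5, 0.75):   # avoid fixed points and short cycles
        z = rng.uniform(0.01, 0.99)
    for _ in range(warmup):         # iterate first to deepen the chaos
        z = a * z * (1.0 - z)
    swarm = []
    for _ in range(n):
        particle = []
        for _ in range(dim):
            z = a * z * (1.0 - z)   # equation (14)
            particle.append(x_min + z * (x_max - x_min))
        swarm.append(particle)
    return swarm

swarm = chaotic_init(n=20, dim=2, x_min=-5.0, x_max=5.0)
```

Because the logistic map keeps z in [0, 1] for a = 4, every generated position stays inside the search area.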

2.3.3. Opposition-Based Initialization

For two particles that are symmetrical about the centre of the search area, one of the two is closer to the global optimum than the other (except in the special case that the two distances are equal). Opposition-based initialization generates a subswarm randomly and combines it with its symmetric subswarm. Therefore, opposition-based initialization can avoid the situation in which all particles are far from the global optimum. Rahnamayan [21] introduced opposition-based initialization, whose mechanism is shown in the following equation:

x̃id = xmin,d + xmax,d − xid,

where xid is a randomly generated position and x̃id is its opposite position.
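A sketch of opposition-based initialization in this spirit. Keeping the fitter half of the combined swarm is a common variant and an assumption here; the scheme in [21] may simply combine the two subswarms without selection.

```python
import random

def opposition_init(n, dim, x_min, x_max, f, seed=0):
    """Generate roughly n/2 random particles plus their opposites,
    then keep the n fittest candidates."""
    rng = random.Random(seed)
    half = [[rng.uniform(x_min, x_max) for _ in range(dim)]
            for _ in range((n + 1) // 2)]
    # opposite point in each dimension: x_min + x_max - x
    opposite = [[x_min + x_max - x for x in p] for p in half]
    candidates = half + opposite
    candidates.sort(key=f)          # assumption: select by fitness
    return candidates[:n]

sphere = lambda x: sum(c * c for c in x)
init_swarm = opposition_init(n=10, dim=3, x_min=-2.0, x_max=2.0, f=sphere)
```

Since the opposite of a point inside the box stays inside the box, the combined candidate set never leaves the search area.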

3. UCPSO Algorithm

Based on the research on inertia weight and particle initialization, UCPSO is proposed as a competitive variant of PSO. UCPSO adopts three new strategies, whose details are presented in the following sections.

3.1. Inertia Weight in the Form of Variable-Period Cosine Function

There is a popular form of nonlinear time variant inertia weight that maintains a large value in the early stage and a small value in the final stage. In common optimization problems, it can enhance the global search ability of PSO. Then, the multistage inertia weight was proposed. It puts forward more specific requirements for the change process of inertia weight: (a) initial stage: the inertia weight keeps a large value for a period of time to carry out global search and reduce the probability of falling into a local optimum (this stage is also called the global search stage); (b) intermediate stage: the inertia weight drops rapidly and transitions from global search to local search (this stage is also called the decelerating transition stage); (c) final stage: the inertia weight keeps a small value for a long time to help PSO converge to an accurate optimal solution quickly (this stage is also called the local search stage).

The change process of the cosine function in the range [0, π] meets the requirements of the multistage inertia weight. At the beginning of this range, the cosine function maintains a large value and changes slowly. In the middle of the range, the cosine function declines rapidly. At the end of the range, the cosine function maintains a small value and changes slowly. Because the cosine function is simple and easy to use, an inertia weight ωcos in cosine form is adopted in this paper. However, the original cosine function is not consistent with the requirements of the multistage inertia weight. Therefore, the original cosine function needs to be adjusted. An iterative term is added into the cosine function to adjust the period, and ωcos is rescaled into the range [ωend, ωini], as shown in equation (16). The constant a adjusts the period of ωcos, and the values of a1, a2, and a3 control the length of each stage of ωcos. According to the requirements of ωcos, the phase is limited to the range [0, π] and is required to increase from 0 to π while t increases from 0 to tmax. Therefore, a1, a2, and a3 must satisfy a constraint that makes the phase reach exactly π at t = tmax.

The pseudocode of updating ωcos is shown as follows (Algorithm 2):

(1)Let a = a1;
(2)for t = 1 : tmax
(3) Update the phase by equation (16);
(4)if the phase ≥ 3π/4
(5)  a = a3;
(6)else if the phase ≥ π/4
(7)  a = a2;
(8)end
(9) Compute ωcos(t) from the phase;
(10)End
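One plausible Python reading of Algorithm 2, under the assumption that the phase advances by a·π/tmax per iteration and that the stage switches occur at phases π/4 and 3π/4; these thresholds, the increment, and the rescaling formula are assumptions, not the paper's exact equations.

```python
import math

def cosine_w_schedule(t_max, a1, a2, a3, w_ini=0.9, w_end=0.4):
    """Hypothetical variable-period cosine inertia weight: the phase
    accumulates at a rate a1/a2/a3 in the three stages, and w is the
    cosine rescaled into [w_end, w_ini]."""
    ws, theta, a = [], 0.0, a1
    for _ in range(t_max):
        theta += a * math.pi / t_max    # accumulate the phase
        theta = min(theta, math.pi)     # keep the phase inside [0, pi]
        if theta >= 3 * math.pi / 4:    # assumed stage-switch thresholds
            a = a3
        elif theta >= math.pi / 4:
            a = a2
        # rescale cos(theta) from [-1, 1] into [w_end, w_ini]
        ws.append(w_end + (w_ini - w_end) * (1.0 + math.cos(theta)) / 2.0)
    return ws

ws = cosine_w_schedule(t_max=100, a1=0.5, a2=2.0, a3=0.5)
```

With a small a1 and a3 and a large a2, the schedule holds a large weight early, drops quickly in the middle, and holds a small weight late, matching the three stages described above.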

The parameter analysis experiment for a1, a2, and a3 is shown in Section 4.2, where the recommended configuration of ωcos is obtained. The curves of different inertia weights, including ωcos, are displayed in Figures 2–4. In addition, their parameters are also illustrated.

To make the inertia weights have the same value range, all ωini in these inertia weights are set to 0.9, and all ωend are set to 0.4.

3.2. Uniform Initialization

There is no clear mechanism in random initialization, chaotic initialization, or opposition-based initialization to avoid the aggregation of initial particles. This situation causes some areas to be searched repeatedly and others to be ignored, which reduces the search efficiency and the possibility of finding the global optimum.

To solve this problem, a particle initialization method with both randomness and uniformity (called uniform initialization) is proposed in this paper. In Algorithm 3, Line 1 initializes a particle randomly to be the base point, and X1 is the position of the base point. Lines 2–4 generate a random permutation for each dimension to ensure the randomness of the initial particles. Lines 5–12 divide the length of each dimension by n to obtain the minimum distance between particles in the corresponding dimension and distribute the particles in each dimension uniformly, avoiding the aggregation of particles. If the position of a particle exceeds the allowed range of a dimension, the range of the corresponding dimension is subtracted from it. The pseudocode of uniform initialization is shown as follows:

(1)Initialize X1 in the search area randomly by equation (13);
(2)for d = 1 : D
(3) Randomly rearrange (0, 1, …, n − 1) to get the permutation (k1d, k2d, …, knd);
(4)End
(5)for i = 1 : n
(6)for d = 1 : D
(7)  xid = x1d + kid·(xmax,d − xmin,d)/n;
(8)  if xid > xmax,d
(9)   xid = xid − (xmax,d − xmin,d);
(10)  End
(11)End
(12)End

Uniform initialization ensures that the distances between particles are larger than a certain value. It can avoid the aggregation of particles and distribute initial particles uniformly to recognize more areas at the beginning. Figure 5 shows the result of uniform initialization: the level of aggregation is low, and the distribution of particles is uniform. Uniform initialization is a good combination of randomness and uniformity. Figure 5 is produced under a configuration in which the search area is a 4 × 4 rectangle.
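A Python reading of Algorithm 3 under the assumptions stated in the comments: the offsets are multiples of the per-dimension spacing (xmax − xmin)/n, permuted independently for each dimension, with wraparound at the upper bound.

```python
import random

def uniform_init(n, dim, x_min, x_max, seed=0):
    """Hypothetical reading of uniform initialization: one random base
    particle, then per-dimension offsets of k*(range/n) with a random
    permutation k and wraparound into the search area."""
    rng = random.Random(seed)
    base = [rng.uniform(x_min, x_max) for _ in range(dim)]
    # an independent random permutation of offsets for every dimension
    perms = [rng.sample(range(n), n) for _ in range(dim)]
    span = x_max - x_min
    swarm = []
    for i in range(n):
        p = []
        for d in range(dim):
            x = base[d] + perms[d][i] * span / n
            if x > x_max:           # wrap back into the allowed range
                x -= span
            p.append(x)
        swarm.append(p)
    return swarm

swarm = uniform_init(n=10, dim=2, x_min=-2.0, x_max=2.0)
```

By construction, any two particles are at least (xmax − xmin)/n apart in every dimension, which is exactly the anti-aggregation property described above.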

3.3. Rank-Based Strategy for Individual Particle’s Inertia Weight (RIW)

The forms of inertia weight mentioned above are all assigned values according to the state of the whole swarm. However, in fact, particles' states are diverse, and an individual particle's need may not follow the swarm's need. Particles that are already in the ROI need a small inertia weight to exploit. Particles that are far away from the ROI need a large inertia weight to explore. A single value of inertia weight cannot satisfy both requirements at the same time. Cooperation between these two kinds of particles can maximize the benefit of the whole swarm.

A rank-based strategy is adopted in this paper to solve this problem. Generally, the particles with small fitness values are in the current ROI, while the particles with large fitness values are outside the ROI. Particles are sorted by fitness value from small to large. Adding a rank-based strategy to the inertia weight can take both the overall and individual requirements into account simultaneously. The mechanism of the RIWs is shown in equations (20) and (21), where ωi denotes the ith particle’s inertia weight, ω denotes the swarm’s inertia weight, bi is the adjustment factor for the ith particle’s inertia weight, and ranki denotes the ranking of the ith particle according to the fitness value.

The pseudocode of the RIWs is shown as follows (Algorithm 4):

(1)while (t ≤ tmax or the precision is not met)
(2) Sort particles by fitness value to get ranki;
(3)for i = 1 : n
(4)  Update the swarm’s inertia weight ω;
(5)  Update bi by equation (21);
(6)  Update ωi by equation (20);
(7)End
(8)t = t + 1;
(9)End

The parameter analysis experiment for b1 and b3 is shown in Section 4.2, where the recommended configuration of the RIWs is obtained.
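A sketch of the RIWs, assuming bi is interpolated linearly from b1 (best rank) to b3 (worst rank), which makes the middle rank's factor equal 1 when b1 + b3 = 2; the interpolation form and the values b1 = 0.9 and b3 = 1.1 are illustrative assumptions.

```python
def rank_based_weights(fitness, w, b1=0.9, b3=1.1):
    """Hypothetical reading of equations (20) and (21): sort particles by
    fitness, interpolate b_i linearly from b1 (rank 1) to b3 (rank n),
    and set w_i = b_i * w."""
    n = len(fitness)
    order = sorted(range(n), key=lambda i: fitness[i])  # rank 1 = smallest
    rank = [0] * n
    for r, i in enumerate(order, start=1):
        rank[i] = r
    # assumed equation (21): linear interpolation over the ranks
    return [w * (b1 + (b3 - b1) * (rank[i] - 1) / (n - 1)) for i in range(n)]

wi = rank_based_weights([3.0, 1.0, 2.0], w=0.7)
```

The fittest particle gets a reduced weight (exploitation) and the worst particle an enlarged one (exploration), which is the cooperative behaviour the text describes.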

This paper adopts the above three mechanisms in the standard PSO and proposes a uniform initialized particle swarm optimization with cosine inertia weight (UCPSO). To elaborate the mechanism of UCPSO, the pseudocode of UCPSO is shown as follows (Algorithm 5):

(1)Initialize n, c1, c2, particles’ velocity V, and initialize particles’ position X by Algorithm 3, then find Pbest and Gbest, t = 1;
(2)Calculate ωcos by Algorithm 2;
(3)while (t ≤ tmax or the precision is not met)
(4) Sort particles by fitness value to get ranki;
(5)for i = 1 : n
(6)  Update bi by equation (21);
(7)  Update ωi by equation (20);
(8)  for d = 1 : D
(9)   Update velocity vid by equation (1);
(10)   Restrict vid in the allowed range;
(11)   Update position xid by equation (2);
(12)   Restrict xid in the allowed range;
(13)   if xid reaches the boundary
(14)    vid = −vid;
(15)   End
(16)  end
(17)  if f(Xi) < f(Pbesti)
(18)   Pbesti = Xi;
(19)   if f(Pbesti) < f(Gbest)
(20)    Gbest = Pbesti;
(21)   End
(22)  End
(23)End
(24)t = t + 1;
(25)End

4. Experimental Results and Discussion

4.1. Experimental Setup

The experiments in Sections 4.2–4.4 are performed on the benchmark functions f1–f6 [36]. The details of f1–f6 are specified in Table 1. f1–f6 include 2 many-local-minima functions, 2 bowl-shaped functions, and 2 valley-shaped functions. Therefore, f1–f6 are representative. The experiment that compares UCPSO with PSO, MCJPSO [23], PSO-DLS [25], EPSO [26], HCLPSO [27], and GLPSO [28] is based on the CEC2020 benchmark functions, as shown in Table 2. The parameter configurations of the other six algorithms are set according to their original references, which are shown in Table 3.

The nonparametric Wilcoxon signed-rank test is used to examine the significant difference between algorithms. In this article, a Wilcoxon signed-rank test at a 5% significance level is used. A pairwise comparison is conducted over the results obtained through several runs. The symbol “+” indicates that the proposed algorithm performs significantly better than the compared algorithms. The symbol “=” indicates that the proposed algorithm is not significantly different from the compared algorithm. The symbol “−” indicates that the compared algorithm performs significantly better than the proposed algorithm.

The criteria include the mean of the best solutions (Mean), the standard deviation of the best solutions (SD), the success rate (SR) [37], and the average number of iterations (Average number). SR reflects the probability of obtaining a satisfactory result. To reduce the impact of extreme values, only the successful runs are counted when calculating the average number. When the algorithm succeeds, the current number of iterations is recorded to calculate the average number, and the algorithm continues to iterate. The average number reflects the convergence speed effectively when combined with SR. Whether a run is successful is judged according to the following criterion: a run is successful if |f(Gbest) − f(X*)| ≤ ε, where X* is the global optimum and ε is the allowed maximum error; otherwise, it is unsuccessful. ε is often set as 0.001 in the engineering field.
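The SR and average-number bookkeeping can be sketched as follows; the run records, the optimum value f* = 0, and the tolerance are illustrative.

```python
def summarize_runs(runs, f_star=0.0, eps=1e-3):
    """Each run is (best_value, iterations_to_success or None).
    SR is the fraction of runs with |best - f*| <= eps; the average
    number of iterations is computed over successful runs only."""
    ok = [abs(best - f_star) <= eps for best, _ in runs]
    sr = sum(ok) / len(runs)
    its = [it for (_, it), good in zip(runs, ok) if good and it is not None]
    avg = sum(its) / len(its) if its else float("nan")
    return sr, avg

# two successful runs (120 and 80 iterations) and one failed run
sr, avg = summarize_runs([(1e-4, 120), (5e-4, 80), (0.3, None)])
```

Counting only successful runs in the average, as the text specifies, keeps a few failed runs from inflating the reported convergence speed.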

For a fair comparison, all algorithms use the same unified configuration, and the initial velocity is set to zero. For each test function, 1000 independent runs are performed in Sections 4.2–4.4, and 30 independent runs are performed in Section 4.5. The allowed maximum number of iterations is used as the termination criterion for all algorithms. All algorithms are implemented in MATLAB R2018b and executed on the same PC with an Intel® Core(TM) i7-7700 CPU @ 3.6 GHz and 16 GB RAM.

4.2. Parameter Analysis

Based on the standard PSO with ωcos, 15 different configurations of a1, a2, and a3 are compared on the benchmark functions f1–f6. In these configurations, the duration of each stage changes in steps of tmax/8. Therefore, a relatively good parameter configuration can be obtained. All variants are tested with the experimental settings in Section 4.1.

Table 4 shows that the best performance is obtained under the recommended configuration of a1, a2, and a3. In Figure 2, we can see that the curve of ωcos with this configuration satisfies the requirements of the multistage inertia weight.

The inertia weights of particles that are ranked in the middle remain constant, so the value of b2 is equal to 1. Particles that are already in the ROI need a small inertia weight to exploit, and particles that are far away from the ROI need a large inertia weight to explore. Therefore, b1 should be less than 1 and b3 should be greater than 1. A minor adjustment of the inertia weight enhances the cooperation within the swarm. If the inertia weight becomes too small, the diversity of the swarm will decrease, which leads to premature convergence. If the inertia weight becomes too large, it will lead to low convergence accuracy because particles outside the ROI insufficiently participate in the local search.

The effects of b1 and b3 are relatively independent. Therefore, the effects of different values of b1 or b3 on the standard PSO with the RIWs are compared separately. The comparative experiments are based on the benchmark functions f1–f6. All variants are tested with the experimental settings in Section 4.1.

In Table 5, the three best results are obtained under the recommended value of b1. In Table 6, the three best results are obtained under the recommended value of b3. With these values, the adjustment of the inertia weight is not large. Therefore, they are selected as the recommended parameters of the RIWs.

4.3. Comparing Inertia Weight with Other Forms of Inertia Weight

To compare ωcos with the other forms of inertia weight, they are all added into the standard PSO. The configuration of ωcos is elaborated in Section 3.1, and the compared inertia weights use the configurations described in Section 2.2 with ωini = 0.9 and ωend = 0.4. For brevity, the standard PSO with inertia weight ωcos is abbreviated as PSO-ωcos, and so on. All variants are tested with the experimental settings in Section 4.1.

Table 7 shows the performance of standard PSO variants with nine different forms of inertia weight. If the SR of a variant is 0, its average number is omitted. We can find that PSO-ωcos has the highest SR in most of the benchmark functions and still obtains the second highest SR in the remaining ones. ωcos maintains a large value for a period of time, so its global search ability is improved. PSO-ωcos has the smallest mean in four of the benchmark functions and the smallest SD in three of them. In two benchmark functions, another variant has the smallest mean and SD, but its SR is still lower than that of PSO-ωcos. ωcos adopts a multistage strategy to meticulously guide the behaviour of particles at different stages. Therefore, PSO-ωcos obtains a better convergence quality than the other variants. PSO-ωcos also has the fastest convergence speed in one benchmark function and a moderate convergence speed in the other five benchmark functions.

Figure 6 shows the convergence characteristics of the nine variants. The convergence speed of PSO-ωcos is not very fast at the beginning, but PSO-ωcos converges to the smallest fitness value in all six benchmark functions, and in two of them it clearly outperforms the other variants. PSO-ωcos is able to avoid becoming trapped in a local optimum and converts to local search quickly and smoothly. These factors help PSO-ωcos converge to an accurate solution.

4.4. Comparing Uniform Initialization with Other Particle Initializations

Random initialization, chaotic initialization, opposition-based initialization, and uniform initialization are compared on the benchmark functions . The four particle initializations are all added to the standard PSO- to be compared. The configurations of random initialization, chaotic initialization, and opposition-based initialization are elaborated in Section 2.3. For chaotic initialization, is set to 4. Uniform initialization adopts the recommended configuration in Section 3.2. The four variants are abbreviated as PSO-rand, PSO-chaotic, PSO-opposition, and PSO-uniform for simplicity. All variants are tested on the experimental settings in Section 4.1.
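For reference, the four initialization schemes can be sketched as follows. The logistic map with control parameter mu = 4 is a common choice for chaotic initialization, and the `uniform_init` shown here is a Latin-hypercube-style stratification; both are illustrative assumptions and may differ in detail from the exact configurations in Sections 2.3 and 3.2:

```python
import random

def random_init(n, d, lo, hi):
    # Standard uniform-random initialization.
    return [[random.uniform(lo, hi) for _ in range(d)] for _ in range(n)]

def chaotic_init(n, d, lo, hi, mu=4.0):
    # Logistic-map chaotic sequence mapped into [lo, hi];
    # mu = 4 is the fully chaotic regime of the map.
    z = random.uniform(0.01, 0.99)
    swarm = []
    for _ in range(n):
        particle = []
        for _ in range(d):
            z = mu * z * (1.0 - z)
            particle.append(lo + z * (hi - lo))
        swarm.append(particle)
    return swarm

def opposition_init(n, d, lo, hi, f):
    # Generate n random particles plus their opposites lo + hi - x,
    # then keep the n fittest of the 2n candidates (f is minimized).
    base = random_init(n, d, lo, hi)
    opp = [[lo + hi - x for x in p] for p in base]
    return sorted(base + opp, key=f)[:n]

def uniform_init(n, d, lo, hi):
    # One interpretation of "uniform" spreading (an assumption here):
    # stratify each dimension into n equal bins and shuffle the bin
    # order per dimension, Latin-hypercube style, so no region is empty.
    cols = []
    for _ in range(d):
        bins = list(range(n))
        random.shuffle(bins)
        cols.append([lo + (b + random.random()) * (hi - lo) / n for b in bins])
    return [[cols[j][i] for j in range(d)] for i in range(n)]
```

The key contrast: random and chaotic initialization can leave regions of the search space empty, opposition-based initialization doubles coverage via mirrored candidates, and stratified placement guarantees every bin of every dimension holds exactly one particle.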

From Table 8, it can be seen that PSO-uniform performs better than the compared variants. PSO-uniform obtains the smallest mean on five benchmark functions and the highest SR on four benchmark functions. If the initial particles gather around a local optimum, the iteration is very likely to converge prematurely. Uniform initialization reduces the possibility of this case: it distributes the initial particles more uniformly and makes them more likely to approach the global optimum. This improves the SR and convergence quality of the PSO algorithm. Except for , PSO-uniform has at least the second-smallest average number. In addition, it has the smallest average number in . It can be concluded that PSO-uniform has better global search ability and a better convergence speed in .

4.5. Comparing UCPSO with Other Variants of PSO

To analyse the performance of UCPSO, it is compared with PSO, MCJPSO [23], PSO-DLS [25], EPSO [26], HCLPSO [27], and GLPSO [28] on the CEC2020 benchmark functions. The problem dimension of the experiment is set to . UCPSO adopts the following configuration: , , , , , , , and . All algorithms are tested on the experimental settings in Section 4.1. For each test function, 30 independent runs are performed.

From Table 9, it can be observed that UCPSO outperforms PSO and PSO-DLS in 20-dimensional CEC2020 benchmark functions. Except for F1, F8, and F9, UCPSO has better results than MCJPSO. This indicates that the performance of PSO is enhanced by the proposed three improvements. HCLPSO achieves an excellent result in this experiment. In most benchmark functions, UCPSO can keep up with HCLPSO. In F4 and F10, UCPSO performs almost as well as HCLPSO. In F5, UCPSO achieves the best result. In hybrid and composition functions, UCPSO obtains good results and outperforms GLPSO. This proves that the performance of UCPSO is very competitive and that the proposed three improvements are effective.

In Figure 7, the median convergence curves of the four types of benchmark functions are shown. We can see that UCPSO has good convergence performance, especially in the first 500 iterations. UCPSO converges fast and has a high level of convergence accuracy, especially in F4 and F5. UCPSO can maintain strong exploration and exploitation abilities and converge to an accurate solution in a short time.

4.6. Algorithm Complexity

This section analyses the computational complexity of UCPSO. The computational cost of the original PSO involves the initialization , the evaluation , and the velocity and position updates for each particle. is the dimensionality of the search space, and is the allowed maximum number of iterations. The computational complexity of PSO can be estimated as . Therefore, the computational complexity of the original PSO is . The cosine inertia weight can be calculated in advance, so it is used directly during the iteration process. Uniform initialization adds a sorting process before the iteration begins. RIWs adds a sorting and assignment process to each iteration. Therefore, the computational complexity of UCPSO can be estimated as . Because the number of particles is usually small, the computational complexity of UCPSO is too.
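The per-iteration cost counted above can be made concrete with a minimal global-best PSO loop (a generic sketch, not UCPSO itself): every iteration performs O(n·d) update work across n particles and d dimensions, giving O(T·n·d) overall.

```python
import random

def pso(f, d, n, T, lo, hi, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO. Each iteration touches every particle and
    every dimension once, so the total cost is O(T * n * d) updates plus
    T * n fitness evaluations."""
    X = [[random.uniform(lo, hi) for _ in range(d)] for _ in range(n)]
    V = [[0.0] * d for _ in range(n)]
    P = [x[:] for x in X]                       # personal-best positions
    pf = [f(x) for x in X]                      # personal-best fitness
    g = min(range(n), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                      # global best
    for _ in range(T):
        for i in range(n):
            for j in range(d):                  # O(d) work per particle
                V[i][j] = (w * V[i][j]
                           + c1 * random.random() * (P[i][j] - X[i][j])
                           + c2 * random.random() * (G[j] - X[i][j]))
                X[i][j] = min(hi, max(lo, X[i][j] + V[i][j]))
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    G, gf = X[i][:], fx
    return G, gf
```

Sorting n particles, as uniform initialization and RIWs do, adds only O(n log n) per occurrence, which is dominated by the O(n·d) update work when n is small.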

An experiment is carried out to compare the computational complexity of UCPSO with that of other PSO variants. T0 is the time to run the following test code, T1 is the time to execute 40,000 evaluations of benchmark function F1 by itself with 20 dimensions, and T2 is the mean time to execute the algorithm with 40,000 evaluations of F1 with 20 dimensions over 30 runs. The number of particles is 20.

According to Table 10, UCPSO spends the least time on F1 apart from PSO. PSO-DLS is up to two times slower than UCPSO. The computational complexity of UCPSO is lower than those of PSO-DLS, EPSO, and HCLPSO. Therefore, UCPSO is a relatively fast PSO algorithm with competitive performance. The three improvements in UCPSO do not greatly increase the computational complexity.

4.7. Application to Real-World Problems

In this part, UCPSO is applied to solve real-world engineering optimization problems. PSO and UCPSO are tested on P1 and P4 of the CEC2011 real-world optimization problems, as shown in Table 11. For each test problem, 30 independent runs are performed. The population size is set to 20, and the maximum number of iterations is set to 2000. The allowed maximum number of iterations is used as the termination criterion for all algorithms.

4.7.1. Parameter Estimation for Frequency-Modulated (FM) Sound Waves

Frequency-modulated (FM) sound wave synthesis plays an important role in several modern music systems [38]. The objective of this problem is to minimize the sum of squared errors between the following equations, where . The dimension of this problem is 6, and the search range is . The fitness function is shown as
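The equations themselves are omitted above. The sketch below follows the standard CEC2011 definition of this problem, with target parameters (1.0, 5.0, 1.5, 4.8, 2.0, 4.9) and θ = 2π/100; it is assumed, not confirmed by the text, that this is the exact formulation used:

```python
import math

THETA = 2 * math.pi / 100
TARGET = (1.0, 5.0, 1.5, 4.8, 2.0, 4.9)   # target wave parameters (assumed
                                           # standard CEC2011 P1 values)

def fm_wave(p, t):
    # Nested FM synthesis: y(t) = a1 sin(w1 t θ + a2 sin(w2 t θ + a3 sin(w3 t θ))).
    a1, w1, a2, w2, a3, w3 = p
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

def fm_fitness(p):
    # Sum of squared errors between the estimated and target waves, t = 0..100.
    return sum((fm_wave(p, t) - fm_wave(TARGET, t)) ** 2 for t in range(101))
```

The fitness is zero exactly at the target parameter vector, and the nested sines make the landscape highly multimodal, which is why this problem is a common stress test for global optimizers.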

4.7.2. Optimal Control of a Nonlinear Stirred Tank Reactor

This problem is a multimodal optimal control problem. It describes a first-order irreversible chemical reaction carried out in a continuous stirred tank reactor [38]. This chemical process is modelled by two nonlinear differential equations, where is the flow rate of the cooling fluid, is the dimensionless steady-state temperature, and is the deviation from the dimensionless steady-state concentration. The fitness function of this problem is

The initial condition is . The search range is unconstrained, but the initial range of is . The dimension of this problem is 1.
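As an illustration, the fitness can be evaluated by numerically integrating the two ODEs. The sketch below uses the commonly cited CEC2011 form of the dynamics and performance index; the initial condition (0.09, 0.09), the horizon t_f = 0.72, and the forward-Euler scheme are all assumptions of this sketch:

```python
import math

def reactor_fitness(u, tf=0.72, steps=1000):
    # Forward-Euler integration of the reactor ODEs with a constant
    # control u (the problem dimension is 1). The ODE form, tf, and the
    # initial condition are assumed from the common CEC2011 statement.
    x1, x2 = 0.09, 0.09
    dt = tf / steps
    J = 0.0
    for _ in range(steps):
        r = (x2 + 0.5) * math.exp(25.0 * x1 / (x1 + 2.0))  # reaction term
        dx1 = -(2.0 + u) * (x1 + 0.25) + r
        dx2 = 0.5 - x2 - r
        J += (x1 * x1 + x2 * x2 + 0.1 * u * u) * dt        # running cost
        x1 += dx1 * dt
        x2 += dx2 * dt
    return J
```

A finer integration scheme (e.g., Runge-Kutta) would give a more accurate cost, but the Euler sketch is enough to show how a scalar control value maps to a fitness value for the optimizer.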

As shown in Table 12, the best result of UCPSO is slightly worse than the best result of PSO, but the worst result of UCPSO is better than the worst result of PSO. In Table 13, the best and worst results of UCPSO are better than the best and worst results of PSO, respectively. According to the results on P1 and P4, UCPSO has the ability to solve real-world engineering optimization problems.

5. Conclusion

In this paper, UCPSO is proposed to prevent PSO from falling into a local optimum and to improve the comprehensive performance of PSO. It adopts a variable-period cosine inertia weight , uniform initialization, and a rank-based strategy for individual particle inertia weights (RIWs). can satisfy the requirements on the inertia weight at different stages and better balance exploration and exploitation. Uniform initialization is able to avoid the aggregation of initial particles. RIWs increases the diversity of the swarm and improves exploration and exploitation at the same time. These three improvements enhance the global search ability of PSO and ensure the competitive comprehensive performance of UCPSO. Extensive tests on benchmark functions validate the effectiveness of the improvements and the performance of UCPSO.

In future work, we will perform more experiments to obtain a better configuration of parameters. We intend to apply UCPSO to practical engineering fields, such as clustering, parameter optimization, image segmentation, and industrial scheduling. After that, we will continue to study other forms of inertia weight and investigate more effective improvements that are easy to implement.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Key R&D Program of China (2018YFB1308400).