Abstract

Metaheuristic algorithms are often applied to global function optimization problems. To overcome the poor real-time performance and low precision of the basic salp swarm algorithm, this paper introduces a novel hybrid algorithm inspired by the perturbation weight mechanism. The proposed perturbation weight salp swarm algorithm has the advantages of a broad search scope and a strong balance between exploration and exploitation and retains a relatively low computational complexity when dealing with numerous large-scale problems. A new coefficient factor is introduced to the basic salp swarm algorithm, and new update strategies for the leader position and the followers are introduced in the search phase. The new leader position updating strategy has a specific bounded scope and strong search performance, thus accelerating the iteration process. The new follower updating strategy maintains the diversity of feasible solutions while reducing the computational load. This paper describes the application of the proposed algorithm to low-dimension and variable-dimension functions. This paper also presents iteration curves, box-plot charts, and search-path graphics to verify the accuracy of the proposed algorithm. The experimental results demonstrate that the perturbation weight salp swarm algorithm offers a better search speed and search balance than the basic salp swarm algorithm in different environments.

1. Introduction

In many engineering fields, numerous optimization problems must be solved under complicated constraints, over large search domains, and with high complexity [1–3]. Traditional mathematical strategies, such as the steepest descent method and the variable scale method, can only handle simple and continuous functions [4, 5]. Thus, complex features such as nonlinearity, multiple variables, multiple constraints, and multiple dimensions require new optimization strategies that have strong calculation abilities and a high degree of precision [6–8]. Intelligent metaheuristic algorithms have received considerable attention from researchers, and rapid improvements in such techniques have been made in recent years as a result of their widespread utilization, enhanced computational technologies, high practicability, and fault tolerance [9–13].

Optimization algorithms have strong prospects in numerous practical industrial fields and theoretical mathematics applications, such as global numerical optimization [14], path planning [15], clustering analysis [16], 0-1 knapsack problems [17], image segmentation [18], PID tuning [19], obstacle avoidance for robotic manipulators [20], and feature selection [21]. All of these areas need algorithms that can obtain more precise parameters. In recent years, scholars have proposed many advanced metaheuristic algorithms, such as monarch butterfly optimization (MBO) [22], the beetle antennae search algorithm (BAS) [23], the earthworm optimization algorithm (EWA) [24], elephant herding optimization (EHO) [25], the crow search algorithm (CSA) [26], and the moth search algorithm (MS) [27]. MBO, driven mainly by the migration operator and the butterfly adjusting operator, is well suited to parallel search and balances the trade-off between intensification and diversification. BAS offers both individual recognition and environmental recognition abilities and has a simple implementation. In EWA, a Cauchy mutation allows certain earthworms to escape from local optima, enhances the searching ability of the algorithm, and helps the whole population move to better positions. EHO comprises two operators, the clan updating operator and the separating operator; the worst elephant position is replaced by a randomly generated position, which significantly accelerates convergence and avoids premature and local convergence. CSA applies a population of seekers to explore the search space; the use of a population increases the probability of finding a feasible position and escaping from local optima. The MS search process can be seen as exploitation and exploration, and balancing the two is essential; MS shows good performance and effectiveness.

The salp swarm algorithm (SSA), a nature-inspired metaheuristic algorithm proposed by Mirjalili et al. in 2017 [28], displays promising performance on global optimization functions. SSA imitates the living and predation habits of salps, and the mathematical model of SSA divides the population into two groups: one leader and the followers. The leader is the first salp at the front of the salp chain, whereas the other salps can be seen as followers. The leader indirectly guides the salp swarm as the salps follow each other. SSA has exploration and local-optima-avoidance abilities, which arise because salps tend to interact with one another, so they do not gravitate towards a local feasible solution easily. The salp chain makes SSA explore the search space and gradually move towards the global optimum, which demonstrates the strong exploitation of SSA. SSA converges towards the food position in proportion to the iteration number, because the connections along the chain also pull the other salps towards the food position. In addition, it has been observed that SSA can balance exploration and exploitation. Owing to its distinguishing characteristics, including simple code and easy implementation, SSA has become one of the most actively studied algorithms, with applications such as node localization in wireless sensor networks [29], Takagi–Sugeno fuzzy logic controller design [30], extreme learning machine optimization [31], IIR wideband digital differentiator and integrator design [32], parameter identification of photovoltaic cell models [33], parameter extraction for PEM fuel cells [34], passive sonar target classification [35], airfoil-based Savonius wind turbine optimization [36], model predictive controller design [37], and soil water retention curve parameter estimation [38]. Different salp swarm algorithm variants are used in many areas. Wan et al. [39] achieved an MPPT controller by combining the salp swarm algorithm with the grey wolf optimizer. Gao et al. [40] combined SSA with quantum swarm intelligence and proposed the quantum salp swarm algorithm to solve Nakagami-m quantile functions. Xing and Jia [41] introduced the Lévy flight salp swarm algorithm, which can eliminate the problem of getting stuck in local optima, and applied it to multilevel color image segmentation. The literature [42] designed an advanced Lévy flight salp swarm algorithm for hydraulic systems. Majhi et al. [43] drafted a chaotic salp swarm algorithm based on the fire neural model and quadratic integration. Neggaz et al. [44] improved the leader of the salp swarm algorithm using the sine cosine algorithm and a disrupt operator, in which the leader position is updated by the sine cosine algorithm and the disrupt operator is then applied. In [45], operators from the moth-flame optimization (MFO) algorithm were applied to weaken the limitations of basic SSA, and the proposed algorithm is called SSAMFO. Ibrahim et al. [46] devised the hybrid algorithm SSAPSO, combining SSA and PSO, in which the exploration and exploitation steps of SSA are improved. Panda and Majhi [47] introduced an improved version of SSA, which boosts the searching performance of SSA by using space transformation search. In [48], a new binary SSA version called BSSA was drafted based on an arctan transformation. In [49], a novel hybrid algorithm based on SSA and chaos theory was proposed, and the capability of the proposed algorithm to find an optimal feature subset enhances classification accuracy. Wu et al. [50] proposed an improved salp swarm algorithm based on a weight factor and adaptive mutation, and testing results showed good convergence performance in escaping local optima when compared with basic SSA.

Xiang et al. [51] proposed a modified salp swarm algorithm called the polar coordinate salp swarm algorithm (PSSA), which is inspired by the spiral aggregation chain and foraging trajectory of salps. Hegazy et al. [52] added a new control parameter to basic SSA to adjust the present best solution; the new method is called the improved salp swarm algorithm (ISSA). Qais et al. [53] introduced an enhanced salp swarm algorithm (ESSA) to improve the power point tracking and fault ride-through capability of a grid-tied permanent magnet synchronous generator driven by a variable speed wind turbine.

In basic SSA, the leader salp searches for the best solution to a given problem using the difference between the lower and upper searching bounds, which means that local optima cannot be sufficiently exploited during the optimization procedure. The expression factor with a fixed coefficient is an exponential function of e, and this exponential term changes sharply in the later iteration phase, which causes premature convergence. The followers update their next positions using their neighbors' positions; this single updating mechanism is unfavorable in terms of algorithm diversity. To overcome the above problems and enhance the performance of SSA, this paper describes the perturbation weight salp swarm algorithm (PWSSA). A new coefficient factor, a new leader position updating strategy, and a new follower updating strategy are added to the basic salp swarm algorithm. PWSSA uses the perturbation weight mechanism to weaken the distance difference between the best solution and each solution, and applies an asymptotic circular searching mechanism to find a better leader position at a faster speed. The followers' positions change more and more slightly as the iterations increase in PWSSA. PWSSA can balance the leader position against the other followers' positions and can weaken the exponential explosion problem of basic SSA. As a result, the effectiveness of the search orientation is significantly enhanced. For the experiments and discussion, this paper applies different algorithms to different function experiments, including low-dimension functions and variable-dimension functions, and iteration curves, box-plot charts, and searching path graphics are given to show the strong searching performance of PWSSA. All experimental results demonstrate that the proposed algorithm has stronger searching accuracy and a better exploration balance than the basic salp swarm algorithm.

The rest of this paper is organized as follows: in Section 2, the basic salp swarm algorithm is presented. In Section 3, the perturbation weight salp swarm algorithm is proposed. Experimental parameters, experimental environments, results, and discussion are given in Section 4. In Section 5, the conclusion is drawn.

2. Salp Swarm Algorithm

The salp, which lives in the sea, is a transparent organism similar to a jellyfish. Mirjalili et al. introduced the salp swarm algorithm based on the salp predation strategy, a chain-like behavior relying on the chain mechanism of the group. SSA, which uses this chain behavior to find the optimal solution, is one of the evolutionary metaheuristic algorithms. In the salp swarm, all salps are divided into two parts: a leader and followers. The salp at the front of the salp chain is the leader, whereas the other salps can be seen as the followers. In the salp predation mechanism, the leader at the front of the chain guides the followers in searching for food, and all followers, each following the salp ahead of it, pass along food signals to keep the chain flexible.

In this paper, each salp searches for food in an N × D dimensional search space, where N represents the population size and D represents the search dimension. Hence, the ith salp position in the dth dimension can be represented as x_d^i, and the whole population can be written as the position matrix

X = (x_d^i), i = 1, 2, …, N, d = 1, 2, …, D. (1)

The leader position x_d^1 is assigned in the d-dimensional search space.

The food source, which can be seen as the best solution of the function, is also set to be present in the search area and is targeted by the salp chain. The leader updates its position according to the food source position. The position of the leader can be represented as

x_d^1 = Fd + c1((ubd − lbd)c2 + lbd), c3 ≥ p,
x_d^1 = Fd − c1((ubd − lbd)c2 + lbd), c3 < p, (2)

where x_d^1 is the leader position, Fd is the food position, ubd is the upper bound of the dth dimension of the search space, and lbd is the lower bound of the dth dimension of the search space. Parameters c2 and c3 are random numbers uniformly obtained in the interval [0, 1], and p is the probability coefficient. The parameter c1 indicates the expression coefficient and can be represented as

c1 = 2e^(−(4t/T)^2), (3)

where t is the current iteration number, T is the maximum number of iterations, and e is the natural base.

In each searching procedure, each follower tracks the leader position by following the other followers. Each follower position can be defined as follows:

x_d^i = (x_d^i + x_d^(i−1))/2, i ≥ 2, (4)

where x_d^i and x_d^(i−1) denote the ith follower position and its neighbor position in the dth dimension.

The pseudocode of the basic SSA is given in Algorithm 1. The main steps of SSA can be summarized in the pseudocode as follows:

Input: fitness function F(x). Dimension D. Population size N. Each ith salp position x_d^i. Best position Fd. Best function value F(Fd). Bounds [lbd, ubd]. t = 0. Set T and p.
Output: Fd, F(Fd).
while (t < T)
 c1 = 2e^(−(4t/T)^2)
 c2 ∈ [0, 1]
 c3 ∈ [0, 1]
 for 1 (i = 1 : N)
  if 1 (i = 1)
   if 2 (c3 ≥ p)
    x_d^1 = Fd + c1((ubd − lbd)c2 + lbd)
   else
    x_d^1 = Fd − c1((ubd − lbd)c2 + lbd)
   end if 2
  else
   x_d^i = (x_d^i + x_d^(i−1))/2
  end if 1
  if 3 (F(x_d^i) is better than F(Fd))
   Fd = x_d^i
   F(Fd) = F(x_d^i)
  end if 3
 end for 1
 t = t + 1
end while
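Algorithm 1 above can be sketched as a short, runnable implementation. This is a best-effort reading of the pseudocode rather than the authors' code: the function name `ssa`, the random initialization, and the boundary clipping are our assumptions.

```python
import numpy as np

def ssa(f, lb, ub, n=50, dim=2, max_iter=1000, p=0.5, seed=0):
    """Basic salp swarm algorithm (sketch of Algorithm 1)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.full(dim, float(lb)), np.full(dim, float(ub))
    x = rng.uniform(lb, ub, (n, dim))            # salp positions
    fit = np.apply_along_axis(f, 1, x)
    best = fit.argmin()
    food, food_val = x[best].copy(), fit[best]   # Fd and F(Fd)
    for t in range(max_iter):
        c1 = 2 * np.exp(-(4 * t / max_iter) ** 2)   # expression coefficient, eq. (3)
        for i in range(n):
            if i == 0:                               # leader update, eq. (2)
                c2, c3 = rng.random(dim), rng.random()
                step = c1 * ((ub - lb) * c2 + lb)
                x[0] = food + step if c3 >= p else food - step
            else:                                    # follower update, eq. (4)
                x[i] = (x[i] + x[i - 1]) / 2
            x[i] = np.clip(x[i], lb, ub)             # keep salps in bounds (assumption)
            v = f(x[i])
            if v < food_val:                         # greedy food-source update
                food, food_val = x[i].copy(), v
    return food, food_val
```

For example, on a two-dimensional sphere function the chain quickly contracts around the origin.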

3. Perturbation Weight Salp Swarm Algorithm

In SSA, the leader guides the followers to find food according to the difference between the lower and upper search bounds, but when the problem has a large-scale optimization domain, the large search scope means that the local search is insufficient and the neighborhood information near the local optimum is insufficiently exploited. The expression factor c1 is an exponential function of e with a fixed coefficient, and the exponential term changes explosively in the later iterations; therefore, the leader search strategy suffers from premature convergence and low search precision. The parameter c2 is randomly selected in the range [0, 1], which is not suitable for high-precision search in the later iterations. The positions of the other salps are updated using the average position of each follower and its neighbor; these updated follower positions have a single direction and a degree of blindness, which causes SSA to fall into local extrema and wastes iterations. To obtain a better search strategy and avoid blindness in the search process, this paper adds a variable perturbation weight mechanism to the basic SSA, and the proposed algorithm is called the perturbation weight salp swarm algorithm (PWSSA).

The perturbation weight mechanism works by changing the distance difference between the optimum solution and each population solution. The search range is regulated by applying an asymptotic circular search to obtain a better leader search strategy with faster speed and higher balance. The followers' positions improve over time, and the position adjustments change more and more slightly as the iterations increase. The perturbation weight mechanism thus helps SSA obtain the optimum solution more reliably. The new factors c1 and c2 can be updated as follows:

c1,new = u1(1 − t/T), (5)
c2,new = u2(1 − t/T), (6)

where u1 and u2 follow the standard normal distribution, u1 ∼ N(0, 1) and u2 ∼ N(0, 1). The standard normal distribution has the advantages of concentration, symmetry, and uniform variability.

The new leader position can be updated as follows:

x_d^1 = Fd + c1,new((Fd − x_d^1)c2,new + lbd), c3 ≥ p,
x_d^1 = Fd − c1,new((Fd − x_d^1)c2,new + lbd), c3 < p. (7)

To increase the diversity of the followers' positions, a multidirectional crossing search strategy is added to the basic SSA:

x_d^i = λ3[(λ1Fd − x_d^i) + (λ2Fd − x_d^(i−1))], (8)

where λ1, λ2, and λ3 are random parameters in the range [−1, 1].

The specific steps of PWSSA are described as follows:

Step 1. Set the salp population size N and the search dimension D. Define the maximum number of iterations T. Determine the probability coefficient p. Let t = 0. Each ith salp position can be written as x_d^i (i = 1, 2, …, N; d = 1, 2, …, D).
Step 2. Begin the iteration. Judge whether i is equal to one. If i is equal to one, jump to Step 3; if not, jump to Step 4.
Step 3. Use equation (5) to compute the factor c1,new and equation (6) to compute the factor c2,new. Set c3 in the range [0, 1]. Compute the current optimal solution Fd. Judge whether c3 is larger than p. If c3 is larger than p, the leader position is given by the first part of equation (7); otherwise, it is given by the second part of equation (7).
Step 4. Set the parameters λ1, λ2, and λ3 in the range [−1, 1]. Update the followers' positions using equation (8).
Step 5. Record the global optimal solution. If there is a better solution, replace Fd.
Step 6. Set t = t + 1. Judge whether the current iteration t is equal to the maximum number of iterations T; if t is equal to T, stop the iteration. If not, jump to Step 2.

The PWSSA main step can be summarized in the pseudocode shown in Algorithm 2, and the PWSSA main step flow chart is shown in Figure 1.

Input: fitness function F(x). Dimension D. Population size N. Each ith salp position x_d^i. Best position Fd. Best function value F(Fd). Bounds [lbd, ubd]. t = 0. Set T and p.
Output: Fd, F(Fd).
while (t < T)
 c1new = u1(1 − t/T)
 c2new = u2(1 − t/T)
 c3 ∈ [0, 1]
 for 1 (i = 1 : N)
  if 1 (i = 1)
   if 2 (c3 ≥ p)
    x_d^1 = Fd + c1new((Fd − x_d^1)c2new + lbd)
   else
    x_d^1 = Fd − c1new((Fd − x_d^1)c2new + lbd)
   end if 2
  else
   x_d^i = λ3[(λ1Fd − x_d^i) + (λ2Fd − x_d^(i−1))]
  end if 1
  if 3 (F(x_d^i) is better than F(Fd))
   Fd = x_d^i
   F(Fd) = F(x_d^i)
  end if 3
 end for 1
 t = t + 1
end while
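Algorithm 2 can likewise be sketched in code. This is a best-effort reading of the pseudocode rather than the authors' implementation: the exact follower combination using three uniform random factors and the boundary clipping are assumptions on our part.

```python
import numpy as np

def pwssa(f, lb, ub, n=50, dim=2, max_iter=1000, p=0.5, seed=0):
    """Perturbation weight salp swarm algorithm (sketch of Algorithm 2)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.full(dim, float(lb)), np.full(dim, float(ub))
    x = rng.uniform(lb, ub, (n, dim))            # salp positions
    fit = np.apply_along_axis(f, 1, x)
    best = fit.argmin()
    food, food_val = x[best].copy(), fit[best]   # Fd and F(Fd)
    for t in range(max_iter):
        w = 1.0 - t / max_iter                   # perturbation weight, shrinks over time
        c1 = rng.standard_normal() * w           # c1,new = u1(1 - t/T)
        c2 = rng.standard_normal() * w           # c2,new = u2(1 - t/T)
        c3 = rng.random()
        for i in range(n):
            if i == 0:                           # leader: asymptotic circular search
                step = c1 * ((food - x[0]) * c2 + lb)
                x[0] = food + step if c3 >= p else food - step
            else:                                # followers: multidirectional crossing
                l1, l2, l3 = rng.uniform(-1.0, 1.0, 3)
                x[i] = l3 * ((l1 * food - x[i]) + (l2 * food - x[i - 1]))
            x[i] = np.clip(x[i], lb, ub)         # keep salps in bounds (assumption)
            v = f(x[i])
            if v < food_val:                     # greedy food-source update
                food, food_val = x[i].copy(), v
    return food, food_val
```

Because the perturbation weight 1 − t/T scales both new factors, the leader's sampling radius around the food source contracts as the iterations proceed, which is the fast local search the text describes.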

4. Results and Discussion

4.1. Experimental Parameters and Environments

Benchmark function testing is a popular and common way to indicate the performance of intelligent algorithms. This paper introduces benchmark functions to exhibit the superior performance of the proposed algorithm, which is evaluated on classical benchmark functions in this section. To test the ability of the proposed algorithm to solve complex functions of different dimensions, eight low-dimension functions (f1–f8) and four variable-dimension functions (f9–f12) were chosen for algorithm testing in Table 1. In Table 1, D, scope, and aim represent the function dimension, the searching range, and the ideal value, respectively.

Low-dimension functions (f1–f8) are used to measure the global searching ability of the algorithms. Variable-dimension functions (f9–f12) are very difficult to converge to the global optimal solution because they have unevenly distributed local optima with strong oscillation and nonconvexity, especially in the large-scale, high-dimension case. To test the proposed algorithm from multiple sides, dimensions of 2 and 100 were selected for the variable-dimension functions in this paper. In the original SSA literature, the authors compared SSA with seven popular algorithms, and SSA performed better than the comparison algorithms. To avoid repeated and unnecessary experiments, this paper selected other algorithms for the comparative experiments. The comparison algorithms were SSA, simulated annealing (SA) [54], the Lévy flight trajectory-based whale optimization algorithm (LWOA) [55], and the Lévy flight salp swarm algorithm (LSSA) [41]. All algorithm processes and details can be found in the original literature.

SA is inspired by an analogy to the physical annealing procedure in metals and is a local search algorithm proposed in the early 1980s. The idea of the annealing procedure is to heat solid metal to a high temperature, so that the atoms of the metal are in a stochastic state, and then cool the metal down slowly according to a particular schedule. Starting from a random solution and a fixed initial temperature, SA controls the process through the Metropolis criterion and a group of parameters called the cooling schedule. SA has two initial parameters: the initial temperature T0 and the decay factor k. For SA, T0 was set to 100 and k to 0.95.
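As an illustration of the Metropolis criterion with geometric cooling (T0 = 100, k = 0.95, matching the SA settings above), a minimal one-dimensional sketch might look as follows; the step size and loop counts are illustrative assumptions, not the paper's settings.

```python
import math
import random

def sa_minimize(f, x0, step=1.0, t0=100.0, k=0.95, n_outer=100, n_inner=20, seed=0):
    """Simulated annealing with geometric cooling T <- k*T (minimal 1-D sketch)."""
    rng = random.Random(seed)
    x, fx, temp = x0, f(x0), t0
    best, best_val = x, fx
    for _ in range(n_outer):
        for _ in range(n_inner):
            cand = x + rng.uniform(-step, step)      # random neighbor move
            fc = f(cand)
            # Metropolis criterion: always accept improvements, accept
            # worse moves with probability exp(-(fc - fx) / temp)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
                x, fx = cand, fc
            if fx < best_val:                        # track best-so-far
                best, best_val = x, fx
        temp *= k                                    # cooling schedule
    return best, best_val
```

At high temperature almost every move is accepted (pure random walk); as the temperature decays geometrically the walk settles into a local basin.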

LWOA, which was proposed by Ling et al. in 2017, combines a Lévy flight trajectory with the whale optimization algorithm to obtain a better trade-off between exploration and exploitation in the basic whale optimization algorithm. Lévy flight is a special random search path in which the step lengths are drawn from a heavy-tailed power-law distribution. LWOA has five initial parameters: r, b, l, p, and β. For LWOA, r and p are random numbers in [0, 1], l is automatically selected in [0, 1], b = 2, and β = 1.5.

LSSA was proposed by Xing and Jia in 2019. The Lévy flight trajectory can not only maximize the diversity of the search domain but also enhance the global search ability of SSA, helping it avoid local optima. There are two parameters in LSSA: the power-law exponent β and the probability factor P. For LSSA, β = 1.5 and P = 0.5.

For PWSSA and SSA, the parameter p was set to 0.5. The initial parameter values of all algorithms were chosen according to the original literature. For each experiment with an algorithm on a benchmark function, ten independent runs were performed to obtain a fair comparison among the different algorithms. The maximum number of iterations was set to 1000, and the population size was set to 50. The best value, the worst value, the average value, and the standard deviation of each optimization were recorded. For a fair comparison, all algorithms were programmed in MATLAB (R2014b, The MathWorks, Inc., Natick, MA, USA), and all experiments were conducted on a laptop with an Intel(R) Core(TM) i5-4210U CPU at 2.30 GHz and 4 GB RAM. All data and figures were also produced in MATLAB.
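The per-function statistics described above (ten independent runs per algorithm) can be collected with a small helper; the function name `summarize` is ours, and the use of the sample standard deviation is an assumption.

```python
import statistics

def summarize(run_results):
    """Best/worst/mean/median/std indicators over the independent runs of
    one algorithm on one benchmark function (cf. Tables 2 and 3)."""
    return {
        "best": min(run_results),
        "worst": max(run_results),
        "mean": statistics.fmean(run_results),
        "med": statistics.median(run_results),
        "std": statistics.stdev(run_results),  # sample standard deviation
    }
```

Each table cell then corresponds to one field of the returned dictionary for one (algorithm, function) pair.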

4.2. Data Discussion

To demonstrate the optimization effect, four indicators were selected to comprehensively evaluate the competitiveness of the different algorithms: the best searching value (best), the worst searching value (worst), the median (med), and the standard deviation (std). Testing results for the fixed two-dimension functions and the two-dimension cases of the variable-dimension functions are given in Table 2. Testing results for the variable-dimension functions at 100 dimensions are shown separately in Table 3. Tables 2 and 3 show that all searching values of the proposed algorithm are much closer to the ideal values in Table 1, which demonstrates that PWSSA not only obtains the best aim but also has strong searching abilities. As the dimension of the testing function increases, the accuracy of all algorithms declines, but the results for PWSSA remain consistently better than those of the other algorithms. The convergence precision and optimization success ratio of the proposed algorithm are better than those of the other algorithms for all test functions. When a set of data changes significantly, the median can be used to illustrate its central tendency; PWSSA has the smallest median value of all the test results, indicating outstanding performance compared with the other algorithms. The standard deviation, the arithmetic square root of the variance, measures the dispersion of a dataset: a large standard deviation indicates a large difference between most values and the average, whereas a small one shows that the calculated values lie close to their average. PWSSA has the smallest standard deviation of all the algorithms, demonstrating that the proposed algorithm offers good stability and produces relatively few poor results.
In PWSSA, the good solution in the current iteration is used by the followers to find a better solution in the next iteration, and random factors enhance the diversity of solutions in nonlinear high-dimension problems; hence, the testing results show that on f7, f8, and f12, PWSSA achieves the best optimization results for the best, worst, mean, and std values.

4.3. The Wilcoxon Rank Sum Test Discussion

The rank sum test is a nonparametric technique used to determine whether a result is statistically significant. Nonparametric statistical tests can be used to check algorithm performance [56]. The rank sum test arranges all data in order, from small to large, and has strong practicality because it requires no particular form of dispersion or known distribution. However, the rank sum test ignores absolute value differences in the data, which not only makes the test result approximate but also loses some test information. Wilcoxon improved the basic rank sum test by considering the direction and size of the data. The Wilcoxon rank sum test can be applied to a distribution of data to check for differences among them and offers more effective performance than the basic rank sum test. The Wilcoxon rank sum test produces p values: if the p value is less than 0.05, there is a significant difference at the 0.05 level. To further compare the performance of PWSSA with that of the other algorithms, the Wilcoxon rank sum test was applied in this paper. All p values are given in Table 4, where the proposed algorithm's results were tested against those of the other algorithms at the 0.05 significance level. For SSA, the p values for function 1 and function 6 are equal to 0.011. For SA, the p value for function 4 is equal to 0.473 and is larger than 0.05. For LWOA, the p values for function 2 and functions 4–6 are larger than 0.05. For LSSA, the p value for function 6 is equal to 0.011. All other results are less than 0.05. The Wilcoxon rank sum test results show that the proposed algorithm has the strongest search efficiency and the best finding mechanism around the best solution, which further proves that PWSSA has excellent search performance.
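For reference, a two-sided Wilcoxon rank sum test like the one above can be sketched in pure Python using the normal approximation, which is adequate for two samples of ten runs each (`scipy.stats.ranksums` computes the same statistic). The implementation below is a standard textbook construction, not code from the paper.

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank sum test p value via the normal approximation."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    vals = [v for v, _ in combined]
    ranks = [0.0] * len(vals)
    i = 0
    while i < len(vals):                 # assign average 1-based ranks to ties
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        avg = (i + j + 1) / 2.0
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)  # rank sum of sample a
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0        # mean of W under the null hypothesis
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # two-sided p value from the standard normal distribution
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Two clearly separated samples give a p value far below 0.05, while identical samples give a p value of 1.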

4.4. Iteration Curves Discussion

Iteration is the activity of repeating a feedback procedure with the purpose of reaching a desired goal. Each repetition of all procedures in an algorithm is called one iteration, and the result of each iteration provides the initial value for the next one. To exhibit the convergence speed and global search ability of all algorithms more intuitively, the average convergence curves of all algorithms applied to functions of different dimensions are displayed in Figures 2 and 3. Single logarithmic coordinates are used for a more detailed analysis. Figure 2 exhibits the two-dimensional convergence curves of PWSSA and its competitors. Figure 3 shows the iteration graphs of the algorithms on the variable-dimension functions (100D). Note that all convergence curves discussed in the following subsections are averages over ten independent executions. As the dimension increases, the optimization performance and iteration speed of all algorithms decrease, although the performance degradation of PWSSA is not severe. The proposed algorithm achieves the target value for most functions with the fastest iteration speed and highest efficiency. LSSA has better iteration rates than SSA on most functions but still cannot outperform the proposed algorithm. It is noticeable that PWSSA gives a superior global iteration rate and accuracy in comparison with the original SSA, which is easily trapped in local optima. All figures reveal that SSA becomes much poorer as the dimension increases, whereas the proposed algorithm still offers distinguished searching ability, with its convergence speed and precision ranking first on all functions. In other words, PWSSA needs fewer iterations to solve problems and is more competent than the other algorithms.
PWSSA greatly boosts the iteration speed and searching ability of the basic SSA, mainly because of the introduction of the many-sided learning and local random perturbation strategies between successive follower positions.

4.5. Box-Plot Charts Discussion

Box-plot charts are used to show dispersion information about a set of data. They have the advantages of detecting abnormal values and data skewness and are widely used to distinguish algorithm capabilities in terms of data symmetry and data dispersion. There are six parameters in a typical box-plot chart, namely, the maximum value, minimum value, median, upper quartile, lower quartile, and outliers. A set of data can be evaluated using five of these parameters. Figures 4 and 5 show box-plot charts of all algorithms on the different functions. There are many local optima in high-dimensional functions, so the aggregation degree of solutions is a crucial index for evaluating algorithm performance. If an algorithm becomes trapped around a local extremum, it can result in premature convergence. PWSSA produces the narrowest box-plot charts and fewest outliers for all functions. The median and upper/lower quartiles computed by PWSSA are lower than those given by the other algorithms, demonstrating that the collaborative random search strengthens individual diversity and avoids premature convergence. The proposed algorithm tends to obtain the best precision on most functions as the dimension increases, which is mainly contributed by the randomly generated follower positions, whereas SA and LWOA have the worst performances; SA has the largest variance of all the algorithms. All box-plot charts demonstrate that the proposed algorithm is more robust and stable than the other algorithms, and the figures show that PWSSA can avoid local extrema.

4.6. Searching Paths Discussion

To further discuss the powerful searching capability and optimization performance of PWSSA, Figure 6 gives the optimal PWSSA search path, the optimal SSA search path, and the contour plot of each function in the two-dimensional plane.

Search path figures can reveal whether an algorithm falls into a local optimal solution on complex functions. Comparing search paths with the traditional SSA, all search paths of PWSSA are shorter than those of SSA, which demonstrates the efficiency of PWSSA on function problems. PWSSA also applies a finding mechanism that tightens from the neighborhood towards the extreme point, owing to average optimality and constrained average optimality. From Figure 6, we can see that the PWSSA search path is much shorter than the SSA search path; SSA has many repeated, invalid short-distance search segments and occasional long-distance ones. The two sets of search paths show that, compared with the basic SSA, PWSSA can explore a wider range and is less affected by the number of iterations, so PWSSA has better general-purpose optimization abilities. PWSSA is also more flexible: it can not only avoid collisions with obstacles but also provide numerous feasible solutions. The proposed algorithm can balance search speed and accuracy and provide a satisfactory solution wherever possible while meeting variable-demand requirements. All search path results reveal that PWSSA can quickly reach the best solution and can be applied to scenarios with strict real-time requirements.

4.7. Time Complexity Discussion

Algorithm complexity is divided into time complexity and space complexity. Time complexity refers to the computational workload required to execute an algorithm, and space complexity refers to the computer memory required to execute it. In computer science, the time complexity is a function that qualitatively describes the running time of an algorithm and is usually expressed in the notation O(f(n)), where f(n) is a function of the input size n, such as n, n², or log n. The time complexity is asymptotic in the sense that it describes behavior as the input size approaches infinity. In other words, the time complexity describes the (linear or nonlinear) mapping between the target value and the number of test evaluations. Population initialization requires O(N × D) operations, where D denotes the dimension of the solution space and N is the population size. The overall complexity of the proposed algorithm is O(max_it × N × D), where max_it is the maximum number of iterations. To comprehensively compare the time complexity of the different algorithms, this paper measured the number of runs of every algorithm on the two-dimensional functions and selected twice the worst search value among all algorithms as the target value. Three indicators are reported over ten independent runs: the maximum number of runs (MAX), the minimum number of runs (MIN), and the average number of runs (AVE). All test results are shown in Table 5, which indicates that the maximum, minimum, and average numbers of runs of the proposed algorithm are smaller than those of the other algorithms, and that PWSSA does not easily fall into a local optimum. In addition, the results show that PWSSA has superior search ability, as it achieves better precision than the other algorithms. In summary, the main reason is that PWSSA maintains an excellent balance between the global and local search phases.
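To make the complexity figures above concrete, the following is a minimal sketch of the basic SSA loop (not the authors' PWSSA code; the leader/follower split, the test function, and all parameter names are illustrative assumptions). It shows where the O(N × D) initialization cost and the O(max_it × N × D) per-run cost arise: every iteration updates every salp in every dimension.

```python
import math
import random

def sphere(x):
    # Illustrative test objective: f(x) = sum(x_i^2), global minimum 0 at the origin.
    return sum(v * v for v in x)

def ssa(obj, dim, lb, ub, n_salps=30, max_it=200, seed=0):
    """Basic salp swarm algorithm sketch. Each iteration touches every salp in
    every dimension, giving the O(max_it * N * D) cost discussed in the text."""
    rng = random.Random(seed)
    # O(N * D): initialize the population uniformly inside [lb, ub].
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_salps)]
    food = min(pop, key=obj)[:]  # best solution (food source) found so far
    for l in range(1, max_it + 1):
        # c1 decays with the iteration count, shifting exploration to exploitation.
        c1 = 2 * math.exp(-((4 * l / max_it) ** 2))
        for i in range(n_salps):
            if i < n_salps // 2:
                # Leaders: perturb around the food source, clamped to the bounds.
                for j in range(dim):
                    step = c1 * ((ub - lb) * rng.random() + lb)
                    pop[i][j] = food[j] + step if rng.random() >= 0.5 else food[j] - step
                    pop[i][j] = min(max(pop[i][j], lb), ub)
            else:
                # Followers: chain behavior, average with the preceding salp.
                for j in range(dim):
                    pop[i][j] = (pop[i][j] + pop[i - 1][j]) / 2
            if obj(pop[i]) < obj(food):
                food = pop[i][:]
    return food, obj(food)

best, best_val = ssa(sphere, dim=2, lb=-5.0, ub=5.0)
```

Under this sketch, the perturbation weight of PWSSA would modify the leader step and the follower averaging rule, which changes the constants but not the O(max_it × N × D) asymptotic cost.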

5. Conclusions

Metaheuristic optimization is a significant area, and the most representative computational intelligence algorithms have permeated almost all areas of science and engineering. SSA is a typical metaheuristic algorithm proposed in 2017. Despite the success and popularity of SSA, several issues in the basic algorithm need to be addressed. To overcome the poor real-time stability and low precision of the basic SSA in global function optimization problems, this paper has introduced a modified version based on the perturbation weight. PWSSA mainly relies on an asymptotic circular search strategy, which achieves fast local searching and information orientation. PWSSA moves efficiently towards better function values and offers strong real-time performance. The proposed algorithm can effectively escape premature convergence in the early search phase and avoids missing the global optimal solution in the later search phase, thus achieving better search diversity and a greater chance of finding a better solution. This paper has described the results of tests using eight low-dimension and four variable-dimension functions. In comparison experiments against other algorithms, PWSSA consistently obtained the best solutions and the smallest function values, reflecting its high-quality search ability and more competitive precision. Iteration curves, box-plot charts, and search-path graphics were used to illustrate the effectiveness of the proposed algorithm, and the results show that PWSSA generally obtains better solutions than previous algorithms.
Through the analysis and comparison of the results using different mathematical methods, the superiority of the proposed algorithm was demonstrated. However, the no-free-lunch (NFL) theorem [57] states that no metaheuristic optimization algorithm is best suited for solving all optimization problems; in other words, all metaheuristics perform similarly when averaged over all possible problems. Beyond the methods used in this paper, other representative computational intelligence algorithms could also be applied to these problems, such as monarch butterfly optimization (MBO) [22], the earthworm optimization algorithm (EWA) [24], elephant herding optimization (EHO) [25], and the moth search (MS) algorithm [27]. In future work, we will focus on applying the proposed algorithm to industrial application problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This research was funded by the International Cooperation Project (grant no. 2012DFR70840).