Abstract

The optimization of high-dimensional functions is an important problem in both science and engineering. Particle swarm optimization is a technique often used for computing the global optimum of a multivariable function. In this paper, we develop a new particle swarm optimization algorithm that can accurately compute the optimal value of a high-dimensional function. The iteration process of the algorithm comprises a number of large iteration steps, each of which consists of two stages. In the first stage, an expansion procedure is utilized to effectively explore the high-dimensional variable space. In the second stage, the traditional particle swarm optimization algorithm is employed to compute the global optimal value of the function. After a large iteration step is completed, a translation step is applied to each particle in the swarm to start a new large iteration step. Based on this technique, the variable space of a function can be extensively explored. Our analysis and testing results on high-dimensional benchmark functions show that this algorithm achieves optimization results with significantly improved accuracy compared with the traditional particle swarm optimization algorithm and a few other state-of-the-art optimization algorithms based on particle swarm optimization.

1. Introduction

In both science and engineering, the particle swarm optimization (PSO) algorithm [1] is an important optimization technique that has been extensively used to find the global optima of multivariable functions. The PSO algorithm searches the variable space of a multivariable function by simulating the social behavior of a group of animals. The individuals in the group act cooperatively to find the potential location of the global optimum point, and the search pattern of each particle in the group is adjusted based on its own exploring experience and those of the other particles in the group.

Although the PSO algorithm can effectively search the variable space of a large number of multivariable functions and accurately determine their global optima, it has two disadvantages that may adversely affect its performance on some multivariable functions. First, its speed of convergence is slow, especially in cases where the function that needs to be optimized is defined on a high-dimensional space. Second, the algorithm is not guaranteed to find a global optimum and may converge to a local minimum point when the dimensionality of the function is large. A large number of techniques have thus been developed over the past two decades to further improve the performance of the PSO algorithm.

One strategy to improve the performance of the traditional PSO algorithm is to divide the individuals in the swarm into subgroups. In [2], the concept of subpopulations and a reproduction operator are introduced into the PSO algorithm. In [3], a dynamic multiswarm PSO algorithm is developed. Recently, a symbiotic particle swarm optimization (SPSO) algorithm was developed in [4] to optimize neural fuzzy networks. In [5], the traditional PSO algorithm is modified to solve multimodal function optimization problems. In [6], the particles in the swarm are divided into mentor, mentee, and independent learner groups based on differences in fitness values and their Euclidean distances from the particle with the best fitness value, resulting in a new dynamic mentoring and self-regulation-based particle swarm optimization (DMeSR-PSO) algorithm. Testing results show that the DMeSR-PSO algorithm can achieve excellent and reliable performance on a number of benchmark datasets.

Another technique often used to improve the performance of PSO algorithms is to develop new rules for updating the positions of the particles. In [7], the cognitive and social behaviors of the swarm are randomized based on chaotic sequences and the Gaussian distribution, so particles can search a much broader area than in the traditional PSO algorithm. In [8], a chaotic PSO algorithm is developed based on a virtual quadratic objective function constructed from the optimum point of an individual particle and the global optimum point. Recently, in [9], an improved PSO algorithm was developed that uses a new strategy to move each particle: particles fly to their own predefined targets instead of the location of the globally best solution found so far.

A third class of methods improves optimization performance by designing new strategies to update the velocities of particles. In [10], a self-adaptive PSO algorithm (SAPSO-MVS) that uses a novel velocity updating strategy to balance the exploring and exploiting capabilities of the PSO algorithm is developed. In [11], a new PSO-based optimization algorithm is developed that adaptively adjusts the particle velocities in each iteration step; the adjustment is based on the current distance of each particle from the location with the best fitness found so far by all particles. Recently, in [12], a mode-dependent velocity updating approach was developed to balance the local-search and global-search capabilities of the PSO algorithm, which can effectively reduce the chance of being trapped in local minima.

Recently, in addition to the methods discussed above, a large number of other techniques [2, 3, 6, 8, 13–31] have been developed to further enhance the performance of PSO-based optimization algorithms, and a comprehensive survey of these methods is available in [31]. In general, most of the existing techniques require sophisticated processing of the locations and fitness values of particles to determine the location and velocity of each particle for the next iteration step [32, 33]. These approaches thus may become computationally inefficient when the function to be optimized has a large dimensionality [34]. In practice, the optimization of such functions is often important [35]. For example, a protein sequence generally contains a few hundred amino acids, and an amino acid usually consists of around 10 atoms; a protein sequence of moderate length is thus comprised of a few thousand atoms. The native folding of a protein sequence is generally predicted by minimizing the free energy of the system formed by the atoms in the sequence. The free energy of a protein sequence is thus often a function of a few thousand variables, and the accuracy of its minimization is crucial for determining the native folding of the sequence.

In this paper, we develop a new efficient PSO algorithm for the optimization of high-dimensional functions. The algorithm performs optimization through a number of large iteration steps, each of which consists of two stages. In the first stage, an expelling force field is applied to each particle in the swarm to significantly enlarge the area the particles can explore. In the second stage, the standard PSO algorithm is applied to the swarm so that it converges to a global optimum. After a large iteration step is completed, a translation procedure is applied to each particle in the swarm to further expand the region explored by the swarm. An iteration step in this new algorithm can be performed efficiently, and the algorithm thus can be applied to the optimization of high-dimensional functions. Our analysis and testing results on high-dimensional benchmark functions show that this new algorithm can achieve significantly improved optimization performance compared with the traditional PSO algorithm and a few of its existing variants.

2. The Proposed Algorithm

The standard PSO algorithm performs the optimization of a function by simulating the social behavior of a swarm of animals. Specifically, a swarm of particles with randomly generated initial locations is created in the variable space of the function. The fitness of a location in the variable space of the function is defined to be the function value of the location. Each particle is also assigned a randomly generated velocity. In each iteration step, the location with the best fitness value that has been found by all individuals in the swarm is determined. In addition, the location with the best fitness in the trajectory of each individual is also determined. The velocity of each particle is adjusted based on the location with the globally best fitness value and the one with the best fitness value in its own trajectory. The location of each particle is then updated based on the current velocity of the particle.

Given a swarm of $N$ particles, let $x_i^t$ denote the location of particle $i$ in the $t$-th iteration step and $v_i^t$ denote its velocity. $g^t$ denotes the location with the globally best fitness value that has been found by the swarm after $t$ iteration steps, and $p_i^t$ is the location with the best fitness value in the trajectory of particle $i$. The velocity of particle $i$ in the $(t+1)$-th iteration step can be computed as follows:
$$v_i^{t+1} = w v_i^t + c_1 r_1 \left(p_i^t - x_i^t\right) + c_2 r_2 \left(g^t - x_i^t\right), \qquad (1)$$
where $c_1$ and $c_2$ are two positive constants and $r_1$ and $r_2$ are two randomly generated numbers between 0 and 1; $w$ is a positive constant between 0 and 1. The location of particle $i$ in the $(t+1)$-th iteration step can be computed from $v_i^{t+1}$ as follows:
$$x_i^{t+1} = x_i^t + v_i^{t+1}. \qquad (2)$$
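As a minimal illustration of equations (1) and (2), the following Python sketch performs one iteration of the standard PSO update for a whole swarm; the variable names (w, c1, c2, pbest, gbest) and the use of NumPy arrays are our own choices rather than part of the original implementation.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One standard PSO iteration (equations (1) and (2)).

    x, v      : (n_particles, dim) current positions and velocities
    pbest     : (n_particles, dim) best position seen by each particle
    gbest     : (dim,)             best position found by the whole swarm
    w, c1, c2 : inertia weight and acceleration constants (assumed names/values)
    """
    r1 = np.random.rand(*x.shape)  # random factors in [0, 1], drawn per component
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # equation (1)
    x_new = x + v_new                                              # equation (2)
    return x_new, v_new
```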

To effectively enhance the ability of a PSO algorithm to explore the variable space of a high-dimensional function, we propose to perform the optimization procedure through a number of large iteration steps, each of which contains two stages. In the first stage, the particles in the swarm are expelled away from the swarm's center of mass so that they can explore a wider area in the variable space; this stage is the exploring stage of the optimization procedure. After the exploring stage is completed, the standard PSO algorithm is applied to compute the global optimum of the function; the second stage is thus the converging stage of the optimization procedure.

In the exploring stage, the velocity of particle $i$ in the $(t+1)$-th iteration step is computed from its location and velocity in the $t$-th iteration step by equation (3), which augments the update in equation (1) with a term that expels the particle from the center of mass of the swarm. The inertia weight, acceleration constants, and random factors in equation (3) are the same as those shown in equation (1); the additional coefficients are positive numbers, and the additional random factor is a randomly generated number within a fixed interval. The center of mass $m^t$ of the particles in the swarm in the $t$-th iteration step can be conveniently computed as follows:
$$m^t = \frac{1}{N} \sum_{i=1}^{N} x_i^t. \qquad (4)$$
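The sketch below gives one possible reading of the exploring-stage update: the standard terms of equation (1) plus an expelling term proportional to the particle's offset from the center of mass of equation (4). The exact form of equation (3), the coefficient name c3, and the range of the extra random factor r3 are assumptions made for illustration only.

```python
import numpy as np

def exploring_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, c3=1.0):
    """One exploring-stage iteration (a sketch of equations (3) and (4)).

    The term c3 * r3 * (x - m) pushes each particle away from the swarm's
    center of mass m; its exact form in the paper is not reproduced here.
    """
    m = x.mean(axis=0)                     # equation (4): center of mass of the swarm
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    r3 = np.random.rand(*x.shape)          # extra random factor (assumed range [0, 1])
    v_new = (w * v
             + c1 * r1 * (pbest - x)
             + c2 * r2 * (gbest - x)
             + c3 * r3 * (x - m))          # expelling force away from the center of mass
    x_new = x + v_new
    return x_new, v_new
```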

The exploring stage executes for a certain number of iteration steps to perform an extensive exploration of the variable space. The converging stage starts execution after the exploring stage is completed. In the converging stage, equations (1) and (2) are used to update the velocity and location of each particle in the swarm until the specified number of iterations has been executed.

Before a new large iteration step starts, a translation procedure is applied to the particles in the swarm such that each particle is relocated to a different position. The velocity of each particle remains unchanged after the translation occurs. The translation of particle $i$ is performed as follows:
$$\tilde{x}_i = x_i + T d_i, \qquad (5)$$
where $x_i$ is the location of particle $i$ before the translation is applied and $\tilde{x}_i$ is its location after the translation; $T$ is a positive constant and $d_i$ is a random vector of the same dimensionality as $x_i$, and each component of $d_i$ is a random number between $-1.0$ and $1.0$.
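Putting the pieces together, a single large iteration step might be organized as in the sketch below: an exploring stage, a converging stage, and then the translation of equation (5). It builds on the pso_step and exploring_step sketches above; the update_bests helper, the stage lengths n_explore and n_converge, the default value of T, and the assumption that the objective f evaluates a whole batch of positions at once are all ours, not the paper's.

```python
import numpy as np

def translate(x, T=1.0):
    """Translation step (equation (5)): shift every particle by a scaled random offset.
    T is the positive constant; each component of d is uniform in [-1.0, 1.0]."""
    d = np.random.uniform(-1.0, 1.0, size=x.shape)
    return x + T * d

def update_bests(x, pbest, gbest, f):
    """Hypothetical bookkeeping: refresh personal and global bests (minimization)."""
    better = f(x) < f(pbest)                    # f maps (n, dim) positions to (n,) values
    pbest = np.where(better[:, None], x, pbest)
    gbest = pbest[np.argmin(f(pbest))]
    return pbest, gbest

def large_iteration_step(x, v, pbest, gbest, f, n_explore=50, n_converge=250):
    """One large iteration step: exploring stage, converging stage, then translation."""
    for _ in range(n_explore):                  # stage 1: expel particles to explore widely
        x, v = exploring_step(x, v, pbest, gbest)
        pbest, gbest = update_bests(x, pbest, gbest, f)
    for _ in range(n_converge):                 # stage 2: standard PSO convergence
        x, v = pso_step(x, v, pbest, gbest)
        pbest, gbest = update_bests(x, pbest, gbest, f)
    return translate(x), v, pbest, gbest        # relocate particles before the next large step
```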

3. Analysis of the Algorithm

In this section, we show that the algorithm can effectively explore the variable space of a high-dimensional function in the exploring stage when appropriate values are selected for the parameters and the population size. From equations (3)–(5), no coupling exists between any two different dimensions in the coordinate of a particle in the swarm. Each individual dimension can thus be analyzed independently of the others, and the analysis of the algorithm can be simplified by choosing one particular dimension of the space and analyzing the behavior of the swarm in this dimension. In the rest of this paper, we assume that all randomly generated numbers used in the velocity updates follow uniform distributions over their respective intervals.

Definition 1. Given positive constants and a positive integer, let be the identity matrix and be an matrix where every element in is . The partial transition matrix is defined as follows:

Definition 2. Given positive constants and a positive integer, let be the identity matrix. The complete transition matrix is a square matrix and it can be determined from the partial transition matrix as follows:

We assume that the variable space of a high-dimensional function is a "cubic" region in a multidimensional space, where the value of each dimension is confined to an interval whose two endpoints are both positive numbers. The following theorem shows that, for certain values of the algorithm parameters, the expected value of each dimension of a particle's location explores the corresponding dimension of the cubic region at a rate of exponential order.

Theorem 1. Given a “cubic” region in an -dimensional space, where the value of the th dimension is within interval and both and are positive numbers, is the th dimension of particle in iteration step , and is the expected value of . Let be the eigenvalues of the complete transition matrix constructed from and the population size, are mutually different, and both and hold for . There exist nonnegative constants such that holds for large enough .

Proof. Let be the th dimension of the velocity of particle in iteration step , be the th dimension of the best location found by the swarm in iteration step , and be the th dimension of the best location in the trajectory of particle in iteration step . From equations (4) and (5), when , can be written as follows:

where is defined to be

Taking the expected value of equation (8), we have

where is the expected value of .
Define , , and as follows:

Equation (10) can then be written as follows:

where and are defined as follows:

From the fact that

equation (14) can be further written as

Letting , equation (18) can be written as

Define -dimensional vectors and as follows:

Equation (19) can then be written as follows:

Since has distinct eigenvalues, it can be decomposed into the following form:

where is a diagonal matrix with the eigenvalues on its diagonal and the columns of are the corresponding eigenvectors of . Based on equation (23), equation (22) can be further written as follows:

where and . Let be the th dimension of ; it is clear that the following can be obtained for :

From equation (25), we obtain the following equation:

We assume that all particles in the swarm remain within the "cubic" region during iteration steps 1 to . From equations (12) and (13), we have

From equation (16), the following holds when is sufficiently large:

where and are selected to guarantee that the corresponding condition holds. Since both matrices involved are constant matrices, we immediately obtain from equations (17), (26), and (29) that, when is large enough, there exist positive constants such that the bound in the theorem holds. The theorem thus follows.
Next, we show that the algorithm parameters can be selected to guarantee that this condition on the eigenvalues holds. The proof is based on the following well-known Gershgorin circle theorem.

Theorem 2. Let $A = (a_{ij})$ be an $n \times n$ matrix and let $D_i$ denote the circle in the complex plane with center $a_{ii}$ and radius $\sum_{j \neq i} |a_{ij}|$, that is,
$$D_i = \Big\{ z \in \mathbb{C} : |z - a_{ii}| \le \sum_{j \neq i} |a_{ij}| \Big\},$$
where $\mathbb{C}$ denotes the set of complex numbers. The eigenvalues of $A$ are contained within $D = \bigcup_{i=1}^{n} D_i$. Moreover, the union of any $k$ of these circles that do not intersect the remaining $n-k$ circles contains precisely $k$ (counting multiplicities) of the eigenvalues.
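As a quick numerical sanity check of Theorem 2, the short snippet below compares the eigenvalues of a small, arbitrarily chosen matrix against its Gershgorin disks; it is only an illustration of the theorem, not part of the algorithm.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.5],
              [0.2, 3.0, 0.3],
              [0.1, 0.4, 1.0]])

centers = np.diag(A)                             # disk centers: the diagonal entries a_ii
radii = np.abs(A).sum(axis=1) - np.abs(centers)  # radii: sums of off-diagonal magnitudes
eigvals = np.linalg.eigvals(A)

# Every eigenvalue of A must lie in at least one Gershgorin disk.
for lam in eigvals:
    assert np.any(np.abs(lam - centers) <= radii + 1e-12)
```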

Theorem 3. Let be the eigenvalue of with the largest magnitude, , if .

Proof. From Definition 2 and Theorem 2, there exist two circles in the complex plane such that all eigenvalues of the complete transition matrix are contained in their union; each circle is centered at a diagonal element of the matrix, with the corresponding sum of off-diagonal magnitudes as its radius.
It is clear that, when the stated condition holds, the two circles are disjoint, and there is thus at least one eigenvalue contained in one of the two circles. We then obtain the claimed bound on the eigenvalue with the largest magnitude. The theorem thus follows.
From Theorems 1 and 3, it is clear that, when appropriate values are selected for the parameters, each particle in the swarm is able to enlarge its searched area at the rate of an exponential function, and the entire "cubic" region can thus be extensively explored by the swarm in a logarithmic number of steps. In practice, the parameter values should be chosen to guarantee a search that is both efficient and extensive in the desired region. Theorem 3 only provides a sufficient condition, and experiments are often needed in practice to determine appropriate parameter values.

4. Testing Results

We have implemented this new PSO-based optimization algorithm as a MATLAB program, PSOTS, and evaluated its performance on the minimization of a few high-dimensional functions in the Benchmark Dataset for PSO functions, which can be downloaded for free from https://www.cil.pku.edu.cn/resources/somso/271214.htm. Its performance is also compared with that of a few other PSO-based algorithms, including the traditional PSO algorithm, the AsynLnPSO algorithm [14], the LinWPSO algorithm [13], the GPSO algorithm [19], the CCAS algorithm [21], and the VPSO algorithm [30]. GPSO, CCAS, and VPSO are state-of-the-art PSO-based algorithms particularly designed for the optimization of high-dimensional functions. A large amount of testing has been performed to determine the parameter values that optimize the performance of each of the seven algorithms. For PSOTS, the population size is set to 200; the three coefficients in the velocity update are set to 0.7, 0.5, and 0.9, respectively, and the two remaining parameters are set to 1.0 and 0.8.

The performance of all seven algorithms is then evaluated and compared. To evaluate the performance of a given algorithm on a function, the algorithm is executed 10 times on a randomly rotated version of the function. For a fair comparison, the number of iterations in each execution of an algorithm is set to 3000. The minimum function value obtained over all 10 executions and the distribution of the minimized function values across the 10 executions are recorded. Tables 1 and 2 show the minimum function values, the mean values, and the standard deviations of the minimized function values obtained with the seven algorithms on 9 functions when the dimensionality of each function is 100 and 150, respectively. It is clear from both tables that PSOTS outperforms the other six algorithms in both the best and the mean of the minimized function values on 8 out of the 9 functions. The minimized function values obtained with PSOTS on Ackley, Cigar, Rastrigin, and Noncon-rastrigin are significantly lower than those obtained with the other six algorithms. On the other hand, all other algorithms outperform PSOTS on function Schwefel.
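The evaluation protocol above (10 independent executions of 3000 iterations each, recording the best value and the distribution of minimized values) can be scripted roughly as follows; run_optimizer is a hypothetical stand-in for any of the compared algorithms and is not an interface defined in the paper.

```python
import numpy as np

def evaluate(run_optimizer, objective, dim, n_runs=10, n_iters=3000):
    """Run an optimizer repeatedly and summarize the minimized function values.

    run_optimizer(objective, dim, n_iters) -> best function value of one run
    (a hypothetical interface used only for this sketch).
    """
    results = np.array([run_optimizer(objective, dim, n_iters) for _ in range(n_runs)])
    return results.min(), results.mean(), results.std()
```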

It is also interesting to observe that the best minimized function value obtained with PSOTS on Ellipse is less than that obtained with the traditional PSO algorithm when the dimensionality is 100, whereas the mean of the minimized function values obtained with PSOTS is larger than that obtained with the traditional PSO algorithm at the same dimensionality. When the dimensionality increases from 100 to 150, PSOTS outperforms the traditional PSO algorithm in both the best and the mean of the minimized function values. This particular example suggests that the exploration ability of PSOTS is stronger than that of the traditional PSO algorithm and that its advantage in exploring ability is further enhanced as the dimensionality of the function to be optimized increases.

We then increase the dimensionality of the functions to higher values and test the performance of all seven algorithms on the same set of functions when the dimensionality is 200, 300, 500, and 1000, respectively. Tables 3–6 show the performance of the seven algorithms on 10 functions from the benchmark dataset at these dimensionalities. It is clear from the tables that PSOTS significantly outperforms the other six algorithms on 9 of the tested functions in both the best and the mean of the minimized function values. Although the other six algorithms still outperform PSOTS on function Schwefel, the relative performance of PSOTS improves significantly as the dimensionality increases. Table 5 shows that PSOTS outperforms the traditional PSO and AsynLnPSO in the best of the minimized function values of Schwefel when the dimensionality of the function reaches 500.

Compared with other PSO-based optimization algorithms, a certain portion of the iteration steps of PSOTS is dedicated to the exploration of the variable space of a function. This exploration may not always lead to improved optimization performance: because iteration steps are spent on exploration, fewer steps are available for the convergence process. The convergence of PSOTS thus may be slower than that of other PSO-based algorithms on functions that do not have a large number of local optima. Function Schwefel in the benchmark set is an example of such functions. In addition, the global minimum of Schwefel is located on a straight line with small gradient values in the variable space. The existence of this straight line may further slow down the convergence process of PSOTS. Specifically, in the converging process of any PSO-based algorithm, particles tend to move toward the straight line rapidly while converging slowly along the direction of the line to the global minimum. In PSOTS, since particles are expelled apart from one another during the exploring process, the converging process first moves them to points at longer distances from the global minimum along the direction of the line and then lets them converge to the global minimum. These longer distances to the global minimum clearly lead to optimization performance inferior to that of other PSO-based algorithms on this function.

The performance of PSOTS on functions with a dimensionality of a few thousand is also tested and evaluated. The dimensionality of two functions, Ellipse and Sphere, is set to 5000, and all seven algorithms are used to minimize them, with each algorithm executed 10 times. Table 7 shows the minimization results obtained with the seven algorithms. It is clear from the table that PSOTS significantly outperforms the other six algorithms on both functions when the dimensionality is 5000.

5. Conclusions

In this paper, a new efficient PSO algorithm is developed for the optimization of high-dimensional functions. Based on a simple expelling force field and a translation procedure, the algorithm can explore a significantly enlarged area in the variable space of a high-dimensional function, and its optimization performance is thus significantly improved for functions that have a large number of local minima. Our analysis and testing results on high-dimensional benchmark functions also show that this new algorithm achieves significantly improved performance on the optimization of high-dimensional functions, compared with the traditional PSO algorithm, a few of its variants, and a number of state-of-the-art algorithms for the optimization of high-dimensional functions.

Since a large number of PSO-based algorithms have been developed for the optimization of multivariable functions, it remains unknown whether this approach can outperform, on high-dimensional functions, the PSO-based algorithms that were not included in our experiments. Much more comprehensive testing and comparison are thus needed to evaluate the performance of this algorithm.

Recently, due to the rapid growth in the number of datasets with dimensionalities larger than ten thousand, the extraction of crucial features from such datasets has become an important problem in data mining. Such datasets are often called ultrahigh-dimensional datasets. The exploring ability of the proposed approach may deteriorate significantly in such ultrahigh-dimensional spaces, which may adversely affect its application to the accurate processing of ultrahigh-dimensional datasets. Improving the optimization performance of this approach on ultrahigh-dimensional functions thus constitutes an important part of our future work.

Data Availability

The source code and testing data of this work are freely available upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work of G. Li, J. Sun, M.N.A. Rana, and Y. Song was fully supported by the Fund of Specially Appointed Professor of Jiangsu Province, China, under grant number 1034901501. The work of C. Liu was fully supported by the US Science & Technology Center Grant under grant number CCF-0939370.