A rotationally invariant semi-autonomous particle swarm optimizer with directional diversity

https://doi.org/10.1016/j.swevo.2020.100700

Highlights

  • A Rotationally Invariant SAPSO (RI-SAPSO) algorithm with directional diversity is proposed.

  • Rotational invariance brings performance stability to the RI-SAPSO algorithm regardless of the coordinate system.

  • Directional diversity ensures multiple phases of exploration throughout the search process.

  • RI-SAPSO performed better than other PSO-like algorithms when applied to optimization problems.

Abstract

The semi-autonomous particle swarm optimizer (SAPSO) [1] is a relatively recent algorithm for global continuous optimization that combines gradient information with a diversity-controlling approach, granting the particles and the swarm autonomy to exploit regions of the search space while preserving exploration throughout the whole search process. In the first study, although the SAPSO algorithm holds the rotational invariance property, which normally entails a lack of directional diversity in the PSO context, the algorithm performed very well in comparison to other PSO-like algorithms. In this paper, an improved version of SAPSO, named rotationally invariant SAPSO (RI-SAPSO), is proposed. It retains the same property, but now incorporates a rotation matrix generated by an exponential map to maintain directional diversity. A mathematical proof that the RI-SAPSO algorithm is rotationally invariant is given. RI-SAPSO was evaluated on test functions extracted from the CEC 2017 benchmark problems against six other PSO-like algorithms, along with its previous version. The comparative study was strengthened with a non-parametric Friedman's hypothesis test for 1 × k comparisons, and p-values were adjusted in the post-hoc procedure. Simulation results showed that, in most problems, the proposed RI-SAPSO was able to find much better solutions, and statistical significance was also observed.

Introduction

Many stochastic population-based algorithms have been proposed to solve black-box optimization problems, to name a few: the Genetic Algorithm (GA) [2], Artificial Bee Colony [3], and Ant Colony Optimization [4,5]. One of the most popular algorithms in the field of swarm intelligence is the Particle Swarm Optimization (PSO) algorithm [6,7]. The PSO literature is eclectic in applying the metaheuristic to different domains, ranging from toy- or puzzle-like problems, such as n-queens [8,9], sudoku [10,11], and chess [12], to more complex industrial problems, such as the processing of an ore dressing plant [13], the long-term production scheduling problem of open-pit mines [14], and structural health monitoring applied to the assessment of bridges [15]. This sort of algorithm is also used as a tool for training machine learning models, such as feed-forward neural networks [[16], [17], [18]], or is hybridized with other evolutionary algorithms such as GAs [[19], [20], [21]], ant colony [22], and differential evolution [23], among others. This vast application domain of PSO is present in the global optimization literature mainly because of the algorithm's simplicity of development and fundamental formulation, addressing two types of codification: discrete binary [24] and continuous [6,7]. Given the acceptance of the PSO algorithm by scholars over the last years and the prominent results of swarm intelligence on multimodal optimization problems, this paper deepens the analysis of PSO algorithms and debates some well-known drawbacks intrinsically embedded in the foundations of the algorithm [25].

Although great strides have been made to improve the search ability of PSO-like algorithms for global continuous optimization problems, some intrinsic deficiencies appear when particles attempt to explore and exploit good regions of the search space. In Refs. [[26], [27], [28]], the authors highlight a core behavior rooted in the PSO algorithm by which the motion of particles is biased toward directions parallel to the coordinate axes. The performance of the algorithm therefore depends on the coordinate system in which the objective function is defined, an undesirable property referred to as rotational variance.

In [28], the authors argued that invariance properties, such as rotational invariance, are desirable because they increase the predictive power of performance results, even though the particles then lack directional diversity, as discussed in Ref. [29]. In PSO-like algorithms, both properties manifest in the velocity update equation of the particles: multiplying the attraction terms by vectors of random variables yields rotational variance, whereas multiplying them by random scalars yields rotational invariance, as previously shown in Ref. [27].
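The distinction can be sketched with the classical velocity update. In the sketch below the coefficient values, dimensionality, and random seed are arbitrary illustrations, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                             # number of decision variables
w, c1, c2 = 0.7, 1.5, 1.5         # illustrative inertia/acceleration coefficients
v = rng.normal(size=d)            # current velocity
x = rng.normal(size=d)            # current position
pbest = rng.normal(size=d)        # personal best position
gbest = rng.normal(size=d)        # global best position

# Rotationally VARIANT update: an independent random number per
# dimension (a vector of random variables) acts like a random
# diagonal matrix, biasing motion toward the coordinate axes.
r1, r2 = rng.random(d), rng.random(d)
v_variant = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# Rotationally INVARIANT update: a single random SCALAR per term,
# so each attraction vector is only rescaled, never re-directed;
# the update commutes with any rotation of the coordinate system.
s1, s2 = rng.random(), rng.random()
v_invariant = w * v + c1 * s1 * (pbest - x) + c2 * s2 * (gbest - x)
```

With scalar coefficients the new velocity always lies in the span of the three direction vectors, which is exactly why directional diversity suffers.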

Those observations motivated this work to extend the previously developed PSO algorithm, first termed the semi-autonomous particle swarm optimizer (SAPSO) [1]. Herein, a mathematical proof that the new approach, named rotationally invariant SAPSO (RI-SAPSO), is invariant under rotation of the coordinate system is provided. As previously stated in Ref. [1], the root idea of the SAPSO algorithm is to hybridize gradient information, based on GPSO [30] and DGHPSOGS [31], with the diversity-controlling approach of ARPSO [32] to strengthen the overall search process of the metaheuristic. These characteristics are maintained in RI-SAPSO, along with new improvements for tracking the attraction and repulsion moments so as to interchangeably apply a stochastically generated rotation matrix, preserving a wider range of movement for each particle and thus increasing directional diversity.
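The exponential-map construction of a rotation matrix can be sketched as follows. The paper's exact generator is given later; this sketch assumes a Gaussian skew-symmetric generator with an illustrative `angle_scale` parameter, and uses a simple truncated Taylor series for the matrix exponential:

```python
import numpy as np

def taylor_expm(a, terms=30):
    """Matrix exponential via truncated Taylor series; adequate for
    the small skew-symmetric generators sampled below."""
    r = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a / k
        r = r + term
    return r

def random_rotation(d, angle_scale=0.1, rng=None):
    """Sample R = exp(A) for a random skew-symmetric A (A.T == -A).
    The exponential map sends A into SO(d), so R is orthogonal with
    det(R) = +1, i.e., a proper rotation of the search space."""
    rng = np.random.default_rng() if rng is None else rng
    m = rng.normal(scale=angle_scale, size=(d, d))
    return taylor_expm(m - m.T)   # m - m.T is skew-symmetric

R = random_rotation(4, rng=np.random.default_rng(1))
assert np.allclose(R @ R.T, np.eye(4))    # orthogonality
assert np.isclose(np.linalg.det(R), 1.0)  # determinant +1
```

Applying such an R to a velocity term re-directs it slightly without changing its length, which is the mechanism that restores directional diversity while leaving rotational invariance intact.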

The key motivation to develop the RI-SAPSO algorithm can be found in the following works. In Refs. [29,33], the authors built a new PSO version, referred to as WPSO, which demonstrated that directional diversity and invariance are not necessarily mutually exclusive, and proposed an approximated rotation matrix embedded in a new velocity update equation. The new algorithm proved to be rotationally invariant while maintaining directional diversity. In another related work, described in Ref. [34], the authors developed a locally convergent rotationally invariant PSO algorithm (named here BPSO). They pointed out several issues: particles stagnate at some points in the search space, becoming unable to change the value of one or more decision variables; performance degrades when the swarm size is small or the number of dimensions grows; convergence is not guaranteed even to a local optimum (local optimizer); and the algorithm is sensitive to rotation of the search space. To address all of these issues at once, they proposed a new general form of the velocity update equation for the BPSO algorithm containing a user-definable normal distribution function around the local and global memories, which in a way resembles the PSO proposal described in Ref. [35] and is denominated here SPSO.

Table 1 summarizes important features present in each related work described so far. The acceleration and inertial coefficients can be static, i.e., holding the same values for all types of optimization problems with no modification during the algorithm's run, or dynamic, with values that vary along iterations or are problem-dependent. Currently, the RI-SAPSO algorithm sets every coefficient to a static value independently of the optimization problem under consideration, unlike its predecessor SAPSO, whose parameters were problem-dependent or dynamic. Furthermore, rotation matrices are introduced in the velocity update equation to guarantee directional diversity of the search distribution while still satisfying the rotational invariance property. Note the importance of each feature satisfied by RI-SAPSO: the diversity-controlling approach (div) monitors the diversity value of the swarm during the search process, making it possible to avoid local optima when the diversity value is small; the gradient information (∇) equips each particle with an individual memetic search, i.e., an asynchronous individual exploitation that can be activated at any iteration, which decreases the effect of the random-walk property; the rotation matrices (rm) provoke a wider search distribution (also known as the distribution of all possible next positions [35]), which is essential to maintain directional diversity (dd) of particles moving through the search space; and rotational invariance (inv) brings performance stability to the algorithm, i.e., the algorithm outputs approximately the same average results regardless of the rotation of the coordinate system.
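The diversity-controlling feature (div) inherited from ARPSO can be sketched as follows. The measure (mean distance to the swarm centroid, normalized by the search-space diagonal) and the threshold values are illustrative assumptions in the spirit of ARPSO, not RI-SAPSO's exact formulation:

```python
import numpy as np

def swarm_diversity(positions, diag_length):
    """Mean distance of particles to the swarm centroid, normalized
    by the length of the search-space diagonal (ARPSO-style measure).
    positions: array of shape (n_particles, n_dims)."""
    centroid = positions.mean(axis=0)
    return np.linalg.norm(positions - centroid, axis=1).mean() / diag_length

def search_phase(diversity, d_low=5e-6, d_high=0.25):
    """Choose the swarm's phase from its diversity; the two
    thresholds here are hypothetical illustrative values."""
    if diversity <= d_low:
        return "repulsion"   # diversity collapsed: push particles apart (explore)
    if diversity >= d_high:
        return "attraction"  # swarm spread out: pull particles together (exploit)
    return "unchanged"       # keep the current phase
```

Monitoring this scalar each iteration is what lets the swarm escape premature convergence: once all particles collapse onto one basin the measure drops toward zero and the repulsion phase re-injects exploration.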

In the remainder of the paper, a theoretical background about PSO and SAPSO, along with a mathematical tool to prove rotational invariance property are given in Section 2. In Section 3, the RI-SAPSO algorithm is described in detail. Section 4 shows the numerical simulation results and provides a Friedman's hypothesis test. Finally, the concluding remarks are given in Section 5.

Section snippets

Theoretical background

This section first outlines the fundamentals of the PSO algorithm and then describes the essential theory behind the SAPSO algorithm. Further, a mathematical tool to attest invariance under scale, translation, and rotation is briefly reported, based on the theory described in Ref. [33]. The tool is also applied to both velocity update equations of the classical PSO algorithm, and a visual interpretation of their performances is discussed.

A rotationally invariant SAPSO algorithm (RI-SAPSO)

This section proposes enhancements to the SAPSO algorithm, principally in terms of finding better solutions (e.g., lower error rates) and minimizing computational time. A new velocity update equation of the RI-SAPSO algorithm is provided to embody these goals, and the crucial differences between the SAPSO and RI-SAPSO algorithms are also discussed. At the same time, working with the rotational invariance property is preferable to facing the bias phenomenon of the rotational variance property. A

Experimental simulations

The RI-SAPSO algorithm is evaluated on the CEC 2017 benchmark functions available in Ref. [54], denoted here as: Bent Cigar (f1), Zakharov (f2), Rosenbrock (f3), Rastrigin (f4), Schaffer's F6 (f5), Lunacek Bi-Rastrigin (f6), Non-Continuous Rastrigin (f7), Levy (f8), Schwefel (f9), Hybrid functions (f10−19), and Composition functions (f20−29). The function named "Sum of Different Power" is not used here due to its unstable behavior in higher dimensions, as described in Ref. [54]. Those optimization

Conclusion

This work presented a new PSO version, termed the rotationally invariant semi-autonomous particle swarm optimizer, or simply RI-SAPSO, which can be considered an update of the previous SAPSO algorithm. In the RI-SAPSO algorithm, it was preferable to work with the rotational invariance property and face the lack of directional diversity than to face the bias phenomenon of the rotational variance property. The new velocity update equation embodied a rotation matrix that is activated when the swarm is

Acknowledgement

This work was supported in part by the National Council for Scientific and Technological Development (CNPq), under grant numbers 432668/2018-7 and 426075/2016-1, and in part by the Coordination for the Improvement of Higher Education Personnel (CAPES), Brazil, under grant number 88887.351566/2019-00.

References (59)

  • H.E. Espitia et al.

    Statistical analysis for vortex particle swarm optimization

    Appl. Soft Comput.

    (2018)
  • G. Xu et al.

    On convergence analysis of particle swarm optimization algorithm

    J. Comput. Appl. Math.

    (2018)
  • S.-F. Li et al.

    Particle swarm optimization with fitness adjustment parameters

    Comput. Ind. Eng.

    (2017)
  • J. Gou et al.

    A novel improved particle swarm optimization algorithm based on individual difference evolution

    Appl. Soft Comput.

    (2017)
  • K. Chen et al.

    A hybrid particle swarm optimizer with sine cosine acceleration coefficients

    Inf. Sci.

    (2018)
  • J. Derrac et al.

    A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms

    Swarm Evol. Comput.

    (2011)
  • N. Lynn et al.

    Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation

    Swarm Evol. Comput.

    (2015)
  • D.E. Goldberg

    Genetic Algorithms in Search, Optimization and Machine Learning

    (1989)
  • D. Karaboga et al.

    A comprehensive survey: artificial bee colony (abc) algorithm and applications

    Artif. Intell. Rev.

    (2014)
  • M. Dorigo et al.

    Ant system: optimization by a colony of cooperating agents

    IEEE Trans. Syst. Man Cybern. Part B (Cybern)

    (1996)
  • M. Dorigo et al.

    Ant colony optimization – artificial ants as a computational intelligence technique

    IEEE Comput. Intell. Mag.

    (2006)
  • J. Kennedy et al.

    Particle swarm optimization

  • R.C. Eberhart et al.

    A new optimizer using particle swarm theory

    Proc. Sixth Int. Sympos. Micro Mach. Hum. Sci.

    (1995)
  • A. A. Shaikh, A. Shah, K. Ali, A. H. S. Bukhari, Particle swarm optimization for n-queens problem, J. Adv. Comput. Sci....
  • Y.R. Wang et al.

    Swarm refinement pso for solving n-queens problem

  • A. Moraglio et al.

    Geometric particle swarm optimization for the sudoku puzzle

  • J.M. Hereford et al.

    Integer-valued particle swarm optimization applied to sudoku puzzles

  • J.A. Duro et al.

    Particle swarm optimization applied to the chess game

  • X. Huang et al.

    Research on particle swarm optimization and its industrial application
