A memory guided sine cosine algorithm for global optimization

https://doi.org/10.1016/j.engappai.2020.103718

Abstract

Real-world optimization problems demand an algorithm that properly explores the search space to find a good solution. The sine cosine algorithm (SCA) is a recently developed and efficient optimization algorithm which performs its search using the trigonometric sine and cosine functions. These trigonometric functions help in exploring the search space to find an optimum. However, in some cases, SCA becomes trapped in a sub-optimal solution due to an inefficient balance between exploration and exploitation. Therefore, in the present work, balanced and explorative search guidance is introduced for the candidate solutions of SCA by proposing a novel algorithm called the memory guided sine cosine algorithm (MG-SCA). In MG-SCA, the number of guides is decreased as the number of iterations increases, to provide a sufficient balance between exploration and exploitation. The performance of the proposed MG-SCA is analysed on benchmark sets of classical test problems, the IEEE CEC 2014 problems, and four well-known engineering benchmark problems. The results on these applications demonstrate the competitive ability of the proposed algorithm compared to other algorithms.

Introduction

Optimization can be defined as the process of selecting the best option from an available set of alternatives. Many challenging problems in science, economics, business, and engineering can be formulated as optimization problems. The general form of a single-objective optimization problem can be stated as follows:

$$\begin{aligned}
\min \quad & f(x), \qquad x = (x_1, x_2, \ldots, x_D) \in \mathbb{R}^D, & (1)\\
\text{s.t.} \quad & g_i(x) \le 0, \qquad i = 1, 2, \ldots, I, & (2)\\
& h_j(x) = 0, \qquad j = 1, 2, \ldots, J, & (3)\\
& x \in [x_{\min}, x_{\max}], & (4)
\end{aligned}$$

where $x = (x_1, x_2, \ldots, x_D)$ is a decision vector in $D$-dimensional space. The functions $f$, $g_i$ $(i = 1, 2, \ldots, I)$, and $h_j$ $(j = 1, 2, \ldots, J)$ are real-valued functions, defined respectively as the objective function, the inequality constraints, and the equality constraints. The variables $I$ and $J$ denote the number of inequality and equality constraints, respectively. The expression given in Eq. (4) is known as a bound constraint.
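As a concrete (hypothetical) instance of this formulation, the following Python sketch encodes a toy two-variable problem with one inequality constraint, one equality constraint, and the bound constraint of Eq. (4); the objective and constraints are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical toy instance of the general problem (1)-(4), for illustration only.
# Objective: minimize the sphere function f(x) = x1^2 + x2^2.
def f(x):
    return float(np.sum(x**2))

# One inequality constraint g1(x) <= 0: feasible points satisfy x1 + x2 >= 1.
def g1(x):
    return 1.0 - (x[0] + x[1])

# One equality constraint h1(x) = 0: feasible points lie on the line x1 = x2.
def h1(x):
    return x[0] - x[1]

# Bound constraint, Eq. (4): each variable restricted to [-5, 5].
x_min, x_max = -5.0, 5.0

x = np.array([0.5, 0.5])   # a candidate decision vector in R^2
print(f(x), g1(x), h1(x))  # f = 0.5; both constraints satisfied (g1 = 0, h1 = 0)
```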

In the literature, several deterministic methods are available to find solutions to optimization problems. Most of these deterministic methods utilize derivative information of the functions being optimized (Himmelblau, 1972, Boyd and Vandenberghe, 2004, Yang, 2014). However, in real life, the functions involved in a problem may not be differentiable, continuous, or convex; examples include the economic dispatch problem (Niknam, 2010), sizing design of truss structures (Schutte and Groenwold, 2003), and electromagnetic optimization problems (Grimaccia et al., 2007). Therefore, derivative-free stochastic methods were developed which use only objective function values to evaluate the quality of candidate solutions and which do not require the function to possess mathematical properties such as continuity, differentiability, or convexity. The success of stochastic optimization methods is largely attributed to the fact that the search for a good solution starts from multiple randomly selected positions in the search space, that global information about the search space is used to guide the search, and to their ability to both explore and exploit the search space. Diversification, or exploration, is the ability to search new regions of the available search space in order to locate promising regions that may contain the global optimum, or at least a good local optimum. Intensification, or exploitation, is the ability to refine the search in promising regions found during exploration. For a successful algorithm, a proper balance between these two conflicting objectives is essential.

Stochastic methods treat the optimization problem as a black box: they do not utilize problem-specific information, but only evaluate the objective function at candidate decision vectors. Stochastic methods are iterative and introduce a random component into the search process. Most of these methods are nature-inspired, based on some phenomenon from nature, and are also referred to as meta-heuristic algorithms. Nature-inspired stochastic optimization methods include genetic algorithms (GAs) (Holland, 1992), particle swarm optimization (PSO) (Eberhart and Kennedy, 1995), differential evolution (DE) (Storn and Price, 1997), and artificial bee colony (ABC) (Karaboga and Basturk, 2007), amongst others. Some recently developed and efficient algorithms are the grey wolf optimizer (GWO) (Mirjalili et al., 2014), the moth-flame optimization (MFO) algorithm (Mirjalili, 2015), and the whale optimization algorithm (WOA) (Mirjalili and Lewis, 2016), among many others.

The need for new stochastic search algorithms built on new foundational principles can be explained by the 'No Free Lunch' (NFL) theorem (Wolpert and Macready, 1995). The NFL theorem states that no single optimizer can be designed that is the best at solving all problems. In other words, a particular algorithm may show very efficient results on some set of problems, but the same algorithm may perform poorly on a different problem set. Thus the NFL theorem has kept the field of stochastic algorithms highly active and supports both the development of new search algorithms and the improvement of currently available ones. Depending on the nature of the search space of an optimization problem, an algorithm requires varying amounts of exploration and exploitation during the search to locate an optimal solution. When the search strategy of an algorithm is modified, the main objective is to establish sufficient amounts of exploration and exploitation, and an appropriate balance between them, so that a wide range of optimization problems can be solved. Providing the varying levels of exploration and exploitation required by optimization problems of different difficulty is the main issue in meta-heuristic algorithms, and an imbalance between exploration and exploitation causes stagnation at a local optimum and premature convergence (Eiben and Schippers, 1998, Črepinšek et al., 2013, Del Ser et al., 2019).

The SCA (Mirjalili, 2016) is a recently developed stochastic algorithm for solving continuous optimization problems. Although the SCA is rich in diversification strength, it is poor in intensifying the search (Nenavath et al., 2018); thus, an insufficient balance between intensification and diversification has been observed in the standard SCA (Nenavath et al., 2018). In each iteration, the SCA preserves the best solution found so far and updates all remaining candidate solutions using its random search equation. In this process, since each candidate solution leaves its position and moves to a new position, the useful information contained in its search history is lost. These shortcomings weaken the performance of SCA. Therefore, several attempts have been made in the literature to enhance the performance of the standard SCA. For example, Meshkat and Parhizgar (2017) introduced a weighted position update mechanism in SCA. Elaziz et al. (2017b) proposed an improved opposition-based SCA to provide better exploration ability to the candidate solutions. Sindhu et al. (2017) hybridized the SCA with an elitist strategy to solve the feature selection problem. Elaziz et al. (2017a) hybridized DE and SCA to enhance the capability of SCA to avoid local optima. To reduce susceptibility to local optima, Rizk-Allah (2018) combined SCA with multi-orthogonal search. Turgut (2017) developed a hybrid of the SCA and backtracking search (BSA) for the thermal and economical optimization of a shell and tube evaporator. Li et al. (2017) and Attia et al. (2018) used Lévy-flight search in SCA to avoid local optima. An improved version of SCA, based on opposition-based learning to enhance exploration, was used by Bairathi and Gopalani (2017) to train feed-forward neural networks. A hybrid of SCA and GWO was proposed by Singh and Singh (2017) to capitalize on the exploitation ability of GWO and the exploration ability of SCA.

In the present paper, as an alternative approach to enhance the search performance of the standard SCA, a memory guided sine cosine algorithm (MG-SCA) is proposed to address the above issues and to increase the search efficiency of the algorithm. The proposed memory-based strategy helps to locally explore the promising regions around the personal best positions of candidate solutions. To validate the performance of MG-SCA, two benchmark sets have been selected: a classical set of 13 problems (Yao et al., 1999) and the standard IEEE CEC 2014 set of problems (Liang et al., 2013). The MG-SCA is also applied to solve four well-known engineering optimization problems. The experiments and comparisons are supported by different metrics and statistical validations.

The rest of the paper is organized as follows. Section 2 discusses the standard SCA. Section 3 provides a detailed description of the proposed algorithm (MG-SCA). Section 4 presents the experimentation and comparison on benchmark problems and on four well-known engineering optimization problems. Finally, Section 5 concludes the work and suggests some future directions.

Section snippets

Overview of the Sine Cosine algorithm

The SCA was proposed by Mirjalili (2016). As with other meta-heuristic algorithms, the SCA generates new solutions via random sampling around current solutions. In a population of size $N$, a new solution corresponding to the candidate solution $x_i^t$ $(i = 1, 2, \ldots, N)$ is generated as follows:

$$x_{i,j}^{t+1} = \begin{cases} \dot{x}_{i,j}^{t+1} & \text{if } r_{i,j} < 0.5,\\ \tilde{x}_{i,j}^{t+1} & \text{otherwise,} \end{cases}$$

where

$$\dot{x}_{i,j}^{t+1} = x_{i,j}^{t} + r_{1,j}^{t} \times \sin\big(r_{2_{i,j}}^{t}\big) \times \big|\, r_{3_{i,j}}^{t} y_{j}^{t} - x_{i,j}^{t} \big|,$$
$$\tilde{x}_{i,j}^{t+1} = x_{i,j}^{t} + r_{1,j}^{t} \times \cos\big(r_{2_{i,j}}^{t}\big) \times \big|\, r_{3_{i,j}}^{t} y_{j}^{t} - x_{i,j}^{t} \big|,$$

and $r_{i,j}$ is a uniformly distributed random number from the interval $[0, 1]$.
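For concreteness, the following Python sketch (not taken from the paper) implements one iteration of this update for a whole population. It assumes the standard SCA parameter schedule of Mirjalili (2016): $r_1$ decreases linearly from a constant $a = 2$ to 0 over the run, $r_2$ is drawn uniformly from $[0, 2\pi]$, $r_3$ from $[0, 2]$, and the branch selector from $[0, 1]$; the function name and argument layout are illustrative assumptions.

```python
import numpy as np

def sca_step(X, best, t, T, a=2.0, rng=None):
    """One SCA position update for a population X of shape (N, D).

    best is the destination (best-so-far solution y^t); t is the current
    iteration and T the maximum number of iterations. The parameter
    schedule follows the standard SCA (a sketch, not the paper's code).
    """
    if rng is None:
        rng = np.random.default_rng()
    N, D = X.shape
    r1 = a - t * (a / T)                        # linearly decreasing amplitude
    r2 = rng.uniform(0.0, 2.0 * np.pi, (N, D))  # random angle per dimension
    r3 = rng.uniform(0.0, 2.0, (N, D))          # random weight on the destination
    r4 = rng.random((N, D))                     # branch selector: sine vs. cosine
    dist = np.abs(r3 * best - X)                # |r3 * y^t - x^t|, per dimension
    return np.where(r4 < 0.5,
                    X + r1 * np.sin(r2) * dist,
                    X + r1 * np.cos(r2) * dist)
```

A full run would repeat this step for $t = 0, 1, \ldots, T-1$, clip the new positions to the bound constraint $[x_{\min}, x_{\max}]$, re-evaluate the objective, and refresh the preserved best solution $y$.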

Proposed algorithm

This section proposes the memory guided sine cosine algorithm (MG-SCA). The MG-SCA is motivated in Section 3.1, and the search strategy is discussed in Section 3.2.
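Since this snippet omits the details (which are given in Sections 3.1 and 3.2), the following is only a rough, hypothetical sketch of the memory-guided idea summarized in the abstract: a memory of personal best positions acts as a pool of search guides, and the pool shrinks as iterations progress so that the search moves from diversification to intensification. The guide-selection rule, the linear shrinking schedule, and all names below are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def mg_sca_step(X, pbest, pbest_f, t, T, a=2.0, rng=None):
    """Hypothetical sketch of a memory-guided SCA step: each solution moves,
    via the sine/cosine equations, toward a guide drawn from the best
    personal-best memories; the guide pool shrinks from N to 1 over the run."""
    if rng is None:
        rng = np.random.default_rng()
    N, D = X.shape
    # Linearly shrinking number of guides: exploration early, exploitation late
    # (an assumed schedule; the paper defines its own reduction rule).
    n_guides = max(1, int(round(N * (1 - t / T))))
    order = np.argsort(pbest_f)            # personal bests sorted by fitness
    guide_pool = pbest[order[:n_guides]]   # top personal bests act as guides
    r1 = a - t * (a / T)
    X_new = np.empty_like(X)
    for i in range(N):
        guide = guide_pool[rng.integers(n_guides)]  # random guide from the pool
        r2 = rng.uniform(0.0, 2.0 * np.pi, D)
        r3 = rng.uniform(0.0, 2.0, D)
        r4 = rng.random(D)
        dist = np.abs(r3 * guide - X[i])
        X_new[i] = np.where(r4 < 0.5,
                            X[i] + r1 * np.sin(r2) * dist,
                            X[i] + r1 * np.cos(r2) * dist)
    return X_new
```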

Results and discussion

This section conducts an empirical analysis of the performance of MG-SCA. The first experiment analyses performance on a standard benchmark of classical problems, while the second experiment analyses the performance on the standard benchmark of IEEE CEC 2014. The performance of the MG-SCA on four different classes of engineering problems is also discussed.

Conclusions

This paper presented the Memory Guided Sine Cosine Algorithm (MG-SCA), an improved version of the standard SCA. The MG-SCA focuses on a better balance between exploration and exploitation, to prevent becoming stuck in local optima. A good balance is obtained by using multiple search guides, and the number of search guides is decreased as the number of iterations increases, providing a transition from the diversification phase to the intensification phase. The MG-SCA is evaluated on 13 classical benchmark problems, the IEEE CEC 2014 benchmark set, and four well-known engineering optimization problems.

CRediT authorship contribution statement

Shubham Gupta: Conceptualization, Methodology, Validation, Visualization, Writing - original draft, Writing - review & editing. Kusum Deep: Writing - review & editing, Visualization, Investigation, Supervision. Andries P. Engelbrecht: Writing - review & editing, Visualization, Investigation.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

The first author would like to thank the Ministry of Human Resource Development (MHRD), Government of India, for their financial support, Grant No. MHR-02-41-113-429.

References (46)

  • Nayak, D.R., et al., Combining extreme learning machine with modified sine cosine algorithm for detection of pathological brain. Comput. Electr. Eng. (2018)
  • Niknam, T., A new fuzzy adaptive hybrid particle swarm optimization algorithm for non-linear, non-smooth and non-convex economic dispatch problem. Appl. Energy (2010)
  • Rizk-Allah, R.M., Hybridizing sine cosine algorithm with multi-orthogonal search strategy for engineering design problems. J. Comput. Des. Eng. (2018)
  • Singh, N., et al., A novel hybrid GWO-SCA approach for optimization problems. Eng. Sci. Technol. Int. J. (2017)
  • Bairathi, D., et al., Opposition-based sine cosine algorithm (OSCA) for training feed-forward neural networks
  • Bhandari, A.K., A novel beta differential evolution algorithm-based fast multilevel thresholding for color image segmentation. Neural Comput. Appl. (2018)
  • Boyd, S., et al., Convex Optimization (2004)
  • Das, S., et al., Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems (2010)
  • Eberhart, R., et al., A new optimizer using particle swarm theory
  • Eiben, A.E., et al., On evolutionary exploration and exploitation. Fund. Inform. (1998)
  • Elaziz, M.E.A., et al., A hybrid method of sine cosine algorithm and differential evolution for feature selection
  • Elaziz, M.A., et al., An improved opposition-based sine cosine algorithm for global optimization. Expert Syst. Appl. (2017)
  • Gandomi, A.H., et al., Benchmark problems in structural optimization