Abstract

The aim of this paper is to present the design of a metaheuristic search called the improved monkey algorithm (MA+) that provides suitable solutions for optimization problems. The proposed algorithm is renewed by a new method that applies random perturbation (RP) to two control parameters (p1 and p2) in order to solve a wide variety of optimization problems. A novel RP is defined to improve the control parameters and is built into the proposed algorithm. The main advantage of the control parameters is that, in general, they prevent the proposed algorithm from getting stuck in local optima. On many optimization problems, running to the maximum allowable number of iterations can still end in an inferior local optimum. The search strategy of the proposed algorithm, however, has proven to reach the global optimum, to converge well, and to perform much better than the original monkey algorithm on many optimization problems within a low number of iterations. All details of the improved monkey algorithm are presented in this study. The performance of the proposed algorithm was first evaluated using 12 benchmark functions on different dimensions, which can be classified into three types: low-dimensional (30), medium-dimensional (60), and high-dimensional (90). In addition, the performance of the proposed algorithm was compared with that of several metaheuristic algorithms using these benchmark functions on many different dimensions. Experimental results show that the improved monkey algorithm is clearly superior to the original monkey algorithm, as well as to other well-known metaheuristic algorithms, in terms of obtaining the best optimal value and converging faster.

1. Introduction

In this section, background on optimization in calculus, mathematical optimization, heuristics, and metaheuristic approaches is given. Optimization research [1–3] iteratively seeks solutions to problems whose analytical solutions are impractical to compute. The design of an improved monkey algorithm for multivariate systems is then introduced.

Fermat and Lagrange were the first to propose calculus-based formulas for determining optima. Newton and Gauss were the first to propose iterative methods for finding the best solution. This is essentially the calculus approach to optimization for a point on a function of one variable: it yields the best solution (the maximum or minimum of the function). Many optimization problems primarily seek the best solution within certain boundaries, that is, the values that optimize an objective function. Formally, “linear programming” was started by Kantorovich in 1939; it is also called linear optimization (LO). LO is a technique for obtaining the best solution in a mathematical model whose requirements are represented by linear relationships, and it is therefore a special case of mathematical optimization. The first well-known approach in mathematical optimization was the simplex method, which Dantzig developed in 1947 for solving linear programming problems. Since then, many optimization methods and techniques have been developed, among them the quasi-Newton method [4], the steepest descent method [5], the method of feasible directions [6], the Newton method [7], penalty methods [8, 9], and quadratic programming [10]. The Karush–Kuhn–Tucker conditions are first-derivative tests for the optimality of a solution in nonlinear programming (nonlinear optimization). Kuhn and Tucker published these conditions in 1951, although Karush had already stated the necessary conditions for a constrained optimum in his Master’s thesis in 1939. The Karush–Kuhn–Tucker conditions of nonlinear programming generalize the method of Lagrange multipliers, which allows only equality constraints. Briefly, mathematical programming concerns the selection of a best element from some set of available alternatives. Optimization problems of this sort arise in all quantitative disciplines, from computer science, engineering, and operations research to economics and industry, and they have been of interest in mathematics for centuries. Thus, mathematical programming is a rising trend in many fields. Its branches include linear programming [11–13], nonlinear programming [14, 15], objective programming [16, 17], and dynamic programming [18, 19]. With regard to nonlinear optimization, that is, problems with at least one nonlinear objective or constraint function, the known approaches encounter considerable difficulty. Unfortunately, almost all tasks in engineering design are nonlinear.

Heuristic methods were first used in philosophy and mathematics for finding solutions to complex problems. Heuristics are problem-dependent methods: they are usually adapted to a specific problem and try to make full use of its features. However, they are often too greedy, tend to fall into the local optimum trap, and generally cannot obtain a global optimal solution. Heuristics in human decision-making were studied by Tversky and Kahneman in the 1970s and 1980s. In the 1980s, metaheuristic approaches attracted the attention of engineers, who applied them to all kinds of optimization. Metaheuristics, in contrast, are high-level, problem-independent methods that provide a set of strategies for developing heuristic optimization algorithms. In general, they are not greedy; in fact, they can even accept a temporary deterioration of the solution, which allows them to explore the solution space more deeply and thus obtain a better solution. One of the most well-known approaches is the genetic algorithm (GA), based on the principle of “survival of the fittest” studied by Holland in the 1960s. Simulated annealing (SA), published in 1983, has since been used to solve many optimization problems.

SA is typically formulated for an objective function of many variables subject to several constraints; in practice, the constraints are handled by penalizing them as part of the objective function. The development of metaheuristics can be divided into five periods:
(1) Until 1940: the pretheoretical period
(2) From 1940 to 1980: the early period
(3) From 1980 to 2000: the method-centric period
(4) From 2000 onward: the framework-centric period
(5) The scientific period (future)

Nowadays, many optimization algorithms are designed to find the global optimal solutions of optimization problems. Among them are metaheuristic algorithms, which can be used efficiently to escape local minima and determine global solutions of optimization problems. The set of metaheuristic algorithms includes ant colony optimization (ACO) [20, 21], ant lion optimizer (ALO) [22], bat algorithm (BAT) [23], cuckoo search (CS) [24], elephant herding optimization (EHO) [25], particle swarm optimization (PSO) [26], krill herd (KH) [27], moth-flame optimization (MFO) [28], monarch butterfly optimization (MBO) [29, 30], mussels wandering optimization (MWO) [31], moth search algorithm (MSA) [32], and whale optimization algorithm (WOA) [33], all aimed at finding good solutions to optimization problems. Even today, new methods are being developed as new metaheuristics are invented. Other metaheuristic research has drawn on evolutionary theory, for example, biogeography-based optimization (BBO) [34], differential evolution (DE) [35], evolution strategies (ES) [36], the genetic algorithm (GA) [37, 38], harmony search (HS) [39], the gravitational search algorithm (GSA) [40], the sine cosine algorithm (SCA) [41], the dragonfly algorithm (DA), and the hybrid ABC/DA (HAD) [42].

Furthermore, the improved monkey algorithm (MA+), which finds the best solution and solves optimization problems, is designed in this study. The proposed algorithm is a new metaheuristic search for the optimization of multivariate systems. The original monkey algorithm has notable shortcomings in how it searches the solution space, which can lead to premature convergence and low search accuracy on complex multivariate optimization problems. Because the monkey algorithm also converges very slowly, a random perturbation method is used to maintain the diversity of the population and guard against premature convergence. Applying random perturbation to two parameters when the algorithm approaches convergence helps the best monkey position to jump out of possible local optima and further increases the performance of the proposed algorithm (MA+). Thus, the search strategy of the proposed algorithm has proven to reach the global optimum, to converge well, and to perform much better on many complex optimization problems within a low number of iterations.

This paper is organized as follows: Section 2 describes the proposed algorithm and the design of a random perturbation into two parameters is explained clearly. Section 3 describes the experimental results and discussion. The information of twelve benchmark functions is given. Moreover, the performance of the proposed algorithm is evaluated and is compared with many comparative algorithms (many metaheuristic algorithms and modified comparative algorithms) on different dimensional functions. Finally, the conclusion is summarized in Section 4.

2. The Improved Monkey Algorithm (MA+)

The aim of this paper is to present the design of a new optimization method called the improved monkey algorithm (MA+) to find good solutions for the optimization of multivariate systems. The proposed algorithm (MA+) is a new metaheuristic search method for optimization problems inspired by the movement behavior of monkeys. The original monkey algorithm (MA) mainly consists of four processes, namely, the initialization process, the climb process, the watch-jump process, and the somersault process. The monkey algorithm is improved by adding a random perturbation (RP) process to these four original processes. All processes of the proposed algorithm are illustrated in Figure 1.
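To make the overall flow concrete, a minimal MATLAB sketch of one possible MA+ main loop is given below. It is illustrative only: the function names, the climb number Nc, the tolerance tolX, and the watch-jump retry limit are assumptions for this sketch (the helpers are sketched under Steps A–E below), not the author's original implementation.

function best = ma_plus(f, M, iters, lb, ub)
    % Illustrative MA+ driver (minimization): Steps A-E in sequence each iteration.
    a = 1e-3; b = 0.5; Nc = 20;                         % step length, eyesight, climb number (Nc assumed)
    c = -1; d = 1; p1 = 0.5; p2 = 0.2; tolX = 10;       % somersault interval, RP parameters (tolX assumed)
    D = numel(lb);
    X = init_monkeys(M, D, lb, ub);                     % Step A: random initial positions
    for t = 1:iters
        for i = 1:M
            X(i, :) = climb(X(i, :), f, a, Nc, lb, ub);       % Step B
            X(i, :) = watch_jump(X(i, :), f, b, lb, ub, 50);  % Step C (retry limit of 50 assumed)
            X(i, :) = climb(X(i, :), f, a, Nc, lb, ub);       % climb again from the new point
        end
        X = somersault(X, c, d, lb, ub);                % Step D
        X = random_perturbation(X, f, p1, p2, tolX, lb, ub);  % Step E
    end
    best = min(arrayfun(@(i) f(X(i, :)), (1:M)'));      % best objective value found
end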

Step A. Initialization Process
The proposed algorithm begins with the random generation of a position for each monkey. The number of monkeys (the population size) is M, and the position of the ith monkey is denoted xi = (xi1, xi2, …, xin), i = 1, 2, …, M. It is generated as in the following equation:

xij = lbj + rand(0, 1) × (ubj − lbj), j = 1, 2, …, n, (1)

where rand(0, 1) denotes a uniformly distributed random number in [0, 1]. Each monkey’s position is evaluated with the objective function and must lie in the search area between the lower boundary (lb) and the upper boundary (ub); these boundaries apply to all solutions (monkey positions).
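As a minimal illustration, equation (1) can be implemented in MATLAB as follows; the function name and vectorized form are choices made for this sketch, not the paper’s own code.

function X = init_monkeys(M, n, lb, ub)
    % Step A (sketch): each row of X is one monkey position drawn uniformly in [lb, ub].
    % lb and ub are 1-by-n row vectors of lower and upper boundaries.
    X = repmat(lb, M, 1) + rand(M, n) .* repmat(ub - lb, M, 1);
end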

Step B. Climb Process
The climb process changes the monkeys’ positions step by step, from the initial positions to new ones that improve the objective function. The step length parameter a in the climb process controls how far the monkeys’ positions move. The total number of monkeys is M, and the position of the ith monkey is xi = (xi1, xi2, …, xin), i = 1, 2, …, M. The position of a monkey is updated according to the following equation:

yj = xij + a · sign(f′ij(xi)), j = 1, 2, …, n, (2)

where y = (y1, y2, …, yn) is the candidate (updated) position and a (a positive number, here a = 10^−3) is the step length of the climb process. Each monkey evaluates the improvement of the objective function over the climb number of iterations (Nc). The function f′ij is called the pseudogradient of the objective function and is expressed as follows:

f′ij(xi) = [f(xi + Δxi) − f(xi − Δxi)] / (2Δxij), j = 1, 2, …, n, (3)

where Δxi = (Δxi1, Δxi2, …, Δxin) and each Δxij equals a or −a with probability 1/2. The step length in the climb process has a crucial role in the precision of the approximation of the local solution, and the climb process accepts only feasible solutions. If the candidate y is a feasible position for the monkey, xi is updated with y; otherwise, xi does not change.
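A minimal MATLAB sketch of this climb step (for a minimization problem) is given below; it follows the standard monkey-algorithm climb rule implied by equations (2) and (3), and the function name and bound handling are assumptions of this sketch.

function x = climb(x, f, a, Nc, lb, ub)
    % Step B (sketch): pseudogradient-based climb for one monkey (minimization).
    for k = 1:Nc
        dx = a * sign(rand(size(x)) - 0.5);        % Delta x_ij = +a or -a with probability 1/2
        g  = (f(x + dx) - f(x - dx)) ./ (2 * dx);  % pseudogradient, equation (3)
        y  = x - a * sign(g);                      % move against the pseudogradient (minimization)
        if all(y >= lb & y <= ub) && f(y) < f(x)   % accept only feasible improvements
            x = y;
        end
    end
end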

Step C. Watch-Jump Process
This process checks each monkey’s position after the climb process; in other words, it checks whether the position has reached the top. Each monkey looks around to see whether there is a position better than its current one; if so, it jumps from its current position. Otherwise, its position has not yet reached the top. Naturally, this applies to the monkeys with the best positions (close to or at the top). Each monkey therefore considers points within a maximal distance of its current position, where the maximal distance in the watch-jump process is the eyesight parameter b, with b = 0.5. The process is expressed as follows:

yj ∼ U(xij − b, xij + b), j = 1, 2, …, n, (4)

that is, each coordinate of the candidate point y is drawn uniformly within eyesight of the current position. If y is feasible and improves the objective value, the monkey’s position xi is updated with y; otherwise, equation (4) is repeated until an appropriate point y is found. The climb process is then repeated by employing y as the initial position. Thus, each monkey explores up to a maximal distance from its current position.
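The watch-jump rule of equation (4) might be sketched in MATLAB as follows (minimization case); the retry limit max_tries is an assumption introduced for this sketch so that the loop always terminates.

function x = watch_jump(x, f, b, lb, ub, max_tries)
    % Step C (sketch): look within eyesight b for a feasible, better point and jump to it.
    for k = 1:max_tries
        y = x + (2 * rand(size(x)) - 1) * b;       % y_j ~ U(x_j - b, x_j + b), equation (4)
        if all(y >= lb & y <= ub) && f(y) < f(x)   % jump only to feasible improvements
            x = y;
            break;                                 % the climb process is then repeated from y
        end
    end
end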

Step D. Somersault Process
This process enables the monkeys to find new positions (new search domains). The barycentre of all monkeys’ current positions is defined as the pivot, and the monkeys somersault along the direction pointing toward this pivot. The somersault lengths lie in the somersault interval [c, d] = [−1, 1]; the larger |c| and d are, the larger the feasible space the monkeys can search. In this process, a real number s is generated randomly in the somersault interval [−1, 1], and the process is expressed as follows:

pj = (1/M)(x1j + x2j + … + xMj), (5)
yj = xij + s(pj − xij), (6)

where p = (p1, p2, …, pn) is the somersault pivot and j = 1, 2, …, n. If y is feasible, the monkey’s position xi is updated with y; otherwise, this process repeats equations (5) and (6) until a feasible solution y is found.
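A minimal MATLAB sketch of the somersault step over the whole population is shown below; it implements equations (5) and (6) for a box-constrained search space, with the function name chosen for this sketch.

function X = somersault(X, c, d, lb, ub)
    % Step D (sketch): somersault every monkey toward the barycentre pivot.
    p = mean(X, 1);                                % somersault pivot, equation (5)
    for i = 1:size(X, 1)
        while true
            s = c + (d - c) * rand;                % s drawn uniformly from [c, d]
            y = X(i, :) + s * (p - X(i, :));       % candidate position, equation (6)
            if all(y >= lb & y <= ub), break; end  % repeat until y is feasible
        end
        X(i, :) = y;
    end
end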

Step E. Random Perturbation (RP) Process
This process monitors each monkey’s current position, which can become stuck at a local optimum. After the somersault process, a novel random perturbation (RP) process with two control parameters, p1 = 0.5 and p2 = 0.2, is built into the proposed algorithm. Parameter p1 improves a monkey’s current position when it is stuck at a local minimum: when the same or a worse value is found in the search space repeatedly, a tolerance number of iterations (tolX) is allowed for improving the monkey’s position, one dimension at a time. Parameter p2 perturbs the whole current position at once so that the monkey can escape a possible local minimum or search for other (and better) minima. The details of the proposed algorithm are given as pseudocode in Algorithm 1.
The design of the proposed algorithm for solving global numerical optimization problems begins with the population size (M), the boundaries (lb, ub), the eyesight (b), the climb number (Nc), the somersault interval c, d ∈ [−1, 1], and the control parameters (p1, p2); all input parameters are set before the proposed algorithm runs. In the first perturbation, a random dimension index is computed as ceil(rand × D), and the corresponding coordinate is multiplied by (1 + p1 · randn), where randn is a random scalar drawn from the standard normal distribution. This prevents the proposed algorithm from getting stuck in local optima while keeping the monkeys’ positions under control. The second perturbation multiplies the position by (1 + p2 · rand(1, D)), where rand(1, D) is a vector of uniformly distributed random numbers, so that every element of the position vector x is perturbed. In this way, the monkeys’ positions are improved along directions with different perturbations, allowing them to escape possible local minima or to search for other (and better) minima. The detailed steps of the improved control parameters with RP are described in Algorithm 1.

Input to Step E: monkey positions xi (i = 1, 2, …, M) from Steps A–D; parameters p1, p2, tolX
global_min = −1    // −1 selects minimization; a positive value selects maximization
// First perturbation: one randomly chosen dimension, scaled by p1
for i = 1 to M do
  for j = 1 to tolX do
    d ← ceil(rand × D)    // random dimension index
    yi ← xi with yid = xid (1 + p1 · randn)
    if yi is outside [lb, ub] then
      continue    // discard infeasible candidates
    end if
    if global_min > 0 then    // maximization
      if f(yi) > f(xi) then
        xi ← yi
      end if
    else    // minimization
      if f(yi) < f(xi) then
        xi ← yi
      end if
    end if
  end for
end for
// Second perturbation: all dimensions at once, scaled by p2
for i = 1 to M do
  for j = 1 to n do
    yi ← xi .* (1 + p2 · rand(1, D))    // element-wise perturbation of the whole position
    if yi is outside [lb, ub] then
      continue
    end if
    if global_min > 0 then
      if f(yi) > f(xi) then
        xi ← yi
      end if
    else
      if f(yi) < f(xi) then
        xi ← yi
      end if
    end if
  end for
end for
Output: updated monkey positions xi
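For readers who prefer executable code, Step E (for minimization) can be sketched in MATLAB along the following lines; the function name, the objective handle f, and the feasibility test are assumptions of this sketch rather than the paper’s original implementation.

function X = random_perturbation(X, f, p1, p2, tolX, lb, ub)
    % Step E (sketch): two-stage random perturbation of the population X (M-by-D).
    [M, D] = size(X);
    for i = 1:M
        for j = 1:tolX                               % first stage: perturb one random dimension
            y = X(i, :);
            d = ceil(rand * D);                      % random dimension index
            y(d) = y(d) * (1 + p1 * randn);          % Gaussian perturbation scaled by p1
            if all(y >= lb & y <= ub) && f(y) < f(X(i, :))
                X(i, :) = y;                         % keep feasible improvements only
            end
        end
        for j = 1:D                                  % second stage: perturb all dimensions at once
            y = X(i, :) .* (1 + p2 * rand(1, D));    % uniform perturbation scaled by p2
            if all(y >= lb & y <= ub) && f(y) < f(X(i, :))
                X(i, :) = y;
            end
        end
    end
end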

3. Results and Discussion

3.1. Benchmark Functions

The performance of the improved monkey algorithm (MA+) is evaluated using an implementation in MATLAB (2017). The computer used has the following specifications:
(1) CPU: i5-6200U
(2) CPU speed: 2.30 GHz (2401 MHz)
(3) RAM: 4.00 GB
(4) OS: Microsoft Windows 10

The information for the 12 benchmark functions is listed in Table 1, giving the name, equation, and range of each function. The improved monkey algorithm (MA+) was evaluated on these 12 benchmark functions, namely, the sphere function (F1), Schwefel 2.22 function (F2), Schwefel 1.2 function (F3), Rosenbrock function (F4), Ackley function (F5), Griewank function (F6), sum squares function (F7), Dixon-Price function (F8), Bent Cigar function (F9), sum of different powers function (F10), Holzman function (F11), and hyperellipsoid function (F12). The performance of the improved monkey algorithm and that of the comparative (metaheuristic) algorithms are evaluated on these 12 benchmark functions in the next sections.
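For illustration, two of the functions in Table 1 can be written as MATLAB handles as below; this is only an example of the standard definitions of the sphere and Ackley functions, and the exact variants and ranges used in the experiments are those listed in Table 1.

% Example definitions of two benchmark functions (standard forms).
sphere = @(x) sum(x.^2);                                  % F1: global minimum 0 at x = 0
ackley = @(x) -20 * exp(-0.2 * sqrt(mean(x.^2))) ...
         - exp(mean(cos(2 * pi * x))) + 20 + exp(1);      % F5: global minimum 0 at x = 0

% Evaluating both at the origin of a 30-dimensional search space:
x0 = zeros(1, 30);
fprintf('sphere(x0) = %g, ackley(x0) = %g\n', sphere(x0), ackley(x0));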

3.2. The Performance of Improved Monkey Algorithm on Different Dimensional Functions

The performance of the improved monkey algorithm (MA+) and that of the original monkey algorithm (MA) were evaluated on the 12 benchmark functions in three classes of dimension: low-dimensional (30D), medium-dimensional (60D), and high-dimensional (90D). Both algorithms use the same population size and maximum number of iterations, and their maximum numbers of function evaluations are equal. Each algorithm is run 100 times independently under all conditions. The parameters and common conditions for both algorithms are set as follows: population size M = 50, number of iterations Ite. = 50, and dimension D = 30, 60, and 90. The best mean and the best standard deviation of the experimental results are marked in bold for each function.
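The protocol above corresponds to a test harness along the following lines; ma_plus stands in for the proposed algorithm (as in the sketch of Section 2), and the sphere bounds shown here are only an example, since the actual ranges are those of Table 1.

% Sketch of the test harness: 100 independent runs per function and dimension,
% reporting the best, mean, and standard deviation of the final objective value.
M = 50;  iters = 50;  runs = 100;  dims = [30 60 90];
f = @(x) sum(x.^2);                                  % example objective: sphere (F1)
for D = dims
    lb = -100 * ones(1, D);  ub = 100 * ones(1, D);  % example bounds; see Table 1
    results = zeros(runs, 1);
    for r = 1:runs
        results(r) = ma_plus(f, M, iters, lb, ub);   % assumed MA+ entry point
    end
    fprintf('D = %d: best = %.3e, mean = %.3e, std = %.3e\n', ...
            D, min(results), mean(results), std(results));
end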

The first part of this experiment was conducted on the 30, 60, and 90 dimensions. The experimental results are given in Table 2 for the 12 benchmark optimization functions (F1–F12), and the improved monkey algorithm (MA+) achieves the best results in terms of the best, mean, and standard deviation values. The results therefore show that, on all dimensions, the performance of the improved monkey algorithm is much better than that of the original monkey algorithm (MA). The results are demonstrated more intuitively by the convergence plots and global search ability of the two algorithms (MA and MA+); the convergence plots of both algorithms on the 30-dimensional functions are shown in Figures 2(a)–2(l) for all benchmark functions.

Table 2 shows the best mean optimization results for all functions (F1–F12) on 30 dimensions; the proposed improved algorithm obtained 1.03E−40, 1.09E−23, 1.22E−36, 2.69E+01, 9.55E−15, 0.00E+00, 4.68E−38, 6.67E−01, 1.40E−31, 1.9E−196, 3.49E−54, and 1.15E−37, respectively. Additionally, Figures 2(a) to 2(l) reveal that the original monkey algorithm performs much more poorly on the 30-dimensional functions, while the proposed improved algorithm shows distinguished search ability, reaches the global optimum, and converges quickly on all functions.

In addition, against the twelve 60-dimensional functions the proposed improved algorithm obtained 8.20E−29, 8.35E−18, 1.14E−24, 5.71E+01, 3.53E−14, 0.00E+00, 5.54E−28, 6.67E−01, 5.59E−21, 9.3E−190, 4.49E−31, and 1.35E−26, respectively. Finally, against the twelve 90-dimensional functions it obtained 1.16E−23, 2.52E−14, 1.45E−19, 8.71E+01, 1.33E−11, 0.00E+00, 2.20E−21, 6.67E−01, 6.76E−15, 1.3E−165, 2.90E−23, and 1.97E−20, respectively.

3.3. Comparison of MA+ with Metaheuristic Algorithms on Different Dimensions

The improved monkey algorithm (MA+) was compared with many metaheuristic optimization algorithms on the 12 benchmark optimization functions over different dimensions. The information on the benchmark functions is listed in Table 1. All algorithms use the same information, the same initial parameters, the same dimensions, and the same number of iterations, and their maximum numbers of function evaluations are equal [30, 42]. The best comparative results are marked in bold for each function, and all details are shown in Tables 3–6.

Tables 3 to 5 show the experimental comparative results for all functions (F1–F12) on the 30, 60, and 90 dimensions. Each algorithm is run 100 times independently for all 12 benchmark optimization functions on all these dimensions in these tables. However, Table 6 shows the experimental comparative results for some functions (F1–F5) on 20, 50, and 100 dimensions. Each algorithm is run 30 times independently for each of 5 benchmark optimization functions on all these dimensions in Table 6.

The initial parameters and common conditions for all algorithms were set as follows: the population size is 50, and each algorithm runs for 50 iterations.

In the first stage, the performance of the proposed algorithm is compared with that of a selected collection of comparative algorithms: ABC, DA, and HAD. The best mean and the best standard deviation of the experimental results are shown in Table 3 for each function. Table 3 shows that the proposed algorithm has an outstanding performance in the majority of the evaluation cases for the F1–F7 and F9–F12 benchmark functions. However, on the Dixon-Price function (F8) only, the HAD algorithm matches the proposed algorithm for all dimensions, and the best standard deviation on the Dixon-Price function was obtained by the HAD algorithm. All details are shown in Table 3.

In the second stage, the performance of the proposed algorithm is compared with those of some metaheuristic optimization algorithms that have been evaluated. The included algorithms are ACO, BAT, BBO, DE, GA, and PSO. The best mean of experimental results is listed in Table 4 for each function.

In the third stage, the performance of the proposed algorithm is compared with those of other metaheuristic optimization algorithms that have been evaluated. The included algorithms are EHO, KH, MFO, MSA, SCA, and WOA. The best mean of experimental results is listed in Table 5 for each function.

Finally, the performance of the proposed algorithm was also compared with the performances of three algorithms, namely, the monarch butterfly optimization (MBO) algorithm, MBO with greedy strategy and self-adaptive crossover operator (GCMBO), and MBO with opposition-based learning and random local perturbation (OPMBO) using five benchmark functions, and all details are listed in Table 6.

To sum up, the comparative results show that the proposed algorithm reaches much better solutions, has the best convergence behavior, and escapes local optima when compared with ACO, BAT, BBO, DE, GA, PSO, EHO, KH, MFO, MSA, SCA, WOA, MBO, GCMBO, and OPMBO. All those comparative results show an outstanding performance of the proposed algorithm in the majority of the evaluation cases. All details are listed in Tables 4–6.

4. Conclusions

This paper presented a novel metaheuristic search and cognitively inspired algorithm based on the monkey algorithm. The proposed algorithm can be employed for solving various kinds of optimization problems and was evaluated extensively on 12 benchmark optimization functions over different dimensions for each function. A new random perturbation was defined to improve the control parameters and was built into the proposed algorithm. The main advantage of the control parameters was that they efficiently prevented the improved monkey algorithm from getting stuck in local optima and allowed it to find the global optimal solution for 8 benchmark functions, namely, the sphere function (F1), Schwefel 2.22 function (F2), Schwefel 1.2 function (F3), sum squares function (F7), Bent Cigar function (F9), sum of different powers function (F10), Holzman function (F11), and hyperellipsoid function (F12), as shown in Figures 2(a)–2(c), 2(g), and 2(i)–2(l), respectively. Moreover, on the Ackley function (F5) in Figure 2(e) and the Griewank function (F6) in Figure 2(f), the algorithm obtained 8.88E−16 and 0.00E+00, respectively, within very few iterations. These results are the best solutions for these functions, and the algorithm reached the global optimum early without getting stuck in local optima. However, the Rosenbrock function (F4) in Figure 2(d) and the Dixon-Price function (F8) in Figure 2(h) caused the algorithm to become stuck and to perform poorly in the earliest iterations only, although the proposed algorithm still obtained the best solutions for these functions at the maximum allowable number of iterations. Briefly, compared with the original monkey algorithm, the search strategy of the proposed algorithm has generally proven to reach the global optimum, to converge well, and to perform much better on many optimization problems within a low number of iterations.

The performance of the improved monkey algorithm was compared with that of many metaheuristic optimization algorithms, a collection of 18 optimizers in total. The comparative results included simple statistics (best and mean values) and convergence plots. All those comparative results showed that the proposed algorithm had an outstanding performance in the majority of the evaluation cases.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that he has no conflicts of interest.

Acknowledgments

The author would like to acknowledge the Faculty of Engineering and Architecture, Department of Computer Engineering, Istanbul Gelisim University, Avcılar, Istanbul, Turkey.