1 Introduction

Meta-heuristic algorithms can be applied to train neural networks for solving real-life problems, though each algorithm has its own limitations. For instance, some of the prominent meta-heuristic algorithms that have recently been widely used to optimize neural network accuracy include Particle Swarm Optimisation (PSO) [5], the Bat Algorithm (BA) [9], and Firefly (FF) [12]. However, the literature cannot identify a single algorithm as the best for solving all optimization problems, a fact also proved by the well-known No Free Lunch (NFL) theorem [29]. The theorem logically proves that no meta-heuristic is best suited for solving all types of optimization problems. In other words, a group of meta-heuristic algorithms may perform best on one set of problems while giving poor performance on a different set of optimization problems. Hence, the NFL theorem has opened the door for researchers to keep developing new algorithms in pursuit of the best solution for different kinds of problems. Besides, the challenges of heavy computational cost, premature convergence, tuning of mutation and crossover rates, and the time taken in fitness evaluation motivate improving current algorithms or developing new ones.

Artificial Neural Networks (ANNs), which are commonly used in pattern recognition, computer vision, classification, and solving real-world (linear or non-linear) problems, normally need to be trained or optimized using meta-heuristic algorithms, which are mainly classified as single-solution-based or population-based. Population-based meta-heuristics have been widely used recently due to their ability to cooperatively find the optimal solution over the course of the training process. This kind of algorithm is mainly founded on the concept of Swarm Intelligence (SI), proposed by [30]. The behavior of these meta-heuristics is formulated based on the evolutionary concept of SI agents.

Evolutionary and nature-inspired algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the Bat Algorithm (BA), Firefly (FF), and the Grey Wolf Optimizer (GWO) are widely used for optimization problems in different fields. For instance, feature selection is a vital process that directly affects the accuracy of a classification model, as does optimizing model parameters via meta-heuristic algorithms; feature selection through this process constitutes a distinct research dimension in its own right. However, widely used classifiers still have limitations: they can be computationally expensive, algorithmically complex, and memory-intensive, and selecting appropriate kernel parameters is often tricky. In particular, a meta-heuristic algorithm that handles one problem with high accuracy may not produce equally inspiring results on another problem with different requirements.

Well-known meta-heuristic algorithms have been designed by mimicking the behavior of animals and insects [1,2,3,4,5,6,7,8,9,10,11, 16,17,18, 21, 22, 32, 33]. Ant Colony Optimization (ACO) [2] simulates the social life of ants, while PSO [3] derives from the swarming behavior of animals such as birds and fish. Other famous meta-heuristics include the Artificial Bee Colony Algorithm [4], the krill herd algorithm [5], BA [6], social spider optimization [7], Chicken Swarm Optimization (CSO) [8], the firefly algorithm [9], the Multi-Verse Optimizer [26], the quantum multiverse optimization algorithm [27], and the chaotic multi-verse optimizer [28]. The investigation of various heuristic algorithms to determine the best result is motivated by the need to tune the feature these algorithms share: the division of the search process into two phases, exploration and exploitation. Finding an appropriate balance between these two phases is still an open challenge, owing to the stochastic nature of meta-heuristic algorithms. One widely used recent approach is the family of evolutionary or nature-inspired algorithms, which originates from meta-heuristic search motivated by theories of biological evolution and the actions of natural swarms. Table 1 gives a brief description of the nature-inspired algorithms.

Table 1 Comparison of meta-heuristic algorithms

Similarly, cyber-physical systems (CPS) tightly integrate computation, communication, and control engineering with physical elements. CPS such as medical CPS, transportation CPS, and energy CPS can benefit from proper design and optimization techniques. All CPS are emerging faster than before due to progress in real-time computing, communications, control, and artificial intelligence. Multi-objective design optimization approaches help to maximize the efficiency, capability, performance, and safety of CPS. The approaches proposed in this paper can be applied to the above-mentioned CPS for improved efficiency and performance, where time-varying sampling patterns, sensor scheduling, real-time control, feedback scheduling, task and motion planning, and resource sharing can be optimized.

Therefore, this paper proposes a new algorithm with the ability to solve various kinds of optimization problems. The proposed algorithm, named the Multi-Verse Algorithm (MVA), is modelled and simulated using the inspiration of multiverse theory. Computational results show that MVA produces efficient and feasible solutions for different problems. MVA is built on simple concepts from multiverse theory and is implemented on the MATLAB platform. The algorithm is organized around an initial population, the explosion of solutions, and principle concepts such as feasible and infeasible regions. Therefore, the algorithm has low computational complexity in comparison with state-of-the-art approaches.

MVA has some remarkable new features: it is inspired by a scientific theory rather than the behavior of animals or insects, which gives it more stable and accurate behavior. Moreover, MVA can solve difficult optimization problems such as bi-level programming problems. The computational results show that the algorithm generalizes across different kinds of problems.

MVA is built from two conceptions: a population of solutions and the theory of parallel worlds. The algorithm starts from feasible and infeasible solutions and proceeds using the main conceptions of multiverse theory. In fact, details of the theory are used in all steps of MVA: creation of the initial population, explosion of the solutions (big bangs), and movement of universes to find the optimal solution. MVA is compared with other classic and meta-heuristic approaches, and the comparison confirms the efficiency of the proposed meta-heuristic algorithm.

The rest of the paper is organized as follows: Section 2 presents the source of inspiration of the proposed MVA. Section 3 details the conceptual design and simulation of the proposed MVA algorithm from multiverse theory. Section 4 presents computational results of the MVA technique. Section 5 compares our MVA with existing well-known meta-heuristic algorithms, and Section 6 shows the convergence behavior of MVA. Finally, Section 7 concludes the paper.

1.1 Key differences of MVA and MVO

The MVA proposed in this paper is completely different from MVO [26]. In particular, the differences fall into three main aspects: concepts and inspiration, mathematical formulation, and the steps of the algorithms. Table 2 summarizes the key differences between the proposed MVA and the MVO in [26].

Table 2 Key Differences of MVA and MVO in [26]

1.1.1 Concepts and inspiration

The main inspirations of MVO [26] are the white hole, black hole, and wormhole, whereas the main inspirations of our algorithm, MVA, are parallel universes and distinct big bangs.

The concepts mathematically modelled in MVO are exploration, exploitation, and local search. The concepts mathematically modelled in MVA are the creation of the initial population, the explosion of the solutions, and the rotation of universes to find the optimal solution. The following concepts of the multiverse were never discussed in [26], because the inspirations of the two algorithms are different:

  1. In multiverse theory, all universes come from a very small and dense particle. Thus, the proposed MVA is inspired by this idea to create the next population very near the solutions of the initial population.

  2. In the next step, all solutions, which were very near to each other, are distributed in the feasible region, in the same way as the big bangs.

  3. The best solution of each area is found, and these solutions construct the next population. Each area corresponds to a universe in multiverse theory.

  4. In each iteration, better solutions are surrounded by more new solutions of the next population, because in multiverse theory a universe with more dark energy has more galaxies and planets.

  5. Likewise, the proposed algorithm focuses on the beginning of the world and its conversion into the present complexity of the world. In other words, MVA aims to answer the question: how did the world start and change with the passing of time?

1.1.2 Mathematical formulation

In reference [26], the mathematical model is formulated for:

  1. The white/black hole tunnels and the exchange of objects between universes.

  2. Maintaining the diversity of universes and performing exploitation.

In our algorithm, the formulation and mathematical model have been proposed for:

  a. Initial population: for each solution, some solutions (their number based on the rank of that solution) are created randomly, very near the solution.

  b. Explosion of solutions: each solution is changed in the direction of the vector connecting that solution and the solution of the previous population.

1.1.3 The procedure

The procedure of the proposed MVA is based on: initial population, ranking of solutions, making dense solutions, big bangs, finding the best solutions, and termination. This is completely different from the steps of the algorithm in reference [26]:

  1. The higher the inflation rate, the higher the probability of having a white hole.

  2. The higher the inflation rate, the lower the probability of having black holes.

  3. Universes with a higher inflation rate tend to send objects through white holes.

  4. Universes with a lower inflation rate tend to receive more objects through black holes.

  5. The objects in all universes may move randomly towards the best universe via wormholes, regardless of the inflation rate.

2 Source of inspiration

This section presents a simulation of multiverse theory as an optimizer, named the multiverse algorithm (MVA). Here we explain the principle concepts of MVA, the mathematical equations, and the process the algorithm follows to find the optimal solution of optimization problems. The basic idea of multiverse theory developed from string theory. The theory states that there are several universes in the world; more particularly, in multiverse theory more than one big bang exists besides the big bang of our universe [10].

Meta-heuristics have main concepts that are simulated from the behavior of animals or insects, or from natural events. The most important concept of the ant colony is the pheromone of ants; particle swarm optimization is based on the global best; the main concept of the genetic algorithm is recombination; and the warming of eggs in the laying chicken algorithm (LCA) and the explosion in the big bang algorithm are the most important concepts of those algorithms. In this paper, we have inspired the MVA algorithm mainly from the existence of several worlds and big bangs.

As mentioned previously, according to multiverse theory there are several universes. Thus, the MVA algorithm starts with a set of solutions as the initial population. In multiverse theory, all universes come from a very small and dense particle, so it is a natural idea, simulated by our MVA algorithm, to create the next population very near the solutions of the initial population. In the next step, all solutions, which were very near to each other, are distributed in the feasible region in the same way as the big bangs.

Eventually, the best solution of each area is explored and found, and these solutions construct the next population. Each area corresponds to a universe in multiverse theory. In each iteration, better solutions are surrounded by more new solutions of the next population, because in multiverse theory a universe with more dark energy has more galaxies and planets.

Likewise, the proposed algorithm focuses on the beginning of the world and its conversion into the present complexity of the world. In other words, MVA intends to answer the question: “how did the world start and change with the passing of time?” MVA is population-based: in each iteration, the algorithm changes and modifies the population, i.e., the set of solutions.

3 The proposed multiverse algorithm (MVA)

This section presents the details of the proposed MVA technique in terms of its process of generating solutions and populations, followed by the process of the explosion of solutions, and then the main procedure of MVA. Figure 1 illustrates the procedure of the proposed MVA.

Fig. 1
figure 1

The flowchart of proposed MVA

3.1 The solutions and populations

The initial population is created in the feasible region, in line with the first hypothesis of multiverse theory, which states that there are parallel worlds, not just one. In fact, each solution in MVA represents a universe in multiverse theory. In multiverse theory, each universe is characterized by its dark energy, and this suggests sorting the solutions of the population and assigning a rank to each solution based on its objective function.

The number of random solutions is defined according to the rank of the original solution. In fact, the algorithm tries to generate more solutions close to the relatively better solutions. This is taken directly from the concept of multiverse theory, which states that a universe with more dark energy is larger and has more galaxies. For each solution $x_i$, the random solutions $x_j$ are created according to the inequality $\|x_i - x_j\| \leq \epsilon$, where $\|\cdot\|$ is the familiar Euclidean norm [31].

In $\mathbb{R}^n$, $i = 1,2,\ldots,n$ and $j = 1,2,\ldots,m$; $\epsilon$ is a small positive number, $n$ is the number of solutions in the previous population, and $m$ is defined according to the rank of $x_i$: it is larger for solutions with a better rank. This procedure is illustrated in Algorithm 1. In the algorithm, $i$ indexes the solutions generated randomly at the beginning of the MVA technique, $j$ indexes the solutions created for each previous solution, and the value of $j$ is calculated based on the objective function. Further, $k$ is the number of iterations of the algorithm.
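The ranked neighbour-generation step above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' MATLAB implementation; the helper name `next_population`, the rank-to-count rule `base * (n - rank)`, and the per-coordinate box sampling (a simple stand-in for the Euclidean $\epsilon$-ball) are all our own assumptions.

```python
import random

def next_population(solutions, f, eps=0.2, base=4):
    # Sort solutions by objective value (minimization), so rank 0 is best.
    ranked = sorted(solutions, key=f)
    n = len(ranked)
    children = []
    for rank, x in enumerate(ranked):
        # Better-ranked solutions spawn more neighbours (larger m),
        # mimicking universes with more dark energy having more galaxies.
        m = base * (n - rank)
        for _ in range(m):
            # Sample within a small box around x (a stand-in for the
            # eps-ball of the inequality ||x_i - x_j|| <= eps).
            children.append([xi + random.uniform(-eps, eps) for xi in x])
    return children
```

With three solutions and `base=2`, the best solution receives six neighbours, the second four, and the worst two.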

Algorithm 1

As illustrated in Fig. 2a, 24 feasible solutions in the initial population (blue points) are shown for a given problem, and the next population (green points) is distributed close to them according to the solution ranking in the initial population, with $\epsilon = 0.2$.

Fig. 2
figure 2

Initial population and its changing process in simulation of big bangs

3.2 Explosion of solutions

In the explosion process of MVA, each solution of the current population (green points) moves away from the corresponding solution of the previous population (blue points) after the big bangs. This is simulated from the big bang of each universe in multiverse theory. In fact, each solution is changed in the direction of the vector connecting the current solution and the solution of the previous population. These movements follow the equation of movement $x_j = x_i + \lambda d_{ij}$, where $d_{ij}$ is the distance between the points $x_i$ and $x_j$, and $\lambda$ is a constant.

Moreover, MVA explodes all solutions which are near to previous solutions. These solutions are shown as black points in Fig. 2b, and the best solution in each universe is shown as a red point. As illustrated in Fig. 2b, the algorithm continues by tagging the red points as the new population. The solutions in the current generation (red points) are better than the solutions obtained by the previous population (the blue points of the initial population). Algorithm 2 shows the pseudo-code of the explosion-of-solutions stage.
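The explosion step can be sketched as follows. This is a hedged Python illustration (the paper's implementation is in MATLAB); the function name `explode` and the reading of the movement as $x_i + \lambda(x_j - x_i)$, i.e. pushing the child along the vector joining parent and child, are our assumptions.

```python
def explode(parent, child, lam=1.5):
    # Move `child` away from `parent` along the line joining them,
    # simulating the big bang: with lam > 1 the child ends up farther
    # from the parent than it started.
    return [xi + lam * (cj - xi) for xi, cj in zip(parent, child)]
```

For instance, `explode([0, 0], [1, 0], lam=2.0)` doubles the child's distance from the parent.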

Algorithm 2

In order to solve multi-objective problems, Algorithm 2 updates and evaluates the solutions based on (1):

$$ BS=\left\{\begin{array}{ll} x_{1} ~~~~ if~~a<0\\ x_{2} ~~~~ if~~a>0 \end{array}\right. $$
(1)

If all objective functions represent minimization problems, BS is defined as the better solution between $x_1$ and $x_2$, and $f = (f_1,f_2,\ldots,f_n)$, then $a$ is defined in (2):

$$ a=\sum\limits_{i=1}^{n} (f_{i}(x_{1})-f_{i}(x_{2})) $$
(2)

and

$$ BS=\left\{\begin{array}{ll} x_{1} ~~~~ if~~a>0\\ x_{2} ~~~~ if~~a<0 \end{array}\right. $$
(3)

Equation (3) applies when all objective functions represent maximization problems.
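Equations (1)-(3) amount to summing the per-objective differences and keeping whichever solution the sign of $a$ favours. A minimal Python sketch (the function name and the list-of-callables interface are our own):

```python
def better_solution(x1, x2, objectives, maximize=False):
    # Eq. (2): a = sum_i (f_i(x1) - f_i(x2)).
    a = sum(fi(x1) - fi(x2) for fi in objectives)
    if maximize:
        return x1 if a > 0 else x2   # Eq. (3), maximization
    return x1 if a < 0 else x2       # Eq. (1), minimization
```

Note that ties (a = 0) fall through to $x_2$ here; the paper's equations leave that case unspecified.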

3.3 The procedure of the MVA

This section presents the procedure of the proposed MVA meta-heuristic technique. Algorithm 1 provides the initial solutions and population as stated in steps 1 to 3, while steps 4 to 6 are handled by Algorithm 2.

  1. The initial population is generated throughout the feasible region. $N$ is the number of solutions, $k = 0$, $\epsilon$ is a given small positive number, and $i = 1$. This step is illustrated in Fig. 3a.

  2. All solutions are sorted according to their objective function. In this step, a specific rank is assigned to each solution. This step is illustrated in Fig. 3b.

  3. For each solution $x_i$ from the initial population, some solutions are generated close to $x_i$. The number of these solutions depends on the rank of $x_i$ from step 2; for example, most of the solutions in the current population gather near the best solution of the previous population. In fact, the current population is distributed among the solutions of the previous generation. This step is illustrated in Fig. 3c.

  4. All solutions in the current population move away from the solutions of the previous generation. Here, the solutions are exploded into the space, as illustrated in Fig. 3d.

  5. Find the best solution of the current population. If $j < 2$, let $j = j + 1$ and go to step 2. This step is illustrated in Fig. 3e.

  6. If $d(f(x_{j+1}),f(x_j)) < \epsilon$, the algorithm finishes and $x_{j+1}$ is the best solution found by MVA, where $x_j$ is the best solution in the $j$th iteration. Otherwise, let $j = j + 1$ and go to step 2. The metric $d$ is defined in (4) [31], and Fig. 3 shows the process of the algorithm finding the optimal solution in $\mathbb{R}^2$ (2 dimensions).

    $$ \max_{i} |f(x^{i}_{j+1})-f(x^{i}_{j})| = d(f(x_{j+1}),f(x_{j})) $$
    (4)
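Putting steps 1 to 6 together, the whole loop can be sketched in Python. This is a toy re-implementation under stated assumptions, not the authors' code: the neighbour-count rule, the box sampling in place of the $\epsilon$-ball, the explosion factor `lam`, and the simplified termination test on the best objective value are all our choices.

```python
import random

def mva(f, dim, lo, hi, n=24, eps=0.1, lam=1.5, base=3, max_iter=100):
    # Step 1: initial population spread over the feasible box [lo, hi]^dim.
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=f)
    best_f = f(best)
    for _ in range(max_iter):
        ranked = sorted(pop, key=f)                 # step 2: rank solutions
        children = []
        for rank, x in enumerate(ranked):           # step 3: dense solutions
            for _ in range(base * (len(ranked) - rank)):
                near = [xi + random.uniform(-eps, eps) for xi in x]
                # step 4: big bang, push each child away from its parent
                children.append([xi + lam * (ci - xi)
                                 for xi, ci in zip(x, near)])
        pop = sorted(children, key=f)[:n]           # step 5: keep the best
        new_f = f(pop[0])
        if abs(new_f - best_f) < eps:               # step 6: termination
            best, best_f = pop[0], new_f
            break
        best, best_f = pop[0], new_f
    return best, best_f
```

On the sphere function $f(x) = \sum_i x_i^2$, for example, the loop quickly settles near the origin.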
Fig. 3
figure 3

Steps of the MVA to obtain optimal solution R2

4 Computational results

In this section, both continuous small-size and discrete large-size optimization problems are solved.

4.1 Continuous problems

In this section, almost all kinds of continuous optimization problems are solved: constrained, unconstrained, linear, non-linear, multi-level, and multi-objective.

Example 1

Consider Ackley Function (AF):

$$ \min\; -20\exp\left(-0.2\sqrt{0.5(x^{2}+y^{2})}\right) - \exp\left(0.5(\cos(2\pi x)+\cos(2\pi y))\right) + \exp(1) + 20 $$
(5)

The proposed MVA is applied to solve the optimization problem in (5). Table 3 shows how the algorithm reaches the optimal solution (0,0) after just two iterations. Further, the process of the algorithm, the initial population, the optimal solution of each generation, and the constraints of the problem are shown for two iterations in Fig. 4. As can be seen, the optimal solution, the big red point in Fig. 4d, is surrounded by the solutions of generation 2.
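For reference, the Ackley function in (5) is straightforward to code; the sketch below (Python rather than the paper's MATLAB) confirms the global minimum of 0 at (0, 0).

```python
import math

def ackley(x, y):
    # Ackley function as in Eq. (5); global minimum is 0 at (0, 0).
    return (-20 * math.exp(-0.2 * math.sqrt(0.5 * (x * x + y * y)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x)
                              + math.cos(2 * math.pi * y)))
            + math.exp(1) + 20)
```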

Fig. 4
figure 4

Generations move to find optimal solution by MVA- Example 1

Table 3 Results of MVA for Ackley Function - Example 1

Example 2

Consider Hölder Table Function (HTF):

The optimization problem represented by (6) has been solved by MVA. The result is shown in Table 4. The process of the algorithm, the initial population, the optimal solution of each generation, and the constraints of the problem are shown for two iterations in Fig. 5. The Hölder Table Function has four global optimal solutions, (8.05502, 9.66459), (-8.05502, 9.66459), (8.05502, -9.66459), and (-8.05502, -9.66459), with objective function value -19.2085. The proposed algorithm obtains (-8.05502, -9.66459) after just two iterations.

$$ \min\; -\left|\sin(x)\cos(y)\exp\left(\left|1-\sqrt{x^{2}+y^{2}}/\pi\right|\right)\right| $$
(6)
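The Hölder Table Function in (6) can be checked numerically; the Python sketch below evaluates it at one of the four stated optima (the tolerance choice is ours).

```python
import math

def holder_table(x, y):
    # Hoelder Table Function as in Eq. (6); four symmetric global minima
    # with objective value about -19.2085.
    return -abs(math.sin(x) * math.cos(y)
                * math.exp(abs(1 - math.sqrt(x * x + y * y) / math.pi)))
```

The sign symmetry of sine and cosine makes the four optima exactly equal in value.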
Fig. 5
figure 5

Generations move to find optimal solution by MVA- Example 2

Table 4 Results of MVA for Hölder Table Function - Example 2

Example 3

Consider Mishra’s Bird Function (MBF):

$$ \min\; \sin(y)\exp((1-\cos(x))^{2}) + \cos(x)\exp((1-\sin(y))^{2}) + (x-y)^{2} $$
(7)

The problem has been solved by MVA; the results are shown in Table 5, and the process of the algorithm, the initial population, the optimal solution of each generation, and the constraints of the problem are shown for two iterations in Fig. 6. The global optimum of Mishra's Bird Function is (-3.1302468, -1.5821422), with objective function value -106.7645367. MVA finds the optimal solution within two populations, as shown in Table 5 and as the large red point in Fig. 6d.
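The stated optimum can be verified numerically. The sketch below uses the standard definition of Mishra's Bird function (note the sin/cos pairing: it is this standard pairing, not a swapped one, that reproduces the reported optimal value of about -106.7645).

```python
import math

def mishra_bird(x, y):
    # Standard (unconstrained) Mishra's Bird function; global minimum
    # about -106.7645 at (-3.1302468, -1.5821422).
    return (math.sin(y) * math.exp((1 - math.cos(x)) ** 2)
            + math.cos(x) * math.exp((1 - math.sin(y)) ** 2)
            + (x - y) ** 2)
```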

Fig. 6
figure 6

Generations move to find optimal solution by MVA- Example 3

Table 5 Results of MVA for Mishra's Bird Function - Example 3

Example 4

[13]:

Consider the following linear bi-level programming problem:

$$ \begin{array}{@{}rcl@{}} \min x-4y\\ \min y\\ x+y\geq 3\\ -2x+y\leq 0\\ 2x+y\leq 12\\ 3x-2y\leq 4\\ x, y\geq 0 \end{array} $$
(8)

Using the Karush–Kuhn–Tucker (KKT) conditions, the problem is converted to the following single-level problem:

$$ \begin{array}{@{}rcl@{}} \min x-4y\\ -\lambda_{1} +\lambda_{2} +\lambda_{3} -2\lambda_{4} =-1\\ \lambda_{1}(-x-y+3)=0\\ \lambda_{2}(-2x+y)= 0\\ \lambda_{3}(2x+y-12)=0\\ \lambda_{4}(3x-2y-4)=0\\ - x-y+3\leq 0\\ -2x+y\leq 0\\ 2x+y-12\leq 0\\ 3x-2y-4\leq 0\\ x, y, \lambda_{1}, \lambda_{2}, \lambda_{3}, \lambda_{4}\geq 0 \end{array} $$
(9)

The bi-level programming problem is difficult because two objective functions must be optimized at two different levels at the same time, so proposing a method that can solve this kind of problem is significant. According to Table 6, MVA finds the same optimal solution as exact algorithms, and the number of iterations taken to find it is remarkably low. Also, the solution proposed by LS and TM [13], (3.9, 4), is feasible for all constraints of the second level of the problem, but it is infeasible for the bi-level programming problem. The behavior of the solutions, the constraints of the problem, and the optimal solution are shown in Fig. 7.
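The infeasibility of (3.9, 4) for the bi-level problem is easy to verify: for fixed x, the lower level (min y over the shared constraints of (8)) has a closed-form optimum, and at x = 3.9 that optimum is y = 3.85, not 4. A small Python check (the helper name and the closed-form bounds, derived from the constraints in (8), are ours):

```python
def lower_level_argmin(x):
    # Lower level of (8): min y subject to x + y >= 3, -2x + y <= 0,
    # 2x + y <= 12, 3x - 2y <= 4, y >= 0, with x fixed.
    lo = max(3 - x, (3 * x - 4) / 2, 0.0)   # lower bounds on y
    hi = min(2 * x, 12 - 2 * x)             # upper bounds on y
    return lo if lo <= hi else None         # None if no feasible y

# At x = 3.9 the rational reaction is y = 3.85, so the point (3.9, 4)
# of [13] satisfies the constraints but is not lower-level optimal.
```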

Fig. 7
figure 7

Process of finding optimal solution by MV- Example 4

Table 6 Comparison of MVA and other methods- Example 4

More examples are solved by MVA; numerical results and the behavior of populations are shown in Tables 7 and 8 for Examples 5 and 6.

Fig. 8
figure 8

Behavior of populations to get optimal solution by applying proposed MVA for example 5

Table 7 Comparison of MVA and other methods in Examples 5 and 6

Example 5

[15]

Consider the following linear programming problem (Fig. 8):

$$ \begin{array}{@{}rcl@{}} \min -3x_{1}+x_{2}\\ x_{1}+2x_{2}\leq 4\\ -x_{1}+x_{2}\leq 1\\ x_{1}, x_{2}\geq 0 \end{array} $$
(10)

Example 6

[9] (Non-linear)

Consider the following non-linear unconstrained optimization problem:

$$ \max\; e^{-(x-4)^{2}-(y-4)^{2}} + e^{-(x+4)^{2}-(y-4)^{2}} + 2e^{-x^{2}-y^{2}} + 2e^{-x^{2}-(y+4)^{2}} $$
(11)

Example 7

[23] (Multi-Objective):

Here, MVA is used to solve the Deb, Thiele, Laumanns and Zitzler (DTLZ) benchmark problems. The behavior of the algorithm in finding the Pareto optimum of the DTLZ1 problem is shown in Fig. 9. The feasibility of the algorithm is clear from the initial population, because some of its solutions have already reached the Pareto optimum. Moreover, the efficiency of the algorithm is obvious from a comparison of Fig. 9a and c: most solutions are far from the Pareto optimum at first, but over the course of the proposed algorithm the solutions achieve Pareto optimality. Further, Fig. 9c shows that the last population has surrounded the Pareto optimal solutions.

Fig. 9
figure 9

Behavior of populations to get Pareto optimal solution by MVA for DTLZ1 with k = 2

For single-objective problems, we use the procedure of the proposed MVA technique as given; for multi-objective problems, a set of solutions is generated in the feasible region and the MVA procedure is applied until the algorithm reaches the Pareto optimal solutions, as shown in Fig. 9.

Table 8 compares the best solutions obtained for the Pareto optima of the DTLZ problems by MVA and by ParEGO, the method used in reference [24].

Table 8 Comparison of MVA and other methods for DTLZ problems

To evaluate the performance of the proposed algorithm, the Hyper-Volume (HV) is used as a performance metric in Table 9. The HV metric simultaneously measures the convergence of many-objective optimization results. In Table 9, the HV values are normalized to [0,1] by dividing each HV value by that of the corresponding reference point; thus, a higher HV value indicates better performance on the corresponding many-objective optimization problem. In the simulation, the population size is set to 240, the maximum number of iterations is 100, the epsilon value is 0.1, and each algorithm is run 30 times. Table 9 illustrates the performance of the proposed MVA compared with existing algorithms on test problems with specific numbers of objectives. Here, we used HV as a performance metric to judge the efficiency of the algorithms fairly. Further, the best mean values for each test problem are shown in bold, based on the HV results on the DTLZ1-DTLZ5 test problems. It is worth highlighting that MVA achieves the best performance compared with its peer competitors.
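For two objectives, the HV of a minimization front with respect to a reference point can be computed with a simple sweep. The sketch below is a generic illustration of the metric, not the specific normalization used in Table 9.

```python
def hypervolume_2d(front, ref):
    # Area dominated by `front` (a list of (f1, f2) minimization points)
    # and bounded by the reference point `ref`.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):            # sweep in increasing f1
        if f2 < prev_f2:                    # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4), the dominated region decomposes into three boxes of areas 3, 2, and 1, giving HV = 6.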

Table 9 HV Results of MVA and other algorithm over DTLZ1-DTLZ5

4.2 Large size practical problems

To show the efficiency of the algorithm on real-life problems, this section presents three kinds of practical problems: large-size real linear programming problems, transportation problems, and Internet of Vehicles problems. The proposed MVA is then applied to solve these real-life problems.

Some linear programming benchmarks can be found in the NetLib repository, such as agg (aggregate), qap8 (quadratic assignment problem 8), SC50A, and AFIRO. Table 10 confirms that MVA can solve large-size problems. Note that agg, qap8, SC50A, and AFIRO are linear programming test problems in the NETLIB Linear Programming test set, a collection of real-life linear programming examples.

Table 10 Results of MVA for more test problems

Finding a suitable feasible solution of a transportation problem is important in its own right, so MVA has been applied to some random transportation problems [19]. The obtained results are listed in Table 11.

Table 11 Comparison among MVA and other algorithms for large size problems

North-West and Vogel are two famous algorithms used for finding feasible solutions of transportation problems. The comparison with the Vogel algorithm, the better of the two, in Table 11 establishes the advantage of MVA.

Finally, MVA is applied to solve the route optimization problem in an IoV scenario as illustrated in [25]. Table 14 shows the higher efficiency obtained by deploying the proposed MVA as compared with the benchmark LCA algorithm (Tables 12 and 13).

Table 12 Optimization test functions
Table 13 Optimization test functions
Table 14 Comparison of LCA and MVA for internet of vehicles

For each problem, initial solutions have been generated randomly, and they are different for the LCA and MVA algorithms. Table 14 shows the improvement on their initial solutions after five iterations.

5 Comparison with other optimization algorithms

MVA is used to solve two different kinds of test functions: unimodal and multi-modal. Unimodal test functions have one global optimum, while multi-modal test functions have a global optimum as well as multiple local optima. For verification of the results, the proposed algorithm is compared with MVO [26], PSO [2], GA [1], and GWO [28]. Note that the number of agents is set to 24, the maximum number of iterations is 100, the epsilon value is $\epsilon = 0.1$, and each algorithm is run 20 times. The results in Tables 15 and 16 show that the proposed algorithm provides very competitive and efficient results on both the unimodal and multi-modal test functions. The low standard deviation of MVA is remarkable, indicating that the values tend to be close to the mean of the set of solutions.

Table 15 Comparison of MVA and existing metaheuristic methods
Table 16 Comparison of MVA and existing metaheuristic methods

6 Convergence behavior of MVA

In MVA, each solution of the population is exploded into the space, so the algorithm needs a large space to improve the obtained solutions. Therefore, for problems with a large feasible region, the algorithm improves the population very quickly and finds an appropriate solution. Thus, MVA is quite efficient in solving unconstrained and unbounded problems, and it also proposes suitable solutions for constrained problems with a large feasible region. However, MVA is not very efficient on problems with a small feasible region. In this case, better results can be found if MVA starts from infeasible solutions. For example, by changing the initial population in Example 5, a much better result is found, as shown in Table 17 and Fig. 10.

Fig. 10
figure 10

Example 6 by infeasible initial population

Table 17 Comparison of MVA and exact methods by changing initial population

Figure 10a shows an initial population with only one feasible solution. In Fig. 10b, only the feasible solution of the previous population is exploded (green point).

In this paper, we introduced generic optimization problems and solutions with the help of the developed MVA. However, when we consider a typical CPS, we need to take the domain-specific parameters into account while optimizing the overall performance. For instance, low latency is required in almost all CPS, such as transportation CPS and energy CPS, where information should be propagated in a fraction of a second (10 ms to 500 ms depending on the types of messages in the system) [34, 35]. Energy CPS does not have to deal with mobility much, since most energy assets are fixed; in transportation CPS, however, most of the CPS nodes are mobile [36, 37]. While optimizing, we need to consider mobility on top of other parameters such as delay and high throughput. Future research can therefore focus on generic optimization that can be fine-tuned to a domain-specific problem; for example, an optimization problem's mobility constraint can be relaxed when the speed or velocity of the node is zero. Further research could also focus on time-varying sampling patterns, sensor scheduling, real-time control, feedback scheduling, task and motion planning, and resource sharing for different CPS.

7 Conclusion

In this paper, we developed a novel meta-heuristic algorithm named MVA, which is inspired by the scientific theory of the multiverse. MVA is a simple optimizer that handles most kinds of optimization programming problems. The proposed algorithm is applicable to unconstrained and constrained problems with small and large feasible regions. In particular, several types of complex engineering problems, including problems in CPS, can be solved by our proposed MVA because of its fast convergence and low complexity. Extensive simulations have been carried out, and the numerical results show the feasibility of our proposed MVA. We observed that MVA outperforms existing well-known meta-heuristic algorithms, especially on large-size real problems.