Article

GRASP and Iterated Local Search-Based Cellular Processing algorithm for Precedence-Constraint Task List Scheduling on Heterogeneous Systems

1 Information Technology Engineering, Polytechnic University of Altamira, Altamira 89602, Mexico
2 Facultad de Ingeniería, Universidad Autónoma de Tamaulipas, Tampico 89339, Mexico
3 Graduate Program Division, Tecnológico Nacional de México/Instituto Tecnológico de Ciudad Madero, Cd. Madero 89440, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(21), 7500; https://doi.org/10.3390/app10217500
Submission received: 28 September 2020 / Revised: 18 October 2020 / Accepted: 19 October 2020 / Published: 25 October 2020

Abstract
High-Performance Computing systems rely on software that can be highly parallelized into individual computing tasks. However, even with a high level of parallelization, poor scheduling can lead to long runtimes; this scheduling is itself an NP-hard problem. We are therefore interested in a heuristic approach, particularly Cellular Processing Algorithms (CPAs), a novel metaheuristic framework for optimization. The framework is founded on exploring the search space with multiple Processing Cells that communicate to exploit promising regions, and on an individual stagnation detection mechanism within each Processing Cell. In this paper, we propose using a Greedy Randomized Adaptive Search Procedure (GRASP) to look for promising task execution orders; afterwards, a CPA formed with Iterated Local Search (ILS) Processing Cells performs the optimization. We assess our approach against a high-performance state-of-the-art ILS. Experimental results show that the CPA outperforms the previous ILS on real applications and synthetic instances.

1. Introduction

According to the website www.top500.org, the supercomputer Fugaku, developed by Fujitsu and the RIKEN Center for Computational Science in Japan, consists of 7,299,072 processing units, which provides outstanding parallel computing power. However, without accurate and efficient scheduling methods, parallel programs can be very computationally inefficient. In this paper, we approach the precedence-constraint task scheduling of parallel programs on systems formed by heterogeneous processing units, minimizing the final computing time [1]. Scheduling is a well-known NP-hard optimization problem [2], as is task scheduling for parallel systems [3]. Therefore, different scheduling approaches have been developed for this problem: heuristics [4,5,6,7], local searches [8,9,10], and metaheuristics [11,12,13,14,15]. Unfortunately, the works in the state-of-the-art differ widely in their objective definitions and use different sets of instances. Among the relevant works, we highlight [11]; to our knowledge, it is the only work that compares its results against the optimal values for its synthetic instances, which are fourteen easily reproducible scheduling problem instances. In [11], four high-performance Iterated Local Search (ILS) algorithms are proposed; the best ILS nearly reaches all the optimal values, obtaining an approximation factor of 1.018 (values near 1.0 are desirable). Thus, we assess our proposal against the best algorithm proposed in [11].
Scheduling precedence-constraint tasks on heterogeneous systems has been addressed in many variants: energy-aware [16], idle energy-aware [8,17], energy-constrained [18], communication delays and energy [19], budget-constrained [20], fault-tolerant [21], security-aware [22], volunteer computing systems [23], among others. A related problem is the job shop problem [24]; however, scheduling a parallel program involves a single job (the parallel program) without machines dedicated to a single task. On the other hand, any machine can compute any task, which resembles the flexible job shop problem [25], but with a single job. In Section 2, we detail the studied version of scheduling precedence-constraint tasks on heterogeneous systems.
In this paper, we use a recent metaheuristic optimization framework called Cellular Processing Algorithm (CPA) [26,27]. CPAs are more like a framework than a strict algorithm. The main idea is to cycle between exploring the search space with multiple limited-effort algorithms (Processing Cells) and sharing information (communication) among these Processing Cells. The limited-effort algorithms can explore the search space independently from one another and communicate their findings using shared memory or other combining methods. However, despite the communication, the Processing Cells should keep looking independently and not as a whole algorithm. In this way, we avoid the computational cost of converging the whole population (when the solutions reach the same search space area) or exploring the search space with a single solution algorithm (like Greedy Randomized Adaptive Search Procedure (GRASP) or Iterated Local Search).
As stated before, CPAs are flexible, e.g., a cellular processing algorithm can initialize each Processing Cell with different heuristics to ensure the splitting of the search space among the Processing Cells [28]. Additionally, the Processing Cells can be homogeneous or heterogeneous, which means that the Processing Cells can be multiple instances of the same algorithm [29] or completely different algorithms [30]. Furthermore, we can create a soft-heterogeneous CPA with differently-configured instances of the same algorithm. Thus, with all its flexibility, there is still much to research for CPAs and its applications.
The remainder of the paper is organized as follows. Section 2 details the studied precedence-constraint task scheduling problem on heterogeneous systems. In Section 3, we introduce the related and proposed algorithms for our experimentation. Section 4 contains the experimental setup, and parameter settings. Section 5 analyzes the experimental results according to the achieved median values using non-parametric statistical tests. Finally, Section 6 gives our conclusions and future work on scheduling precedence-constraint tasks on heterogeneous systems and the Cellular Processing Algorithms.

2. Problem Description

The High-Performance Computing (HPC) systems addressed in this paper consist of a set of heterogeneous machines M, completely interconnected. Every machine m_j ∈ M has different hardware characteristics, which yields different processing times for the same task. Without loss of generality, we make the following assumptions:
  • Every machine has a connection link with any other machine.
  • The communication links have no conflicts.
  • All the communication links operate at the same speed.
  • Every machine can send/receive information to/from another while executing a task.
  • The communication cost between tasks on the same machine is considered zero.

2.1. Instance of the Problem

An instance of the problem is made up of two parts: a Directed Acyclic Graph (DAG) and the computation cost of the tasks on every machine. We represent the set of tasks of a parallel computing program and their precedences as a DAG. Therefore, the parallel program is represented as the graph G = (T, C), where T is the set of tasks (vertices) and C is the set of communication costs between tasks (edges) (see Figure 1). The complete instance of the problem is formed by G and the computational cost P_{i,j} of each task t_i on every machine m_j (see Table 1).
A task t_i cannot start until all its predecessor tasks t_j ∈ T | (t_j, t_i) ∈ C finish their executions and communications (C_{j,i}). However, when a pair of tasks is scheduled on the same machine, the communication cost C_{j,i} between them is considered zero.

2.2. Objective Function

In this work, we follow the approach of list scheduling algorithms, a family of heuristics in which tasks are ordered according to a particular priority criterion. The task execution order is equivalent to a topological order of the DAG G = (T, C), so it does not violate the precedence constraints. Table 2 shows an example of a feasible order for the task graph from Figure 1.
Although a task execution order is not indispensable for scheduling, it simplifies the computation of the objective function [1], because it is not necessary to evaluate different combinations of task starting and finishing times to compute the minimum makespan [3,15,17]. However, this approach has the drawback that the optimal value may not be attainable under every task execution order. Algorithm 1 details the computation of the makespan objective function, using the computation times P_{i,j} from Table 1 and a time counter (Time_j) for each machine to keep track of the last executed task on each machine.
Algorithm 1 Makespan objective function
Input: G = (T, C), computational costs P_{t_i,j}, and an execution order of the tasks O = {o_1, …, o_|T|}.
Output: makespan
1:  Time_j ← 0, ∀ m_j ∈ M
2:  for x = 1 to |O| do
3:    t_current ← o_x
4:    j ← the index of the machine m_j assigned to t_current
5:    if ∄ u ∈ {1, …, |T|} such that (t_u, t_current) ∈ C then
6:      ts_current ← Time_j
7:      tf_current ← ts_current + P_{t_current,j}
8:      Time_j ← tf_current
9:    else
10:     t_{u*} ← argmax_{t_u | (t_u, t_current) ∈ C} (tf_u + C_{t_u,t_current})
11:     ts_current ← max(tf_{u*} + C_{t_{u*},t_current}, Time_j)
12:     tf_current ← ts_current + P_{t_current,j}
13:     Time_j ← tf_current
14:   end if
15: end for
16: return makespan ← max_{t_i ∈ T} tf_i
The makespan objective function uses the auxiliary variables ts_i (the starting time of task i), tf_i (the finishing time of task i), and C_{i,j}, the communication cost, which is zero if the tasks are executed on the same machine. The algorithm processes the tasks from the first to the last of the feasible execution order. Finally, the parallel program makespan (computation time) is the difference between the start of the first task and the end of the last task. The complexity of Algorithm 1 is O(|T| · |C|), although, in practice, it is remarkably lower, because not all the edges in G are connected to every node.
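To make Algorithm 1 concrete, the following Python sketch computes the makespan for a small hypothetical instance; the four-task DAG, communication costs, and computation costs below are illustrative assumptions, not the paper's Figure 1 and Table 1:

```python
# DAG as predecessor lists; C[(u, v)] holds the communication cost of edge (u, v).
preds = {0: [], 1: [0], 2: [0], 3: [1, 2]}
C = {(0, 1): 4, (0, 2): 2, (1, 3): 3, (2, 3): 1}
# P[t][m]: computation cost of task t on machine m (two machines).
P = {0: [3, 5], 1: [4, 2], 2: [6, 3], 3: [2, 4]}

def makespan(order, assign, n_machines=2):
    """Makespan of a feasible execution order under a machine assignment."""
    time = [0.0] * n_machines          # last finish time per machine (Time_j)
    tf = {}                            # finish time per task (tf_i)
    for t in order:
        m = assign[t]
        if not preds[t]:               # entry task: starts when its machine is free
            ts = time[m]
        else:                          # wait for the latest arriving predecessor
            ready = max(tf[u] + (0 if assign[u] == m else C[(u, t)])
                        for u in preds[t])
            ts = max(ready, time[m])
        tf[t] = ts + P[t][m]
        time[m] = tf[t]
    return max(tf.values())

print(makespan([0, 1, 2, 3], {0: 0, 1: 1, 2: 0, 3: 0}))  # → 14.0
```

Note how the communication cost is dropped when a predecessor runs on the same machine, exactly as the problem description states.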

3. Algorithms Descriptions

This section introduces the generic metaheuristic frameworks (Section 3.1 and Section 3.2), as well as a high-performance algorithm in the state-of-the-art (Section 3.3). Finally, our proposed algorithm is detailed in Section 3.4.

3.1. Iterated Local Search (ILS)

The ILS is a multi-start metaheuristic search based on local improvements (LocalSearch) and solution alterations (Perturbation) [31]; see Algorithm 2. The algorithm starts by initializing the current solution s with a random solution, which is also assigned to the best solution s_best (see lines 1 and 2). The main loop iterates over the solution s, applying a perturbation followed by a Local Search [32]; if the ILS detects a new best-known solution, then s_best is updated (see line 7). The above process continues until the stopping criterion is reached, usually a maximum Central Processing Unit (CPU) time or a fixed number of objective function evaluations.
Algorithm 2 Iterated Local Search.
Input: Problem to solve
Output: s_best
1:  s ← Random initial solution
2:  s_best ← s
3:  while Stopping criterion not reached do
4:    s ← Perturbation(s)
5:    s ← LocalSearch(s)
6:    if f(s) < f(s_best) then
7:      s_best ← s
8:    end if
9:  end while
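The ILS skeleton above can be sketched in Python. The toy bit-vector problem and the helper functions below are hypothetical stand-ins for a real problem definition, kept deliberately simple so the perturb/improve/keep-best cycle is visible:

```python
import random

def iterated_local_search(f, random_solution, perturb, local_search, max_iters=100):
    """Generic ILS skeleton (Algorithm 2): perturb the current solution,
    improve it with a local search, and keep the best solution found."""
    s = random_solution()
    s_best = s
    for _ in range(max_iters):           # stopping criterion: iteration budget
        s = local_search(perturb(s))
        if f(s) < f(s_best):
            s_best = s
    return s_best

# Toy usage: minimize the number of ones in a bit vector.
random.seed(1)
rand = lambda: [random.randint(0, 1) for _ in range(8)]
perturb = lambda s: [b ^ 1 if random.random() < 0.2 else b for b in s]
def local_search(s):
    # every 1 -> 0 flip improves the sum, so the local optimum is all zeros
    return [0 for _ in s]

best = iterated_local_search(sum, rand, perturb, local_search, max_iters=20)
print(best)  # → [0, 0, 0, 0, 0, 0, 0, 0]
```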

3.2. Greedy Randomized Adaptive Search Procedure (GRASP)

GRASP is a multi-start metaheuristic algorithm that builds a solution by selecting one promising fragment of the solution at a time [33]; see Algorithm 3. The inner loop of the GRASP, in line 5, builds a Solution by adding random individual elements from a Restricted Candidate List (RCL) (see line 11). To build the RCL, the algorithm evaluates the incremental cost of the partial objective and stores its maximum and minimum values (see lines 7 and 8) to set a limit over the original candidate list (CL) (see line 9), thus creating the RCL (see line 10); this process occurs at every step of the construction.
Algorithm 3 Greedy Randomized Adaptive Search Procedure.
Input: Problem to solve
Output: s_best
1:  s_best ← Random initial solution
2:  while Stopping criterion not reached do
3:    Solution ← ∅ ▹ An empty initial solution
4:    i = 0
5:    while Solution is not complete do
6:      CL ← SelectFeasibleElements()
7:      f_max ← max(PartialObjectiveEvaluation(t_i) ∀ t_i ∈ CL)
8:      f_min ← min(PartialObjectiveEvaluation(t_i) ∀ t_i ∈ CL)
9:      limit ← f_min + α(f_max − f_min)
10:     RCL ← BuildRCL(CL, limit)
11:     Solution_i ← t_r | t_r ∈ RCL ▹ Add to Solution a random element from the RCL
12:     i = i + 1
13:   end while
14:   Solution ← LocalSearch(Solution) ▹ The Local Search procedure is optional
15:   if f(Solution) < f(s_best) then
16:     s_best ← Solution
17:   end if
18: end while
The RCL only includes candidate tasks whose incremental costs are bounded by f_min + α(f_max − f_min) in line 9, where f_max and f_min are the maximum and minimum incremental costs of the objective function over all the candidate elements t_i ∈ CL, calculated with a modification of Algorithm 1 named PartialObjectiveEvaluation that evaluates up to the last element in the partial solution. The parameter α ∈ [0, 1] defines the greediness of the algorithm: α = 0 produces a completely greedy search, and α = 1 a completely random search. The candidate list (CL) must be created or updated at every iteration of the inner loop (see line 6). Once the Solution is constructed, an optional LocalSearch procedure can be used to improve it (see line 14). Furthermore, s_best is updated every time a complete Solution outperforms its objective value (see line 16). Finally, the outer loop in line 2 restarts the Solution and iterates until the algorithm reaches the stopping criterion.
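The RCL-based construction can be sketched as follows. The incremental-cost function and the toy candidate set are illustrative assumptions, not the paper's PartialObjectiveEvaluation; the point is the threshold f_min + α(f_max − f_min) and the random pick inside the RCL:

```python
import random

def grasp_construction(candidates, incremental_cost, alpha, rng=random):
    """One GRASP construction (inner loop of Algorithm 3): build the RCL from
    the candidates whose incremental cost is within the alpha threshold, then
    append one RCL member chosen at random."""
    solution = []
    remaining = list(candidates)
    while remaining:
        costs = {c: incremental_cost(solution, c) for c in remaining}
        f_min, f_max = min(costs.values()), max(costs.values())
        limit = f_min + alpha * (f_max - f_min)
        rcl = [c for c in remaining if costs[c] <= limit]
        choice = rng.choice(rcl)
        solution.append(choice)
        remaining.remove(choice)
    return solution

# Toy usage: the cost of an element is just its value, so alpha = 0
# (fully greedy) sorts the elements in ascending order.
greedy = grasp_construction([5, 2, 9, 1], lambda sol, c: c, alpha=0.0)
print(greedy)  # → [1, 2, 5, 9]
```

With alpha = 1.0 every candidate enters the RCL, so the construction degenerates into a random permutation, matching the "completely random search" extreme described above.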

3.3. State-of-the-Art (Earliest Finish Time) EFT-ILS

In [11], the authors introduce an ILS, which in this paper will be called Earliest Finish Time (EFT)-ILS; see Algorithm 4. EFT-ILS consists of two phases; the first explores random feasible execution orders of the task graph, from lines 3 to 10. After each new ordering o, the algorithm assigns to the tasks the machines that produce their Earliest Finish Time (EFT) (see line 5 and Algorithm 5). For computationally heavy instances, we suggest using an external stopping criterion such as CPU time; see line 10. The second phase initializes the ILS described in Section 3.1, using the best order o_best and solution s_best found by the first phase as the initial solution. The next subsection details the Local Search and perturbation processes.
Algorithm 4 Earliest Finish Time-Iterated Local Search.
Input: G = (T, C), computational costs P_{i,j}.
Output: s_best, o_best
1:  s ← Random initial solution
2:  s_best ← s
3:  for i = 1 to MaxIterations do
4:    o ← Random topological order of the DAG G
5:    s ← EFT(o)
6:    if f(s) < f(s_best) then
7:      s_best ← s
8:      o_best ← o
9:    end if
10: end for ▹ If the external stopping criterion is reached, stop the for loop
11: ILS(s_best, o_best)
Algorithm 5 Earliest Finish Time Function
Input: An execution order of the tasks O = {o_1, …, o_|T|}
Output: An assignment of machines to tasks
1:  for i = 1 to |T| do
2:    Assign to the task o_i the machine m_j that produces its minimum finish time.
3:  end for
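A minimal sketch of the EFT assignment (Algorithm 5): for each task in the given order, try every machine and keep the one yielding the earliest finish time. The tiny two-task, two-machine chain instance below is a hypothetical example, chosen so that communication cost makes EFT keep both tasks on the same machine even though the second machine computes the second task faster:

```python
preds = {0: [], 1: [0]}
C = {(0, 1): 10}                 # communication cost if on different machines
P = {0: [2, 9], 1: [8, 3]}       # P[t][m]: cost of task t on machine m

def eft_assign(order, n_machines=2):
    """Greedy EFT assignment: each task gets the machine that finishes it first."""
    time = [0.0] * n_machines    # last finish time per machine
    tf, assign = {}, {}
    for t in order:
        best = None
        for m in range(n_machines):
            ready = max((tf[u] + (0 if assign.get(u) == m else C[(u, t)])
                         for u in preds[t]), default=0.0)
            finish = max(ready, time[m]) + P[t][m]
            if best is None or finish < best[0]:
                best = (finish, m)
        finish, m = best
        assign[t], tf[t] = m, finish
        time[m] = finish
    return assign, max(tf.values())

assign, mk = eft_assign([0, 1])
print(assign, mk)  # → {0: 0, 1: 0} 10.0
```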

3.3.1. EFT-ILS Local Search

The Local Search (LS) in EFT-ILS is based on the first-improvement pivoting rule; see Algorithm 6. The algorithm evaluates the tasks in the execution order O (see line 2) and generates neighbors s′ by assigning machines m_j ∈ M to the current task t_current; if a neighbor improves the solution s, then s is updated (see line 7) and, as a consequence, the search is reinitialized in line 8. Finally, the algorithm verifies an auxiliary external stopping criterion before continuing with the neighbor generation process, to avoid exceeding the maximum CPU time or number of objective function evaluations.
Algorithm 6 EFT-ILS Local Search procedure.
Input: Solution to improve s, and an execution order of the tasks O = {o_1, …, o_|T|}
Output: s
1:  for i = 1 to |T| do
2:    t_current ← o_i ▹ Assigns the task o_i in the execution order as the current task
3:    for j = 1 to |M| do
4:      s′ ← s
5:      Assign t_current in s′ to the machine m_j
6:      if f(s′) < f(s) then
7:        s ← s′
8:        i = 1, j = 1
9:      end if
10:   end for ▹ If the external stopping criterion is reached, stop the Local Search
11: end for
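The first-improvement rule of Algorithm 6 can be sketched abstractly over any objective f. The restart-on-improvement behavior (resetting i and j to 1) is modeled by breaking out of both loops and rescanning; the toy objective below is a hypothetical stand-in for the makespan:

```python
def local_search(assign, n_machines, f):
    """First-improvement Local Search sketch: try every (task, machine)
    reassignment; on any improvement, accept it and restart the scan."""
    assign = list(assign)
    improved = True
    while improved:                      # models the i = 1, j = 1 restart
        improved = False
        for i in range(len(assign)):
            for m in range(n_machines):
                if m == assign[i]:
                    continue
                neighbor = assign[:]     # neighbor s': move task i to machine m
                neighbor[i] = m
                if f(neighbor) < f(assign):
                    assign = neighbor
                    improved = True
                    break
            if improved:
                break
    return assign

# Toy usage: f counts tasks not on machine 0, so the optimum is all zeros.
result = local_search([1, 2, 1], n_machines=3, f=lambda a: sum(x != 0 for x in a))
print(result)  # → [0, 0, 0]
```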

3.3.2. EFT-ILS Perturbation

EFT-ILS uses a probability-based perturbation process. Every task t_i of the solution has a probability of being moved from its current machine; when this happens, the task t_i is reassigned to a new random machine. For our experiments, we use a probability of 5%, which is the best probability reported in [11].
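A sketch of this perturbation, assuming the solution is a list mapping each task to a machine index (an assumption for illustration; [11] does not prescribe this representation):

```python
import random

def perturb(assign, n_machines, p=0.05, rng=random):
    """Each task moves to a different random machine with probability p
    (5% as in the experiments described above)."""
    out = []
    for m in assign:
        if rng.random() < p:
            # pick among the other machines so the task really moves
            out.append(rng.choice([k for k in range(n_machines) if k != m]))
        else:
            out.append(m)
    return out

random.seed(0)
moved = perturb([0] * 1000, n_machines=4, p=0.05)
print(sum(1 for a, b in zip([0] * 1000, moved) if a != b))
```

Over 1000 tasks, around 5% (about 50 tasks) change machine on average, which keeps the perturbation mild enough for the following Local Search to repair.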

3.4. Proposed GRASP-Cellular Processing Algorithm (GRASP-CPA)

In a similar manner to EFT-ILS, our proposed algorithm GRASP-CPA consists of two phases (see Figure 2). First, a GRASP explores feasible task orders for the next phase of the algorithm. In the second phase, the algorithm uses the best order o_best and solution s_best found by the GRASP in a homogeneous Cellular Processing Algorithm (CPA). The CPA is composed of three ILS Processing Cells (PCells); the PCells have two functions that are independent of their ILS procedure. The first is to update the global best solution s_best if the PCell finds a better solution. The second is to update their current solutions through the communication process. The communication is performed using the well-known single-point crossover from Genetic Algorithms (GAs), where two solutions from different PCells split and combine their information [12,34]. This phase continues until a fixed number of iterations or CPU seconds is reached.
Algorithm 7 describes our GRASP-CPA proposal, where GRASPConstruction produces feasible orderings o that are evaluated using EFT to produce the solution s (see lines 5 and 6). The GRASPConstruction receives an α value that is either 0.9 or 1.0 with the same probability; this α value is used to restrict the candidate list (see lines 9 and 10 of Algorithm 3). GRASP algorithms usually use α values between 0.1 and 0.3; however, preliminary experimentation showed that our proposed α values were the ones with the best performance for the instances used. Finally, line 12 executes the cellular processing section of the algorithm.
Algorithm 7 GRASP-CPA
Input: G = (T, C), computational costs P_{i,j}.
Output: s_best, o_best
1:  s ← Random initial solution
2:  s_best ← s
3:  for i = 1 to MaxIterations do
4:    α ← randomFrom(0.9, 1.0)
5:    o ← GRASPConstruction(α)
6:    s ← EFT(o)
7:    if f(s) < f(s_best) then
8:      s_best ← s
9:      o_best ← o
10:   end if
11: end for ▹ If the external stopping criterion is reached, stop the for loop
12: CPA(s_best, o_best)
Algorithm 8 Cellular Processing Algorithm section of GRASP-CPA
Input: s_best, o_best
Output: s_best
1:  PCell_1.s_current ← s_best
2:  PCell_2.s_current ← s_best
3:  PCell_3.s_current ← s_best
4:  while stopping criterion not reached do
5:    PCell_1 ← ILS(PCell_1.s_current)
6:    PCell_2 ← ILS(PCell_2.s_current)
7:    PCell_3 ← ILS(PCell_3.s_current)
8:    Communication(PCell_1.s_current, PCell_2.s_current, PCell_3.s_current)
9:  end while
Algorithm 8 shows the general idea of the CPA(s_best, o_best) function. The ILS Processing Cells (see lines 5 to 7) each iterate five times, which limits the inner computational effort of the Processing Cells. After the Processing Cells' execution, the Communication processes the current solutions, recombining the s_best of PCell1 with the s_best of PCell2; the first offspring becomes the new current solution of PCell1. The second offspring is then used in a second recombination with the s_best of PCell3, and the resulting offspring of this second recombination become the new current solutions of PCell2 and PCell3. This process continues until the stopping criterion is reached (see line 4).
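The chained communication step can be sketched as follows, assuming solutions are encoded as lists (an illustrative assumption). The labeled vectors make it easy to see which parent contributed each segment:

```python
import random

def communicate(s1, s2, s3, rng=random):
    """CPA communication sketch: single-point crossover between PCell
    solutions, chained as described (first offspring to PCell1, the second
    offspring recombined with PCell3's solution for PCell2 and PCell3)."""
    def crossover(a, b):
        cut = rng.randrange(1, len(a))          # single cut point
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    child1, child2 = crossover(s1, s2)          # PCell1 x PCell2
    child3, child4 = crossover(child2, s3)      # second offspring x PCell3
    return child1, child3, child4               # new current solutions 1, 2, 3

random.seed(3)
n1, n2, n3 = communicate([0] * 6, [1] * 6, [2] * 6)
print(n1, n2, n3)
```

With the labeled parents (all 0s, 1s, and 2s), the first new solution mixes PCell1 and PCell2 genes only, while the other two inherit segments from PCell3 as well, which is how information spreads across all three cells each cycle.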

4. Experimental Setup

This section describes the experimental set of instances, the experimental configuration, and the statistical indicators of confidence in the results.

4.1. Parallel Application Instances

The applications used in the experimentation are:
  • Double precision floating point FORTRAN benchmark (Fpppp) [35].
  • The Laser Interferometer Gravitational-Wave Observatory (LIGO) application [36].
  • Robot control application (Robot).
  • Sparse matrix solver (Sparse).
  • A benchmark of fourteen small synthetic instances from [11].
The applications Fpppp, Robot, and Sparse are included in the Standard Task Graph Set (STG) [37]. We applied the same treatment to the original application instances as in [8], considering Communication-to-Computation Ratios CCRs = {0.1, 0.5, 1, 5, 10}, Heterogeneity Factors HFs = {0.1, 0.25, 0.5, 0.75, 1}, and numbers of machines |M| = {8, 16, 32, 64}. The combination of the mentioned configurations for the four parallel applications gives a total of 4 · 5 · 5 · 4 = 400 instances. The nomenclature for the large instance set of Fpppp, Robot, and Sparse instances used in this work is Application-Machines-Tasks-HF-CCR. For the small benchmark in [11], the nomenclature is Name-Machines-Tasks. The complete instance set is available at [38].

4.2. Experimental Settings

In the case of EFT-ILS, we use the best configuration from [11] (see Table 3). The GRASP-CPA uses a few extra parameters. One of them is the α value of the GRASP algorithm, for which we carried out extensive experimentation with α = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}, finding that the best values for these instances were 1.0 and 0.9, without a clear dominance. Thus, we decided to use 1.0 and 0.9 as α values, chosen randomly with equal probability before the GRASPConstruction (see lines 4 and 5 of Algorithm 7). We use the same Local Search, perturbation process, and perturbation probability p_m in both algorithms for a fair comparison. Finally, to ensure the CPA communication process, we set the probability of recombination p_r to 100%. In both algorithms, the first phase has two stopping criteria, 50 iterations or a maximum of 5 min, while the second phase uses a stopping criterion of 100,000 objective function evaluations. Every instance from Section 4.1 is run 100 independent times. The complete experimental parameter settings are shown in Table 3. The algorithm implementations are available at [39,40].

4.3. Statistical Indicators

To assess statistical confidence in our experimentation, we compute the median value of the independent runs as well as the interquartile range (IQR). The tables presenting our results follow the format MEDIAN_IQR (the median with the IQR as a subscript). The tables also emphasize the best and second-best reported values for every problem with a gray and a light background, respectively. For the sake of completeness, we apply the non-parametric Wilcoxon signed-ranks test to the results to assess statistical differences in a pairwise comparison for every problem, at a 95% confidence level [41]. The symbol ▲ indicates that EFT-ILS was statistically worse than GRASP-CPA according to the Wilcoxon signed-ranks test; we use ▽ otherwise. Finally, we mark with '–' the cases where there were no statistical differences.
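The MEDIAN/IQR indicator can be computed with the Python standard library; the eight run values below are made-up numbers for illustration, not experimental results:

```python
import statistics

def median_iqr(runs):
    """Median and interquartile range (Q3 - Q1) of a set of independent runs,
    using the default exclusive quartile method of statistics.quantiles."""
    q1, med, q3 = statistics.quantiles(runs, n=4)
    return med, q3 - q1

runs = [10.0, 12.0, 11.0, 14.0, 13.0, 12.0, 15.0, 11.0]
med, iqr = median_iqr(runs)
print(med, iqr)  # → 12.0 2.75
```

A small IQR next to the median (as in the GRASP-CPA entries with IQR 0.0 discussed in Section 5) indicates that the runs are tightly concentrated around the reported value.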

5. Results

First, we analyze the results on the large benchmark set of 400 scheduling problems. Focusing on the Fpppp instance set (see Table 4), GRASP-CPA outperformed EFT-ILS with statistical significance in 53 instances and was not statistically outperformed in any instance with 8 or 16 machines. However, for the instances with 32 and 64 machines, EFT-ILS outperformed GRASP-CPA in 3 and 11 instances, respectively.
Regarding the LIGO benchmark results from Table 5, GRASP-CPA outperformed EFT-ILS in 45 instances. EFT-ILS only outperformed GRASP-CPA in 13 instances, distributed as follows: two, three, five, and three for the instances with 8, 16, 32, and 64 machines, respectively.
For the Robot benchmark (see Table 6), GRASP-CPA outperformed EFT-ILS with statistical significance in 48 cases, while EFT-ILS only outperformed GRASP-CPA in 12 instances, where most of them occurred for the instances with 16 machines.
The results for the Sparse benchmark from Table 7 show that GRASP-CPA outperformed EFT-ILS with statistical significance in 33 instances, while EFT-ILS outperformed GRASP-CPA in 10 instances.
Overall, for these instance sets, GRASP-CPA achieved 265% more best median values, with statistical significance, than EFT-ILS. Therefore, we consider that GRASP-CPA is superior to EFT-ILS in a relevant proportion of the studied cases.
Furthermore, we analyze the results on the small benchmark of 14 synthetic instances in Table 8. For the 14 synthetic problems, GRASP-CPA achieves the best median value in all cases, with statistical significance in ten of them. In addition, Table 8 shows the average computing time of the enumerative optimal algorithm (Time_Enum), EFT-ILS (Time_EFT-ILS), and GRASP-CPA (Time_GRASP-CPA) in CPU seconds on a MacBook Pro 13-inch (late 2011). Table 9 presents the best solutions found by EFT-ILS and GRASP-CPA: EFT-ILS reaches twelve optimal values, while GRASP-CPA reaches thirteen. Additionally, GRASP-CPA achieves an IQR of 0.0 in six cases, where the algorithm reached the optimal value at the median.

6. Conclusions and Future Work

This paper proposes a new Cellular Processing Algorithm that uses a GRASP construction, called GRASP-CPA, for scheduling precedence-constraint tasks on heterogeneous systems. Experimental results showed that GRASP-CPA outperformed EFT-ILS, a high-performance algorithm from the state-of-the-art, regarding optimal and median values, with statistical significance for the proposed set of instances. Two main features of GRASP-CPA contribute to its performance. The first is the generation of task execution orders using a GRASP algorithm, in contrast to the completely random order generation in EFT-ILS. The second is the communication between different Processing Cells, which helps explore the search space. We encourage researchers to apply the Cellular Processing Algorithm approach to their problems. This approach is more of a framework than a strict algorithm, allowing flexible implementations with homogeneous or heterogeneous Processing Cells. Despite being a novel approach, it has proven effective for several problems and still has many open research areas. As future work, we would like to research other methods to produce task execution orderings, to improve the results yielded by GRASP-CPA.

Author Contributions

Conceptualization, formal analysis, investigation, methodology, resources, software, supervision, validation, writing—original draft, writing—review and editing, A.S.; data curation, project administration, software, validation, writing—original draft, and writing—review and editing, J.D.T.-V.; funding acquisition, project administration, visualization, S.I.M.; writing—review and editing, J.A.C.R., J.L.M., M.G.T.B., and M.P.-F. All authors have read and agreed to the published version of the manuscript.

Funding

A. Santiago would like to thank the CONACyT Mexico SNI for the salary award under the record 83525. The APC was funded by the Universidad Autónoma de Tamaulipas, grant PROFEXCE 2020.

Acknowledgments

We would like to thank Johnatan E. Pecero for helping with the scheduling problem instances for the experimental comparison.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Soto, C.; Santiago, A.; Fraire, H.; Dorronsoro, B. Optimal Scheduling for Precedence-Constrained Applications on Heterogeneous Machines. Int. Conf. Ser. Multidiscipl. Sci. 2018. [Google Scholar] [CrossRef]
  2. Ullman, J. NP-complete scheduling problems. J. Comput. Syst. Sci. 1975, 10, 384–393. [Google Scholar] [CrossRef] [Green Version]
  3. Sinnen, O. Task Scheduling for Parallel Systems; John Wiley & Sons: Hoboken, NJ, USA, 2007; Volume 60. [Google Scholar]
  4. Topcuoglu, H.; Hariri, S.; Wu, M.-Y. Performance-effective and low-complexity task scheduling for heterogeneous computing. IEEE Trans. Parallel Distrib. Syst. 2002, 13, 260–274. [Google Scholar] [CrossRef] [Green Version]
  5. Diaz, C.O.; Guzek, M.; Pecero, J.E.; Danoy, G.; Bouvry, P.; Khan, S.U. Energy-aware fast scheduling heuristics in heterogeneous computing systems. In Proceedings of the 2011 International Conference on High Performance Computing Simulation, Istanbul, Turkey, 4–8 July 2011; pp. 478–484. [Google Scholar]
  6. Diaz, C.O.; Pecero, J.E.; Bouvry, P. Scalable, low complexity, and fast greedy scheduling heuristics for highly heterogeneous distributed computing systems. J. Supercomput. 2014, 67, 837–853. [Google Scholar] [CrossRef] [Green Version]
  7. Lee, Y.C.; Zomaya, A.Y. Minimizing Energy Consumption for Precedence-Constrained Applications Using Dynamic Voltage Scaling. In Proceedings of the 2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, Shanghai, China, 18–21 May 2009; pp. 92–99. [Google Scholar]
  8. Pecero, J.E.; Huacuja, H.J.F.; Bouvry, P.; Pineda, A.A.S.; Locés, M.C.L.; Barbosa, J.J.G. On the energy optimization for precedence constrained applications using local search algorithms. In Proceedings of the 2012 International Conference on High Performance Computing Simulation (HPCS), Madrid, Spain, 2–6 July 2012; pp. 133–139. [Google Scholar] [CrossRef] [Green Version]
  9. Pineda, A.A.S. Estrategias de Búsqueda local Para el Problema de Progamación de Tareas en Sistemas de Procesamiento Paralelo. Master’s Thesis, Instituto Tecnológico de Ciudad Madero, Cd Madero, Mexico, 2013. [Google Scholar]
  10. Nesmachnow, S.; Luna, F.; Alba, E. An Efficient Stochastic Local Search for Heterogeneous Computing Scheduling. In Proceedings of the 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops PhD Forum, Shanghai, China, 21–25 May 2012; pp. 593–600. [Google Scholar] [CrossRef]
  11. Pineda, A.A.S.; Pecero, J.; Huacuja, H.; Barbosa, J.; Bouvry, P. An iterative local search algorithm for scheduling precedence-constrained applications on heterogeneous machines. In Proceedings of the 6th Multidisciplinary International Conference on Scheduling: Theory and Applications (MISTA 2013), Ghent, Belgium, 27–29 August 2013; pp. 472–485. [Google Scholar]
12. Huacuja, H.J.F.; Santiago, A.; Pecero, J.E.; Dorronsoro, B.; Bouvry, P.; Monterrubio, J.C.S.; Barbosa, J.J.G.; Santillan, C.G. A Comparison Between Memetic Algorithm and Seeded Genetic Algorithm for Multi-Objective Independent Task Scheduling on Heterogeneous Machines. In Design of Intelligent Systems Based on Fuzzy Logic, Neural Networks and Nature-Inspired Optimization; Melin, P., Castillo, O., Kacprzyk, J., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 377–389. [Google Scholar] [CrossRef]
  13. Pecero, J.E.; Bouvry, P.; Huacuja, H.J.F.; Villanueva, J.D.T.; Zuñiga, M.A.R.; Santillán, C.G.G. Task Scheduling in Heterogeneous Computing Systems Using a MicroGA. In Proceedings of the 2013 Eighth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, Compiegne, France, 28–30 October 2013; pp. 618–623. [Google Scholar]
14. Flórez, E.; Barrios, C.J.; Pecero, J.E. Methods for Job Scheduling on Computational Grids: Review and Comparison. In High Performance Computing; Osthoff, C., Navaux, P.O.A., Barrios Hernandez, C.J., Silva Dias, P.L., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 19–33. [Google Scholar]
  15. Pecero, J.E.; Bouvry, P.; Huacuja, H.J.F.; Khan, S.U. A Multi-objective GRASP Algorithm for Joint Optimization of Energy Consumption and Schedule Length of Precedence-Constrained Applications. In Proceedings of the 2011 IEEE Ninth International Conference on Dependable, Autonomic and Secure Computing, Sydney, Australia, 12–14 December 2011; pp. 510–517. [Google Scholar]
  16. Huang, Q.; Su, S.; Li, J.; Xu, P.; Shuang, K.; Huang, X. Enhanced Energy-Efficient Scheduling for Parallel Applications in Cloud. In Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (ccgrid 2012), Ottawa, ON, Canada, 13–16 May 2012; pp. 781–786. [Google Scholar]
  17. Lee, Y.C.; Zomaya, A.Y. Energy Conscious Scheduling for Distributed Computing Systems under Different Operating Conditions. IEEE Trans. Parallel Distrib. Syst. 2011, 22, 1374–1381. [Google Scholar] [CrossRef]
  18. Xiao, X.; Xie, G.; Xu, C.; Fan, C.; Li, R.; Li, K. Maximizing reliability of energy constrained parallel applications on heterogeneous distributed systems. J. Comput. Sci. 2018, 26, 344–353. [Google Scholar] [CrossRef]
  19. Ait Aba, M.; Zaourar, L.; Munier, A. Efficient Algorithm for Scheduling Parallel Applications on Hybrid Multicore Machines with Communications Delays and Energy Constraint. Concurr. Comput. Pract. Exp. 2020, 32, e5573. [Google Scholar] [CrossRef]
  20. Chen, W.; Xie, G.; Li, R.; Bai, Y.; Fan, C.; Li, K. Efficient task scheduling for budget constrained parallel applications on heterogeneous cloud computing systems. Future Gener. Comput. Syst. 2017, 74, 1–11. [Google Scholar] [CrossRef]
  21. Xie, G.; Chen, Y.; Xiao, X.; Xu, C.; Li, R.; Li, K. Energy-Efficient Fault-Tolerant Scheduling of Reliable Parallel Applications on Heterogeneous Distributed Embedded Systems. IEEE Trans. Sustain. Comput. 2018, 3, 167–181. [Google Scholar] [CrossRef]
  22. Xiaoyong, T.; Li, K.; Zeng, Z.; Veeravalli, B. A Novel Security-Driven Scheduling Algorithm for Precedence-Constrained Tasks in Heterogeneous Distributed Systems. IEEE Trans. Comput. 2011, 60, 1017–1029. [Google Scholar] [CrossRef]
  23. Lee, Y.C.; Zomaya, A.Y.; Siegel, H.J. Robust task scheduling for volunteer computing systems. J. Supercomput. 2010, 53, 163–181. [Google Scholar] [CrossRef]
  24. Applegate, D.; Cook, W. A Computational Study of the Job-Shop Scheduling Problem. ORSA J. Comput. 1991, 3, 149–156. [Google Scholar] [CrossRef]
  25. Soto, C.; Dorronsoro, B.; Fraire, H.; Cruz-Reyes, L.; Gomez-Santillan, C.; Rangel, N. Solving the multi-objective flexible job shop scheduling problem with a novel parallel branch and bound algorithm. Swarm Evol. Comput. 2020, 53, 100632. [Google Scholar] [CrossRef]
  26. Terán-Villanueva, J.D.; Fraire-Huacuja, H.J.; Carpio-Valadez, J.M.; Pazos R., R.A.; Puga-Soberanes, H.J.; Martínez-Flores, J.A. Experimental study of a new algorithm-design-framework based on cellular computing. In Studies in Computational Intelligence; Castillo, O., Melin, P., Kacprzyk, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 451, pp. 517–532. [Google Scholar] [CrossRef]
  27. Terán-Villanueva, J.D.; Fraire-Huacuja, H.J.; Carpio Valadez, J.M.; Pazos R., R.A.; Puga-Soberanes, H.J.; Martínez-Flores, J.A. Cellular processing algorithms. In Studies in Fuzziness and Soft Computing; Melin, P., Castillo, O., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 294, pp. 53–74. [Google Scholar] [CrossRef]
  28. Terán-Villanueva, J.D.; Fraire-Huacuja, H.J.; Ibarra Martínez, S.; Cruz-Reyes, L.; Castán Rocha, J.A.; Gómez Santillán, C.; Menchaca, J.L. Cellular processing algorithm for the vertex bisection problem: Detailed analysis and new component design. Inf. Sci. 2019, 478, 62–82. [Google Scholar] [CrossRef]
29. Terán-Villanueva, J.D.; Fraire-Huacuja, H.J.; Carpio-Valadez, J.M.; Pazos R., R.A.; Puga-Soberanes, H.J.; Martínez-Flores, J.A. Cellular Processing Scatter Search for Minimizing Power Consumption on Wireless Communications Systems. In Proceedings of the 2012 International Conference on High Performance Computing Simulation (HPCS), Madrid, Spain, 2–6 July 2012; pp. 126–132. [Google Scholar]
  30. Terán-Villanueva, J.D.; Fraire Huacuja, H.J.; Carpio Valadez, J.M.; Pazos Rangel, R.; Puga Soberanes, H.J.; Martínez Flores, J.A. A heterogeneous cellular processing algorithm for minimizing the power consumption in wireless communications systems. Comput. Optim. Appl. 2015, 62, 787–814. [Google Scholar] [CrossRef]
  31. Lourenço, H.R.; Martin, O.C.; Stützle, T. Iterated Local Search: Framework and Applications. In Handbook of Metaheuristics; Gendreau, M., Potvin, J.Y., Eds.; Springer: Boston, MA, USA, 2010; pp. 363–397. [Google Scholar] [CrossRef]
  32. Hoos, H.H.; Stützle, T. Stochastic Local Search: Foundations and Applications; Elsevier: Amsterdam, The Netherlands, 2004. [Google Scholar]
  33. Resende, M.G.; Ribeiro, C.C. Greedy Randomized Adaptive Search Procedures: Advances, Hybridizations, and Applications. In Handbook of Metaheuristics; Gendreau, M., Potvin, J.Y., Eds.; Springer: Boston, MA, USA, 2010; pp. 283–319. [Google Scholar] [CrossRef]
  34. Monterrubio, J.C.S.; Huacuja, H.J.F.; Alejandro, A.; Pineda, S. Comparativa de tres Cruzas y Cuatro Mutaciones Para el Problema de Asignación de Tareas en Sistemas de Cómputo Heterogéneo; Instituto Tecnológico de Ciudad Madero: Ciudad Madero, Mexico, 2015. [Google Scholar] [CrossRef]
  35. Saavedra, R.H.; Smith, A.J. Analysis of benchmark characteristics and benchmark performance prediction. ACM Trans. Comput. Syst. (TOCS) 1996, 14, 344–384. [Google Scholar] [CrossRef] [Green Version]
  36. Brown, D.A.; Brady, P.R.; Dietz, A.; Cao, J.; Johnson, B.; McNabb, J. A Case Study on the Use of Workflow Technologies for Scientific Analysis: Gravitational Wave Data Analysis. In Workflows for e-Science: Scientific Workflows for Grids; Springer: London, UK, 2007; Chapter 4; pp. 39–59. [Google Scholar] [CrossRef]
  37. Tobita, T.; Kasahara, H. A standard task graph set for fair evaluation of multiprocessor scheduling algorithms. J. Sched. 2002, 5, 379–394. [Google Scholar] [CrossRef]
  38. Santiago, A.; Terán-Villanueva, J.D.; Martínez, S.I.; Rocha, J.A.C.; Menchaca, J.L.; Borreones, M.G.T.; Ponce-Flores, M. Instance Set for: GRASP and Iterated Local Search Based Cellular Processing Algorithm for Precedence-Constraint Task List Scheduling on Heterogeneous Systems. Available online: https://github.com/AASantiago/SchedulingInstances (accessed on 27 August 2020).
  39. Santiago, A.; Terán-Villanueva, J.D.; Martínez, S.I.; Rocha, J.A.C.; Menchaca, J.L.; Borreones, M.G.T.; Ponce-Flores, M. Source Code for: GRASP and Iterated Local Search Based Cellular Processing Algorithm for Precedence-Constraint Task List Scheduling on Heterogeneous Systems. 2020. Available online: https://github.com/AASantiago/GRASP-CPA (accessed on 27 August 2020).
  40. Pineda, A.A.S.; Pecero, J.E.; Huacuja, H.J.F.; Barbosa, J.J.G.; Bouvry, P. Source Code for: An Iterative Local Search Algorithm for Scheduling Precedence-Constrained Applications on Heterogeneous Machines. 2020. Available online: https://github.com/AASantiago/EFT-ILS (accessed on 27 August 2020).
  41. Corder, G.W.; Foreman, D.I. Nonparametric Statistics for Non-Statisticians; Wiley: Hoboken, NJ, USA, 2011. [Google Scholar]
  42. Ahmad, I.; Dhodhi, M.K.; Ul-Mustafa, R. DPS: Dynamic priority scheduling heuristic for heterogeneous computing systems. IEE Proc. Comput. Digit. Tech. 1998, 145, 411–418. [Google Scholar] [CrossRef]
  43. Daoud, M.I.; Kharma, N. A hybrid heuristic–genetic algorithm for task scheduling in heterogeneous processor networks. J. Parallel Distrib. Comput. 2011, 71, 1518–1531. [Google Scholar] [CrossRef]
  44. Eswari, R.; Nickolas, S. Path-Based Heuristic Task Scheduling Algorithm for Heterogeneous Distributed Computing Systems. In Proceedings of the 2010 International Conference on Advances in Recent Technologies in Communication and Computing, Kottayam, India, 16–17 October 2010; pp. 30–34. [Google Scholar]
  45. Arabnejad, H. List Based Task Scheduling Algorithms on Heterogeneous Systems—An Overview. Available online: https://paginas.fe.up.pt/~prodei/dsie12/papers/paper_30.pdf (accessed on 27 August 2020).
  46. Arabnejad, H.; Barbosa, J.G. Performance Evaluation of List Based Scheduling on Heterogeneous Systems. Available online: http://icl.cs.utk.edu/workshops/heteropar2011/slides/heteropar_JorgeBarbosa.pdf (accessed on 27 August 2020).
47. Hsu, C.H.; Hsieh, C.W.; Yang, C.T. A generalized critical task anticipation technique for DAG scheduling. In International Conference on Algorithms and Architectures for Parallel Processing; Springer: Berlin/Heidelberg, Germany, 2007; pp. 493–505. [Google Scholar]
48. Ilavarasan, E.; Thambidurai, P.; Mahilmannan, R. Performance Effective Task Scheduling Algorithm for Heterogeneous Computing System. In Proceedings of the 4th International Symposium on Parallel and Distributed Computing (ISPDC'05), Lille, France, 4–6 July 2005; pp. 28–38. [Google Scholar]
  49. Kang, Y.; Zhang, Z.; Chen, P. An activity-based genetic algorithm approach to multiprocessor scheduling. In Proceedings of the 2011 Seventh International Conference on Natural Computation, Shanghai, China, 26–28 July 2011; Volume 2, pp. 1048–1052. [Google Scholar]
  50. Kang, Y.; Lin, Y. A recursive algorithm for scheduling of tasks in a heterogeneous distributed environment. In Proceedings of the 2011 4th International Conference on Biomedical Engineering and Informatics (BMEI), Shanghai, China, 15–17 October 2011; Volume 4, pp. 2099–2103. [Google Scholar]
  51. Lai, K.C.; Yang, C.T. A dominant predecessor duplication scheduling algorithm for heterogeneous systems. J. Supercomput. 2008, 44, 126–145. [Google Scholar] [CrossRef]
  52. Lee, L.; Chen, C.; Chang, H.; Tang, C.; Pan, K. A Non-Critical Path Earliest-Finish Algorithm for Inter-dependent Tasks in Heterogeneous Computing Environments. In Proceedings of the 2009 11th IEEE International Conference on High Performance Computing and Communications, Seoul, Korea, 25–27 June 2009; pp. 603–608. [Google Scholar]
  53. Lee, Y.; Zomaya, A. A Novel State Transition Method for Metaheuristic-Based Scheduling in Heterogeneous Computing Systems. IEEE Trans. Parallel Distrib. Syst. 2008, 19, 1215–1223. [Google Scholar]
Figure 1. Graph G = (T, C) of precedence tasks with communication costs Ci,j.
Figure 2. Flowchart of the proposed Greedy Randomized Adaptive Search Procedure (GRASP)-Cellular Processing Algorithm (CPA).
Table 1. Computational costs Pi,j of the tasks on every machine.
Task    m1    m2    m3
t0      11    13     9
t1      10    15    11
t2       9    12    14
t3      12    16    10
t4      15    11    19
t5      13     9     5
t6      11    15    13
t7      11    15    10
Table 2. Tasks’ order of execution.
Table 2. Tasks’ order of execution.
Order of execution: t0, t4, t3, t1, t2, t5, t6, t7
Table 3. Parameter settings for EFT-ILS and GRASP-CPA.
Parameter                     EFT-ILS                     GRASP-CPA
Phase 1 stopping criterion    50 iterations/5 min         50 iterations/5 min
Phase 2 stopping criterion    100,000 MaxEvaluations      100,000 MaxEvaluations
α                             -                           1.0 and 0.9
Communication                 -                           single-point cx, pr = 1.0
ILS max iterations            unlimited/MaxEvaluations    5/MaxEvaluations
Perturbation                  Section 3.3.2, Pm = 0.05    Section 3.3.2, Pm = 0.05
Local search                  Algorithm 6                 Algorithm 6
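The α parameter in Table 3 governs GRASP's construction phase through the restricted candidate list (RCL). The following is a minimal sketch under one common convention (here larger α means greedier); the task-priority function, e.g., an upward-rank heuristic, is an assumption supplied by the caller rather than the paper's exact procedure.

```python
import random

def grasp_construct(tasks, priority, alpha, rng=random):
    """Build a precedence-feasible task order with greedy randomization.

    tasks:    dict mapping each task to the set of its predecessor tasks
    priority: callable task -> float; higher means more urgent
    alpha:    RCL threshold in [0, 1]; 1.0 keeps only the best candidates,
              0.0 admits every ready task (purely random choice).
    """
    remaining = dict(tasks)
    order = []
    while remaining:
        # tasks whose predecessors have all been scheduled are candidates
        ready = [t for t, preds in remaining.items()
                 if preds.issubset(order)]
        best = max(priority(t) for t in ready)
        worst = min(priority(t) for t in ready)
        # restricted candidate list: priority within alpha of the best
        threshold = worst + alpha * (best - worst)
        rcl = [t for t in ready if priority(t) >= threshold]
        pick = rng.choice(rcl)
        order.append(pick)
        del remaining[pick]
    return order

# Small example DAG with a hypothetical rank as the priority function.
example = {"t0": set(), "t1": {"t0"}, "t2": {"t0"}, "t3": {"t1", "t2"}}
rank = {"t0": 4.0, "t1": 3.0, "t2": 2.0, "t3": 1.0}
print(grasp_construct(example, rank.get, alpha=1.0))  # greedy: best-ranked ready task first
```

With α = 1.0 the construction is purely greedy, while α = 0.9 admits near-best candidates, matching the settings of Table 3 under this convention.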
Table 4. Median and IQR of EFT-ILS and GRASP-CPA on the Fpppp instances over 100 independent runs. Light gray emphasizes the best results.
Problem | EFT-ILS median (IQR) | GRASP-CPA median (IQR) | Problem | EFT-ILS median (IQR) | GRASP-CPA median (IQR)
Fpppp-8-334-0.1-0.1 9.40 E 1 1.1 E 0 9.39 E 1 1.2 E 0 Fpppp-32-334-0.1-0.1 2.28 E 2 8.4 E 0 2.28 E 2 6.8 E 0
Fpppp-8-334-0.1-0.5 6.05 E 2 9.8 E 0 6.00 E 2 8.7 E 0 Fpppp-32-334-0.1-0.5 8.88 E 2 2.1 E 1 8.93 E 2 3.2 E 1
Fpppp-8-334-0.1-1 1.49 E 3 2.5 E 1 1.48 E 3 2.7 E 1 Fpppp-32-334-0.1-1 1.12 E 3 3.0 E 1 1.12 E 3 2.5 E 1
Fpppp-8-334-0.1-5 2.65 E 3 9.2 E 1 2.60 E 3 8.8 E 1 Fpppp-32-334-0.1-5 1.55 E 3 7.0 E 1 1.52 E 3 7.6 E 1
Fpppp-8-334-0.1-10 2.12 E 3 7.7 E 1 2.05 E 3 9.7 E 1 Fpppp-32-334-0.1-10 5.66 E 3 2.7 E 2 5.51 E 3 3.2 E 2
Fpppp-8-334-0.25-0.1 8.67 E 2 1.1 E 1 8.61 E 2 1.1 E 1 Fpppp-32-334-0.25-0.1 8.44 E 2 2.8 E 1 8.40 E 2 3.3 E 1
Fpppp-8-334-0.25-0.5 2.65 E 3 3.7 E 1 2.64 E 3 3.7 E 1 Fpppp-32-334-0.25-0.5 5.71 E 2 1.4 E 1 5.75 E 2 1.6 E 1
Fpppp-8-334-0.25-1 1.94 E 3 3.0 E 1 1.93 E 3 3.3 E 1 Fpppp-32-334-0.25-1 2.66 E 2 8.5 E 0 2.67 E 2 6.8 E 0
Fpppp-8-334-0.25-5 5.86 E 2 2.1 E 1 5.73 E 2 2.4 E 1 Fpppp-32-334-0.25-5 2.76 E 3 1.5 E 2 2.70 E 3 1.3 E 2
Fpppp-8-334-0.25-10 2.82 E 3 1.2 E 2 2.70 E 3 1.3 E 2 Fpppp-32-334-0.25-10 4.42 E 3 1.6 E 2 4.41 E 3 1.7 E 2
Fpppp-8-334-0.5-0.1 3.87 E 2 4.7 E 0 3.83 E 2 5.6 E 0 Fpppp-32-334-0.5-0.1 1.04 E 3 3.6 E 1 1.03 E 3 3.4 E 1
Fpppp-8-334-0.5-0.5 2.25 E 3 3.9 E 1 2.23 E 3 3.8 E 1 Fpppp-32-334-0.5-0.5 1.23 E 3 4.1 E 1 1.24 E 3 4.2 E 1
Fpppp-8-334-0.5-1 2.07 E 3 4.4 E 1 2.06 E 3 4.2 E 1 Fpppp-32-334-0.5-1 6.64 E 2 1.8 E 1 6.66 E 2 1.6 E 1
Fpppp-8-334-0.5-5 4.23 E 3 1.4 E 2 4.08 E 3 1.3 E 2 Fpppp-32-334-0.5-5 2.41 E 3 1.5 E 2 2.41 E 3 1.4 E 2
Fpppp-8-334-0.5-10 2.73 E 3 1.3 E 2 2.60 E 3 1.5 E 2 Fpppp-32-334-0.5-10 3.08 E 3 1.2 E 2 3.03 E 3 1.1 E 2
Fpppp-8-334-0.75-0.1 3.00 E 2 5.8 E 0 2.96 E 2 4.8 E 0 Fpppp-32-334-0.75-0.1 8.78 E 2 3.1 E 1 8.71 E 2 3.1 E 1
Fpppp-8-334-0.75-0.5 3.93 E 2 8.4 E 0 3.89 E 2 6.0 E 0 Fpppp-32-334-0.75-0.5 1.28 E 3 2.7 E 1 1.28 E 3 2.8 E 1
Fpppp-8-334-0.75-1 7.64 E 2 1.7 E 1 7.53 E 2 1.4 E 1 Fpppp-32-334-0.75-1 7.89 E 2 2.3 E 1 7.97 E 2 1.9 E 1
Fpppp-8-334-0.75-5 2.87 E 3 1.3 E 2 2.77 E 3 1.4 E 2 Fpppp-32-334-0.75-5 8.48 E 2 3.1 E 1 8.38 E 2 2.4 E 1
Fpppp-8-334-0.75-10 4.92 E 3 1.9 E 2 4.74 E 3 2.4 E 2 Fpppp-32-334-0.75-10 2.54 E 3 7.0 E 1 2.51 E 3 7.3 E 1
Fpppp-8-334-1-0.1 7.97 E 2 1.1 E 1 7.86 E 2 1.0 E 1 Fpppp-32-334-1-0.1 6.90 E 2 2.2 E 1 6.90 E 2 2.4 E 1
Fpppp-8-334-1-0.5 1.31 E 3 3.3 E 1 1.30 E 3 2.3 E 1 Fpppp-32-334-1-0.5 6.28 E 2 2.3 E 1 6.29 E 2 1.8 E 1
Fpppp-8-334-1-1 3.21 E 2 7.3 E 0 3.16 E 2 8.4 E 0 Fpppp-32-334-1-1 6.81 E 1 1.7 E 0 6.82 E 1 1.8 E 0
Fpppp-8-334-1-5 4.63 E 3 2.0 E 2 4.53 E 3 2.2 E 2 Fpppp-32-334-1-5 2.13 E 3 1.1 E 2 2.13 E 3 8.0 E 1
Fpppp-8-334-1-10 3.35 E 3 1.4 E 2 3.21 E 3 1.0 E 2 Fpppp-32-334-1-10 4.46 E 3 2.5 E 2 4.34 E 3 1.9 E 2
Fpppp-16-334-0.1-0.1 3.70 E 2 7.7 E 0 3.72 E 2 9.6 E 0 Fpppp-64-334-0.1-0.1 2.60 E 2 1.7 E 0 2.60 E 2 1.4 E 0
Fpppp-16-334-0.1-0.5 1.93 E 3 6.5 E 1 1.92 E 3 7.8 E 1 Fpppp-64-334-0.1-0.5 5.54 E 1 8.4 E 1 5.58 E 1 7.8 E 1
Fpppp-16-334-0.1-1 8.72 E 2 2.7 E 1 8.66 E 2 3.4 E 1 Fpppp-64-334-0.1-1 7.31 E 2 1.1 E 1 7.36 E 2 1.0 E 1
Fpppp-16-334-0.1-5 3.26 E 3 1.2 E 2 3.25 E 3 1.2 E 2 Fpppp-64-334-0.1-5 2.00 E 3 1.0 E 2 2.03 E 3 9.2 E 1
Fpppp-16-334-0.1-10 5.57 E 3 2.4 E 2 5.37 E 3 3.0 E 2 Fpppp-64-334-0.1-10 2.26 E 3 1.1 E 2 2.28 E 3 1.3 E 2
Fpppp-16-334-0.25-0.1 1.80 E 3 4.2 E 1 1.78 E 3 4.5 E 1 Fpppp-64-334-0.25-0.1 8.93 E 2 5.2 E 0 8.97 E 2 6.1 E 0
Fpppp-16-334-0.25-0.5 1.78 E 3 5.7 E 1 1.78 E 3 6.0 E 1 Fpppp-64-334-0.25-0.5 1.33 E 3 1.4 E 1 1.34 E 3 1.6 E 1
Fpppp-16-334-0.25-1 5.15 E 2 2.5 E 1 5.12 E 2 1.6 E 1 Fpppp-64-334-0.25-1 9.48 E 2 2.0 E 1 9.52 E 2 2.6 E 1
Fpppp-16-334-0.25-5 1.87 E 3 7.0 E 1 1.85 E 3 7.9 E 1 Fpppp-64-334-0.25-5 2.06 E 3 9.9 E 1 2.06 E 3 9.6 E 1
Fpppp-16-334-0.25-10 8.07 E 2 3.3 E 1 7.79 E 2 3.1 E 1 Fpppp-64-334-0.25-10 1.74 E 3 7.6 E 1 1.75 E 3 8.1 E 1
Fpppp-16-334-0.5-0.1 1.46 E 3 3.0 E 1 1.44 E 3 2.6 E 1 Fpppp-64-334-0.5-0.1 2.04 E 2 3.3 E 1 2.04 E 2 1.5 E 0
Fpppp-16-334-0.5-0.5 1.35 E 3 3.6 E 1 1.35 E 3 4.0 E 1 Fpppp-64-334-0.5-0.5 1.13 E 3 1.1 E 1 1.13 E 3 1.2 E 1
Fpppp-16-334-0.5-1 3.80 E 2 1.4 E 1 3.74 E 2 1.1 E 1 Fpppp-64-334-0.5-1 5.96 E 2 1.1 E 1 6.01 E 2 7.4 E 0
Fpppp-16-334-0.5-5 2.82 E 3 1.7 E 2 2.85 E 3 1.3 E 2 Fpppp-64-334-0.5-5 1.23 E 3 6.1 E 1 1.26 E 3 6.0 E 1
Fpppp-16-334-0.5-10 9.22 E 2 2.9 E 1 8.94 E 2 3.4 E 1 Fpppp-64-334-0.5-10 1.61 E 3 6.8 E 1 1.62 E 3 5.4 E 1
Fpppp-16-334-0.75-0.1 3.27 E 2 7.2 E 0 3.24 E 2 9.2 E 0 Fpppp-64-334-0.75-0.1 1.18 E 2 8.0 E 2 1.18 E 2 6.7 E 1
Fpppp-16-334-0.75-0.5 5.87 E 2 2.2 E 1 5.85 E 2 1.7 E 1 Fpppp-64-334-0.75-0.5 4.72 E 2 9.2 E 0 4.76 E 2 1.1 E 1
Fpppp-16-334-0.75-1 6.12 E 2 2.2 E 1 6.12 E 2 2.0 E 1 Fpppp-64-334-0.75-1 4.98 E 2 1.2 E 1 5.00 E 2 1.1 E 1
Fpppp-16-334-0.75-5 3.73 E 3 2.6 E 2 3.68 E 3 1.9 E 2 Fpppp-64-334-0.75-5 8.96 E 2 4.9 E 1 9.06 E 2 5.3 E 1
Fpppp-16-334-0.75-10 1.99 E 3 1.0 E 2 1.92 E 3 8.2 E 1 Fpppp-64-334-0.75-10 3.44 E 3 1.4 E 2 3.46 E 3 1.4 E 2
Fpppp-16-334-1-0.1 7.48 E 2 1.8 E 1 7.40 E 2 1.7 E 1 Fpppp-64-334-1-0.1 8.22 E 1 5.2 E 1 8.26 E 1 6.8 E 1
Fpppp-16-334-1-0.5 9.56 E 2 2.7 E 1 9.47 E 2 2.8 E 1 Fpppp-64-334-1-0.5 3.55 E 2 3.8 E 0 3.56 E 2 2.5 E 0
Fpppp-16-334-1-1 4.58 E 2 1.3 E 1 4.54 E 2 1.5 E 1 Fpppp-64-334-1-1 1.28 E 3 1.2 E 1 1.29 E 3 1.4 E 1
Fpppp-16-334-1-5 1.45 E 3 6.4 E 1 1.44 E 3 6.8 E 1 Fpppp-64-334-1-5 8.50 E 2 5.1 E 1 8.58 E 2 5.8 E 1
Fpppp-16-334-1-10 9.84 E 2 3.5 E 1 9.54 E 2 4.2 E 1 Fpppp-64-334-1-10 4.34 E 3 1.6 E 2 4.39 E 3 1.8 E 2
▲ indicates that GRASP-CPA outperforms EFT-ILS with a statistically significant difference; ▽ indicates that EFT-ILS outperforms GRASP-CPA with a statistically significant difference; − indicates no statistically significant difference.
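Each cell in Tables 4–7 is a median paired with its interquartile range over 100 independent runs, a nonparametric summary that does not assume normally distributed makespans [41]. A small sketch of how such entries can be computed with Python's standard library (the run values below are illustrative, not results from the paper):

```python
import statistics

def median_iqr(samples):
    """Median and interquartile range (Q3 - Q1) of a list of makespans."""
    q1, q2, q3 = statistics.quantiles(samples, n=4)  # quartile cut points
    return q2, q3 - q1

runs = [940, 951, 929, 946, 938, 955, 942, 933]  # illustrative makespans
med, iqr = median_iqr(runs)
print(med, iqr)
```

Note that `statistics.quantiles` defaults to the exclusive method; the exact quartile values depend on the interpolation method chosen.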
Table 5. Median and IQR of EFT-ILS and GRASP-CPA on the LIGO instances over 100 independent runs. Light gray emphasizes the best results.
Problem | EFT-ILS median (IQR) | GRASP-CPA median (IQR) | Problem | EFT-ILS median (IQR) | GRASP-CPA median (IQR)
LIGO-8-76-0.1-0.1 6.10 E 2 1.9 E 1 6.08 E 2 1.4 E 1 LIGO-32-76-0.1-0.1 5.40 E 2 1.4 E 0 5.41 E 2 1.3 E 0
LIGO-8-76-0.1-0.5 9.24 E 2 1.8 E 1 9.20 E 2 2.0 E 1 LIGO-32-76-0.1-0.5 6.00 E 2 1.2 E 1 6.02 E 2 1.3 E 1
LIGO-8-76-0.1-1 1.04 E 3 2.5 E 1 1.03 E 3 3.8 E 1 LIGO-32-76-0.1-1 3.03 E 1 8.5 E 1 3.03 E 1 8.3 E 1
LIGO-8-76-0.1-5 1.36 E 3 5.2 E 1 1.30 E 3 7.1 E 1 LIGO-32-76-0.1-5 1.30 E 3 6.3 E 1 1.26 E 3 8.7 E 1
LIGO-8-76-0.1-10 1.48 E 3 7.5 E 1 1.46 E 3 1.1 E 2 LIGO-32-76-0.1-10 2.01 E 3 1.2 E 2 1.96 E 3 1.1 E 2
LIGO-8-76-0.25-0.1 4.91 E 2 1.4 E 1 4.82 E 2 1.1 E 1 LIGO-32-76-0.25-0.1 1.03 E 2 8.1 E 1 1.03 E 2 7.2 E 1
LIGO-8-76-0.25-0.5 7.54 E 2 2.4 E 1 7.49 E 2 1.8 E 1 LIGO-32-76-0.25-0.5 8.64 E 1 1.5 E 0 8.59 E 1 2.4 E 0
LIGO-8-76-0.25-1 4.76 E 2 1.7 E 1 4.70 E 2 1.2 E 1 LIGO-32-76-0.25-1 8.43 E 2 2.2 E 1 8.42 E 2 2.7 E 1
LIGO-8-76-0.25-5 2.15 E 2 8.1 E 0 2.11 E 2 9.3 E 0 LIGO-32-76-0.25-5 5.86 E 2 2.0 E 1 5.82 E 2 2.3 E 1
LIGO-8-76-0.25-10 1.74 E 3 1.2 E 2 1.71 E 3 8.4 E 1 LIGO-32-76-0.25-10 8.25 E 2 5.9 E 1 8.06 E 2 6.4 E 1
LIGO-8-76-0.5-0.1 4.30 E 2 1.1 E 1 4.24 E 2 1.1 E 1 LIGO-32-76-0.5-0.1 6.76 E 2 0.0 E 0 6.76 E 2 0.0 E 0
LIGO-8-76-0.5-0.5 8.17 E 2 2.4 E 1 8.05 E 2 2.6 E 1 LIGO-32-76-0.5-0.5 3.72 E 2 8.4 E 0 3.77 E 2 1.0 E 1
LIGO-8-76-0.5-1 8.06 E 2 2.6 E 1 8.15 E 2 3.0 E 1 LIGO-32-76-0.5-1 4.22 E 2 1.3 E 1 4.22 E 2 1.3 E 1
LIGO-8-76-0.5-5 1.77 E 3 1.1 E 2 1.76 E 3 8.6 E 1 LIGO-32-76-0.5-5 6.40 E 2 3.2 E 1 6.33 E 2 3.3 E 1
LIGO-8-76-0.5-10 3.41 E 2 2.6 E 1 3.46 E 2 2.3 E 1 LIGO-32-76-0.5-10 1.70 E 3 1.0 E 2 1.63 E 3 6.1 E 1
LIGO-8-76-0.75-0.1 1.75 E 2 3.7 E 0 1.71 E 2 4.5 E 0 LIGO-32-76-0.75-0.1 5.70 E 2 0.0 E 0 5.70 E 2 0.0 E 0
LIGO-8-76-0.75-0.5 5.19 E 2 1.6 E 1 5.09 E 2 1.4 E 1 LIGO-32-76-0.75-0.5 3.19 E 2 1.4 E 0 3.19 E 2 1.2 E 0
LIGO-8-76-0.75-1 7.62 E 2 2.6 E 1 7.52 E 2 2.0 E 1 LIGO-32-76-0.75-1 6.60 E 1 1.5 E 0 6.61 E 1 1.0 E 0
LIGO-8-76-0.75-5 4.78 E 2 2.6 E 1 4.71 E 2 2.5 E 1 LIGO-32-76-0.75-5 1.42 E 3 5.5 E 1 1.39 E 3 9.4 E 1
LIGO-8-76-0.75-10 9.21 E 2 4.2 E 1 8.73 E 2 6.3 E 1 LIGO-32-76-0.75-10 1.46 E 3 7.0 E 1 1.44 E 3 8.4 E 1
LIGO-8-76-1-0.1 1.41 E 2 3.8 E 0 1.36 E 2 3.0 E 0 LIGO-32-76-1-0.1 4.23 E 2 0.0 E 0 4.23 E 2 0.0 E 0
LIGO-8-76-1-0.5 4.13 E 2 1.3 E 1 4.16 E 2 1.1 E 1 LIGO-32-76-1-0.5 1.09 E 2 2.9 E 0 1.10 E 2 2.3 E 0
LIGO-8-76-1-1 8.08 E 2 2.0 E 1 8.09 E 2 1.7 E 1 LIGO-32-76-1-1 1.59 E 2 3.5 E 0 1.59 E 2 4.5 E 0
LIGO-8-76-1-5 4.22 E 2 2.3 E 1 4.11 E 2 2.1 E 1 LIGO-32-76-1-5 5.48 E 2 4.0 E 1 5.33 E 2 3.2 E 1
LIGO-8-76-1-10 2.75 E 2 1.5 E 1 2.68 E 2 1.2 E 1 LIGO-32-76-1-10 2.25 E 3 2.6 E 2 2.14 E 3 1.2 E 2
LIGO-16-76-0.1-0.1 5.52 E 1 5.0 E 1 5.54 E 1 9.8 E 1 LIGO-64-76-0.1-0.1 6.02 E 2 4.5 E 1 6.02 E 2 1.1 E 0
LIGO-16-76-0.1-0.5 5.87 E 2 7.0 E 0 5.89 E 2 6.8 E 0 LIGO-64-76-0.1-0.5 4.22 E 2 1.5 E 1 4.17 E 2 1.4 E 1
LIGO-16-76-0.1-1 7.78 E 2 3.6 E 1 7.85 E 2 3.3 E 1 LIGO-64-76-0.1-1 1.72 E 2 3.7 E 0 1.73 E 2 3.8 E 0
LIGO-16-76-0.1-5 1.28 E 3 8.7 E 1 1.28 E 3 8.7 E 1 LIGO-64-76-0.1-5 1.10 E 3 3.9 E 1 1.11 E 3 4.9 E 1
LIGO-16-76-0.1-10 2.23 E 3 1.5 E 2 2.23 E 3 1.7 E 2 LIGO-64-76-0.1-10 1.32 E 3 7.7 E 1 1.27 E 3 6.8 E 1
LIGO-16-76-0.25-0.1 2.94 E 2 0.0 E 0 2.94 E 2 4.5 E 1 LIGO-64-76-0.25-0.1 5.92 E 1 6.8 E 1 5.93 E 1 8.5 E 1
LIGO-16-76-0.25-0.5 2.98 E 2 1.1 E 0 2.97 E 2 1.9 E 0 LIGO-64-76-0.25-0.5 1.69 E 2 1.9 E 0 1.69 E 2 2.4 E 0
LIGO-16-76-0.25-1 1.01 E 2 1.5 E 0 1.02 E 2 1.7 E 0 LIGO-64-76-0.25-1 4.27 E 2 1.7 E 1 4.30 E 2 1.2 E 1
LIGO-16-76-0.25-5 1.02 E 3 5.2 E 1 1.03 E 3 4.5 E 1 LIGO-64-76-0.25-5 1.68 E 3 7.5 E 1 1.69 E 3 8.0 E 1
LIGO-16-76-0.25-10 5.77 E 2 3.7 E 1 5.57 E 2 2.1 E 1 LIGO-64-76-0.25-10 1.81 E 3 9.3 E 1 1.78 E 3 8.9 E 1
LIGO-16-76-0.5-0.1 1.35 E 2 1.8 E 1 1.35 E 2 2.4 E 1 LIGO-64-76-0.5-0.1 6.26 E 2 0.0 E 0 6.26 E 2 0.0 E 0
LIGO-16-76-0.5-0.5 3.62 E 2 8.8 E 0 3.62 E 2 7.4 E 0 LIGO-64-76-0.5-0.5 6.21 E 2 5.6 E 0 6.22 E 2 2.7 E 0
LIGO-16-76-0.5-1 9.26 E 2 4.1 E 1 9.19 E 2 3.4 E 1 LIGO-64-76-0.5-1 3.26 E 2 8.1 E 0 3.22 E 2 1.2 E 1
LIGO-16-76-0.5-5 2.80 E 2 9.2 E 0 2.81 E 2 8.6 E 0 LIGO-64-76-0.5-5 1.25 E 3 7.8 E 1 1.27 E 3 8.9 E 1
LIGO-16-76-0.5-10 2.45 E 2 1.7 E 1 2.42 E 2 1.2 E 1 LIGO-64-76-0.5-10 9.80 E 2 5.5 E 1 9.47 E 2 5.3 E 1
LIGO-16-76-0.75-0.1 4.93 E 2 0.0 E 0 4.93 E 2 0.0 E 0 LIGO-64-76-0.75-0.1 2.72 E 2 0.0 E 0 2.72 E 2 0.0 E 0
LIGO-16-76-0.75-0.5 1.65 E 2 5.6 E 0 1.65 E 2 4.2 E 0 LIGO-64-76-0.75-0.5 2.79 E 2 0.0 E 0 2.79 E 2 0.0 E 0
LIGO-16-76-0.75-1 3.78 E 2 1.1 E 1 3.76 E 2 1.4 E 1 LIGO-64-76-0.75-1 3.33 E 2 1.4 E 1 3.27 E 2 9.3 E 0
LIGO-16-76-0.75-5 1.09 E 3 5.0 E 1 1.10 E 3 6.0 E 1 LIGO-64-76-0.75-5 8.42 E 2 6.0 E 1 8.17 E 2 5.0 E 1
LIGO-16-76-0.75-10 6.77 E 2 4.5 E 1 6.74 E 2 3.6 E 1 LIGO-64-76-0.75-10 2.42 E 3 1.2 E 2 2.33 E 3 9.7 E 1
LIGO-16-76-1-0.1 1.11 E 2 5.2 E 1 1.11 E 2 1.3 E 0 LIGO-64-76-1-0.1 3.01 E 2 0.0 E 0 3.01 E 2 0.0 E 0
LIGO-16-76-1-0.5 1.34 E 2 1.8 E 0 1.34 E 2 1.7 E 0 LIGO-64-76-1-0.5 2.55 E 2 0.0 E 0 2.55 E 2 0.0 E 0
LIGO-16-76-1-1 2.85 E 2 6.7 E 0 2.85 E 2 4.7 E 0 LIGO-64-76-1-1 1.63 E 2 2.1 E 0 1.63 E 2 2.1 E 0
LIGO-16-76-1-5 1.39 E 2 6.0 E 0 1.38 E 2 6.0 E 0 LIGO-64-76-1-5 1.50 E 3 8.2 E 1 1.51 E 3 7.5 E 1
LIGO-16-76-1-10 2.24 E 2 2.4 E 1 2.29 E 2 2.3 E 1 LIGO-64-76-1-10 1.20 E 2 7.8 E 0 1.18 E 2 8.3 E 0
Table 6. Median and IQR of EFT-ILS and GRASP-CPA on the Robot instances over 100 independent runs. Light gray emphasizes the best results.
Problem | EFT-ILS median (IQR) | GRASP-CPA median (IQR) | Problem | EFT-ILS median (IQR) | GRASP-CPA median (IQR)
Robot-8-88-0.1-0.1 1.89 E 3 2.7 E 1 1.88 E 3 1.6 E 1 Robot-32-88-0.1-0.1 1.07 E 3 2.3 E 0 1.07 E 3 2.3 E 0
Robot-8-88-0.1-0.5 7.22 E 1 1.3 E 0 7.20 E 1 1.2 E 0 Robot-32-88-0.1-0.5 1.07 E 3 3.8 E 0 1.06 E 3 1.1 E 1
Robot-8-88-0.1-1 2.02 E 3 4.4 E 1 1.98 E 3 3.7 E 1 Robot-32-88-0.1-1 1.32 E 3 1.3 E 1 1.33 E 3 1.1 E 1
Robot-8-88-0.1-5 3.03 E 3 1.1 E 2 3.04 E 3 1.0 E 2 Robot-32-88-0.1-5 2.44 E 3 9.8 E 1 2.35 E 3 1.0 E 2
Robot-8-88-0.1-10 2.39 E 3 1.0 E 2 2.24 E 3 9.0 E 1 Robot-32-88-0.1-10 2.97 E 2 1.1 E 1 2.74 E 2 1.1 E 1
Robot-8-88-0.25-0.1 6.45 E 2 1.1 E 1 6.40 E 2 9.3 E 0 Robot-32-88-0.25-0.1 1.26 E 3 3.0 E 0 1.26 E 3 2.4 E 0
Robot-8-88-0.25-0.5 6.52 E 2 9.9 E 0 6.50 E 2 8.2 E 0 Robot-32-88-0.25-0.5 8.73 E 1 8.8 E 1 8.71 E 1 8.9 E 1
Robot-8-88-0.25-1 4.71 E 2 6.4 E 0 4.73 E 2 8.1 E 0 Robot-32-88-0.25-1 1.91 E 3 2.7 E 1 1.91 E 3 2.6 E 1
Robot-8-88-0.25-5 1.55 E 3 4.5 E 1 1.56 E 3 4.5 E 1 Robot-32-88-0.25-5 3.05 E 3 1.1 E 2 3.04 E 3 8.9 E 1
Robot-8-88-0.25-10 2.73 E 3 9.6 E 1 2.66 E 3 8.2 E 1 Robot-32-88-0.25-10 1.85 E 3 8.2 E 1 1.68 E 3 7.6 E 1
Robot-8-88-0.5-0.1 1.03 E 3 1.6 E 1 1.02 E 3 1.3 E 1 Robot-32-88-0.5-0.1 3.75 E 2 2.7 E 0 3.74 E 2 7.7 E 1
Robot-8-88-0.5-0.5 7.93 E 2 1.6 E 1 7.85 E 2 1.5 E 1 Robot-32-88-0.5-0.5 3.23 E 2 3.8 E 0 3.21 E 2 3.8 E 0
Robot-8-88-0.5-1 7.45 E 2 1.1 E 1 7.41 E 2 1.2 E 1 Robot-32-88-0.5-1 8.58 E 2 1.2 E 1 8.65 E 2 1.4 E 1
Robot-8-88-0.5-5 1.35 E 3 4.0 E 1 1.34 E 3 3.0 E 1 Robot-32-88-0.5-5 7.28 E 2 1.7 E 1 7.24 E 2 1.8 E 1
Robot-8-88-0.5-10 3.60 E 2 1.8 E 1 3.49 E 2 1.4 E 1 Robot-32-88-0.5-10 1.98 E 3 7.2 E 1 1.99 E 3 6.3 E 1
Robot-8-88-0.75-0.1 5.64 E 2 1.2 E 1 5.60 E 2 1.1 E 1 Robot-32-88-0.75-0.1 9.42 E 2 0.0 E 0 9.42 E 2 1.5 E 0
Robot-8-88-0.75-0.5 1.59 E 3 2.1 E 1 1.58 E 3 1.7 E 1 Robot-32-88-0.75-0.5 1.28 E 3 1.5 E 1 1.28 E 3 1.2 E 1
Robot-8-88-0.75-1 1.40 E 3 2.9 E 1 1.40 E 3 2.2 E 1 Robot-32-88-0.75-1 1.39 E 3 1.3 E 1 1.39 E 3 1.8 E 1
Robot-8-88-0.75-5 2.52 E 3 7.1 E 1 2.41 E 3 8.0 E 1 Robot-32-88-0.75-5 1.50 E 3 5.0 E 1 1.51 E 3 4.4 E 1
Robot-8-88-0.75-10 1.09 E 3 4.2 E 1 1.04 E 3 3.8 E 1 Robot-32-88-0.75-10 4.75 E 2 1.5 E 1 4.56 E 2 1.8 E 1
Robot-8-88-1-0.1 3.04 E 2 7.2 E 0 2.96 E 2 5.7 E 0 Robot-32-88-1-0.1 1.02 E 3 3.2 E 0 1.02 E 3 3.2 E 0
Robot-8-88-1-0.5 3.73 E 2 9.3 E 0 3.66 E 2 8.0 E 0 Robot-32-88-1-0.5 4.67 E 2 0.0 E 0 4.67 E 2 4.1 E 1
Robot-8-88-1-1 1.38 E 3 4.0 E 1 1.36 E 3 4.3 E 1 Robot-32-88-1-1 7.96 E 2 2.3 E 1 7.78 E 2 3.2 E 1
Robot-8-88-1-5 9.94 E 2 2.8 E 1 9.95 E 2 2.8 E 1 Robot-32-88-1-5 1.16 E 3 3.1 E 1 1.14 E 3 2.9 E 1
Robot-8-88-1-10 2.01 E 3 7.0 E 1 1.93 E 3 6.7 E 1 Robot-32-88-1-10 1.39 E 3 6.0 E 1 1.38 E 3 4.5 E 1
Robot-16-88-0.1-0.1 6.78 E 2 3.5 E 0 6.78 E 2 1.3 E 0 Robot-64-88-0.1-0.1 2.13 E 2 5.1 E 1 2.13 E 2 1.0 E 0
Robot-16-88-0.1-0.5 1.91 E 3 1.3 E 1 1.92 E 3 1.6 E 1 Robot-64-88-0.1-0.5 8.75 E 2 1.2 E 1 8.78 E 2 1.6 E 1
Robot-16-88-0.1-1 1.15 E 3 2.2 E 1 1.14 E 3 2.4 E 1 Robot-64-88-0.1-1 9.45 E 2 1.2 E 1 9.46 E 2 1.2 E 1
Robot-16-88-0.1-5 5.16 E 2 1.8 E 1 5.19 E 2 1.5 E 1 Robot-64-88-0.1-5 2.55 E 3 8.3 E 1 2.50 E 3 8.0 E 1
Robot-16-88-0.1-10 2.53 E 3 7.1 E 1 2.46 E 3 6.8 E 1 Robot-64-88-0.1-10 3.27 E 3 1.2 E 2 3.17 E 3 9.8 E 1
Robot-16-88-0.25-0.1 6.00 E 1 5.0 E 1 6.01 E 1 5.3 E 1 Robot-64-88-0.25-0.1 1.35 E 3 3.7 E 0 1.35 E 3 5.4 E 1
Robot-16-88-0.25-0.5 3.67 E 2 2.4 E 0 3.67 E 2 2.1 E 0 Robot-64-88-0.25-0.5 1.64 E 3 1.7 E 1 1.64 E 3 1.1 E 1
Robot-16-88-0.25-1 5.48 E 2 1.1 E 1 5.47 E 2 8.2 E 0 Robot-64-88-0.25-1 1.09 E 3 1.0 E 1 1.09 E 3 9.7 E 0
Robot-16-88-0.25-5 3.78 E 3 1.4 E 2 3.79 E 3 1.2 E 2 Robot-64-88-0.25-5 2.23 E 3 7.3 E 1 2.22 E 3 7.6 E 1
Robot-16-88-0.25-10 3.53 E 3 1.4 E 2 3.46 E 3 1.9 E 2 Robot-64-88-0.25-10 1.72 E 2 6.4 E 0 1.69 E 2 3.9 E 0
Robot-16-88-0.5-0.1 9.34 E 2 1.9 E 0 9.35 E 2 1.2 E 0 Robot-64-88-0.5-0.1 3.57 E 2 0.0 E 0 3.57 E 2 0.0 E 0
Robot-16-88-0.5-0.5 1.38 E 3 2.0 E 1 1.37 E 3 1.7 E 1 Robot-64-88-0.5-0.5 1.10 E 3 1.3 E 1 1.10 E 3 1.3 E 1
Robot-16-88-0.5-1 2.34 E 2 2.8 E 0 2.35 E 2 2.7 E 0 Robot-64-88-0.5-1 4.51 E 2 4.0 E 0 4.49 E 2 5.6 E 0
Robot-16-88-0.5-5 1.89 E 3 7.5 E 1 1.89 E 3 6.5 E 1 Robot-64-88-0.5-5 1.79 E 3 8.2 E 1 1.79 E 3 7.7 E 1
Robot-16-88-0.5-10 3.33 E 3 1.1 E 2 3.24 E 3 1.1 E 2 Robot-64-88-0.5-10 3.18 E 3 1.1 E 2 3.13 E 3 9.4 E 1
Robot-16-88-0.75-0.1 3.48 E 2 3.6 E 1 3.48 E 2 7.4 E 1 Robot-64-88-0.75-0.1 8.15 E 2 0.0 E 0 8.15 E 2 0.0 E 0
Robot-16-88-0.75-0.5 1.63 E 2 1.6 E 0 1.63 E 2 1.6 E 0 Robot-64-88-0.75-0.5 1.58 E 3 1.3 E 1 1.58 E 3 5.7 E 0
Robot-16-88-0.75-1 7.18 E 2 2.0 E 1 7.14 E 2 1.9 E 1 Robot-64-88-0.75-1 2.24 E 2 5.0 E 0 2.19 E 2 5.5 E 0
Robot-16-88-0.75-5 2.36 E 2 7.6 E 0 2.28 E 2 6.9 E 0 Robot-64-88-0.75-5 2.53 E 3 9.0 E 1 2.47 E 3 6.5 E 1
Robot-16-88-0.75-10 8.92 E 2 4.6 E 1 8.98 E 2 4.6 E 1 Robot-64-88-0.75-10 4.92 E 3 1.6 E 2 4.86 E 3 1.3 E 2
Robot-16-88-1-0.1 2.81 E 2 1.3 E 0 2.80 E 2 1.0 E 0 Robot-64-88-1-0.1 8.45 E 2 0.0 E 0 8.45 E 2 0.0 E 0
Robot-16-88-1-0.5 5.14 E 2 2.6 E 0 5.15 E 2 3.0 E 0 Robot-64-88-1-0.5 9.58 E 2 9.6 E 1 9.59 E 2 3.1 E 0
Robot-16-88-1-1 1.77 E 3 3.6 E 1 1.77 E 3 2.4 E 1 Robot-64-88-1-1 2.47 E 2 1.2 E 0 2.45 E 2 2.7 E 0
Robot-16-88-1-5 1.15 E 3 4.9 E 1 1.15 E 3 4.5 E 1 Robot-64-88-1-5 1.73 E 3 4.1 E 1 1.71 E 3 5.6 E 1
Robot-16-88-1-10 8.29 E 2 2.9 E 1 8.22 E 2 2.9 E 1 Robot-64-88-1-10 3.07 E 3 1.3 E 2 3.00 E 3 1.1 E 2
Table 7. Median and IQR of EFT-ILS and GRASP-CPA on the Sparse instances over 100 independent runs. Light gray emphasizes the best results.
Problem | EFT-ILS median (IQR) | GRASP-CPA median (IQR) | Problem | EFT-ILS median (IQR) | GRASP-CPA median (IQR)
Sparse-8-96-0.1-0.1 2.63 E 1 6.9 E 1 2.63 E 1 7.3 E 1 Sparse-32-96-0.1-0.1 3.70 E 2 2.1 E 1 3.70 E 2 1.4 E 1
Sparse-8-96-0.1-0.5 2.68 E 2 9.8 E 0 2.72 E 2 1.1 E 1 Sparse-32-96-0.1-0.5 4.96 E 2 1.5 E 1 4.94 E 2 1.3 E 1
Sparse-8-96-0.1-1 4.50 E 2 2.0 E 1 4.56 E 2 2.0 E 1 Sparse-32-96-0.1-1 5.33 E 2 2.2 E 1 5.34 E 2 1.2 E 1
Sparse-8-96-0.1-5 7.65 E 2 2.6 E 1 7.56 E 2 3.2 E 1 Sparse-32-96-0.1-5 2.36 E 2 3.4 E 0 2.36 E 2 4.8 E 0
Sparse-8-96-0.1-10 8.93 E 2 4.0 E 1 8.89 E 2 4.0 E 1 Sparse-32-96-0.1-10 1.41 E 3 5.8 E 0 1.41 E 3 7.8 E 0
Sparse-8-96-0.25-0.1 5.10 E 2 1.1 E 1 5.08 E 2 1.3 E 1 Sparse-32-96-0.25-0.1 1.95 E 2 8.5 E 0 1.96 E 2 7.6 E 0
Sparse-8-96-0.25-0.5 2.20 E 1 7.7 E 1 2.21 E 1 8.0 E 1 Sparse-32-96-0.25-0.5 1.41 E 2 2.4 E 0 1.41 E 2 4.2 E 0
Sparse-8-96-0.25-1 3.28 E 2 1.9 E 1 3.30 E 2 1.5 E 1 Sparse-32-96-0.25-1 4.32 E 2 1.2 E 1 4.33 E 2 1.3 E 1
Sparse-8-96-0.25-5 2.09 E 2 9.2 E 0 2.07 E 2 9.8 E 0 Sparse-32-96-0.25-5 6.10 E 2 4.1 E 1 6.14 E 2 4.6 E 1
Sparse-8-96-0.25-10 1.46 E 3 5.6 E 1 1.45 E 3 4.6 E 1 Sparse-32-96-0.25-10 7.48 E 2 9.5 E 0 7.48 E 2 7.2 E 0
Sparse-8-96-0.5-0.1 8.45 E 2 3.1 E 1 8.55 E 2 3.5 E 1 Sparse-32-96-0.5-0.1 3.68 E 2 1.6 E 1 3.66 E 2 1.3 E 1
Sparse-8-96-0.5-0.5 8.47 E 2 2.1 E 1 8.44 E 2 3.0 E 1 Sparse-32-96-0.5-0.5 3.47 E 1 1.2 E 0 3.42 E 1 1.1 E 0
Sparse-8-96-0.5-1 5.95 E 2 2.0 E 1 5.98 E 2 2.6 E 1 Sparse-32-96-0.5-1 5.27 E 2 1.7 E 1 5.23 E 2 1.8 E 1
Sparse-8-96-0.5-5 3.56 E 1 1.6 E 0 3.52 E 1 1.2 E 0 Sparse-32-96-0.5-5 3.54 E 2 1.8 E 1 3.54 E 2 1.2 E 1
Sparse-8-96-0.5-10 5.73 E 2 3.2 E 1 5.69 E 2 2.4 E 1 Sparse-32-96-0.5-10 4.28 E 2 2.6 E 0 4.27 E 2 3.2 E 0
Sparse-8-96-0.75-0.1 6.50 E 2 1.3 E 1 6.43 E 2 1.7 E 1 Sparse-32-96-0.75-0.1 1.20 E 2 4.9 E 0 1.21 E 2 4.9 E 0
Sparse-8-96-0.75-0.5 2.67 E 2 7.0 E 0 2.64 E 2 7.9 E 0 Sparse-32-96-0.75-0.5 2.15 E 2 5.0 E 0 2.15 E 2 6.0 E 0
Sparse-8-96-0.75-1 6.52 E 2 2.3 E 1 6.46 E 2 2.1 E 1 Sparse-32-96-0.75-1 2.25 E 2 8.9 E 0 2.28 E 2 6.6 E 0
Sparse-8-96-0.75-5 9.51 E 2 8.3 E 1 9.51 E 2 8.1 E 1 Sparse-32-96-0.75-5 3.59 E 1 1.1 E 0 3.58 E 1 1.0 E 0
Sparse-8-96-0.75-10 9.99 E 2 7.1 E 1 9.36 E 2 6.0 E 1 Sparse-32-96-0.75-10 6.68 E 2 2.2 E 1 6.61 E 2 2.9 E 1
Sparse-8-96-1-0.1 6.97 E 2 1.6 E 1 6.92 E 2 1.5 E 1 Sparse-32-96-1-0.1 2.98 E 1 5.7 E 1 2.99 E 1 5.7 E 1
Sparse-8-96-1-0.5 6.47 E 2 2.4 E 1 6.30 E 2 2.5 E 1 Sparse-32-96-1-0.5 2.88 E 2 7.3 E 0 2.88 E 2 7.2 E 0
Sparse-8-96-1-1 2.48 E 2 1.1 E 1 2.47 E 2 7.0 E 0 Sparse-32-96-1-1 2.05 E 2 7.2 E 0 2.08 E 2 6.4 E 0
Sparse-8-96-1-5 5.41 E 2 2.3 E 1 5.34 E 2 2.2 E 1 Sparse-32-96-1-5 5.95 E 2 3.2 E 1 5.86 E 2 1.9 E 1
Sparse-8-96-1-10 1.96 E 2 9.8 E 0 1.97 E 2 1.3 E 1 Sparse-32-96-1-10 1.01 E 3 7.2 E 1 9.81 E 2 9.4 E 1
Sparse-16-96-0.1-0.1 5.47 E 2 1.5 E 1 5.46 E 2 2.1 E 1 Sparse-64-96-0.1-0.1 1.36 E 2 4.2 E 1 1.37 E 2 3.7 E 2
Sparse-16-96-0.1-0.5 3.55 E 2 1.3 E 1 3.57 E 2 1.3 E 1 Sparse-64-96-0.1-0.5 4.74 E 2 9.6 E 1 4.74 E 2 2.7 E 0
Sparse-16-96-0.1-1 4.59 E 2 2.0 E 1 4.60 E 2 1.3 E 1 Sparse-64-96-0.1-1 3.12 E 2 1.6 E 0 3.15 E 2 2.2 E 0
Sparse-16-96-0.1-5 1.91 E 2 5.7 E 0 1.89 E 2 5.8 E 0 Sparse-64-96-0.1-5 3.30 E 2 9.9 E 1 3.29 E 2 1.2 E 0
Sparse-16-96-0.1-10 5.54 E 2 7.9 E 0 5.54 E 2 8.4 E 0 Sparse-64-96-0.1-10 1.39 E 2 4.0 E 1 1.38 E 2 2.2 E 1
Sparse-16-96-0.25-0.1 3.04 E 2 1.1 E 1 3.05 E 2 1.1 E 1 Sparse-64-96-0.25-0.1 2.73 E 2 2.2 E 1 2.73 E 2 0.0 E 0
Sparse-16-96-0.25-0.5 2.27 E 2 9.6 E 0 2.28 E 2 8.3 E 0 Sparse-64-96-0.25-0.5 4.04 E 2 4.6 E 0 4.05 E 2 1.1 E 0
Sparse-16-96-0.25-1 2.75 E 2 1.2 E 1 2.78 E 2 1.2 E 1 Sparse-64-96-0.25-1 9.32 E 1 2.5 E 1 9.32 E 1 9.5 E 2
Sparse-16-96-0.25-5 8.10 E 2 3.4 E 1 8.07 E 2 3.2 E 1 Sparse-64-96-0.25-5 3.86 E 2 3.0 E 1 3.64 E 2 1.9 E 1
Sparse-16-96-0.25-10 1.09 E 3 4.5 E 1 1.06 E 3 3.2 E 1 Sparse-64-96-0.25-10 5.47 E 2 4.6 E 1 5.47 E 2 4.6 E 1
Sparse-16-96-0.5-0.1 1.92 E 2 8.4 E 0 1.94 E 2 5.8 E 0 Sparse-64-96-0.5-0.1 4.52 E 2 2.0 E 0 4.52 E 2 5.8 E 1
Sparse-16-96-0.5-0.5 2.80 E 2 1.5 E 1 2.82 E 2 8.6 E 0 Sparse-64-96-0.5-0.5 1.26 E 2 7.9 E 0 1.21 E 2 6.7 E 0
Sparse-16-96-0.5-1 2.01 E 2 9.3 E 0 2.04 E 2 9.0 E 0 Sparse-64-96-0.5-1 2.73 E 2 3.4 E 0 2.75 E 2 2.5 E 0
Sparse-16-96-0.5-5 7.70 E 2 5.3 E 1 7.59 E 2 3.9 E 1 Sparse-64-96-0.5-5 7.03 E 2 1.0 E 1 7.00 E 2 2.0 E 1
Sparse-16-96-0.5-10 1.80 E 3 2.3 E 1 1.80 E 3 3.3 E 1 Sparse-64-96-0.5-10 1.74 E 3 0.0 E 0 1.74 E 3 0.0 E 0
Sparse-16-96-0.75-0.1 3.53 E 2 1.5 E 1 3.51 E 2 1.5 E 1 Sparse-64-96-0.75-0.1 2.89 E 2 5.2 E 0 2.91 E 2 5.2 E 0
Sparse-16-96-0.75-0.5 2.93 E 2 1.2 E 1 2.93 E 2 9.4 E 0 Sparse-64-96-0.75-0.5 3.57 E 2 1.1 E 0 3.57 E 2 5.3 E 0
Sparse-16-96-0.75-1 3.46 E 2 1.5 E 1 3.42 E 2 1.4 E 1 Sparse-64-96-0.75-1 3.68 E 2 3.4 E 0 3.72 E 2 1.2 E 1
Sparse-16-96-0.75-5 5.94 E 2 1.7 E 1 5.95 E 2 2.4 E 1 Sparse-64-96-0.75-5 3.13 E 2 2.2 E 0 3.13 E 2 0.0 E 0
Sparse-16-96-0.75-10 1.64 E 3 3.8 E 1 1.63 E 3 6.6 E 1 Sparse-64-96-0.75-10 4.64 E 2 3.3 E 0 4.64 E 2 3.0 E 0
Sparse-16-96-1-0.1 2.03 E 2 7.2 E 0 2.01 E 2 7.2 E 0 Sparse-64-96-1-0.1 2.32 E 2 0.0 E 0 2.32 E 2 0.0 E 0
Sparse-16-96-1-0.5 2.49 E 2 1.2 E 1 2.50 E 2 1.2 E 1 Sparse-64-96-1-0.5 1.72 E 2 4.4 E 0 1.72 E 2 2.5 E 0
Sparse-16-96-1-1 1.79 E 2 8.0 E 0 1.78 E 2 7.5 E 0 Sparse-64-96-1-1 8.33 E 1 0.0 E 0 8.33 E 1 0.0 E 0
Sparse-16-96-1-5 3.73 E 2 1.9 E 1 3.73 E 2 2.0 E 1 Sparse-64-96-1-5 4.10 E 2 9.4 E 0 4.07 E 2 8.6 E 0
Sparse-16-96-1-10 1.75 E 3 5.2 E 1 1.73 E 3 4.9 E 1 Sparse-64-96-1-10 1.65 E 3 1.1 E 0 1.65 E 3 0.0 E 0
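The median and interquartile range (IQR) reported in the tables above are computed from the makespans of the 100 independent runs of each algorithm on each instance. A minimal sketch of how these two statistics are obtained, assuming the per-run makespans are available as a list; the function name `summarize_runs`, the sample values, and the "inclusive" quartile convention are illustrative, since the paper does not specify which quartile method it uses:

```python
import statistics

def summarize_runs(makespans):
    """Median and IQR of the makespans from independent runs.

    IQR = Q3 - Q1; the quartile convention ("inclusive" here,
    i.e., linear interpolation over the sorted sample) is an
    assumption, not stated in the paper.
    """
    med = statistics.median(makespans)
    q1, _, q3 = statistics.quantiles(makespans, n=4, method="inclusive")
    return med, q3 - q1

# Hypothetical makespans from five runs on one instance:
med, iqr = summarize_runs([265.0, 268.0, 270.0, 272.0, 274.0])
```

A smaller IQR at a similar median (e.g., GRASP-CPA on several Sparse-64 instances with IQR 0.0E0) indicates that the algorithm reaches essentially the same makespan in every run.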
Table 8. Median and IQR for EFT-ILS and GRASP-CPA on the small synthetic benchmark instances over 100 independent runs, with the known optimum, its reference, and running times. Light gray emphasizes the best results.
| Problem | EFT-ILS Median | EFT-ILS IQR | GRASP-CPA Median | GRASP-CPA IQR | Optimum | Reference | Time Enum | Time EFT-ILS | Time GRASP-CPA |
|---|---|---|---|---|---|---|---|---|---|
| Ahmad-3-9 | 27 | 0.0 | 27 | 0.0 | 27 | [42] | 8 s | 0 s | 0 s |
| Daoud-2-11 | 60 | 1.5 | 58.5 | 4.0 | 56 | [43] | 46 s | 0 s | 0 s |
| Eswari-2-11 | 60 | 0.5 | 58.5 | 4.0 | 56 | [44] | 46 s | 0 s | 0 s |
| Hamid-3-10 | 100 | 0.0 | 100 | 0.0 | 100 | [45] | 71 s | 0 s | 0 s |
| Heteropar-4-12 | 136 | 0.0 | 124 | 12 | 124 | [46] | 1677 s | 0 s | 0 s |
| Hsu-3-10 | 80 | 0.0 | 80 | 0.0 | 80 | [47] | 64 s | 0 s | 0 s |
| Ilavarasan-3-10 | 73 | 3.0 | 73 | 0.0 | 73 | [48] | 65 s | 0 s | 0 s |
| Kang1-3-10 | 76 | 0.0 | 76 | 1.0 | 73 | [49] | 67 s | 0 s | 0 s |
| Kang2-3-10 | 83 | 0.0 | 83 | 0.0 | 79 | [50] | 68 s | 0 s | 0 s |
| Kuan-3-10 | 27 | 0.0 | 27 | 0.0 | 26 | [51] | 25 s | 0 s | 0 s |
| Liang-3-10 | 73 | 3.0 | 73 | 3.0 | 73 | [52] | 67 s | 0 s | 0 s |
| Sample-3-8 | 84 | 2.0 | 82 | 3.0 | 81 | [11] | 0 s | 0 s | 0 s |
| SampleFig-3-8 | 66 | 0.0 | 66 | 0.0 | 66 | [11] | 0 s | 0 s | 0 s |
| YCLee-3-8 | 66 | 0.0 | 66 | 0.0 | 66 | [53] | 0 s | 0 s | 0 s |
Table 9. Best results found by EFT-ILS and GRASP-CPA on the small synthetic benchmark instances over 100 independent runs. Light gray emphasizes the best results.
| Problem | EFT-ILS | GRASP-CPA | Optimum | Reference |
|---|---|---|---|---|
| Ahmad-3-9 | 27 | 27 | 27 | [42] |
| Daoud-2-11 | 56 | 56 | 56 | [43] |
| Eswari-2-11 | 56 | 56 | 56 | [44] |
| Hamid-3-10 | 100 | 100 | 100 | [45] |
| Heteropar-4-12 | 136 | 124 | 124 | [46] |
| Hsu-3-10 | 80 | 80 | 80 | [47] |
| Ilavarasan-3-10 | 73 | 73 | 73 | [48] |
| Kang1-3-10 | 73 | 73 | 73 | [49] |
| Kang2-3-10 | 83 | 82 | 79 | [50] |
| Kuan-3-10 | 26 | 26 | 26 | [51] |
| Liang-3-10 | 73 | 73 | 73 | [52] |
| Sample-3-8 | 81 | 81 | 81 | [11] |
| SampleFig-3-8 | 66 | 66 | 66 | [11] |
| YCLee-3-8 | 66 | 66 | 66 | [53] |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Santiago, A.; Terán-Villanueva, J.D.; Martínez, S.I.; Rocha, J.A.C.; Menchaca, J.L.; Berrones, M.G.T.; Ponce-Flores, M. GRASP and Iterated Local Search-Based Cellular Processing algorithm for Precedence-Constraint Task List Scheduling on Heterogeneous Systems. Appl. Sci. 2020, 10, 7500. https://doi.org/10.3390/app10217500