Abstract

In many scheduling studies, researchers treat the processing times of jobs as constant numbers. This assumption is often at odds with practical manufacturing because of several sources of uncertainty arising in real-life situations, such as changing working environments, machine breakdowns, and variations in tool quality and availability. In light of the scenario-dependent processing times observed in many applications, this paper incorporates scenario-dependent processing times into a two-machine flow-shop environment with the objective of minimizing the total completion time. The problem under consideration has not been explored before. To solve it, we first derive a lower bound and two optimality properties to enhance the search efficiency of a branch-and-bound method. We then propose 12 simple heuristics and their counterparts improved by a pairwise interchange method. Furthermore, we use the 12 simple heuristics as initial seeds to design 12 variants of a cloud theory-based simulated annealing (CSA) algorithm. Finally, we conduct simulations and report the performance of the proposed branch-and-bound method, the 12 heuristics, and the 12 variants of the CSA algorithm.

1. Introduction

In classical scheduling models, the processing times of jobs are often assumed to be constant numbers, but in practice many sources of uncertainty arise, such as changes in the working environment, machine breakdowns, variations in tool quality and availability, unstable worker performance, and complex external factors. Moreover, scenario-dependent processing times were not addressed in the scheduling literature until recently [1, 2]. Motivated by these sources of uncertainty, in this study we introduce scenario-dependent processing times into a two-machine flow-shop setting and seek a robust schedule that minimizes the maximum total completion time over all scenarios. This problem is NP-hard because the same problem without scenario-dependent processing times is already NP-hard (see Gonzalez and Sahni [3]).

Many studies in the literature note that processing times are often estimated from statistical data, that the variation in such data can be large, and that the distributions fitted to the data may be inaccurate. Researchers therefore often consider the worst-case performance measure to be more important than the average performance. For the setting in which one of two possible scenarios may occur, Kouvelis and Yu [4] suggested adopting a robust approach that guards against the worst case. For details of different scenarios of job processing times, readers may refer to Aissi et al. [5], Aloulou and Della Croce [6], De Farias et al. [7], Kasperski and Zielinski [8], Yang and Yu [9], and so on.

Regarding studies on the total completion time in the two-machine flow-shop scheduling problem without scenario-dependent processing times, Kohler and Steiglitz [10] proposed several heuristics for finding near-optimal solutions. Following the NP-hardness result of Gonzalez and Sahni [3], Cadambi and Sathe [11], Pan and Wu [12], Wang et al. [13], and Della Croce et al. [14] each developed a branch-and-bound method incorporating several dominance properties to find exact solutions. In two further related studies, Hoogeveen and Kawaguchi [15] presented complexity and approximation results for special cases of this problem, while Della Croce et al. [16] improved the lower bound based on Lagrangian relaxation. For flow shops with more than two machines, readers may refer to Gupta et al. [17], Xiang et al. [18], Allahverdi and Aldowaisan [19], Chung et al. [20], Rajendran and Ziegler [21], Zhang et al. [22], Tasgetiren et al. [23], Gao et al. [24], Shivakumar and Amudha [25], Dong et al. [26, 27], and Marichelvam et al. [28].

Meanwhile, simulated annealing (SA), proposed by Kirkpatrick et al. [29], has been widely and successfully used to solve many discrete combinatorial problems. However, the method has several drawbacks. First, in implementations the temperature is discrete and fixed within each step of the annealing course, which does not match the continuous decrease of temperature in a physical annealing process. Second, at high temperatures SA readily accepts deteriorating solutions and converges slowly. Third, at low temperatures it is hard to escape from local minima, and the search accuracy is low. To overcome these disadvantages, Lv et al. [30] gave a theoretical treatment of a cloud theory-based simulated annealing (CSA) algorithm and showed that CSA is superior to SA in convergence speed, search ability, and robustness. Following that, Torabzadeh and Zandieh [31] applied CSA to two-stage assembly flow-shop problems in an m-machine environment to minimize a weighted sum of makespan and mean completion time; they compared CSA with SA and showed that CSA performed better. For more applications of the CSA algorithm, readers may refer to Geng et al. [32] for an SVR-based load forecasting model, to Pourvaziri and Pierreval [33] for a dynamic facility layout problem based on an open queueing network, and to Wu et al. [34] for a two-stage assembly scheduling problem with learning consideration.

In light of these observations, this study considers a robust two-machine flow-shop scheduling problem with scenario-dependent processing times, with the goal of minimizing the total completion time of the jobs. For the same problem setting with constant job processing times, the literature shows that the problem is NP-hard under the adopted criterion. The main contributions of this study are as follows. (1) We address a robust two-machine flow-shop scheduling problem with scenario-dependent processing times. (2) We derive two new dominance properties and a lower bound, embedded in a branch-and-bound method for finding optimal solutions efficiently. (3) We propose 12 simple heuristics and their counterparts improved by a pairwise interchange method. (4) We design 12 variants of a CSA algorithm.

The rest of this study is organized as follows. Section 2 introduces the notation and problem formulation. Section 3 derives a lower bound and two dominance properties used in a branch-and-bound method. Section 4 proposes 12 simple heuristics, their counterparts improved by a pairwise interchange process, and 12 variants of the CSA algorithm. Section 5 tunes the parameters of the CSA algorithm, and Section 6 reports simulation studies evaluating the performance of all the proposed methods.

2. Notations and Problem Description

The notation adopted in this study is listed as follows:

n: number of jobs
J_1, J_2, …, J_n: the n job codes
{M1, M2}: the two machine codes
σ, σ′: job schedules
(π, π^c): a schedule in which π is the determined subschedule of the first k jobs and π^c is the undetermined subschedule of the remaining n − k jobs
[j]: the job placed in the jth scheduled position
s: scenario index, where s = 1, 2
p^s_{1j}, p^s_{2j}: the scenario-s processing times of job J_j on M1 and M2
B_1, B_2: the starting times on M1 and M2
C^s_{ij}(σ): the completion time of job J_j on machine i under scenario s for the job schedule σ
T_0: the initial temperature in CSA
T_f: the final temperature in CSA
λ: the annealing index, with 0 < λ < 1, in CSA
Nr: the number of improvement repetitions in CSA

The studied two-machine flow-shop scheduling problem with two scenarios (or states) is described as follows. There are n jobs, each consisting of two operations with processing times p^s_{1j} and p^s_{2j} under scenario s, s = 1, 2, on machines M1 and M2, respectively. There is a precedence constraint between the two operations of each job, i.e., each job is first processed on M1 and then on M2. Additionally, no machine can process more than one job at a time, a job cannot be interrupted once its processing starts, and no idle time is allowed on machine M1. As the value of the objective function is scenario-dependent, the robustness criterion of this study is absolute robustness [4, 6]; that is, we seek a schedule minimizing max_{s=1,2} Σ_{j=1}^{n} C^s_{2[j]}(σ) over σ ∈ Π, where Π is the set of all permutations of the n jobs.
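For illustration, the completion-time recursion and the absolute-robustness objective for a given permutation can be evaluated as in the following minimal sketch (ours, not the authors' FORTRAN implementation; all names are illustrative):

```python
# p1[s][j], p2[s][j]: scenario-s processing times of job j on M1, M2.

def total_completion_time(perm, p1, p2, s):
    """Total completion time on M2 under scenario s for permutation perm."""
    c1 = c2 = 0          # completion times of the previous job on M1, M2
    total = 0
    for j in perm:
        c1 += p1[s][j]               # no idle time on M1
        c2 = max(c1, c2) + p2[s][j]  # M2 waits for M1 and for the prior job
        total += c2
    return total

def robust_objective(perm, p1, p2):
    """Absolute-robustness criterion: the worst-scenario total completion time."""
    return max(total_completion_time(perm, p1, p2, s) for s in (0, 1))

# Tiny usage example with 3 jobs and 2 scenarios (rows are scenarios):
p1 = [[3, 1, 4], [2, 5, 1]]
p2 = [[2, 6, 2], [4, 1, 3]]
print(robust_objective([0, 1, 2], p1, p2))   # prints 29
```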

3. Methodology

Given that the same problem without scenario dependence is NP-hard (see Gonzalez and Sahni [3]), the branch-and-bound approach is a commonly and widely used tool for finding optimal solutions. Deriving a good lower bound for a partial schedule is very important in a branch-and-bound (B&B) method. In what follows, we build a lower bound based on the idea of Ignall and Schrage [35]. Let σ = (π, π^c) be a schedule in which the order of the first k jobs in the subschedule π has been determined and the remaining l = n − k jobs in the subschedule π^c have not been determined. By definition, the completion times of the (k + 1)th and (k + 2)th jobs on machines M1 and M2 under scenario s in the schedule σ are given as follows:

C^s_{1[k+1]}(σ) = C^s_{1[k]}(σ) + p^s_{1[k+1]},
C^s_{2[k+1]}(σ) = max{C^s_{1[k+1]}(σ), C^s_{2[k]}(σ)} + p^s_{2[k+1]},
C^s_{1[k+2]}(σ) = C^s_{1[k+1]}(σ) + p^s_{1[k+2]},
C^s_{2[k+2]}(σ) = max{C^s_{1[k+2]}(σ), C^s_{2[k+1]}(σ)} + p^s_{2[k+2]}. (1)

In general,

C^s_{1[k+i]}(σ) = C^s_{1[k]}(σ) + Σ_{t=1}^{i} p^s_{1[k+t]}, i = 1, …, l. (2)

In a similar way, we have

C^s_{2[k+i]}(σ) ≥ C^s_{1[k+i]}(σ) + p^s_{2[k+i]} = C^s_{1[k]}(σ) + Σ_{t=1}^{i} p^s_{1[k+t]} + p^s_{2[k+i]}, i = 1, …, l. (3)

Therefore, we have the following inequality for s = 1, 2:

Σ_{i=1}^{l} C^s_{2[k+i]}(σ) ≥ l·C^s_{1[k]}(σ) + Σ_{i=1}^{l} (l − i + 1) p^s_{1(i)} + Σ_{J_j ∈ π^c} p^s_{2j}. (4)

On the other hand, we have

C^s_{2[k+1]}(σ) ≥ C^s_{2[k]}(σ) + p^s_{2[k+1]}. (5)

In a similar way, we have

C^s_{2[k+i]}(σ) ≥ C^s_{2[k]}(σ) + Σ_{t=1}^{i} p^s_{2[k+t]}, i = 1, …, l. (6)

Therefore, we have the following inequality for s = 1, 2:

Σ_{i=1}^{l} C^s_{2[k+i]}(σ) ≥ l·C^s_{2[k]}(σ) + Σ_{i=1}^{l} (l − i + 1) p^s_{2(i)}. (7)

According to inequalities (4) and (7), we have

Σ_{i=1}^{l} C^s_{2[k+i]}(σ) ≥ max{ l·C^s_{1[k]}(σ) + Σ_{i=1}^{l} (l − i + 1) p^s_{1(i)} + Σ_{J_j ∈ π^c} p^s_{2j},  l·C^s_{2[k]}(σ) + Σ_{i=1}^{l} (l − i + 1) p^s_{2(i)} }. (8)

Therefore, we have the following lower bound:

LB = max_{s=1,2} { Σ_{j=1}^{k} C^s_{2[j]}(σ) + max{ l·C^s_{1[k]}(σ) + Σ_{i=1}^{l} (l − i + 1) p^s_{1(i)} + Σ_{J_j ∈ π^c} p^s_{2j},  l·C^s_{2[k]}(σ) + Σ_{i=1}^{l} (l − i + 1) p^s_{2(i)} } }, (9)

where p^s_{1(i)} and p^s_{2(i)} are the nondecreasing orders of p^s_{1j} and p^s_{2j} for the jobs in the unscheduled subschedule π^c, s = 1, 2.
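The lower bound translates directly into code. The following sketch (with our own illustrative names, reusing the completion-time bookkeeping of the earlier sketch) computes LB for a partial schedule:

```python
# partial: the fixed prefix pi (a list of job indices);
# unscheduled: the job set pi^c; p1[s][j], p2[s][j] as before.

def lower_bound(partial, unscheduled, p1, p2):
    lb = 0
    for s in (0, 1):
        c1 = c2 = partial_sum = 0
        for j in partial:                 # completion times of the prefix
            c1 += p1[s][j]
            c2 = max(c1, c2) + p2[s][j]
            partial_sum += c2
        l = len(unscheduled)
        a = sorted(p1[s][j] for j in unscheduled)   # p^s_{1(i)}, nondecreasing
        b = sorted(p2[s][j] for j in unscheduled)   # p^s_{2(i)}, nondecreasing
        lb1 = l * c1 + sum((l - i) * a[i] for i in range(l)) + sum(b)  # ineq. (4)
        lb2 = l * c2 + sum((l - i) * b[i] for i in range(l))          # ineq. (7)
        lb = max(lb, partial_sum + max(lb1, lb2))
    return lb
```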

In what follows, we derive two further properties for use in the proposed branch-and-bound method. The first is based on the idea of Cadambi and Sathe [11]. To explain the details of Property 1, let σ and σ′ denote two schedules in which (π, J_i, J_j) and (π, J_j, J_i), respectively, are partial sequences; that is, σ′ is obtained from σ by interchanging the adjacent jobs J_i and J_j immediately after the common prefix π.

Property 1. If p^s_{1i} ≤ min{p^s_{1j}, p^s_{2i}} and p^s_{2i} ≤ p^s_{2j}, s = 1, 2, then σ is no worse than σ′.

Proof. To show that σ is no worse than σ′, the following conditions suffice for s = 1, 2:

C^s_{2i}(σ) + C^s_{2j}(σ) ≤ C^s_{2j}(σ′) + C^s_{2i}(σ′) (11)

and

C^s_{2j}(σ) ≤ C^s_{2i}(σ′). (12)

From the definition of the completion time of a job, the following equations hold for s = 1, 2:

C^s_{1i}(σ) = C^s_1(π) + p^s_{1i}, (13)
C^s_{2i}(σ) = max{C^s_{1i}(σ), C^s_2(π)} + p^s_{2i}, (14)
C^s_{1j}(σ) = C^s_{1i}(σ) + p^s_{1j}, (15)
C^s_{2j}(σ) = max{C^s_{1j}(σ), C^s_{2i}(σ)} + p^s_{2j}, (16)
C^s_{1j}(σ′) = C^s_1(π) + p^s_{1j}, (17)
C^s_{2j}(σ′) = max{C^s_{1j}(σ′), C^s_2(π)} + p^s_{2j}, (18)
C^s_{2i}(σ′) = max{C^s_{1j}(σ′) + p^s_{1i}, C^s_{2j}(σ′)} + p^s_{2i}. (19)

Using successively definitions (14) and (13), the assumptions, and definitions (17) and (18), we have

C^s_{2i}(σ) = max{C^s_1(π) + p^s_{1i}, C^s_2(π)} + p^s_{2i} ≤ max{C^s_1(π) + p^s_{1j}, C^s_2(π)} + p^s_{2j} = C^s_{2j}(σ′). (20)

Using successively definitions (16), (15), (14), and (13), the assumptions (the first argument of the maximum uses p^s_{1i} ≤ p^s_{2i} and the second uses p^s_{1i} ≤ p^s_{1j}), and definitions (18) and (19), we have

C^s_{2j}(σ) = max{C^s_1(π) + p^s_{1i} + p^s_{1j}, C^s_{2i}(σ)} + p^s_{2j} ≤ max{C^s_1(π) + p^s_{1j}, C^s_2(π)} + p^s_{2j} + p^s_{2i} = C^s_{2j}(σ′) + p^s_{2i} ≤ C^s_{2i}(σ′). (21)

Adding (20) and (21) gives condition (11), and (21) is condition (12); hence σ is no worse than σ′.

Property 2. If …, …, …, and …, s = 1, 2, then σ is no worse than σ′.

Proof. This proof can be obtained similarly to the proof of Property 1.
Note that, in general, a sequence of jobs is said to be no worse than another sequence of the same jobs if, for s = 1, 2, its total completion time on M2 does not exceed that of the other and its last completion time on M2 does not exceed that of the other. Properties 1 and 2 are two special cases.
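In a B&B implementation, the "no worse" relation for a candidate adjacent pair can also be tested by direct evaluation rather than through the closed-form conditions of Properties 1 and 2. The following sketch is our illustration of such a check (not the paper's procedure): it compares, in each scenario, both the sum of the two completion times and the completion time of the later job.

```python
def no_worse(prefix_c1, prefix_c2, i, j, p1, p2):
    """prefix_c1[s], prefix_c2[s]: machine completion times of the prefix pi.
    Returns True if scheduling J_i directly before J_j is no worse than the
    reverse order for every scenario."""
    for s in (0, 1):
        def pair(first, second):
            c1 = prefix_c1[s] + p1[s][first]
            c2 = max(c1, prefix_c2[s]) + p2[s][first]
            d1 = c1 + p1[s][second]
            d2 = max(d1, c2) + p2[s][second]
            return c2 + d2, d2          # (sum of completions, last completion)
        sum_ij, last_ij = pair(i, j)
        sum_ji, last_ji = pair(j, i)
        if sum_ij > sum_ji or last_ij > last_ji:
            return False
    return True
```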

4. Some Heuristics and Some Variants of the CSA

In light of the scenario-dependent processing times, we adopt a three-stage method to solve this problem. It is well known that Johnson's rule minimizes the makespan in a two-machine flow shop. Thus, we apply Johnson's rule to several different combinations of the scenario-dependent processing times to obtain initial seeds. Even though such a seed need not be the best one for the total completion time criterion, it provides a good starting point. We then improve these seeds by a pairwise interchange method and use them as initial solutions of the CSA variants, respectively.

In the research community, Johnson's rule [36] has been commonly utilized to solve the makespan minimization problem in the two-machine flow-shop setting. However, it does not necessarily yield an optimal schedule for our model, in which the processing times are scenario-dependent and the criterion is the total completion time. The main idea of Johnson's rule is that there exists an optimal sequence in which job J_i is scheduled before job J_j if min{p_{1i}, p_{2j}} ≤ min{p_{1j}, p_{2i}} holds. To explore the performance of this rule, 12 simple heuristics and their counterparts improved by a pairwise interchange process are provided in the following. The steps of the 12 stage-one heuristics are described as follows.

In stage 1, the following 12 heuristic algorithms are based on Johnson's rule [36]. To obtain high-quality approximate solutions, the solutions found by these 12 heuristics are used as 12 seeds, both to be improved by a pairwise interchange process and to serve as initial solutions of the cloud theory-based simulated annealing (CSA) algorithm. The 12 proposed initial heuristics, their 12 (pairwise-interchange) improved counterparts, and the 12 variants of the CSA algorithm are summarized below.

Following the idea of Johnson's rule, we propose the 12 heuristics H1, H2, …, H12 as follows.

H1 heuristic:
(1) Let p_{1i} be the processing time of job J_i on machine 1 and p_{2i} be the processing time of job J_i on machine 2, i = 1, 2, …, n.
(2) Compute a Johnson-type ordering index for each job J_i, i = 1, 2, …, n.
(3) Order the jobs in nondecreasing value of this index and output the final schedule and its value of the objective function.

For the H2 and H3 heuristics, step 1 replaces p_{1i} and p_{2i} with alternative combinations of the two scenario processing times of job J_i, while the remaining steps are the same as in H1. For the H4, H5, and H6 heuristics, p_{1i} and p_{2i} are reset to further such combinations, again keeping the remaining steps of H1, and H7 to H9 and H10 to H12 are recorded analogously for the remaining cases, so that the 12 heuristics differ only in how the scenario-dependent processing times are aggregated in step 1. One illustrative choice is sketched below.
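For illustration, the sketch below implements Johnson's rule on aggregated machine times; the particular aggregations shown (scenario 1, scenario 2, or the scenario-wise maximum) are our assumptions, standing in for the paper's 12 combinations.

```python
def johnson_order(a, b):
    """Johnson's rule for single machine times a[j] (M1) and b[j] (M2)."""
    n = len(a)
    front = sorted((j for j in range(n) if a[j] <= b[j]), key=lambda j: a[j])
    back = sorted((j for j in range(n) if a[j] > b[j]),
                  key=lambda j: b[j], reverse=True)
    return front + back

def heuristic_seed(p1, p2, combine):
    """Aggregate the two scenario times per machine, then apply Johnson."""
    n = len(p1[0])
    a = [combine(p1[0][j], p1[1][j]) for j in range(n)]
    b = [combine(p2[0][j], p2[1][j]) for j in range(n)]
    return johnson_order(a, b)

# e.g., a seed built from the scenario-wise maxima (p1, p2 as in the
# earlier sketch):
seed = heuristic_seed(p1, p2, max)
```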

In stage 2, we embed a pairwise interchange improvement method in each heuristic Hi, i = 1, 2, …, 12, to increase the chance of finding better solutions. The improved heuristics are simply denoted Hi + PI. A sketch follows.
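A minimal sketch of the pairwise interchange (PI) improvement (our illustration, reusing robust_objective from the earlier sketch): try swapping every pair of positions, keep any swap that lowers the robust objective, and repeat until no swap improves.

```python
def pairwise_interchange(perm, p1, p2):
    best = list(perm)
    best_val = robust_objective(best, p1, p2)
    improved = True
    while improved:
        improved = False
        n = len(best)
        for u in range(n - 1):
            for v in range(u + 1, n):
                cand = list(best)
                cand[u], cand[v] = cand[v], cand[u]   # swap positions u, v
                val = robust_objective(cand, p1, p2)
                if val < best_val:
                    best, best_val, improved = cand, val, True
    return best, best_val
```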

The problem under study is NP-hard because the same problem without scenario-dependent processing times is NP-hard. To find good near-optimal solutions quickly and save CPU time, in stage 3 we utilize the cloud theory-based simulated annealing (CSA) algorithm proposed by Torabzadeh and Zandieh [31]. We adopt the solutions yielded by the 12 stage-one heuristics as the 12 seeds of the CSA algorithm. Several important parameters of CSA, namely, the initial temperature (T_0), the final temperature (T_f), the number of improvement repetitions (Nr), and the annealing index (λ), need to be explored and tuned. The 12 variants of the CSA algorithm are summarized as follows (Algorithm 1).

Input: 12 initial seeds σ_1, …, σ_12 and the values of the parameters T_0, T_f, λ, Nr, and q
(1) Do i = 1, 12
(2) For the seed σ_i:
(3) Let T = T_0
(4) Do while {T > T_f}
(5) Let Ex = T, En = q·T, and He = q·En, and let u_0 = 1 − q·r_0,
where 0 < q < 1 and r_0 is a random number in (0, 1).
(6) Let j = 1
(7) Do while {j ≤ Nr}
(8) Choose two integers u and v randomly with 1 ≤ u < v ≤ n.
(9) Interchange the uth position and the vth position in σ_i to generate another new schedule σ_i′.
(10) Compute the value of the objective function for σ_i′, obj(σ_i′).
(11) If obj(σ_i′) < obj(σ_i), then replace σ_i by σ_i′; otherwise, replace σ_i by σ_i′ if r < exp(−(obj(σ_i′) − obj(σ_i))/T), where r is a random number in (0, 1).
(12) j = j + 1
(13) End do
(14) Update the temperature by the normal cloud generator: T = λ·(Ex − En′·sqrt(−2 ln u_0)), where En′ is a normal random number with mean En and standard deviation He.
(15) End do
(16) Output the final schedule σ_i and obj(σ_i)
(17) End do
(18) Output the final 12 schedules
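For illustration, Algorithm 1 for a single seed can be coded as follows. The cloud-based cooling step mirrors our reading of lines (5) and (14) above (a Y-condition normal cloud generator with Ex = T, En = q·T, He = q·En) and should be treated as an assumption rather than the authors' exact scheme; robust_objective is from the earlier sketch.

```python
import math
import random

def csa(seed, p1, p2, t0=0.9, tf=1e-8, lam=0.99, nr=20, q=0.1):
    cur = list(seed)
    cur_val = robust_objective(cur, p1, p2)
    best, best_val = list(cur), cur_val
    t, n = t0, len(seed)
    while t > tf:
        en, he = q * t, q * q * t               # entropy and hyper-entropy
        u0 = 1.0 - q * random.random()          # certainty degree in (1-q, 1)
        for _ in range(nr):
            i, j = random.sample(range(n), 2)   # random swap neighbour
            cand = list(cur)
            cand[i], cand[j] = cand[j], cand[i]
            val = robust_objective(cand, p1, p2)
            # Metropolis acceptance, as in line (11) of Algorithm 1
            if val < cur_val or random.random() < math.exp((cur_val - val) / t):
                cur, cur_val = cand, val
                if cur_val < best_val:
                    best, best_val = list(cur), cur_val
        en_prime = random.gauss(en, he)         # one cloud drop
        t = lam * abs(t - en_prime * math.sqrt(-2.0 * math.log(u0)))
    return best, best_val
```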

In addition to the above developments, our branch-and-bound algorithm adopts a depth-first policy in the branching tree and assigns jobs in a forward manner from the first position to the last [34, 37]. At each branching node, we first test whether the node can be cut by Property 1 or Property 2; for an active node we compute its lower bound, and for a complete node we compute its objective value. The best solution among the proposed heuristics serves as the initial upper bound, and the upper bound is updated whenever a better complete node is generated.
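A compact sketch of this depth-first B&B (our illustration, omitting the dominance tests of Properties 1 and 2) reuses robust_objective and lower_bound from the earlier sketches:

```python
def branch_and_bound(n, p1, p2, ub, incumbent):
    """ub: initial upper bound; incumbent: the permutation achieving it."""
    best_val, best_perm = ub, list(incumbent)

    def dfs(partial, remaining):
        nonlocal best_val, best_perm
        if not remaining:                       # complete node
            val = robust_objective(partial, p1, p2)
            if val < best_val:
                best_val, best_perm = val, list(partial)
            return
        for j in sorted(remaining):             # forward assignment
            partial.append(j)
            rest = remaining - {j}
            if lower_bound(partial, rest, p1, p2) < best_val:  # else cut
                dfs(partial, rest)
            partial.pop()

    dfs([], set(range(n)))
    return best_perm, best_val

# e.g., seeded with an improved heuristic solution:
# inc, inc_val = pairwise_interchange(heuristic_seed(p1, p2, max), p1, p2)
# perm, val = branch_and_bound(n, p1, p2, inc_val, inc)
```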

5. Tuning the Parameters

Before performing intensive computational simulation experiments, four initial parameters of the CSA algorithm need to be tuned: the initial temperature (T_0), the final temperature (T_f), the number of improvement repetitions (Nr), and the annealing index (λ). Based on experience from preliminary simulations, the final temperature T_f was set to 10^−8. A one-factor-at-a-time experiment was run to tune the parameters in the order T_0, Nr, and λ. In these computer experiments, the number of jobs was set to n = 10, the processing times followed a uniform distribution over the integers 1 to 20, Nr was set to 10, and λ was set to 0.98. For each parameter combination, 100 problem instances were generated, the mean of the objective function was computed, and the B&B method was also run to obtain the optimal objective value. The average error percentage (AEP) was recorded, where AEP = 100 × (Oj − O*)/O* [%], averaged over the 100 instances; Oj is the final objective value output by a heuristic or algorithm, and O* is the optimal value from the B&B method. We tested T_0 at 0.9, 0.99, 0.1, 0.01, 10^−3, 10^−4, and 10^−5. As shown in Figure 1, at T_0 = 0.9 the mean AEP attains its minimum, 0.0257.

Second, the parameter Nr was increased from 1 to 25, with the other parameters fixed at T_0 = 0.9 and λ = 0.98. As shown in Figure 2, at Nr = 20 the mean AEP attains a minimum of 0.18%.

Third, the parameter λ was tested at the values 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.91, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97, 0.98, and 0.99. As shown in Figure 3, at λ = 0.99 the mean AEP reaches a minimum of 0.16%.

According to the tuning results, the parameters were finally set to T_0 = 0.9, Nr = 20, λ = 0.99, and T_f = 10^−8 for the subsequent computational experiments.

6. Simulation Studies

Intensive computational simulation experiments were carried out to evaluate the performance of the branch-and-bound method and the efficacy of the 12 heuristics and the 12 variants of the CSA algorithm (denoted 12 CSA_Hs). The heuristics and algorithms were coded in FORTRAN (Compaq Visual Fortran 6.6) and executed on a PC with an Intel Xeon E5-1620 3.60 GHz CPU and 4.00 GB RAM under 32-bit Windows 7. For small numbers of jobs (n = 8, 9, 10, 12), the mean AEPs of the 12 heuristics and the 12 CSA variants were recorded; for large numbers of jobs (n = 100, 150, 200), the mean relative percentage deviations (RPDs) were reported, where RPD = 100 × (Oj − O_best)/O_best [%], Oj is the final objective value yielded by a heuristic or algorithm, and O_best is the best value found among the 12 heuristics and 12 algorithms.

The job processing times were generated from several different uniform distributions following Kouvelis et al. [2]. The integer processing times p^s_{ij} on machine M_i follow a discrete uniform distribution whose range is controlled by a parameter α and a pair of machine factors (τ_1, τ_2), where α takes the values 0.2, 0.6, and 1.0 and (τ_1, τ_2) takes the values of type T1 = (1.0, 1.0), type T2 = (1.2, 1.0), and type T3 = (1.0, 1.2). There are nine combinations of α and the type of (τ_1, τ_2). For each parameter combination, 100 problem instances were generated.
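A purely illustrative instance generator under one reading of this design (α widens the sampling range and (τ_1, τ_2) scales the two machines; the exact scheme of Kouvelis et al. [2] may differ, so treat this as an assumption):

```python
import random

def generate_instance(n, alpha, tau, lo=1, hi=20):
    """Returns p1, p2 with p1[s][j], p2[s][j] for scenarios s = 0, 1."""
    p1, p2 = [[], []], [[], []]
    for s in (0, 1):
        for _ in range(n):
            p1[s].append(random.randint(lo, max(lo, round(hi * alpha * tau[0]))))
            p2[s].append(random.randint(lo, max(lo, round(hi * alpha * tau[1]))))
    return p1, p2

# e.g., a type-T2 instance with alpha = 0.6:
p1, p2 = generate_instance(10, 0.6, (1.2, 1.0))
```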

The results of the B&B method are listed in Table 1. As can be seen, at n = 12 the number of nodes explored by the B&B method can exceed 10^8. The number of nodes and the CPU times also increase as the number of jobs n and the parameter α increase.

The simulation results for small numbers of jobs are shown in Tables 2 and 3 for the 12 Hi + PIs and the 12 CSA_Hs. The AEPs of the CSA_Hs are much smaller than those of the Hi + PIs. The comparison is further illustrated in Figure 4: the boxplots show that the CSA_Hs group outperforms the Hi + PIs group in both the mean and the variance of AEP, i.e., the CSA_Hs are more accurate and more stable.

Regarding the performance of the 12 heuristics and the 12 CSA_Hs, Tables 4 and 5 present the numerical simulation results.

The performances of the 12 Hi + PIs and the 12 CSA_Hs are further summarized in Figure 5. The boxplots in Figure 5 indicate that the 12 CSA_Hs have a better capability of finding near-optimal solutions than the 12 Hi + PIs, and there appears to be no difference within the CSA_Hs group or within the Hi + PIs group.

The following analysis applies statistical methods to the simulation data to provide evidence on whether the difference in performance between the group of 12 heuristics and the group of 12 CSA_Hs is statistically significant, and whether the differences within the 12 heuristics and within the 12 CSA_Hs are statistically insignificant.

To establish the difference in performance between the 12 heuristics and the CSA_Hs for small n, a linear model was fitted to the AEPs with the factors n, α, and the type of (τ_1, τ_2) in SAS 9.4. Four normality tests of the error term of the fitted linear model were applied; their p values, listed in Table 6 (columns 3 and 4), are all less than 0.05 (for example, the Kolmogorov–Smirnov test gives p < 0.01). Thus, at the commonly used significance level of 0.05, the normality assumption for the AEPs is invalid. Therefore, instead of a parametric statistical method, the Dwass–Steel–Critchlow–Fligner (DSCF) procedure [38], a nonparametric multiple-comparison method, was used to compare the 12 Hi + PIs and the 12 CSA_Hs, in keeping with the characteristics of the simulation observations. Table 7 exhibits the comparison outcomes. The performance difference for every pair consisting of one of the 12 heuristics and one of the 12 CSA_Hs is statistically significant. Moreover, no pairwise difference in AEP within the 12 heuristics, nor within the 12 CSA_Hs, is statistically significant.

For large numbers of jobs, another linear model was fitted to the RPDs with the factors n, α, and the type of (τ_1, τ_2). The p values of the four normality tests of the error term of this fitted model are again less than 0.05 and are listed in Table 6 (columns 5 and 6). Table 8 presents the results of the corresponding multiple comparison of the 12 heuristics and the 12 CSA_Hs for large n. The conclusions about the performance between and within the 12 heuristics and the 12 CSA_Hs are essentially the same as those for small n; we do not repeat them here, as only some of the p values differ slightly between Tables 7 and 8.

7. Conclusions

This paper studied scenario-dependent processing times in a two-machine flow-shop environment with the total completion time as the performance measure. Because of the NP-hardness of the problem, we designed a branch-and-bound (B&B) method and developed a lower bound and two dominance properties to enhance its search efficiency. We then proposed 12 simple heuristics, improved them by a pairwise interchange method (denoted Hi + PIs), and further devised a cloud theory-based simulated annealing (CSA) algorithm. Feeding the CSA algorithm with the initial solutions produced by the 12 simple heuristics (without any further improvement) yielded 12 versions of the CSA algorithm, denoted CSA_Hs. Computational experiments were conducted to investigate the efficiency of the 12 heuristics, the Hi + PIs, and the 12 CSA_Hs. The CSA_Hs produced better solutions than the 12 heuristics. In addition, statistical analyses of the experimental data verified the significant difference between the Hi + PIs group and the CSA_Hs group.

As for future investigation, other characteristics of a scheduling problem, such as the release times of jobs, may also be subject to uncertainty. A potential research direction is to consider this issue by developing structural properties for algorithm design that incorporate this aspect. Deriving more heuristics based on the total completion time criterion, as in references [39–42], is also a worthy future topic.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported in part by the Ministry of Science and Technology of Taiwan (MOST 108-2410-H-035-046).