1 Introduction

In most classical scheduling problems involving due-date related performance measures, the due-dates are given as a set of predefined job-related parameters, i.e., each job has its own predefined due-date given by the instance. When scheduling with generalized due-dates (gdd’s), the due-date of each job is defined only after the scheduling decisions are made. Accordingly, due-dates are associated with positions in the job processing order, and each job is assigned a due-date based on its position in this order. Yin et al. (2012) pointed out that scheduling with gdd arises in cases where there are milestones in a serial project, each indicating the number of operations that are required to be completed up to a certain point in time. Browne et al. (1984), Hall (1986) and Stecke and Solberg (1981) describe situations in which generalized due-dates arise in practical settings, including in public utility planning, survey design and flexible manufacturing.

1.1 Literature review and problem definition

The field of scheduling with gdd was first introduced by Hall (1986), who analyzed several scheduling problems with gdd on a single machine and on identical parallel machines. He showed that some problems that are solvable in polynomial time for the case of job-related due-dates remain so when generalized due-dates are considered. This includes the problems of minimizing the maximum lateness and the number of tardy jobs on a single machine. He also showed, however, that for some other problems the complexity status changes. For example, Hall showed that although the problem of minimizing the total tardiness on a single machine is \(\mathcal {NP}\)-hard when due-dates are job-related (see Du & Leung, 1990), it is solvable in polynomial time with gdd. Other results on scheduling under the assumption of gdd appear in Sriskandarajah (1990), Hall et al. (1991), Tanaka and Vlach (1999), Gao and Yuan (2006), Yin et al. (2012) and Gerstl and Mosheiov (2020).

In many cases where a scheduling problem with gdd is found to be \( \mathcal {NP}\)-hard, the hardness reduction constructs an instance that includes an arbitrary number of different due-dates, \(\nu _{d}\) (see, e.g., Tanaka & Vlach, 1999; Gerstl & Mosheiov, 2020). However, in real-life applications, especially in cases where delivery costs are very high, the value of \(\nu _{d}\) may be of limited size. Therefore, it is interesting to investigate whether or not the problems become tractable for bounded values of \(\nu _{d}\). We focus on two such problems, where the scheduling criterion is non-regular and follows the concept of just in time (JIT). The concept of JIT is used whenever both job earliness and tardiness should be avoided, i.e., when it is desirable to complete a job’s processing at, or as close as possible to, its due-date. In the first problem, our aim is to find a schedule that minimizes the maximum absolute lateness, while in the second problem we seek a schedule that maximizes the weighted number of jobs completed exactly at their due-dates.

The two problems we consider in this paper are defined as follows. We are given a set \({\mathcal {J}}=\{J_{1},J_{2},\ldots ,J_{n}\}\) of n jobs to be scheduled non-preemptively on a single machine. Let \(p_{j}\) be the processing time of job \(J_{j}\) on the single machine. A feasible job schedule S is defined by (i) a processing permutation \(\pi =\{J_{[1]},J_{[2]},\ldots ,J_{[n]}\}\) of the n jobs on the single machine, where [j] is the index of the job in the jth position in \(\pi \), and by ( ii) a feasible set of processing intervals, \( (S_{[j]},C_{[j]}=S_{[j]}+p_{[j]}]\) for \(j=1,\ldots ,n\), satisfying that \( S_{[j]}\ge C_{[j-1]}=S_{[j-1]}+p_{[j-1]}\) for \(j=1,\ldots ,n\), where \(S_{[j]}\) and \(C_{[j]}\) are the start time and the completion time of job \(J_{[j]}\), respectively, and \(S_{[0]}=p_{[0]}=0\) by definition. Given S, the due-date assigned to job \(J_{[j]}\) is \(\delta _{j}\). Accordingly, the lateness of job \(J_{[j]}\) is defined by \(L_{[j]}=C_{[j]}-\delta _{j}\). Moreover, we say that job \(J_{[j]}\) is a JIT job if \(C_{[j]}=\delta _{j}\), and by \(\mathcal { E}=\{J_{j}\in {\mathcal {J}}\left| C_{[j]}=\delta _{j}\right. \}\), we denote the set of JIT jobs.

In the first problem, our objective is to find a feasible schedule that minimizes the maximum absolute lateness, given by \(\max _{J_{j}\in {\mathcal {J}} }\{\left| L_{j}\right| \},\) while in the second problem we aim to find a solution that maximizes the weighted number of JIT jobs, given by \(\Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\), where \(w_{j}\) is the weight of job \(J_{j}\) representing the gain obtained from completing job \(J_{j}\) in a JIT mode. Using the classical three-field notation for scheduling problems (see Graham et al., 1979), we denote the first problem we study by \( 1\left| gdd\right| \max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}\) and the second problem by \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\). We note that \(\left| L_{[j]}\right| =\max \{E_{[j]},T_{[j]}\}\), where \(E_{[j]}=\max \{0,\delta _{j}-C_{[j]}\}\) is the earliness of job \(J_{[j]}\), and \( T_{[j]}=\max \{0,C_{[j]}-\delta _{j}\}\) is the tardiness of job \(J_{[j]}.\) By \(1\left| {}\right| \max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}\) and \(1\left| {}\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\), we refer to the variant of the same problems where due-dates are job-related.

When due-dates are job-related, the resulting \(1\left| {}\right| \max _{J_{j}\in {\mathcal {J}}} \{\left| L_{j}\right| \}\) and \( 1\left| {}\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) problems are solvable in polynomial time (see Garey et al., 1988 and Lann and Mosheiov, 1996). However, both problems are strongly \( \mathcal {NP}\)-hard with generalized due-dates (see Tanaka & Vlach, 1999 and Gerstl & Mosheiov, 2020), and for the \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) problem, the strong \(\mathcal {NP}\)-hardness result holds even if all weights are identical (i.e., even when the objective is simply to maximize the number of JIT jobs, \(\left| {\mathcal {E}}\right| \)).

1.2 Research objectives and paper organization

The reductions used by Tanaka and Vlach (1999) and Gerstl and Mosheiov (2020) to prove that problems \( 1\left| gdd\right| \max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}\) and \(1\left| gdd\right| \left| {\mathcal {E}} \right| \) are strongly \(\mathcal {NP}\)-hard are done by constructing instances that include an arbitrary number of different due-dates. We aim to see whether the problems become easier to solve when the number of different due-dates in the instance is of a limited size. Our analysis is done from both a classical (see Garey & Johnson, 1979) and a parameterized (see, e.g., Downey, 1999 & Niedermeier, 2006) complexity point of view. To do so, let \(\nu _{d}\) be the number of different due-dates in the instance. From a classical complexity perspective, we aim to determine whether or not the problems are solvable in polynomial time when \(\nu _{d}\) is upper bounded by a constant. If not, we aim to determine whether the problem remains \(\mathcal { NP}\)-hard in the strong sense, or is solvable in pseudo-polynomial time. We also aim to determine the complexity of each problem with respect to (wrt.) \(\nu _{d}\) in the sense of parameterized complexity.

Given an \(\mathcal {NP}\)-hard problem and a parameter k, in parameterized complexity we aim to determine whether the problem has an algorithm running in \(f(k)n^{O(1)}\) time, where f(k) is a function that depends solely on k (and is thus independent of the number of jobs, n). Such an algorithm is referred to as a Fixed Parameter Tractable (FPT) algorithm wrt. k. Note that an FPT algorithm is always faster than an \(n^{f(k)}\) algorithm, for any monotone increasing function f(k), when n and k are sufficiently large. Thus, for example, an FPT algorithm is preferable to an algorithm that is polynomial only when k is treated as a constant.
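For illustration, take \(f(k)=2^{k}\) and \(k=20\) (the numbers are ours and serve only to make the comparison concrete):

$$\begin{aligned} f(k)n^{O(1)}=2^{20}n^{O(1)}\approx 10^{6}\cdot n^{O(1)},\qquad \text {whereas}\qquad n^{f(k)}=n^{2^{20}}. \end{aligned}$$

The first bound is a polynomial in n whose degree does not depend on k, while the degree of the second bound grows with k.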

In Sections 2 and 3, we analyze the tractability of problems \(1\left| gdd\right| \max _{J_{j}\in \mathcal { J}}\{\left| L_{j}\right| \}\) and \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) wrt. \(\nu _{d}\). We prove that problems \( 1\left| gdd\right| \max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}\) and \(1\left| gdd\right| \left| {\mathcal {E}} \right| \) are both \(\mathcal {NP}\)-hard even if \(\nu _{d}=2\). Unless P=\(\mathcal {NP}\), this result rules out the possibility of constructing FPT algorithms for problems \(1\left| gdd\right| \max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}\) and \( 1\left| gdd\right| \left| {\mathcal {E}}\right| \) wrt. \(\nu _{d} \). We also provide pseudo-polynomial time algorithms to solve problems \( 1\left| gdd\right| \max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}\) and \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) when \(\nu _{d}\) is upper bounded by a constant. These results lead to the conclusion that both problems are strongly \(\mathcal {NP}\)-hard only for an arbitrary value of \(\nu _{d}\), while they both become \(\mathcal {NP}\)-hard in the ordinary sense when \(\nu _{d}\) is upper bounded by a constant. As neither problem admits an FPT algorithm wrt. \(\nu _{d}\) (unless P=\(\mathcal {NP}\)), we further analyze the case where we combine parameter \(\nu _{d}\) with another parameter, \(\nu _{p}\), which represents the number of different processing times in the instance. We provide FPT algorithms for problems \( 1\left| gdd\right| \max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}\) and \(1\left| gdd\right| \left| {\mathcal {E}} \right| \) wrt. the combined parameter. A summary and future research section concludes our paper.

2 Maximal absolute lateness

The \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem is strongly \(\mathcal {NP}\)-hard when due-dates are arbitrary (see Tanaka & Vlach, 1999). When there is a common due-date for all jobs (i.e., \(\nu _{d}=1\)), a simple O(n) time algorithm provides the optimal schedule for the corresponding \(1\left| gdd,\delta _{j}=\delta \right| \max \{\left| L_{j}\right| \}\) problem (see Cheng, 1987). In the following three subsections, we complement these two results by showing that the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem is (i) \(\mathcal {NP}\)-hard even when we have only two distinct due-dates in the instance; (ii) solvable in pseudo-polynomial time when the value of \(\nu _{d}\) is upper bounded by a constant; and (iii) fixed-parameter tractable (FPT) when we combine the two parameters: the number of different due-dates (\(\nu _{d}\)) and the number of different processing times (\(\nu _{p}\)).

2.1 \(\mathcal {NP}\)-hardness for the two distinct due-dates case

In this subsection, we show that the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem is \(\mathcal {NP}\)-hard even when \(\nu _{d}=2\). The reduction is done from the ordinary \(\mathcal {NP}\)-hard Equal Size Partition problem, which is defined below.

Definition 1

Equal Size Partition: Given a set of 2t positive integers \( {\mathcal {A}}=\{a_{1},a_{2},\ldots ,a_{2t}\}\) with \(\Sigma _{j=1}^{2t}a_{j}=2B\) and \(0<a_{j}<B\). Can \({\mathcal {A}}\) be partitioned into two subsets \(\mathcal { A}_{1}\) and \({\mathcal {A}}_{2}\) such that \(\Sigma _{a_{j}\in {\mathcal {A}} _{i}}a_{j}=B\) and \(\left| {\mathcal {A}}_{i}\right| =t\) for \(i=1,2\)?

Theorem 1

The \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem is \(\mathcal {NP}\)-hard even when there are only two distinct due-dates (i.e., even when \(\nu _{d}=2\)).

Proof

Given an instance for the \(\mathcal {NP}\)-hard Equal Size Partition problem, we construct the following instance for our \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem: The instance includes \(n=2t+2\) jobs. The processing times are:

$$\begin{aligned} p_{j}=\left\{ \begin{array}{ccc} a_{j} &{} \text {for} &{} j=1,\ldots ,2t \\ 2B &{} \text {for} &{} j=2t+1,2t+2 \end{array} \right. \end{aligned}$$
(1)

and the due-dates are

$$\begin{aligned} \delta _{j}=\left\{ \begin{array}{ccc} {\underline{\delta }}=2.5B &{} \text {for} &{} j=1,\ldots ,t+1 \\ {\overline{\delta }}=5.5B &{} \text {for} &{} j=t+2,\ldots ,2t+2 \end{array} \right. . \end{aligned}$$
(2)

In the decision version of the problem, we ask whether there exists a feasible schedule with \(\max \{\left| L_{j}\right| \}\le B/2\). Note that there are only two distinct due-dates in the constructed instance of the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem.
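For concreteness, a minimal Python sketch of this construction (ours, for illustration only; the function name is hypothetical) is given below.

```python
def build_instance(a):
    """Build the scheduling instance of Theorem 1 from an Equal Size
    Partition instance a = (a_1, ..., a_{2t}) with sum(a) = 2B."""
    t = len(a) // 2
    B = sum(a) // 2
    # processing times, eq. (1): the 2t partition numbers plus two long jobs
    p = list(a) + [2 * B, 2 * B]
    # generalized due-dates attached to positions 1..n, eq. (2)
    delta = [2.5 * B] * (t + 1) + [5.5 * B] * (t + 1)
    # decision question: does a schedule with max |L_j| <= B/2 exist?
    threshold = B / 2
    return p, delta, threshold
```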

Given a solution that provides a YES answer for the Equal Size Partition problem, we construct the following solution S to our \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem. We set \( {\mathcal {J}}_{i}=\{J_{j}\left| a_{j}\in {\mathcal {A}}_{i}\right. \}\) for \( i=1,2\). Then, we schedule the jobs in the following order: \(J_{2t+1},{\mathcal {J}}_{1},J_{2t+2},{\mathcal {J}}_{2}\) with no machine idle times. The processing order in each \({\mathcal {J}}_{i}\) \((i=1,2)\) is arbitrary. Since \( \left| {\mathcal {J}}_{i}\right| =\left| {\mathcal {A}}_{i}\right| =t\), all jobs in \(\{J_{2t+1}\}\cup {\mathcal {J}}_{1}\) share the same due-date of \( {\underline{\delta }}=2.5B\), and all jobs in \(\{J_{2t+2}\}\cup {\mathcal {J}}_{2}\) share the same due-date of \({\overline{\delta }}=5.5B\). Therefore, in S:

  • Job \(J_{2t+1}\) is scheduled during time interval (0, 2B]. Thus, \( \left| L_{2t+1}\right| =\left| 2B-2.5B\right| =0.5B.\)

  • Job set \({\mathcal {J}}_{1}\) is scheduled during time interval (2B, 3B]. Therefore, \(\max _{J_{j}\in {\mathcal {J}}_{1}}\{\left| L_{j}\right| \}=\max \{\left| 2B+p_{[2]}-2.5B\right| , 3B-2.5B\}\), where [j] is the index of the jth job in the processing order. The fact that \( p_{[2]}=a_{[2]}<B\) implies that \(\max _{J_{j}\in {\mathcal {J}}_{1}}\{\left| L_{j}\right| \}=3B-2.5B=0.5B\).

  • Job \(J_{2t+2}\) is scheduled during time interval (3B, 5B]. Thus, \( \left| L_{2t+2}\right| =\left| 5B-5.5B\right| =0.5B.\)

  • Job set \({\mathcal {J}}_{2}\) is scheduled during time interval (5B, 6B]. Therefore, \(\max _{J_{j}\in {\mathcal {J}}_{2}}\left| L_{j}\right| =\max \{\left| 5B+p_{[t+3]}-5.5B\right| , 6B-5.5B\}\). The fact that \( p_{[t+3]}=a_{[t+3]}<B\) implies that \(\max _{J_{j}\in {\mathcal {J}} _{2}}\{\left| L_{j}\right| \}=6B-5.5B=0.5B\).

It follows that in schedule S, \(\max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}=0.5B\), and we have a YES answer for the constructed instance of our \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem.

Consider next a solution S that provides a YES answer for the constructed instance of our \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem.

Lemma 1

In S, there are no machine idle times.

Proof

If S includes machine idle times of a total length of \(\Delta >0\), then the last job is completed at time \(6B+\Delta \). As the last scheduled job is assigned a due-date of \({\overline{\delta }}=5.5B\), its absolute lateness equals \(0.5B+\Delta >0.5B\), contradicting the fact that S provides a YES answer for the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem. \(\square \)

Now, let \({\mathcal {J}}_{1}\) be the set of \(t+1\) jobs that are scheduled first in S, and let \({\mathcal {J}}_{2}\) be the set of \(t+1\) jobs that are scheduled last in S. All jobs in \({\mathcal {J}}_{1}\) share the same due-date of \({\underline{\delta }}=2.5B\), and all jobs in \({\mathcal {J}}_{2}\) share the same due-date of \({\overline{\delta }}=5.5B\).

Lemma 2

Set \({\mathcal {J}}_{1}\) includes exactly a single job out of the pair \(\{J_{2t+1},J_{2t+2}\}\).

Proof

By contradiction, assume that set \({\mathcal {J}}_{1}\) includes either none or both jobs in \(\{J_{2t+1},J_{2t+2}\}\). If \({\mathcal {J}}_{1}\) includes none of the two jobs in \(\{J_{2t+1},J_{2t+2}\}\), then based on Lemma 1 the first scheduled job will complete at time \(C_{[1]}=a_{[1]}<B\). Then, \( \left| L_{[1]}\right| =\left| C_{[1]}-\delta _{1}\right| >\left| B-2.5B\right| =1.5B\), contradicting the fact that S provides a YES answer for the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem. If \({\mathcal {J}}_{1}\) includes both jobs in \(\{J_{2t+1},J_{2t+2}\}\), the completion of the last job in \({\mathcal {J}} _{1}\) will be at time \(C_{[t+1]}>p_{2t+1}+p_{2t+2}=4B\). Therefore, \( \left| L_{[t+1]}\right| =\left| C_{[t+1]}-\delta _{t+1}\right| >\left| 4B-2.5B\right| =1.5B\), contradicting the fact that S provides a YES answer for the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem. Accordingly, \( {\mathcal {J}}_{1}\) includes exactly a single job out of the pair \( \{J_{2t+1},J_{2t+2}\}\). \(\square \)

Following Lemma 2, and without loss of generality, assume that \( J_{2t+1}\) is included in \({\mathcal {J}}_{1}\). Moreover, let \(\widehat{\mathcal { J}}_{1}={\mathcal {J}}_{1}\setminus \{J_{2t+1}\}\). Note that \(\left| \widehat{{\mathcal {J}}}_{1}\right| =t\).

Lemma 3

In schedule S, the total processing time of all jobs in \(\widehat{\mathcal { J}}_{1}\) is exactly B (i.e., \(\Sigma _{J_{j}\in \widehat{{\mathcal {J}} }_{1}}p_{j}=B\)).

Proof

By contradiction, assume that \(\Sigma _{J_{j}\in \widehat{{\mathcal {J}}} _{1}}p_{j}>B\). In such a case \(C_{[t+1]}=p_{2t+1}+\Sigma _{J_{j}\in \widehat{ {\mathcal {J}}}_{1}}p_{j}>2B+B=3B.\) Thus, \(\left| L_{[t+1]}\right| =\left| C_{[t+1]}-\delta _{t+1}\right| >\left| 3B-2.5B\right| =0.5B\), contradicting the fact that S provides a YES answer for the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem. On the other hand, if \(\Sigma _{J_{j}\in \widehat{{\mathcal {J}}}_{1}}p_{j}<B\), it implies (based on Lemma 1) that job \(J_{[t+2]}\) will start at time \(p_{2t+1}+\Sigma _{J_{j}\in \widehat{ {\mathcal {J}}}_{1}}p_{j}<3B\). Thus, \(C_{[t+2]}=p_{2t+1}+\Sigma _{J_{j}\in \widehat{{\mathcal {J}}}_{1}}p_{j}+p_{[t+2]}<3B+2B=5B\). Therefore, \(\left| L_{[t+2]}\right| =\left| C_{[t+2]}-\delta _{t+2}\right| >\left| 5B-5.5B\right| =0.5B\), contradicting the fact that S provides a YES answer for the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem. \(\square \)

Now, let \({\mathcal {A}}_{1}=\{a_{j}\mid J_{j}\in \widehat{{\mathcal {J}}}_{1}\}\) and \({\mathcal {A}}_{2}={\mathcal {A}}\diagdown {\mathcal {A}}_{1}\). The fact that \( \left| {\mathcal {A}}_{1}\right| =\left| \widehat{{\mathcal {J}}} _{1}\right| =t\) and that \(\Sigma _{a_{j}\in A_{1}}a_{j}=\Sigma _{J_{j}\in \widehat{{\mathcal {J}}}_{1}}p_{j}=B\) implies that we have a YES answer for the Equal Size Partition problem. \(\square \)

2.2 A pseudo-polynomial time algorithm for a constant number of different due-dates

In this section, we develop a pseudo-polynomial time algorithm to solve the \( 1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem when the number of different due-dates, \(\nu _{d}\), is bounded by a constant. We assume that \(\delta _{1}<\delta _{2}<\cdots <\delta _{\nu _{d}}\), and begin with some notation. Let \(m_{i}\) denote the number of positions in the job processing order having a due-date of \(\delta _{i}\) for \(i=1,\ldots ,\nu _{d}\). Moreover, let \(l_{r}=\Sigma _{i=1}^{r}m_{i}\) denote the number of positions having due-date not greater than \(\delta _{r}\) for \(r=1,\ldots ,\nu _{d}\), with \(l_{\nu _{d}}=n\) by definition. Given a solution, by \(\mathcal { J}_{i}\) we denote the set of \(m_{i}\) jobs assigned to due-date \(\delta _{i}\) (\(i=1,\ldots ,\nu _{d}\)). Our pseudo-polynomial time algorithm exploits the properties in the following two lemmas:

Lemma 4

There exists an optimal schedule for the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem, which includes no machine idle times between the processing of any two consecutive jobs in each \({\mathcal {J}}_{i}\) (\(i=1,\ldots ,\nu _{d}\)).

Proof

Consider an optimal solution \(S^{*}\), which includes machine idle time of duration \(\Delta \) between the processing of jobs \(J_{[j]}\) and \(J_{[j+1]}\), which both belong to the same set \({\mathcal {J}}_{i}\) (i.e., \(j\in \{l_{i-1}+1,\ldots ,l_{i}\}\)). Define an alternative solution \(S^{\prime }\) out of \(S^{*}\), by starting the processing of jobs \( \{J_{[j+1]},\ldots ,J_{[l_{i}]}\}\) \(\Delta \) time units earlier (i.e., by eliminating the idle time between jobs \(J_{[j]}\) and \(J_{[j+1]}\)), while keeping the schedule of all other jobs unchanged.

As all jobs in \({\mathcal {J}}_{i}\) share the same due-date of \(\delta _{i}\), job \( J_{[l_{i-1}+1]}\) has the maximum earliness value among all jobs in \(\mathcal { J}_{i}\), while job \(J_{[l_{i}]}\) has the maximal tardiness value among all jobs in \({\mathcal {J}}_{i}\). As the schedule of job \(J_{[l_{i-1}+1]}\) is identical in both \(S^{\prime }\) and \(S^{*}\), the maximum earliness among all jobs in \({\mathcal {J}}_{i}\) is the same in both \(S^{\prime }\) and \(S^{*}\). However, job \(J_{[l_{i}]}\) is completed \(\Delta \) earlier in \(S^{\prime }\) compared to its completion time in \(S^{*}\). Therefore, the maximum tardiness among all jobs in \({\mathcal {J}}_{i}\) is not greater in \(S^{\prime }\) than it is in \(S^{*}\). Thus, the maximum absolute lateness among all jobs in \( S^{\prime }\) is not greater than it is in \(S^{*}\). Accordingly, schedule \(S^{\prime }\) is optimal as well. By repeating this procedure for any pair of consecutive jobs in \(S^{*}\) sharing the same due-date and having idle time between them, we complete the proof. \(\square \)

Lemma 5

There exists an optimal schedule for the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem, where the job with the largest processing time among all jobs in each set \({\mathcal {J}}_{i}\) is scheduled first (within its job set), while the processing order of all other jobs in \({\mathcal {J}}_{i}\) can be arbitrary.

Proof

As all jobs in \({\mathcal {J}}_{i}\) share the same due-date of \(\delta _{i}\), the first scheduled job in \({\mathcal {J}}_{i}\), which is job \(J_{[l_{i-1}+1]}\), has the maximum earliness value among all jobs in \({\mathcal {J}}_{i}\), while the last scheduled job in \({\mathcal {J}}_{i}\), which is job \(J_{[l_{i}]}\), has the maximal tardiness value among all jobs in \({\mathcal {J}}_{i}\). Consider now an optimal solution, \(S^{*}\), which follows the property in Lemma 4. It follows that in \(S^{*}\)

$$\begin{aligned}&\max _{J_{j}\in {\mathcal {J}}_{i}}\{\left| L_{j}\right| \}=\max \{E_{[l_{i-1}+1]},T_{[l_{i}]}\}\\&\quad =\max \{\max \{0,\delta _{i}-A_{i}-p_{[l_{i-1}+1]}\},\\&\quad \max \{0,A_{i}+P({\mathcal {J}} _{i})-\delta _{i}\}\}, \end{aligned}$$

where \(A_{i}\) is the start time of set \({\mathcal {J}}_{i}\) in \(S^{*}\), and \(P({\mathcal {J}}_{i})=\Sigma _{J_{j}\in {\mathcal {J}}_{i}}p_{j}\). The lemma now follows from the fact that the value of \(T_{[l_{i}]}\) is independent of the internal schedule of the jobs in \({\mathcal {J}}_{i}\), and that the value of \( E_{[l_{i-1}+1]}\) is a non-increasing function of \(p_{[l_{i-1}+1]}\). \(\square \)

Consider now a feasible partition of job set \({\mathcal {J}}\) into the subsets \( {\mathcal {J}}_{1},{\mathcal {J}}_{2},\ldots ,{\mathcal {J}}_{\nu _{d}}\), and let (i) \( P_{i}=\Sigma _{J_{j}\in {\mathcal {J}}_{i}}p_{j}\) represent the total processing time of all jobs assigned to set \({\mathcal {J}}_{i}\); and (ii) \(\alpha _{i}=\max _{J_{j}\in {\mathcal {J}}_{i}}p_{j}\) represent the maximal processing time among all jobs assigned to set \({\mathcal {J}}_{i}\). Given the values of \(P_{i}\) and \(\alpha _{i}\) for \(i=1,\ldots ,\nu _{d},\) we can compute the optimal starting times of job sets \({\mathcal {J}}_{1},{\mathcal {J}}_{2},\ldots , {\mathcal {J}}_{\nu _{d}}\) by using a Linear Programming (LP) formulation, consisting solely of continuous variables. In the formulation, \( S_{i}\) is a continuous variable representing the start time of set \(\mathcal { J}_{i}\) on the single machine (\(i=1,\ldots ,\nu _{d}\)), and Z is a continuous variable representing the value of the maximum absolute lateness. To ensure that the processing intervals of the sets do not overlap, we include the set of constraints that

$$\begin{aligned} S_{i}\ge S_{i-1}+P_{i-1}\text { for }i\in \{1,\ldots ,\nu _{d}\} \end{aligned}$$
(3)

with \(S_{0}=P_{0}=0\) by definition. Due to Lemmas 4 and 5, and due to the fact that all jobs in \({\mathcal {J}}_{i}\) share the same due-date of \(\delta _{i}\), the maximal earliness value of a job belonging to \({\mathcal {J}}_{i}\) is equal to \(\max \{0,\delta _{i}-S_{i}-\alpha _{i}\}\), and the maximal tardiness value of a job belonging to \({\mathcal {J}}_{i}\) is equal to \(\max \{0,S_{i}+P_{i}-\delta _{i}\}\). Therefore, we include the following set of constraints as well

$$\begin{aligned} Z\ge \delta _{i}-S_{i}-\alpha _{i}\text { for }i\in \{1,\ldots ,\nu _{d}\}\text {, } \end{aligned}$$
(4)

and

$$\begin{aligned} Z\ge S_{i}+P_{i}-\delta _{i}\text { for }i\in \{1,\ldots ,\nu _{d}\}. \end{aligned}$$
(5)

Thus, given a feasible partition of job set \({\mathcal {J}}\) into the subsets \( {\mathcal {J}}_{1},{\mathcal {J}}_{2},\ldots ,{\mathcal {J}}_{\nu _{d}}\) represented by the vector (\(P_{1},P_{2},\ldots ,P_{\nu _{d}}, \alpha _{1},\alpha _{2},\ldots ,\alpha _{\nu _{d}}\)), one can compute the optimal starting time of the job sets, and the minimal objective value, by solving an LP problem that minimizes Z subject to the set of constraints in (3)–(5). Using Karmarkar’s Method (see Karmarkar, 1984), this can be done in \(O(\nu _{d}^{3.5})\) time (note that we have \(\nu _{d}\) continuous variables in the LP formulation). Thus, the following holds:

Lemma 6

Given a vector (\(P_{1},P_{2},\ldots ,P_{\nu _{d}},\alpha _{1},\alpha _{2},\ldots ,\alpha _{\nu _{d}}\)) representing a feasible partition of \({\mathcal {J}}\) into the sets \({\mathcal {J}}_{1},{\mathcal {J}}_{2},\ldots ,{\mathcal {J}}_{\nu _{d}}\), one can compute the optimal starting time of each of the sets \({\mathcal {J}}_{1},{\mathcal {J}}_{2},\ldots ,{\mathcal {J}}_{\nu _{d}}\), and the minimal objective value, in \(O(\nu _{d}^{3.5})\) time.
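As an illustration of Lemma 6, a minimal sketch of this LP step in Python is given below; the use of scipy.optimize.linprog and the function name are our choices, not part of the original formulation, and any LP solver could be used instead.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp(delta, P, alpha):
    """Minimize Z subject to (3)-(5), given the distinct due-dates delta[i],
    the total processing times P[i] and the largest processing times alpha[i]
    of the sets J_1, ..., J_{nu_d} (all lists of length nu_d)."""
    k = len(delta)
    c = np.zeros(k + 1)          # variables: (S_1, ..., S_k, Z); minimize Z
    c[-1] = 1.0
    A, b = [], []
    for i in range(k):
        row = np.zeros(k + 1)    # (3): S_i >= S_{i-1} + P_{i-1}, S_0 = P_0 = 0
        row[i] = -1.0
        if i > 0:
            row[i - 1] = 1.0
        A.append(row)
        b.append(-P[i - 1] if i > 0 else 0.0)
        row = np.zeros(k + 1)    # (4): Z >= delta_i - S_i - alpha_i
        row[i], row[-1] = -1.0, -1.0
        A.append(row)
        b.append(alpha[i] - delta[i])
        row = np.zeros(k + 1)    # (5): Z >= S_i + P_i - delta_i
        row[i], row[-1] = 1.0, -1.0
        A.append(row)
        b.append(delta[i] - P[i])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), method="highs")
    return res.fun, res.x[:k]    # minimal max absolute lateness, start times
```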

It follows from Lemma 6 that we can solve our \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem, by finding all possible

\((P_{1},P_{2},\ldots ,P_{\nu _{d}},\alpha _{1},\alpha _{2},\ldots ,\alpha _{\nu _{d}})\) vectors, each representing at least a single feasible partition of \({\mathcal {J}}\) into the sets \({\mathcal {J}}_{1},{\mathcal {J}}_{2},\ldots ,{\mathcal {J}}_{\nu _{d}}\). Then, for each such vector, we can find the optimal objective value by solving the LP formulation. Following this process, we select as an optimal solution a schedule that corresponds to the vector which yields the minimum objective value among all the vectors.

Next, we present a state generation process that constructs all possible (\( P_{1},P_{2},\ldots ,P_{\nu _{d}},\alpha _{1},\alpha _{2},\ldots ,\alpha _{\nu _{d}}\) ) vectors representing feasible partitions of \({\mathcal {J}}\) into the sets \( {\mathcal {J}}_{1}\ ,{\mathcal {J}}_{2}\ ,\ldots ,{\mathcal {J}}_{\nu _{d}}\). We begin by re-indexing our jobs according to the longest processing time (LPT) rule such that \(p_{1}\ge p_{2}\ge \cdots \ge p_{n}\). Now, let state \((j,k_{1},\ldots ,k_{\nu _{d}},P_{1},\ldots ,P_{\nu _{d}}, \alpha _{1},\ldots ,\alpha _{\nu _{d}})\) represent a feasible partition of job set \( \{J_{1},\ldots ,J_{j}\}\) into the sets \({\mathcal {J}}_{1}\ ,{\mathcal {J}}_{2}\ ,\ldots , {\mathcal {J}}_{\nu _{d}}\), where \(k_{i}\le m_{i}\) represents the number of jobs assigned to \({\mathcal {J}}_{i}\) (\(i\in \{1,\ldots ,\nu _{d}\}\)). Let \( {\mathcal {L}}_{j}\) represent all possible states on job set \( \{J_{1},\ldots ,J_{j}\}\).

We initialize our state generation process by including a single state \( (0,k_{1}=0,\ldots ,k_{\nu _{d}}=0,P_{1}=0,\ldots ,P_{\nu _{d}}=0,\alpha _{1}=0,\ldots ,\alpha _{\nu _{d}}=0)\ \)in \({\mathcal {L}}_{0}\). Then, for \( j=1,\ldots ,n \), we construct \({\mathcal {L}}_{j}\) from \({\mathcal {L}}_{j-1}\) as follows: starting from each \((j-1,k_{1},\ldots ,k_{\nu _{d}},P_{1},\ldots ,P_{\nu _{d}},\alpha _{1},\ldots ,\alpha _{\nu _{d}})\in {\mathcal {L}}_{j-1}\), we include at most \(\nu _{d}\) states in \({\mathcal {L}}_{j}\) each representing a feasible assignment of \(J_{j}\) into one of the sets, \({\mathcal {J}}_{1}\ ,{\mathcal {J}} _{2}\ ,\ldots ,{\mathcal {J}}_{\nu _{d}}\). Accordingly, for \(r=1,\ldots ,\nu _{d}\), if \( k_{r}<m_{r}\) we assign job \(J_{j}\) to set \({\mathcal {J}}_{r}\) and therefore include the state \((j,k_{1}^{\prime },\ldots ,k_{\nu _{d}}^{\prime },P_{1}^{\prime },\ldots ,P_{\nu _{d}}^{\prime },\alpha _{1}^{\prime },\ldots ,\alpha _{\nu _{d}}^{\prime })\) in \({\mathcal {L}}_{j}\) with (i)\(\ k_{i}^{\prime }=k_{i}\), \(P_{i}^{\prime }=P_{i}\) and \(\alpha _{i}^{\prime }=\alpha _{i}\) for \(i\in \{1,\ldots ,\nu _{d}\}\diagdown \{r\}\); (ii) \( k_{r}^{\prime }=k_{r}+1\); (iii) \(P_{r}^{\prime }=P_{r}+p_{j}\); and (iv) \( \alpha _{r}^{\prime }=p_{j}\) if \(\alpha _{r}=0\) and \(\alpha _{r}^{\prime }=\alpha _{r}\) if \(\alpha _{r}>0\). At the end of the state generation process, set \({\mathcal {L}}_{n}\) includes the set of all possible state vectors. To conclude the above analysis, the following algorithm can be used to solve the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem:

Algorithm 1 (pseudo-code figure)
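A minimal Python sketch of the state-generation phase used by Algorithm 1 is given below (ours, for illustration only; the function name is hypothetical). Each resulting state can then be evaluated with the LP of Lemma 6, and the best objective value kept, mirroring Steps 2–3 of the algorithm.

```python
def generate_states(p, m):
    """Enumerate all (k_1..k_d, P_1..P_d, alpha_1..alpha_d) vectors that
    represent feasible partitions of the jobs into the sets J_1, ..., J_d,
    where m[i] is the number of positions with due-date delta_{i+1}."""
    d = len(m)
    p = sorted(p, reverse=True)                     # LPT re-indexing
    states = {((0,) * d, (0,) * d, (0,) * d)}       # (k, P, alpha) of L_0
    for pj in p:
        new_states = set()
        for ks, Ps, alphas in states:
            for r in range(d):
                if ks[r] < m[r]:                    # room left in set J_{r+1}
                    k2, P2, a2 = list(ks), list(Ps), list(alphas)
                    k2[r] += 1
                    P2[r] += pj
                    if a2[r] == 0:                  # first (hence longest) job
                        a2[r] = pj
                    new_states.add((tuple(k2), tuple(P2), tuple(a2)))
        states = new_states
    return states                                   # the state set L_n
```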

Theorem 2

Algorithm 1 solves the \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) problem in pseudo-polynomial time when the value of \(\nu _{d}\) is upper bounded by a constant.

Proof

The correctness of the algorithm follows from the discussion in this section. Step 1 requires a sorting operation, and therefore can be done in \( O(n\log n)\) time. The facts that (i) the value of \(P_{i}\) (\(i\in \{1,\ldots ,\nu _{d}\}\)) is upper bounded by \(P_{\Sigma }=\sum _{J_{j}\in {\mathcal {J}}}p_{j}\), and that (ii) there are O(n) different possible values for each \(\alpha _{i}\), implies that there are at most \(O((nP_{\Sigma })^{\nu _{d}})\) states in each \({\mathcal {L}}_{j}\). In Step 2, we construct at most \(\nu _{d}\) states in \({\mathcal {L}}_{j}\) from each state in \({\mathcal {L}}_{j-1}\), each of which requires \( O(\nu _{d})\) time. Thus, each iteration \(j\in \{1,\ldots ,n\}\) of Step 2 requires \(O((\nu _{d})^{2}(nP_{\Sigma })^{\nu _{d}})\) time, and the overall complexity of Step 2 is \(O((\nu _{d})^{2}n^{\nu _{d}+1}(P_{\Sigma })^{\nu _{d}})\). In Step 3, for each state in \({\mathcal {L}}_{n}\), we solve an LP formulation, which requires \(O(\nu _{d}^{3.5})\) time (see Lemma 6). The fact that we have \(O((nP_{\Sigma })^{\nu _{d}})\) states in each \({\mathcal {L}}_{j}\), implies that Step 3 can be done in \( O(\nu _{d}^{3.5}(nP_{\Sigma })^{\nu _{d}})\) time. In Step 4 we track back the optimal assignment of jobs into the job sets \({\mathcal {J}}_{1},{\mathcal {J}}_{2},\ldots ,{\mathcal {J}}_{\nu _{d}}\) out of Opt. This can easily be done in linear time, and therefore the time complexity of Algorithm 1 is \(O((\nu _{d})^{2}\max \{n,(\nu _{d})^{1.5}\}(n P_{\Sigma })^{\nu _{d}})\). If the value of \(\nu _{d}\) is upper bounded by a constant, this time complexity reduces to \(O(n^{\nu _{d}+1}(P_{\Sigma })^{\nu _{d}})\), which is pseudo-polynomial. \(\square \)

2.3 An FPT algorithm for the combined parameter \((\nu _{d}, \nu _{p})\)

In this section, we prove that the following result holds:

Theorem 3

The \(1\left| gdd\right| \max _{J_{j}\in {\mathcal {J}} }\{\left| L_{j}\right| \}\) problem is FPT wrt. the combined parameter \((\nu _{d},\nu _{p})\).

The proof of Theorem 3 is done by providing a Mixed Integer Linear Programming (MILP) formulation for the \(1\left| gdd\right| \max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}\) problem with \(O(\nu _{d}\nu _{p})\) integer variables. Theorem 3 then follows directly from the result of Lenstra (1983), who showed that solving an MILP is FPT wrt. the number of integer variables.

For ease of presentation, we represent the vector of n due-dates in a modified manner by two vectors of size \(\nu _{d}\): (\(\delta _{1},\delta _{2},\ldots ,\delta _{\nu _{d}}\)) and (\(m_{1},m_{2},\ldots ,m_{\nu _{d}}\)). The first includes the \(\nu _{d}\) distinct due-dates in the instance. In the second, \(m_{i}\) represents the number of positions in the sequence having a due-date of \(\delta _{i}\). We also represent the vector of n processing times in a modified manner by two vectors of size \(\nu _{p}\): (\( p_{1},p_{2},\ldots ,p_{\nu _{p}}\)) and (\(n_{1},n_{2},\ldots ,n_{\nu _{p}}\)), where the first includes the \(\nu _{p}\) distinct processing times in the instance, and \(n_{i}\) is the number of jobs having processing time of \(p_{i}\) for \( i=1,\ldots ,\nu _{p}\). Without loss of generality, we assume that the due-dates are numbered according to the EDD rule, such that \(\delta _{1}<\delta _{2}<\cdots <\delta _{\nu _{d}}\), and that the processing times are numbered according to the SPT rule, such that \(p_{1}<p_{2}<\cdots <p_{\nu _{p}}\).

Now, let \(x_{ij}\) be an integer decision variable representing the number of jobs with processing time \(p_{j}\) allocated to due-date \(\delta _{i}\) \((i\in \{1,\ldots ,\nu _{d}\}\), \(j\in \{1,\ldots ,\nu _{p}\}).\) Accordingly, we have the following set of constraints:

$$\begin{aligned} \Sigma _{j=1}^{\nu _{p}}x_{ij}=m_{i}\text { for }i=1,\ldots ,\nu _{d}\text {,} \end{aligned}$$
(6)

and

$$\begin{aligned} \Sigma _{i=1}^{\nu _{d}}x_{ij}=n_{j}\text { for }j=1,\ldots ,\nu _{p}\text {.} \end{aligned}$$
(7)

Now, let \(S_{i}\) \((i=1,\ldots ,\nu _{d}\)) be a continuous decision variable representing the start time of the jobs in set \({\mathcal {J}}_{i}\). Due to Lemma 4, the completion time of all jobs in set \({\mathcal {J}}_{i}\) is at time \(S_{i}+\Sigma _{j=1}^{\nu _{p}}p_{j}x_{ij}\). Therefore, we include the following set of constraints that ensures that the processing intervals of the sets do not overlap:

$$\begin{aligned} S_{i}\ge S_{i-1}+\Sigma _{j=1}^{\nu _{p}}p_{j}x_{(i-1)j}\text { for }i\in \{1,\ldots ,\nu _{d}\}, \end{aligned}$$
(8)

with \(S_{0}=0\) and \(x_{0j}=0\) for \(j=1,\ldots ,\nu _{p}\) by definition.

Let Z be a continuous decision variable representing the maximum absolute lateness, and let \(\alpha _{i}\) be a continuous decision variable representing the largest processing time among all processing times of the jobs in \({\mathcal {J}}_{i}\) (\(i\in \{1,\ldots ,\nu _{d}\}\)). Due to Lemma 5, and the fact that the first scheduled job in \({\mathcal {J}}_{i}\) has the maximum earliness value among all jobs in \({\mathcal {J}}_{i}\), we include the set of constraints that

$$\begin{aligned} Z\ge \delta _{i}-S_{i}-\alpha _{i}\text { for }i=1,\ldots ,\nu _{d}\text {.} \end{aligned}$$
(9)

Moreover, due to Lemmas 4 and 5, and the fact that the last scheduled job in \({\mathcal {J}}_{i}\) has the maximal tardiness value among all jobs in \({\mathcal {J}}_{i}\), we include the following set of constraints as well

$$\begin{aligned} Z\ge S_{i}+\Sigma _{j=1}^{\nu _{p}}p_{j}x_{ij}-\delta _{i}\text { for }i=1,\ldots ,\nu _{d}. \end{aligned}$$
(10)

To ensure that \(\alpha _{i}\) is indeed the largest processing time among all processing times of the jobs in \({\mathcal {J}}_{i}\), we define \(y_{ij}\) as a binary variable that is equal to 1 when \(x_{ij}\ge 1\) and is equal to 0 otherwise \((i\in \{1,\ldots ,\nu _{d}\}\), \(j\in \{1,\ldots ,\nu _{p}\})\). Then, we include the following set of constraints:

$$\begin{aligned} y_{ij}\le x_{ij}\text { for }i=1,\ldots ,\nu _{d}\text { and }j=1,\ldots ,\nu _{p} \text {;} \end{aligned}$$
(11)

and also the following set of constraints:

$$\begin{aligned} \alpha _{i}\le p_{j}+p_{\nu _{p}}\Sigma _{l=j+1}^{\nu _{p}}y_{il}\text { for }i=1,\ldots ,\nu _{d}\text { and }j=1,\ldots ,\nu _{p}. \end{aligned}$$
(12)

Now, consider the problem \({\mathcal {P}}\) of minimizing Z subject to the set of constraints in (6)–(12). It follows from the objective and the set of constraints in (9)–(10) that in an optimal solution for \({\mathcal {P}}\)

$$\begin{aligned} Z=\max _{i\in \{1,\ldots ,\nu _{d}\}}\left\{ \max \left\{ \delta _{i}-S_{i}-\alpha _{i},S_{i}+\Sigma _{j=1}^{\nu _{p}}p_{j}x_{ij}-\delta _{i}\right\} \right\} , \end{aligned}$$
(13)

which is exactly the value of the maximal absolute lateness of any solution that satisfies the properties in Lemmas 4 and 5 if indeed \(\alpha _{i}\) receives the largest processing time among all processing times of the jobs in \({\mathcal {J}}_{i}\). To prove that, we note that Z is a non-increasing function of \(\alpha _{i}\) for \(i=1,\ldots ,\nu _{d}\). Therefore, there exists an optimal solution for problem \({\mathcal {P}},\) where \(\alpha _{i}\) is the largest value satisfying the set of constraints in (12). As the right hand side of the set of constraints in (12) is an increasing function of the \(y_{il}\) variables, there exists an optimal solution in which \(y_{ij}=1\) if \(x_{ij}\ge 1\), and \(y_{ij}=0\) if \( x_{ij}=0.\)

Assume now that \(p_{q}\) is the largest processing time among all processing times of the jobs assigned to \({\mathcal {J}}_{i}\) (\(q\in \{1,\ldots ,\nu _{p}\}\)). It follows that \(y_{iq}=1\), and that \(y_{ij}=0\) for \(j=q+1,\ldots ,\nu _{p}\). Therefore, the right hand side of the constraint in (12) is at least \(p_{j}+p_{\nu _{p}}\ge p_{q}\) for any \(j\in \{1,\ldots ,q-1\}\), and equals \(p_{j}>p_{q}\) for any \(j\in \{q+1,\ldots ,\nu _{p}\}\). Thus, the set of constraints in (12) reduces to \(\alpha _{i}\le p_{q}\), and since the value of Z is a non-increasing function of \(\alpha _{i}\) for \(i=1,\ldots ,\nu _{d}\), there exists an optimal solution for \({\mathcal {P}}\) in which \( \alpha _{i}=p_{q}\).

The fact that solving \({\mathcal {P}}\) provides an optimal solution for the \( 1\left| gdd\right| \max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}\) problem, and that \({\mathcal {P}}\) includes \(O(\nu _{d}\nu _{p})\) integer variables, implies that we can use Lenstra’s (1983) algorithm to solve the \(1\left| gdd\right| \max _{J_{j}\in {\mathcal {J}}}\{\left| L_{j}\right| \}\) problem in FPT time wrt. the combined parameter \((\nu _{d},\nu _{p})\), and Theorem 3 follows.
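A minimal sketch of this MILP in Python is given below (ours, for illustration only). The PuLP modeling interface and the function name are our choices, the processing times are assumed to be sorted in SPT order, and constraint (12) is written in the form given above.

```python
import pulp

def build_milp(delta, m, p, n_counts):
    """MILP (6)-(12) for 1|gdd|max|L_j|; delta, m have length nu_d and
    p, n_counts have length nu_p, with p sorted increasingly (SPT)."""
    d, q = len(delta), len(p)
    M = p[-1]                                   # the constant p_{nu_p} in (12)
    prob = pulp.LpProblem("max_abs_lateness", pulp.LpMinimize)
    x = [[pulp.LpVariable(f"x_{i}_{j}", lowBound=0, cat="Integer")
          for j in range(q)] for i in range(d)]
    y = [[pulp.LpVariable(f"y_{i}_{j}", cat="Binary")
          for j in range(q)] for i in range(d)]
    S = [pulp.LpVariable(f"S_{i}", lowBound=0) for i in range(d)]
    alpha = [pulp.LpVariable(f"alpha_{i}", lowBound=0) for i in range(d)]
    Z = pulp.LpVariable("Z", lowBound=0)
    prob += Z                                   # objective: minimize Z
    for i in range(d):
        prob += pulp.lpSum(x[i]) == m[i]                                   # (6)
        prev_end = (S[i - 1] + pulp.lpSum(p[j] * x[i - 1][j] for j in range(q))
                    if i > 0 else 0)
        prob += S[i] >= prev_end                                           # (8)
        prob += Z >= delta[i] - S[i] - alpha[i]                            # (9)
        prob += Z >= S[i] + pulp.lpSum(p[j] * x[i][j] for j in range(q)) - delta[i]  # (10)
        for j in range(q):
            prob += y[i][j] <= x[i][j]                                     # (11)
            prob += alpha[i] <= p[j] + M * pulp.lpSum(y[i][j + 1:])        # (12)
    for j in range(q):
        prob += pulp.lpSum(x[i][j] for i in range(d)) == n_counts[j]       # (7)
    return prob
```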

3 Maximal weighted number of JIT Jobs

Gerstl and Mosheiov (2020) prove that the following theorem holds:

Theorem 4

(Gerstl & Mosheiov, 2020) The \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem is strongly \(\mathcal {NP}\)-hard when the number of due-dates is arbitrary.

Consider next the case where all due-dates are common, i.e., \(\delta _{j}=\delta \) for \(j=1,\ldots ,n\). For this case, it is obvious that at most a single job can be a JIT job, i.e., that for any feasible schedule we have that \(\left| {\mathcal {E}}\right| \le 1\). Now, let \({\mathcal {J}} ^{\prime }=\{J_{j}\in {\mathcal {J}}\left| p_{j}\le \delta \right. \}\) be the subset of jobs which can be scheduled in a JIT mode. It follows that it is optimal to schedule the job of maximal weight among all jobs in \({\mathcal {J}}^{\prime }\) in a JIT mode. Therefore, the following corollary holds:

Corollary 1

The \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}} }w_{j} \) problem is solvable in O(n) time when \(\nu _{d}=1\).
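A sketch of this O(n) rule in Python (ours; the function name is hypothetical) is given below.

```python
def single_due_date_gain(p, w, delta):
    """Corollary 1: with a common generalized due-date delta, at most one
    job can be JIT, so schedule the heaviest job whose processing time
    does not exceed delta to complete exactly at delta."""
    eligible = [wj for pj, wj in zip(p, w) if pj <= delta]
    return max(eligible, default=0)   # gain is 0 if no job fits
```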

In the following three subsections, we complement the results in Theorem 4 and Corollary 1 by showing that (i) the \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem is \(\mathcal {NP}\)-hard even when we have only two distinct due-dates in the instance; (ii) the \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) problem is solvable in pseudo-polynomial time when the value of \(\nu _{d}\) is upper bounded by a constant; and (iii) the \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem is FPT wrt. (\(\nu _{d},\nu _{p}\)). We leave the question of whether the \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) problem is FPT wrt. (\(\nu _{d},\nu _{p}\)) open.

3.1 \(\mathcal {NP}\)-hardness for the two distinct due-dates case

In this subsection, we show that the \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem is \(\mathcal {NP}\)-hard even when \(\nu _{d}=2\). The reduction is done from the ordinary \(\mathcal {NP}\)-hard Partition problem, which is defined below.

Definition 2

Partition: Given a set of t positive integers \({\mathcal {A}} =\{a_{1},a_{2},\ldots ,a_{t}\}\) with \(\Sigma _{j=1}^{t}a_{j}=2B\) and \(0<a_{j}<B\) . Can \({\mathcal {A}}\) be partitioned into two subsets \({\mathcal {A}}_{1}\) and \( {\mathcal {A}}_{2}\) such that \(\Sigma _{a_{j}\in {\mathcal {A}}_{i}}a_{j}=B\) for \( i=1,2\)?

Theorem 5

The \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem is \(\mathcal {NP}\)-hard even when there are only two distinct due-dates (i.e., even when \(\nu _{d}=2\)).

Proof

The proof is based on a reduction from the \(\mathcal {NP}\)-hard Partition problem. Given an instance for the Partition problem, we construct the following instance for our scheduling problem with a set \( \mathcal {J=\{}J_{1},\ldots ,J_{n}\mathcal {\}}\) of \(n=t\) jobs. The processing times are

$$\begin{aligned} p_{j}=a_{j} \end{aligned}$$
(14)

for \(j=1,\ldots ,t\). The due-dates are

$$\begin{aligned} \delta _{j}=\left\{ \begin{array}{ccc} B &{} \text {for} &{} j=1,\ldots ,t-1 \\ 2B &{} \text {for} &{} j=t \end{array} \right. . \end{aligned}$$
(15)

In the decision version of the problem, we ask whether there is a feasible schedule with \(\left| {\mathcal {E}}\right| =2\). Note that there are only two distinct due-dates in the constructed instance of the \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem. Therefore, \( \left| {\mathcal {E}}\right| \le 2\) in any feasible schedule.
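For illustration, consider the Partition instance \({\mathcal {A}}=\{3,5,4,2,2\}\) with \(t=5\) and \(B=8\) (an example of ours). The construction gives five jobs with

$$\begin{aligned} p=(3,5,4,2,2)\qquad \text {and}\qquad \delta =(8,8,8,8,16), \end{aligned}$$

and the schedule induced by the partition \(\{3,5\}\), \(\{4,2,2\}\) yields completion times \(3,8,12,14,16\); the jobs in positions 2 and 5 complete exactly at their due-dates, so \(\left| {\mathcal {E}}\right| =2\).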

Given a solution that provides a YES answer for the Partition problem, we construct the following solution S for our \( 1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem. We set \({\mathcal {J}}_{i}=\{J_{j}\left| a_{j}\in {\mathcal {A}}_{i}\right. \}\) for \(i=1,2\). Then, we schedule all jobs in \({\mathcal {J}}_{1}\) before any job in \({\mathcal {J}}_{2}\) with no machine idle times. The processing order within each \({\mathcal {J}}_{i}\) \((i=1,2)\) is arbitrary. As \(\Sigma _{J_{j}\in {\mathcal {J}}_{i}}p_{j}=\Sigma _{a_{j}\in {\mathcal {A}}_{i}}a_{j}=B\), the completion time of the last job in \({\mathcal {J}}_{1}\) is exactly at time B. Moreover, the fact that \(\left| {\mathcal {J}}_{1}\right| <n\) implies that the last job to be scheduled in \({\mathcal {J}}_{1}\) is assigned a due-date of B (see eq. (15)). Therefore, the last scheduled job in \({\mathcal {J}}_{1}\) is completed in a JIT mode. Moreover, as we schedule all jobs with no idle times, the last job to be scheduled in \( {\mathcal {J}}_{2}\) is completed at time 2B, which is also the due-date of the last scheduled job (see eq. (15)). Accordingly, in S we have that \(\left| {\mathcal {E}}\right| =2\), and therefore S provides a YES answer for the constructed instance of our \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem.

Consider next a solution S that provides a YES answer for the constructed instance of our \(1\left| gdd\right| \left| \mathcal {E }\right| \) problem. This implies that the last scheduled job is completed exactly at time 2B. Thus, in S, there are no machine idle times. The fact that \(\left| {\mathcal {E}}\right| =2\) in S implies that there is a job that completes exactly at time B in schedule S. Let \({\mathcal {J}} _{1}\) be the set of jobs that are completed no later than time B in S, and let \({\mathcal {J}}_{2}\) be the set of all other jobs. As there are no machine idle times in S, we have that \(\Sigma _{J_{j}\in {\mathcal {J}} _{i}}p_{j}=B\) for \(i=1,2\). Therefore, by setting \({\mathcal {A}} _{i}=\{a_{j}\left| J_{j}\in {\mathcal {J}}_{i}\right. \}\) for \(i=1,2\), we have a solution that provides a YES answer for the Partition problem. \(\square \)

3.2 A pseudo-polynomial time algorithm for a constant number of different due-dates

Let \({\mathcal {L}}\) be the set of all \(l=(l_{1},l_{2},\ldots ,l_{r})\) vectors satisfying that \((i)\ l_{i}\in \{1,\ldots ,n\}\); and \((ii)\ \delta _{l_{i-1}}<\delta _{l_{i}}\) for \(i=1,\ldots ,r\), with \(l_{0}=0\) and \(\delta _{0}=0\) by definition (note that \(r\le \nu _{d}\)). There are \(O(n^{\nu _{d}})\) such vectors, each of which includes a subset of positions in the job sequence having different due-dates. It implies that any feasible \({\mathcal {E}}\) set includes jobs that are scheduled in the positions of some \(l\in {\mathcal {L}}\). Therefore, we solve the \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) problem by partitioning it into a set of \(O(n^{\nu _{d}})\) subproblems, each corresponding to a different \( (l_{1},l_{2},\ldots ,l_{r})\in {\mathcal {L}}\). Let P(l) be the subproblem corresponding to vector l. Our \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) problem reduces to finding the vector \(l\in {\mathcal {L}}\) with the best feasible solution (if such a solution exists) satisfying that the set of JIT jobs is scheduled in positions \( l_{1},l_{2},\ldots ,l_{r}\).

Given a subproblem P(l), corresponding to vector \((l_{1},l_{2}, \ldots ,l_{r}) \in {\mathcal {L}}\), we let \(n_{i}=l_{i}-l_{i-1}\) for \(i=1,\ldots ,r+1,\) where \( l_{r+1}=n\) by definition. Let S be a feasible solution for the corresponding subproblem with \({\mathcal {A}}_{i}\) being the set of \(n_{i}\) jobs scheduled in positions \(\{l_{i-1}+1,\ldots ,l_{i}\}\) for \(i=1,\ldots ,r+1\). Note that all jobs in each \({\mathcal {A}}_{i}\) (\(i=1,\ldots ,r\)) share the same due-date of \(\delta _{l_{i}}\). Accordingly, only a single job in each \({\mathcal {A}}_{i}\) (\(i=1,\ldots ,r\)) can be completed at the common due-date, \(\delta _{l_{i}} \). It follows that \(\Sigma _{J_{j}\in {\mathcal {A}}_{i}}p_{j}\le \delta _{l_{i}}-\delta _{l_{i-1}}\) for \(i=1,\ldots ,r\), as otherwise we cannot complete the last job in \({\mathcal {A}}_{i}\) at time \(\delta _{l_{i}}\), given that the last job in \({\mathcal {A}}_{i-1}\) is completed at time \(\delta _{l_{i-1}}\). The following lemma obviously holds:

Lemma 7

Given a feasible solution S for problem P(l) with \(n_{i}\) jobs in each set \({\mathcal {A}}_{i}\) and with \(\Sigma _{J_{j}\in {\mathcal {A}} _{i}}p_{j}\le \delta _{l_{i}}-\delta _{l_{i-1}}\) for \(i=1,\ldots ,r\), it is optimal to schedule all jobs in \({\mathcal {A}}_{i}\) during the time interval \( (\delta _{l_{i}}-\Sigma _{J_{j}\in {\mathcal {A}}_{i}}p_{j},\delta _{l_{i}}]\), with the last job, which is the only job in \({\mathcal {A}}_{i}\) scheduled in a JIT mode, being the one of maximal weight among all jobs in \( {\mathcal {A}}_{i}\).

We solve each subproblem P(l) by using a dynamic programming procedure, which starts by re-indexing the jobs in a non-increasing order of weight, such that \(w_{1}\ge w_{2}\ge \cdots \ge w_{n}\). Now, let \( F_{j}(x_{1},\ldots ,x_{r+1},q_{1},\ldots ,q_{r+1})\) be the maximal gain that can be obtained over all feasible partial schedules of job set \({\mathcal {J}} _{j}=\{J_{1},\ldots ,J_{j}\}\) with \(q_{i}\) jobs assigned to set \({\mathcal {A}} _{i}\) and \(x_{i}=\Sigma _{J_{j}\in {\mathcal {A}}_{i}}p_{j}\) for \(i=1,\ldots ,r+1\). It follows from the feasibility of the partial schedule that the following conditions hold:

Condition 1

\(x_{i}\le \delta _{l_{i}}-\delta _{l_{i-1}}\) for \(i=1,\ldots ,r\), as otherwise we cannot complete the last job in \({\mathcal {A}}_{i}\) at time \( \delta _{l_{i}}\), given that the last job in \({\mathcal {A}}_{i-1}\) is completed at time \(\delta _{l_{i-1}}\).

Condition 2

\(x_{r+1}=\Sigma _{l=1}^{j}p_{l}-\Sigma _{i=1}^{r}x_{i}\), and \( q_{r+1}=j-\Sigma _{i=1}^{r}q_{i}\) as any job has to be assigned to one of the \({\mathcal {A}}_{i}\) sets (\(i=1,\ldots ,r+1\)).

Condition 3

For any \(j=1,\ldots ,n\) and \(i=1,\ldots ,r+1\), \(q_{i}\le n_{i}\) and \( q_{i}+(n-j)\ge n_{i}\); as otherwise we cannot construct a complete solution with \(n_{i}\) jobs in \({\mathcal {A}}_{i}\) for \(i=1,\ldots ,r\).

Based on Lemma 7 and the fact that the jobs are re-indexed such that \(w_{1}\ge w_{2}\ge \cdots \ge w_{n}\), we can compute \(F_{j}(x_{1},\ldots ,x_{r+1},q_{1},\ldots ,q_{r+1})\) for \(j=1,\ldots ,n\) and for any set of \(x_{i}\) and \(q_{i}\) values satisfying Conditions 1–3 by using the following recursion:

$$\begin{aligned}&F_{j}(x_{1},\ldots ,x_{r+1},q_{1},\ldots ,q_{r+1})\nonumber \\&\quad =\underset{i=1,\ldots ,r+1}{\max } {\left\{ \begin{array}{ll} F_{j-1}(x_{1},\ldots ,x_{i}-p_{j},\ldots ,x_{r+1},q_{1},\ldots ,q_{i}-1,\ldots ,q_{r+1})+w_{j} &{} \text {if }i<r+1\text { and }q_{i}=1, \\ F_{j-1}(x_{1},\ldots ,x_{i}-p_{j},\ldots ,x_{r+1},q_{1},\ldots ,q_{i}-1,\ldots ,q_{r+1}) &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$
(16)

with the initial condition that

$$\begin{aligned}&F_{0}(x_{1},\ldots ,x_{r+1},q_{1},\ldots ,q_{r+1})\nonumber \\&\quad =\left\{ \begin{array}{ccc} 0 &{} \text {if} &{} x_{i}=q_{i}=0\text { for }i=1,\ldots ,r+1 \\ -\infty &{} \text {otherwise} &{} \end{array} \right. \end{aligned}$$
(17)

At the end of the computing process, the optimal solution for P(l) is given by

$$\begin{aligned}&F^{*}(l)=\max \{F_{n}(x_{1},\ldots ,x_{r+1},n_{1},\ldots ,n_{r+1})\nonumber \\&\quad \mid x_{i}\le \delta _{l_{i}}-\delta _{l_{i-1}}\text { for }i=1,\ldots ,r\text { and }\nonumber \\&\quad \Sigma _{i=1}^{r+1}x_{i}=\Sigma _{l=1}^{n}p_{l}\}. \end{aligned}$$
(18)

To conclude, we can use the following algorithm to solve our \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) problem:

Algorithm 2 (pseudo-code figure)
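A minimal dictionary-based Python sketch of the dynamic program (16)–(18) for a single vector l, as used within Step 2 of Algorithm 2, is given below (ours, for illustration only; the function name is hypothetical, and the jobs are assumed to be pre-sorted so that \(w_{1}\ge w_{2}\ge \cdots \ge w_{n}\)).

```python
def solve_subproblem(p, w, due, l):
    """DP of Section 3.2 for one vector l = (l_1,...,l_r), with due[i] the
    due-date of the (i+1)-th JIT position (0-indexed, increasing) and the
    due-date of the dummy position l_0 taken as 0.  Returns F*(l), or -inf
    if the subproblem P(l) is infeasible."""
    n, r = len(p), len(l)
    cap = [due[0]] + [due[i] - due[i - 1] for i in range(1, r)]     # Condition 1
    size = [l[0]] + [l[i] - l[i - 1] for i in range(1, r)]          # n_1,...,n_r
    states = {((0,) * r, (0,) * r): 0}           # (x_1..x_r, q_1..q_r) -> gain
    for j in range(n):
        new_states = {}
        for (xs, qs), val in states.items():
            for i in range(r + 1):
                if i < r:                        # place job j in block A_{i+1}
                    if qs[i] == size[i] or xs[i] + p[j] > cap[i]:
                        continue
                    gain = w[j] if qs[i] == 0 else 0   # first job of a block is JIT
                    key = (xs[:i] + (xs[i] + p[j],) + xs[i + 1:],
                           qs[:i] + (qs[i] + 1,) + qs[i + 1:])
                else:                            # place job j in the tail block
                    gain, key = 0, (xs, qs)
                best = new_states.get(key, float("-inf"))
                new_states[key] = max(best, val + gain)
        states = new_states
    target = tuple(size)
    best = [v for (xs, qs), v in states.items() if qs == target]
    return max(best, default=float("-inf"))
```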

Theorem 6

Algorithm 2 solves the \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) problem in \(O(\nu _{d}n^{2\nu _{d}+1}\Pi _{i=1}^{\nu _{d}}\delta _{l_{i}})\) time.

Proof

The fact that Algorithm 2 provides the optimal solution follows from the discussion in this section. Step 1 requires \(O(n^{\nu _{d}})\) time, which is the number of possible l vectors. In Step 2, for any \( l\in {\mathcal {L}}\), we compute \(F_{j}(x_{1},\ldots ,x_{r+1},q_{1},\ldots , q_{r+1})\) by (16). The fact that \(r=O(\nu _{d})\), that \(\delta _{l_{i}}-\delta _{l_{i-1}}=O(\delta _{l_{i}})\) and that \(q_{i}\le n_{i}=O(n) \), implies that we compute \(O(n^{\nu _{d}}\Pi _{i=1}^{\nu _{d}}\delta _{l_{i}})\) different values of \( F_{j}(x_{1},\ldots ,x_{r+1},q_{1},\ldots , q_{r+1})\) at any stage \(j\in \{1,\ldots ,n\}\), and \(O(n^{\nu _{d}+1}\Pi _{i=1}^{\nu _{d}}\delta _{l_{i}})\) values of \( F_{j}(x_{1},x_{2},\ldots ,x_{r+1},q_{1},q_{2},\ldots ,q_{r+1})\) in total. As the computation of each \(F_{j}(x_{1},\ldots ,x_{r+1},q_{1},\ldots ,q_{r+1})\) by (16) requires \(O(\nu _{d})\) time, the time required in Step 2 to compute all \(F_{j}(x_{1},\ldots ,x_{r+1},q_{1},\ldots ,q_{r+1})\) values for a given \( l\in {\mathcal {L}}\) is \(O(\nu _{d}n^{\nu _{d}+1}\Pi _{i=1}^{\nu _{d}}\delta _{l_{i}})\). As calculating \(F^{*}(l)\) by (18) can be done in \( O(n^{\nu _{d}}\Pi _{i=1}^{\nu _{d}}\delta _{l_{i}})\) time, applying Step 2 for a given l vector requires \(O(\nu _{d}n^{\nu _{d}+1}\Pi _{i=1}^{\nu _{d}}\delta _{l_{i}})\) time. The fact that we repeat Step 2, for any \(l\in {\mathcal {L}}\), and that \(\left| {\mathcal {L}}\right| =O(n^{\nu _{d}})\), implies that Step 2 requires \(O(\nu _{d}n^{2\nu _{d}+1}\Pi _{i=1}^{\nu _{d}}\delta _{l_{i}})\) time. This time complexity reduces to \(O(n^{\nu _{d}+1}\Pi _{i=1}^{\nu _{d}}\delta _{l_{i}})\) when the value of \(\nu _{d}\) is upper bounded by a constant. The theorem now follows from the fact that tracking the decisions that lead to the optimal solution value in Step 3 requires only linear time. \(\square \)

3.3 An FPT algorithm for the \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem wrt. the combined parameter \((\nu _{d},\nu _{p})\)

In this section, we prove that the following result holds:

Theorem 7

The \(1\left| gdd\right| \left| {\mathcal {E}} \right| \) problem is FPT wrt. the combined parameter \((\nu _{d},\nu _{p})\).

The proof of Theorem 7 is based on breaking down the \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem into a set of \( O(2^{\nu _{d}})\) subproblems and providing an MILP formulation with \( O(\nu _{d}\nu _{p})\) integer variables for each of the subproblems. Theorem 7 then follows directly from the result of Lenstra (1983), which shows that solving an MILP is FPT wrt. the number of integer variables.

For ease of presentation, we represent the vector of n due-dates in a modified manner by two vectors of size \(\nu _{d}\): (\(\delta _{1},\delta _{2},..,\delta _{\nu _{d}}\)) and (\(m_{1},m_{2},..,m_{\nu _{d}}\)). The first includes the \(\nu _{d}\) distinct due-dates in the instance. In the second, \( m_{i}\) represents the number of positions in the sequence having a due-date not greater than \(\delta _{i}\). We order the due-dates in the first vector according to the EDD rule, such that \(\delta _{1}<\delta _{2}<\cdots <\delta _{\nu _{d}}\). By definition, we also have that \( m_{1}<m_{2}<\cdots <m_{\nu _{d}}\) with \(m_{\nu _{d}}=n\). We also represent the vector of n processing times in a modified manner by two vectors of size \( \nu _{p}\): (\(p_{1},p_{2},..,p_{\nu _{p}}\)) and (\(n_{1},n_{2},..,n_{\nu _{p}}\)), where the first includes the \(\nu _{p}\) distinct processing times in the instance, and \(n_{i}\) is the number of jobs having processing time of \(p_{i}\) for \(i=1,\ldots ,\nu _{p}\).

Given a feasible schedule, we say that due-date \(\delta _{i}\) is active if there exists a job completed exactly at time \(\delta _{i}\). Now, let \( \Delta \) be the set of all subsets of \(\{\delta _{1},\delta _{2},\ldots ,\delta _{\nu _{d}}\}\). Note that \(\left| \Delta \right| =O(2^{\nu _{d}}).\) We partition our \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem into a set of \(O(2^{\nu _{d}})\) subproblems, each corresponding to a specific set of distinct due-dates \( \delta \in \Delta \). In each such subproblem, we aim to determine whether there exists a feasible solution with all due-dates in \(\delta \) being active. Let \( P(\delta )\) be the subproblem that corresponds to set \(\delta \). Our \( 1\left| gdd\right| \left| {\mathcal {E}}\right| \) problem reduces to finding the set \(\delta \in \Delta \) of maximal cardinality for which there is a feasible solution for \(P(\delta )\).

Given \(\delta \in \Delta \), we next show that each \(P(\delta )\) problem can be represented as an MILP with only \(O(\nu _{d}\nu _{p})\) integer variables. Let \(\delta =(\delta _{[1]},\delta _{[2]},\ldots ,\delta _{[k]})\) (note that \(k\le \nu _{d}\)). We define \( x_{i} \) as a positive integer variable that represents the position in the sequence of the job completed at \(\delta _{[i]}\) for \(i=1,\ldots ,k\). As there are exactly \(m_{[i]}\) positions with due-date not larger than \(\delta _{[i]}\), we include the following set of constraints for \(i=1,\ldots ,k\):

$$\begin{aligned} m_{[i-1]}+1\le x_{i}\le m_{[i]}\text {.} \end{aligned}$$

We also define \(y_{ij}\) as a nonnegative integer variable which represents the number of jobs having processing time of \(p_{j}\) that are assigned to positions \(x_{i-1}+1,\ldots ,x_{i}\) in the sequence. Accordingly, for each \( i=1,\ldots ,k\) we include the constraint that

$$\begin{aligned} \Sigma _{j=1}^{\nu _{p}}y_{ij}=x_{i}-x_{i-1}\text {,} \end{aligned}$$

with \(x_{0}=0\) by definition. To make sure that \(\delta _{[i]}\) is indeed an active due-date, we also include the following set of constraints for \(i=1,\ldots ,k\):

$$\begin{aligned} \Sigma _{j=1}^{\nu _{p}}p_{j}y_{ij}\le \delta _{[i]}-\delta _{[i-1]}\text {,} \end{aligned}$$

with \(\delta _{[0]}=0\) by definition.

Finally, to ensure that we do not assign more than \(n_{j}\) jobs with processing time \(p_{j}\) to positions \(1,\ldots ,x_{k}\), we include the following set of constraints for \(j=1,\ldots ,\nu _{p}\):

$$\begin{aligned} \Sigma _{i=1}^{k}y_{ij}\le n_{j}\text {.} \end{aligned}$$

As \(k\le \nu _{d},\) the above formulation includes \(O(\nu _{d}+\nu _{d}\nu _{p})=O(\nu _{d}\nu _{p})\) integer variables in each subproblem \(P(\delta )\).
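A minimal Python sketch of this feasibility check is given below (ours, for illustration only; the PuLP interface and the function name are our choices, and the constraint families are written as reconstructed above).

```python
import pulp

def subproblem_feasible(active, m, p, n_counts):
    """Feasibility MILP for P(delta) of Section 3.3.  active[i] is the
    active due-date delta_[i+1] (0-indexed, increasing), m[i] is the number
    of positions with due-date not larger than active[i], and p, n_counts
    give the distinct processing times and their multiplicities."""
    k, q = len(active), len(p)
    prob = pulp.LpProblem("P_delta", pulp.LpMinimize)
    prob += pulp.lpSum([])                    # dummy objective: feasibility only
    x = [pulp.LpVariable(f"x_{i}", lowBound=(m[i - 1] + 1 if i > 0 else 1),
                         upBound=m[i], cat="Integer") for i in range(k)]
    y = [[pulp.LpVariable(f"y_{i}_{j}", lowBound=0, cat="Integer")
          for j in range(q)] for i in range(k)]
    for i in range(k):
        x_prev = x[i - 1] if i > 0 else 0
        d_prev = active[i - 1] if i > 0 else 0
        prob += pulp.lpSum(y[i]) == x[i] - x_prev                 # block sizes
        prob += pulp.lpSum(p[j] * y[i][j] for j in range(q)) <= active[i] - d_prev
    for j in range(q):
        prob += pulp.lpSum(y[i][j] for i in range(k)) <= n_counts[j]
    return pulp.LpStatus[prob.solve()] == "Optimal"
```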

4 Summary and future research

Many scheduling problems with a due-date related objective function are \(\mathcal {NP}\)-hard when the number of different due-dates in the instance, \( \nu _{d}\), is arbitrary. However, in many real-life problems the number of due-dates is a bounded parameter for various reasons, such as high shipment costs and work agreements. Therefore, there is great interest in studying the complexity of such \(\mathcal {NP}\)-hard problems with respect to the number of due-dates in the instance. We consider two such single-machine scheduling problems with generalized due-dates. The first, denoted by \( 1\left| gdd\right| \max \{\left| L_{j}\right| \}\), focuses on minimizing the maximal absolute lateness, and the second, denoted by \( 1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\), consists of maximizing the weighted number of jobs completed in a JIT mode (i.e., exactly at their due-date). Both problems are known to be strongly \(\mathcal { NP}\)-hard for arbitrary values of \(\nu _{d}\). We show that both problems are solvable in polynomial time when \(\nu _{d}=1\). Moreover, we show that problems \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) and \(1\left| gdd\right| \left| {\mathcal {E}}\right| \) are \( \mathcal {NP}\)-hard when \(\nu _{d}=2\). We complement these two results by showing that problems \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) and \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) are solvable in pseudo-polynomial time when the value of \( \nu _{d}\) is bounded by a constant.

The fact that problems \(1\left| gdd\right| \max \{\left| L_{j}\right| \}\) and \(1\left| gdd\right| \left| {\mathcal {E}} \right| \) are \(\mathcal {NP}\)-hard even if \(\nu _{d}=2\) rules out the possibility of constructing an FPT algorithm for either problem wrt. \(\nu _{d}\), unless \({\mathcal {P}}=\mathcal {NP}\). We show, however, that both problems are FPT wrt. the combined parameter \((\nu _{d},\nu _{p})\), where \(\nu _{p}\) is the number of different processing times in the instance. We leave open the question whether the \(1\left| gdd\right| \Sigma _{J_{j}\in {\mathcal {E}}}w_{j}\) problem is FPT when we combine parameters \(\nu _{d}\) and \(\nu _{p}\).

In future research, we aim to study the complexity status of other \(\mathcal { NP}\)-hard scheduling problems with a due-date related objective function wrt. \(\nu _{d}\), hoping to provide efficient algorithms to solve practical instances with limited \(\nu _{d}\) values.