Minimizing the waiting time for a one-way shuttle service

Journal of Scheduling

Abstract

Consider a terminal in which users arrive continuously over a finite period of time at a variable rate known in advance. A fleet of shuttles has to carry them over a fixed trip. What is the shuttle schedule that minimizes their waiting time? This is the question addressed in the present paper. We consider several versions that differ according to whether the shuttles come back to the terminal after their trip or not, and according to the objective function (maximum or average of the waiting times). We propose efficient algorithms with proven performance guarantees for almost all versions, and we completely solve the case where all users are present in the terminal from the beginning, a result that is already of interest in its own right. The techniques used are of various types (convex optimization, shortest paths, ...). The paper ends with numerical experiments showing that most of our algorithms also behave well in practice.

References

  • Barbosa, L. C., & Friedman, M. (1978). Deterministic inventory lot size models—A general root law. Management Science, 24(8), 819–826.

  • Barrena, E., Canca, D., Coelho, L., & Laporte, G. (2014). Exact formulations and algorithm for the train timetabling problem with dynamic demand. Computers & Operations Research, 44, 66–74.

  • Cacchiani, V., Caprara, A., & Toth, P. (2008). A column generation approach to train timetabling on a corridor. 4OR: A Quarterly Journal of Operations Research, 6(2), 125–142.

  • Cacchiani, V., Caprara, A., & Toth, P. (2010). Non-cyclic train timetabling and comparability graphs. Operations Research Letters, 38(3), 179–184.

  • Cai, X., Goh, C. J., & Mees, A. (1998). Greedy heuristics for rapid scheduling of trains on a single track. IIE Transactions, 30(5), 481–493.

  • Caprara, A., Fischetti, M., & Toth, P. (2002). Modeling and solving the train timetabling problem. Operations Research, 50(5), 851–861.

  • Cordone, R., & Redaelli, F. (2011). Optimizing the demand captured by a railway system with a regular timetable. Transportation Research Part B: Methodological, 45(2), 430–446.

  • Diewert, W. E. (1981). Alternative characterizations of six kinds of quasiconcavity in the nondifferentiable case with applications to nonsmooth programming. In S. Schaible & W. T. Ziemba (Eds.), Generalized concavity in optimization and economics (pp. 51–93). New York: Academic Press.

  • Dooly, D. R., Goldman, S. A., & Scott, S. D. (1998). TCP dynamic acknowledgment delay (extended abstract): Theory and practice. In Proceedings of the thirtieth annual ACM symposium on Theory of Computing (pp. 389–398). ACM.

  • Ilani, H., Shufan, E., Grinshpoun, T., Belulu, A., & Fainberg, A. (2014). A reduction approach to the two-campus transport problem. Journal of Scheduling, 17(6), 587–599.

  • Ingolotti, L., Lova, A., Barber, F., Tormos, P., Salido, M. A., & Abril, M. (2006). New heuristics to solve the CSOP railway timetabling problem. In International conference on industrial, engineering and other applications of applied intelligent systems (pp. 400–409). Springer.

  • Kroon, L., & Peeters, L. (2003). A variable trip time model for cyclic railway timetabling. Transportation Science, 37(2), 198–212.

  • Kroon, L., Huisman, D., Abbink, E., Fioole, P.-J., Fischetti, M., Maróti, G., et al. (2009). The new Dutch timetable: The OR revolution. Interfaces, 39(1), 6–17.

  • Lehoux-Lebacque, V., Brauner, N., Finke, G., & Rapine, C. (2007). Scheduling chemical experiments. In 37th international conference on computers and industrial engineering, (CIE37).

  • Liebchen, C. (2003). Finding short integral cycle bases for cyclic timetabling. In European symposium on algorithms (pp. 715–726). Springer.

  • Liebchen, C., & Möhring, R. (2002). A case study in periodic timetabling. Electronic Notes in Theoretical Computer Science, 66(6), 18–31.

  • Little, J. D. C., & Graves, S. C. (2008). Little’s law. In Building intuition (pp. 81–100). Springer.

  • Nachtigall, K., & Voget, S. (1996). A genetic algorithm approach to periodic railway synchronization. Computers & Operations Research, 23(5), 453–463.

  • Serafini, P., & Ukovich, W. (1989). A mathematical model for periodic scheduling problems. SIAM Journal on Discrete Mathematics, 2(4), 550–581.

  • Voorhoeve, M. (1993). Rail scheduling with discrete sets. Unpublished report, Eindhoven University of Technology, The Netherlands.

Acknowledgements

The authors thank the reviewers for their comments, which helped improve the paper. They are also grateful to Eurotunnel for its explanations about the operation of the tunnel and the terminals, and for the data it provided.

Corresponding author

Correspondence to Frédéric Meunier.

Additional information

The authors are grateful to the company Eurotunnel for its financial support to this work.

Appendix

1.1 Proofs of the lemmas of Section 2.2

Proof of Lemma 1

Since \(D(\cdot )\) is nondecreasing, the first inequality is a direct consequence of the inequality \(\tau (y)\geqslant \bar{\tau }(y)\), which is obvious from the definition. To prove the second inequality, consider \((t_n)\) a nonincreasing sequence converging toward \(\bar{\tau }(y)\) such that \(D(t_n)\geqslant y\) for all n. By the upper semicontinuity of \(D(\cdot )\), we get then \(D(\bar{\tau }(y))\geqslant y\).\(\square \)
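The two time functions can be made concrete with a small numerical sketch (illustration only, not from the paper; the definitions \(\tau (y)=\inf \{t:D(t)>y\}\) and \(\bar{\tau }(y)=\inf \{t:D(t)\geqslant y\}\) and the step demand below are assumptions inferred from the statements of Lemmas 1–3):

```python
import numpy as np

# Hypothetical cumulative demand D(t) = floor(t): nondecreasing and
# right-continuous, hence upper semicontinuous. The two generalized
# inverses assumed here are
#   tau(y)     = inf{t : D(t) > y}
#   tau_bar(y) = inf{t : D(t) >= y}
T = 10.0
grid = np.linspace(0.0, T, 100001)
Dg = np.floor(grid)  # users arrive in unit batches at integer times

def tau(y):
    mask = Dg > y
    return grid[mask][0] if mask.any() else T

def tau_bar(y):
    mask = Dg >= y
    return grid[mask][0] if mask.any() else T

# At a jump value of D the two inverses differ, but Lemma 1 still holds:
y = 2.0
print(tau_bar(y), tau(y))          # approx. 2.0 and 3.0
print(np.floor(tau_bar(y)) >= y)   # D(tau_bar(y)) >= y, as Lemma 1 asserts
```

The gap \(\bar{\tau }(2)<\tau (2)\) at the jump shows why both functions are needed: \(\bar{\tau }\) is the earliest instant at which the demand has reached y, while \(\tau \) is the earliest instant at which it has strictly exceeded y.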

Proof of Lemma 2

We first prove that \(\tau (\cdot )\) is upper semicontinuous. Let \(\alpha \) be some real number such that \(\{y:\tau (y)<\alpha \}\) is nonempty and take from it an arbitrary element \(y_0\). We want to prove that \(\{y:\tau (y)<\alpha \}\) is open for the induced topology on [0, D(T)]. If \(y_0=D(T)\), then this set is [0, D(T)] and thus open. Otherwise, by the definition of \(\tau (\cdot )\), we know that there exists \(t_0<\alpha \) such that \(D(t_0)>y_0\). For any element y in \([0,D(t_0))\), we have \(\tau (y)\leqslant t_0\), and thus \([0,D(t_0))\) is an open set containing \(y_0\) and fully contained in \(\{y:\tau (y)<\alpha \}\). The set \(\{y:\tau (y)<\alpha \}\) is thus an open set of [0, D(T)] for every real number \(\alpha \), which precisely means that \(\tau (\cdot )\) is upper semicontinuous.

We prove now that \(\bar{\tau }(\cdot )\) is lower semicontinuous. Let \(\alpha \) be some nonnegative number. Consider a converging sequence \((y_n)\) in [0, D(T)] such that \(\bar{\tau }(y_n)\leqslant \alpha \) for all n. We want to prove that \(\bar{\tau }(\lim _{n\rightarrow \infty }y_n)\) is at most \(\alpha \). This is obvious when \(\alpha \geqslant T\). We can thus assume that \(\alpha <T\). By definition of \(\bar{\tau }(y_n)\), we know that \(D(\alpha +\frac{1}{n})\geqslant y_n\) (we consider n large enough so that \(\alpha +\frac{1}{n}\leqslant T\)). Since \(D(\cdot )\) is upper semicontinuous and nondecreasing, we get that \(D(\alpha )\geqslant \lim _{n\rightarrow \infty }y_n\), and the conclusion follows. \(\square \)

Proof of Lemma 3

If \(\bar{\tau }(y)=T\), then the equality is obvious. We can thus assume that \(\bar{\tau }(y)<T\). For every \(t>\bar{\tau }(y)\), we have \(D(t)>D(\bar{\tau }(y))\) since \(D(\cdot )\) is increasing, and Lemma 1 implies that \(D(t)>y\). By definition of \(\tau (\cdot )\), we have \(\tau (y)\leqslant \bar{\tau }(y)\). The reverse inequality being clear from the definitions, we get the result.\(\square \)

1.2 Proofs of the lemmas of Section 2.3

Proof of Lemma 4

Let \((\varvec{d},\varvec{y})\) be a feasible solution of \(P^{\mathrm{max}}_{\text {no return}}\). We are going to build a feasible solution \((\varvec{d}',\varvec{y})\) (with the same \(\varvec{y}\)) such that

$$\begin{aligned} g^{\mathrm{max}}(\varvec{d},\varvec{y})\geqslant g^{\mathrm{max}}(\varvec{d}',\varvec{y})=\max _{j}\big (d'_j-\tau (y_{j-1})\big ).\end{aligned}$$
(2)

We set \(d_1'=\bar{\tau }(y_1)+\nu y_1\) and define inductively \(d'_j=\max \big (d_{j-1}',\bar{\tau }(y_j)+\nu (y_j-y_{j-1})\big )\). We have \(d_j'\leqslant d_j\) for all j, which implies the inequality in (2). Let us prove the equality in (2). If \(\max _{j}\big (d'_j-\tau (y_{j-1})\big )\) is attained for a \(\bar{\jmath }\) such that \(y_{\bar{\jmath }-1}<D(T)\), then there exists \(k\geqslant \bar{\jmath }\) such that \(y_k>y_{k-1}=y_{\bar{\jmath }-1}\) and \(d_k'\geqslant d_{\bar{\jmath }}'\), so the maximum is also attained for a k such that \(y_k>y_{k-1}\). If instead the maximum is attained for a \(\bar{\jmath }\) such that \(y_{\bar{\jmath }-1}=D(T)\), then there exists \(\ell \leqslant \bar{\jmath }-1\) such that \(y_{\ell -1}<y_{\ell }=y_{\bar{\jmath }-1}\), and by construction \(d'_{\ell }=d'_{\ell +1}=\cdots =d'_S\) (since \(y_{\ell }=y_{\ell +1}=\cdots =y_S=D(T)\)), so the maximum is also attained for an \(\ell \) such that \(y_{\ell }>y_{\ell -1}\).\(\square \)
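The inductive construction used in this proof can be sketched as follows (hypothetical instance: \(\bar{\tau }\) is stubbed by the identity, i.e., a linear demand \(D(t)=t\), and the loads and \(\nu \) are made-up numbers, not the paper's data):

```python
# Sketch of the departure-time tightening of Lemma 4:
#   d'_1 = tau_bar(y_1) + nu * y_1,
#   d'_j = max(d'_{j-1}, tau_bar(y_j) + nu * (y_j - y_{j-1})).
# Each shuttle leaves as early as possible given the loading time nu per
# user, while departures stay nondecreasing.

def tighten_departures(y, tau_bar, nu):
    d_prev, y_prev = float("-inf"), 0.0
    departures = []
    for y_j in y:
        d_prev = max(d_prev, tau_bar(y_j) + nu * (y_j - y_prev))
        departures.append(d_prev)
        y_prev = y_j
    return departures

loads = [2.0, 5.0, 9.0]   # cumulative loads y_1 <= ... <= y_S
print(tighten_departures(loads, lambda y: y, nu=0.1))  # [2.2, 5.3, 9.4]
```

Since each \(d'_j\) is only pushed later by its predecessor or by its own loading constraint, \(d'_j\leqslant d_j\) holds for any feasible \(\varvec{d}\), which is exactly the inequality used in (2).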

Proof of Lemma 5

Let \((\varvec{d},\varvec{y})\) be an optimal solution (we do not care which objective function is used yet). We choose this optimal solution with minimal \(\sum _{j=1}^Sy_j\) among all possible optimal solutions. Such a solution exists by continuity and compactness: as explained right before Lemma 4, we can add the constraint \(d_j\leqslant T+\nu C\) for all j, without changing the optimal value and without changing the possible values for \(\varvec{y}\) at optimality. Without loss of generality, we can moreover assume that \(d_1=\bar{\tau }(y_1)+\nu y_1\) and that for all \(j\in \{2,\ldots ,S\}\) we have

$$\begin{aligned} d_j=\max \big (d_{j-1},\bar{\tau }(y_j)+\nu (y_j-y_{j-1})\big ) \end{aligned}$$
(3)

(just redefine \(d_j\) according to these equalities if necessary). When \(\nu =0\), a straightforward induction on j shows that we then always have \(d_j=\bar{\tau }(y_j)\). We can thus assume that \(\nu >0\).

Suppose for a contradiction that there is a j such that \(d_j>\bar{\tau }(y_j)+\nu (y_j-y_{j-1})\). Denote by \(j_1\) the smallest index for which this inequality holds. We necessarily have \(d_{j_1}=d_{j_1-1}\) [because of the equality (3)]. Denote by \(j_0\) the smallest index \(j< j_1\) such that \(d_j=d_{j_1}\). Note that since \(D(\cdot )\) is increasing, we have that \(\bar{\tau }(\cdot )\) is continuous (it is upper and lower semicontinuous with Lemma 3).

For some small \(\varepsilon >0\), we define \((\bar{\varvec{d}},\bar{\varvec{y}})\) as follows:

$$\begin{aligned} \bar{y}_j=\left\{ \begin{array}{ll} y_j-\varepsilon &{} \quad \hbox {for }j=j_0,\ldots ,j_1-1 \\ y_j &{} \quad \hbox {otherwise} \end{array}\right. \end{aligned}$$

and

$$\begin{aligned} \bar{d}_j=\left\{ \begin{array}{ll} \max \big (\bar{d}_{j-1},\bar{\tau }(\bar{y}_j)+\nu (\bar{y}_j-\bar{y}_{j-1})\big ) &{} \quad \hbox {for }j=1,\ldots ,j_1 \\ d_j &{} \quad \hbox {for }j=j_1+1,\ldots ,S, \end{array}\right. \end{aligned}$$

where \(\bar{d}_0=0\). We first check that \((\bar{\varvec{d}},\bar{\varvec{y}})\) is a feasible solution of (\(P_{\mathrm{no return}}\)).

The definition of \(j_1\) implies that \(d_{j_0}>0\). Thus, if \(j_0=1\), we have \(y_1>0\), and for a small enough \(\varepsilon \), the vector \(\bar{\varvec{y}}\) satisfies constraint (ii). Otherwise, we have \(\bar{\tau }(y_{j_0-1})+\nu (y_{j_0-1}-y_{j_0-2})=d_{j_0-1}<d_{j_0}=\bar{\tau }(y_{j_0})+\nu (y_{j_0}-y_{j_0-1})\), which implies that \(y_{j_0-1}<y_{j_0}\) (as otherwise the equality would imply that \(y_{j_0-1}<y_{j_0-2}\)). Thus, for a small enough \(\varepsilon \), the vector \(\bar{\varvec{y}}\) satisfies constraint (ii). It also obviously satisfies constraint (iv).

For \(j\in \{2,\ldots ,j_1\}\cup \{j_1+2,\ldots ,S\}\), checking \(\bar{d}_{j-1}\leqslant \bar{d}_j\) is straightforward. The remaining case is \(j=j_1+1\). A direct induction shows that \(\bar{d}_j\leqslant d_j\) for \(j\leqslant j_1-1\). Since \(\bar{\tau }(y_{j_1})+\nu (y_{j_1}-y_{j_1-1})<\bar{\tau }(y_{j_1-1})+\nu (y_{j_1-1}-y_{j_1-2})\) (because \(d_{j_1-1}=d_{j_1}\)), for \(\varepsilon \) small enough, we have \(\bar{d}_{j_1-1}\geqslant \bar{\tau }(\bar{y}_{j_1})+\nu (\bar{y}_{j_1}-\bar{y}_{j_1-1})\). Here, we use the fact that \(\bar{\tau }(\cdot )\) is continuous. Thus, \(\bar{d}_{j_1}=\bar{d}_{j_1-1}\). Since we have \(\bar{d}_{j_1-1}\leqslant d_{j_1-1}\) by the above induction, we finally obtain \(\bar{d}_{j_1}\leqslant d_{j_1}\leqslant d_{j_1+1}=\bar{d}_{j_1+1}\). Therefore, \(\bar{\varvec{d}}\) satisfies constraint (iii).

Constraint (i) is satisfied for all j, except maybe for \(j=j_1\). We have proved that \(\bar{d}_{j_1}=\bar{d}_{j_1-1}\). Since \(\bar{d}_{j_1-1}=\bar{\tau }(\bar{y}_{j'})+\nu (\bar{y}_{j'}-\bar{y}_{j'-1})\) for some \(j'\leqslant j_1-1\), we have \(\bar{\tau }(\bar{y}_{j_1})+\nu (\bar{y}_{j_1}-\bar{y}_{j_1-1})\leqslant \bar{d}_{j_1}=\bar{\tau }(\bar{y}_{j'})+\nu (\bar{y}_{j'}-\bar{y}_{j'-1})\), and thus \(\nu (\bar{y}_{j_1}-\bar{y}_{j_1-1})\leqslant \nu (\bar{y}_{j'}-\bar{y}_{j'-1})\leqslant \nu C\). Therefore, constraint (i) is also satisfied for \(j=j_1\).

Since the constraint (v) is clearly satisfied, \((\bar{\varvec{d}},\bar{\varvec{y}})\) is a feasible solution of (\(P_{\mathrm{no return}}\)).

We have proved that \(\bar{d}_{j_1}\leqslant d_{j_1}\). Therefore,

$$\begin{aligned} \begin{array}{rcl} \displaystyle \sum _{j=j_0}^{j_1}\int _{\bar{y}_{j-1}}^{\bar{y}_j}\big (\bar{d}_j-\bar{\tau }(u)\big )\mathrm{d}u &{} \leqslant &{} \displaystyle \int _{\bar{y}_{j_0-1}}^{\bar{y}_{j_1}}\big (\bar{d}_{j_1}-\bar{\tau }(u)\big )\mathrm{d}u\\ &{} \leqslant &{} \displaystyle \int _{y_{j_0-1}}^{y_{j_1}}\big (d_{j_1}-\bar{\tau }(u)\big )\mathrm{d}u \\ &{} = &{} \displaystyle \sum _{j=j_0}^{j_1}\int _{y_{j-1}}^{y_j}\big (d_j-\bar{\tau }(u)\big )\mathrm{d}u, \end{array} \end{aligned}$$

which shows that \((\bar{\varvec{d}},\bar{\varvec{y}})\) is also an optimal solution of \(P^{\mathrm{ave}}_{\text {no return}}\), which is in contradiction with the minimality assumption on \(\sum _{j=1}^Sy_j\). The case of \(P^{\mathrm{max}}_{\text {no return}}\) is dealt with similarly.\(\square \)

1.3 Proofs of the lemmas of Section 5.2

To complete the proofs of the lemmas of Section 5.2, we need two technical results.

Lemma 15

We have \(r_j\leqslant \tilde{y}_j\leqslant r_j+\eta \) for \(j=0,\ldots ,n\).

Proof

We have \(\tilde{y}_j\leqslant r_j+\eta \) by definition. Using \(r_j-r_{j-1}\leqslant M\eta \) in a feasible path, a direct induction shows that \(\tilde{y}_j\geqslant r_j\) for \(j=0,\ldots ,n\). \(\square \)

Lemma 16

Suppose that \(\alpha >0\). Then, for all \(y\in [0,D(T)]\) and \(\delta \in [0,D(T)-y]\), we have \(\bar{\tau }(y+\delta )\leqslant \bar{\tau }(y)+\delta /\alpha \) and \(\tau (y+\delta )\leqslant \tau (y)+\delta /\alpha \).

Proof

Diewert (1981) extended the Mean Value Theorem to semicontinuous functions. According to his result, for any \(0\leqslant a\leqslant b\leqslant T\), there exists \(c\in [a,b)\) such that

$$\begin{aligned} \limsup _{t\rightarrow 0^+} \frac{D(c+t)-D(c)}{t}\leqslant \frac{D(b)-D(a)}{b-a}. \end{aligned}$$

Since

$$\begin{aligned} \alpha = \displaystyle {\inf _{t\in [0,T)}D'_+(t)\leqslant D'_+(c)} \end{aligned}$$

and thus

$$\begin{aligned} \alpha \leqslant \displaystyle {\limsup _{t\rightarrow 0^+} \frac{D(c+t)-D(c)}{t}}, \end{aligned}$$

we have \(D(a)+\alpha (b-a)\leqslant D(b)\) for any \(0\leqslant a\leqslant b\leqslant T\). With \(a=\bar{\tau }(y)\) and \(b=\bar{\tau }(y)+\delta /\alpha \), we get \(y+\delta \leqslant D(\bar{\tau }(y))+\delta \leqslant D(\bar{\tau }(y)+\delta /\alpha )\) (the first inequality is given by Lemma 1). By definition of \(\bar{\tau }\), we have \(\bar{\tau }(y+\delta )\leqslant \bar{\tau }(y)+\delta /\alpha \). The second inequality is proved along the same lines.\(\square \)
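The Lipschitz-type bound of Lemma 16 can be checked numerically on a toy instance (illustration only; the demand below is an assumption, chosen so that \(\alpha =\inf D'_+=0.5\)):

```python
import numpy as np

# Numerical check of Lemma 16 on a hypothetical instance:
# D(t) = t/2 + t^2/10 on [0, T], so alpha = inf D'_+ = 0.5, and the lemma
# states tau_bar(y + delta) <= tau_bar(y) + delta / alpha.
T, alpha = 5.0, 0.5
grid = np.linspace(0.0, T, 100001)
Dg = 0.5 * grid + 0.1 * grid**2   # increasing, so tau_bar is just D^{-1}

def tau_bar(y):
    i = np.searchsorted(Dg, y)    # first grid index with D >= y
    return grid[i] if i < grid.size else T

violations = 0
for y in np.linspace(0.0, Dg[-1], 40):
    for delta in np.linspace(0.0, Dg[-1] - y, 15):
        if tau_bar(y + delta) > tau_bar(y) + delta / alpha + 1e-3:
            violations += 1
print(violations)  # 0: the bound holds on this instance
```

Intuitively, if the demand grows at rate at least \(\alpha \), then serving \(\delta \) additional users postpones the corresponding arrival instant by at most \(\delta /\alpha \), which is what the proof establishes via Diewert's mean value theorem.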

Proof of Lemma 8

Let \((\varvec{d}^*,\varvec{y}^*)\) be an optimal solution of \(P^{\mathrm{ave}}_{\text {no return}}\) such that \(d_j^*=\bar{\tau }(y_j^*)+\nu (y_j^*-y_{j-1}^*)\) for all \(j\in \{1,\ldots ,S\}\) (Lemma 5). Consider the sequence \(\lfloor y^*_1/\eta \rfloor \eta ,\ldots ,\lfloor y^*_S/\eta \rfloor \eta \) and remove the repetitions. Since the sequence is nondecreasing, we obtain an increasing sequence \(\varvec{r}=r_1,\ldots ,r_n\). We introduce \(\sigma :\{1,\ldots ,n\}\rightarrow \{1,\ldots ,S\}\) with \(\sigma (j)\) being the smallest index such that \(r_j=\lfloor y_{\sigma (j)}^*/\eta \rfloor \eta \). We then define \(z_j=r_j-r_{j-1}\) for \(j\in \left\{ 1,\ldots ,n\right\} \), with \(r_0=0\). We prove that the sequence \((z_j,r_j)_{j\in \left\{ 1,\ldots ,n\right\} }\) provides a feasible path from the vertex (0, 0) to \((z_n,r_n)\) in \(\mathcal {G}\). First note that \(r_n=R\eta \) since \(y_S^*=D(T)\) and that \(z_j>0\). For all \(j\in \left\{ 1,\ldots ,n\right\} \), we have

$$\begin{aligned} \begin{array}{rcl} z_j &{} = &{} r_j-r_{j-1} \\ &{} = &{} \displaystyle {\big (\lfloor y_{\sigma (j)}^*/\eta \rfloor -\lfloor y_{\sigma (j)-1}^*/\eta \rfloor } \\ &{} &{} \displaystyle {+\lfloor y_{\sigma (j)-1}^*/\eta \rfloor -\lfloor y_{\sigma (j-1)}^*/\eta \rfloor \big )}\eta \\ &{} < &{} M\eta +\eta ,\end{array} \end{aligned}$$

since

$$\begin{aligned} \lfloor y_{\sigma (j)-1}^*/\eta \rfloor =\lfloor y_{\sigma (j-1)}^*/\eta \rfloor \hbox { and }y^*_{\sigma (j)}-y^*_{\sigma (j)-1}\leqslant C. \end{aligned}$$

Thus, \(z_j\leqslant M\eta \). Moreover, by definition, \(r_j\leqslant R\eta \). Therefore, \((z_j,r_j)\in \mathcal {V}\) for all \(j\in \left\{ 1,\ldots ,n\right\} \). Let us now prove that \(((z_{j-1},r_{j-1}),(z_j,r_j))\in \mathcal {A}\) for all \(j\in \left\{ 2,\ldots ,n\right\} \). By definition, \(z_j+r_{j-1}=r_j\). Note that because of the definition of \(r_j\), we have \(r_j\leqslant y_{\sigma (j)}^*\leqslant y_{\sigma (j+1)-1}^*<r_j+\eta \). Combining these inequalities for all j with Lemma 16 leads to

$$\begin{aligned} \begin{array}{lcl} &{}&{}\bar{\tau }(r_j)-\bar{\tau }(r_{j-1})+\nu (z_j-z_{j-1}) \\ &{}&{}\quad \geqslant \bar{\tau }(y_{\sigma (j)}^*)-\eta /\alpha -\bar{\tau }(y_{\sigma (j-1)}^*)\\ &{}&{}\qquad +\,\nu (y_{\sigma (j)}^*-y_{\sigma (j)-1}^*-y_{\sigma (j-1)}^*+y_{\sigma (j-1)-1}^*-2\eta )\\ &{}&{}\quad = d_{\sigma (j)}^*-d_{\sigma (j-1)}^*-(1/\alpha +2\nu )\eta \\ &{}&{}\quad \geqslant -(1/\alpha +2\nu )\eta . \end{array} \end{aligned}$$

The sequence \((z_j,r_j)_{j\in \left\{ 1,\ldots ,n\right\} }\) is then a feasible path p from the vertex (0, 0) to \((z_n,r_n)\) in \(\mathcal {G}\), with at most S arcs. The only thing that remains to be checked is that the claimed inequality holds.

We have \(f^{{\text {ave}}}( d_{\sigma (j)}^*, y_{\sigma (j)-1}^*, y_{\sigma (j)}^*)\geqslant f^{{\text {ave}}}\big (\bar{\tau }(r_j)+\nu (z_j-\eta ),r_{j-1}+\eta , r_j\big )\) for all \(j\in \left\{ 1,\ldots ,n\right\} \) since \(f^{{\text {ave}}}(\cdot )\) is nonincreasing in the second term and nondecreasing in the first and third terms. Thus,

$$\begin{aligned} \displaystyle {\sum _{a\in A(p)}w(a)}\leqslant & {} \displaystyle {\sum _{j=1}^nf^{{\text {ave}}}( d_{\sigma (j)}^*, y_{\sigma (j)-1}^*, y_{\sigma (j)}^*)} \\\leqslant & {} D(T)g^{\mathrm{ave}}(\varvec{d}^*,\varvec{y}^*). \end{aligned}$$

\(\square \)
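The rounding step at the start of this proof can be sketched as follows (hypothetical data; the function only mimics the projection of the \(y^*_j\) onto multiples of \(\eta \) and the removal of repetitions, with \(r_0=0\)):

```python
import math

# Sketch of the first step of the proof of Lemma 8: round each optimal load
# y*_j down to a multiple of eta, drop repetitions so that an increasing
# sequence r_1 < ... < r_n remains, and read off the arc labels
# z_j = r_j - r_{j-1} of the induced path ((z_1, r_1), ..., (z_n, r_n)).

def round_to_path(y_star, eta):
    path, r_prev = [], 0.0
    for y in y_star:
        r = math.floor(y / eta) * eta
        if r > r_prev:               # repetitions are dropped
            path.append((r - r_prev, r))
            r_prev = r
    return path

print(round_to_path([0.37, 0.82, 0.85, 1.40], eta=0.25))
# [(0.25, 0.25), (0.5, 0.75), (0.5, 1.25)]
```

The proof then shows that the resulting pairs are vertices of \(\mathcal {G}\) (each \(z_j\leqslant M\eta \)) and that consecutive pairs are joined by arcs of \(\mathcal {A}\), so the rounded solution really is a path whose total weight is within the claimed bound of the optimum.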

Proof of Lemma 9

We are going to check that \((\tilde{\varvec{d}},\tilde{\varvec{y}})\) is feasible for \(P^{\mathrm{ave}}_{\text {no return}}\).

For \(j=1,\ldots ,n\), we have \(\tilde{y}_j-\tilde{y}_{j-1}\leqslant C\) by definition of \(\tilde{\varvec{y}}\). For \(j=n+2,\ldots ,S\), we have \(\tilde{y}_j-\tilde{y}_{j-1}=0\). Finally, we have \(\tilde{y}_{n+1}-\tilde{y}_n\leqslant D(T)-r_n<\eta \leqslant C\) (where we use Lemma 15 to bound \(\tilde{y}_n\)). Thus, \(\tilde{\varvec{y}}\) satisfies constraint (i).

For \(j=1,\ldots ,n\), we have \(r_j>r_{j-1}\) and thus \(\tilde{y}_{j-1}\leqslant r_{j-1}+\eta \leqslant r_j\leqslant \tilde{y}_j\) (the last inequality being Lemma 15). Thus, \(\tilde{\varvec{y}}\) satisfies constraint (ii).

Consider \(j\in \{2,\ldots ,n\}\). We have

$$\begin{aligned} \tilde{d}_j-\tilde{d}_{j-1}= & {} \bar{\tau }(\tilde{y}_j)+\nu (\tilde{y}_j-\tilde{y}_{j-1})-\tau (\tilde{y}_{j-1}) \\&-\,\nu (\tilde{y}_{j-1}-\tilde{y}_{j-2})+\gamma \eta \\\geqslant & {} \bar{\tau }(r_j)-\bar{\tau }(r_{j-1}+\eta )\\&+\,\nu (r_j-2r_{j-1}+r_{j-2}-2\eta )+\gamma \eta \\\geqslant & {} \bar{\tau }(r_j)-\bar{\tau }(r_{j-1})-\eta /\alpha \\&+\,\nu (z_j-z_{j-1}-2\eta )+\gamma \eta \\\geqslant & {} 0. \end{aligned}$$

The first inequality is obtained with the help of Lemma 15. For the second one, we use Lemma 16 and also that \(z_j=r_j-r_{j-1}\) and \(z_{j-1}=r_{j-1}-r_{j-2}\) which hold because

$$\begin{aligned} \tilde{p}=\big ((z_0,r_0),(z_1,r_1),\ldots ,(z_n,r_n)\big ) \end{aligned}$$

is a path in \(\mathcal {G}\). For the last inequality, we use \(\bar{\tau }(r_j)-\bar{\tau }(r_{j-1})+\nu (z_j-z_{j-1})+\frac{1}{2} \gamma \eta \geqslant 0\), which holds again because \(\tilde{p}\) is a path, and the definition of \(\gamma \). For \(j\geqslant n+1\), we have \(\tilde{d}_j\geqslant \tilde{d}_{j-1}\) by definition. Constraint (iii) is thus satisfied for all j.

If \(n<S\), then \(\tilde{y}_S=D(T)\) by definition. From now on, we suppose thus that \(n=S\). We also suppose that \(S\geqslant 2\). The case \(S=1\) is easy to check (and anyway, from a complexity point of view, this case does not matter). If \(\tilde{y}_{S-1}=r_{S-1}+\eta \), then \(\tilde{y}_{S-1}+C=r_{S-1}+\eta +C\geqslant r_S+\eta >D(T)\) (here we use that \(z_S\leqslant C\) and that \(r_S=R\eta \)) and thus \(\tilde{y}_S=D(T)\). If \(\tilde{y}_{S-1}=D(T)\), then \(\tilde{y}_S=D(T)\) since \(\tilde{y}_{S-1}\leqslant \tilde{y}_S\leqslant D(T)\). Hence, in all these cases, \(\tilde{\varvec{y}}\) satisfies constraint (iv). The only remaining case is when \(\tilde{y}_{S-1}=\tilde{y}_{S-2}+C\). If j is an index in \(\left\{ 1,\ldots ,S-2\right\} \) such that \(\tilde{y}_j=r_j+\eta \), then we have \(r_{j+1}+\eta \leqslant r_j+C+\eta =\tilde{y}_j+C\) and \(r_{j+1}+\eta \leqslant D(T)\), and thus \(\tilde{y}_{j+1}=r_{j+1}+\eta \). It implies that as soon as some \(j_0\in \left\{ 1,\ldots ,S-1\right\} \) is such that \(\tilde{y}_{j_0}=r_{j_0}+\eta \), we have \(\tilde{y}_{S-1}=r_{S-1}+\eta \), which is a case we have already dealt with. Since \(r_j+\eta \leqslant r_S\leqslant D(T)\) for \(j\in \{1,\ldots ,S-1\}\), we are left with the case where \(\tilde{y}_j=\tilde{y}_{j-1}+C\) for every \(j\in \left\{ 1,\ldots ,S-1\right\} \). In this situation, we have \(\tilde{y}_{S-1}=(S-1)C\) and hence \(\tilde{y}_{S-1}+C=CS\geqslant D(T)\). Since \(r_S+\eta > D(T)\), we get that \(\tilde{y}_S=D(T)\), and \(\tilde{\varvec{y}}\) satisfies constraint (iv) in every case.

For \(j=1,\ldots ,n\), we have \(\tilde{d}_j\geqslant \bar{\tau }(\tilde{y}_j)+\nu (\tilde{y}_j-\tilde{y}_{j-1})\) by definition, and for \(j\geqslant n+1\), we have \(\tilde{d}_j\geqslant T+\nu (\tilde{y}_{n+1}-\tilde{y}_n)\geqslant \bar{\tau }(\tilde{y}_j)+\nu (\tilde{y}_j-\tilde{y}_{j-1})\). Thus, \(\tilde{\varvec{d}}\) satisfies constraint (v) and \((\tilde{\varvec{d}},\tilde{\varvec{y}})\) is feasible for \(P^{\mathrm{ave}}_{\text {no return}}\).\(\square \)

Proof of Lemma 10

Our goal is to bound from above the following quantity

$$\begin{aligned} g^{\mathrm{ave}}(\tilde{\varvec{d}},\tilde{\varvec{y}})=\frac{1}{D(T)}\sum _{j=1}^Sf^{{\text {ave}}}(\tilde{d}_j,\tilde{y}_{j-1},\tilde{y}_j) \end{aligned}$$
(4)

We proceed by splitting the expression into two parts: the sum from \(j=1\) to \(j=n\), and the sum from \(j=n+1\) to \(j=S\).

Using Lemmas 15 and 16, we have, for all \(j\leqslant n\), \(\bar{\tau }(\tilde{y}_j)+\nu (\tilde{y}_j-\tilde{y}_{j-1})\leqslant q_j+\eta /\alpha +\nu \eta \), where \(q_j=\bar{\tau }(r_j)+\nu (r_j-r_{j-1})\). Thus,

$$\begin{aligned}&\displaystyle {\sum _{j=1}^nf^{{\text {ave}}}(\tilde{d}_j,\tilde{y}_{j-1},\tilde{y}_j)} \nonumber \\&\quad \displaystyle {\leqslant \sum _{j=1}^nf^{{\text {ave}}}(q_j+\eta /\alpha +\nu \eta +j\gamma \eta ,r_{j-1},r_j+\eta )}, \end{aligned}$$
(5)

since \(f^{{\text {ave}}}(\cdot )\) is nonincreasing in the second term and nondecreasing in the first and third terms and where we extend the definition of \(\bar{\tau }(\cdot )\) by letting \(\bar{\tau }(y)=T\) for all \(y>D(T)\).

For the second part, we proceed as follows. Since \(r_n+\eta =(R+1)\eta >D(T)\), Lemma 15 immediately implies \(D(T)-\tilde{y}_n\leqslant \eta \). With Lemma 16, we thus get \(T\leqslant \bar{\tau }(\tilde{y}_n)+\eta /\alpha \), where we used \(T=\bar{\tau }\big (D(T)-\tilde{y}_n+\tilde{y}_n\big )\). This provides

$$\begin{aligned} \tilde{d}_{n+1}\leqslant & {} \displaystyle {\bar{\tau }(\tilde{y}_n)+\eta /\alpha +\nu (r_n-r_{n-1})+\nu \eta +n\gamma \eta }\\= & {} q_n+(1/\alpha +\nu +n\gamma )\eta . \end{aligned}$$

Using again the fact that \(f^{{\text {ave}}}(\cdot )\) is nonincreasing in the second term and nondecreasing in the first and third terms and with the help of Lemma 15, we get

$$\begin{aligned} \begin{array}{l} \displaystyle {\sum _{j=n+1}^Sf^{{\text {ave}}}(\tilde{d}_{j},\tilde{y}_{j-1},\tilde{y}_{j})}\\ \displaystyle {\leqslant f^{{\text {ave}}}(q_n+\eta /\alpha +\nu \eta +n\gamma \eta ,r_n,r_n+\eta )},\end{array} \end{aligned}$$
(6)

since the terms indexed by \(j=n+2,\ldots ,S\) are all zero and since \(D(T)<r_n+\eta \).

We aim at comparing the upper bounds in Eqs. (5) and (6) with

$$\begin{aligned} \sum _{a\in A(\tilde{p})}w(a)=\sum _{j=1}^nf^{{\text {ave}}}(q_j-\nu \eta ,r_{j-1}+\eta ,r_j). \end{aligned}$$
(7)

We first compare the jth term of the bound in (5) with the jth term of the sum in (7).

$$\begin{aligned}&f^{{\text {ave}}}(q_j+\eta /\alpha +\nu \eta +j\gamma \eta ,r_{j-1},r_j+\eta )\\&\quad -f^{{\text {ave}}}(q_j-\nu \eta ,r_{j-1}+\eta ,r_j)=I_j^1+I_j^2+I_j^3 \end{aligned}$$

with

$$\begin{aligned} I_j^1= & {} \displaystyle \int _{r_{j-1}}^{r_{j-1}+\eta }\big (q_j+\eta /\alpha +\nu \eta +j\gamma \eta -\bar{\tau }(u)\big )\mathrm{d}u\\ I_j^2= & {} \displaystyle \int _{r_{j-1}+\eta }^{r_{j}}\big (j\gamma \eta +\eta /\alpha +2\nu \eta \big )\mathrm{d}u\\ I_j^3= & {} \displaystyle \int _{r_{j}}^{r_{j}+\eta }\big (q_j+\eta /\alpha +\nu \eta +j\gamma \eta -\bar{\tau }(u)\big )\mathrm{d}u. \end{aligned}$$

Since \(\bar{\tau }(\cdot )\) is nondecreasing, we get

$$\begin{aligned} I_j^1\leqslant & {} \displaystyle {\big (\bar{\tau }(r_j)-\bar{\tau }(r_{j-1})+\nu (r_j-r_{j-1})\big )\eta } \\&+\,(1/\alpha +\nu +j\gamma )\eta ^2\\ I_j^2\leqslant & {} \displaystyle {(r_j-r_{j-1})(j\gamma +1/\alpha +2\nu )\eta } \\&-\,(j\gamma +1/\alpha +2\nu )\eta ^2\\ I_j^3\leqslant & {} \nu (r_j-r_{j-1})\eta +(1/\alpha +\nu +j\gamma )\eta ^2. \end{aligned}$$

Using \(j\gamma \leqslant n\gamma \) and \(\gamma =2(1/\alpha +2\nu )\), we obtain

$$\begin{aligned} I_j^1+I_j^2+I_j^3\leqslant & {} \big (\bar{\tau }(r_j)-\bar{\tau }(r_{j-1})+2\nu (r_j-r_{j-1})\big )\eta \\&+\,(n+1/2)\gamma \eta ^2\\&+\,(r_j-r_{j-1})(n+1/2)\gamma \eta . \end{aligned}$$

We now bound the term in Eq. (6). Let \(I=f^{{\text {ave}}}(q_n+\eta /\alpha +\nu \eta +n\gamma \eta ,r_n,r_n+\eta )\). We have

$$\begin{aligned} I= & {} \displaystyle \int _{r_n}^{r_n+\eta }(q_n+\eta /\alpha +\nu \eta +n\gamma \eta -\bar{\tau }(u))\mathrm{d}u\\\leqslant & {} \nu (r_n-r_{n-1})\eta +(1/\alpha +\nu +n\gamma )\eta ^2. \end{aligned}$$

We have thus

$$\begin{aligned}&\displaystyle {g^{\mathrm{ave}}(\tilde{\varvec{d}},\tilde{\varvec{y}})-\frac{1}{D(T)}\sum _{a\in A(\tilde{p})}w(a)} \\&\quad \leqslant \displaystyle {\frac{1}{D(T)}\left( \sum _{j=1}^n(I_j^1+I_j^2+I_j^3)+I\right) } \\&\quad \leqslant \displaystyle \frac{1}{D(T)}\big (\bar{\tau }(r_n)+2\nu r_n\\&\qquad \displaystyle {+\,r_n(n+1)\gamma +\nu C+(n+1)^2\gamma \eta \big )\eta .} \end{aligned}$$

Using \(r_n\leqslant D(T)\) and \(\bar{\tau }(r_n)\leqslant T\) leads to

$$\begin{aligned} g^{\mathrm{ave}}(\tilde{\varvec{d}},\tilde{\varvec{y}})\leqslant & {} \displaystyle \frac{1}{D(T)}\sum _{a\in A(\tilde{p})}w(a) \\&\displaystyle {+\left( \frac{T+\nu C}{D(T)}+\gamma (S+1)\right) \eta +\frac{\gamma (S+1)^2}{D(T)}\eta ^2}. \end{aligned}$$

\(\square \)

Cite this article

Daudet, L., Meunier, F. Minimizing the waiting time for a one-way shuttle service. J Sched 23, 95–115 (2020). https://doi.org/10.1007/s10951-019-00604-y
