
Min max min robust (relative) regret combinatorial optimization


Abstract

We consider combinatorial optimization problems with uncertainty in the cost vector. Recently, a novel approach was developed to deal with such uncertainty: instead of computing a single robust solution by solving a min max problem, the authors compute a set of solutions by solving a min max min problem. In this new approach the set of solutions is computed once, offline; each time a cost vector is realized, the best solution in the set is chosen in real time, which yields better solutions than the min max approach. In this paper we apply the new approach to the absolute and relative regret criteria. Algorithms to solve the min max min robust (relative) regret problems are presented, together with computational experiments.
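To make the decision-time mechanism concrete, the following minimal Python sketch (ours, with made-up data; the helper name best_of_set is hypothetical) shows how a solution set computed offline is used once a cost vector is realized:

```python
# A minimal sketch of the decision-time step: the k solutions are computed
# once, offline; whenever a cost vector c is realized, we simply pick the
# cheapest solution in the precomputed set.
import numpy as np

def best_of_set(H, c):
    """Return the solution in H with the smallest cost under the realized c.

    H : (k, n) 0/1 array, the k solutions computed offline.
    c : (n,) realized cost vector.
    """
    costs = H @ c              # cost of each precomputed solution under c
    return H[np.argmin(costs)], costs.min()

# Example: three precomputed solutions over n = 4 items.
H = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 0]])
c = np.array([3.0, 1.0, 2.0, 5.0])
x_star, val = best_of_set(H, c)   # picks [0, 1, 1, 0] with cost 3.0
```

A min max approach would commit to a single row of H in advance; keeping several rows lets the decision maker exploit the realized costs.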



Author information


Corresponding author

Correspondence to Alejandro Crema.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Proof of Proposition 1

(i) Let \(\mathbf{c} \in \varOmega\) and let \(\mathbf{h},\mathbf{x} \in X\). Let

  • \(I_{1,0} = \lbrace i: \mathbf{h}_i = 1,\; \mathbf{x}_i = 0,\; 1 \le i \le n \rbrace\)

  • \(I_{0,1} = \lbrace i: \mathbf{h}_i = 0,\; \mathbf{x}_i = 1,\; 1 \le i \le n \rbrace\)

Then:

$$\begin{aligned} \mathbf{c}^t \mathbf{h} - \mathbf{c}^t \mathbf{x} = \sum_{i \in I_{1,0}} \mathbf{c}_i - \sum_{i \in I_{0,1}} \mathbf{c}_i \le \sum_{i \in I_{1,0}} \mathbf{U}_i - \sum_{i \in I_{0,1}} \mathbf{L}_i = {\mathbf{c}^{+}(\mathbf{x})}^t \mathbf{h} - {\mathbf{c}^{+}(\mathbf{x})}^t \mathbf{x} \end{aligned}$$

(ii) Let \(\mathbf{x} \in X\) and let \(\mathbf{c} \in \varOmega\). Let \(\mathbf{h}\) be an optimal solution for \(P(\mathbf{c}^+(\mathbf{x}))\) and let \(I_{1,0}\) and \(I_{0,1}\) be defined as in (i). Then:

$$\begin{aligned}&0 < {\mathbf{c}^+(\mathbf{x})}^t \mathbf{x} - v(P(\mathbf{c}^+(\mathbf{x}))) = {\mathbf{c}^+(\mathbf{x})}^t \mathbf{x} - {\mathbf{c}^+(\mathbf{x})}^t\mathbf{h}\\&\quad = \sum_{i \in I_{0,1}} \mathbf{L}_i - \sum_{i \in I_{1,0}} \mathbf{U}_i \le \sum_{i \in I_{0,1}} \mathbf{c}_i - \sum_{i \in I_{1,0}} \mathbf{c}_i = \mathbf{c}^t \mathbf{x} - \mathbf{c}^t \mathbf{h} \end{aligned}$$

Therefore \(v(P(\mathbf{c})) \le \mathbf{c}^t \mathbf{h} < \mathbf{c}^t \mathbf{x}\).

(iii) Let \(\mathbf{x} \in X\) and let \(\mathbf{c} \in \varOmega\); then \(\mathbf{c}^{+}(\mathbf{x})^t \mathbf{x} \overset{(1)}{=} \mathbf{L}^t \mathbf{x} \overset{(2)}{\le} \mathbf{c}^t \mathbf{x}\), where (1) follows from the definition of \(\mathbf{c}^+(\mathbf{x})\) and (2) from \(\mathbf{L} \le \mathbf{c}\) \(\;\bullet\)
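As a small numerical illustration of how Proposition 1 can be used computationally, the following Python sketch (ours, with invented instance data; the helpers c_plus and v_P are hypothetical) builds the scenario \(\mathbf{c}^+(\mathbf{x})\), which by item (iii) and the index sums above takes the value \(\mathbf{L}_i\) where \(\mathbf{x}_i = 1\) and \(\mathbf{U}_i\) where \(\mathbf{x}_i = 0\), and spot-checks the dominance test of item (ii) on a toy feasible set:

```python
# Illustrative sketch only (toy data, not from the paper). Per item (iii) and
# the index sums above, c+(x) takes L_i on the support of x and U_i elsewhere.
import itertools
import numpy as np

def c_plus(x, L, U):
    """Scenario c+(x): L_i where x_i = 1, U_i where x_i = 0."""
    return np.where(x == 1, L, U)

# Toy feasible set: all 0/1 vectors with exactly two ones (n = 4).
n = 4
L = np.array([1.0, 2.0, 1.0, 3.0])
U = np.array([2.0, 4.0, 3.0, 5.0])
X = [np.array(x) for x in itertools.product([0, 1], repeat=n) if sum(x) == 2]

def v_P(c):
    """v(P(c)): minimum cost over the (explicitly enumerated) feasible set."""
    return min(c @ x for x in X)

# Item (ii): if x is not optimal even under its own scenario c+(x), then x is
# strictly suboptimal for every c in [L, U]; we spot-check it at c = U.
for x in X:
    if c_plus(x, L, U) @ x > v_P(c_plus(x, L, U)):
        assert U @ x > v_P(U)
```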

Appendix B: Proof of Proposition 2

Let \(X \subseteq {\lbrace 0,1 \rbrace}^n\). Let \(f:X \longrightarrow \mathbb{R}\) and \(g:X \longrightarrow \mathbb{R}\) with \(f(\mathbf{x}) > 0\) and \(g(\mathbf{x}) > 0\) for all \(\mathbf{x} \in X\). Let P1 be a problem in the variable \(\mathbf{x}\) defined as:

$$\begin{aligned} (P1):\;\;\underset{\mathbf {x} \in X}{\max }\;\; \frac{f(\mathbf {x})}{g(\mathbf {x})} \end{aligned}$$

Let \(\mu \ge 0\) and let \(P2(\mu)\) be a problem in the variable \(\mathbf{x}\) defined as:

$$\begin{aligned} (P2(\mu )):\;\;\underset{\mathbf {x} \in X}{\max }\;\; f(\mathbf {x}) - \mu g(\mathbf {x}) \end{aligned}$$

then:

If \(v(P2(\mu)) = 0\) then \(f(\mathbf{x}) - \mu g(\mathbf{x}) \le 0\) for all \(\mathbf{x} \in X\); since \(g(\mathbf{x}) > 0\), this gives \(\frac{f(\mathbf{x})}{g(\mathbf{x})} \le \mu\) for all \(\mathbf{x} \in X\), hence \(v(P1) \le \mu\). Moreover, if \(v(P2(\mu)) = 0\) and \(v(P1) < \mu\), then \(\frac{f(\mathbf{x})}{g(\mathbf{x})} < \mu\) for all \(\mathbf{x} \in X\), so \(f(\mathbf{x}) - \mu g(\mathbf{x}) < 0\) for all \(\mathbf{x} \in X\), contradicting \(v(P2(\mu)) = 0\). Hence \(v(P2(\mu)) = 0\) implies \(\mu = v(P1)\).

Conversely, let \(\mu = v(P1)\). If \(v(P2(\mu)) > 0\) then \(\frac{f(\mathbf{x})}{g(\mathbf{x})} > \mu = v(P1)\) for some \(\mathbf{x} \in X\), a contradiction; therefore \(v(P2(\mu)) \le 0\). If \(\mathbf{x}^*\) is an optimal solution for P1, then \(f(\mathbf{x}^*) - \mu g(\mathbf{x}^*) = 0\), so \(v(P2(\mu)) \ge 0\), and finally \(v(P2(\mu)) = 0\).

Hence we have:

(1) \(v(P2(\mu)) = 0\) if and only if \(\mu = v(P1)\).

If \(v(P2(\mu)) = 0\) and \(\mathbf{x}^*\) is an optimal solution for \(P2(\mu)\), we have \(v(P2(\mu)) = f(\mathbf{x}^*) - \mu g(\mathbf{x}^*) = f(\mathbf{x}^*) - v(P1)\,g(\mathbf{x}^*) = 0\), hence \(v(P1) = \frac{f(\mathbf{x}^*)}{g(\mathbf{x}^*)}\) and \(\mathbf{x}^*\) is an optimal solution for P1. Hence we have:

(2) If \(v(P2(\mu)) = 0\) and \(\mathbf{x}^*\) is an optimal solution for \(P2(\mu)\), then \(\mathbf{x}^*\) is an optimal solution for P1.

Let \(X = \lbrace \mathbf{x}^{(1)},\ldots,\mathbf{x}^{(L)} \rbrace\); then \(v(P2(\mu)) = \max \lbrace f(\mathbf{x}^{(1)}) - \mu g(\mathbf{x}^{(1)}),\ldots,f(\mathbf{x}^{(L)}) - \mu g(\mathbf{x}^{(L)}) \rbrace\).

Since \(g(\mathbf{x}) > 0\) for all \(\mathbf{x} \in X\), each function \(\mu \mapsto f(\mathbf{x}^{(j)}) - \mu g(\mathbf{x}^{(j)})\) is affine and decreasing, and their maximum gives:

(3) \(v(P2(\mu))\) is a piecewise linear, convex and decreasing function of \(\mu\).

We have:

$$\begin{aligned} v(QX_r(H)) = \underset{\mathbf {x} \in X}{\max }\;\; \frac{ \underset{ \mathbf {h} \in H}{\min }\;\; \mathbf {c}^{+}(\mathbf {x})^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x})^t \mathbf {x} }{\mathbf {c}^{+}(\mathbf {x})^t \mathbf {x}} = v(RX_r(H))-1 \end{aligned}$$

Let \(f(\mathbf{x}) = \underset{\mathbf{h} \in H}{\min}\; \mathbf{c}^{+}(\mathbf{x})^t \mathbf{h}\) and let \(g(\mathbf{x}) = \mathbf{c}^{+}(\mathbf{x})^t \mathbf{x}\). We use (1), (2) and (3) to obtain:

(i) \(v(RX_r(\mu,H)) = 0\) if and only if \(\mu = v(QX_r(H)) + 1\).

(ii) If \(v(RX_r(\mu,H)) = 0\) and \(\mathbf{x}^*\) is an optimal solution for \(RX_r(\mu,H)\), then \(\mathbf{x}^*\) is an optimal solution for \(QX_r(H)\).

(iii) \(v(RX_r(\mu,H))\) is a piecewise linear, convex and decreasing function of \(\mu\) \(\;\bullet\)
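The argument is a Dinkelbach-type reduction for fractional programs. As a quick numerical sanity check of (1)-(3), here is a short Python sketch over an explicitly enumerated X (the values of f and g are invented for illustration):

```python
# Illustrative values only: f(x^(j)) and g(x^(j)) over X = {x^(1), x^(2), x^(3)}.
f = [6.0, 5.0, 8.0]   # all f(x) > 0
g = [4.0, 2.0, 5.0]   # all g(x) > 0

v_P1 = max(fj / gj for fj, gj in zip(f, g))           # v(P1) = 2.5

def v_P2(mu):
    # Maximum of finitely many decreasing affine pieces: piecewise linear,
    # convex and decreasing in mu, as stated in (3).
    return max(fj - mu * gj for fj, gj in zip(f, g))

assert abs(v_P2(v_P1)) < 1e-12          # (1): v(P2(mu)) = 0 exactly at mu = v(P1)
assert v_P2(1.0) > 0 and v_P2(3.0) < 0  # (3): decreasing, changes sign at v(P1)
```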

Appendix C: Proof of Proposition 3

Let \(X = \lbrace \mathbf {x}^{(1)},\ldots ,\mathbf {x}^{(L)} \rbrace \), let \(\phi _j = \underset{\mathbf {h} \in H}{\min }\;\; {\mathbf {c}^+(\mathbf {x}^{(j)})}^t \mathbf {h}\) and let \(\delta _j = {\mathbf {c}^+(\mathbf {x}^{(j)})}^t \mathbf {x}^{(j)}\) for \(j=1,\ldots ,L\).

The algorithm may be rewritten as follows:

Let \(\mu _1 = 1\).

Algorithm Find-\(\mu^*\)

1. \(i = 1\).

2. Solve \(R_r(\mu_i,H)\). Let \(\mathbf{x}^{(j_i)}\) be an optimal solution.

3. If \(v(R_r(\mu_i,H)) = 0\), let \(\mu^* = \mu_i\), let \(\mathbf{x}^* = \mathbf{x}^{(j_i)}\) and stop.

4. Let \(\mu_{i+1} = \frac{\phi_{j_i}}{\delta_{j_i}}\), let \(i = i+1\) and return to Step 2.

If \(v(R_r(\mu_1,H)) > 0\) then \(\mu_2 = \frac{\phi_{j_1}}{\delta_{j_1}}\) and \(\phi_{j_1} - \mu_1 \delta_{j_1} > 0\); since \(\delta_{j_1} > 0\), this gives \(\mu_2 > 1 = \mu_1\). Likewise, if \(v(R_r(\mu_2,H)) > 0\) then \(\mu_3 = \frac{\phi_{j_2}}{\delta_{j_2}}\) and \(\phi_{j_2} - \mu_2 \delta_{j_2} > 0\), hence \(\mu_3 > \mu_2\).

In general, \(\mu_i < \mu_{i+1}\) for all \(i\). Since X is finite, each \(\mu_{i+1}\) is one of the finitely many ratios \(\frac{\phi_j}{\delta_j}\), and a strictly increasing sequence cannot repeat a value; hence the sequence \(\mu_1,\ldots,\mu_s,\ldots\) generated by the algorithm must be finite, which means \(v(R_r(\mu_i,H)) = 0\) for some \(i\) \(\;\bullet\)
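For concreteness, a minimal Python sketch of Find-\(\mu^*\) follows, with \(R_r(\mu,H)\) solved by plain enumeration over the pairs \(\phi_j, \delta_j\) (in the paper it is an optimization problem; the data here reuse the illustrative values from Appendix B):

```python
# Sketch of Algorithm Find-mu* over an explicitly enumerated X. In the paper
# R_r(mu, H) is an optimization problem; here it is solved by enumeration.
def find_mu_star(phi, delta, tol=1e-12):
    """Returns (mu*, index j* of an optimal x). Assumes phi_j, delta_j > 0."""
    mu = 1.0                                          # Step 1: mu_1 = 1
    while True:
        # Step 2: solve R_r(mu, H), i.e. maximize phi_j - mu * delta_j over j.
        j = max(range(len(phi)), key=lambda k: phi[k] - mu * delta[k])
        if phi[j] - mu * delta[j] <= tol:             # Step 3: v(R_r(mu, H)) = 0
            return mu, j
        mu = phi[j] / delta[j]                        # Step 4: next iterate

phi, delta = [6.0, 5.0, 8.0], [4.0, 2.0, 5.0]         # illustrative data
mu_star, j_star = find_mu_star(phi, delta)            # mu* = 2.5, j* = 1
```

Each pass strictly increases \(\mu\), mirroring the monotonicity argument above, so termination is guaranteed when X is finite.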

Appendix D

See Tables 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 and 15.

Table 4 SP problems with mesh topology. Q-greedy, T-greedy and S-Q algorithms for the MSR(k) problem. Times, failures and iterations. 10 problems in each set
Table 5 SP problems with mesh topology. Q-greedy, T-greedy and S-Q algorithms for the MSR(k) problem. Reduction percentage for the regret values. 10 problems in each set
Table 6 SP problems with Euclidean topology. Q-greedy, T-greedy and S-Q algorithms for the MSR(k) problem. Times and iterations. 10 problems in each set
Table 7 SP problems with Euclidean topology. Q-greedy, T-greedy and S-Q algorithms for the MSR(k) problem. Reduction percentage for the regret values. 10 problems in each set
Table 8 p-M problems. Q-greedy, T-greedy and S-Q algorithms for the MSR(k) problem. Times, failures and iterations. 10 problems in each set
Table 9 p-M problems. Q-greedy, T-greedy and S-Q algorithms for the MSR(k) problem. Reduction percentage for the regret values. 10 problems in each set
Table 10 SP problems with mesh topology. Qr-greedy, Tr-greedy and Sr-Qr algorithms for the MrelSR(k) problem. Times, failures and iterations. 10 problems in each set
Table 11 SP problems with mesh topology. Qr-greedy, Tr-greedy and Sr-Qr algorithms for the MrelSR(k) problem. Reduction percentage for the regret values. 10 problems in each set
Table 12 SP problems with Euclidean topology. Qr-greedy, Tr-greedy and Sr-Qr algorithms for the MrelSR(k) problem. Times, failures and iterations. 10 problems in each set
Table 13 SP problems with Euclidean topology. Qr-greedy, Tr-greedy and Sr-Qr algorithms for the MrelSR(k) problem. Reduction percentage for the regret values. 10 problems in each set
Table 14 p-M problems. Qr-greedy, Tr-greedy and Sr-Qr algorithms for the MrelSR(k) problem. Times, failures and iterations. 10 problems in each set
Table 15 p-M problems. Qr-greedy, Tr-greedy and Sr-Qr algorithms for the MrelSR(k) problem. Reduction percentage for the regret values. 10 problems in each set


About this article


Cite this article

Crema, A. Min max min robust (relative) regret combinatorial optimization. Math Meth Oper Res 92, 249–283 (2020). https://doi.org/10.1007/s00186-020-00712-y
