
Complexity results on planar multifacility location problems with forbidden regions


Abstract

In this paper we deal with the planar location problem with forbidden regions. We consider the median objective with block norms and show that this problem is APX-hard, even for the Manhattan metric as distance function and polyhedral forbidden areas. As a direct consequence, the problem cannot be approximated in polynomial time within a factor of 1.0019, unless \(P=NP\). In addition, we give a dominating set that contains at least one optimal solution. Based on this result, an approximation algorithm is derived. For special instances, the algorithm can be improved. These instances include problems with bounded forbidden areas and a special structure of the interrelations between the new facilities. For uniform weights, this algorithm becomes an FPTAS.


Notes

  1. (\(P_D\)) can be solved as a quadratic assignment problem. A recent summary of the problem and known solution approaches can be found in Laporte et al. (2015).

References

  • Aneja YP, Parlar M (1994) Technical note—algorithms for Weber facility location in the presence of forbidden regions and/or barriers to travel. Transport Sci 28(1):70–76
  • Ausiello G, Protasi M, Marchetti-Spaccamela A, Gambosi G, Crescenzi P, Kann V (1999) Complexity and approximation: combinatorial optimization problems and their approximability properties, 1st edn. Springer, Secaucus
  • Batta R, Ghose A, Palekar US (1989) Locating facilities on the Manhattan metric with arbitrarily shaped barriers and convex forbidden regions. Transport Sci 23(1):26–36
  • Butt SE, Cavalier TM (1997) Facility location in the presence of congested regions with the rectilinear distance metric. Socio-Econ Plan Sci 31(2):103–113
  • Canbolat MS, Wesolowsky GO (2010) The rectilinear distance Weber problem in the presence of a probabilistic line barrier. Eur J Oper Res 202(1):114–121
  • Drezner Z (2013) Solving planar location problems by global optimization. Logist Res 6(1):17–23
  • Hamacher HW, Nickel S (1994) Combinatorial algorithms for some 1-facility median problems in the plane. Eur J Oper Res 79(2):340–351
  • Hamacher HW, Nickel S (1995) Restricted planar location problems and applications. Nav Res Logist (NRL) 42(6):967–992
  • Hamacher HW, Schöbel A (1997) A note on center problems with forbidden polyhedra. Oper Res Lett 20(4):165–169
  • Håstad J (2001) Some optimal inapproximability results. J ACM 48(4):798–859
  • Horst R, Pardalos PM (eds) (1995) Handbook of global optimization. Nonconvex optimization and its applications. Kluwer Academic Publishers, Dordrecht
  • Idrissi H, Lefebvre O, Michelot C (1989) Duality for constrained multifacility location problems with mixed norms and applications. Ann Oper Res 18(1):71–92
  • Käfer B, Nickel S (2001) Error bounds for the approximative solution of restricted planar location problems. Eur J Oper Res 135(1):67–85
  • Khot S, Kindler G, Mossel E, O’Donnell R (2007) Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM J Comput 37(1):319–357
  • Klamroth K (2002) Single-facility location problems with barriers. Springer series in operations research and financial engineering. Springer, New York
  • Laporte G, Nickel S, Saldanha da Gama F (2015) Location science. Springer, Cham
  • Lefebvre O, Michelot C, Plastria F (1990) Geometric interpretation of the optimality conditions in multifacility location and applications. J Optim Theory Appl 65(1):85–101
  • Michelot C (1987) Localization in multifacility location theory. Eur J Oper Res 31(2):177–184
  • Nickel S (1995) Discretization of planar location problems. Berichte aus der Mathematik, Shaker
  • Nickel S, Dudenhöffer E (1997) Weber’s problem with attraction and repulsion under polyhedral gauges. J Glob Optim 11(4):409–432
  • Nickel S, Fliege J (1999) An interior point method for multifacility location problems with forbidden regions. Technical report 23, Fachbereich Mathematik
  • Oğuz M, Bektaş T, Bennell JA, Fliege J (2016) A modelling framework for solving restricted planar location problems using phi-objects. J Oper Res Soc 67(8):1080–1096
  • Oğuz M, Bektaş T, Bennell JA (2018) Multicommodity flows and Benders decomposition for restricted continuous location problems. Eur J Oper Res 266(3):851–863
  • Rockafellar RT (1972) Convex analysis. Princeton mathematical series. Princeton University Press, Princeton
  • Rodríguez-Chía AM, Nickel S, Puerto J, Fernández FR (2000) A flexible approach to location problems. Math Meth Oper Res 51(1):69–89
  • Tuy H (2013) Convex analysis and global optimization. Nonconvex optimization and its applications. Springer, New York
  • Woeginger GJ (1998) A comment on a minmax location problem. Oper Res Lett 23(1):41–43


Author information

Correspondence to Horst W. Hamacher.

Additional information


This work was supported in part by the German Ministry of Research and Technology (BMBF) under Grant RobEZiS, FKZ 13N13198.

Appendices

APX-hardness: proof of claims

Claim 4.1.1

For any optimal solution, \(x_k^i=z_k^i=y_k^i\) holds for all \(k\in \left[ K\right] \), \(i=1,2\).

Proof of Claim:

We show that \(x_k^i=y_k^i\) for any optimal solution; the proof that \(x_k^i=z_k^i\) is analogous.

Assume that there exists an optimal solution with \(x_k^i\ne y_k^i\). Note that \(x_k^i\in \mathrm {int}\left( R(y_k^i)\right) \); otherwise we could set \(x_k^i= y_k^i\) to obtain a better objective function value, since \(x_k^i\) is the only facility interacting with \(y_k^i\). Therefore, we only consider the part \(\Phi _k^3\) of the objective function and show that the necessary optimality conditions in Theorem 3.5 cannot be satisfied. As done in Theorem 3.5, one can write a constrained location problem with halfspaces as constraints and optimal solution \(X^*\) as follows. Define

$$\begin{aligned} H({x}_k^1) = {\left\{ \begin{array}{ll} \{ (x,y) \mid x+y \le 0\} &{}\quad \text {if } {x}_k^1\in \{ (x,y) \mid x+y \le 0\},\\ \{ (x,y) \mid x+y \ge 4\} &{}\quad \text {else,} \end{array}\right. },\\ H({x}_k^2) = {\left\{ \begin{array}{ll} \{ (x,y) \mid x-y \ge 1\} &{}\quad \text {if } {x}_k^2\in \{ (x,y) \mid x-y \ge 1\},\\ \{ (x,y) \mid x-y \le -3 \} &{}\quad \text {else} \end{array}\right. } \end{aligned}$$

and for \(p\in \{{y}_k^i, {z}_k^i,{v}_k^i \mid k\in \left[ K\right] , i\in \left[ 2\right] \}\):

$$\begin{aligned} H(p) = \mathbb {R}^2{\setminus } \mathrm {int}\left( R(p)\right) . \end{aligned}$$

The halfspaces have the property that \(H(p)=\mathbb {R}^2{\setminus }\mathrm {int}\left( R(p)\right) \) for \(p=y_k^1,y_k^2,z_k^1,z_k^2,v_k^1,v_k^2\) and \(H(x_k^i)\subseteq \mathbb {R}^2{\setminus }\mathrm {int}\left( R(x_k^i)\right) \) with \(\mathrm {bd}\left( H(x_k^i)\right) \subset \mathrm {bd}\left( R_k(x_k^i)\right) \) (see Fig. 11).

Fig. 11: Sketch of halfspaces for a given solution

The cone condition (5a) for \(x_k^i\) and \(y_k^i\) can be written as:

$$\begin{aligned} x_k^i \in y_k^i + N_{B^\circ }\left( \tilde{u}_{(x_k^i,y_k^i)}\right) \end{aligned}$$

for a suitable \(\tilde{u}_{(x_k^i,y_k^i)}\in B^\circ \), where \(B^\circ = [-1,1]\times [-1,1]\) is the unit ball of the \(l_\infty \)-norm. As \(x_k^i\ne y_k^i\), it follows that \(\tilde{u}_{(x_k^i,y_k^i)}\in \mathrm {bd}\left( B^\circ \right) \). In the following, we will assume that \(i=1\) (the case \(i=2\) is treated analogously).

By optimality, \(y_k^1\) takes the shortest \(l_1\)-distance to \(x_k^1\) (as \(y_k^1\) interacts only with \(x_k^1\)). This is achieved whenever \(y_k^1\in \mathrm {bd}\left( H(y_k^1)\right) \) and their y-coordinates coincide. This yields \(\tilde{u}_{(x_k^1,y_k^1)}=(-1, \lambda )\) for some \(\lambda \in [-1,1]\). Moreover, constraints (5c) and (6) (see the necessary optimality conditions in Theorem 3.5) with respect to \(y_k^1\) can be written as

$$\begin{aligned}&\bar{u}_{y_k^1} \in N_{H(y_k^1)}\left( y_k^1\right) ,\\&-150L\cdot \tilde{u}_{(x_k^1,y_k^1)} + \bar{u}_{y_k^1} = 0. \end{aligned}$$

Hence, substituting \((-1, \lambda )\) for \(\tilde{u}_{(x_k^1,y_k^1)}\) and using the fact that \(N_{H(y_k^1)}\left( y_k^1\right) =\mathbb {R}_{\ge 0} (- 3,1)\) as \(y_k^1\in \mathrm {bd}\left( H(y_k^1)\right) \), we get

$$\begin{aligned} 150L \cdot (- 1, \lambda ) \in \mathbb {R}_{\ge 0} (- 3, 1), \end{aligned}$$

which yields \(\lambda = \nicefrac {1}{3}\) (writing \(150L \cdot (- 1, \lambda ) = \mu (-3,1)\) gives \(\mu = 50L\) and hence \(\lambda = \nicefrac {1}{3}\)). The conservation constraints (6) with respect to \(x_k^1\) can be written for suitable weights \(W\in \mathbb {R}_{\ge 0}\) and \(\tilde{w}_{(x_k^1, x_l^i)},\tilde{w}_{(x_l^i,x_k^1)} \le 1\) as

$$\begin{aligned}&\displaystyle \overbrace{\sum _{\{a_m: (x_k^1, a_m)\in E_A\}} u_{(x_k^1, a_m)}}^{\in [-2L,2L]\times [-2L,2L]} + \overbrace{ \sum _{\{x_l^i: (x_k^1, x_l^i)\in E_X\}} \tilde{w}_{(x_k^1, x_l^i)} \tilde{u}_{(x_k^1, x_l^i)} - \sum _{\{x_l^i: (x_l^i,x_k^1)\in E_X\}} \tilde{w}_{(x_l^i,x_k^1)} \tilde{u}_{(x_l^i,x_k^1)}}^{\in [-2L,2L]\times [-2L,2L] } \nonumber \\&\quad + \underbrace{W \tilde{u}_{(x_k^1, v_k^1)}}_{\in [-12L,12L]\times [-12L,12L]} + \underbrace{150L \tilde{u}_{(x_k^1,y_k^i)}}_{=L(-150,50)} = -\underbrace{\bar{u}_{x_k^1}}_{\in \mathbb {R}(1,1)} \end{aligned}$$
(19)

as \(\bar{u}_{x_k^i}\in N_{H(x_k^i)}\left( x_k^i\right) \subseteq \mathbb {R}(1,1)\). Note that the sets above and underneath the summation parts in (19) follow from the fact that every \(x_k\) can appear at most 2L times in the MAX-2-SAT instance and from the chosen weights in \(\Phi _{\zeta _j}^1\) and \(\Phi _k^2\). As the first three terms lie in the box \([-16L,16L]\times [-16L,16L]\), the left-hand side cannot be in \(\mathbb {R}(1,1)\), which contradicts the optimality of \(X^*\). Therefore, \(x_k^1=y_k^1\). The cases \(x_k^2=y_k^2\) and \(x_k^i=z_k^i\) (\(i=1,2\)) are analogous. \(\square \)

Claim 4.1.2

For any optimal solution, \(v_k^1=v_k^2\) holds for all \(k\in \left[ K\right] \).

Proof of Claim:

Consider the function part \(\Phi _k^2\) and let l be the number of appearances of \(x_k\) in the MAX-2-SAT instance. Choosing the halfspaces as in the previous claim as feasible sets, we can write the cone conditions (5a)–(5c) for \(v_k^1\) as

$$\begin{aligned} v_k^1&\in v_k^2 + N_{B^\circ }\left( \tilde{u}_{(v_k^1,v_k^2)}\right) , \\ x_k^1&\in v_k^1 + N_{B^\circ }\left( \tilde{u}_{(x_k^1,v_k^1)}\right) , \\ x_k^2&\in v_k^1 + N_{B^\circ }\left( \tilde{u}_{(x_k^2,v_k^1)}\right) , \\ \bar{u}_{v_k^1}&\in N_{H(v_k^1)}\left( v_k^1\right) , \end{aligned}$$

and flow conservation constraints (6) as

$$\begin{aligned} l\cdot \left( 30 \tilde{u}_{(v_k^1,v_k^2)} - 6\tilde{u}_{(x_k^1,v_k^1)} - 6\tilde{u}_{(x_k^2,v_k^1)}\right) + \bar{u}_{v_k^1} = 0. \end{aligned}$$

Now, assume that \(v_k^1\in \mathrm {int}\left( H(v_k^1)\right) \), which implies that \(N_{H(v_k^1)}\left( v_k^1\right) =\{(0,0)\}\) and \(\tilde{u}_{(v_k^1,v_k^2)} = (\lambda , -1)\) for some \(\lambda \in [-1,1]\). Then the conservation constraints yield

$$\begin{aligned} (30\lambda ,-30) = 6\tilde{u}_{(x_k^1,v_k^1)} + 6\tilde{u}_{(x_k^2,v_k^1)}, \end{aligned}$$

which is unsolvable in the second coordinate for any \(\tilde{u}_{(x_k^1,v_k^1)},\tilde{u}_{(x_k^2,v_k^1)}\in [-1,1]\times [-1,1] \), since the right-hand side's second coordinate lies in \([-12,12]\). Therefore, we must have \(v_k^1\in \mathrm {bd}\left( H(v_k^1)\right) \) and, by a similar argument, \(v_k^2\in \mathrm {bd}\left( H(v_k^2)\right) \). Now assume that \(v_k^1 \ne v_k^2\); together with the cone condition (5b), this implies that

$$\begin{aligned} \tilde{u}_{(v_k^1,v_k^2)}&\in \left\{ \ ( 1,\lambda )\ \mid \lambda \in [-1,1] \right\} ,\ \text { or }\\ \tilde{u}_{(v_k^1,v_k^2)}&\in \left\{ (-1,\lambda ) \mid \lambda \in [-1,1] \right\} . \end{aligned}$$

In the case that \(\tilde{u}_{(v_k^1,v_k^2)} = (1,\lambda )\) for a suitable \(\lambda \in [-1,1]\), we have by the conservation constraints

$$\begin{aligned} l\cdot \left( 30 \tilde{u}_{(v_k^1,v_k^2)} - 6\tilde{u}_{(x_k^1,v_k^1)} - 6\tilde{u}_{(x_k^2,v_k^1)}\right) \in \mathbb {R}_{\ge 0} (0,-1) . \end{aligned}$$

As \(\tilde{u}_{(x_k^1,v_k^1)}, \tilde{u}_{(x_k^2,v_k^1)} \in [-1,1]\times [-1,1]\), this equation is unsolvable in the first coordinate. The case \(\tilde{u}_{(v_k^1,v_k^2)} = (-1,\lambda )\) is analogous; consequently, we must have \(v_k^1=v_k^2\). \(\square \)

Claim 4.1.3

Let a MAX-2-SAT instance and a corresponding instance of (\(P_{R}^{l_1}\)) as described before be given. Then the following hold.

  1. (a)

    Any optimal solution of the (\(P_{R}^{l_1}\)) instance is equivalent to an optimal solution of the MAX-2-SAT instance by

    $$\begin{aligned} \left. \begin{array}{l} x_k^1 = y_k^1 = z_k^1 = (0,0),\\ x_k^2 = y_k^2 = z_k^2 = (0,3), \\ v_k^1= v_k^2 = (0,1.5) \end{array} \right\}&\iff \ x_k = \textsc {False}\end{aligned}$$
    (20a)

    and

    $$\begin{aligned} \left. \begin{array}{l} x_k^1 = y_k^1 = z_k^1 = (1,3),\\ x_k^2 = y_k^2 = z_k^2 = (1,0), \\ v_k^1= v_k^2 = (1,1.5) \end{array}\right\}&\iff x_k = \textsc {True}. \end{aligned}$$
    (20b)
  2. (b)

    Consider an assignment of the MAX-2-SAT instance and an equivalent solution of the (\(P_{R}^{l_1}\)) instance according to Eq. (20). Each clause of the form \(x_k\vee x_l\) or \(\bar{x}_k\vee \bar{x}_l\) (Form 1) adds the value 45 to the objective \(\Phi \) if it corresponds to a true clause, and 49 if it corresponds to a false clause. Each clause of the form \(x_k\vee \bar{x}_l\) or \(\bar{x}_k\vee x_l\) (Form 2) adds 44 if it is true and 48 if it is false.

  3. (c)

    Any other solution with \((x_k^1, x_k^2)\in \mathbb {R}^2\times \mathbb {R}^2{\setminus } \left\{ \left( (0,0),(0,3)\right) ,\left( (1,3),(1,0)\right) \right\} \) has a higher objective value than a solution of the form in (20).

Proof of claim

By Claims 4.1.1 and 4.1.2, we have \(x_k^i = y_k^i = z_k^i\) and \(v_k^1= v_k^2\) for \(k\in \left[ K\right] \) and \(i\in \left[ 2\right] \). As a direct consequence of these two claims and the choice of the forbidden regions and the demand points, it immediately follows that for any optimal solution

$$\begin{aligned} x_k^1 \in \left\{ (0,0), (1,3) \right\} , \quad x_k^2 \in \left\{ (1,0), (0,3) \right\} , \end{aligned}$$
(21)

as otherwise, we could get a smaller objective function value by moving all \(x_k^i\) closer to the closest point in the respective sets.

The remainder of the proof is by complete enumeration of all possible choices of \(x_k^i\) and \(x_l^i\) according to (21). Table 2 shows the objective function values of \(\Phi _{\zeta _j}\) and \(\Phi _k^2+\Phi _l^2\).

Table 2 Table of possible optimal objective values

Note that by fixing all \(x_k^i\), the summation parts \(\Phi _{k}^2+\Phi _l^2\) are independent of all other variables and yield a constrained location problem that can be solved by linear programming algorithms.

Consider the first four solutions in Table 2. For every true clause \(\zeta _j\) of the form \(x_k\vee x_l\) or \(\bar{x}_k\vee \bar{x}_l\) [according to Eq. (20)], we get \(\Phi _{\zeta _j}^1 + \Phi _{k}^2+\Phi _l^2 = 45\), while \(\Phi _{\zeta _j}^1 + \Phi _{k}^2+\Phi _l^2 = 49\) for every false clause. For every clause \(\zeta _j\) of the form \(\bar{x}_k\vee x_l\) or \({x}_k\vee \bar{x}_l\), we get \(\Phi _{\zeta _j}^1 + \Phi _{k}^2+\Phi _l^2 = 44\) for a true clause and \(\Phi _{\zeta _j}^1 + \Phi _{k}^2+\Phi _l^2 =48\) for a false clause. The optimal values for \(v_k^i\) are given by

$$\begin{aligned} v_k^i = {\left\{ \begin{array}{ll} (0,1.5) &{} \text { if } x_k^1 = (0,0) \text { and } x_k^2 = (0,3),\\ (1,1.5) &{} \text { if } x_k^1 = (1,3) \text { and } x_k^2 = (1,0). \end{array}\right. } \end{aligned}$$

For all other solutions in the table, the values of \(\Phi _{\zeta _j}^1 + \Phi _{k}^2+\Phi _l^2\) are strictly greater than for the first four solutions; therefore, any optimal solution fulfills

$$\begin{aligned} x_k^1 = (0,0) \iff x_k^2 = (0,3),\quad x_k^1 = (1,3) \iff x_k^2 = (1,0). \end{aligned}$$
(22)

This proves parts (b) and (c). To show part (a), recall that L is the total number of clauses, and let \(L_1\) and \(L_2\) be the number of clauses of the form \(x_k\vee x_l\) or \(\bar{x}_k\vee \bar{x}_l\) (Form 1) and of the form \(x_k\vee \bar{x}_l\) or \(\bar{x}_k\vee x_l\) (Form 2), respectively.

Now assume an optimal point \(X^*\) of the (\(P_{R}^{l_1}\)) instance yields, via Eq. (20), a solution to MAX-2-SAT with \(p_1^*\) true clauses of Form 1 and \(p_2^*\) true clauses of Form 2. Assume there exists a better solution to MAX-2-SAT with \(q^{\prime }:=p_1^{\prime }+p_2^{\prime }>p_1^*+p_2^*=: q^*\). By Eq. (20), this would yield a solution \(X^{\prime }\) with objective function value \(\Phi (X^{\prime })=45p_1^{\prime }+49(L_1-p_1^{\prime })+ 44 p_2^{\prime }+48(L_2-p_2^{\prime })\). Comparing the two objectives, we get

$$\begin{aligned} \begin{array}{rl} \Phi (X^*)-\Phi (X^{\prime }) &{}= (45p_1^*+49(L_1-p_1^*)+ 44 p_2^*+48(L_2-p_2^*))\\ &{}\qquad \quad -(45p_1^{\prime }+49(L_1-p_1^{\prime })+ 44 p_2^{\prime }+48(L_2-p_2^{\prime }))\\ &{} = 4 [(p_1^{\prime }+p_2^{\prime })- (p_1^*+p_2^*)]\\ &{} = 4 [q^{\prime }-q^*]\\ &{} > 0, \end{array} \end{aligned}$$
(23)

which contradicts the optimality of \(X^*\). \(\square \)
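Combining the clause values from part (b), the objective of a solution of the form (20) can also be written in closed form, which makes the equivalence between minimizing \(\Phi \) and maximizing the number of satisfied clauses explicit:

$$\begin{aligned} \Phi (X)=45p_1+49(L_1-p_1)+44p_2+48(L_2-p_2)=49L_1+48L_2-4(p_1+p_2), \end{aligned}$$

so, among solutions of the form (20), minimizing \(\Phi \) is equivalent to maximizing \(q=p_1+p_2\).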

Approximation algorithms

1.1 Calculating the starting point \(v_{00}\) of the grid

To calculate the starting point \(v_{00}\) of Algorithm 1, we first need to calculate the outer demand points

$$\begin{aligned} a_1&\in \arg \min _{a\in \mathcal {A}} \langle b_1^\perp , a \rangle , \end{aligned}$$
(24a)
$$\begin{aligned} a_2&\in \arg \min _{a\in \mathcal {A}} \langle b_1, a \rangle ,\end{aligned}$$
(24b)
$$\begin{aligned} a_3&\in \arg \max _{a\in \mathcal {A}} \langle b_1^\perp , a \rangle ,\end{aligned}$$
(24c)
$$\begin{aligned} a_4&\in \arg \max _{a\in \mathcal {A}} \langle b_1, a \rangle . \end{aligned}$$
(24d)

These points are shown in Fig. 7. In the following cases, Intersect denotes the intersection point of two rays.
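As a small illustration of (24a)–(24d), the following sketch selects the four outer demand points for a given direction. All names (outer_points, demand_points, b1) are illustrative, and taking \(b_1^\perp \) as the counterclockwise 90-degree rotation of \(b_1\) is an assumption made only for this sketch, not taken from the paper.

# Sketch: selecting the outer demand points a_1,...,a_4 of (24a)-(24d).
# demand_points is a list of 2D tuples, b1 a direction vector; names are illustrative.
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def outer_points(demand_points, b1):
    b1_perp = (-b1[1], b1[0])                                # assumed 90-degree rotation of b1
    a1 = min(demand_points, key=lambda a: dot(b1_perp, a))   # (24a)
    a2 = min(demand_points, key=lambda a: dot(b1, a))        # (24b)
    a3 = max(demand_points, key=lambda a: dot(b1_perp, a))   # (24c)
    a4 = max(demand_points, key=lambda a: dot(b1, a))        # (24d)
    return a1, a2, a3, a4

# Example usage: outer_points([(0, 0), (4, 1), (2, 5)], b1=(1, 0))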

1.1.1 Case \(\mathcal {R}^{u}\subseteq \mathrm {conv}\left( \mathcal {A}\right) \)

For the case \(\mathcal {R}^{u}\subseteq \mathrm {conv}\left( \mathcal {A}\right) \) (Sect. 6.1), it is enough to define the corner points of the grid as in (24e)–(24h).

Moreover, the length L and the height \(L^{\prime }\) of the grid can be reduced to

$$\begin{aligned} L&= \gamma (v_{0N}- v_{00}),\\ L^{\prime }&= \gamma (v_{N^{\prime }0}- v_{00}). \end{aligned}$$

This might be faster in practice, but the worst-case running time does not change.

1.1.2 Case \(\mathcal {R}^{u}\not \subseteq \mathrm {conv}\left( \mathcal {A}\right) \)

For the case \(\mathcal {R}^{u}\not \subseteq \mathrm {conv}\left( \mathcal {A}\right) \) (Sect. 6.2), we need to invest some more effort. Let \(L^i\) and \(L_\perp ^i\) be the side lengths of the grid in the direction of the longest extreme point \(b_1\in \mathrm {Ext}\left( B\right) \) and of its orthogonal counterpart \(b_1^\perp \in B\), respectively. We need the grid to be balanced, i.e., the distances from \(a_1,\ldots , a_4\) to their corresponding grid boundaries should be balanced. The following construction is illustrated in Fig. 12.

First compute the auxiliary points \(h_1^{\prime }\) and \(h_2^{\prime }\) according to (25a) and (25b).

The surplus of each side of the grid is given by \(s= L^i-\gamma (h_2^{\prime }-a_2) \) and \(s^{\prime }= L_\perp ^i-\gamma (h_1^{\prime }-a_1) \). Then the points on the boundary of the grid are given by

$$\begin{aligned} h_1&= a_1- \frac{s^{\prime }}{2} b_1^\perp , \end{aligned}$$
(26a)
$$\begin{aligned} h_2&= a_2- \frac{s}{2} b_1, \end{aligned}$$
(26b)
$$\begin{aligned} h_3&= h_1^{\prime } + \frac{s^{\prime }}{2} b_1^\perp , \end{aligned}$$
(26c)
$$\begin{aligned} h_4&= h_2^{\prime } + \frac{s}{2} b_1, \end{aligned}$$
(26d)

which yields the corner points \(v_{00}^i,v_{0N}^i,v_{N^{\prime }0}^i, v_{N^{\prime }N}^i\) in (27a)–(27d); a small computational sketch of this balancing step is given after Fig. 12.
Fig. 12: Calculation of corner points \(v_{00}^i,v_{0N}^i,v_{N^{\prime }0}^i, v_{N^{\prime }N}^i\) if \(\mathcal {R}^{u}\not \subseteq \mathrm {conv}\left( \mathcal {A}\right) \)

1.2 Calculating a lower bound

Lemma 6.8

In case of \(\left|\mathcal {A}\right|=1\), the value \(L^0=\max \big \{ \min _{r\in \mathrm {bd}\left( R_k\right) } \gamma (a-r) \big | k\in \left[ K\right] :a\in \mathrm {int}\left( R_k\right) \big \}\) can be calculated in \({O}\left( D_1 KR{\cdot poly (\mathcal {R})} \right) \) time, where

$$\begin{aligned} D_1 :=\left|\log _2(L^0)\right|+1. \end{aligned}$$

Proof

Recall that B is the unit ball of \(\gamma \). Since \(R_k\) is convex, the minimum is attained at one of the extreme points of \(a+\lambda B\) for some \(\lambda >0\). Starting with \(\lambda = 1\), we check whether \(a+\lambda B \subseteq R_k\) holds for at least one \(k\in \left[ K\right] \). If this is not the case, we iteratively halve \(\lambda \) until all extreme points of \(a+\lambda B\) lie in at least one \(R_k\). If it does hold, we iteratively double \(\lambda \) until the containment is violated and take the largest \(\lambda \)-value that still satisfies it. Then it is possible to approximate \(L^0\) with a lower bound in

$$\begin{aligned} {O}\left( \max \left\{ -\log _2(L^0)+1, \log _2(L^0)+1 \right\} \cdot KR{\cdot poly (\mathcal {R})} \right) , \end{aligned}$$

where \({ poly (\mathcal {R})}\) is the polynomial running time needed to decide whether a given point lies in a forbidden region, as described before. This running time is still polynomial in the encoding length of the input data by assumptions (A2) and (A3). \(\square \)
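The halving/doubling search used in this proof can be sketched as follows, assuming an oracle contains(k, p) that decides whether a point p lies in the convex forbidden region \(R_k\) in time \( poly (\mathcal {R})\); the function and variable names are illustrative and not taken from the paper.

# Sketch of the search for a lower bound on L^0 from the proof of Lemma 6.8 (|A| = 1).
# a: the single demand point, extreme_pts_B: extreme points of the unit ball B of gamma,
# regions: the indices k in [K], contains(k, p): membership oracle for R_k.
def ball_inside_some_region(a, lam, extreme_pts_B, regions, contains):
    # a + lam*B lies in the convex set R_k iff all extreme points of a + lam*B do
    return any(
        all(contains(k, (a[0] + lam * e[0], a[1] + lam * e[1])) for e in extreme_pts_B)
        for k in regions
    )

def lower_bound_L0(a, extreme_pts_B, regions, contains):
    lam = 1.0
    if ball_inside_some_region(a, lam, extreme_pts_B, regions, contains):
        # grow until containment fails; keep the last value that still satisfied it
        while ball_inside_some_region(a, 2 * lam, extreme_pts_B, regions, contains):
            lam *= 2
    else:
        # shrink until the scaled ball fits into some forbidden region
        while not ball_inside_some_region(a, lam, extreme_pts_B, regions, contains):
            lam /= 2
    return lam  # lower bound on L^0; the loop runs O(|log2(L^0)| + 1) times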

Special problem structures: dynamic programming

Theorem 7.1

Algorithm 4 finds an optimal solution of (\(P_D^{ Tree }\)), provided that \(G_X=(V_X,E_X)\) is a tree.

Proof

Let \(T_{k^{\prime }}=(V(T_{k^{\prime }}),E(T_{k^{\prime }}))\) be the tree with root \(k^{\prime }\) in which all arcs point away from \(k^{\prime }\). We show by induction on the height \(h({k^{\prime }})\) of the tree that for each \(v_i^{k^{\prime }}\in V_{k^{\prime }}\), the subtree of \(G_D\) rooted at \(v_i^{k^{\prime }}\) and iteratively defined by its successors \( succ (v_i^{k^{\prime }})\) yields an optimal solution to

(\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\))

That is, it is a solution to problem (\(P_D^{ Tree }\)) when fixing location \(x_{k^{\prime }}\) to \(v_i^{k^{\prime }}\). In addition, the objective value is equal to \(w(v_i^{k^{\prime }})\).

  • Induction Base: We consider \(h({k^{\prime }})=0\) and \(h({k^{\prime }})=1\). In the first case, we have a single-facility location problem, which can easily be solved by complete enumeration as done in line 12. Therefore, assume \(h({k^{\prime }})=1\). Then, for each \(v_i^{k^{\prime }}\), problem (\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\)) reduces to

    $$\begin{aligned} \begin{aligned} \text {minimize}\quad&\sum _{\begin{array}{c} l\in children (k^{\prime });\\ m:(l,m)\in E_A \end{array}} w_{lm} \gamma (x_l-a_m) + \sum _{l\in children (k^{\prime })} \tilde{w}_{k^{\prime }l}{\gamma }(v_i^{k^{\prime }}-x_l) \\&\qquad + \sum _{m:(k^{\prime },m)\in E_A} w_{k^{\prime }m}\gamma (v_i^{k^{\prime }}-a_m) \\ \text {subject to}\quad&x_i \in V_i\qquad \qquad i\in {V(T_{k^{\prime }})}{\setminus } k^{\prime }. \end{aligned} \end{aligned}$$
    (28a)

    Since \(T_{k^{\prime }}\) is a star graph with node \(k^{\prime }\) as its internal node, this can be decomposed into \(\left|E(T_{k^{\prime }})\right|\) subproblems

    $$\begin{aligned} \begin{array}{ll} \text {minimize}\quad &{}\displaystyle \sum _{m:(l,m)\in E_A} w_{lm} \gamma (x-a_m) + \tilde{w}_{k^{\prime }l}{\gamma }(v_i^{k^{\prime }}-x)\\ \text {subject to}\quad &{} x\in V_l \end{array} \end{aligned}$$
    (28b)

    for \(l\in children (k^{\prime })\). In the first iteration of Algorithm 4, \(s=0\) and only leaves of \(G_X\) are considered. Each node is assigned its node cost \(w(v_i^k)=c(v_i^k) = \sum _{m:(k,m)\in E_A} w_{km} \gamma (v_i^k-a_m)\). In the second and final iteration for \(s=h({k^{\prime }})=1\), we have that \( height (s)=\{k^{\prime }\}\). In line 9 the minimum of

    $$\begin{aligned} \min _{j\in \left[ \left|V_l\right|\right] } c(v_i^{k^{\prime }}, v_j^l) + w(v_j^l) \end{aligned}$$

    is taken for each \(l\in children (k^{\prime })\), which is equivalent to (28b) since

    $$\begin{aligned} w(v_j^l)= \sum _{m:(l,m)\in E_A} w_{lm} \gamma (v_j^l-a_m) \end{aligned}$$

    and

    $$\begin{aligned} c(v_i^{k^{\prime }}, v_j^l) = \tilde{w}_{k^{\prime }l}{\gamma }(v_i^{k^{\prime }}-v_j^l). \end{aligned}$$

    Adding up all the subproblems in (28b), we obtain \(w(v_i^{k^{\prime }})\) as the objective function value of (28a).

  • Induction Step, \(h(k^{\prime })\mapsto h(k^{\prime })+1\): Let \(k^{\prime }\) again be the root node of \(T_{k^{\prime }}\). Fix \(x_{k^{\prime }}=v_i^{k^{\prime }}\in V_{k^{\prime }}\) to obtain (\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\)). As \(T_{k^{\prime }}\) is a tree, problem (\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\)) decomposes into \(\left| children (k^{\prime })\right|\) subproblems. For each \(l\in children (k^{\prime })\), denote by \(P_l\) the \(l\)th subproblem and fix \(x_l=v_j^l\) for a \(v_j^l\in V_l\). By the induction hypothesis, Algorithm 4 finds an optimal solution to the subproblem \(P_l\) with \(x_l=v_j^l\) fixed and objective value \(w(v_j^l)\). Therefore,

    $$\begin{aligned} \min _{j\in \left[ \left|V_l\right|\right] } c(v_i^{k^{\prime }}, v_j^l) + w(v_j^l) \end{aligned}$$

    minimizes subproblem \(P_l\) with additional demand point \(v_i^{k^{\prime }}\) for each \(l\in children (k^{\prime })\) (cf. line 9). Hence,

    $$\begin{aligned} w(v_i^{k^{\prime }}) = c(v_i^{k^{\prime }}) + \sum _{l\in children (k^{\prime })} \min _{j\in \left[ \left|V_l\right|\right] } c(v_i^{k^{\prime }}, v_j^l) + w(v_j^l) \end{aligned}$$

    is equivalent to (\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\)). Since the algorithm iterates over all \(l\in children (k^{\prime })\), problem (\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\)) is minimized for \(v_i^{k^{\prime }}\).

As a consequence of the induction, any \(v_i^{k^{\prime }}\in \arg \min _{v_i^{k^{\prime }}\in V_{k^{\prime }}} w(v_i^{k^{\prime }})\) minimizes the overall problem (\(P_D^{ Tree }\)) with objective function value \(w(v_i^{k^{\prime }})\). \(\square \)
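To make the recursion behind Algorithm 4 concrete, the following is a minimal bottom-up sketch of computing the values \(w(v_i^{k})\), under the assumption that the candidate sets \(V_k\), the node costs and the arc costs are available as containers and callables; all names (solve_tree, candidates, node_cost, arc_cost) are illustrative and this is not the paper's implementation.

# Sketch of the dynamic program for (P_D^Tree) on a rooted tree of facilities.
# children[k]: children of facility k, candidates[k]: finite candidate set V_k,
# node_cost(k, v): sum_m w_km * gamma(v - a_m), arc_cost(k, v, l, u): w~_kl * gamma(v - u).
def solve_tree(root, children, candidates, node_cost, arc_cost):
    w = {}      # w[(k, v)]: optimal value of the subtree rooted at k when x_k = v
    best = {}   # best[(k, v)][l]: minimizing candidate of child l (for backtracking)

    def process(k):
        for l in children[k]:
            process(l)                          # children first (bottom-up)
        for v in candidates[k]:
            total = node_cost(k, v)
            best[(k, v)] = {}
            for l in children[k]:
                u_best = min(candidates[l],
                             key=lambda u: arc_cost(k, v, l, u) + w[(l, u)])
                total += arc_cost(k, v, l, u_best) + w[(l, u_best)]
                best[(k, v)][l] = u_best
            w[(k, v)] = total

    process(root)
    v_root = min(candidates[root], key=lambda v: w[(root, v)])
    return w[(root, v_root)], v_root, best

A minimizer of the overall problem is then recovered from best by standard backtracking from v_root towards the leaves.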


Cite this article

Maier, A., Hamacher, H.W. Complexity results on planar multifacility location problems with forbidden regions. Math Meth Oper Res 89, 433–484 (2019). https://doi.org/10.1007/s00186-019-00670-0
