Abstract
In this paper we deal with the planar location problem with forbidden regions. We consider the median objective with block norms and show that this problem is APX-hard, even when considering the Manhattan metric as distance function and polyhedral forbidden areas. As a direct consequence, the problem cannot be approximated in polynomial time within a factor of 1.0019, unless \(P=NP\). In addition, we give a dominating set that contains at least one optimal solution. Based on this result, an approximation algorithm is derived. For special instances, it is possible to improve the algorithm. These instances include problems with bounded forbidden areas and a special interrelation structure between the new facilities. For uniform weights, this algorithm becomes an FPTAS.










Notes
(\(P_D\)) can be solved as a quadratic assignment problem. A recent overview of the problem and of known solution approaches can be found in Laporte et al. (2015).
References
Aneja YP, Parlar M (1994) Technical note—algorithms for Weber facility location in the presence of forbidden regions and/or barriers to travel. Transport Sci 28(1):70–76
Ausiello G, Protasi M, Marchetti-Spaccamela A, Gambosi G, Crescenzi P, Kann V (1999) Complexity and approximation: combinatorial optimization problems and their approximability properties, 1st edn. Springer, Secaucus
Batta R, Ghose A, Palekar US (1989) Locating facilities on the Manhattan metric with arbitrarily shaped barriers and convex forbidden regions. Transport Sci 23(1):26–36
Butt SE, Cavalier TM (1997) Facility location in the presence of congested regions with the rectilinear distance metric. Socio-Econ Plan Sci 31(2):103–113
Canbolat MS, Wesolowsky GO (2010) The rectilinear distance Weber problem in the presence of a probabilistic line barrier. Eur J Oper Res 202(1):114–121
Drezner Z (2013) Solving planar location problems by global optimization. Logist Res 6(1):17–23
Hamacher HW, Nickel S (1994) Combinatorial algorithms for some 1-facility median problems in the plane. Eur J Oper Res 79(2):340–351
Hamacher HW, Nickel S (1995) Restricted planar location problems and applications. Nav Res Logist (NRL) 42(6):967–992
Hamacher HW, Schöbel A (1997) A note on center problems with forbidden polyhedra. Oper Res Lett 20(4):165–169
Håstad J (2001) Some optimal inapproximability results. J ACM 48(4):798–859
Horst R, Pardalos PM (eds) (1995) Handbook of global optimization. Nonconvex optimization and its applications. Kluwer Academic Publishers, Dordrecht
Idrissi H, Lefebvre O, Michelot C (1989) Duality for constrained multifacility location problems with mixed norms and applications. Ann Oper Res 18(1):71–92
Käfer B, Nickel S (2001) Error bounds for the approximative solution of restricted planar location problems. Eur J Oper Res 135(1):67–85
Khot S, Kindler G, Mossel E, O’Donnell R (2007) Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM J Comput 37(1):319–357
Klamroth K (2002) Single-facility location problems with barriers. Springer series in operations research and financial engineering. Springer, New York
Laporte G, Nickel S, Saldanha da Gama F (2015) Location science. Springer, Cham
Lefebvre O, Michelot C, Plastria F (1990) Geometric interpretation of the optimality conditions in multifacility location and applications. J Optim Theory Appl 65(1):85–101
Michelot C (1987) Localization in multifacility location theory. Eur J Oper Res 31(2):177–184
Nickel S (1995) Discretization of planar location problems. Berichte aus der Mathematik, Shaker
Nickel S, Dudenhöffer E (1997) Weber’s problem with attraction and repulsion under polyhedral gauges. J Glob Optim 11(4):409–432
Nickel S, Fliege J (1999) An interior point method for multifacility location problems with forbidden regions. Technical report 23, Fachbereich Mathematik
Oğuz M, Bektaş T, Bennell JA, Fliege J (2016) A modelling framework for solving restricted planar location problems using phi-objects. J Oper Res Soc 67(8):1080–1096
Oğuz M, Bektaş T, Bennell JA (2018) Multicommodity flows and Benders decomposition for restricted continuous location problems. Eur J Oper Res 266(3):851–863
Rockafellar RT (1972) Convex analysis. Princeton mathematical series. Princeton University Press, Princeton
Rodríguez-Chía AM, Nickel S, Puerto J, Fernández FR (2000) A flexible approach to location problems. Math Meth Oper Res 51(1):69–89
Tuy H (2013) Convex analysis and global optimization. Nonconvex optimization and its applications. Springer, New York
Woeginger GJ (1998) A comment on a minmax location problem. Oper Res Lett 23(1):41–43
Additional information
This work was supported in part by the German Ministry of Research and Technology (BMBF) under grant RobEZiS, FKZ 13N13198.
Appendices
APX-hardness: proof of claims
Claim 4.1.1
For any optimal solution, \(x_k^i=z_k^i=y_k^i\) holds for all \(k\in \left[ K\right] \), \(i=1,2\).
Proof of Claim:
We will show that \(x_k^i=y_k^i\) holds for any optimal solution; the proof for \(x_k^i=z_k^i\) is analogous.
Assume that \(x_k^i\ne y_k^i\) in some optimal solution. Note that \(x_k^i\in \mathrm {int}\left( R(y_k^i)\right) \); otherwise we could set \(x_k^i= y_k^i\) to obtain a smaller objective function value, since \(x_k^i\) is the only facility interacting with \(y_k^i\). Therefore, we only consider the part \(\Phi _k^3\) of the objective function and show that the necessary optimality conditions of Theorem 3.5 cannot be satisfied. As in Theorem 3.5, one can write a constrained location problem with halfspaces as constraints and optimal solution \(X^*\) as follows. Define
and for \(p\in \{{y}_k^i, {z}_k^i,{v}_k^i \mid k\in \left[ K\right] , i\in \left[ 2\right] \}\):
The halfspaces have the property that \(H(p)=\mathbb {R}^2{\setminus }\mathrm {int}\left( R(p)\right) \) for \(p=y_k^1,y_k^2,z_k^1,z_k^2,v_k^1,v_k^2\) and \(H(x_k^i)\subseteq \mathbb {R}^2{\setminus }\mathrm {int}\left( R(x_k^i)\right) \) with \(\mathrm {bd}\left( H(x_k^i)\right) \subset \mathrm {bd}\left( R_k(x_k^i)\right) \) (see Fig. 11).
The cone condition (5a) for \(x_k^i\) and \(y_k^i\) can be written as:
for a suitable \(\tilde{u}_{(x_k^i,y_k^i)}\in B^\circ \), where \(B^\circ = [-1,1]\times [-1,1]\) is the unit ball of the \(l_\infty \)-norm. As \(x_k^i\ne y_k^i\), it follows that \(\tilde{u}_{(x_k^i,y_k^i)}\in \mathrm {bd}\left( B^\circ \right) \). In the following, we will assume that \(i=1\) (the case \(i=2\) is treated analogously).
By optimality, \(y_k^1\) is located at the shortest \(l_1\)-distance to \(x_k^1\) (as \(y_k^1\) interacts only with \(x_k^1\)). This is achieved whenever \(y_k^1\in \mathrm {bd}\left( H(y_k^1)\right) \) and the y-coordinates of \(x_k^1\) and \(y_k^1\) coincide. This yields \(\tilde{u}_{(x_k^1,y_k^1)}=(-1, \lambda )\) for some \(\lambda \in [-1,1]\). Moreover, constraints (5c) and (6) (see the necessary optimality conditions in Theorem 3.5) with respect to \(y_k^1\) can be written as
Hence, substituting \((-1, \lambda )\) for \(\tilde{u}_{(x_k^1,y_k^1)}\) and using that \(N_{H(y_k^1)}\left( y_k^1\right) =\mathbb {R}_{\ge 0} (- 3,1)\) since \(y_k^1\in \mathrm {bd}\left( H(y_k^1)\right) \), we get
which yields \(\lambda = \nicefrac {1}{3}\). The conservation constraints (6) with respect to \(x_k^1\) can be written, for suitable weights \(W\in \mathbb {R}_{\ge 0}\) and \(\tilde{w}_{(x_k^1, x_l^i)},\tilde{w}_{(x_l^i,x_k^1)} \le 1\), as
since \(\bar{u}_{x_k^i}\in N_{H(x_k^i)}\left( x_k^i\right) \subseteq \mathbb {R}(1,1)\). Note that the sets above and below the summation signs in (19) are due to the fact that every \(x_k\) can appear at most 2L times in the MAX-2-SAT instance and due to the chosen weights in \(\Phi _{\zeta _j}^1\) and \(\Phi _k^2\). As the first three terms lie in the box \([-16L,16L]\times [-16L,16L]\), the left-hand side cannot lie in \(\mathbb {R}(1,1)\), which contradicts the optimality of \(X^*\). Therefore, \(x_k^1=y_k^1\). The cases \(x_k^2=y_k^2\) and \(x_k^i=z_k^i\) (\(i=1,2\)) are analogous. \(\square \)
Claim 4.1.2
For any optimal solution, \(v_k^1=v_k^2\) holds for all \(k\in \left[ K\right] \).
Proof of Claim:
Consider the function part \(\Phi _k^2\) and let l be the number of appearances of \(x_k\) in the MAX-2-SAT instance. Choosing the halfspaces as feasible sets as in the previous claim, we can write the cone conditions (5a)–(5c) for \(v_k^1\) as
and flow conservation constraints (6) as
Now, assume that \(v_k^1\in \mathrm {int}\left( H(v_k^1)\right) \), which implies that \(N_{H(v_k^1)}\left( v_k^1\right) =\{(0,0)\}\) and \(\tilde{u}_{(v_k^1,v_k^2)} = (\lambda , -1)\) for a \(\lambda \in [-1,1]\). Then the conservation constraints yield
which is unsolvable in the second coordinate for any \(\tilde{u}_{(x_k^1,v_k^1)},\tilde{u}_{(x_k^2,v_k^1)}\in [-1,1]\times [-1,1] \). Therefore, we must have \(v_k^1\in \mathrm {bd}\left( H(v_k^1)\right) \) and, by a similar argument, \(v_k^2\in \mathrm {bd}\left( H(v_k^2)\right) \). If \(v_k^1 \ne v_k^2\), then together with the cone condition (5b) this implies that
In the case that \(\tilde{u}_{(v_k^1,v_k^2)} = (1,\lambda )\) for a suitable \(\lambda \in [-1,1]\), we have by the conservation constraints
Since \(\tilde{u}_{(x_k^1,v_k^1)}, \tilde{u}_{(x_k^2,v_k^1)} \in [-1,1]\times [-1,1]\), this equation is unsolvable in the first coordinate. The case \(\tilde{u}_{(v_k^1,v_k^2)} = (-1,\lambda )\) is analogous; consequently, we must have \(v_k^1=v_k^2\). \(\square \)
Claim 4.1.3
Let a MAX-2-SAT instance and a corresponding instance of (\(P_{R}^{l_1}\)), constructed as described before, be given. Then the following statements hold.
- (a) Any optimal solution of the (\(P_{R}^{l_1}\)) instance is equivalent to an optimal solution of the MAX-2-SAT instance via
  $$\begin{aligned} \left. \begin{array}{l} x_k^1 = y_k^1 = z_k^1 = (0,0),\\ x_k^2 = y_k^2 = z_k^2 = (0,3), \\ v_k^1= v_k^2 = (0,1.5) \end{array} \right\}&\iff \ x_k = \textsc {False}\end{aligned}$$ (20a)
  and
  $$\begin{aligned} \left. \begin{array}{l} x_k^1 = y_k^1 = z_k^1 = (1,3),\\ x_k^2 = y_k^2 = z_k^2 = (1,0), \\ v_k^1= v_k^2 = (1,1.5) \end{array}\right\}&\iff x_k = \textsc {True}. \end{aligned}$$ (20b)
- (b) Consider an assignment of the MAX-2-SAT instance and an equivalent solution of the (\(P_{R}^{l_1}\)) instance according to Eq. (20). For each clause of the form \(x_k\vee x_l\) or \(\bar{x}_k\vee \bar{x}_l\) (Form 1), the value 45 is added to the objective \(\Phi \) if the corresponding clause is true, and the value 49 if it is false. For clauses of the form \(x_k\vee \bar{x}_l\) or \(\bar{x}_k\vee x_l\) (Form 2), the value 44 is added if the corresponding clause is true, and 48 if it is false.
- (c) Any other solution with \((x_k^1, x_k^2)\in \mathbb {R}^2\times \mathbb {R}^2{\setminus } \left\{ \left( (0,0),(0,3)\right) ,\left( (1,3),(1,0)\right) \right\} \) has a higher objective value than a solution of the form in (20).
Proof of Claim:
By Claims 4.1.1 and 4.1.2 we have \(x_k^i = y_k^i = z_k^i\) and \(v_k^1= v_k^2\) for all \(k\in \left[ K\right] \) and \(i\in \left[ 2\right] \). As a direct consequence of these two claims and of the choice of the forbidden regions and demand points, it immediately follows that for any optimal solution
since otherwise we could obtain a smaller objective function value by moving all \(x_k^i\) closer to the closest point in the respective sets.
The remainder of the proof is by complete enumeration of all possible choices of \(x_k^i\) and \(x_l^i\) according to (21). Table 2 lists the objective function values of \(\Phi _{\zeta _j}\) and \(\Phi _k^2+\Phi _l^2\).
Note that, by fixing all \(x_k^i\), the summation parts \(\Phi _{k}^2+\Phi _l^2\) become independent of all other variables and yield a constrained location problem that can be solved by linear programming algorithms (a sketch of this linearization is given after the proof).
Consider the first four solutions in Table 2. For every true clause \(\zeta _j\) of the form \(x_k\vee x_l\) or \(\bar{x}_k\vee \bar{x}_l\) [according to Eq. (20)], we get \(\Phi _{\zeta _j}^1 + \Phi _{k}^2+\Phi _l^2 = 45\), while \(\Phi _{\zeta _j}^1 + \Phi _{k}^2+\Phi _l^2 = 49\) for every false clause. For every clause \(\zeta _j\) of the form \(\bar{x}_k\vee x_l\) or \({x}_k\vee \bar{x}_l\), we get \(\Phi _{\zeta _j}^1 + \Phi _{k}^2+\Phi _l^2 = 44\) for a true clause and \(\Phi _{\zeta _j}^1 + \Phi _{k}^2+\Phi _l^2 =48\) for a false clause, respectively. The optimal values for \(v_k^i\) are given by
For all other solutions in the table, the values of \(\Phi _{\zeta _j}^1 + \Phi _{k}^2+\Phi _l^2\) are strictly greater than for the first four solutions; therefore, any optimal solution fulfills
This proves parts (b) and (c). To show part (a), recall that L is the total number of clauses and let \(L_1\) and \(L_2\) be the number of clauses of Form 1 (\(x_k\vee x_l\) or \(\bar{x}_k\vee \bar{x}_l\)) and of Form 2 (\(x_k\vee \bar{x}_l\) or \(\bar{x}_k\vee x_l\)), respectively.
Now assume that an optimal point \(X^*\) of the (\(P_{R}^{l_1}\)) instance yields, by Eq. (20), a solution to MAX-2-SAT with \(p_1^*\) true clauses of Form 1 and \(p_2^*\) true clauses of Form 2, and assume there exists a better solution to MAX-2-SAT with \(q^{\prime }:=p_1^{\prime }+p_2^{\prime }>p_1^*+p_2^*=: q^*\). By Eq. (20) this would yield a solution \(X^{\prime }\) with objective function value \(\Phi (X^{\prime })=45p_1^{\prime }+49(L_1-p_1^{\prime })+ 44 p_2^{\prime }+48(L_2-p_2^{\prime })=49L_1+48L_2-4q^{\prime }\). Comparing the two objective values, we get \(\Phi (X^{\prime })-\Phi (X^*)=-4(q^{\prime }-q^*)<0\),
which is a contradiction to the optimality of \(X^*\). \(\square \)
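As remarked in the proof of Claim 4.1.3, once all \(x_k^i\) are fixed, each remaining part \(\Phi _{k}^2+\Phi _l^2\) is a constrained \(l_1\)-median problem, which can be solved by linear programming. The following minimal sketch shows the standard linearization of the \(l_1\)-distances via auxiliary variables; it is written in Python with illustrative data, the helper name l1_median_lp is ours, and it is not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a constrained single-facility
# l1-median problem solved as a linear program, as mentioned in the proof.
# The feasible set is assumed to be given by linear inequalities G x <= h.
import numpy as np
from scipy.optimize import linprog

def l1_median_lp(points, weights, G, h):
    """Minimize sum_m w_m * ||x - a_m||_1 over x in R^2 subject to G x <= h."""
    M = len(points)
    n_vars = 2 + 2 * M          # x = (x_1, x_2) plus one t_{m,d} per point and coordinate
    c = np.zeros(n_vars)
    for m in range(M):
        c[2 + 2 * m: 4 + 2 * m] = weights[m]   # objective: sum_m w_m (t_{m,1} + t_{m,2})
    A_ub, b_ub = [], []
    for m, a in enumerate(points):
        for d in range(2):
            for sign in (+1.0, -1.0):
                # sign*(x_d - a_d) <= t_{m,d}
                row = np.zeros(n_vars)
                row[d] = sign
                row[2 + 2 * m + d] = -1.0
                A_ub.append(row)
                b_ub.append(sign * a[d])
    for g_row, h_val in zip(G, h):             # feasibility constraints on x only
        row = np.zeros(n_vars)
        row[:2] = g_row
        A_ub.append(row)
        b_ub.append(h_val)
    bounds = [(None, None)] * 2 + [(0, None)] * (2 * M)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[:2], res.fun

# Illustrative example: two demand points and the single halfspace x_1 >= 0.5.
x_opt, value = l1_median_lp([(0.0, 0.0), (1.0, 3.0)], [1.0, 1.0],
                            G=[(-1.0, 0.0)], h=[-0.5])
```

The linearization introduces two auxiliary variables per demand point; any LP solver can be used in place of the one shown here.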
Approximation algorithms
1.1 Calculating the starting point \(v_{00}\) of the grid
To calculate the starting point \(v_{00}\) of Algorithm 1, we first need to calculate the outer demand points
These points are shown in Fig. 7. In the following cases, Intersect denotes the intersection point of two rays.
1.1.1 Case \(\mathcal {R}^{u}\subseteq \mathrm {conv}\left( \mathcal {A}\right) \)
For the case with \(\mathcal {R}^{u}\subseteq \mathrm {conv}\left( \mathcal {A}\right) \) (Sect. 6.1) it is enough to define the corner points of the grid as




Moreover, the length L and the height \(L^{\prime }\) of the grid can be reduced to
This might be faster in practice, but the worst-case running time does not change.
1.1.2 Case \(\mathcal {R}^{u}\not \subseteq \mathrm {conv}\left( \mathcal {A}\right) \)
For the case \(\mathcal {R}^{u}\not \subseteq \mathrm {conv}\left( \mathcal {A}\right) \) (Sect. 6.2), we need to invest more effort than before. Let \(L^i\) and \(L_\perp ^i\) be the side lengths of the grid in the direction of the longest extreme point \(b_1\in \mathrm {Ext}\left( B\right) \) and of its orthogonal counterpart \(b_1^\perp \in B\), respectively. We need the grid to be balanced, i.e., the distances from \(a_1,\ldots , a_4\) to their corresponding boundaries should be balanced. The process is illustrated in Fig. 12.
First compute


The surplus of each side of the grid is given by \(s= L^i-\gamma (h_2^{\prime }-a_2) \) and \(s^{\prime }= L_\perp ^i-\gamma (h_1^{\prime }-a_1) \). Then the points on the boundary of the grid are given by
which yields




1.2 Calculating a lower bound
Lemma 6.8
In the case \(\left|\mathcal {A}\right|=1\), the value \(L^0=\max \big \{ \min _{r\in \mathrm {bd}\left( R_k\right) } \gamma (a-r) \big | k\in \left[ K\right] :a\in \mathrm {int}\left( R_k\right) \big \}\) can be calculated in \({O}\left( D_1 KR{\cdot poly (\mathcal {R})} \right) \) time, where
Proof
Recall that B is the unit ball of \(\gamma \). Since \(R_k\) is convex, the minimum is attained at one of the extreme points of \(a+\lambda B\) for some \(\lambda >0\). Starting with \(\lambda = 1\), we check whether \(a+\lambda B \subseteq R_k\) is satisfied for at least one \(k\in \left[ K\right] \). If this is not the case, we iteratively halve \(\lambda \) until all extreme points lie in at least one \(R_k\). If it is the case, we iteratively double \(\lambda \) until the inclusion is no longer satisfied and take the largest \(\lambda \)-value that satisfies it. Then it is possible to approximate \(L^0\) by a lower bound in
where \({ poly (\mathcal {R})}\) is the polynomial running time needed to decide whether a given point lies in a forbidden region, as described before. This time is still polynomial in the encoding length of the input data by assumptions (A2) and (A3). \(\square \)
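The halving/doubling search described in this proof can be sketched as follows (Python; a minimal illustration, not the authors' code). The membership tests region_tests, one per forbidden region \(R_k\), and the extreme points ball_ext of the unit ball B are assumed to be available as black boxes.

```python
# Minimal sketch of the halving/doubling search from the proof of Lemma 6.8
# (not the authors' code).  `a` is the single demand point, `ball_ext` the
# extreme points of the unit ball B of gamma, and `region_tests` a list of
# hypothetical black-box membership tests, one per forbidden region R_k.
def lower_bound_radius(a, ball_ext, region_tests, max_steps=60):
    def contained(lam):
        # Since each R_k is convex, a + lam*B is contained in R_k iff all
        # extreme points of a + lam*B lie in R_k.
        pts = [(a[0] + lam * b[0], a[1] + lam * b[1]) for b in ball_ext]
        return any(all(in_region(p) for p in pts) for in_region in region_tests)

    lam = 1.0
    if contained(lam):
        # Double lambda until the inclusion fails; keep the last feasible value.
        for _ in range(max_steps):
            if not contained(2.0 * lam):
                break
            lam *= 2.0
    else:
        # Halve lambda until the inclusion holds.
        for _ in range(max_steps):
            lam /= 2.0
            if contained(lam):
                break
    return lam   # a lower bound of the kind used for L^0 in the proof
```

Each call to contained performs one membership query per extreme point of B and per forbidden region.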
Special problem structures: dynamic programming
Theorem 7.1
Algorithm 4 finds an optimal solution of (\(P_D^{ Tree }\)), provided that \(G_X=(V_X,E_X)\) is a tree.
Proof
Let \(T_{k^{\prime }}=(V(T_{k^{\prime }}),E(T_{k^{\prime }}))\) be the tree with root \(k^{\prime }\) in which all arcs point away from \(k^{\prime }\). We show by induction on the height \(h({k^{\prime }})\) of the tree that for each \(v_i^{k^{\prime }}\in V_{k^{\prime }}\), the subtree of \(G_D\) rooted at \(v_i^{k^{\prime }}\) and iteratively defined by its successors \( succ (v_i^{k^{\prime }})\) yields an optimal solution to

That is, it is a solution to problem (\(P_D^{ Tree }\)) when location \(x_{k^{\prime }}\) is fixed to \(v_i^{k^{\prime }}\). In addition, the objective value equals \(w(v_i^{k^{\prime }})\).
- Induction Base: We consider \(h({k^{\prime }})=0\) and \(h({k^{\prime }})=1\). In the first case, we have a single-facility location problem, which can easily be solved by complete enumeration as done in line 12. Therefore, assume \(h({k^{\prime }})=1\). Then, for each \(v_i^{k^{\prime }}\), problem (\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\)) reduces to
  $$\begin{aligned} \begin{aligned} \text {minimize}\quad&\sum _{\begin{array}{c} l\in children (k^{\prime });\\ m:(l,m)\in E_A \end{array}} w_{lm} \gamma (x_l-a_m) + \sum _{l\in children (k^{\prime })} \tilde{w}_{k^{\prime }l}{\gamma }(v_i^{k^{\prime }}-x_l) \\&\qquad + \sum _{m:(k^{\prime },m)\in E_A} w_{k^{\prime }m}\gamma (v_i^{k^{\prime }}-a_m) \\ \text {subject to}\quad&x_i \in V_i\qquad \qquad i\in {V(T_{k^{\prime }})}{\setminus } k^{\prime }. \end{aligned} \end{aligned}$$ (28a)
  Since \(T_{k^{\prime }}\) is a star graph with node \(k^{\prime }\) as its internal node, this can be decomposed into \(\left|E(T_{k^{\prime }})\right|\) subproblems
  $$\begin{aligned} \begin{array}{ll} \text {minimize}\quad &{}\displaystyle \sum _{m:(l,m)\in E_A} w_{lm} \gamma (x-a_m) + \tilde{w}_{k^{\prime }l}{\gamma }(v_i^{k^{\prime }}-x)\\ \text {subject to}\quad &{} x\in V_l \end{array} \end{aligned}$$ (28b)
  for \(l\in children (k^{\prime })\). In the first iteration of Algorithm 4, \(s=0\) and only the leaves of \(G_X\) are considered. Each node is assigned its node cost \(w(v_i^k)=c(v_i^k) = \sum _{m:(k,m)\in E_A} w_{km} \gamma (v_i^k-a_m)\). In the second and final iteration, for \(s=h({k^{\prime }})=1\), we have \( height (s)=\{k^{\prime }\}\). In line 9 the minimum
  $$\begin{aligned} \min _{j\in \left[ \left|V_l\right|\right] } \left( c(v_i^{k^{\prime }}, v_j^l) + w(v_j^l) \right) \end{aligned}$$
  is taken for each \(l\in children (k^{\prime })\), which is equivalent to (28b) since
  $$\begin{aligned} w(v_j^l)= \sum _{m:(l,m)\in E_A} w_{lm} \gamma (v_j^l-a_m) \end{aligned}$$
  and
  $$\begin{aligned} c(v_i^{k^{\prime }}, v_j^l) = \tilde{w}_{k^{\prime }l}{\gamma }(v_i^{k^{\prime }}-v_j^l). \end{aligned}$$
  Adding up all the subproblems in (28b), we obtain \(w(v_i^{k^{\prime }})\) as the objective function value of (28a).
- Induction Step, \(h(k^{\prime })\mapsto h(k^{\prime })+1\): Let \(k^{\prime }\) again be the root node of \(T_{k^{\prime }}\). Fix \(x_{k^{\prime }}=v_i^{k^{\prime }}\in V_{k^{\prime }}\) to obtain (\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\)). As \(T_{k^{\prime }}\) is a tree, problem (\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\)) decomposes into \(\left| children (k^{\prime })\right|\) subproblems. For each \(l\in children (k^{\prime })\), denote by \(P_l\) the lth subproblem and fix \(x_l=v_j^l\) for a \(v_j^l\in V_l\). By the induction hypothesis, Algorithm 4 finds an optimal solution to the subproblem \(P_l\) with \(x_l=v_j^l\) fixed and objective value \(w(v_j^l)\). Therefore,
  $$\begin{aligned} \min _{j\in \left[ \left|V_l\right|\right] } \left( c(v_i^{k^{\prime }}, v_j^l) + w(v_j^l) \right) \end{aligned}$$
  minimizes subproblem \(P_l\) with additional demand point \(v_i^{k^{\prime }}\) for each \(l\in children (k^{\prime })\) (cf. line 9). Hence,
  $$\begin{aligned} w(v_i^{k^{\prime }}) = c(v_i^{k^{\prime }}) + \sum _{l\in children (k^{\prime })} \min _{j\in \left[ \left|V_l\right|\right] } \left( c(v_i^{k^{\prime }}, v_j^l) + w(v_j^l) \right) \end{aligned}$$
  equals the optimal objective value of (\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\)). Since the algorithm iterates over all \(l\in children (k^{\prime })\), problem (\(P_D^{ Tree } \big |{x_{k^{\prime }}=v_i^{k^{\prime }}}\)) is minimized for \(v_i^{k^{\prime }}\).
As a consequence of the induction, any \(v_i^{k^{\prime }}\in \arg \min _{v\in V_{k^{\prime }}} w(v)\) minimizes the overall problem (\(P_D^{ Tree }\)) with objective function value \(w(v_i^{k^{\prime }})\). \(\square \)
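The recursion analyzed in this proof can be sketched as follows (Python; a minimal illustration, not the authors' Algorithm 4). The interfaces candidates, children, node_cost and edge_cost are hypothetical placeholders for the candidate sets \(V_k\), the tree structure of \(G_X\), the costs \(c(v_i^k)\) and the costs \(c(v_i^{k^{\prime }}, v_j^l)\), respectively.

```python
# Minimal sketch (not the authors' Algorithm 4) of the tree dynamic program:
# w(v) = c(v) + sum over children l of min_u ( c(v, u) + w(u) ).
def tree_dp(root, candidates, children, node_cost, edge_cost):
    w = {}        # w[(k, v)]: optimal subtree value with x_k fixed to candidate v
    choice = {}   # choice[(k, v, l)]: minimizing candidate of child l, for backtracking

    def solve(k):
        for l in children[k]:
            solve(l)                              # post-order: children before the parent
        for v in candidates[k]:
            total = node_cost(k, v)
            for l in children[k]:
                u_best = min(candidates[l],
                             key=lambda u: edge_cost(k, l, v, u) + w[(l, u)])
                choice[(k, v, l)] = u_best
                total += edge_cost(k, l, v, u_best) + w[(l, u_best)]
            w[(k, v)] = total

    solve(root)
    v_root = min(candidates[root], key=lambda v: w[(root, v)])
    return v_root, w[(root, v_root)], choice
```

In this sketch every edge of the tree contributes \(O(n^2)\) evaluations of the edge cost when each candidate set contains at most n points, and the optimal locations of the remaining facilities can be recovered from choice by walking down from v_root.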