

Comput Oper Res. 2012 Jul; 39(7): 1582–1592.
PMCID: PMC3587456
PMID: 23471203

The bi-objective stochastic covering tour problem

Abstract

We formulate a bi-objective covering tour model with stochastic demand where the two objectives are given by (i) cost (opening cost for distribution centers plus routing cost for a fleet of vehicles) and (ii) expected uncovered demand. In the model, it is assumed that depending on the distance, a certain percentage of clients go from their homes to the nearest distribution center. An application in humanitarian logistics is envisaged. For the computational solution of the resulting bi-objective two-stage stochastic program with recourse, a branch-and-cut technique, applied to a sample-average version of the problem obtained from a fixed random sample of demand vectors, is used within an epsilon-constraint algorithm. Computational results on real-world data for rural communities in Senegal show the viability of the approach.

Keywords: Multi-objective optimization, Stochastic optimization, Branch-and-cut, Covering tour problem, Disaster relief

1. Introduction

1.1. The problem

In the literature on transportation logistics, diverse variants of the Covering Tour Problem (CTP) have been investigated. One of the oldest problem formulations is the Maximal Covering Tour Problem (MCTP) introduced by Current and Schilling [2], a bi-objective location-routing problem where a fixed number of nodes of a given transportation network have to be selected for a visit, and a tour through these visited nodes has to be found. The objectives are: (i) minimization of the total tour length, and (ii) maximization of the total demand that is covered within some maximal pre-specified travel distance from a visited node.

The problem investigated in the present paper extends the bi-objective MCTP model by four features:

  • (i) The classical assumption in the CTP literature that a population node is either “covered” or “not covered” by a tour stop (i.e., a visited node of the tour), depending on the distance, is replaced by the more general assumption that a certain percentage of clients travel to a tour stop; this percentage is a decreasing function of the distance to the tour stop. The classical situation is obtained as the special case where the function is a simple step function.
  • (ii) Tour stops correspond to distribution centers (DCs) which are capacitated and have opening costs.
  • (iii) Demand in the population centers (“villages”) is uncertain and therefore modeled by random variables. This leads to a (two-stage) stochastic optimization problem, which is simultaneously multi-objective.
  • (iv) More than one vehicle can be available for transportation, such that for the delivery to the DCs, a capacitated vehicle routing problem (CVRP) instead of a simple TSP has to be solved. In other words, we deal with the more general case of a multi-vehicle CTP.

Although these four features can occur in different types of applications, they are almost always present in the application of a CTP model to the organization of a disaster relief operation. Let us shortly outline why this is the case. After a natural disaster (such as an earthquake, floods, fires or a hurricane), governmental and non-governmental organizations aim at providing the victims with diverse relief goods. For transporting the relief goods from suppliers to the disaster victims, a surrogate transportation system has to be established. It is not always feasible to deliver all goods directly to the people; instead, they have to be shipped to DCs that should lie as close as possible to people's homes, such that the majority of people can easily reach them, either by walking or by the use of available private vehicles. Since not every small population unit can have a DC, victims sometimes have to traverse considerable distances to reach the nearest DC. Evidently, assuming that below a certain distance threshold all inhabitants of a population unit will go to pick up the relief goods, and that above that threshold nobody will go, is an oversimplification; assumption (i) above is more realistic. Because DCs are usually established in an improvised way, they typically offer only limited storage capacity, which is captured by property (ii) above. Assumption (iii) takes account of the high degree of uncertainty about actual demands in a post-disaster scenario, which constitutes a characteristic property of disaster relief logistics (cf. [8]). Assumption (iv), finally, is natural in disaster relief since usually a fleet consisting of several vehicles can be used for the delivery.

As a Stochastic Multi-Objective Combinatorial Optimization (SMOCO) problem, the problem investigated in this paper makes designing an appropriate solution technique a considerable challenge. Few general methods are available for tackling SMOCO problems (cf. [7,13]). In the present work, we shall combine a scenario-based approach with an epsilon-constraint method based on branch-and-cut to cope with the complexity of the optimization model.

1.2. Related literature

Gendreau et al. [6] study a single-objective variant of the CTP where the second objective of the MCTP is turned into a constraint requiring complete coverage of a given subset W of nodes. The constraint that a given number of stops have to be selected is replaced by the constraint that the nodes in some set T must be visited, whereas the nodes outside of T may, but need not, be visited. Hachicha et al. [9] develop heuristics for the multi-vehicle CTP, where each vertex has to be covered by at least one of the vehicles. Jozefowiez et al. [11] solve a bi-objective CTP, where the second objective function of the MCTP is replaced by the largest distance between a node of some given set W and the nearest visited node.

Some articles on the CTP refer to applications in medical (emergency) supply or in disaster relief, which is also the type of application providing the main motivation for our present work. Hodgson et al. [10] propose a CTP model for planning mobile health care facilities. They take into account multiple classes of transportation links, distinguished by their accessibility, which in turn depends on the weather. In Doerner et al. [5], a three-objective CTP model for mobile health care units in a developing country is treated by Genetic Algorithms and Ant Colony Optimization. In Nolz et al. [15], the problem of delivering drinking water to the affected population in a post-disaster situation is formulated as a multi-objective CTP. The considered objectives are the sum of distances between all members of a population and the nearest (visited) delivery node, the number of population members unable to reach a delivery node within a predefined maximum distance, the tour length, and a minmax routing criterion measuring the latest arrival time at a population node.

The organization of the present paper is as follows: Section 2 introduces the mathematical problem formulation. In Section 3, the applied solution technique is described. Section 4 provides our experimental results for a real-life test case, and Section 5 contains concluding remarks.

2. Mathematical formulation

The problem is defined on a complete undirected graph G = (V₀, E) with node set V₀ and edge set E. Therein, V₀ = V ∪ {0}, where 0 denotes the depot, and V denotes the set of population nodes and/or potential distribution centers (DCs). In the envisaged application context, a population node is typically a village; therefore, for shortness, we shall use here the term “village” instead of “population node”.1 A village i ∈ V that is not a potential DC can be viewed alternatively as a potential DC with capacity zero. Conversely, a potential DC that is not a village can be viewed alternatively as a village with zero inhabitants. Thus, in the graph representation, it is not necessary to make a fundamental distinction between villages and potential DCs. Nevertheless, we shall preferably use the index i for a node when referring to its role as a village, and the index j when referring to its role as a DC.

As usual in the vehicle routing literature, we extend the given transportation network to a complete graph by setting E = {(i, j) | i ∈ V₀, j ∈ V₀, i < j}. Note that edges are undirected. The distance between two nodes i ∈ V₀ and j ∈ V₀ is denoted by d_{ij}. The driving cost of a vehicle from node i to node j is assumed to be proportional to d_{ij}, i.e., it can be expressed as τ · d_{ij}.

We restrict ourselves to a single commodity to be supplied to clients (i.e., inhabitants of the villages). This consideration also covers the situation where different commodities are to be supplied, but where their composition is homogeneous and there are no large items that could cause divisibility problems.

For the transportation of the commodity, a fleet K of vehicles is assumed to be available. The vehicles k ∈ K can have different capacities, but driving costs for a certain route are assumed to be identical, which represents a situation where velocities do not differ and the main cost component is the salary of the driver.2 The (load) capacity of vehicle k is denoted by Qk.

By the delivery period, we understand the time interval during which each (opened) DC has to be visited once. Depending on the application context, the length of a delivery period can be 1 day or also 1 week or even more. Let us start with the consideration of the planning process for a single delivery period; the case where periodic deliveries take place will be outlined in Remark 1 below.

The cost for opening a DC in node j for the current delivery period is denoted by cj. In each node, at most one DC can be opened. Possible cost components are costs for leasing a building, setup costs, fitment costs, surveillance costs, or other expenses. We always let opening costs refer to the delivery period under consideration. E.g., if at the beginning of the delivery period, no DC has yet been installed in node j, then the full costs for establishing a DC in this node arise if it is opened. If a DC in node j was already open in the last delivery period, then cj only consists of the costs for keeping it operative in the current period. In this way, the cost effect of changing the locations of opened DCs can also easily be captured.

Furthermore, to each (potential) DC j, a capacity γj is assigned, which can depend on the location j. The quantity γj is to be interpreted as an upper bound for the throughput, i.e., for the amount of the commodity that can be supplied to clients at node j within one delivery period.

The demand occurring in village i ∈ V for the considered delivery period is denoted by W_i. The quantities W_i (i ∈ V) are assumed to be random variables with a known joint distribution; they need not be independent. It is convenient to decompose W_i as W_i = ξ_i · w_i, where w_i = E(W_i) is the expected value of W_i, and the random variables ξ_i characterize the effect of uncertainty. The numbers w_i can be obtained as baseline estimates, derived from the population sizes of the villages.

Covering tour models (based on facility location models using “coverage” measures) assume that the distance between a client and the nearest tour stop determines whether or not this client can be counted as “covered” by the chosen supply plan: if the distance is smaller than or equal to some threshold value d₀, the client is covered, otherwise she/he is uncovered (cf. Current and Schilling [2], Gendreau et al. [6]). In the present paper, we introduce a flexible generalization of this basic concept by assuming that (i) each client considers the nearest DC for possible supply, and (ii) the share of clients actually traveling from their village i to a (nearest) DC in node j is given by a non-increasing function ψ of the distance d_{ij}. The special case where ψ(d) = 1 if d ≤ d₀ and ψ(d) = 0 otherwise leads us back to the assumptions of the traditional covering tour model. Possible alternatives are exponential decay functions of the form ψ(d) = r^d with 0 < r < 1, or step functions of the form

ψ(d) = 1 if d ≤ d₀;   ψ(d) = b_r if d_{r−1} < d ≤ d_r (r = 1, …, R);   ψ(d) = 0 if d > d_R,   (1)

with 0 < d₀ < d₁ < ⋯ < d_R and 1 > b₁ > b₂ > ⋯ > b_R > 0.
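To make the coverage mechanism concrete, the following minimal Python sketch implements both variants of ψ. The function names are ours, and the example values at the end correspond to the Senegal setting of Section 4.1 (full coverage up to 6 km, 50% between 6 and 15 km); none of this is code from the paper.

```python
# Minimal sketch of coverage functions psi(d); names are illustrative.

def make_step_psi(thresholds, shares):
    """Step function of type (1): thresholds = [d0, d1, ..., dR],
    shares = [b1, ..., bR] with 1 > b1 > ... > bR > 0."""
    def psi(d):
        if d <= thresholds[0]:
            return 1.0                      # full coverage below d0
        for d_prev, d_next, b in zip(thresholds, thresholds[1:], shares):
            if d_prev < d <= d_next:
                return b                    # partial coverage in (d_{r-1}, d_r]
        return 0.0                          # beyond dR, nobody travels
    return psi

def make_exp_psi(r):
    """Exponential decay psi(d) = r**d with 0 < r < 1."""
    return lambda d: r ** d

# Setting of Section 4.1: 100% up to 6 km, 50% between 6 and 15 km.
psi = make_step_psi([6.0, 15.0], [0.5])
assert psi(3.0) == 1.0 and psi(10.0) == 0.5 and psi(20.0) == 0.0
```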

The structure of the decision process is supposed to be the following:

  • Before the delivery period starts, a decision has to be made on which DCs to open, and on the tours by which the available vehicles visit the opened DCs. Split deliveries are not allowed, i.e., each opened DC has to be visited by exactly one vehicle. We call the decision on DCs and tours the first-stage decision.
  • During the delivery period, actual demands become known. More precisely, the drivers can now observe the request in each opened DC j, where the request in DC j is defined as that part of the demand that is claimed by the people who have come to DC j. If the request in a DC j exceeds the capacity γj of the DC, the maximum supply in DC j is γj; otherwise, the maximum supply in this DC is identical to the request. If the load of a vehicle k (limited by the capacity Qk) is sufficient to cover the maximum supplies in all DCs assigned to this vehicle, the driver delivers in each DC just this maximum supply. (The commodity is assumed to be available in the depot in unlimited amounts.) If, on the other hand, the load of the vehicle is not sufficient, the driver delivers the entire load Qk to the visited DCs, choosing an arbitrary distribution over the DCs. Since for our objective functions (which will be explained below), it does not matter whose demand is satisfied, the special choice of the distribution is irrelevant. As a particular consequence, it makes no difference for our model whether the requests in each DC become known at once (e.g., communicated to the depot by phone), which gives the driver more freedom in choosing the distribution, or gradually (when going from DC to DC, the driver observes how many people are waiting there for delivery). Typically, in case of shortage, the driver will try to distribute the available load as evenly as possible for equity reasons, but this aspect is not represented in our objective functions. We call the decision on the amounts actually delivered to each DC on each tour the second-stage decision.

Two objective functions are considered:

  • Objective function f1 measures the total cost, i.e., the cost for opening DCs plus the total driving cost.
  • Objective function f2 measures the expected uncovered demand. Uncovered demand is composed of (i) the demand of those clients who do not go to the nearest DC because the distance is too large for them, (ii) the demand that is not satisfied in a DC because of capacity limits of the DC, and (iii) the demand that is not satisfied in a DC because of vehicle capacity limits. The expectation is taken with respect to the outlined stochastic model.
Remark 1

The model can also be applied to the situation of periodic delivery: suppose that the planning horizon refers to a supply phase consisting of several delivery periods, and assume that DCs and delivery tours have to be kept unchanged during the entire supply phase. (This can be necessary for administrative reasons and/or to provide the population with a fixed supply scheme which it can rely upon.) In each single delivery period of the supply phase, the demand is determined by the stochastic model described above. Then, the bi-objective optimization model can be used in a formally identical way, with the exception that now, the opening costs cj are to be interpreted as opening costs per delivery period, i.e., they are computed by dividing the total opening costs for the supply phase by the number of delivery periods.3

We formulate the problem as a bi-objective, two-stage stochastic optimization problem with recourse. The following decision variables are used:

  • x_{ijk} denotes the number of times vehicle k drives through edge (i, j).
  • y_{ij} = 1 if village i is assigned to a DC in node j, and y_{ij} = 0 otherwise.
  • z_{jk} = 1 if vehicle k visits node j, and z_{jk} = 0 otherwise.
  • u_{jk} denotes the supply delivered by vehicle k to a DC in node j.

The variables x_{ijk}, y_{ij} and z_{jk} determine the first-stage decision, whereas the variables u_{jk} constitute the second-stage decision. We shall use the abbreviation δ(S) for the set of edges in E with exactly one end-node in S ⊆ V₀, i.e., δ(S) = {(i, j) ∈ E | i ∈ S, j ∈ V₀ ∖ S or j ∈ S, i ∈ V₀ ∖ S}. Instead of δ({j}), we shall shortly write δ(j). Moreover, the abbreviation x_k(E′) = ∑_{(i,j)∈E′} x_{ijk} for a set E′ ⊆ E will be used. The 3-dimensional array (x_{ijk}) and the matrices (y_{ij}), (z_{jk}) and (u_{jk}) will be denoted by x, y, z and u, respectively.
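In executable form, these two abbreviations can be sketched as follows (the dictionary-based data structures and function names are our assumptions, not the paper's code):

```python
# Sketch of delta(S) and x_k(E') on the complete graph over V0.

def delta(S, V0):
    """Edges (i, j), i < j, with exactly one end-node in S."""
    S = set(S)
    return {(i, j) for i in V0 for j in V0
            if i < j and ((i in S) != (j in S))}

def x_k(E_prime, x, k):
    """x_k(E') = sum of x_ijk over the edges in E'."""
    return sum(x.get((i, j, k), 0) for (i, j) in E_prime)
```

With these helpers, the left-hand sides of constraints (10) and (11) below can be evaluated as x_k(delta({j}, V0), x, k) and x_k(delta(S, V0), x, k), respectively.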

The bi-objective stochastic covering tour problem is now given as follows:

First stage:

min_{x,y,z} (f₁, f₂)   s.t.   (2)

f₁ = τ ∑_{k∈K} ∑_{(i,j)∈E} d_{ij} x_{ijk} + ∑_{k∈K} ∑_{j∈V} c_j z_{jk},   (3)

f₂ = E(R(y, z, ξ)),   (4)

∑_{j∈V} y_{ij} = 1   ∀ i ∈ V,   (5)

y_{ij} ≤ ∑_{k∈K} z_{jk}   ∀ i, j ∈ V,   (6)

∑_{j∈V} d_{ij} y_{ij} ≤ d_{im} + M (1 − ∑_{k∈K} z_{mk})   ∀ i, m ∈ V,   (7)

∑_{k∈K} z_{jk} ≤ 1   ∀ j ∈ V,   (8)

∑_{k∈K} z_{0k} = |K|,   (9)

x_k(δ(j)) = 2 z_{jk}   ∀ j ∈ V₀, k ∈ K,   (10)

x_k(δ(S)) ≥ 2 z_{jk}   ∀ S ⊆ V, j ∈ S, k ∈ K,   (11)

x_{ijk} ∈ {0, 1}   ∀ (i, j) ∈ E ∖ δ(0),   x_{ijk} ∈ {0, 1, 2}   ∀ (i, j) ∈ δ(0),   (12)

y_{ij} ∈ {0, 1}   ∀ i, j ∈ V,   (13)

z_{jk} ∈ {0, 1}   ∀ j ∈ V₀, k ∈ K.   (14)

Second stage:

R(y, z, ξ) = min_u ( ∑_{i∈V} ξ_i w_i − ∑_{k∈K} ∑_{j∈V} u_{jk} )   s.t.   (15)

u_{jk} ≤ ∑_{i∈V} ξ_i w_i ψ(d_{ij}) y_{ij}   ∀ j ∈ V, k ∈ K,   (16)

u_{jk} ≤ γ_j z_{jk}   ∀ j ∈ V, k ∈ K,   (17)

∑_{j∈V} u_{jk} ≤ Q_k   ∀ k ∈ K,   (18)

u_{jk} ≥ 0   ∀ j ∈ V, k ∈ K.   (19)

Eq. (2) characterizes the first-stage problem as a bi-objective minimization problem with objective functions f1 and f2. Eq. (3) defines f1 as the sum of driving costs and opening costs; observe that for j ∈ V, a DC in node j is opened if ∑kKzjk = 1. Eq. (4) introduces f2 as the expected uncovered demand, where the expectation is taken with respect to the distribution of the vector ξ of random variables ξi. The actual uncovered demand R(yzξ) results as the solution of the second-stage problem (15)–(19). In the terminology of stochastic programming, we can call f2 the recourse function; it expresses the expected cost of the decision to be made after the unknown parameter values become known. In our case, this decision consists of the choice of the amounts of the commodity to be unloaded in each DC, based on information on the actual requests. Since the first objective function is already determined by the solution (x,y,z) of the first-stage problem, the recourse function only captures the value of the second objective function, i.e., of the expected uncovered demand.

Constraints (5) ensure that to each village, exactly one DC is assigned. Constraints (6) express that a village can only be assigned to an opened DC. Constraints (7), wherein M denotes a sufficiently large number, make sure that each village is assigned to the nearest opened DC. Constraints (8) state that each DC is visited by at most one vehicle; in particular, at most one DC can be opened in each node. Constraint (9) ensures that all vehicles visit the depot. Constraints (10) are the degree constraints for the tours. Constraints (11) are the usual subtour elimination constraints. Constraints (12) state that a link between two villages can be traversed by a vehicle at most once, whereas a link between a village and the depot can be traversed at most twice (the possible value 2 accounts for the feasibility of an out-and-back tour between the depot and a single village). Constraints (13) and (14), finally, define the variables y_{ij} and z_{jk} as binary variables.

The first-stage problem combines the mathematical programming model for the CVRP with that of a (bi-objective) location problem: the location aspect is given by the choice of the DCs. For a fixed choice of the nodes in which DCs are to be opened, a CVRP remains to be solved. In our formulation, Eqs. (10) and (11) are similar to the usual connectivity and subtour elimination constraints in a CVRP, respectively, but they are extended in such a way that only opened DCs are allowed (and required) to be visited by vehicles.

In the second-stage problem, Eq. (15) defines the objective function (to be minimized) as the uncovered demand, where the uncovered demand results as the difference between total demand and total supply. Constraints (16) state that the supply ujk delivered by vehicle k in DC j must not exceed the total request in DC j. Note that the total request in a DC is obtained as the sum of the requests from all villages that are assigned to this DC, and that the request from a village i addressed to DC j is given as the demand ξiwi in village i, multiplied by the factor ψ(dij). Constraints (17) ensure that the supply delivered by a vehicle to a DC does not exceed the capacity of this DC, and that vehicle k only delivers to DC j if it stops there. Constraints (18) make sure that the total supply provided by a vehicle does not exceed its load capacity, and constraints (19) define the supplies as nonnegative reals.

Remark 2

For given ξ, the second-stage problem can be solved explicitly: by (16) and (17), in village j, vehicle k can deliver a maximal amount of

min { ∑_{i∈V} ξ_i w_i ψ(d_{ij}) y_{ij} , γ_j z_{jk} }.

Thus, if the vehicle capacity Qk is sufficient, vehicle k delivers a total amount of

∑_{j∈V} min { ∑_{i∈V} ξ_i w_i ψ(d_{ij}) y_{ij} , γ_j z_{jk} }.   (20)

On the other hand, if the vehicle capacity does not suffice, an amount of Qk is delivered. This determines the total supply by vehicle k as the minimum of (20) and Qk. Therefore, the maximum overall supply provided by the entire fleet is

σ(y, z, ξ) = ∑_{k∈K} min { ∑_{j∈V} min { ∑_{i∈V} ξ_i w_i ψ(d_{ij}) y_{ij} , γ_j z_{jk} } , Q_k }.   (21)

For supply σ(y, z, ξ), the uncovered demand is ∑_{i∈V} ξ_i w_i − σ(y, z, ξ). Now let us take the expected value E over all possible realizations of ξ. Since E(∑_{i∈V} ξ_i w_i) = ∑_{i∈V} E(ξ_i) w_i does not depend on the decision, minimizing the expected uncovered demand is the same as maximizing the expected supply. Therefore, the recourse function f2 representing the expected uncovered demand is given by

f2 = f2(y, z) = C − E(σ(y, z, ξ)),   (22)

where the expected demand C = ∑_{i∈V} E(ξ_i) w_i is a constant, and σ(y, z, ξ) is given by (21). However, because of its nonlinearity, the explicit representation of σ(y, z, ξ) by (21) is of little use within the solution framework that we shall apply in the next section.
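Nevertheless, the closed form is convenient for evaluating any fixed first-stage decision by simulation. The following Python sketch (our illustration; the nested dictionaries y[i][j] and z[j][k] and the function psi are assumed input formats) computes σ per scenario via (21) and estimates f2 as in (22), with the sample mean in place of the expectation:

```python
# Sketch: evaluate the recourse in closed form, Eqs. (20)-(22).

def sigma(y, z, xi, w, gamma, Q, d, psi, V, K):
    """Maximum overall supply sigma(y, z, xi) of Eq. (21)."""
    total = 0.0
    for k in K:
        deliverable = sum(
            min(sum(xi[i] * w[i] * psi(d[i][j]) * y[i][j] for i in V),
                gamma[j] * z[j][k])
            for j in V)                      # per-vehicle amount, Eq. (20)
        total += min(deliverable, Q[k])      # truncated by vehicle capacity
    return total

def estimate_f2(y, z, scenarios, w, gamma, Q, d, psi, V, K):
    """Sample estimate of f2 = C - E(sigma), Eq. (22)."""
    N = len(scenarios)
    C = sum(sum(xi[i] for xi in scenarios) / N * w[i] for i in V)
    avg = sum(sigma(y, z, xi, w, gamma, Q, d, psi, V, K)
              for xi in scenarios) / N
    return C - avg
```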

3. Solution technique

3.1. Treatment of stochasticity

Only a few techniques are available for solving stochastic multi-objective combinatorial optimization (SMOCO) problems. In the present paper, we choose a conceptually simple approach that is also frequently applied to stochastic single-objective problems, namely optimization based on a fixed sample of random scenarios (cf. Birge and Louveaux [1]). In this approach, the given distribution of the random vector ξ is approximated by an empirical sample distribution, obtained by drawing N scenarios ξ^{(ν)} (ν = 1, …, N) from the original distribution of ξ, and replacing the original distribution by a discrete distribution where each of the N scenarios has the same probability 1/N. (A refined method would apply an approximation where the probability of scenario ν is p_ν > 0 (ν = 1, …, N), with ∑_{ν=1}^{N} p_ν = 1. It is easy to generalize our approach to this situation.) The expected uncovered demand f2 is then approximated by the sample average

f̄2 = (1/N) ∑_{ν=1}^{N} R(y, z, ξ^{(ν)}).

Replacing f2 with f̄2, we obtain a deterministic bi-objective combinatorial optimization problem. It results from (2)–(19) by the following changes, where f̄2 shall be written again as f2 for convenience: constraint (4) is replaced by

f2 = (1/N) ∑_{ν=1}^{N} R^{(ν)},

and instead of (15)–(19), for each ν = 1, …, N, the following version of the second-stage problem is included in the constraints:

R^{(ν)} = min_{u^{(ν)}} ( ∑_{i∈V} ξ_i^{(ν)} w_i − ∑_{k∈K} ∑_{j∈V} u_{jk}^{(ν)} )   s.t.   (23)

u_{jk}^{(ν)} ≤ ∑_{i∈V} ξ_i^{(ν)} w_i ψ(d_{ij}) y_{ij}   ∀ j ∈ V, k ∈ K,   (24)

u_{jk}^{(ν)} ≤ γ_j z_{jk}   ∀ j ∈ V, k ∈ K,   (25)

∑_{j∈V} u_{jk}^{(ν)} ≤ Q_k   ∀ k ∈ K,   (26)

u_{jk}^{(ν)} ≥ 0   ∀ j ∈ V, k ∈ K.   (27)

It is seen that this change multiplies the number of required variables of the type ujk by the factor N. In total, the resulting problem (consisting of the first-stage problem and the N versions of the second-stage problems) is a bi-objective ILP.

3.2. An exact solution algorithm

We now describe an algorithm for the exact solution of the bi-objective ILP (resulting from the sampling) presented in the previous section. The applied solution concept for the multi-objective problem is that of the determination of the set of Pareto-optimal solutions.4 Our approach is based on the epsilon-constraint framework [12]. We first provide a short description of this framework and discuss how to apply it to the covering tour problem; then we present problem-specific improvements.

The reader may wonder why we apply the concept of Pareto optimality instead of simply combining the two objective functions f1 and f2 into a weighted average w1f1 + w2f2 with suitable weights w1 and w2. Of course, this weighted-average approach would be computationally less expensive. However, it is a well-known practical problem that decision makers are often reluctant to define weights for conflicting objectives in advance. This may be especially true for the humanitarian relief application area considered in this paper, since defining the ratio between the weights can amount here to defining the monetary value of a human life. A decision support system that requires that an a-priori ratio between the monetary objective and the life-related objective be fixed, and then runs a black-box procedure based on this parameter, would hardly be accepted.

A second drawback of the weighted-average method is that it can only find so-called supported solutions. In the multi-objective optimization literature, a solution is called supported if it is optimal under a suitable weight vector for the objectives. However, in many cases, unsupported solutions can also be very attractive because they can serve as compromise candidates. Consider, e.g., the case where the Pareto front of a bi-objective minimization problem is a concave decreasing curve. Then a weighted average optimization would only provide the two extreme solutions on the front, but both may be sub-optimal, since the (implicit) utility function of the decision maker need not depend linearly on the two objectives.

Even in the case of a linear utility function, it is often observed that decision makers feel more comfortable with assigning weights to objectives after they have seen the trade-off curve between the objectives, i.e., the Pareto front.

In the considered disaster relief application, there is still another motivation for computing the entire Pareto front. It is rarely the case that a specific relief operation has a fixed (pre-determined and unchangeable) budget. Rather, the organization carrying out the relief operation has an overall budget and must decide how to distribute it, not only over (possibly) several places where disaster aid has to be provided simultaneously, but also with respect to different types of spending, including investment into equipment, long-term stockpiles, personnel, infrastructure etc.5 Now, as soon as the question arises how to optimally distribute a fixed global budget over several special operations, it is not sufficient anymore to know the best possible plan for one of these operations under a given fixed individual budget. Instead, information on the tradeoff curves between costs and effects is required, since an increase of the planned budget in one operation and a corresponding decrease in another operation may be detrimental as well as beneficial, depending on the tradeoffs. The tradeoff curves, however, are exactly the Pareto fronts.

3.2.1. Applying the epsilon-constraint framework to the covering tour problem

The epsilon-constraint framework is a general-purpose solution approach for multi-objective optimization; it iteratively solves single-objective versions of the multi-objective problem, with additional so-called epsilon-constraints, in order to enumerate all Pareto-optimal solutions. This framework is summarized in Algorithm 1 for the case of a problem with two objectives, f1 and f2, both of which should be minimized. The value of the constant ε is assumed to be small enough in comparison with the differences between values of f1 and f2 along the Pareto front. It is assumed that the function Min(fκ) optimizes the corresponding ILP for objective κ only (κ=1,2). In our case, total cost is used as objective 1, and average uncovered demand as objective 2.

In each iteration, the ILP is solved a first time for objective 1, using an additional constraint bounding objective 2, which is called the epsilon constraint. Then using this newly found optimal value for objective 1, objective 2 is optimized. A new Pareto-optimal solution is thus generated, which we add to the result set Λ. Then the epsilon constraint is updated so that the next point to be generated is better than all previously generated solutions with regard to objective 2.

Algorithm 1

ε-constraint framework: all Pareto-optimal solutions to a bi-objective minimization problem are computed and stored into set Λ.

1:  Λ ← ∅
2:  ε-constraint ← (f2 ≤ ∞)
3:  add ε-constraint to ILP
4:  repeat
5:    x ← Min(f1)
6:    localObjectiveBound ← (f1 ≤ f1(x))
7:    add localObjectiveBound to ILP
8:    x ← Min(f2)
9:    Λ ← Λ ∪ {x}
10:   remove localObjectiveBound from ILP
11:   update ε-constraint: ε-constraint ← (f2 ≤ f2(x) − ε)
12: until ILP cannot be solved
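To illustrate the mechanics of the framework independently of any MIP machinery, here is a self-contained toy rendering of Algorithm 1 in Python, with Min replaced by brute-force enumeration over an explicit finite candidate set. It is a sketch of the loop structure only, not of the paper's branch-and-cut implementation; all names are ours.

```python
# Toy rendering of Algorithm 1: the "solver" is brute-force enumeration.

EPS = 1e-6  # must be below the smallest objective difference on the front

def pareto_front(solutions, f1, f2):
    """Enumerate all Pareto-optimal solutions of a finite bi-objective
    minimization problem by the epsilon-constraint scheme."""
    front, bound2 = [], float("inf")
    while True:
        feasible = [s for s in solutions if f2(s) <= bound2]
        if not feasible:                          # ILP cannot be solved
            return front
        best1 = min(f1(s) for s in feasible)      # x <- Min(f1)
        tied = [s for s in feasible if f1(s) <= best1 + EPS]
        x = min(tied, key=f2)                     # x <- Min(f2) under f1-bound
        front.append(x)                           # Lambda <- Lambda + {x}
        bound2 = f2(x) - EPS                      # tighten epsilon-constraint

# Example: points (cost, uncovered demand); (2, 3) and (5, 1) are dominated.
pts = [(1, 5), (2, 2), (2, 3), (4, 1), (5, 1)]
print(pareto_front(pts, lambda p: p[0], lambda p: p[1]))
# -> [(1, 5), (2, 2), (4, 1)]
```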

The second phase of each iteration, which optimizes the ILP with regard to objective 2, can also be improved in our case. Instead of setting a bound on objective 1, as described in lines 6 and 7, we perform a stronger variable fixing: all xijk variables that have positive integer values after phase 1 (i.e. after line 5) are fixed to this integer value. In practice, this means that as soon as the cost objective has been optimized, the routes are fixed, and the solver has only to decide which quantities are to be delivered to which node. This is done under the assumption that different sets of routes will never incur exactly the same cost. Clearly, this assumption does not necessarily hold in the general case. However, we consider it to be realistic in practice. (In the case where the constants in the model are continuous values, the assumption holds generically.) Moreover, in the case where the assumption does not hold, it can only happen that a dominated solution is generated additionally to the solutions from the Pareto front; it cannot lead to dropping any Pareto-optimal solution. The fixed xijk variables are released again immediately after the ILP has been solved for objective 2.

A major difficulty with solving this ILP lies in the consideration of the subtour elimination constraints (11), since there are too many of them to consider them explicitly. A common approach for vehicle routing problems is to relax these constraints and generate them dynamically if they are violated, in a branch-and-cut fashion. For more details, see [14]. In our case, the subtour elimination constraints only apply to nodes that are visited (i.e., opened DCs), which is reflected on the right-hand side of the constraint by the zjk variables; this right-hand side value is traditionally equal to 2 for classical routing problems. This has a minor impact on the cut separation procedure. The traditional separation procedure solves a Max-Flow-Min-Cut problem on a graph reflecting the current fractional solution. In our case, a node can only be used as a sink for the Max-Flow-Min-Cut procedure if it is used as a DC, and the right-hand side has to be generated accordingly, using only the zjk variables present in the working subset of V. Additionally, the procedure is called for each vehicle separately, as sketched below. Another possibility would consist in aggregating the xijk and zjk variables over the vehicles, which would still result in valid inequalities, provided that the fleet is homogeneous. An advantage would be that the total number of subtour elimination constraints would be divided by |K|. However, when we tried this alternative, we found that it leads to slightly worse performance in practice.
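A sketch of this per-vehicle separation routine is given below. We assume the current fractional LP solution is available as dictionaries x_frac[(i, j, k)] and z_frac[(j, k)], and we use networkx for the max-flow/min-cut computation; this is an illustrative reconstruction of the procedure described above, not the authors' code.

```python
# Per-vehicle separation of subtour elimination constraints (11):
# for each opened DC j of vehicle k, check whether the minimum cut
# separating j from the depot falls below the right-hand side 2*z_jk.
import networkx as nx

def separate_subtour_cuts(V0, depot, K, x_frac, z_frac, tol=1e-6):
    cuts = []
    for k in K:
        # support digraph of vehicle k's fractional edge values
        G = nx.DiGraph()
        G.add_nodes_from(V0)
        for (i, j, kk), val in x_frac.items():
            if kk == k and val > tol:
                G.add_edge(i, j, capacity=val)   # each undirected edge
                G.add_edge(j, i, capacity=val)   # becomes two opposite arcs
        for j in V0:
            if j == depot or z_frac.get((j, k), 0.0) <= tol:
                continue                         # only opened DCs are checked
            cut_val, (S, _) = nx.minimum_cut(G, j, depot)
            if cut_val < 2 * z_frac[(j, k)] - tol:
                # violated cut: x_k(delta(S)) >= 2 z_jk with j in S
                cuts.append((k, j, frozenset(S)))
    return cuts
```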

3.2.2. Problem-specific contextual valid inequalities

In the following, we introduce special inequalities, which we call contextual inequalities. Such inequalities are related to the epsilon-constraint framework. More precisely, the solution obtained at iteration i of the epsilon-constraint algorithm allows us to derive inequalities that are valid at iteration i+1.

The following two inequalities can be applied to strengthen the problem formulation in the case where, for ψ, a function with the property ψ(d) = 0 for d > d_max with some d_max > 0 (e.g., a step function of the type (1)) is used. Let U denote the current upper bound (as used by the ε-constraint algorithm) on the average uncovered demand over the scenarios (i.e., the difference between total demand and the lower bound on total covered demand, divided by N), and let

D_j = (1/N) ∑_{ν=1}^{N} ξ_j^{(ν)} w_j

denote the average demand in node j over the scenarios. Moreover, by

γ̄_j = max { γ_i : i ≠ j, d_{ij} ≤ d_max }

we denote the maximum capacity of a node i different from node j within distance d_max from j. Observe that if there is no DC in node j, then the total supply available to the inhabitants of node j cannot exceed the value γ̄_j.

With these notations, the following two sets of inequalities are valid:

D_j ∑_{k∈K} z_{jk} ≤ γ_j + U   ∀ j ∈ V,   (28)

D_j (1 − ∑_{k∈K} z_{jk}) ≤ γ̄_j + U   ∀ j ∈ V.   (29)

This is easy to see:

  • First inequality: Since ∑kKzjk is 1 if a DC in node j is opened and 0 otherwise, the inequality in (28) is empty for each node in which no DC is opened. On the other hand, for a node j in which a DC is opened, (28) requires that the average demand in j cannot exceed the capacity of DC j plus the average uncovered demand. Note that a DC in j is the nearest DC for the inhabitants of j, such that each part of the demand that cannot be covered by this DC contributes to the uncovered demand.
  • Second inequality: The inequality in (29) is empty for each node where a DC is opened. On the other hand, for a node j where a DC is not opened, (29) requires that the average demand in j cannot exceed the maximum capacity value among those DCs that are near enough to supply node j, plus the average uncovered demand.

By inequalities (28) and (29), a negative or positive decision on whether or not to open a DC is derived directly for some of the DCs: inequality (28) forces some DCs not to be opened, whereas (29) forces some DCs to be opened.
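In code, this preprocessing step could look as follows (a sketch with our naming; D, γ, γ̄ and U as defined above):

```python
# Derive forced opening/closing decisions from the contextual inequalities.

def max_nearby_capacity(V, d, gamma, d_max):
    """gamma_bar[j]: largest capacity of another node within d_max of j."""
    return {j: max((gamma[i] for i in V if i != j and d[i][j] <= d_max),
                   default=0.0)
            for j in V}

def fix_dc_decisions(V, D, gamma, gamma_bar, U):
    forced_closed = {j for j in V if D[j] > gamma[j] + U}      # from (28)
    forced_open   = {j for j in V if D[j] > gamma_bar[j] + U}  # from (29)
    return forced_open, forced_closed
```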

3.3. A heuristic solution algorithm

To be able to solve larger instances as well, we modify the epsilon-constraint algorithm of the previous section in order to obtain an epsilon-constraint based heuristic. The modification is motivated by the following observations, which we made when applying the exact epsilon-constraint algorithm to our real-life test instances (cf. Section 4 below); note that the exact algorithm computes the elements of the Pareto front from left (top) to right (bottom): (i) for small instances, all solutions of the Pareto front were obtained within short computation time. (ii) For larger instances, it happened that the rightmost part of the Pareto front (high cost, low uncovered demand) was not found within reasonable computation time. (iii) Typically, each new solution required a higher CPU time than the previous solution (i.e., its left neighbor) on the same front.

Algorithm 2

Heuristic H1.

1:  Λ ← ∅
2:  ε-constraint ← (f2 ≤ ∞)
3:  add ε-constraint to ILP
4:  repeat
5:    now ← getCurrentTime()
6:    t ← getPeriod(now)
7:    timeLimit ← c_t − now
8:    set gap tolerance of MIP solver to g_t
9:    x ← Min(f1, timeLimit)
10:   if MIP solver found a solution within the time limit then
11:     localObjectiveBound ← (f1 ≤ f1(x))
12:     add localObjectiveBound to ILP
13:     x ← Min(f2)
14:     Λ ← Λ ∪ {x}
15:     remove localObjectiveBound from ILP
16:     update ε-constraint: ε-constraint ← (f2 ≤ f2(x) − ε)
17:   end if
18: until ILP cannot be solved or getCurrentTime() ≥ c_𝒯

Starting from these observations, we propose a heuristic H1 that determines the exact Pareto front for easy instances with a high probability, and finds at least a good approximation to the exact Pareto front for harder instances. The key idea of this heuristic is to adapt the accuracy of the MIP solver to the difficulty of the MIP at hand in order to spend less time for the precise identification of the most time-consuming solutions (the solutions at the right end of the Pareto front). For this purpose, we start the computation with the exact method and progressively reduce the accuracy of the MIP solver as the computation continues.

There are several possibilities to reduce the accuracy of a MIP solver. Our choice is to increase the tolerance gap between upper and lower bound used as a criterion to consider a solution as “optimal”. Normally, the MIP solver uses values close to zero for this parameter. In the C++ binding of CPLEX (Concert), the name of this parameter is “EpGap” and its default value is 0.0001; this means that an integer solution is considered to be optimal if its objective function value is within 0.01% of the lower bound. In H1, we consider values ranging from 0% up to about 100%, which means that in the least accurate case, a solution with a cost twice as high as its lower bound will already be accepted. (In our experiments, solutions with the last property have always been found within milliseconds.)

In order to apply this principle, we fix a (static) computation schedule consisting of 𝒯 time periods t = 1, …, 𝒯 with associated closing times c_t and gap tolerance values g_t (t = 1, …, 𝒯). For the description of H1, let us assume that we have the following functions:

  • getCurrentTime() returns the CPU time elapsed since the start of H1,
  • getPeriod(time) returns the period index t = min{t′ : c_{t′} ≥ time} to which time point time belongs in the schedule being used, and
  • Min(f, timeLimit) calls the MIP solver for minimization of f with a given time limit timeLimit.

Using this notation, we propose heuristic H1 as a modified epsilon-constraint algorithm where the MIP solver is iteratively executed with a given time limit and a given gap tolerance, both depending on the period. Heuristic H1 is described in Algorithm 2.

4. Computational experiments

4.1. Test instances

We provide data for the region of Thies, located in western Senegal. The region is split into 32 so-called “communautés rurales”, each consisting of several villages. For each of these communautés rurales, we derive an instance by using the largest village as depot, two vehicles and road network information as cost matrix. We assume that 100% of a village's population will walk to the closest DC if the distance is less than 6 km, and that 50% will walk to the DC if it is more than 6 km but less than 15 km. Truck capacities are considered as not constraining, and DC capacities are set to three times the baseline demand of the associated village, which itself is equal to the population size of the village. Opening costs for DCs are assumed as identical for all potential places.

For the demand distribution, we make the following assumption: the “uncertainty factor” ξi for the demand in village i is the sum of a random baseline term that is common for the whole region and a correction term that is specific for village i. The village-specific correction terms are assumed to be independent. In this way, we model positively correlated demands, which is realistic since certain future environmental developments will impact the whole region, whereas there will also be random demand fluctuations affecting the villages separately from each other. Both the baseline term and the village-specific correction term are assumed to be uniformly distributed on certain intervals. To express this in formulas, we assume that the factor determining the baseline demand is given by ξ_bas = ξ̄ − β1 + 2β1·Z, from which the factor for the actual demand in village i is determined via ξ_i = ξ_bas − β2 + 2β2·Z_i. Therein, ξ̄ is a constant, Z and Z_i (i = 1, …, n) are independent uniformly distributed random numbers between 0 and 1, and β1, β2 > 0 are constant parameters. The demand in village i then results as ξ_i·w_i, as described in Section 2. We choose ξ̄ = 1 (which defines the demand unit as the expected baseline demand of one person) and estimated β1 = β2 = 0.5. For each problem instance, we generate 10 sample scenarios (demand vectors) according to the described distribution.
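The following sketch reproduces this scenario generator; the function name and the seed argument (added for reproducibility) are ours.

```python
# Demand scenarios: a common uniform baseline factor for the region plus
# independent village-specific corrections, as described above.
import random

def generate_scenarios(w, N=10, xi_bar=1.0, beta1=0.5, beta2=0.5, seed=0):
    """w maps village -> expected baseline demand w_i; returns N demand vectors."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(N):
        xi_bas = xi_bar - beta1 + 2 * beta1 * rng.random()  # common factor
        scenarios.append({i: (xi_bas - beta2 + 2 * beta2 * rng.random()) * w[i]
                          for i in w})                      # demand xi_i * w_i
    return scenarios
```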

4.2. Examples

In order to provide some insight into the problem, we now take two of those instances as examples and study selected solutions. The first instance is called Mbayene and has 11 villages. Fig. 1 shows two solutions for this instance. These solutions are successive elements on the Pareto front. Each of the two solutions has two routes; each route visits exactly one village. Thick curvy lines represent these routes, following the real road network and starting from the square representing the depot. (We do not represent the overall network in order to keep the pictures readable.) Full and dashed lines represent villagers coming from another village to a DC; full lines are for the case where 100% of the villagers come, dashed lines for the case where only 50% of the villagers come. If we look at the first solution, we observe that the village called Niandoul Ouolof is open as a DC, since it is visited by a vehicle. All villagers from Diemoul Ouolof go to this DC, as well as half of the villagers from Sine Amar and from Ndiourky. Now if we look at the second solution, Niandoul Ouolof is no longer open as a DC; Diemoul Ouolof is open instead, and all villagers from Niandoul Ouolof as well as half of the villagers of Sine Amar and Ndiourky go to Diemoul Ouolof. In terms of cost, the second solution is worse: the routing cost is slightly higher, and the number of open DCs is the same. In terms of coverage, the second solution is better, although this is less obvious to see: this is due to the facts that (i) the capacity at Niandoul Ouolof is less than what would be needed to satisfy all inhabitants using it as a DC, and (ii) the capacity at Diemoul Ouolof is higher than that at Niandoul Ouolof because it is a larger village. In the end, the second solution has a slightly better coverage but also costs a little bit more, because the larger village is farther away from the depot.

Fig. 1. Example of two adjacent solutions on the Pareto front for instance Mbayene: (a) first solution and (b) second solution.

The second example refers to the communauté of Meouane. Fig. 2 shows a solution close to the end of the Pareto front (corresponding to full coverage). For this reason, most villages in this solution are fully covered. Two of them are only partially covered though, and one of them is not covered at all. Moreover, it could be that some of the fully covered villages have in fact unsatisfied demand, due to capacity restrictions at the DCs.

Fig. 2. A solution close to the end of the Pareto front for instance Meouane.

4.3. Experimental results

The algorithms described in Section 3 are implemented in C++ and compiled with g++ 4.3.2. CPLEX 12.1 is used as MIP solver and multi-threading is disabled. The program is run on a 2.67 GHz Intel Xeon CPU using the Linux operating system.

4.3.1. Exact algorithm

For each instance, we run the algorithm described in Section 3 with a CPU time limit of 3 days. We now report, for each instance, the number of nodes, the total number of solutions found, and the CPU effort in seconds. If the solution method cannot find the whole Pareto front within the 3 day budget, we report “ > 3d” for CPU time. Table 1 summarizes these results.

Table 1

Problem size, Pareto front size and CPU effort for Senegal instances.

Instance            # nodes   Front size   CPU time
Cherif Lo              10         13          11.77
Diender Guedj          22         34         > 3d
Fandane                24         51         > 3d
Fissel                 21        107         > 3d
Koul                   16         29         569.11
Malicounda Wolof       12         27          36.08
Mbayene                12         31          81.75
Mboro                  31        112         > 3d
Meouane                17         31         231.13
Merina Dakhar          19         46        2451.19
Mont Roland            15         17         128.31
Ndiagagniao            24         92         > 3d
Ndiakhene               9          6           1.42
Ndiass                 18         55        6924.66
Ndieyene Shirak        11         27          49.21
Neugeniene             17         36         883.28
Ngandiouf              19         75      13,261.8
Nguekhokh              19         36         573.29
Notto Gouye Diama      11         16          11.51
Notto                  28         43         > 3d
Pekesse                12         24          54.72
Pire Goureye           18         55        3663.46
Pout                   29         55         > 3d
Sandira                15         28         198.01
Tassette               21         33        4973.5
Thiadiaye              13         21          58.33
Thienaba               10         14          14.06
Thilmanka              14         39         268.81
Tiba Ndiaye            14         22         104.49
Touba Toul             20         43      12,706.5

There seems to be a tractability limit closely related to the instance size: all instances with 20 nodes or fewer can be solved, one instance with 21 nodes (Tassette) can also be solved, but another instance with 21 nodes (Fissel) cannot, and no instance with more than 21 nodes can be solved either.

4.3.2. Heuristic algorithm

We also solved all instances of Table 1 by the heuristic algorithm H1 of Section 3.3. For this purpose, we applied a schedule with the following specification:

  • The first period lasts half of the total invested CPU time B and has a gap tolerance set to the default value of 0.0001 (this is the exact method);
  • the second period is split into 10 subperiods, each increasing the gap tolerance of the previous (sub)period by 10 percentage points.

In other words, we choose the parameters of H1 as 𝒯 = 11, c_t = 0.5B + 0.05B(t − 1), and g_t = 0.0001 + 0.1(t − 1) (t = 1, …, 11). Using this specification and a total CPU budget B of 8 h, H1 was able to solve all instances. In all cases, H1 stopped before reaching the CPU budget limitation of 8 h: the highest gap tolerance ever reached in any of the runs was 70%.
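Written out in code, this schedule and the getPeriod function of Section 3.3 read as follows (a sketch under the parameter choices just stated; function names are ours):

```python
# Computation schedule for H1: T = 11 periods over a total CPU budget B.

def make_schedule(B, T=11):
    """Closing times c_t and gap tolerances g_t, t = 1, ..., T."""
    closing = [0.5 * B + 0.05 * B * (t - 1) for t in range(1, T + 1)]
    gaps = [0.0001 + 0.1 * (t - 1) for t in range(1, T + 1)]
    return closing, gaps

def get_period(elapsed, closing):
    """Smallest t (1-based) with c_t >= elapsed, as used in Algorithm 2."""
    for t, c in enumerate(closing, start=1):
        if c >= elapsed:
            return t
    return len(closing)  # elapsed beyond c_T: report the final period

closing, gaps = make_schedule(B=8 * 3600)   # 8 h budget as in this section
assert get_period(0, closing) == 1 and abs(gaps[-1] - 1.0001) < 1e-12
```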

Let us now look at a first example, the Diender Guedj instance. This instance could not be solved by the exact method, although the last found solution has a very low uncovered demand; it is therefore likely that only a few solutions on the true Pareto front were missed by the exact method within the invested runtime of 3 days. In Fig. 3, we plot in the same diagram the points on the Pareto front found by the exact method (red “+” signs) and the image points of the solutions proposed by the heuristic H1 (green “×” signs). A vertical blue line shows the end of the first period for the heuristic H1, i.e., the period where H1 works as the exact algorithm. The solutions corresponding to the points “×” to the left of this line were found using the parameter setting for the exact method, whereas those corresponding to the points to the right of this line were found using an increased gap tolerance. (Since the CPU budgets were different for the exact method and for H1, the blue line has no interpretation for the execution of the exact algorithm.)

Fig. 3. Comparison of the exact method and H1 for instance Diender Guedj.

We can see that in this example, if H1 is given 8 h, it finds all solutions that the exact method has found within 3 days of CPU effort, plus an additional solution with zero uncovered demand.

As a second example, let us consider the Ndiagagniao instance (see Fig. 4). Although less spectacular, the comparison shows again that H1 produces high-quality solutions: up to the largest f1 value for which the exact algorithm has found a solution, all heuristic solutions are Pareto-optimal or at least have image points very close to Pareto-optimal points. To the right of the largest f1 value for which a solution was still found by the exact approach, H1 delivers additional solutions that are not dominated by any of the solutions obtained by the exact method. This example also shows, however, that as a consequence of the nonzero gap tolerance given to the MIP solver, H1 can propose a solution that is dominated by another solution proposed by H1: in the considered instance, H1 proposes a solution with cost 62,411 and uncovered demand 647.33, which is dominated by another proposed solution (found in a later iteration during the execution of H1) with cost 62,083 and uncovered demand 578.01. It is easy to see that this can happen although the epsilon constraint is always respected. Nevertheless, we do not consider this as an important issue in a heuristic context, as it is always possible to filter out dominated solutions.

Fig. 4. Comparison of the exact method and H1 for instance Ndiagagniao.

Figs. 3 and 4 also show that in addition to supported solutions, the Pareto front can contain a considerable number of unsupported solutions as well (cf. Section 3.2). In Fig. 3, e.g., solutions no. 4, 7, 9, 10 and 11 from the left are obviously unsupported: none of these solutions can be optimal under any weighted average of the two objectives. In the right part of the Pareto front, more than half of all solutions are unsupported. Omitting unsupported solutions in Fig. 4 would leave a larger gap in the cost range between 14,000 and 18,000 and may thus hide meaningful solutions for a decision maker with a low budget.

4.4. Practical application of the approach

Let us outline how the proposed approach can be used within a Decision Support System (DSS) supporting the planning of a disaster relief operation. It is clear that a suitable user interface will be needed. The overall procedure may follow a general line adopted from other areas of application of multi-objective decision analysis technology. It is characterized by the following steps.

First, the system must be provided with data. Most of them can probably be extracted (automatically or semi-automatically) from existing GIS applications. Demand distribution parameters will have to be added, but note that if the optimization is done repeatedly (cf. Remark 1 in Section 2), then only an update compared to the previous period is necessary, which will usually require a very limited amount of changes and can therefore be done quickly.

Based on the data, the system computes the Pareto front and visualizes it in a similar form as in our Figs. 3 and 4. The computation of the Pareto front is the computationally challenging part of the overall procedure. It may be carried out overnight, if the planning period is 1 day. All subsequent steps are computationally easy.

In the next step, the DSS allows the decision maker (DM) to narrow down the set of Pareto-optimal solutions by the definition of aspiration levels. E.g., after having had a look at the Pareto front, the DM may decide that solutions with costs exceeding a certain level should not be taken into consideration, and that solutions for which the uncovered demand exceeds a certain threshold should be excluded as well.6 Then, the system automatically restricts the set of solution candidates based on these added constraints and shows the resulting smaller set. Also other constraints may be added by the DM. It should be noted that the computation can now be done within milliseconds, since it only requires that the (stored) list of the pre-computed Pareto-optimal solutions is scanned; the DM receives an instantaneous feedback.
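Such a filtering step is indeed only a linear scan over the stored front; a minimal sketch (the tuple layout is our assumption):

```python
# Restrict the pre-computed Pareto-optimal solutions by aspiration levels.

def filter_by_aspiration(front, max_cost=float("inf"),
                         max_uncovered=float("inf")):
    """front: list of (cost, uncovered_demand, solution) triples."""
    return [entry for entry in front
            if entry[0] <= max_cost and entry[1] <= max_uncovered]
```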

If the DM wishes to see some of the remaining solutions visualized on a map, or to have some statistics on their properties, the DSS immediately provides her/him with this information. The set of solution candidates can then be further restricted by additional constraints. This process can be repeated in an interactive way as long as desired.

As soon as only a small set of solution candidates remains, their full characteristics, representations on maps etc. can be printed and left to the DM for final choice. The final choice can also take place in a group decision process; in any case, it is made by humans and not by the system. The DSS only serves as a tool to filter out a possibly huge number of suboptimal solutions and saves the time the DM might waste for the evaluation of even part of them.

5. Conclusions

We have introduced a bi-objective covering tour model with stochastic demand where the two objectives are given by (i) cost (opening cost for distribution centers plus routing cost for a fleet of vehicles) and (ii) expected uncovered demand. In our model, we take account of the fact that only a certain percentage of clients are willing to go from their homes to the nearest distribution center, depending on the distance. An application in humanitarian logistics has been outlined, but other application areas are also possible.

For the computational solution of the resulting bi-objective two-stage stochastic program with recourse, we use a branch-and-cut technique within an epsilon-constraint algorithm, applied to a sample-average version of the problem obtained from a fixed random sample of demand vectors. For this sample-average counterpart, our algorithm provides the exact set of Pareto-optimal solutions. Although even the single-objective, deterministic boundary case of the considered problem is an NP-hard Covering Tour Problem, and the complexity is further increased by the bi-objective and the stochastic generalizations, we were able to compute solutions for real-world test instances from rural communities in Senegal up to instance sizes of about 20 villages per community.

Future research should address at least three questions. First, we work with a fixed sample of scenarios and have to keep the number of scenarios comparatively low for reasons of computational efficiency. To obtain a better approximation of the sampled counterpart model to the real stochastic structure of the demand, it would be desirable to apply variance reduction techniques such as Importance Sampling, and to switch to a variable-sample approach which allows the consideration of a much larger set of random scenarios without compromising runtime too much. The Adaptive Pareto Sampling (APS) technique [7], a variable-sample method for multi-objective stochastic optimization, could be useful in this context.

Secondly, for tackling larger instances, the application of multi-objective heuristic techniques seems indispensable. Well-known multi-objective metaheuristics such as the Nondominated Sorting Genetic Algorithm II (NSGA-II) by Deb et al. [3] could be used in this context. This may again be based on either a fixed-sample or a variable-sample approach.

Third, in our model, rather simple assumptions on the choice of distribution centers by clients are made. These assumptions could be refined, e.g., by adopting concepts from consumer behavior in location models (cf. [4]).

Acknowledgments

Financial support from the Austrian Science Fund (FWF) by grant P20342 is gratefully acknowledged. We also wish to thank Karl Doerner for his kind help.

Footnotes

1Larger cities have to be decomposed into urban quarters, which are then also subsumed under the notion “villages”, in order to make the model applicable.

2The model can easily be extended to the more general situation of vehicle-dependent driving costs and/or of driving costs that are not proportional to distances.

3In the case of periodic delivery of non-perishable goods, clients might try to build up stocks, which would change the situation. We do not deal with this case here; its treatment would require an extension of the model by an inventory component.

4A solution x is called Pareto-optimal if there is no other solution that is at least as good as x in all objectives and strictly better than x in at least one objective. The set of image points (f1(x), f2(x)) of all Pareto-optimal solutions x is called the Pareto front.

5Even in cases where donors fix a designated use of their donations for a certain disaster, a decision has to be made on the specific kind and area of spending it, and usually also non-designated money is available.

6Usually, DMs shy away from making such decisions before they have an idea about the tradeoff between costs and effects.

References

1. Birge J.R., Louveaux F. Introduction to stochastic programming. Springer; New York: 1997.
2. Current J.R., Schilling D.A. The median tour and maximal covering tour problems: formulations and heuristics. European Journal of Operational Research. 1994;73:114–126.
3. Deb K., Pratap A., Agarwal S., Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation. 2002;6:182–197.
4. Drezner T., Eiselt H.A. Consumers in competitive location models. In: Drezner Z., Hamacher H.W., editors. Facility location—applications and theory. Springer; 2002.
5. Doerner K., Focke A., Gutjahr W.J. Multicriteria tour planning for mobile healthcare facilities in a developing country. European Journal of Operational Research. 2006;179:1078–1096.
6. Gendreau M., Laporte G., Semet F. The covering tour problem. Operations Research. 1997;45:568–576.
7. Gutjahr W.J. A provably convergent heuristic for stochastic bicriteria integer programming. Journal of Heuristics. 2009;15:227–258.
8. Kovács G., Spens K.M. Humanitarian logistics in disaster relief operations. International Journal of Physical Distribution and Logistics Management. 2007;37:99–114.
9. Hachicha M., Hodgson M.J., Laporte G., Semet F. Heuristics for the multi-vehicle covering tour problem. Computers and Operations Research. 2000;27:29–42.
10. Hodgson M.J., Laporte G., Semet F. A covering tour model for planning mobile health care facilities in Suhum district, Ghana. Journal of Regional Science. 1998;38:621–639.
11. Jozefowiez N., Semet F., Talbi E.-G. The bi-objective covering tour problem. Computers and Operations Research. 2007;34:1929–1942.
12. Laumanns M., Thiele L., Zitzler E. An efficient, adaptive parameter variation scheme for metaheuristics based on the epsilon-constraint method. European Journal of Operational Research. 2006;169:932–942.
13. Liefooghe A., Basseur M., Jourdan L., Talbi E.-G. Combinatorial optimization of stochastic multi-objective problems: an application to the flow-shop scheduling problem. In: Proceedings of EMO 2007. Lecture Notes in Computer Science, vol. 4403. Springer; 2007. pp. 457–471.
14. Naddef D., Rinaldi G. Branch-and-cut algorithms for the capacitated VRP. In: Toth P., Vigo D., editors. The vehicle routing problem. SIAM Monographs on Discrete Mathematics and Applications; Philadelphia: 2002. pp. 53–84.
15. Nolz P.C., Doerner K.F., Gutjahr W.J., Hartl R.F. A biobjective metaheuristic for disaster relief operation planning. In: Coello Coello C.A., Dhaenens C., Jourdan L., editors. Advances in multi-objective nature inspired computing. Springer; 2009. pp. 157–177.
