1 Introduction

Mixed-integer nonlinear optimization (MINLP) arises in many applications across engineering, manufacturing, and the natural sciences (Boukouvala et al. 2016). An important MINLP subclass features exclusively convex nonlinearities, i.e., the nonconvexity of the MINLP comes only from the discrete variables (Kronqvist et al. 2019). Convex MINLP is highly relevant in diverse fields including process synthesis (Durán-Peña 1984; Duran and Grossmann 1986a), portfolio optimization (Bienstock 1996; Frangioni and Gentile 2006; Bonami and Lejeune 2009), and constrained layout (Castillo et al. 2005; Sawaya and Grossmann 2007). For MINLP with nonconvex nonlinearities, e.g., heat integration of chemical processes (Duran and Grossmann 1986c) and pooling problems (Misener and Floudas 2009), optimization algorithms assuming convex nonlinearities can serve as excellent primal heuristics for the original optimization problem (Duran and Grossmann 1986b; Bonami et al. 2008; D’Ambrosio et al. 2012).

Convex MINLP represents a highly successful subclass of optimization problems, e.g., algorithm developers often construct convex approximations of nonconvex engineering relationships (Geiler et al. 2015) or decompose their optimization problems into a series of convex MINLP problems (Lundell and Westerlund 2018; Nowak et al. 2018). A wide range of efficient solver software has been developed specifically for convex MINLP (Grossmann et al. 2002; Bonami et al. 2008; Lastusilta 2011; Bernal et al. 2020; Lundell et al. 2020; Mahajan et al. 2017; Kröger et al. 2018; Melo et al. 2020). The success of convex MINLP derives from the seminal work of Duran and Grossmann (1986b) in developing the outer approximation (OA) algorithm. The work by Duran and Grossmann (1986b) became pivotal in solving convex MINLP problems because of the algorithm’s strong convergence properties for a wide range of problem classes (Quesada and Grossmann 1992; Fletcher and Leyffer 1994) and its speed in solving practical problems (Bonami et al. 2008). A recent benchmark by Kronqvist et al. (2019) showed that several of the most efficient convex MINLP solvers are based on the OA algorithm.

The concept of using an outer approximation of the nonlinear constraints for MINLP problems, developed by Duran and Grossmann (1986b) and Geoffrion (1972), forms the core of several other convex MINLP algorithms, e.g., extended cutting plane (ECP) (Westerlund and Petterson 1995; Westerlund and Pörn 2002), feasibility pump (Bonami and Gonçalves 2012), extended supporting hyperplane (ESH) (Kronqvist et al. 2016), and the center-cut algorithm (Kronqvist et al. 2018a). Further developments of the OA algorithm, incorporating quadratic approximations and regularization, have been presented by Su et al. (2018) and Kronqvist et al. (2018b). These algorithms could collectively be referred to as outer approximation type algorithms, although this classification is seldom used.

This paper focuses on deriving strong cutting planes for convex MINLP problems, resulting in tight outer approximations, by exploiting disjunctive structures in the problem. We use cuts obtained by the ESH algorithm as a basis, and we develop a framework for strengthening the cuts by considering the integer restrictions. The cut strengthening technique is not unique to the ESH algorithm and could also be used with an OA, ECP or generalized Benders decomposition (Geoffrion 1972) framework. The main motivation behind using the ESH algorithm is that the algorithm tends to generate a single strong cut per iteration. The ESH cuts are actually as tight as possible with regard to the nonlinear constraints (Kronqvist et al. 2016), but they do not in general form supporting hyperplanes to the convex hull of all integer feasible solutions. Here we develop a framework for strengthening the ESH cuts, which results in two new types of cuts that are always as tight as or tighter than the ESH cut. The new cuts can give both a tighter representation of the nonlinear constraints as well as a tighter continuous relaxation. By obtaining a tighter outer approximation of the nonlinear constraints, we can reduce both the number of iterations and the time needed to solve problems.

Cutting planes that strengthen the continuous relaxation are nowadays an essential part of an efficient mixed-integer linear programming (MILP) solver (Achterberg and Wunderling 2013; Linderoth and Lodi 2011), and there is an active interest in developing similar cuts for convex MINLP. Disjunctive cutting planes for convex MINLP originate from the fundamental contributions of Ceria and Soares (1999) and Stubbs and Mehrotra (1999), and further developments are presented in (Trespalacios and Grossmann 2016). Lift-and-project cuts were first introduced in MILP by Balas et al. (1993), and this technique has later been adopted within convex MINLP. By linearizing the constraints, a polyhedral outer approximation can be used to derive lift-and-project cuts through a cut generating LP (Zhu and Kuno 2006; Bonami 2011; Kılınç et al. 2017; Serra 2020). An alternative approach is presented by Lodi et al. (2019), where they obtain cuts directly by solving cut generating conic programs. Other types of cuts used within MINLP include different types of mixed-integer rounding cuts (Gomory 1960; Atamtürk and Narayanan 2010), reformulation linearization technique (RLT) based cuts (Sherali and Adams 2013; Misener et al. 2015), and split cuts (Modaresi et al. 2015).

The cut strengthening techniques presented here can be viewed as an alternative approach to the previously mentioned lift-and-project and disjunctive cuts. However, our cut strengthening procedure is more focused on obtaining a tight MILP relaxation than on achieving the best improvement of the continuous relaxation. The cuts are generated by selecting a disjunction of the MINLP problem and strengthening an ESH cut over the convex hull of the selected disjunction. Trespalacios and Grossmann (2016) use a somewhat similar idea, where they derive a supporting hyperplane for a nonlinear disjunction by solving a separation problem. Instead of solving a separation problem, we strengthen the ESH cut by deriving the smallest possible right-hand side values to the ESH cut that are still valid for each term of the disjunction. This enables us to effectively use individual right-hand side values for each term of the disjunction, making the cut tight for each disjunct. A similar approach is used by Trespalacios and Grossmann (2015) to construct tighter big-M reformulations of generalized disjunctive programs. We determine the right-hand side values of the cuts by solving independent convex NLP problems in the original variable space and do not rely on the convex hull formulation of the disjunctions. By doing so, numerical difficulties associated with the perspective function are avoided, and instead of solving a larger (lifted) problem, we solve several smaller independent (parallelizable) problems. This approach also enables us to identify some infeasible integer assignments and to handle numerical tolerances in a straightforward fashion. To the authors’ best knowledge, this is a novel cut strengthening technique for convex MINLP.

The paper is organized as follows. Section 2 gives a short description of the ESH algorithm, along with the necessary assumptions on the MINLP problems. Section 3 presents the theory and techniques used for the cut strengthening, and a cut strengthening algorithm is presented in Sect. 4. Section 5 presents an algorithm for solving convex MINLP problems that combines the ESH algorithm with the cut strengthening techniques. Finally, some numerical results are presented in Sect. 6.

2 Background

First, we define the class of problems considered within the paper and state the assumptions needed to guarantee convergence of the ESH algorithm. The disjunctive structure that the cut strengthening technique builds upon is also presented in this section. The second part of this section briefly describes the ESH algorithm, which is later used to generate cuts and forms the basis of the convex MINLP algorithm in Sect. 5.

2.1 Problem statement

The most commonly used, and most practical, definition of a convex MINLP problem is that all of the nonlinear constraints and the objective are given by convex functions (Gupta and Ravindran 1985; Quesada and Grossmann 1992; Westerlund and Petterson 1995; Bonami et al. 2012). Throughout the paper, we use this definition of convexity. Without loss of generality, we only consider convex MINLP problems with the following structure

$$\begin{aligned} \begin{array}{ll} \min \limits _{\mathbf {x}} &\quad \mathbf {c}^\top \mathbf {x} \\ {\text {s.t.}} &\quad \mathbf {A}\mathbf {x} \le \mathbf {b},\\ &\quad \mathbf {B}\mathbf {x} = \mathbf {d},\\ &\quad g_j\left( \mathbf {x}\right) \le 0, \quad \forall j = 1, 2, \ldots , l, \\ &\quad \mathbf {x} \in {{\mathbb {R}}^{n}},\\ &\quad x_i \in {{\mathbb {Z}}}, \quad \forall i \in I_{{{\mathbb {Z}}}}, \end{array} \end{aligned}$$
(MINLP)

where \(g_j: {{\mathbb {R}}^{n}}\rightarrow {{\mathbb {R}}}\) are convex continuously differentiable functions. Here, \(I_{{\mathbb {Z}}}\) is a set containing the indices of all the integer variables. To clarify the notation, \(x_i\) refers to the i-th element of the variable vector \(\mathbf {x}\). The feasible set defined by the nonlinear constraints will be referred to as the nonlinear feasible set, and it is given by

$$\begin{aligned} N=\left\{ \mathbf {x}\in {{\mathbb {R}}}^n \ | \ g_j(\mathbf {x}) \le 0 \quad \forall j=1,\ldots , l\right\} . \end{aligned}$$
(1)

To simplify the notation, we will also introduce a set L defined by the linear constraints and a set Y given by the variable domains

$$\begin{aligned} \begin{aligned}&L=\left\{ \mathbf {x}\in {{\mathbb {R}}}^n \ | \ \mathbf {A}\mathbf {x} \le \mathbf {b},\ \mathbf {B}\mathbf {x} = \mathbf {d} \right\} ,\\&Y=\left\{ \mathbf {x}\in {{\mathbb {R}}}^n \ | \ x_i \in {{\mathbb {Z}}}\quad \forall i \in I_{{{\mathbb {Z}}}}\right\} . \end{aligned} \end{aligned}$$

To ensure convergence of the ESH algorithm, we need to make the following assumptions on problem (MINLP).

Assumption 1

The linear constraints form a compact set.

Assumption 2

The continuous relaxation of problem (MINLP) satisfies Slater’s condition (Slater 1950).

For the cut strengthening procedure, we make the following assumption on the problem structure.

Assumption 3

The MINLP problem contains at least one exclusive selection constraint over binary variables, i.e., \(\exists \ I_D \subset I_{{{\mathbb {Z}}}}:\ x_i \in \left\{ 0, 1\right\} \ \ \forall i \in I_D\), such that one of the constraints

$$\begin{aligned}&\sum \limits _{i \in I_D} x_i = 1, \end{aligned}$$
(2)
$$\begin{aligned}&\sum \limits _{i \in I_D} x_i \le 1, \end{aligned}$$
(3)

appears in the problem.

For the sake of simplicity and clarity, we will throughout the paper only focus on the exclusive selection constraint (2). The second type of exclusive selection constraint (3) can trivially be converted into the first type by introducing a slack binary variable and can be handled by the same approach.
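Concretely, introducing an auxiliary slack binary variable \(x_s\) (which is not part of the original problem), constraint (3) takes the form of constraint (2):

$$\begin{aligned} \sum \limits _{i \in I_D} x_i + x_s = 1, \quad x_s \in \{0,1\}. \end{aligned}$$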

The exclusive selection constraints arise, for example, from the representation of disjunctive constraints through the so-called big-M or convex hull formulation (Balas 1979; Raman and Grossmann 1994; Trespalacios and Grossmann 2014). Note that we do not restrict all of the integer variables to be binary variables, nor do we assume the problems to have disjunctive constraints of a specific type. The cut strengthening simply requires that the problem contains at least one exclusive selection constraint, which is used for strengthening the cut. However, the cut strengthening is most powerful when the problem contains big-M constraints, which result in a weak continuous relaxation. Therefore, we focus on problems containing big-M constraints.

For the cut strengthening to be computationally efficient, the number of elements in \(I_D\) should be smaller than the number of elements in \(I_{{\mathbb {Z}}}\). Throughout the paper, we also assume that the main challenges in solving problem (MINLP) arise from the integer restrictions. Consequently, we assume that a continuous relaxation of the problem is significantly easier to solve than the MILP relaxations used by OA, ECP, and ESH. This is often the case for convex MINLP problems, as shown, for example, by the numerical results in Muts et al. (2020) and Su et al. (2015).

2.2 The extended supporting hyperplane algorithm

The ESH algorithm was presented by Kronqvist et al. (2016) as a method for solving convex MINLP problems, and it builds upon ideas presented by Veinott Jr (1967). It was proven by Eronen et al. (2017) that the ESH algorithm is directly applicable to nonsmooth MINLP problems with constraints given by pseudoconvex functions. Properties of the ESH algorithm have also been further analyzed by Serrano et al. (2019).

The ESH algorithm constructs a tight polyhedral outer approximation of the nonlinear feasible set N, by generating supporting hyperplanes to the set. The polyhedral outer approximation at iteration k is given by

$$\begin{aligned} \hat{N}_k =\left\{ \mathbf {x}\in {{\mathbb {R}}}^n \ | \ \nabla g_j\left( \bar{\mathbf {x}}^i\right) ^\top \left( \mathbf {x} - \bar{\mathbf {x}}^i\right) \le 0 \quad \forall i=1,2,\ldots , k,\ j\in A_i \right\} , \end{aligned}$$
(4)

where \(\bar{\mathbf {x}}^i\) are points on the boundary of N and \(A_i\) contains the indices of all constraints active at \(\bar{\mathbf {x}}^i\). From convexity it directly follows that \(N \subseteq \hat{N}_k\), and \(\hat{N}_k\) is commonly referred to as an outer approximation of N.

A new trial solution \(\mathbf {x}^{k+1}\) is obtained by solving the following MILP relaxation

$$\begin{aligned} \begin{array}{lll} \mathbf {x}^{k+1}\in &\quad \mathop {\hbox {arg min}}\limits _{\mathbf {x}} &\quad \mathbf {c}^\top \mathbf {x}\\ &\quad {\text {s.t.}} &\quad \mathbf {x} \in L \cap \hat{N}_k \cap Y. \end{array} \end{aligned}$$
(MILP-r)

A lower bound on the optimal objective value of problem (MINLP) is given by \(\mathbf {c}^\top \mathbf {x}^{k+1}\), where \(\mathbf {x}^{k+1}\) is an optimal solution to the MILP relaxation.

The trial solutions obtained by solving problem (MILP-r) will all lie outside the nonlinear feasible set N until the very last iteration. Therefore, linearizing the nonlinear constraints at the trial solutions \(\mathbf {x}^{k}\) would, in general, not produce supporting hyperplanes to N and would result in weaker cuts. To obtain supporting hyperplanes, ESH performs an approximative projection of the trial solution \(\mathbf {x}^{k}\) onto \(N\cap L\). A point in the interior of \(N\cap L\) is needed for the projection, and such a point is obtained by solving the convex continuous problem

$$\begin{aligned} \begin{array}{lll} \mathbf {x}_{\rm {int}}, \mu \in &\quad \mathop {\hbox {arg min}}\limits _{\mathbf {x}, \mu } &\quad \mu \\ &\quad {\text {s.t.}} &\quad g_j\left( \mathbf {x}\right) \le \mu , \quad \forall j = 1, 2, \ldots , l, \\ &&\quad \mathbf {x} \in L, \\ &&\quad \mu \in {{\mathbb {R}}}. \end{array} \end{aligned}$$
(NLP-IP)

For the approximative projection of \(\mathbf {x}^{k}\), we define the one-dimensional function

$$\begin{aligned} F(\lambda ) = \max _{j}\left\{ g_j\left( \lambda \mathbf {x}_{\rm {int}} + (1-\lambda )\mathbf {x}^k\right) \right\} , \end{aligned}$$
(5)

for \(\lambda \in [0, 1]\). Using a simple root-search algorithm we can obtain a \(\lambda ^k\) such that \(F\left( \lambda ^k\right) = 0\). The approximative projection of \(\mathbf {x}^{k}\) onto \(N\cap L\) is then given by

$$\begin{aligned} \bar{\mathbf {x}}^k = \lambda ^k\mathbf {x}_{\rm {int}} + (1-\lambda ^k)\mathbf {x}^k. \end{aligned}$$
(6)

Now, if the active constraints are linearized at \(\bar{\mathbf {x}}^k\) we obtain the following cuts

$$\begin{aligned} \nabla g_j\left( \bar{\mathbf {x}}^k\right) ^\top \left( \mathbf {x} - \bar{\mathbf {x}}^k\right) \le 0 \quad \forall j\in A_k, \end{aligned}$$
(7)

which form supporting hyperplanes to \(N\cap L\). The supporting hyperplanes are then added to the current polyhedral outer approximation to form \({\hat{N}}_{k+1}\), which ensures that \(\bar{\mathbf {x}}^k \notin {\hat{N}}_{k+1}\).
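To make the projection and cut generation concrete, the following minimal Python sketch implements the bisection root search for \(F(\lambda )=0\) and the linearization in Eq. (7). It assumes the \(g_j\) and their gradients are available as callables on NumPy arrays; all names and tolerances are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def esh_projection_and_cuts(x_int, x_k, g_list, grad_list,
                            lam_tol=1e-12, act_tol=1e-8):
    """Approximative projection of x_k onto N (Eq. (6)) via bisection on
    F(lambda) (Eq. (5)), followed by linearizing the active constraints."""
    F = lambda lam: max(g(lam * x_int + (1.0 - lam) * x_k) for g in g_list)
    lo, hi = 0.0, 1.0  # F(0) > 0 since x_k is infeasible, F(1) < 0 (interior)
    while hi - lo > lam_tol:
        mid = 0.5 * (lo + hi)
        if F(mid) > 0.0:
            lo = mid   # still outside N: move towards the interior point
        else:
            hi = mid   # inside N: move back towards x_k
    x_bar = hi * x_int + (1.0 - hi) * x_k    # point on the boundary of N
    cuts = []
    for g, grad in zip(g_list, grad_list):
        if abs(g(x_bar)) < act_tol:          # constraint active at x_bar
            a = grad(x_bar)
            cuts.append((a, float(a @ x_bar)))  # cut: a^T x <= a^T x_bar
    return x_bar, cuts
```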

The ESH algorithm repeats the procedure of solving (MILP-r) and improving the outer approximation by generating supporting hyperplanes. To improve the computational performance, the algorithm starts by further relaxing (MILP-r) and solving LP relaxations to quickly generate an outer approximation. For more details and computational enhancements on the ESH algorithm see Lundell et al. (2018).

The cuts generated by the ESH algorithm are as tight as possible with regard to \(N \cap L\). However, there is no guarantee that the algorithm generates supporting hyperplanes to the convex hull of \(N\cap L\cap Y\). Therefore, it may be possible to further strengthen the cuts by considering the integrality restrictions. To illustrate the possible strengthening of the cuts, consider the following example

$$\begin{aligned} \begin{array}{ll} \min \limits _{\mathbf {x}} &\quad -x_1 -x_2 \\ {\text {s.t.}} &\quad (x_1 - 1)^2 + (x_2 - 2)^2 \le 1 + 29.944(1-x_3), \\ &\quad (x_1 - 2)^2 + (x_2 - 5)^2 \le 1 + 29.944(1-x_4), \\ &\quad (x_1 - 4)^2 + (x_2 - 1)^2 \le 1 + 29.944(1-x_5), \\ &\quad x_3 + x_4 + x_5 = 1,\\ &\quad 0 \le x_1 \le 8, \ 0 \le x_2 \le 8,\\ &\quad x_1,x_2 \in {{\mathbb {R}}}, \ x_3, x_4, x_5 \in \{0, 1\}. \end{array} \end{aligned}$$
(EX1)

The example contains the disjunctive constraint that the \((x_1,x_2)\)-variables must be within one of three circles, which is represented by the big-M formulation. The value 29.944 is, in this case, the tightest common value for the big-M coefficients. A stronger problem formulation could simply be obtained by using individual M values for each constraint, which can easily be determined as described in the Appendix. We only use the weaker formulation in order to better highlight differences between the cuts. Figure 1 shows the feasible set of problem (EX1) along with the continuously relaxed feasible set projected down onto the \((x_1,x_2)\)-space.

Fig. 1

The dark circles show the feasible set of problem (EX1) projected onto the \((x_1,x_2)\)-space. The light gray area in the left figure shows the feasible set of the continuous relaxation. The right figure also shows the projection of the outer approximation obtained by the first iteration of the ESH algorithm. Note that a supporting hyperplane to \(N\cap L\) does not necessarily form a supporting hyperplane in a projected space, as shown in the figure

In the first iteration, the ESH algorithm will generate the following cut

$$\begin{aligned} 5.920x_1 + 4.536x_2 +29.944x_3 \le 59.249, \end{aligned}$$
(8)

which forms a supporting hyperplane to \(N\cap L\) but not a supporting hyperplane to the convex hull of \(N \cap L\cap Y\). From Fig. 1, it is clear that the cut given by Eq. (8) is not as tight as possible when considering the integer properties. In the next section, we present a technique to further tighten the cut by utilizing the disjunctive structures of the MINLP problem.

3 Cut strengthening

From the example in the previous section, it can be observed that the ESH cut can be tightened by simply reducing the right-hand side while still remaining valid for the integer feasible set, i.e., \(N\cap L \cap Y\). To reduce the right-hand side, we will consider an exclusive selection constraint, see Assumption 3, and determine the smallest right-hand side values for each selection. This enables us to strengthen the cut by reducing the right-hand side alone, or to strengthen it further by assigning individual right-hand side values to each assignment of the exclusive selection constraint.

First, we select an index set \(I_D \subset I_{{\mathbb {Z}}}\) that contains the indices of all the binary variables included in an exclusive selection constraint of the MINLP problem. By using the ESH algorithm we obtain the cut

$$\begin{aligned} {\alpha }^\top \mathbf {x} \le \beta , \end{aligned}$$
(9)

which forms a tight valid inequality for \(N\cap L\). To tighten cut (9), consider the following disjunctive programming (DP) problem

$$\begin{aligned} \begin{array}{lll} z^*=&\,\underset{\mathbf {x}}{\max } &\quad {\alpha }^\top \mathbf {x} \\ &\quad {\text {s.t.}} &\quad \bigvee _{i \in I_D}\begin{bmatrix} \mathbf {x} \in N \cap L\\ x_i = 1 \\ x_j = 0 \ \ \forall j \in I_D\setminus i \end{bmatrix}. \end{array} \end{aligned}$$
(10)

This DP problem can be solved as a convex NLP through the convex hull formulation (Ceria and Soares 1999; Stubbs and Mehrotra 1999; Lee and Grossmann 2000). However, the convex hull formulation can cause numerical difficulties, such as division by zero and non-smoothness (Sawaya and Grossmann 2007), and the resulting problem contains \(|I_D|\) copies of the variables. Instead of solving (10) as a single large problem, we solve it as smaller individual convex problems by considering the following alternative formulation of problem (10)

$$\begin{aligned} \begin{array}{ll} z^*= \underset{i \in I_D}{\max }\quad b_i = &\,\underset{\mathbf {x}}{\max }\ {\alpha }^\top \mathbf {x} \\ &\begin{array}{ll} {\text {s.t.}} &\quad \mathbf {x} \in N \cap L,\\ &\quad x_i = 1, \\ &\quad x_j = 0, \ \ \forall j \in I_D\setminus i. \end{array} \end{array} \end{aligned}$$
(11)

By solving each inner problem of (11) separately we can determine \(z^*\) as the largest \(b_i\). This approach requires \(|I_D|\) independent convex NLP problems to be solved, but computationally it can be more efficient than solving a single problem with \(|I_D|\) copies of the variables. Using \(z^*\) as the new right-hand side value of cut (9), we form the tightened cut

$$\begin{aligned} {\alpha }^\top \mathbf {x} \le z^*. \end{aligned}$$
(12)
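As an illustration, the following minimal Python sketch computes \(z^*\) from the inner problems of (11). The solve_inner(i) routine is a hypothetical interface that maximizes \({\alpha }^\top \mathbf {x}\) over \(N \cap L\) with the partial assignment fixed, returning the optimal value \(b_i\), or None if that assignment is infeasible.

```python
def single_tightening(beta, I_D, solve_inner):
    """Compute the tightened right-hand side z* for cut (12)."""
    # The |I_D| inner problems are completely independent, so they can be
    # solved in parallel; a sequential loop is used here for clarity.
    b = {i: solve_inner(i) for i in I_D}
    z_star = max(v for v in b.values() if v is not None)
    # Proposition 1 below guarantees z_star <= beta, so the tightened cut
    # alpha^T x <= z_star is at least as tight as the original ESH cut.
    return z_star, b
```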

Proposition 1

The cut given by Eq. (12) forms a valid inequality for \(N \cap L \cap Y\), and is at least as tight as the cut given by Eq. (9).

Proof

From optimality of problem (10) it directly follows that cut (12) forms a supporting hyperplane to the feasible set of problem (10), which contains \(N\cap L \cap Y\). Since the feasible set of problem (10) is contained within \(N\cap L\), it follows that \(z^* \le \beta\). \(\square\)

Solving (10) as smaller individual convex problems also enables us to further tighten the cut. To further strengthen the cut, we consider each term of the disjunction in problem (10) and form a convex NLP problem for each \(i \in I_D\)

$$\begin{aligned} \begin{array}{lll} b_i=&\,\underset{\mathbf {x}}{\max } &\quad {\alpha }^\top \mathbf {x} \\ &{\text {s.t.}} &\quad \mathbf {x} \in N \cap L,\\ &&\quad x_i = 1, \\ &&\quad x_j = 0, \ \ \forall j \in I_D\setminus i. \end{array} \end{aligned}$$
(NLP-i)

Note that each problem (NLP-i) is a subproblem of problem (11). To simplify the derivation and analysis, we first assume that all \(i \in I_D\) result in a feasible problem (NLP-i). Solving problem (NLP-i) for each \(i \in I_D\) gives the values \(b_i\) that can be used as individual right-hand side values for each integer assignment of the exclusive selection constraint (2). A new strengthened cut is then given by

$$\begin{aligned} {\alpha }^\top \mathbf {x} \le \sum \limits _{i \in I_D} b_ix_i, \end{aligned}$$
(13)

and the properties of the new cut are presented in the following two theorems.

Theorem 1

The cut given by Eq. (13) forms a valid inequality for \(N\cap L\cap Y\).

Proof

The theorem is easily proven by contradiction. First, assume \(\exists \ \bar{\mathbf {x}} \in N\cap L \cap Y :\)

$$\begin{aligned} {\alpha }^\top \bar{\mathbf {x}} > \sum \limits _{i \in I_D} b_i{\bar{x}}_i. \end{aligned}$$
(14)

Due to the exclusive selection constraint, exactly one of the binary variables \({\bar{x}}_i,\ i \in I_D\), can be nonzero. Let j be the index of the nonzero binary variable; the strict inequality (14) can then be written as

$$\begin{aligned} {\alpha }^\top \bar{\mathbf {x}} > b_j. \end{aligned}$$
(15)

By assumption, \(\bar{\mathbf {x}}\) must satisfy all constraints of problem (NLP-i) with \(i=j\). This implies that \(b_j\) cannot be the optimal objective value of that problem, which leads to a contradiction. \(\square\)

Before analyzing the tightness of the cuts, we first describe our definition of a tighter cut. Here we consider cut (13) to be tighter than cut (12) in the sense that any \(\mathbf {x}\) satisfying Eq. (13) will satisfy Eq. (12), but not vice versa. In integer programming, this tightness relation is commonly referred to as cut (13) strictly dominating cut (12), e.g., see Balas and Margot (2013).

Theorem 2

The cut given by Eq. (13) is always as tight as or tighter than the cut given by Eq. (12).

Proof

Since \(z^*\) is chosen as the maximum of \({\alpha }^\top \mathbf {x}\) over all integer assignments of the exclusive selection constraint intersected with \(N\cap L\), it follows that \(z^* = \max _{i\in I_D} \{b_i\}\). Therefore, each \(b_i\) can be split into two parts \(b_i = z^* -\varDelta _i\), where each \(\varDelta _i\ge 0\). The cut given by Eq. (13) can now be written as

$$\begin{aligned} {\alpha }^\top \mathbf {x} \le z^* -\sum \limits _{i \in I_D} \varDelta _i x_i, \end{aligned}$$
(16)

proving that the cut is always as tight as cut (12). Furthermore, if any \(\varDelta _i >0\), then the cut given by (13) strictly dominates cut (12). \(\square\)

Earlier we assumed that all \(i \in I_D\) result in a feasible problem (NLP-i); this assumption is, however, not necessary for the cut strengthening. In fact, finding infeasible integer assignments enables us to eliminate the corresponding binary variables, as described in the following proposition.

Proposition 2

If \(i \in I_D\) results in an infeasible problem (NLP-i), then the binary variable \(x_i\) can be eliminated by permanently fixing it to zero.

Proof

In problem (NLP-i) all variables, except those included in the exclusive selection constraint, are relaxed to continuous variables and they are only restricted by the original constraints. Variable \(x_i\) is fixed to one, which automatically fixes the other variables in the exclusive selection constraint to zero. Therefore, the only case where problem (NLP-i) can be infeasible is when \(x_i=1\) is an infeasible partial integer assignment to the MINLP problem. \(\square\)

Fig. 2

The figures show the true feasible set of problem (EX1) and the continuously relaxed feasible set projected onto the \((x_1,x_2)\)-space. The left figure shows the outer approximation given by cut (17) and the right figure shows the outer approximation given by cut (18)

To illustrate the difference between the two cuts, we again consider problem (EX1). By applying the cut strengthening technique to the cut given by the ESH algorithm, we can generate the following two cuts

$$\begin{aligned}&5.920x_1 + 4.536x_2 +29.944x_3 \le 52.029, \end{aligned}$$
(17)
$$\begin{aligned}&5.920x_1 + 4.536x_2 \le (52.029-29.944)x_3 + 41.192x_4 + 35.451x_5. \end{aligned}$$
(18)

The outer approximations given by the two different cuts are shown in Fig. 2. The figure shows a clear advantage of the second cut, which results in a significantly tighter linear relaxation of the MINLP problem. However, comparing Figs. 1 and 2 shows that both cuts are significantly stronger than the standard ESH cut.

In an outer approximation type algorithm, it is not only important to obtain a tight continuous relaxation, but also to obtain a tight MILP relaxation, i.e., a tight linear relaxation for given integer assignments. The two are obviously related, but it is possible to have a tight MILP relaxation with a weak continuous relaxation. To further illustrate the differences between the two types of cuts, we analyze how the feasible region of the cuts for problem (EX1) varies with the feasible integer assignments. Figure 3 shows the feasible region of the cut given by Eq. (17) for each feasible integer assignment. The figure shows that cut (17) is tight for one of the feasible integer assignments, but not as tight as possible for the other two.

Figure 4 shows that the cut given by Eq. (18) forms a supporting hyperplane to the feasible set of each term of the disjunction in problem (EX1), i.e., for each feasible integer assignment the cut is as tight as possible. The example highlights the fact that the individually tightened cuts, i.e., cuts formed by Eq. (13), can give both significantly tighter continuous and MILP relaxations than the cut given by Eq. (12) and the original ESH cut.

Fig. 3

The dark circles show the feasible set of problem (EX1) projected onto the \((x_1,x_2)\)-space. The light gray area in the figures shows the feasible set of the continuous relaxation. Furthermore, the figures also show the feasible set of cut (17) for each feasible integer assignment

Fig. 4

The figures show the feasible set of cut (18) for each feasible integer assignment in the \((x_1,x_2)\)-space

In this section, we have presented a framework for strengthening cuts obtained by the ESH algorithm. However, the same approach can also be used to strengthen cuts obtained by a similar algorithm, such as ECP, OA or generalized Benders decomposition. The next section will focus more on the computational aspects, and how to practically utilize the cut strengthening framework within a solver.

4 A cut strengthening algorithm

This section focuses on the computational aspects and how to utilize the cut strengthening techniques from the previous section in an algorithm. We present a simple strategy for selecting one out of multiple exclusive selection constraints, and describe some computational enhancements along with a discussion on how to deal with tolerances.

The cut strengthening techniques in the previous section utilize the exclusive selection constraint (2) to tighten cuts of the type given by Eq. (9). However, MINLP problems can contain multiple exclusive selection constraints, e.g., originating from multiple disjunctive constraints. Given a cut, one must therefore choose which exclusive selection constraint, and the corresponding variables, to use for the tightening procedure. Ideally one wants to choose the exclusive selection constraint with the binary variables \(x_{i}\) for \(i \in I_D\) such that the coefficients \(b_i\) obtained by solving (NLP-i) are as small as possible. However, such an optimal choice cannot trivially be determined, and instead, we make the choice based on the variable connections.

Suppose that we have obtained cut (9), which is given by linearizing the nonlinear constraint \(g_j(\mathbf {x})\le 0\). To compare the different exclusive selection constraints, and their corresponding variables \(x_{i}\) for \(i \in I_D\), we check the connections of the variables \(x_{i}\) for \(i \in I_D\) to the constraint \(g_j(\mathbf {x}) \le 0\). Here we consider two types of connections, direct connections and step-one connections. Variable \(x_{i}\) is directly connected to \(g_j(\mathbf {x}) \le 0\) if the variable is included in the constraint. In a step-one connection, the variable \(x_{i}\) is included in another constraint (linear or nonlinear) that has at least one variable in common with \(g_j(\mathbf {x}) \le 0\). The number of direct connections of an exclusive selection constraint is given by the number of variables in \(I_D\) that are directly connected to the nonlinear constraint \(g_j(\mathbf {x}) \le 0\), and similarly for the step-one connections. Here, we use the following heuristic rule for selecting an exclusive selection constraint.

Rule 1

Given cut (9), select the exclusive selection constraint with the largest number of direct connections to the corresponding nonlinear constraint. If there are no direct connections, choose the one with the largest number of step-one connections. In case of multiple exclusive selection constraints with the same number of connections, choose one of them randomly.
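The following Python sketch illustrates Rule 1, assuming a simple sparsity-pattern representation: cut_vars is the variable index set of the nonlinear constraint behind the cut, exclusive_sets holds the candidate index sets \(I_D\), and constraint_vars contains the variable index set of every constraint in the problem. All names are hypothetical.

```python
import random

def select_exclusive_constraint(cut_vars, exclusive_sets, constraint_vars):
    """Pick an exclusive selection constraint according to Rule 1."""
    def direct(I_D):
        # variables of I_D appearing directly in the cut's constraint
        return sum(1 for i in I_D if i in cut_vars)

    def step_one(I_D):
        # variables of I_D sharing some constraint with a variable
        # of the cut's constraint (a step-one connection)
        return sum(1 for i in I_D
                   if any(i in vs and vs & cut_vars for vs in constraint_vars))

    score = direct if max(direct(s) for s in exclusive_sets) > 0 else step_one
    best = max(score(s) for s in exclusive_sets)
    candidates = [s for s in exclusive_sets if score(s) == best]
    return random.choice(candidates)  # break ties randomly, as in Rule 1
```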

A feasible solution \(\hat{\mathbf {x}}\) to the MINLP problem can also be utilized within the cut strengthening procedure. This is done by simply including the objective reduction constraint

$$\begin{aligned} \mathbf {c}^\top {\mathbf {x}} \le \mathbf {c}^\top \hat{\mathbf {x}}, \end{aligned}$$
(19)

as a constraint in problem (NLP-i). Including the objective reduction constraint can further reduce the coefficients \(b_i\), resulting in stronger cuts. Furthermore, the objective reduction constraint can render some partial integer assignments infeasible in problem (NLP-i). Without constraint (19), problem (NLP-i) can only be infeasible if the partial assignment, i.e., \(x_i=1,\ i\in I_D,\ x_j=0 \ \forall j\in I_D \setminus i\), is infeasible for the MINLP problem; with constraint (19) included, infeasibility instead shows that the partial assignment cannot contain a solution better than the incumbent. In both cases, finding such infeasibilities is desirable since it allows us to eliminate a variable from the MINLP problem by fixing it to zero.

Including the previously tightened cuts in problem (NLP-i) can also improve performance by tightening the continuous relaxation. Obtaining a tighter continuous relaxation of problem (NLP-i) can further strengthen the cut and reveal infeasibilities. In the numerical results presented in Sect. 6, we noticed that including the tightened cuts and an objective reduction constraint can greatly help in identifying infeasible or non-optimal partial integer assignments. The ability to identify and eliminate these from the search space can result in fewer iterations, but it can also reduce the complexity of the MILP relaxations used by algorithms such as ESH, ECP, and OA.

The cut strengthening techniques are summarized as pseudo-code in Algorithm 1. In the algorithm, the two different cuts from the previous section are considered as different strategies. The cut given by Eq. (13) is referred to as a Multi Tightening (MT) strategy, since it effectively uses multiple values for the right-hand side. Similarly, the cut given by Eq. (12) is referred to as a Single Tightening (ST) strategy.

Algorithm 1 (cut strengthening pseudo-code)
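To complement the pseudo-code, a condensed Python sketch of Algorithm 1 is given below. The solve_nlp_i(i) interface is an assumption: it solves problem (NLP-i), possibly augmented with the objective reduction constraint (19) and previously tightened cuts, and returns the optimal value or None on infeasibility; the \(\epsilon\) relaxation anticipates the tolerance discussion in Sect. 4.1.

```python
def strengthen_cut(alpha, beta, I_D, solve_nlp_i, strategy="MT", eps=1e-6):
    """Strengthen the ESH cut alpha^T x <= beta over one disjunction."""
    b, fixed_to_zero = {}, []
    for i in I_D:                    # independent, parallelizable subproblems
        b_i = solve_nlp_i(i)         # max alpha^T x s.t. x in N ∩ L, x_i = 1
        if b_i is None:
            fixed_to_zero.append(i)  # Proposition 2: permanently fix x_i = 0
        else:
            b[i] = b_i + eps         # relax by the NLP solver tolerance
    if strategy == "ST":             # cut (12): alpha^T x <= z*
        z_star = max(b.values())
        rhs = {i: z_star for i in b}
    else:                            # cut (13): alpha^T x <= sum_i b_i x_i
        rhs = b
    return rhs, fixed_to_zero
```

For the ST strategy all remaining disjuncts share the right-hand side \(z^*\), whereas the MT strategy keeps the individual values \(b_i\).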

4.1 Computational comments

When solving an optimization problem to generate a cut, it is important to take the solver tolerances into consideration. The tolerances are especially important when dealing with nonlinear problems, where it is rare that a solver returns an exact optimal solution. In the cut strengthening procedure, presented in the previous section, the solver tolerance will only affect the coefficients \(b_i\). If we can ensure that the solution of problem (NLP-i) is within an \(\epsilon\)-tolerance from the true optimal objective value, then the suboptimality can easily be handled by relaxing the cut, i.e., adding \(\epsilon\) to the right-hand side.

As a comparison, some other techniques to obtain strong cuts for convex MINLP problems use the minimum distance (separation) problem to generate cuts (Stubbs and Mehrotra 1999; Bonami et al. 2009; Trespalacios and Grossmann 2016). In these approaches, the minimizer of an NLP subproblem forms the coefficients of both the left- and right-hand side of the cut. For these cuts, it is important to obtain a high optimality accuracy in the variable space, since it affects both the angle and level of the cut. Issues with numerical tolerances can be reduced or effectively eliminated, e.g., by post-processing the cut and optimizing over each term in the disjunction to determine a valid right-hand side, but this comes at a significant computational expense. However, since both the coefficients on the left- and right-hand side are optimized, this approach is not limited to a specific cut but can basically generate any supporting hyperplane to the convex hull of the disjunction. Generating cuts by solving the separation problem can, therefore, result in stronger cuts than the cut strengthening procedure which is limited by the structure of the original cut.

In the cut strengthening procedure, we optimize over each term of a disjunction in problem (10) separately. This allows us to obtain stronger cuts and identify infeasible partial integer assignments, as described in Sect. 3. In an efficient implementation, the individual problems given by (NLP-i) can be solved in parallel since they are completely independent. This approach also has computational advantages, since the convex hull formulation, and the perspective function in particular, comes with numerical challenges. There are formulations to avoid division by zero (Sawaya 2006), and for some types of problems the convex hull is second-order cone representable, which can be handled more efficiently (Ben-Tal and Nemirovski 2001). However, if some of the partial integer assignments are infeasible, it can cause difficulties for solvers since the convex hull formulation of problem (10) will then have an empty interior even though the problem is feasible. Such issues can be eliminated by analyzing each term of the disjunction in a pre-processing step and eliminating infeasible terms, but this also comes at a significant computational expense.

As previously mentioned, our cut strengthening approach is limited to a specific cut and, therefore, it may result in a weaker cut compared to generating the cut from solving a separation problem. The main advantage of our cut strengthening approach is that the cut is obtained by solving several smaller independent convex problems, compared to solving the larger separation problem. Therefore, the trade-off of our cut strengthening approach is a reduced computational complexity at the expense of a possibly weaker cut.

5 Computational setup

To compare the cuts and to show the advantage of the cut strengthening, we have included a numerical study where we compare the ESH algorithm with and without the cut strengthening techniques. These are preliminary results and are mainly intended as a proof of concept. To focus on the effects of cut strengthening, we apply the techniques to a basic implementation of the ESH algorithm. As shown by Lundell et al. (2016, 2020), several other techniques can be combined with the algorithm to improve the computational performance, such as early MILP termination and multiple cut generation strategies. Before presenting the results, we give a more detailed description of the computational setup.

5.1 A convex MINLP algorithm

To solve the MINLP problems we will use the ESH algorithm, which was briefly presented in Sect. 2. In each iteration, we use the cut strengthening algorithm from Sect. 4 to strengthen the cut generated by the ESH algorithm. It is known that the basic ESH algorithm tends to only generate a single cut per iteration (Kronqvist et al. 2016; Lundell et al. 2017). However, in some iterations the root-search can result in a point where multiple constraints are active, resulting in multiple cuts. Here, we will only strengthen one cut per iteration. If we obtain multiple cuts in an iteration, then we randomly pick one of them for the strengthening procedure. We do not use the LP-preprocessing from Kronqvist et al. (2016), which simplifies the algorithm and allows us to better focus on the effect of the cut strengthening.

Besides the basic ESH algorithm, we only include two simple primal heuristics that have proven to be effective within this framework (Kronqvist et al. 2016; Lundell et al. 2018). Without any primal heuristics, the ESH algorithm will generally not obtain feasible solutions during the solution procedure, making it difficult to terminate based on the optimality gap. Therefore, the primal heuristics are an important enhancement to the ESH algorithm and are practically needed within a solver. From the numerical tests, we also noticed that feasible solutions improve the cuts and help to identify non-optimal partial integer assignments. The primal heuristics we use here are checking the alternative solutions in the MILP solver’s solution pool, and fixing the integer assignment in the MINLP problem and solving the resulting convex NLP problem every fourth iteration. The primal heuristics are summarized as pseudo-code in Algorithm 2.

Algorithm 2 (primal heuristics pseudo-code)
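A minimal Python sketch of Algorithm 2 is given below, assuming callables for the MILP solution pool, feasibility checking against (MINLP), and the fix-and-solve NLP step; all interfaces are placeholders for the subsolver calls described in Sect. 5.2.

```python
def primal_heuristics(k, x_k, c, milp_pool, is_feasible, fix_and_solve_nlp,
                      best_x, ub):
    """Update the incumbent (best_x, ub) using the two primal heuristics."""
    obj = lambda x: float(c @ x)            # linear objective c^T x
    for x in milp_pool:                     # heuristic 1: MILP solution pool
        if is_feasible(x) and obj(x) < ub:
            best_x, ub = x, obj(x)
    if k % 4 == 0:                          # heuristic 2: every 4th iteration
        x_nlp = fix_and_solve_nlp(x_k)      # fix integers, solve convex NLP
        if x_nlp is not None and obj(x_nlp) < ub:
            best_x, ub = x_nlp, obj(x_nlp)
    return best_x, ub
```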

For more details on heuristics in combination with the ESH algorithm, see Lundell et al. (2020). For a summary of different primal heuristics see, for example, Berthold (2014) and D’Ambrosio et al. (2012).

As a termination criterion we use the relative optimality gap defined as

$$\begin{aligned} {\text {gap}}=\frac{{\text {ub}}-{\text {lb}}}{|{\text {ub}}|+10^{-10}}, \end{aligned}$$
(20)

where ub and lb are upper and lower bounds on the optimal objective value of the MINLP problem. Here, ub is given by the best found feasible solution and lb is given by \(\mathbf {c}^\top \mathbf {x}^{k}\), where \(\mathbf {x}^{k}\) is given by problem (MILP-r). We consider the MINLP problem as solved when the relative gap is reduced to \(10^{-3}\), thus proving that the best found solution is within \(0.1\%\) of the global optimum. The method used for solving the MINLP problems is summarized as pseudo-code in Algorithm 3.

Algorithm 3 (convex MINLP algorithm pseudo-code)
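The following Python sketch summarizes the main loop of Algorithm 3; each callable wraps one of the subproblems described earlier (problem (MILP-r), the supporting hyperplane generation of Sect. 2.2, Algorithm 1, and Algorithm 2), and the interfaces are assumptions made for illustration.

```python
def esh_with_strengthening(solve_milp_r, generate_esh_cut, strengthen,
                           primal, gap_tol=1e-3, max_iter=1000):
    """ESH algorithm with cut strengthening and primal heuristics."""
    cuts, best_x, ub, lb = [], None, float("inf"), -float("inf")
    for k in range(1, max_iter + 1):
        x_k, lb = solve_milp_r(cuts)             # problem (MILP-r): lower bound
        best_x, ub = primal(k, x_k, best_x, ub)  # Algorithm 2
        gap = (ub - lb) / (abs(ub) + 1e-10)      # relative gap, Eq. (20)
        if gap <= gap_tol:                       # terminate at 0.1 % by default
            break
        cut = generate_esh_cut(x_k)              # supporting hyperplane, Eq. (7)
        cuts.append(strengthen(cut))             # Algorithm 1 (ST or MT)
    return best_x, ub, lb
```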

5.2 Implementation and hardware

For the numerical comparison, we use a simple implementation of the ESH algorithm utilizing IPOPT 3.12.9 (Wächter and Biegler 2006) and Gurobi 8.1 (Gurobi 2019) as subsolvers for the NLP and MILP subproblems. For reading and parsing the MINLP problems, we use the open-source MATLAB toolbox OPTI Toolbox (Currie and Wilson 2012). In the current implementation, we are not able to run the cut strengthening NLP subproblems in parallel, which could significantly speed up the cut strengthening. However, the computational results in the following section still clearly show an advantage of the cut strengthening, both in terms of total computational time and in the number of iterations.

The numerical comparisons are performed on a basic desktop computer with an Intel i7-7700k processor, 16 GB RAM, and Windows 10. For the subsolvers, we use default settings, except that we allow Gurobi to run on 8 threads (by changing the Threads parameter), which speeds up the solution of the MILP subproblems. The root-search in the approximative projection is done to a tolerance of \(10^{-16}\) in the \(\lambda\)-variable.

6 Numerical results

To test the efficiency of the cut strengthening, we apply the simple implementation of the ESH algorithm, described in Algorithm 3, to a set of test problems. The ESH algorithm forms the baseline for the numerical comparison, and we compare how the cut strengthening techniques affect the number of iterations and solution times. As already mentioned, the results are mainly intended as a proof of concept to show the impact of the cut strengthening. By using techniques such as early stopping (Lundell et al. 2018) and running the cut tightening NLP problems in parallel it would be possible to significantly reduce the solution times.

For the numerical tests we have chosen convex MINLP instances from MINLPLib (MINLPLib 2020) containing at least one exclusive selection constraint. The cut strengthening is mainly intended to strengthen the linear approximation of the nonlinear constraints, and it is expected to be most beneficial for problems containing the big-M formulation of disjunctions containing nonlinear constraints. The disjunctions are identified through the exclusive selection constraints, and the cuts are strengthened through a tighter representation of the disjunctions. If nonlinear disjunctions are represented by the convex hull formulation, our approach will not necessarily be able to tighten the relaxation. For example, if a nonlinear disjunction is represented by the convex hull, then the ESH algorithm can generate supporting hyperplanes to the convex hull of the disjunction and the ST-strategy will not be able to change such cuts. The MT-strategy could still give a tighter approximation for integer feasible solutions, as it may cut off parts of the convex hull for some integer values as shown in Fig. 4. Since the cut strengthening is an expensive operation, it is better suited for problems with the big-M formulation as the impact will be more significant and the subproblems are smaller. Therefore, we focus on problems where disjunctions, with either linear or nonlinear constraints, are represented by the big-M formulation. The problems we consider from MINLPLib are different versions of the problems clay, flay, slay, sssd, and tls. These problems represent optimization tasks such as trimloss problems (Harjunkoski et al. 1998), optimal placement tasks (Sawaya 2006) and service systems design (Elhedhli 2006). We also consider a problem called stockcycle (Silver and Moon 1999), which is known to be difficult to solve without any reformulations (Kronqvist et al. 2018c). Furthermore, we also consider a class of test problems called p_ball, that are described in the Appendix. The p_ball instances contain several relatively large nonlinear disjunctions, and are designed to be challenging due to both the nonlinearity and the combinatorial aspects. We use a 2 h time limit for all the problems except for stockcycle, where we use a time limit of 96 h.

The results are presented in Table 1, showing both the number of iterations and the time needed to solve each problem. The table shows that both cut strengthening techniques can significantly reduce the number of iterations needed to solve the problems. However, the table shows a clear advantage of the multi tightening (MT) strategy. This result aligns well with the theory, since cut (13) used in the MT strategy can dominate the cuts used by the single tightening (ST) strategy. On average, the ST strategy reduces the number of iterations by a factor of 1.5, and the MT strategy gives a further reduction by a factor of 2.9 on average. Both cut strengthening strategies give a significant reduction in solution times, but the MT strategy has a clear advantage and is faster by a factor of 2 compared to the ST strategy. The performance in terms of speed is illustrated in Fig. 5, which shows the performance profiles of the different strategies. From the figure, it can be observed that the MT strategy gives a great advantage for the more challenging problems.

Table 1 The table shows the solution times in seconds and the number of iterations needed to solve (to a relative gap \(\le 0.1\%\)) the MINLP instances with different cut strategies
Fig. 5

Solution profiles for the ESH algorithm and the ESH algorithm with the cut strengthening techniques. The graphs show the number of instances solved as a function of time. The test set has 43 instances (clay*, flay*, p_ball*, slay*, sssd*, stockcycle*). An instance is considered solved when it reaches a relative gap of less than 0.1%

As shown in Table 1, the cut strengthening is especially powerful for the clay and p_ball problems. These problems contain nonlinear disjunctions that are represented by the big-M formulations, giving weak continuous relaxations that can be efficiently strengthened by the cut strengthening technique. For the p_ball problems, the MT cut strengthening reduces the number of iterations by a factor of 11.7 on average. Without the cut strengthening the larger p_ball problems are practically intractable with the ESH algorithm, and the optimality gap remained large after 2 h.

As previously mentioned, the cut tightening comes at a computational cost of solving convex NLP subproblems. For example, problem p_ball_40b_5p_4d contains nonlinear disjunctions of size 40, which results in 40 subproblems for the cut tightening per iteration. Solving these subproblems accumulates to about 35% of the total solution time. However, this is well compensated for by the great reduction in the number of iterations.

It is worth mentioning that the cut strengthening techniques do not necessarily result in computationally more demanding iterations. For example, the average iteration time for slay10m is 10.6 seconds with the ESH strategy and less than 2 seconds with both the ST and MT strategies. There are two reasons behind the significantly faster iterations. First, the strengthened cuts can result in a tighter continuous relaxation, making the MILP relaxations easier to solve. But more importantly, the cut strengthening procedure can sometimes identify infeasible or non-optimal integer assignments during the solution procedure, see Sect. 3 for details. For slay10m, the cut strengthening is able to fix 53 of the binary variables to zero. Similarly, the cut strengthening eliminates 299 of the 432 binary variables in stockcycle. By further studying these problems, we found that the binary variables fixed by the cut tightening cannot trivially be removed, e.g., by performing LP-based bounds tightening. Some of the integer assignments immediately rendered problem (NLP-i) infeasible in the cut tightening, and some became infeasible due to bounds on the objective and the accumulation of strengthened cuts. The ability to identify infeasible or non-optimal integer assignments can greatly reduce the complexity of the MINLP problems and comes as a desirable side effect of the cut tightening.

Table 2 The table shows the average number of iterations and solution times (to a relative gap \(\le 0.1\%\)) for the rsyn and syn problems

MINLPLib also contains a large number of problems called syn and rsyn (Türkay and Grossmann 1996; Sawaya 2006). These problems do have a disjunctive structure, although mainly involving linear constraints. These problems are all easy to solve, and on average they require less than 10 iterations and 3 seconds with the ESH algorithm. There are in total 24 rsyn and 24 syn problems with the big-M formulation. For these problems, the cut strengthening did not provide any significant advantages. The average times and number of iterations for these two problem types are shown in Table 2.

For the rsyn and syn instances the cut tightening procedure has little effect on the cuts and does not result in fewer iterations. In these problems the nonlinear constraints only contain three variables, and there is only a single nonlinear variable in each constraint. It is possible that the ESH cuts for these constraints are already tight with respect to the disjunctions, which would explain why the strengthening has little effect for these specific problems.

The cut strengthening seems to be most efficient for problems that contain disjunctions with nonlinear constraints, e.g., the clay and p_ball problems. Some aspects of why the multi tightening strategy works particularly well for these problems are described in the next section. For problems with nonlinear disjunctions, the choice of which exclusive selection constraint to perform the strengthening on is also straightforward, since binary variables will be present in the nonlinear constraints. The cut strengthening also performed well on the problems slay, sssd, and stockcycle, where there are only disjunctions with linear constraints.

6.1 Comparing strong problem formulations and cut strengthening

These results show that the strengthening procedure can give a great advantage for problems where disjunctions are represented by big-M constraints. To further analyze the cut strengthening procedure, we compare it with applying the basic ESH algorithm to the same MINLP instances in a convex hull form, where all or some disjunctions are represented by the convex hull formulation. For this test, we use all problems from the previous section that are available in both a big-M and a convex hull form. The results are presented in Table 3. Here we only use the multi tightening strategy, since it results in stronger cuts than the single tightening at the same computational cost.

For nonlinear disjunctions represented by a convex hull formulation, the ESH algorithm can generate supporting hyperplanes to the convex hull of the disjunction. Therefore, applying the ESH algorithm to MINLP instances where nonlinear disjunctions are represented by the convex hull can result in significantly tighter cuts compared to cuts obtained from big-M constraints. This can be seen from the results in Table 3, which shows that the ESH algorithm requires fewer iterations for most of the problems in the convex hull form. The ESH algorithm is still faster on some of the problems in big-M form, which is most likely due to the smaller subproblems.

Table 3 The table shows the solution times in seconds and the number of iterations needed with the ESH algorithm to solve (to a relative gap \(\le 0.1\%\)) the MINLP instances in both the big-M and convex hull forms. The table also shows the solution times and number of iterations for the ESH algorithm combined with the multi tightening strategy applied to the big-M formulation

It is important to notice that the cut strengthening procedure will not necessarily result in similar cuts as applying the ESH algorithm to the convex hull formulation of the problem. This is well illustrated by problem (EX1), where the single tightening strategy does not result in a supporting hyperplane to the convex hull of the disjunction, as illustrated in Fig. 2. The multi tightening strategy forms a supporting hyperplane to the convex hull of the disjunction, but it still behaves differently compared to a cut obtained by applying the ESH algorithm to the convex hull form of the problem. Figure 4 shows that the multi tightened cut not only forms a supporting hyperplane to the convex hull of the disjunction, but for each feasible integer assignment it also forms a supporting hyperplane to the corresponding term of the disjunction. For problem (EX1), a single multi tightened cut effectively acts as a supporting hyperplane to three different nonlinear constraints for each feasible integer assignment. The multi tightened cuts behave similarly for the p_ball instances, where each disjunction corresponds to assigning a point to one of the balls. For a feasible integer assignment, a cut obtained by multi tightening will then act as a supporting hyperplane to each ball for one of the points. For example, for the problem p_ball_40b_5p_3d a multi tightened cut effectively behaves as a tight cut for 40 different nonlinear constraints. This behaviour can make the multi tightened cuts especially powerful for problems with nonlinear disjunctions, which is also shown by the results in Table 3.

Only the p_ball and clay instances contain nonlinear disjunctions, and for most of these problems the multi tightening strategy significantly reduces both solution times and the number of iterations. For problems with only linear disjunctions, the multi tightening strategy does not necessarily give the same advantage. However, the multi tightening strategy also performed well on the test problems with only linear disjunctions. On average, the multi tightening strategy reduces the number of iterations by a factor of 7.2 compared to the ESH algorithm with the big-M formulation, and by a factor of 1.5 compared to the ESH algorithm with the convex hull formulation of the problems. In terms of total solution time, the multi tightening strategy reduces the total solution time by more than a factor of 3 on average compared to the other two approaches.

7 Conclusions

In this paper, we have presented a new framework for strengthening cuts to obtain tighter outer approximations for convex MINLP. The cut strengthening is based on analyzing disjunctive structures in the MINLP problem, and it either strengthens the cut over the entire disjunction or separately for each term of the disjunction. We have proven that the strengthening results in valid cuts that can dominate the original cut. The numerical results show that the strengthening can greatly reduce the number of iterations and the time needed to solve convex MINLP problems. We have focused on strengthening cuts derived from the ESH algorithm, but the same techniques can just as well be used to strengthen cuts obtained by OA, ECP or generalized Benders decomposition.