Abstract
The problem of minimizing the difference of two convex functions is called a polyhedral d.c. optimization problem if at least one of the two component functions is polyhedral. We characterize the existence of global optimal solutions of polyhedral d.c. optimization problems. This result is used to show that, whenever the existence of an optimal solution can be certified, polyhedral d.c. optimization problems can be solved by certain concave minimization algorithms. No further assumptions are necessary if the first component is polyhedral, and only mild assumptions on the first component are required if the second component is polyhedral. If both component functions are polyhedral, we obtain primal and dual existence tests and primal and dual solution procedures. Numerical examples are discussed.
1 Introduction
D.c. programming and concave minimization are known to be closely related problems, see e.g. [16]. Theory and methods for concave minimization are surveyed, for instance, in [2]. An overview of the field of d.c. programming is given, for example, in [8, 17].
Let \(g :\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) and \(h :\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) be convex functions, where one of the functions g or h is assumed to be polyhedral, i.e., the epigraph of the respective function is a convex polyhedron. We consider the polyhedral d.c. optimization problem
\[ \min _{x \in \mathbb {R}^n}\; g(x)-h(x). \tag{DC} \]
This problem will be transformed into a concave minimization problem under linear constraints. The general form of such a problem is as follows. For a concave function \(f:\mathbb {R}^k \rightarrow \mathbb {R}\cup \{-\infty \}\) we consider
\[ \min _{y \in \mathbb {R}^k}\; f(y) \quad \text {subject to} \quad y \in P, \]
where the feasible set P is an arbitrary polyhedron. For the reformulation of (DC) we choose the concave objective function
\[ f(x,r) := r - h(x), \tag{2} \]
with the convention \(f(x,r) = -\infty \) whenever \(h(x) = \infty \).
The feasible set P is the epigraph of g. The concave minimization problem associated to (DC),
\[ \min _{(x,r) \in \mathbb {R}^n \times \mathbb {R}}\; f(x,r) \quad \text {subject to} \quad (x,r) \in \mathrm{epi~}g, \tag{ConcMin} \]
is equivalent to (DC) in the following sense: x is an optimal solution of (DC) if and only if \((x,g(x))\) is an optimal solution of (ConcMin); moreover, every optimal solution \((x,r)\) of (ConcMin) satisfies \(r = g(x)\), and the optimal values of the two problems coincide.
In the literature on concave minimization, many authors assume a compact feasible set in order to guarantee the existence of optimal solutions, see e.g. [1, 3, 12]. However, problem (ConcMin) always has a non-compact feasible set. In [4], algorithms for concave (even quasi-concave) minimization based on a modification of methods for vector linear programming (VLP) are studied. An implementation of these methods based on the VLP solver bensolve [10, 11] is provided by the Octave/Matlab package bensolve tools [5, 6]. This approach allows non-compact feasible sets but requires certain other assumptions.
While the reformulation of (DC) as (ConcMin) is straightforward, our research focuses on the assumptions which are required for the solution methods. It turns out that verifying or disproving the existence of optimal solutions of (DC) is the crucial task here. For the case where g is polyhedral, we prove that whenever an optimal solution of (DC) exists, it can be computed by solving the associated problem (ConcMin) using the methods of [4]. In the case where h is polyhedral, the same applies to the dual problem of (DC). Under mild assumptions, an optimal solution of (DC) can be obtained from an optimal solution of the dual problem.
This article is organized as follows. In Sect. 2 we characterize the existence of optimal solutions for polyhedral d.c. programs. Section 3 explains how a polyhedral d.c. program can be solved using a (quasi-)concave minimization solver like bensolve tools after the existence of optimal solutions has been verified. The last section presents two numerical examples for the case where both g and h are polyhedral. Both existence tests and solution methods are addressed.
We use the following notation. The domain of a function \(f: \mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) is defined by \(\mathrm{dom~}f :=\{x \in \mathbb {R}^n \mid f(x) < \infty \}\) and the epigraph of f is the set \(\mathrm{epi~}f :=\{(x,r) \in \mathbb {R}^n\times \mathbb {R}\mid r \ge f(x)\}\). A convex function f is called closed if \(\mathrm{epi~}f\) is a closed set. We write \(\mathbb {R}^n_+\) for the set of vectors with non-negative components. The recession cone\(0^+C\) of a convex set \(C \subseteq \mathbb {R}^n\) is the set of all y with \(C + \{y\} \subseteq C\). The lineality space of C is the set \(\mathrm{lineal}(C):=0^+C\cap (-0^+C)\).
2 Existence of global optimal solutions
In this section we discuss the existence of optimal solutions of problem (DC) for the following three cases: (1) g being polyhedral, (2) h being polyhedral, (3) both g and h being polyhedral.
2.1 The case of g being polyhedral
The following characterization of the existence of optimal solutions is the main result of this article.
Theorem 1
Problem (DC) with g being polyhedral has an optimal solution if and only if the following three properties hold:
-
(i)
\(\mathrm{dom~}g \ne \emptyset \),
-
(ii)
\(\mathrm{dom~}g \subseteq \mathrm{dom~}h\),
-
(iii)
\(0^+\mathrm{epi~}g \subseteq 0^{+}(\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R}))\).
Proof
Let (DC) have an optimal solution \(x^0\). Since \(x^0 \in \mathrm{dom~}g\), (i) is satisfied. Let \(x \in \mathrm{dom~}g\), then \(-h(x) \ge g(x^0)-h(x^0)-g(x) > -\infty \) and hence \(x \in \mathrm{dom~}h\), i.e., (ii) holds. Assume (iii) is violated, that is, \(0^+\mathrm{epi~}g \nsubseteq 0^+(\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R}))\). Then we can choose \((d,s) \in 0^+ \mathrm{epi~}g\setminus 0^{+}(\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R}))\). By the definition of the recession cone, there exists some \((x,t) \in \mathrm{epi~}h\cap (\mathrm{dom~}g \times \mathbb R)\) such that \((x,t)+\alpha (d,s) \notin \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R})\) for some \(\alpha > 0\). Without loss of generality we can set \(\alpha = 1\). Since \(t \ge h(x)\) we get \((x,h(x))+ (d,s) \notin \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R})\). We find some \(r \in \mathbb {R}\) such that \((x,r) \in \mathrm{epi~}g\).
Case 1 If \((x,h(x))+(d,s) \notin \mathrm{dom~}g \times \mathbb {R}\), then \((x,r)+ (d,s) \notin \mathrm{dom~}g \times \mathbb {R}\) and hence \((d,s) \not \in 0^+\mathrm{epi~}g\), a contradiction.
Case 2 If \((x,h(x))+ (d,s) \notin \mathrm{epi~}h\), then
\[ h(x) + s < h(x+d) \]
by definition of the epigraph. We consider problem (ConcMin) which is equivalent to (DC) as discussed above. For its objective function f defined in (2) we obtain
\[ f(x+d,r+s) = r+s-h(x+d) < r-h(x) = f(x,r). \tag{4} \]
For any \(n \in \mathbb {N}\), \((x+nd,r+ns)\) is feasible for (ConcMin). But \(f(x+nd,r+ns)\) tends to \(-\infty \), by concavity of f and (4). This contradicts the assumption that (DC) has an optimal solution. Thus, (iii) is satisfied.
Assume now that (i), (ii) and (iii) hold. By (i), \(\mathrm{epi~}g\) is non-empty. Let \((x,r) \in \mathrm{epi~}g\) and \((d,s) \in 0^+ \mathrm{epi~}g\). Then \((x,r)+\alpha (d,s) \in \mathrm{epi~}g\) for all \(\alpha > 0\). By (ii), we have \((x,h(x)) \in \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R})\). By (iii) we obtain \((d,s) \in 0^{+}(\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R}))\). Thus, \((x,h(x))+\alpha (d,s) \in \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R})\) for all \(\alpha > 0\). From the definition of the epigraph, we obtain \(h(x) + \alpha s \ge h(x+\alpha d)\). For the objective function f of problem (ConcMin), this implies
\[ f((x,r)+\alpha (d,s)) \ge f(x,r) \quad \text {for all } \alpha \ge 0. \tag{5} \]
Since \(\mathrm{epi~}g\) is a polyhedron, it can be expressed by a polytope Q as \(\mathrm{epi~}g = Q + 0^+ \mathrm{epi~}g\). By (ii), the values of f are finite over Q. The polytope Q is the convex hull of its vertices. Concavity of f implies that f attains its minimum at a vertex \((x^0,r^0)\) of Q. Since (5) holds for every \((x,r) \in Q\) and every \((d,s) \in 0^+ \mathrm{epi~}g\), \((x^0,r^0)\) is an optimal solution of (ConcMin) and hence \(x^0\) is an optimal solution of (DC). \(\square \)
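The vertex argument in the second part of the proof can be illustrated numerically. In the following sketch (the functions are our illustrative choices, not taken from the text), g is a one-dimensional max-affine function and h is a convex function whose asymptotic slopes are dominated by those of g, so conditions (i)–(iii) hold; the minimum of \(g-h\) is then attained at a breakpoint of g, i.e., at a vertex of \(\mathrm{epi~}g\):

```python
import itertools
import numpy as np

# g(x) = max_i (a[i]*x + b[i]) is polyhedral; h(x) = 0.5*|x| is convex with
# asymptotic slopes +-0.5, dominated by those of g, so (i)-(iii) hold.
a = np.array([-2.0, 0.5, 1.0])
b = np.array([0.0, 1.0, 0.5])

def g(x): return np.max(a * x + b)
def h(x): return 0.5 * abs(x)

# Vertices of epi g are the breakpoints of g: intersection points of two
# affine pieces at which the maximum is attained.
breakpoints = []
for i, j in itertools.combinations(range(len(a)), 2):
    if a[i] != a[j]:
        x = (b[j] - b[i]) / (a[i] - a[j])
        if np.isclose(a[i] * x + b[i], g(x)):
            breakpoints.append(x)

vertex_min = min(g(x) - h(x) for x in breakpoints)
grid_min = min(g(x) - h(x) for x in np.linspace(-50.0, 50.0, 200001))
print(vertex_min, grid_min)  # the two minima agree
```

Here the minimum over a fine grid agrees with the minimum over the two breakpoints of g, in line with the proof.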
If g is not polyhedral, the conditions (i), (ii) and (iii) in Theorem 1 are still necessary for the existence of optimal solutions. This can be seen in the first part of the proof, where the assumption of g being polyhedral was not used. However, the conditions are no longer sufficient, not even if h is polyhedral, as the following simple example shows.
Example 2
Let \(g,h:\mathbb {R}\rightarrow \mathbb {R}\cup \{\infty \}\) be defined as
\[ g(x) := {\left\{ \begin{array}{ll} -\ln (1+x) & \text {if } x \ge 0,\\ \infty & \text {otherwise,} \end{array}\right. } \qquad h(x) := {\left\{ \begin{array}{ll} 0 & \text {if } x \ge 0,\\ \infty & \text {otherwise.} \end{array}\right. } \]
Then (DC) is unbounded and thus has no optimal solution. We have \(\mathrm{dom~}g = \mathrm{dom~}h = \mathbb {R}_+\) and \(0^+ \mathrm{epi~}g = 0^+ \mathrm{epi~}h = \mathbb {R}^2_+\). Thus, the conditions (i), (ii) and (iii) of Theorem 1 are satisfied.
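Example 2 can be checked numerically with one concrete choice of functions having the stated properties (our choice, for illustration): \(g(x) = -\ln (1+x)\) and \(h(x) = 0\) for \(x \ge 0\), both \(\infty \) for \(x < 0\). Conditions (i)–(iii) hold, yet \(g-h\) decreases without bound:

```python
import math

def g(x):  # convex but not polyhedral; dom g = [0, inf)
    return -math.log1p(x) if x >= 0 else math.inf

def h(x):  # polyhedral; dom h = [0, inf)
    return 0.0 if x >= 0 else math.inf

# along the recession direction d = 1 the objective tends to -infinity
values = [g(10.0 ** k) - h(10.0 ** k) for k in range(7)]
print(values)  # strictly decreasing, unbounded below
```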
An extension to the non-polyhedral case requires further assumptions as discussed in the following remark.
Remark 3
The second part of the proof of Theorem 1 still works for non-polyhedral functions g if \(\mathrm{epi~}g\) is of the special form \(Q + 0^+ \mathrm{epi~}g\) for some compact set Q and if h is assumed to be upper semicontinuous. Then, by the Weierstrass theorem, the minimum of the objective function f of (ConcMin) is attained in Q.
Under certain assumptions, condition (iii) in Theorem 1 can be simplified. We start with two propositions and formulate this result as a corollary of Theorem 1.
Proposition 4
Let \(A,B \subseteq \mathbb {R}^n\) be non-empty convex sets with \(A \subseteq B\) and let B be closed. Then \(0^+A \subseteq 0^+B\).
Proof
This follows from [13, Theorem 8.3], which states that for a non-empty closed convex set B, \(d \in 0^+B\) if and only if there is some \(x \in B\) satisfying \(x+\alpha d \in B\) for all \(\alpha \ge 0\). Let \(d \in 0^+A\). By definition of the recession cone, we have \(x+\alpha d \in A\) for all \(x \in A\) and all \(\alpha \ge 0\). Since \(A \subseteq B\) and \(A \ne \emptyset \), we get \(d \in 0^+B\). \(\square \)
Proposition 5
Let \(0^+\mathrm{epi~}h=0^+\mathrm{cl\,}\mathrm{epi~}h\). Then condition (iii) in Theorem 1 is equivalent to
-
(iii’)
\(0^+ \mathrm{epi~}g \subseteq 0^+ \mathrm{epi~}h\).
Proof
Let (iii) be satisfied. Then
\[ 0^+\mathrm{epi~}g \subseteq 0^{+}(\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R})) \subseteq 0^+\mathrm{cl\,}\mathrm{epi~}h = 0^+\mathrm{epi~}h, \]
where the second inclusion follows from Proposition 4 and the equality holds by assumption.
Let (iii’) be satisfied. Let \((d,s) \in 0^+\mathrm{epi~}g\) and \((x,r) \in \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb R)\). By (iii’), we have \((x,r)+\alpha (d,s) \in \mathrm{epi~}h\) for all \(\alpha \ge 0\). Assuming that \((x,r)+\alpha (d,s) \notin \mathrm{dom~}g \times \mathbb R\), we get \((x,g(x))+\alpha (d,s) \notin \mathrm{dom~}g \times \mathbb R\), which contradicts the precondition \((d,s) \in 0^+\mathrm{epi~}g\). Consequently, \((x,r)+\alpha (d,s) \in \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb R)\) for all \(\alpha \ge 0\). \(\square \)
Corollary 6
Problem (DC) with g being polyhedral and h being closed has an optimal solution if and only if the following properties hold:
-
(i)
\(\mathrm{dom~}g \ne \emptyset \),
-
(ii)
\(\mathrm{dom~}g \subseteq \mathrm{dom~}h\),
-
(iii’)
\(0^+\mathrm{epi~}g \subseteq 0^+ \mathrm{epi~}h\).
Proof
If h is closed, we have \(\mathrm{epi~}h = \mathrm{cl\,}\mathrm{epi~}h\) and hence \(0^+\mathrm{epi~}h = 0^+\mathrm{cl\,}\mathrm{epi~}h\). Thus, the result follows from Proposition 5 and Theorem 1. \(\square \)
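For real-valued max-affine functions \(g(x) = \max _i ({a^i}^Tx + \alpha _i)\) and \(h(x) = \max _j ({c^j}^Tx + \beta _j)\), conditions (i) and (ii) hold trivially, and (iii') reduces to the recession function of h being dominated by that of g, i.e., every slope \(c^j\) lying in the convex hull of the slopes \(a^i\). This membership can be decided by a feasibility LP. A sketch (function names and the instance are ours):

```python
import numpy as np
from scipy.optimize import linprog

def in_conv_hull(point, points):
    """Feasibility LP: is `point` a convex combination of the rows of `points`?"""
    m = points.shape[0]
    A_eq = np.vstack([points.T, np.ones((1, m))])  # sum_i lam_i a^i = point, sum_i lam_i = 1
    b_eq = np.append(point, 1.0)
    res = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return res.status == 0

def existence_test(G_slopes, H_slopes):
    """(iii') for real-valued max-affine g, h: 0+epi g is contained in 0+epi h
    iff every slope of h lies in the convex hull of the slopes of g."""
    return all(in_conv_hull(c, G_slopes) for c in H_slopes)

# g(x) = max(|x1|, |x2|): slopes (+-1,0), (0,+-1);  h(x) = t*|x1|: slopes (+-t, 0)
G = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
H_ok = np.array([[0.5, 0.0], [-0.5, 0.0]])    # slopes inside the hull -> solution exists
H_bad = np.array([[2.0, 0.0], [-2.0, 0.0]])   # slopes outside        -> no solution
print(existence_test(G, H_ok), existence_test(G, H_bad))  # True False
```

The reduction uses that \(0^+\mathrm{epi~}g \subseteq 0^+\mathrm{epi~}h\) means \(h_\infty (d) \le g_\infty (d)\) for all d, and the recession function of a real-valued max-affine function is the support function of the convex hull of its slopes.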
The following example shows that condition (iii’) is not adequate if h is not assumed to be closed.
Example 7
Consider problem (DC) for the functions \(g, h:\mathbb {R}^2 \rightarrow \mathbb {R}\cup \{\infty \}\) with
Both g and h are convex and g is polyhedral. Both functions coincide on \(\mathrm{dom~}g=\mathbb {R}\times [1,\infty )\), whence (DC) has optimal solutions of the form (0, y) for \(y \ge 1\). The recession cones of the functions are
and
We see that \((1,0,1)^{T} \in 0^+\mathrm{epi~}g \setminus 0^+\mathrm{epi~}h\), i.e., (iii’) is violated.
2.2 The case of h being polyhedral
We consider the Toland–Singer dual problem of (DC), see [14, 15], that is,
\[ \min _{y \in \mathbb {R}^n}\; h^{*}(y)-g^{*}(y), \tag{DC\(^{*}\)} \]
where \(g^{*}(y) :=\sup _{x \in \mathbb {R}^{n}} [{y}^Tx - g(x)]\) is the conjugate of g and likewise for h. The duality theory by Toland and Singer states that the optimal objective values of (DC) and (DC\(^{*}\)) coincide under the assumption of h being closed.
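On small instances the duality relation can be verified by brute force, approximating the conjugates on a grid (an illustrative sketch with functions of our choosing, both polyhedral and closed):

```python
import numpy as np

g = lambda x: np.abs(x)        # polyhedral and closed
h = lambda x: np.abs(x - 1.0)  # polyhedral and closed

xs = np.linspace(-50.0, 50.0, 100001)
primal = np.min(g(xs) - h(xs))            # inf g - h

def conj(f, y):
    # grid approximation of the conjugate f*(y) = sup_x (y*x - f(x))
    return np.max(y * xs - f(xs))

ys = np.linspace(-1.0, 1.0, 2001)         # dom g* = dom h* = [-1, 1] here
dual = min(conj(h, y) - conj(g, y) for y in ys)
print(primal, dual)  # both equal -1
```

For this pair, \(g(x)-h(x) = |x| - |x-1|\) attains the value \(-1\) for all \(x \le 0\), and the dual objective \(h^*(y)-g^*(y) = y\) on \([-1,1]\) attains \(-1\) at \(y=-1\), so the optimal values coincide.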
Since \(g^{*}\) is convex and \(h^{*}\) is polyhedral convex, the existence result of Theorem 1 applies to problem (DC\(^{*}\)). The following result provides the relation between optimal solutions of (DC) and (DC\(^{*}\)). We denote by \(\partial f(x):=\{y \in \mathbb {R}^n \mid \forall z \in \mathbb {R}^n: f(z) \ge f(x) + {y}^T(z-x)\}\) the subdifferential of a convex function \(f:\mathbb {R}^n \rightarrow \mathbb {R}\cup \{ \infty \}\) at \(x \in \mathrm{dom~}f\). We set \(\partial f(x) :=\emptyset \) for \(x \not \in \mathrm{dom~}f\).
Proposition 8
(e.g. [8, Proposition 4.7] or [18, Proposition 3.20]) Let \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) and \(h:\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) be convex functions with non-empty domain. Then:
-
(i)
If x is an optimal solution of (DC), then each \(y \in \partial h(x)\) is an optimal solution of (DC\(^{*}\)).
If, in addition, g and h are closed, a dual statement holds:
-
(ii)
If y is an optimal solution of (DC\(^{*}\)), then each \(x \in \partial g^*(y)\) is an optimal solution of (DC).
Remark 9
As already mentioned in [9], the assumption of g and h being closed for statement (ii) of Proposition 8 is missing in [8, Proposition 4.7]. In [18, Proposition 3.20] the assumption of g being closed is required. Examples can be found in [9].
Proposition 10
(e.g. [13, Theorem 23.5]) Let \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) be a proper closed convex function. Then \(x \in \partial g^{*}(y)\) if and only if x is an optimal solution of
\[ \min _{x \in \mathbb {R}^n}\; g(x) - {y}^Tx. \tag{6} \]
Theorem 11
Let g be closed and let (6) have an optimal solution for every \(y \in \mathrm{dom~}g^*\). Let h be polyhedral. Then, problem (DC) has an optimal solution if and only if the following properties hold:
-
(i*)
\(\mathrm{dom~}h^{*} \ne \emptyset \),
-
(ii*)
\(\mathrm{dom~}h^{*} \subseteq \mathrm{dom~}g^{*}\),
-
(iii*)
\(0^+\mathrm{epi~}h^{*} \subseteq 0^+ \mathrm{epi~}g^{*}\).
Proof
Let (DC) have an optimal solution x. Since \(x \in \mathrm{dom~}h\) and h is polyhedral, there exists some \(y \in \partial h(x)\), see e.g. [13, Theorem 23.10]. Proposition 8 states that y is an optimal solution of (DC\(^{*}\)). Theorem 1 applied to (DC\(^{*}\)) yields the conditions (i*), (ii*) and (iii*).
Let the conditions (i*), (ii*) and (iii*) be satisfied. By Theorem 1 we obtain that (DC\(^{*}\)) has an optimal solution y. By assumption, (6) has an optimal solution x, which belongs to \(\partial g^*(y)\), by Proposition 10. Since g and h are closed, Proposition 8 yields that x is an optimal solution to (DC). \(\square \)
2.3 The case of both g and h being polyhedral
Combining the previous results we obtain the following statement.
Corollary 12
Let \(g :\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) and \(h :\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) be polyhedral convex functions. Then the following statements are equivalent:
-
(a)
Problem (DC) has an optimal solution.
-
(b)
Problem (DC\(^{*}\)) has an optimal solution.
-
(c)
The conditions (i), (ii), (iii’) in Corollary 6 are satisfied.
-
(d)
The conditions (i*), (ii*), (iii*) in Theorem 11 are satisfied.
Proof
By Corollary 6, (a) is equivalent to (c). Since g and h are polyhedral, the assumptions of Theorem 11 are satisfied. Indeed, for \(y \in \mathrm{dom~}g^*\), problem (6) has a finite optimal value and hence the minimum is attained, as g is polyhedral. Theorem 11 yields that (b) is equivalent to (d). By Proposition 8 and the fact that the subdifferential of a polyhedral function is non-empty at points of the domain of the function, we see that (a) is equivalent to (b). \(\square \)
3 Solution procedure
Let g be polyhedral in problem (DC). We solve (DC) by the following procedure. First we check whether or not an optimal solution of (DC) exists by using Theorem 1. If so, we solve the associated problem (ConcMin) by the solution methods of [4]. Using Theorem 1 again, we verify the following assumptions, which are required for the algorithms presented in [4]:
There exists a polyhedral convex pointed cone \(C \subseteq \mathbb {R}^k\) such that
-
(M)
f is C-monotone (i.e. \(y-x \in C\) implies \(f(x)\le f(y)\)),
-
(B)
P is C-bounded (i.e. \(0^+ P \subseteq C\)).
If (M) and (B) are satisfied for some polyhedral convex pointed cone \(C \subseteq \mathbb {R}^k\), then (ConcMin) has an optimal solution ([4, Corollary 6]). Moreover, under these assumptions the methods in [4] compute optimal solutions of (ConcMin), see [4, Algorithm 2, Theorem 16] for the primal algorithm, [4, Algorithm 4, Theorem 22] for the dual algorithm, and [4, Section 6] for the extension to the case where the interior of C is empty.
Theorem 13
Let problem (DC) with g being polyhedral have an optimal solution and let h be closed. Then, for the associated problem (ConcMin), assumptions (M) and (B) are satisfied for \(P = \mathrm{epi~}g\) and the polyhedral convex cone \(C = 0^+\mathrm{epi~}g\).
Proof
The set \(C=0^+\mathrm{epi~}g\) is a polyhedral convex cone. Obviously, (B) holds. It remains to show (M). Let \((x,r),(y,s) \in \mathbb {R}^n\times \mathbb {R}\) such that
\[ (y,s)-(x,r) \in C = 0^+\mathrm{epi~}g. \]
If \(x \notin \mathrm{dom~}h\), then \(f(x,r) = -\infty \le f(y,s)\). Otherwise, since h is closed and (DC) has an optimal solution, Corollary 6 (iii') yields \((y,s)-(x,r) \in 0^+\mathrm{epi~}h\), and hence
\[ (y,\, h(x)+s-r) = (x,h(x)) + ((y,s)-(x,r)) \in \mathrm{epi~}h. \]
By definition of \(\mathrm{epi~}h\), we obtain \(h(x)+s-r \ge h(y)\) and hence \(r-h(x) \le s-h(y)\), which proves (M). \(\square \)
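Property (M) can be sanity-checked by sampling: with \(g(x) = |x|\), the cone is \(C = 0^+\mathrm{epi~}g = \{(d,s) \mid s \ge |d|\}\), and for a closed convex h with \(0^+\mathrm{epi~}g \subseteq 0^+\mathrm{epi~}h\), such as \(h(x) = 0.5|x|\) (our illustrative choices), \(f(x,r) = r - h(x)\) does not decrease along any direction in C:

```python
import random

h = lambda x: 0.5 * abs(x)   # closed convex, recession slopes +-0.5
f = lambda x, r: r - h(x)    # concave objective of (ConcMin)

random.seed(0)
for _ in range(10000):
    x = random.uniform(-100.0, 100.0)
    r = random.uniform(-100.0, 100.0)
    d = random.uniform(-10.0, 10.0)
    s = abs(d) + random.uniform(0.0, 10.0)  # (d, s) in C = {(d, s) : s >= |d|}
    assert f(x, r) <= f(x + d, r + s) + 1e-12  # (M): f is C-monotone
print("C-monotonicity verified on 10000 samples")
```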
Remark 14
In the previous theorem, the assumption of h being closed can be omitted if the definition of f in (2) is replaced by
\[ f(x,r) := r - h\vert _{\mathrm{dom~}g}(x), \]
where \(h\vert _{\mathrm{dom~}g}\) is the function that coincides with h on \(\mathrm{dom~}g\) and is \(\infty \) elsewhere.
The proof is similar by using Theorem 1 (iii) instead of Corollary 6 (iii’).
The cone C in the previous theorem is not necessarily pointed, as required for the solution methods of [4], see above. However, pointedness can be achieved by a reformulation of problem (DC). Denote by L the lineality space of the convex function g, which is defined by
\[ L := \{d \in \mathbb {R}^n \mid \exists s \in \mathbb {R}: (d,s) \in \mathrm{lineal}(\mathrm{epi~}g)\}. \]
Let \(L^{\bot }\) be the orthogonal complement of L. For some fixed \(\bar{x} \in \mathrm{dom~}g\) we define
\[ \bar{g}(x) := {\left\{ \begin{array}{ll} g(x) & \text {if } x \in \bar{x} + L^{\bot },\\ \infty & \text {otherwise.} \end{array}\right. } \tag{8} \]
We denote by (\(\mathrm{{\overline{DC}}}\)) the polyhedral d.c. optimization problem (DC) where g is replaced by \(\bar{g}\).
Proposition 15
Let (DC) with g being polyhedral have an optimal solution. Then \(\mathrm (\overline{DC})\) has an optimal solution and every optimal solution of \(\mathrm (\overline{DC})\) is also an optimal solution of (DC).
Proof
We have \(\mathrm{dom~}\bar{g} \ne \emptyset \), \(\mathrm{dom~}\bar{g} \subseteq \mathrm{dom~}g\) and \(0^+ \mathrm{epi~}\bar{g} \subseteq 0^+ \mathrm{epi~}g\). Theorem 1 yields the first statement. Now let \(x^0\) be an optimal solution of the modified problem \(\mathrm (\overline{DC})\). The point \(x^0\) is feasible for (DC). Assume there is some \(\tilde{x} \in \mathrm{dom~}g\) such that \(g(\tilde{x})-h(\tilde{x}) < g(x^0)-h(x^0) = \bar{g}(x^0)-h(x^0)\). Define
\[ \hat{x} := \bar{x} + P_{L^{\bot }}(\tilde{x}-\bar{x}), \]
where \(P_{L^{\bot }}\) denotes the orthogonal projection onto \(L^{\bot }\).
We show that \(g(\tilde{x})-h( \tilde{x}) = \bar{g}(\hat{x})- h(\hat{x})\). Indeed, we have \({\hat{x}} - \tilde{x} \in L\), hence there is some \(r \in \mathbb {R}\) such that
\[ (\hat{x}-\tilde{x},\, r) \in \mathrm{lineal}(\mathrm{epi~}g). \]
From Theorem 1 (iii) we conclude that
\[ \mathrm{lineal}(\mathrm{epi~}g) \subseteq \mathrm{lineal}(\mathrm{epi~}(h\vert _{\mathrm{dom~}g})), \]
where \(h\vert _{\mathrm{dom~}g}\) is the function that coincides with h on \(\mathrm{dom~}g\) and is \(\infty \) elsewhere. From [13, Theorem 8.8] we conclude that
\[ g(x+\lambda (\hat{x}-\tilde{x})) = g(x) + \lambda r \]
for all \(x \in \mathbb {R}^n\) and all \(\lambda \in \mathbb {R}\). Likewise we get
\[ h(x+\lambda (\hat{x}-\tilde{x})) = h(x) + \lambda r \]
for all \(x \in \mathrm{dom~}g\) and all \(\lambda \in \mathbb {R}\). We obtain
\[ g(\hat{x}) - h(\hat{x}) = g(\tilde{x}) + r - (h(\tilde{x}) + r) = g(\tilde{x}) - h(\tilde{x}). \]
Since \(\hat{x} \in \mathrm{dom~}\bar{g}\), we have \(g(\hat{x})=\bar{g}(\hat{x})\). Together we have \(\bar{g}(\hat{x})- h(\hat{x}) < \bar{g}(x^0)-h(x^0)\) which contradicts the assumption that \(x^0\) is optimal for \(\mathrm (\overline{DC})\). \(\square \)
The following example shows that an optimal solution of (DC) is not necessarily an optimal solution of \(\mathrm (\overline{DC})\).
Example 16
Let \(g,h:\mathbb {R}\rightarrow \mathbb {R}\), \(g\equiv 0\) and \(h\equiv 0\). Then \(L=\mathbb {R}\) and \(L^\bot =\{0\}\). With \(\bar{x}=0\) we have \(\bar{g}(0)=0\) and \(\bar{g}(x)=\infty \) if \(x\ne 0\). Thus 0 is the only optimal solution of \(\mathrm (\overline{DC})\), but every \(x\in \mathbb {R}\) is an optimal solution of (DC).
Summarizing the results, we solve (DC) with g being polyhedral by the following procedure:
-
(1)
Check, using Theorem 1, whether (DC) has an optimal solution; if not, stop.
-
(2)
Determine \(\bar{x} \in \mathrm{dom~}g\) and \(L^\bot \) in order to define the function \(\bar{g}\) in (8).
-
(3)
Solve (ConcMin) with g replaced by \(\bar{g}\) using the methods of [4].
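For g in max-affine form, \(g(x) = \max _i ({a^i}^Tx + \alpha _i)\) with every piece active somewhere, step (2) is plain linear algebra: g is affine along d iff all slopes have the same inner product with d, so L is the null space of the matrix of slope differences and \(L^\bot \) is its row space. A sketch under this max-affine assumption (the instance is ours):

```python
import numpy as np

A = np.array([[1.0, 0.0], [-1.0, 0.0]])  # slopes of g(x) = |x1|, independent of x2
D = A - A[0]                             # rows a^i - a^1

# SVD splits R^n into the row space of D (= L_perp) and its null space (= L)
U, S, Vt = np.linalg.svd(D)
rank = int(np.sum(S > 1e-10))
L_perp = Vt[:rank]   # orthonormal basis of L_perp (here: the x1-axis)
L = Vt[rank:]        # orthonormal basis of the lineality space L (here: the x2-axis)

xbar = np.zeros(2)   # any point of dom g; here dom g = R^2
# g_bar in (8) restricts g to the affine subspace xbar + span(L_perp)
print(L_perp, L)
```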
In the case where h is polyhedral, we need to assume additionally that g is closed and (6) has an optimal solution for every \(y \in \mathrm{dom~}g^{*}\). Then we can check whether or not (DC) has an optimal solution by using Theorem 11. If so, by Theorem 1 we know that (DC\(^{*}\)) has an optimal solution. An optimal solution y of (DC\(^{*}\)) can be obtained by the same method (steps (2) and (3) of the above procedure) but applied to (DC\(^{*}\)) rather than (DC) (replace g by \(h^*\) and h by \(g^{*}\)). Finally, we solve (6), which provides an optimal solution of (DC).
If both g and h are polyhedral, (DC) can be solved by two different methods: We speak about the primal method in case we use the method where g is required to be polyhedral. The term dual method refers to the method where h is required to be polyhedral. Furthermore there are two different tests for existence of an optimal solution of (DC). The test in Corollary 12 (c) is referred to as primal existence test whereas (d) in Corollary 12 is called dual existence test.
4 Numerical results
We implemented the results of this article in Matlab 9.6 by using bensolve tools, version 2.3, see [5, 6]. The code and the test instances are available at http://tools.bensolve.org/dcsolve. By two (new) commands dcsolve and dcdsolve the user can run, respectively, the primal and dual method described in the previous section. The input arguments of both commands are two arbitrary polyhedral convex functions g and h in the usual format of bensolve tools. Both commands solve arbitrary polyhedral d.c. optimization problems (of small size) or certify that no optimal solution exists.
The following two numerical examples were run on a computer with an Intel® Core™ i5 CPU at 3.3 GHz.
Example 17
Let \(A \in \mathbb {R}^{n \times m}\) be a matrix and denote by \(a^i\) its columns. We define a polyhedral convex function \(f_A:\mathbb {R}^n \rightarrow \mathbb {R}\) by
\[ f_A(x) := \sum _{i=1}^{m} \Vert x - a^i\Vert _1, \]
where \(\Vert y\Vert _1 = \sum _{j=1}^n |y_j|\) denotes the sum norm of a vector y. Given two matrices \(G \in \mathbb {R}^{n\times m_G}\) and \(H \in \mathbb {R}^{n \times m_H}\) we consider the polyhedral d.c. optimization problem
\[ \min _{x \in \mathbb {R}^n}\; f_G(x) - f_H(x). \tag{11} \]
Problems of this type occur in locational analysis, see e.g. [9] and the references therein. In Fig. 1, numerical results are depicted for matrices G and H with components \(g_{ij}=\sin (i+j)\) and \(h_{ij}=\cos (i+j)\) (chosen to make the results easily reproducible, in contrast to random numbers). The recession cone of \(\mathrm{epi~}f_A\) is just the recession cone of \(\mathrm{epi~}(m \Vert \cdot \Vert _1)\) and \(\mathrm{dom~}f_A = \mathbb {R}^n\). Thus, by Corollary 6, a solution of (11) exists if and only if \(m_G \ge m_H\). Figure 1 (left) shows the run time of a numerical verification of this fact for some problem instances by checking the conditions of Corollary 6 (primal existence test) and Theorem 11 (dual existence test). Figure 1 (right) shows the run time of the primal and dual solution methods.
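For small instances, problem (11) can also be solved by inspection, which is useful for validating a solver. Reading \(f_A\) as the sum of \(\ell _1\)-distances to the columns of A, the objective separates across coordinates, and each one-dimensional piecewise-linear problem attains its minimum at one of the data coordinates whenever \(m_G \ge m_H\). A brute-force sketch (the helper names are ours):

```python
import numpy as np

def f(A, x):
    # f_A(x): sum of l1-distances from x to the columns of A
    return float(np.sum(np.abs(A - x[:, None])))

def solve_dc_location(G, H):
    """Brute-force solver for min f_G(x) - f_H(x).
    An optimal solution exists iff m_G >= m_H (Corollary 6); the objective
    separates by coordinate, and each one-dimensional piecewise-linear
    piece attains its minimum at a data coordinate."""
    if G.shape[1] < H.shape[1]:
        return None  # no optimal solution exists
    x = np.empty(G.shape[0])
    for k in range(G.shape[0]):
        cands = np.concatenate([G[k], H[k]])
        vals = [np.sum(np.abs(G[k] - t)) - np.sum(np.abs(H[k] - t)) for t in cands]
        x[k] = cands[int(np.argmin(vals))]
    return x

# tiny instance in the spirit of the paper: entries sin(i+j) and cos(i+j)
n, mG, mH = 2, 4, 3
G = np.array([[np.sin(i + j) for j in range(mG)] for i in range(n)])
H = np.array([[np.cos(i + j) for j in range(mH)] for i in range(n)])
x = solve_dc_location(G, H)
print(x, f(G, x) - f(H, x))
```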
The following example from [7] was solved in [9] using the VLP solver bensolve [10]. We implemented the algorithms of [9] with bensolve tools and compared them with our algorithms. While the methods of [9] compute all vertices of \(\mathrm{epi~}g\) in the primal algorithm and all vertices of \(\mathrm{epi~}h^*\) in the dual algorithm, we compute only part of these vertices by using the qcsolve command of bensolve tools. We observe better performance for the larger instances.
Example 18
Consider the polyhedral d.c. optimization problem (DC) with
and
One can easily verify that the all-one vector provides an optimal solution. In Fig. 2, the run time of the primal and dual solution method proposed in this article is compared to the primal and dual method of [9].
5 Summary
We characterized the existence of global optimal solutions of polyhedral d.c. optimization problems in Theorem 1, Theorem 11 and Corollary 12, depending on whether the first, the second, or both components of the objective function are polyhedral. We provided a solution procedure based on existence tests and a reformulation of the polyhedral d.c. optimization problem into a quasi-concave minimization problem. Numerical experiments were run for the case where both components of the objective function are polyhedral.
References
Benson, H.P.: A finite algorithm for concave minimization over a polyhedron. Naval Res. Logist. Q. 32(1), 165–177 (1985)
Benson, H.P.: Concave minimization: theory, applications and algorithms. In: Horst, R., Pardalos, P.M. (eds.) Handbook of Global Optimization, pp. 43–148. Springer, Boston (1995)
Chinchuluun, A., Pardalos, P.M., Enkhbat, R.: Global minimization algorithms for concave quadratic programming problems. Optimization 54(6), 627–639 (2005)
Ciripoi, D., Löhne, A., Weißing, B.: A vector linear programming approach for certain global optimization problems. J. Glob. Optim. 72(2), 347–372 (2018)
Ciripoi, D., Löhne, A., Weißing, B.: Calculus of convex polyhedra and polyhedral convex functions by utilizing a multiple objective linear programming solver. Optimization 68(10), 2039–2054 (2018)
Ciripoi, D., Löhne, A., Weißing, B.: Bensolve tools, version 1.3, (2019). Gnu Octave / Matlab toolbox for calculus of convex polyhedra, calculus of polyhedral convex functions, global optimization, vector linear programming, http://tools.bensolve.org. Accessed 25 May 2019
Ferrer, A., Bagirov, A., Beliakov, G.: Solving dc programs using the cutting angle method. J. Global Optim. 61(1), 71–89 (2014)
Horst, R., Thoai, N.V.: DC programming: overview. J. Optim. Theory Appl. 103(1), 1–43 (1999)
Löhne, A., Wagner, A.: Solving DC programs with a polyhedral component utilizing a multiple objective linear programming solver. J. Global Optim. 69(2), 369–385 (2017)
Löhne, A., Weißing, B.: The vector linear program solver bensolve—notes on theoretical background. Eur. J. Oper. Res. 260(3), 807–813 (2017)
Löhne, A., Weißing, B.: Bensolve, version 2.1.0, A free vector linear program solver, (2017). http://bensolve.org. Accessed 25 May 2019
Pardalos, P.M., Rosen, J.B.: Methods for global concave minimization: a bibliographic survey. SIAM Rev. 28(3), 367–379 (1986)
Rockafellar, R.T.: Convex Analysis. Princeton Mathematical Series, vol. 28. Princeton University Press, Princeton (1970)
Singer, I.: A Fenchel–Rockafellar type duality theorem for maximization. Bull. Aust. Math. Soc. 20(2), 193–198 (1979)
Toland, J.F.: Duality in nonconvex optimization. J. Math. Anal. Appl. 66(2), 399–415 (1978)
Tuy, H.: Global minimization of a difference of two convex functions. In: Cornet, B., Nguyen, V.H., Vial, J.P. (eds.) Nonlinear Analysis and Optimization, pp. 150–182. Springer, Berlin (1987)
Tuy, H.: D.c. optimization: theory, methods and algorithms. In: Horst, R., Pardalos, P.M. (eds.) Handbook of Global Optimization, pp. 149–216. Springer, Boston (1995)
Tuy, H.: Convex Analysis and Global Optimization. Nonconvex Optimization and its Applications, vol. 22. Kluwer Academic Publishers, Dordrecht (1998)
Acknowledgements
Open Access funding provided by Projekt DEAL.
Cite this article
vom Dahl, S., Löhne, A. Solving polyhedral d.c. optimization problems via concave minimization. J Glob Optim 78, 37–47 (2020). https://doi.org/10.1007/s10898-020-00913-z