1 Introduction

The 0-1 knapsack problem (KP) is a classical and extensively studied combinatorial optimization problem, with two monographs [3, 4] devoted to KP and its relatives. Given a positive knapsack capacity C and n items \(j=1,\dots ,n\) with positive weights \(w_j\) and profits \(p_j\), the task in KP is to select a subset of items of maximum total profit subject to the constraint that the total weight of the selected items may not exceed the knapsack capacity. The 0-1 knapsack problem is (weakly) \(\textsf {NP}\)-hard, but it admits a fully polynomial-time approximation scheme (FPTAS) and can be solved exactly in pseudopolynomial time by dynamic programming (cf. [3]).

The product knapsack problem (PKP) is a new addition to the knapsack family. It has recently been introduced in [1] and is formally defined as follows:

Definition 1

(Product knapsack problem (PKP))

INSTANCE:

Items \(j\in N:=\{1,\dots ,n\}\) with weights \(w_j\in \mathbb {Z}\) and profits \(p_j\in \mathbb {Z}\), and a positive knapsack capacity \(C\in \mathbb {N}_+\).

TASK:

Find a subset \(S\subseteq N\) with \(\sum _{j\in S}w_j\le C\) such that \(\prod _{j\in S} p_j\) is maximized.

The solution \(S=\emptyset \) is always feasible and is assumed to yield an objective value of zero. Note that the assumption that the knapsack capacity as well as all weights and profits are integers is without loss of generality. Indeed, any instance with rational input data can be transformed into an equivalent instance with integer input data in polynomial time by multiplying all numbers by their lowest common denominator.

D’Ambrosio et al. [1] list several application scenarios for PKP, in particular in the area of computational social choice, and also provide pointers to literature on other nonlinear knapsack problems. Furthermore, two different ILP formulations for PKP are presented and compared from both a theoretical and a computational perspective. In addition, D’Ambrosio et al. [1] develop an algorithm performing dynamic programming by weights with pseudopolynomial running time \(\mathcal {O}(nC)\). A computational study exhibits the strengths and weaknesses of the dynamic program and the ILP approaches for determining exact solutions of PKP depending on the characteristics of the test instances.

Concerning the complexity of PKP, a short proof of weak NP-hardness is given as a side remark in [1]. This proof, however, uses a reduction from KP and requires an exponential blow-up of the profits of the given instance of KP (by putting them into the exponent of 2). Since KP is only weakly NP-hard, this does not prove the desired hardness result. However, a valid NP-hardness proof for PKP has recently been provided in [2], which shows that the problem is weakly NP-hard even when all profits are required to be positive.

Note that this proof requires concepts of advanced calculus. As a possibly useful alternative, we provide a simpler proof using only elementary operations in an extended version of this paper, which is available as a technical report [5].

1.1 Our contribution

In this paper, we provide an FPTAS for PKP based on dynamic programming by profits. Since PKP is weakly NP-hard, an FPTAS is the best approximation result possible for the problem unless \(\textsf {P}=\textsf {NP}\). Moreover, the construction of an FPTAS deserves attention since standard greedy-type algorithms do not yield a constant approximation ratio for PKP. We demonstrate this in Sect. 4 by providing a tight analysis of the greedy algorithm obtained by extending the classical greedy procedure for KP to PKP in the natural way.

We do not report computational experiments on the FPTAS or the greedy algorithm. The presented pseudocode descriptions are provided in order to illustrate the algorithms and allow a rigorous analysis of the obtained approximation ratios, but are not optimized for practical efficiency.

2 Preliminaries

In contrast to KP, both the item weights \(w_j\) and the item profits \(p_j\) are allowed to be negative in PKP. However, one can exclude certain weight-profit combinations that yield “useless” items, which leads to the following assumption used throughout the paper:

Assumption 1

Any instance of PKP satisfies:

  (a)

    Any single item fits into the knapsack, i.e., \(w_j\le C\) for all \(j\in N\).

  (b)

    All profits are nonzero, i.e., \(p_j\in \mathbb {Z}\setminus \{0\}\) for all \(j\in N\).

  (c)

    For each item \(j\in N\) with negative profit \(p_j<0\), there exists another item \(j'\in N\setminus \{j\}\) with negative profit \(p_{j'}<0\) such that \(w_j+w_{j'} \le C\).

  (d)

    All weights are nonnegative, i.e., \(w_j\in \mathbb {N}_0\) for all \(j\in N\).

  (e)

    All items with weight zero have negative profit, i.e., \(p_j<0\) if \(w_j=0\).

We note that Assumption 1 imposes no loss of generality and can easily be checked in polynomial time. Indeed, items \(j\in N\) violating (a), (b), or (c) can never be part of any feasible solution with positive objective value and may, thus, be removed from the instance. The nonnegativity of the weights \(w_j\) demanded in (d) has been shown to impose no loss of generality in [1]. For (e), we note that items j with \(w_j=0\) can always be assumed to be packed if their profit is positive (but items j with \(w_j=0\) and negative profit remain part of the optimization).
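The following Python sketch (with hypothetical function and variable names, not taken from [1]) illustrates how such a preprocessing pass could enforce Assumption 1; it assumes that the weights have already been made nonnegative as justified for (d).

```python
def enforce_assumption1(items, C):
    """items: list of (w_j, p_j) with nonnegative integer weights.

    Returns (kept, forced): 'forced' are zero-weight items with positive
    profit, which can always be packed; 'kept' satisfies Assumption 1.
    """
    # (a) drop items that do not fit, (b) drop items with zero profit
    items = [(w, p) for (w, p) in items if w <= C and p != 0]

    # (e) zero-weight items with positive profit are packed in advance
    forced = [(w, p) for (w, p) in items if w == 0 and p > 0]
    items = [(w, p) for (w, p) in items if not (w == 0 and p > 0)]

    # (c) keep a negative-profit item only if it can be paired with another
    # negative-profit item without exceeding the capacity
    neg_weights = sorted(w for (w, p) in items if p < 0)

    def has_partner(w):
        # lightest possible partner: the lightest *other* negative-profit item
        others = neg_weights[1:] if w == neg_weights[0] else neg_weights[:1]
        return bool(others) and w + others[0] <= C

    kept = [(w, p) for (w, p) in items if p > 0 or has_partner(w)]
    return kept, forced
```

Note that the scan for (c) cannot cascade: whenever a negative-profit item is kept because it fits together with the lightest other negative-profit item, that lightest item is kept as well, so a single pass suffices.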

Using Assumption 1 (b), the item set N can be partitioned into \(N^+ :=\{j \in N \mid p_j \ge 1\}\) and \(N^- :=\{j \in N \mid p_j \le -1\}\). For convenience, we define \(p_{\max }:=\max _{j\in N} |p_j|\), \(p^+_{\max }:=\max _{j\in N^+} p_j\), and \(p^-_{\max }:=\max _{j\in N^-} |p_j|\).

Throughout the paper, we denote an optimal solution set for a given instance of PKP by \(S^*\) and the optimal objective value by \(z^*\). Note that we must always have \(z^*\ge 1\) since packing any item from \(N^+\) or any feasible pair of items from \(N^-\) yields an objective value of at least 1.

Definition 2

For \(0<\alpha \le 1\), an algorithm A that computes a feasible solution set \(S\subseteq N\) with \(\prod _{j\in S}p_j \ge \alpha \cdot z^*\) in polynomial time for every instance of PKP is called an \(\alpha \)-approximation algorithm for PKP. The value \(\alpha \) is then called the approximation ratio of A.

A polynomial-time approximation scheme (PTAS) for PKP is a family of algorithms \((A_{\varepsilon })_{\varepsilon >0}\) such that, for each \(\varepsilon >0\), the algorithm \(A_{\varepsilon }\) is a \((1-\varepsilon )\)-approximation algorithm for PKP. A PTAS \((A_{\varepsilon })_{\varepsilon >0}\) for PKP is called a fully polynomial-time approximation scheme (FPTAS) if the running time of \(A_{\varepsilon }\) is additionally polynomial in \(\frac{1}{\varepsilon }\).

Throughout the paper, \(\log (x)\) always refers to the base 2 logarithm of x and \(\ln (x)\) refers to the natural logarithm of x.

3 A fully polynomial-time approximation scheme

We now derive a fully polynomial-time approximation scheme (FPTAS) for PKP based on dynamic programming.

The most common approach for the exact solution of knapsack-type problems in pseudopolynomial time applies dynamic programming by weights. This means that, for every capacity value \(d=0,1,\ldots ,C\), the largest profit value reachable by a feasible solution is determined, which yields a running time polynomial in C (see [3, Sec. 2.3]). However, for obtaining fully polynomial-time approximation schemes, one usually performs dynamic programming by profits. In this case, for every profit value p up to some upper bound U on the objective function value, the smallest weight required for a feasible solution with profit p is sought, which leads to a running time polynomial in U (see [3, Lemma 2.3.2]). Then, the profit space is simplified in some way, e.g., by scaling (cf. [3, Sec. 2.6]), such that the running time of the dynamic program becomes polynomial and the incurred loss of accuracy remains bounded. D’Ambrosio et al. [1] provide an algorithm solving PKP with dynamic programming by weights, where each entry of the dynamic programming array contains the objective value of a subproblem. However, exchanging the roles of profits and weights (as is done, e.g., for KP, see [3, Sec. 2.3]) would require a dynamic programming array of length \(\mathcal {O}(p_{\max }^n)\), which is exponential and does not permit a suitable scaling procedure.

An obvious way out of this dilemma would be the application of the logarithm to the profits. In fact, such an approach is suggested as a side remark in [1, Sec. 3] for dynamic programming by weights (without commenting on the details of the rounding process). For dynamic programming by profits, however, the profit values must be mapped to integers as indices of the dynamic programming array and there seems to be no way to preserve optimality in such a process. It should also be noted that applying any k-approximation algorithm for KP to the instance resulting from logarithmization would only yield a \((1/z^*)^{1/k}\)-approximation for PKP. Thus, constant-factor approximations for PKP require different approaches.

We now construct a scaled profit space that actually yields a \((1-\varepsilon )\)–approximation for PKP. Our scaling construction is based on a parameter \(K>0\) depending on \(\varepsilon \), which will be defined later. For every item j, we define an integer scaled profit value in the logarithmized space as

$$\begin{aligned} \tilde{p}_j:=\left\lfloor \frac{\log (|p_j|)}{K}\right\rfloor . \end{aligned}$$
(1)

Since \(|p_j|\ge 1\), we have \(\tilde{p}_j\ge 0\), and we obtain \(\tilde{p}_j=0\) if and only if \(|p_j|=1\). Note that an item j with \(p_j=-1\) and \(\tilde{p}_j=0\) might still be useful for changing the sign of the solution of PKP. Analogous to \(p_{\max }\), we define \(\tilde{p}_{\max } :=\left\lfloor \frac{\log (p_{\max })}{K}\right\rfloor \). Ruling out trivial instances, we can assume without loss of generality that \(p_{\max }\ge 2\), so \(\log (p_{\max })\ge 1\).

Regarding the computability of the scaled profits \(\tilde{p}_j\), we observe that their definition involves logarithms \(\log (|p_j|)\), which cannot be computed exactly in polynomial time. However, these logarithms only appear in expressions that are rounded to integers, so we do not have to compute these values exactly.
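As an illustration (this helper is not part of [1] and its names are hypothetical), the scaled profits from (1) can be evaluated with standard floating-point logarithms:

```python
import math

def scaled_profit(p, K):
    """Scaled profit from (1): floor(log2(|p|) / K), for integer p != 0."""
    # Double-precision log2 is accurate enough unless log2(|p|)/K lies
    # extremely close to an integer; for rational K = a/b, such boundary
    # cases could be resolved exactly by the integer comparison
    # 2**(t*a) <= abs(p)**b, which avoids computing the logarithm at all.
    return math.floor(math.log2(abs(p)) / K)

# Consistent with Example 1 below (K = 0.001): a profit of absolute
# value 1025 is mapped to floor(log2(1025)/0.001) = 10001.
assert scaled_profit(1025, 0.001) == 10001
assert scaled_profit(-1, 0.001) == 0   # |p_j| = 1 yields scaled profit 0
```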

We define the following dynamic programming arrays for profit values \(\tilde{p}=0,1,\ldots , n\cdot \tilde{p}_{\max }\):

$$\begin{aligned} W_j^+(\tilde{p})&:=\min _{S\subseteq \{1,\ldots ,j\}} \left\{ \sum _{i\in S} w_i \mid \sum _{i \in S} \tilde{p}_i = \tilde{p},\, |S\cap N^-| \text { is even} \right\} ,\\ W_j^-(\tilde{p})&:=\min _{S\subseteq \{1,\ldots ,j\}} \left\{ \sum _{i\in S} w_i \mid \sum _{i \in S} \tilde{p}_i = \tilde{p},\, |S\cap N^-| \text { is odd} \right\} . \end{aligned}$$

Note that the empty set has even cardinality. For convenience, we set the minimum over the empty set equal to \(+\infty \).

The computation of these arrays can be done by the following recursion, which is related to Algorithm \(\text {DP}_{\text {PKP}}\) in [1, Fig. 1]:

$$\begin{aligned}&\text {If } p_j\ge 1\text {, then:}\\&\quad W_j^+(\tilde{p}):=\min \{W_{j-1}^+(\tilde{p}),\, W_{j-1}^+(\tilde{p}-\tilde{p}_j) +w_j\}\\&\quad W_j^-(\tilde{p}):=\min \{W_{j-1}^-(\tilde{p}),\, W_{j-1}^-(\tilde{p}-\tilde{p}_j) +w_j\}\\&\text {If } p_j\le -1\text {, then:}\\&\quad W_j^+(\tilde{p}):=\min \{W_{j-1}^+(\tilde{p}),\, W_{j-1}^-(\tilde{p}-\tilde{p}_j) +w_j\}\\&\quad W_j^-(\tilde{p}):=\min \{W_{j-1}^-(\tilde{p}),\, W_{j-1}^+(\tilde{p}-\tilde{p}_j) +w_j\} \end{aligned}$$

The obvious initialization is given by \(W_0^+(0):=0\) and setting all other entries (including the hypothetical ones with \(\tilde{p}<0\)) to \(+\infty \).
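For concreteness, the following Python sketch is a direct transcription of this recursion (with hypothetical names; it is not the implementation of [1]). It fills both arrays and returns the largest scaled profit value reachable within the capacity by a solution containing an even number of negative-profit items.

```python
import math

def dp_scaled_profits(items, C, K):
    """items: list of (w_j, p_j) satisfying Assumption 1.

    Returns the largest scaled profit pt_best with W_n^+(pt_best) <= C,
    together with the final arrays W^+ and W^- (indexed by scaled profit).
    """
    ptilde = [math.floor(math.log2(abs(p)) / K) for (_, p) in items]
    P = sum(ptilde)                    # upper bound on any reachable scaled profit
    INF = math.inf
    W_plus = [0] + [INF] * P           # W_0^+(0) = 0, all other entries +inf
    W_minus = [INF] * (P + 1)          # entries with negative index act as +inf

    for (w, p), pt in zip(items, ptilde):
        new_plus, new_minus = W_plus[:], W_minus[:]
        for q in range(pt, P + 1):
            if p >= 1:                 # positive profit: parity is preserved
                new_plus[q] = min(new_plus[q], W_plus[q - pt] + w)
                new_minus[q] = min(new_minus[q], W_minus[q - pt] + w)
            else:                      # negative profit: parity is flipped
                new_plus[q] = min(new_plus[q], W_minus[q - pt] + w)
                new_minus[q] = min(new_minus[q], W_plus[q - pt] + w)
        W_plus, W_minus = new_plus, new_minus

    pt_best = max(q for q in range(P + 1) if W_plus[q] <= C)
    return pt_best, W_plus, W_minus
```

Recovering the solution set itself additionally requires storing which item realizes each array entry (or backtracking through the arrays), exactly as in the standard dynamic program for KP.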

The approximate solution set \(S^A\) is represented by the array entry with \(\max \{\tilde{p}\mid W_n^+(\tilde{p}) \le C\}\). It follows by construction that \(S^A\) maximizes the total profit in the associated instance of KP with scaled profits \(\tilde{p}_j\) among all subsets of N that fulfill the weight restriction and contain an even number of items from \(N^-\). In the following, we show that, by choosing

$$\begin{aligned} K:=\frac{\varepsilon }{n^2}>0\,, \end{aligned}$$
(2)

the set \(S^A\) yields a \((1-\varepsilon )\)-approximation for PKP and can be computed in polynomial time via the above dynamic programming procedure. To this end, we use the following two lemmas:

Lemma 1

For \(\varepsilon \in (0,1)\), we have \(\varepsilon \le - \log (1-\varepsilon )\).

Proof

The statement follows since, for any \(x\in (0,1)\), we have

$$\begin{aligned} -\log (1-x) = -\ln (1-x) / \ln (2) \ge -\ln (1-x) = \sum _{k=1}^\infty \frac{x^k}{k} \ge x. \end{aligned}$$

\(\square \)

Lemma 2

Any optimal solution set \(S^*\) for PKP satisfies

$$\begin{aligned} \sum _{j\in S^*}\log (|p_j|) \ge \log (p_{\max }). \end{aligned}$$

Proof

Let \(j_{\max }\in N\) denote an item with \(|p_{j_{\max }}|=p_{\max }\). If \(p_{j_{\max }}>0\), then the set \(\{j_{\max }\}\), which is feasible for PKP by Assumption 1 (a), has objective value \(p_{\max }\). If \(p_{j_{\max }}<0\), Assumption 1 (c) implies that there exists another item \(j'\ne j_{\max }\) with \(p_{j'}<0\) such that \(\{j_{\max },j'\}\) is feasible for PKP, and this set has objective value \(p_{j'}\cdot p_{j_{\max }} \ge p_{\max }\) since \(p_{j'}\le -1\) by Assumption 1 (b). Thus, in both cases, the optimality of \(S^*\) for PKP implies that

$$\begin{aligned}&\prod _{j\in S^*}p_j \ge p_{\max } \ \Leftrightarrow \; \log \left( \prod _{j\in S^*}p_j\right) \ge \log (p_{\max }) \\&\quad \Leftrightarrow \; \log \left( \prod _{j\in S^*}|p_j|\right) \ge \log (p_{\max }) \ \Leftrightarrow \; \sum _{j\in S^*}\log (|p_j|) \ge \log (p_{\max }). \end{aligned}$$

\(\square \)

Proposition 1

The running time for computing \(S^A\) is in \(\mathcal {O}(\frac{n^4}{\varepsilon } \log (p_{\max }))\), which is polynomial in \(1/\varepsilon \) and in the encoding length of the input of PKP.

Proof

Clearly, for each of the n items, one has to pass through the whole length of the two dynamic programming arrays. Therefore, the total running time is in

$$\begin{aligned} \mathcal {O}(n^2\,\tilde{p}_{\max }) = \mathcal {O}\left( n^2 \frac{\log (p_{\max })}{K}\right) = \mathcal {O}\left( n^4 \frac{\log (p_{\max })}{\varepsilon }\right) . \end{aligned}$$

\(\square \)

Proposition 2

The set \(S^A\) yields a \((1-\varepsilon )\)–approximation for PKP.

Proof

The proof consists of two parts. First, we analyze the effect of scaling by K and rounding down in (1) by showing that \(S^A\) yields an objective value close to the value of an optimal solution set \(S^*\) for PKP in the associated instance of KP with profits \(\log (|p_j|)\). The argumentation closely follows the standard FPTAS for KP (see [3, Sec. 2.6]):

$$\begin{aligned} \sum _{j\in S^A} \log (|p_j|)&\ge \sum _{j\in S^A} K \cdot \left\lfloor \frac{\log (|p_j|)}{K}\right\rfloor \ge \sum _{j\in S^*} K \cdot \left\lfloor \frac{\log (|p_j|)}{K}\right\rfloor \end{aligned}$$
(3)
$$\begin{aligned}&\ge \sum _{j\in S^*} K \cdot \left( \frac{\log (|p_j|)}{K}-1\right) \ge \sum _{j\in S^*} \log (|p_j|) - n\cdot K \end{aligned}$$
(4)

To obtain the second inequality in (3), we exploited the optimality of \(S^A\) for the KP instance with profits \(\tilde{p}_j\). We now set

$$\begin{aligned} \varepsilon ' :=\frac{-\log (1-\varepsilon )}{n\cdot \log (p_{\max })}>0. \end{aligned}$$
(5)

Then, using the definition of K in (2) and that \(\varepsilon \le - \log (1-\varepsilon )\) for \(\varepsilon \in (0,1)\), we obtain

$$\begin{aligned} n\cdot K = \frac{\varepsilon }{n} \le \frac{-\log (1-\varepsilon )}{n} = \varepsilon '\cdot \log (p_{\max }), \end{aligned}$$

and using that \(\sum _{j\in S^*}\log (|p_j|) \ge \log (p_{\max })\) by Lemma 2, the chain of inequalities in (3)–(4) yields that

$$\begin{aligned} \sum _{j\in S^A} \log (|p_j|) \ge \sum _{j\in S^*} \log (|p_j|) - \varepsilon '\cdot \log (p_{\max }) \ge (1-\varepsilon ') \sum _{j\in S^*} \log (|p_j|). \end{aligned}$$

In the second part of the proof, we simply raise two to the power of both sides of this inequality, i.e., \(2^{\sum _{j\in S^A} \log (|p_j|)} \ge \left( 2^{\left( \sum _{j\in S^*} \log (|p_j|)\right) }\right) ^{1-\varepsilon '}\), so

$$\begin{aligned} \prod _{j\in S^A} |p_j|&\ge \left( \prod _{j\in S^*} |p_j|\right) ^{1-\varepsilon '} = z^* \cdot (1/z^*)^{\varepsilon '} \ge z^* \cdot \left( \frac{1}{(p_{\max })^n}\right) ^{\varepsilon '} \end{aligned}$$
(6)
$$\begin{aligned}&= z^* \cdot 2^{-\varepsilon '\, n \log (p_{\max })} = z^* \cdot 2^{\log (1-\varepsilon )} = (1-\varepsilon ) z^*. \end{aligned}$$
(7)

Here, the right inequality in (6) is derived from the trivial bound \(z^*\le (p_{\max })^n\), and the second equality in (7) from the definition of \(\varepsilon '\) in (5). Recalling that \(S^A\) contains an even number of items from \(N^-\), the claim follows.\(\square \)

Table 1 Profits \(p_j\), weights \(w_j\), and scaled profits \(\tilde{p}_j\) of the items in Example 1

Propositions 1 and 2 immediately yield the following theorem:

Theorem 1

There exists an FPTAS with running time in \(\mathcal {O}(\frac{n^4}{\varepsilon } \log (p_{\max }))\) for PKP. \(\square \)

We conclude this section with an example illustrating how the FPTAS works.

Example 1

Consider the instance of PKP given by the \(n=5\) items with profits and weights as shown in Table 1 and a knapsack capacity of \(C :=9\). We choose \(\varepsilon = 0.025\) so that \(K=\frac{\varepsilon }{n^2}=0.001\).

The resulting scaled profits \(\tilde{p}_j\) are shown in the last row of Table 1 and we have \(\tilde{p}_{\max }=10001\), so \(n\cdot \tilde{p}_{\max }=50005\). Thus, the FPTAS computes the relevant dynamic programming arrays \(W_j^+(\tilde{p})\) and \(W_j^-(\tilde{p})\) for all profit values \(\tilde{p}=0,1,\ldots , 50005\). Note that \(N^+=\{1,2,4\}\) and \(N^-=\{3,5\}\).

For this instance, the FPTAS finds the optimal solution set \(S^*=\{3,5\}\) during the computation of \(W_5^+(10001)\), which is given as follows:

$$\begin{aligned} W_5^+(10001) = \min \left\{ W_{4}^+(10001),\, W_{4}^-(10001-0) + 4\right\} = \min \left\{ +\infty , 5+4\right\} =9 \end{aligned}$$

Here, \(W_{4}^+(10001) = +\infty \) since a scaled profit of 10001 cannot be obtained by any subset \(S\subseteq \{1,2,3,4\}\) containing an even number of items from \(N^-\), and \(W_{4}^-(10001) = 5\) since a scaled profit of 10001 is reachable by the subset \(S=\{3\}\subseteq \{1,2,3,4\}\) that contains an odd number of items from \(N^-\). Thus, the solution set corresponding to the array entry \(W_5^+(10001)\) is \(\{3,5\}=S^*\), and since \(\tilde{p}=10001\) is indeed the highest value of \(\tilde{p}\) for which \(W_5^+(\tilde{p})\le C=9\), this is also the set \(S^A\) returned by the FPTAS.

4 A greedy-type algorithm

For KP, the classical greedy procedure is probably one of the most obvious first attempts for anybody confronted with the problem. Hence, it is interesting to evaluate the performance of a variant of this greedy procedure also for PKP.

It is known that, for obtaining a bounded approximation ratio for KP in the classical greedy procedure, one has to take into account also the item with largest profit as a singleton solution (cf. [3, Sec. 2.1]). Extending this requirement to the negative profits allowed in PKP, we additionally determine, among all items with negative profits, a feasible pair of items with largest profit product. Moreover, if the greedy solution contains an odd number of items from \(N^-\), we simply remove the negative-profit item whose profit has the smallest absolute value. This leads to the following natural greedy algorithm for PKP, which we refer to as Product Greedy:

(Pseudocode of Algorithm Product Greedy)

We note that, since \(\log (|p_j|)/w_j=\log \left( |p_j|^{1/w_j}\right) \) and the logarithm is a strictly increasing function, the sorting and renumbering of the items in step 1 of Product Greedy can equivalently be done by sorting the items in nonincreasing order of \(|p_j|^{1/w_j}\), which means that the values \(\log (|p_j|)/w_j\) do not have to be computed in the algorithm.
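The following Python sketch mirrors the textual description of Product Greedy given above (the step numbering of the original pseudocode is not reproduced, and all names are hypothetical). It sorts the items via the key \(|p_j|^{1/w_j}\) as in the preceding remark, stops the greedy packing at the split item, i.e., it produces the split solution used in the analysis below, and then compares the resulting candidates.

```python
import math

def product_greedy(items, C):
    """items: list of (w_j, p_j) satisfying Assumption 1; returns a feasible
    list of item indices.  Sketch only; not the pseudocode from the paper."""
    indices = range(len(items))

    def value(S):                      # the empty product counts as 0 (Definition 1)
        return math.prod(items[j][1] for j in S) if S else 0

    # Sort by nonincreasing |p_j|^(1/w_j), equivalent to log(|p_j|)/w_j;
    # zero-weight items (negative profit by Assumption 1 (e)) come first.
    order = sorted(indices,
                   key=lambda j: (abs(items[j][1]) ** (1 / items[j][0])
                                  if items[j][0] > 0 else math.inf),
                   reverse=True)

    # Greedy packing up to (but excluding) the split item s.
    split_solution, weight = [], 0
    for j in order:
        if weight + items[j][0] > C:
            break
        split_solution.append(j)
        weight += items[j][0]

    # If the split solution contains an odd number of negative-profit items,
    # remove the one whose profit has the smallest absolute value.
    negatives_packed = [j for j in split_solution if items[j][1] < 0]
    if len(negatives_packed) % 2 == 1:
        split_solution.remove(min(negatives_packed,
                                  key=lambda j: abs(items[j][1])))

    # Candidates: the (corrected) split solution, the item with largest
    # positive profit, and the best feasible pair of negative-profit items
    # (found here by brute force for simplicity).
    candidates = [split_solution]
    positives = [j for j in indices if items[j][1] > 0]
    if positives:
        candidates.append([max(positives, key=lambda j: items[j][1])])
    negatives = [j for j in indices if items[j][1] < 0]
    pairs = [[i, j] for i in negatives for j in negatives
             if i < j and items[i][0] + items[j][0] <= C]
    if pairs:
        candidates.append(max(pairs, key=value))

    return max(candidates, key=value)
```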

Let \({j^+_{\max } :=\mathop {\hbox {argmax}}\limits \{p_j \mid j \in N^+\}}\) denote an item with largest positive profit (i.e., with \(p_{j^+_{\max }} = p^+_{\max }\)) as in Product Greedy. Similarly, we let \(j^-_{\max } :=\mathop {\hbox {argmax}}\limits \{|p_j| \mid j \in N^-\}\) denote an item with smallest negative profit (i.e., with \(-p_{j^-_{\max }}=p^-_{\max }\)). Then, by Assumption 1 (c), there exists another item in \(N^-\) that can be packed into the knapsack together with \(j^-_{\max }\). This implies that the profits of the items \(j^-\) and \(j_1,j_2\) considered in Product Greedy satisfy

$$\begin{aligned} p_{j_1} \cdot p_{j_2} \ge -p_{j^-_{\max }} \ge -p_{j^-}. \end{aligned}$$
(8)

In the following analysis, we denote the objective value obtained by Product Greedy by \(z^H\).

Theorem 2

  (a)

    Product Greedy is a \(1/(z^*)^{2/3}\)-approximation algorithm for PKP.

  (b)

    Product Greedy is a \(1/(p_{\max })^2\)-approximation algorithm for PKP.

Proof

The algorithm clearly runs in polynomial time. In order to analyze its approximation ratio, let \(s\in N\) be the split item, i.e., the first item in the given order that cannot be packed into the knapsack anymore during the greedy procedure performed in step 2. Similar to the analysis of the greedy procedure for KP, the analysis concentrates on the split solution, i.e., the set of items \(\bar{S}= \{j \in N \mid j \le s-1\}\) produced in step 2 of Product Greedy.

We distinguish two cases depending on the number of items with negative profits in \({\bar{S}}\) and, for each of the two cases, two subcases depending on the sign of the profit \(p_s\) of the split item s:

Case 1: \(|{\bar{S}} \cap N^-|\) is even.

In this case, the solution \(S=\bar{S}\) is considered when choosing the best solution in step 11. If \(p_s>0\), then

$$\begin{aligned} 2\cdot \log (z^H)&\ge 2\cdot \max \left\{ \sum _{j \in \bar{S}} \log (|p_j|), \, \log (p^+_{\max })\right\} \\&\ge \sum _{j \in \bar{S}} \log (|p_j|) + \log (p^+_{\max }) \ \ge \ \sum _{j=1}^s \log (|p_j|). \end{aligned}$$

Obviously, we also have \(\log (z^H) + \log (p^+_{\max }) \ge \sum _{j=1}^s \log (|p_j|)\).

Similarly, if \(p_s<0\), then

$$\begin{aligned} 2\cdot \log (z^H)&\ge 2\cdot \max \left\{ \sum _{j \in \bar{S}} \log (|p_j|), \, \log (|p_{j_1}|)+\log (|p_{j_2}|)\right\} \\&\ge \sum _{j \in \bar{S}} \log (|p_j|) + \log (|p_{j_1}|)+\log (|p_{j_2}|) \\&\ge \sum _{j \in \bar{S}} \log (|p_j|) + \log (|p_s|) \ = \ \sum _{j=1}^s \log (|p_j|), \end{aligned}$$

where the third inequality follows from (8). Moreover, we have \(\log (z^H) + \log (p^-_{\max }) \ge \sum _{j=1}^s \log (|p_j|)\).

Case 2: \(|{\bar{S}} \cap N^-|\) is odd.

In this case, the solution \(S=\bar{S}\setminus \{j^-\}\) is considered when choosing the best solution in step 11. If \(p_s>0\), we obtain

$$\begin{aligned} 3\cdot \log (z^H)&\ge 3 \cdot \max \left\{ \sum _{j \in \bar{S}{\setminus \{j^-\}}} \log (|p_j|), \, \log (|p_{j_1}|)+\log (|p_{j_2}|),\, \log (p^+_{\max })\right\} \\&\ge \sum _{j \in \bar{S}{\setminus \{j^-\}}} \log (|p_j|) + \log (|p_{j_1}|)+\log (|p_{j_2}|) + \log (p^+_{\max }) \\&\ge \sum _{j \in \bar{S}{\setminus \{j^-\}}} \log (|p_j|) + \log (|p_{j^-}|) + \log (p_s) \ {=}\ \sum _{j=1}^s \log (|p_j|), \end{aligned}$$

by invoking (8) again. In this case, we also have \(\log (z^H) + \log (p^-_{\max }) + \log (p^+_{\max }) \ge \sum _{j=1}^s \log (|p_j|)\).

Similarly, if \(p_s<0\), then

$$\begin{aligned} 3\cdot \log (z^H)&\ge 3 \cdot \max \left\{ \sum _{j \in \bar{S}{\setminus \{j^-\}}} \log (|p_j|), \, \log (|p_{j_1}|)+\log (|p_{j_2}|)\right\} \\&\ge \sum _{j \in \bar{S}{\setminus \{j^-\}}} \log (|p_j|) + 2\left( \log (|p_{j_1}|)+\log (|p_{j_2}|)\right) \\&\ge \sum _{j \in \bar{S}{\setminus \{j^-\}}} \log (|p_j|) + \log (|p_{j^-}|) + \log (|p_s|) \ {=} \ \sum _{j=1}^s \log (|p_j|). \end{aligned}$$

Moreover, we have \(\log (z^H) + 2\log (p^-_{\max }) \ge \sum _{j=1}^s \log (|p_j|)\).

Summarizing all four cases, we always have \(3\cdot \log (z^H) \ge \sum _{j=1}^s \log (|p_j|)\).

Then, since \(\sum _{j=1}^s \log (|p_j|)\) is an upper bound on the optimal objective value of the LP relaxation of the associated instance of KP with profits \(\log (|p_j|)\) (see, e.g., [3]), we have \(\sum _{j=1}^s \log (|p_j|) \ge \sum _{j\in S^*} \log (|p_j|) =\log (\prod _{j\in S^*} p_j)=\log (z^*)\) (clearly, \(|S^* \cap N^-|\) must be even). This yields

$$\begin{aligned} 3\cdot \log (z^H) \ge \log (z^*) \Longleftrightarrow z^H \ge (z^*)^{1/3} \end{aligned}$$

and proves the approximation ratio in (a).

Moreover, in all four cases the additive error in the logarithmic space can be bounded by \(\max \{\log (p^+_{\max }), \, \log (p^-_{\max })\} + \log (p^-_{\max }) \le 2\cdot {\log }(p_{\max })\), which yields the approximation ratio in (b). \(\square \)

The approximation ratios obtained by Product Greedy are rather disappointing. The following example, however, shows that the analysis in the proof of Theorem 2 is asymptotically tight and that a considerable deviation from the greedy principle would be necessary to improve upon the obtained approximation ratios:

Example 2

Consider the instance of PKP given by the item profits and weights shown in Table 2 and a knapsack capacity of \(C:=3M\) for some large, positive integer M.

Table 2 Profits \(p_j\) and weights \(w_j\) of the items in Example 2 with items indexed in nonincreasing order of \(\log (|p_j|)/w_j\)

Algorithm Product Greedy first finds \({\bar{S}}=\{1,2,3\}\) in step 2, but has to remove item 3 in step 5 since \(|{\bar{S}}\cap N^-|=1\), which yields \(S=\{1,2\}\) with an objective value of \(2(M+2)\). The best negative pair found in step 9 is given by \(j_1=3\) and \(j_2=6\), and has profit product \(M+1\). Finally, \(j^+_{\max }=2\) with \(p_{j^+_{\max }}=p^+_{\max }=M+2\) in step 10. Therefore, Product Greedy returns the solution \(\{1,2\}\) with an objective value of \(z^H=2(M+2)\), while the optimal solution consists of items 2, 4, and 5 with objective value \(z^*=(M+2)M^2\).
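To make the tightness claim for Theorem 2 (a) concrete, the values stated above yield

$$\begin{aligned} \frac{z^H}{z^*} = \frac{2(M+2)}{(M+2)\,M^2} = \frac{2}{M^2} \qquad \text {and}\qquad \frac{1}{(z^*)^{2/3}} = \frac{1}{\left( (M+2)\,M^2\right) ^{2/3}} \in \Theta \left( \frac{1}{M^2}\right) , \end{aligned}$$

so, as \(M\rightarrow \infty \), the ratio attained by Product Greedy on this instance exceeds the guarantee \(1/(z^*)^{2/3}\) of Theorem 2 (a) only by a constant factor.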