Abstract

This paper presents a region-division-linearization algorithm for solving a class of generalized linear multiplicative programs (GLMPs) with positive exponents. In this algorithm, the original nonconvex problem GLMP is transformed into a series of linear programming problems by dividing the outer space of the problem into finitely many rectangles. A new two-stage acceleration technique is introduced to improve the computational efficiency of the algorithm; it removes from the outer space regions that cannot contain an optimal solution of the problem GLMP. In addition, the global convergence of the algorithm is discussed and its computational complexity is investigated, which demonstrates that the algorithm is a fully polynomial time approximation scheme. Finally, the numerical results show that the algorithm is effective and feasible.

1. Introduction

Consider a class of generalized linear multiplicative programs (GLMPs):

Here, , X is a nonempty bounded closed set, , , and . represents the transpose of a vector (e.g., represents the transpose of a vector ). Besides, we assume that for any , all make .

The problem GLMP usually has multiple nonglobal local optimal solutions and belongs to a class of NP-hard problems [1]; it is widely applied in the fields of financial optimization [2, 3], robust optimization [4], microeconomics [5], and multiobjective decision making [6, 7]. In addition, the GLMP includes a wide range of mathematical programming categories, such as linear multiplicative programming, quadratic programming, bilinear programming, and so on. For these and other reasons, GLMP has attracted the attention of many experts, scholars, and engineering practitioners and has set off a new wave of interest in global optimization. With the increasing dependence of practical problems on optimization models, local optimization theory and global optimization algorithms have made remarkable progress. However, compared with local optimization, the theory of global optimization algorithms remains less developed. There are many methods for this kind of problem, such as level set algorithms [8], heuristic algorithms [9, 10], branch-and-bound algorithms [11–13], outer approximation algorithms [14], the parametric simplex algorithm [15], and so on, but these methods do not give the computational complexity of the algorithm. In addition, Depetrini and Locatelli [16] considered the problem of minimizing the product of two affine functions over a polyhedron and proposed a polynomial time approximation algorithm. Locatelli [17] presented an approximation algorithm for more general global optimization problems and derived its computational complexity, but numerical results for that algorithm are lacking. Recently, Shen and Wang [18] proposed a fully polynomial time approximation algorithm for solving the problem GLMP globally, but without any acceleration technique.
Moreover, for a more comprehensive overview of the GLMP, we refer the reader to the more detailed literature [8, 19–21].

In this paper, two approximation algorithms are proposed for solving the GLMP. By constructing a nonuniform grid, the process of solving the original problem is transformed into solving a series of linear programs, and it is proved that the proposed algorithm obtains a global approximation solution of the GLMP. Besides, we put forward a two-stage acceleration technique to speed up Algorithm 1, which yields Algorithm 2. Then, by discussing the computational complexity of the algorithms, it is shown that both are polynomial time approximation algorithms. Numerical experiments show that the performance of Algorithm 2 is clearly better than that of Algorithm 1, and the numerical results in Table 2 show that, in solving Problems 1-3, Algorithm 2 uses less CPU time and fewer iterations than the methods of [17, 18].


The rest of this paper is organized as follows. In Section 2, we first transform the problem GLMP into its equivalent optimization problem EOP and give the corresponding region-division-linearization technique. Section 3 presents the global approximation algorithm for problem GLMP and establishes its convergence. In Section 4, we derive the computational complexity of the proposed algorithm, and in Section 5 we carry out numerical experiments to verify the feasibility and effectiveness of the algorithm. The final section gives a brief summary.

2. Equivalence Problem and Its Linearization Technique

In this section, we give the equivalent optimization problem EOP of the problem GLMP, establish some properties of its objective function, and then explain the linearization technique for the equivalent problem.

2.1. Equivalent Problems and Their Properties

To solve the problem GLMP, the definition of a global approximation solution is given below.

Definition 1. Let be a global optimal solution of the problem GLMP for a given precision . If satisfies , then is referred to as a global approximation solution of the problem GLMP.
To obtain the global approximation solution for GLMP, let , .

Theorem 1. For each , let , , , . Then, for each , let ; then, with .

Proof. It is easy to see that for any , we have ; thus,
Therefore, , and the conclusion holds.
Next, according to Theorem 1, for each , provide an upper bound for every .
On the basis of the above definitions of and , define the rectangle as follows.
Moreover, the rectangle is also called the outer space of the GLMP. Thus, by introducing the variable , the problem GLMP is equivalent to the following problem P1.
Next, the equivalence of problems GLMP and P1 is established by Theorem 2.

Theorem 2. is a global optimal solution of problem GLMP if and only if is an optimal solution of problem P1 and .

Proof. Let , where is a global optimal solution of the problem GLMP. Then it is obvious that is a feasible solution of P1. Suppose is not an optimal solution of P1; then there is at least one feasible solution of P1 such that
which contradicts the optimality of , so the hypothesis does not hold, and is an optimal solution of P1.
Conversely, suppose is an optimal solution of P1. If there is a such that , let ; then is a feasible solution of P1 and
which contradicts the optimality of , so . Suppose is not a global optimal solution of the problem GLMP; then there must be a such that . Let ; obviously, is a feasible solution of P1, so we have
which contradicts the optimality of . Therefore, is a global optimal solution of the problem GLMP, which completes the proof.
It is easy to understand from Theorem 2 that the problems GLMP and P1 are equivalent and have the same global optimal value.
Then, for a given , define the set
and the function
Then, the problem P1 is equivalent to the following equivalent optimization problem.

Theorem 3. is the global optimal solution of the problem EOP if and only if is the optimal solution of P1 and .

Proof. Suppose is an optimal solution of P1; then, by Theorem 2, we know and . In addition, . Suppose is not a global optimal solution of the problem EOP; then there must be a such that and , and hence there must also be a such that . Then is a feasible solution of P1 with , which contradicts the optimality of ; so the hypothesis does not hold, and is a global optimal solution of the problem EOP.
On the other hand, if is a global optimal solution of the problem EOP, then , and there must be a such that is a feasible solution of P1. Suppose is not a global optimal solution of the problem P1; then there must be an optimal solution of the problem P1 such that , so and , which contradicts the fact that is a global optimal solution of the problem EOP. Therefore, is a global optimal solution of P1, and can be obtained from Theorem 2, which completes the proof.
By Theorem 3, the problems EOP and P1 have the same global optimal value; combined with Theorem 2, the problems EOP and GLMP are therefore also equivalent. Hence, we can solve the equivalent problem EOP instead of addressing the problem GLMP directly.
Next, we consider the following linear programming problem:
If , the optimal solution of the problem is denoted by , and we let , ; then,
Furthermore, according to Jensen's inequality, we have
and then

Theorem 4. Suppose is a global optimal solution of the original problem GLMP; let ; then, and is also a global optimal solution of the problem .

Proof. Firstly, according to Theorems 2 and 3, we know that is a global optimal solution of the problem EOP. Then, by using formula (14) and the optimality of the global optimal solution of the EOP, we can see that is an optimal solution of the problem .
Next, the properties of the function over are given by Theorem 5.

Theorem 5. For a given precision , let ; then, for any , there is

In addition, if , the optimal solution to the problem is recorded as ; then, let ; there is also

Proof. For all , according to the definition of and , one can know .
If , then for any , we have ; obviously, and for each . Thus,
Moreover, according to the definition of the function , ; thus,
Combining formulas (17) and (18), we have
Further, from formula (19) and the definition of , formula (16) holds, and formula (15) is of course also true.
If , it is clear that the inequality is established.
For all , if , we have and ; then,
Besides,
Using the definition of and formulas (20) and (21), one can infer that formulas (15) and (16) hold.
If and , then formulas (15) and (16) obviously hold.
If , the problem is unsolvable, and for any , we have and hence , so formula (15) clearly holds, which completes the proof.
Theorem 5 shows that, for any , we can determine whether is nonempty by solving the linear programming problem and then determine whether formula (16) holds.

2.2. Linearization Techniques

The objective function of the problem EOP is still nonconvex, as in the problem GLMP, but the variable of the objective function lies in a space of dimensions. Therefore, based on the above discussion, to solve the EOP for a given , we first split the outer space along each dimension at a ratio of , producing a number of small rectangles.

To do this, let
where denotes the set of non-negative integers. The number of these small rectangles is finite, and the set of all their vertices is
where . Obviously, for each , there must be a vertex such that . It follows that the rectangle can be approximated by the set .
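Since the exact subdivision formulas (22) and (23) are elided in this extract, the following is only a sketch under the assumption that each dimension of the outer space is split geometrically, with ratio 1 + ε between consecutive breakpoints, starting from a positive lower bound (the names `geometric_grid`, `l`, and `u` are illustrative, not the paper's notation):

```python
import itertools

def geometric_grid(l, u, eps):
    """Vertices of a nonuniform grid on the rectangle [l, u]:
    along dimension j the breakpoints are l[j]*(1+eps)**k, so the
    ratio of two consecutive segments is 1 + eps (an assumption;
    the paper's exact rule from formulas (22)-(23) is elided here)."""
    axes = []
    for lj, uj in zip(l, u):
        pts, t = [], lj
        while t < uj:
            pts.append(t)
            t *= 1.0 + eps
        pts.append(t)  # first breakpoint >= uj closes the cover
        axes.append(pts)
    return list(itertools.product(*axes))

grid = geometric_grid([1.0, 2.0], [3.0, 4.0], 0.5)
```

The construction requires positive lower bounds l_j > 0, in line with the positivity assumptions the paper places on the affine factors.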

Next, by using the set , the process of solving the problem EOP can be transformed into solving a series of subproblems. To this end, for each , we need to consider the value of , that is, we need to determine whether the set is nonempty. According to Theorem 5, this can be done by solving the linear programming problem . Therefore, for each vertex , the following linear programming subproblem is solved:

On the basis of the conclusion of Theorem 5, if the problem is solvable (its solution denoted by ), then
and thus

3. Analysis of Algorithm and Its Computational Complexity

This section presents an approximation algorithm based on linearization and decomposition for solving the problem EOP, after which its computational complexity is analyzed.

3.1. Approximate Algorithm

To solve the EOP, we subdivide the outer space into a finite number of small rectangles with ratio and collect all the vertices of these small rectangles in the set .

Then, for each vertex , we solve the linear programming problem . If is feasible with optimal solution , then , and from we can obtain a feasible solution (formula (25)) of the EOP such that

If there is a that satisfies , then
and thus is a global approximation solution of the problem GLMP. The specific steps of the algorithm are as follows.
(1)Step 0 (initialization). Set , . By using formulas (22) and (23), the ratio of two consecutive segments in each dimension is , which subdivides into smaller rectangles. Denote the vertex of each small rectangle by , stored in the set .
(2)Step 1. Select a point from , solve the linear programming problem , and let .
(3)Step 2. If the problem is solvable, then , and let ; if , let , , ; if , set and go to Step 1; otherwise, the algorithm terminates; let
and then and are global approximation solutions of problems GLMP and EOP, respectively.
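As a hedged illustration of the steps above, the sketch below scans the grid vertices, solves a subproblem at each, and keeps the best feasible vertex. The linear subproblem is replaced by a closed-form stand-in (`solve_subproblem`) on a toy one-variable instance, minimizing (x+1)(x+2) over [0, 2], so the code runs without an LP solver; all names are illustrative and not the paper's notation.

```python
import math
from itertools import product

def algorithm1(vertices, solve_subproblem, h):
    """Sketch of Algorithm 1: scan every grid vertex t, solve the
    subproblem at t, and keep the feasible vertex with the smallest
    objective value h(t)."""
    best_t, best_x, best_val = None, None, math.inf
    for t in vertices:
        x = solve_subproblem(t)  # None when the subproblem is infeasible
        if x is not None and h(t) < best_val:
            best_t, best_x, best_val = t, x, h(t)
    return best_t, best_x, best_val

# Toy instance (illustrative, not from the paper): minimize
# (x + 1)(x + 2) over x in [0, 2]; outer space [1, 3] x [2, 4].
def solve_subproblem(t):
    # Feasibility check: is there x in [0, 2] with x+1 <= t[0], x+2 <= t[1]?
    xmax = min(t[0] - 1.0, t[1] - 2.0)
    return 0.0 if xmax >= 0.0 else None  # any feasible point will do

eps = 0.1
axes = []  # geometric grid with per-segment ratio 1+eps (assumed rule)
for lj, uj in ([1.0, 3.0], [2.0, 4.0]):
    pts, s = [], lj
    while s < uj:
        pts.append(s)
        s *= 1.0 + eps
    pts.append(s)
    axes.append(pts)

t, x, val = algorithm1(product(*axes), solve_subproblem, lambda t: t[0] * t[1])
```

Here the exact optimum (x + 1)(x + 2) = 2 at x = 0 is recovered because the vertex (1, 2) lies on the grid; in general the returned value is only within the grid's approximation factor of the optimum.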

Theorem 6. For a given precision , let , , and be an optimal solution of the linear programming problem . Then, Algorithm 1 will get a global approximation solution for problem GLMP, i.e.,where is the global optimal solution to the original problem GLMP.

Proof. Let
According to Theorem 1, we have
Then, formula (32) implies that , so there must be a which makes
So, applying Theorem 5 to the small rectangle , there will be
Thus,
Noting that , we can know
Since is the optimal solution of the linear programming problem , let
Apparently, . So, by taking advantage of formula (16) in Theorem 5, we have
Therefore, by combining formulas (35) and (38) with , we can obtain
and this proof is completed.

Remark 1. According to Theorem 6, if , then from Theorem 5, the optimal solution of the linear programming problem is exactly the global optimal solution of the original problem GLMP.
By Theorem 6, for a given precision , Algorithm 1 obtains a global approximation solution of the problem GLMP. Moreover, Remark 1 shows that if , then Algorithm 1 finds an exact global optimal solution of the problem GLMP.

3.2. Accelerating Techniques

Algorithm 1 shows that, for every , the linear programming problem must be solved in order to verify that is nonempty. Hence, the computational cost of Algorithm 1 depends on the number of points in the set . The acceleration technique proposed below discards points of the set that need not be considered and retains only the region containing the global optimal solution of the problem EOP. The detailed process is given below.

If is the best known solution of the problem EOP and is the optimal solution of the linear programming problem , then for each let ; obviously, , so may be a better solution than . Using , more vertices of that need not be explored may be removed. To state the acceleration technique for Algorithm 1, we first specify a necessary condition that the points of each subrectangle containing the global optimal solution of the problem EOP must satisfy, that is,
where , , , . Similarly, if is used to segment the rectangle along each dimension, a finite number of small rectangles is produced. For this purpose, let

Then, a set of vertices of finitely many small rectangles is also generated on the rectangle , that is,
where . Clearly, and .

Based on the above discussion, we will give Propositions 1 and 2 to clarify the acceleration techniques of the algorithm.

Proposition 1. The global optimal solution of the problem EOP cannot be attained on the set if there exists a such that , where

Proof. If , then there must be , and thus
which contradicts the inequality chain (40), so the conclusion holds.
Using Proposition 1, we generate a new rectangle and vertex set ; i.e., for each , let
as well as
Then, with .
Moreover, the above rules may produce a vertex set with relatively few new elements, but still ; we therefore give Proposition 2 to delete further elements of that need not be considered.

Proposition 2. Suppose is the best known solution of the problem EOP and is the optimal solution of the linear programming problem ; for each , let , and define the set

Then, for any , the EOP cannot yield a better solution than .

Proof. Since is the optimal solution of the linear programming problem , there is at least one point in the set , so . For arbitrary , obviously , and thus . According to the definition of the function , for each , the objective value of the EOP satisfies
and the conclusion is proved.
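A minimal sketch of the Proposition 2 filter: once a feasible solution with a known objective value is at hand, any grid vertex whose objective value is already no smaller cannot lead to a better EOP solution and can be discarded. Here the objective at a vertex is assumed (for illustration) to be the product of its coordinates; `prune_vertices` and the sample data are illustrative names, not the paper's notation.

```python
def prune_vertices(vertices, h, best_val):
    """Keep only grid vertices that could still improve on the best
    known objective value (Proposition 2 rule: a vertex t with
    h(t) >= best_val cannot yield a better EOP solution)."""
    return [t for t in vertices if h(t) < best_val]

verts = [(1.0, 2.0), (1.5, 2.0), (2.0, 3.0)]
kept = prune_vertices(verts, lambda t: t[0] * t[1], 3.5)
# (2.0, 3.0) has objective value 6.0 >= 3.5 and is discarded
```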
Next, for a given , , using Proposition 2, let
Through the expression of in (46), the set is defined as follows.
For ease of exposition, let . This means that, to obtain a global approximation solution of problem EOP, it is only necessary to solve at most linear programming subproblems to determine whether is nonempty, which determines the function value at each vertex . Then, by using the set , the computational efficiency of Algorithm 1 is improved, leading to the following algorithm.
(1)Step 0 (initialization). Set . By using formulas (22) and (23), is subdivided into smaller rectangles such that the ratio of two consecutive segments in each dimension is . Denote the vertex of each small rectangle by , stored in the set . Let , , , , .
(2)Step 1. Select a point from , solve the linear programming problem , and let .
(3)Step 2. If the problem is solvable, then , and let ; if , let , , , . Use rules (45) and (46) to produce and , and use formulas (49) and (50) to obtain the set ; let , . If , set and go to Step 1; otherwise, the algorithm terminates; let
and then and are global approximation solutions of the problems GLMP and EOP, respectively.
Note that Algorithm 2 simply removes vertices that cannot yield a global optimal solution; therefore, similarly to Theorem 6, Algorithm 2 also returns a global approximation solution of the problems GLMP and EOP.

4. Analysis of Computational Complexity of the Algorithm

We first give Lemma 1 to discuss the computational complexity of the two algorithms.

Lemma 1 (see [22]). Let be the maximum of the absolute values of all the elements in problem GLMP; then each component of any vertex of can be expressed as , where , , .

Since, for each , the solution of the linear programming problem is a vertex of X, Lemma 1 gives , where , , . Thus, . Moreover, let
and, for the smooth statement of Theorem 7, is defined as in Theorem 1.

Theorem 7. For a given , in order to obtain a global approximation solution of the problem GLMP, an upper bound on the time required by the proposed Algorithm 1 is
where , , and denotes an upper bound on the time needed to solve a linear programming problem with linear constraints and variables.

Proof. From formulas (22) and (23), we can see that the maximum number of points of the set is
Using the definition of in formula (52) and Lemma 1, we have
Furthermore, by formula (53) and the above inequality (56), we also have
Of course, according to the definitions of and in Theorem 1, and in conjunction with , there will be
By means of the above formulas (56) and (58), we have
and thus
Using in Algorithm 1 and , then there will be
Then, by using the above formulas (55), (60), and (61), the upper bound on the number (denoted ) of points of is
From the above formula (62), we can see that the running time of Algorithm 1 is at most
when the global approximation solution is obtained, and then the proof of the conclusion is completed.
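The vertex count that drives this bound can be sketched as follows, again assuming a geometric per-dimension subdivision with segment ratio 1 + ε (the paper's exact formulas (22) and (23) are elided in this extract): dimension j contributes on the order of log(u_j/l_j)/log(1 + ε) breakpoints, and the total vertex count is the product over dimensions, which is polynomial in 1/ε for a fixed number of linear factors.

```python
import math

def grid_size_bound(l, u, eps):
    """Upper bound on the vertex count of a geometric grid with
    segment ratio 1 + eps on the rectangle [l, u] (assumed rule):
    each dimension has at most floor(log(u_j/l_j)/log(1+eps)) + 2
    breakpoints, including the closing one past u_j."""
    n = 1
    for lj, uj in zip(l, u):
        n *= math.floor(math.log(uj / lj) / math.log(1.0 + eps)) + 2
    return n
```

For the rectangle [1, 3] x [2, 4] with ε = 0.5 this evaluates to 12, matching the grid built by the earlier construction.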

Remark 2. Propositions 1 and 2 show that Algorithm 1 can be accelerated by removing the vertices of small rectangles that need not be considered, which leads to Algorithm 2, more resource-efficient than Algorithm 1; in other words, Algorithm 2 is an improvement of Algorithm 1. The upper bound on the CPU time required by Algorithm 2 coincides with that of Algorithm 1 only in the worst case, when the acceleration technique never removes any vertex. Therefore, Algorithm 2 is likewise a polynomial time approximation algorithm.

5. Numerical Experiments

This section tests the performance of the algorithms on several test problems. All tests were performed in MATLAB (2012a) on a computer with an Intel(R) Core(TM) i5-2320 3.00 GHz processor, 4.00 GB of memory, and the Microsoft Windows 7 operating system.

Problem 1. (see [17, 18])

Problem 2. (see [17, 18])

Problem 3. (see [8, 17, 18])

Problem 4. (see [20])

Problem 5. (see [19])
where
Obviously, Problem 5 can be transformed into the following form:

Problem 6. (see [8])

Problem 7. Here , are pseudo-random numbers in [0, 1]; are pseudo-random numbers in [0.00001, 1]; ; the constraint matrix elements are generated in [−1, 1] via , in which are pseudo-random numbers in [0, 1]; and the right-hand-side values are generated via , in which are pseudo-random numbers in [0, 1].
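Since the generating formulas of Problem 7 are elided in this extract, the sketch below is only an illustrative stand-in: it follows the stated ranges for the random coefficients, while the right-hand-side rule (marked in the comments) is a hypothetical choice that keeps the origin feasible; all names are illustrative.

```python
import random

def random_instance(m, n, p, seed=0):
    """Illustrative generator in the spirit of Problem 7. The paper's
    exact right-hand-side formula is elided in this text, so b below is
    a hypothetical rule chosen only to keep x = 0 strictly feasible."""
    rng = random.Random(seed)
    c = [[rng.random() for _ in range(n)] for _ in range(p)]   # in [0, 1]
    d = [rng.uniform(0.00001, 1.0) for _ in range(p)]          # in [0.00001, 1]
    A = [[2.0 * rng.random() - 1.0 for _ in range(n)] for _ in range(m)]  # in [-1, 1]
    b = [sum(abs(a) for a in row) + rng.random() for row in A]  # hypothetical rule
    return c, d, A, b

c, d, A, b = random_instance(m=5, n=3, p=2)
```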
The numerical results in Tables 1 and 2 show that Algorithms 1 and 2 can effectively solve the test problems known from the literature and obtain approximate solutions, so both algorithms are feasible.
Further, we carry out random numerical experiments on Problem 7 to explore the performance of the two algorithms. The convergence tolerance is set to 0.05. For each set of fixed parameters , we run each algorithm 10 times for numerical comparison, and the numerical results are given in Table 3. In Table 3, Avg (Std) Time and Avg (Std) Iter denote the average (standard deviation) of the CPU time and of the number of iterations, respectively, over the 10 runs. Table 3 shows that Algorithm 2 outperforms Algorithm 1, mainly because our acceleration technique plays a significant role by deleting the vertices of small rectangles that need not be considered. Hence, we believe this acceleration technique may be generalized to other approximation algorithms such as those of [17, 18, 20].
Moreover, with the fixed parameters unchanged, the CPU time of both algorithms grows with the scale of Problem 7, and the CPU time and number of iterations of both algorithms grow with the number () of linear functions in the objective of Problem 7.

6. Concluding Remarks

In this paper, we propose two polynomial time approximation algorithms for solving the problem GLMP globally, where Algorithm 2 is obtained by speeding up Algorithm 1 with the proposed acceleration technique. The numerical results show that both algorithms are effective and feasible, and that the overall computational performance of Algorithm 2 is better than that of Algorithm 1, which shows that our acceleration technique is efficient and may be extended to other approximation algorithms such as those of [17, 18, 20].

Data Availability

All data and models generated or used during the study are described in Section 5 of this article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (grant no. 11961001), the Construction Project of First-Class Subjects in Ningxia Higher Education (NXYLXK2017B09), and the Major Proprietary Funded Project of North Minzu University (ZDZX201901).