Introduction

In this paper, we consider solving the unconstrained optimization problem

$$\begin{aligned} {\text {min}}\ f(x), \end{aligned}$$
(1)

where \(x \in {\mathbb {R}}^n\) is an n-dimensional real vector and \(f:{\mathbb {R}}^n \rightarrow {\mathbb {R}}\) is a smooth function, using a nonlinear conjugate gradient method. Such optimization problems arise naturally in many scientific and operational applications (see e.g. [12, 19,20,21,22, 35, 36], among others).

To solve problem (1), a nonlinear conjugate gradient method starts with an initial guess, \(x_{0}\in {\mathbb {R}}^n\), and generates a sequence \(\{x_{k}\}_{k=0}^{\infty }\) using the recurrence

$$\begin{aligned} x_{k+1}=x_{k}+\alpha _{k}d_{k}, \end{aligned}$$
(2)

where the step size \(\alpha _{k}\) is a positive parameter and \(d_{k}\) is the search direction defined by

$$\begin{aligned} d_{k} = \left\{ \begin{array}{ll} -g_{k}, &{}\quad {\text {if}}\ \quad k=0, \\ -g_{k}+\beta _{k}d_{k-1}, &{}\quad {\text {if}}\ \quad k > 0. \end{array} \right. \end{aligned}$$
(3)

The scalar \(\beta _{k}\) is the conjugate gradient update coefficient and \(g_{k}=\nabla f(x_{k})\) is the gradient of f at \(x_{k}.\) To compute the step size \(\alpha _{k},\) inexpensive line searches such as the weak Wolfe line search

$$\begin{aligned} \left\{ \begin{array}{ll} f(x_{k}+\alpha _{k}d_{k})\le f(x_{k})+\delta \alpha _{k}g_{k}^{T}d_{k} \\ g_{k+1}^{T}d_{k}\ge \sigma g_{k}^{T}d_{k}, \end{array} \right. \end{aligned}$$
(4)

the strong Wolfe line search

$$\begin{aligned} \left\{ \begin{array}{ll} f(x_{k}+\alpha _{k}d_{k})\le f(x_{k})+\delta \alpha _{k}g_{k}^{T}d_{k} \\ |g_{k+1}^{T}d_{k}|\le \sigma |g_{k}^{T}d_{k}|, \end{array} \right. \end{aligned}$$
(5)

or the generalized Wolfe conditions

$$\begin{aligned} \left\{ \begin{array}{ll} f(x_{k}+\alpha _{k}d_{k})\le f(x_{k})+\delta \alpha _{k}g_{k}^{T}d_{k} \\ \sigma g_{k}^{T}d_{k}\le g_{k+1}^{T}d_{k} \le -\sigma _{1}g_{k}^{T}d_{k}, \end{array} \right. \end{aligned}$$
(6)

where \(0< \delta< \sigma <1\) and \(\sigma _{1}\ge 0\) are constants, are often used. Conjugate gradient methods generally differ in the choice of the coefficient \(\beta _{k}.\) Well-known formulas for \(\beta _{k}\) can be divided into two categories. The first category includes Fletcher and Reeves (FR) [11], Dai and Yuan (DY) [6] and Conjugate Descent (CD) [10]:

$$\begin{aligned} \beta _{k}^{FR}=\frac{\Vert g_{k}\Vert ^{2}}{\Vert g_{k-1}\Vert ^{2}}, \ \ \beta _{k}^{DY}=\frac{\Vert g_{k}\Vert ^{2}}{d_{k-1}^{T}y_{k-1}},\ \ \beta _{k}^{CD}=-\frac{\Vert g_{k}\Vert ^{2}}{d_{k-1}^{T}g_{k-1}}, \end{aligned}$$

where \(\Vert \cdot \Vert\) denotes the Euclidean norm and \(y_{k-1}=g_{k}-g_{k-1}.\) These methods have strong convergence properties. However, since they are very often susceptible to jamming, they tend to have poor numerical performance. The other category includes Hestenes and Stiefel (HS) [16], Polak-Ribi\(\grave{\text {e}}\)re-Polyak (PRP) [28, 29] and Liu and Storey (LS) [26]:

$$\begin{aligned} \beta _{k}^{HS}=\frac{g_{k}^{T}y_{k-1}}{d_{k-1}^{T}y_{k-1}}, \ \ \beta _{k}^{PRP}=\frac{g_{k}^{T}y_{k-1}}{\Vert g_{k-1}\Vert ^{2}},\ \ \beta _{k}^{LS}=-\frac{g_{k}^{T}y_{k-1}}{d_{k-1}^{T}g_{k-1}}. \end{aligned}$$

Although these methods may fail to converge, they have an in-built automatic restart feature which helps them avoid jamming and hence makes them numerically efficient [5].
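For concreteness, the six classical coefficients above translate directly into code. The following sketch is ours (Python with NumPy; the function name and the convention of passing \(g_{k}\), \(g_{k-1}\) and \(d_{k-1}\) as 1-D arrays are assumptions for illustration only):

```python
import numpy as np

def classical_betas(g_k, g_km1, d_km1):
    """Classical CG update coefficients for 1-D NumPy arrays g_k, g_km1, d_km1."""
    y_km1 = g_k - g_km1                      # y_{k-1} = g_k - g_{k-1}
    gk2 = g_k @ g_k                          # ||g_k||^2
    return {
        "FR":  gk2 / (g_km1 @ g_km1),
        "DY":  gk2 / (d_km1 @ y_km1),
        "CD": -gk2 / (d_km1 @ g_km1),
        "HS": (g_k @ y_km1) / (d_km1 @ y_km1),
        "PRP": (g_k @ y_km1) / (g_km1 @ g_km1),
        "LS": -(g_k @ y_km1) / (d_km1 @ g_km1),
    }
```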

In view of the drawbacks and advantages stated above, many researchers have proposed hybrid conjugate gradient methods that combine different \(\beta _{k}\) coefficients so as to limit the drawbacks and exploit the advantages of the respective original conjugate gradient methods. For instance, Touati-Ahmed and Storey [31] suggested one of the first hybrid methods, in which the coefficient \(\beta _{k}\) is given by

$$\begin{aligned} \beta _{k}^{TS} = \left\{ \begin{array}{ll} \beta _{k}^{PRP}, &{}\quad {\text {if}}\ \quad 0\le \beta _{k}^{PRP} \le \beta _{k}^{FR}, \\ \beta _{k}^{FR}, &{}\quad {\text {otherwise}}. \end{array} \right. \end{aligned}$$

The authors proved that \(\beta _{k}^{TS}\) has good convergence properties and numerically outperforms both the \(\beta _{k}^{FR}\) and \(\beta _{k}^{PRP}\) methods. Alhawarat et al. [3] introduced a hybrid conjugate gradient method in which the conjugate gradient update coefficient is computed as

$$\begin{aligned} \beta _{k}^{AZPRP} = \left\{ \begin{array}{ll} \displaystyle {\frac{\Vert g_{k}\Vert ^{2}-g_{k}^{T}g_{k-1}}{\Vert g_{k-1}\Vert ^{2}}} , &{}\quad {\text {if}}\ \Vert g_{k}\Vert ^{2}> |g_{k}^{T}g_{k-1}|, \\ \displaystyle {\frac{\Vert g_{k}\Vert ^{2}-\mu _{k}|g_{k}^{T}g_{k-1}|}{\Vert g_{k-1}\Vert ^{2}}} , &{}\quad {\text {if}}\ \Vert g_{k}\Vert ^{2}> \mu _{k}|g_{k}^{T}g_{k-1}|, \\ 0, &{}\quad {\text {otherwise}}, \end{array} \right. \end{aligned}$$

where \(\mu _{k}\) is defined as

$$\begin{aligned} \mu _{k}=\frac{\Vert x_{k}-x_{k-1}\Vert }{\Vert y_{k-1}\Vert }. \end{aligned}$$

The authors proved that the method is globally convergent when the weak Wolfe line search is employed. Moreover, numerical results demonstrate that it outperforms both the CG-Descent 6.8 [14] and CG-Descent 5.3 [13] methods on a number of benchmark test problems.
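As an illustration, \(\beta _{k}^{AZPRP}\) and \(\mu _{k}\) can be transcribed as the short routine below (a sketch; the function name and the convention of passing the iterates and gradients as NumPy arrays are our own, and the cases are tested in the order listed above):

```python
import numpy as np

def beta_azprp(g_k, g_km1, x_k, x_km1):
    """beta_k^{AZPRP}; the two conditions are checked in the order given above."""
    y_km1 = g_k - g_km1
    mu_k = np.linalg.norm(x_k - x_km1) / np.linalg.norm(y_km1)
    gk2, inner = g_k @ g_k, abs(g_k @ g_km1)
    if gk2 > inner:
        return (gk2 - g_k @ g_km1) / (g_km1 @ g_km1)
    if gk2 > mu_k * inner:
        return (gk2 - mu_k * inner) / (g_km1 @ g_km1)
    return 0.0
```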

In [5], Babaie-Kafaki gave a quadratic hybridization of \(\beta _{k}^{FR}\) and \(\beta _{k}^{PRP}\), where

$$\begin{aligned} \beta _{k}^{HQ\pm } = \left\{ \begin{array}{ll} \beta _{k}^{+}(\theta ^{\pm }_{k}), &{}\quad \theta ^{\pm }_{k} \in [-1,1] , \\ \max (0,\beta _{k}^{PRP}), &{}\quad \theta ^{\pm }_{k} \in {\mathbb {C}}, \\ -\beta _{k}^{FR}, &{}\quad \theta ^{\pm }_{k} < -1, \\ \beta _{k}^{FR}, &{}\quad \theta ^{\pm }_{k} > 1, \end{array} \right. \end{aligned}$$

and the hybridization parameter \(\theta ^{\pm }_{k}\) is taken from the roots of the quadratic equation

$$\begin{aligned} \theta _{k}^{2}\beta _{k}^{PRP}-\theta _{k}\beta _{k}^{FR} +\beta _{k}^{HS}-\beta _{k}^{PRP}=0, \end{aligned}$$

that is

$$\begin{aligned} \theta ^{\pm }_{k}=\frac{\beta _{k}^{FR}\pm \sqrt{(\beta _{k}^{FR})^{2}-4\beta _{k}^{PRP}(\beta _{k}^{HS} -\beta _{k}^{PRP})}}{2\beta _{k}^{PRP}}, \end{aligned}$$

and

$$\begin{aligned} \beta _{k}^{+}(\theta _{k})=\max (0,\beta _{k}^{PRP})(1-\theta _{k}^{2}) +\theta _{k}\beta _{k}^{FR},\ \ \ \theta _{k}\in [-1,1]. \end{aligned}$$

Thus, the author suggested two methods \(\beta _{k}^{HQ+}\) and \(\beta _{k}^{HQ-}\), corresponding to \(\theta ^{+}\) and \(\theta ^{-},\) respectively, with numerical results showing \(\beta _{k}^{HQ-}\) to be more efficient than \(\beta _{k}^{HQ+}\).

More recently, Salih et al. [30] presented another hybrid conjugate gradient method defined by

$$\begin{aligned} \beta _{k}^{YHM} = \left\{ \begin{array}{ll} \displaystyle {\frac{g^{T}_{k}(g_{k}-g_{k-1})}{\Vert g_{k-1}\Vert ^{2}}}, &{}\quad {\text {if}}\ 0\le g_{k}^{T}g_{k-1}\le \Vert g_{k}\Vert ^{2}, \\ \displaystyle {\frac{g^{T}_{k}(g_{k}-\frac{\Vert g_{k}\Vert }{\Vert g_{k-1}\Vert }g_{k-1})}{\Vert g_{k-1}\Vert ^{2}}}, &{}\quad {\text {otherwise}}. \end{array} \right. \end{aligned}$$

The authors showed that the \(\beta _{k}^{YHM}\) method satisfies the sufficient descent condition and is globally convergent under the strong Wolfe line search. In 2019, Faramarzi and Amini [9] introduced a spectral conjugate gradient method defined as

$$\begin{aligned} \beta _{k}^{ZDK} =\frac{g_{k}^{T} {z}_{k-1}}{d_{k-1}^{T} {z}_{k-1}}- \frac{\Vert {z}_{k-1}\Vert ^{2}}{d_{k-1}^{T}{z}_{k-1}} \frac{g_{k}^{T}d_{k-1}}{d_{k-1}^{T}{z}_{k-1}}, \end{aligned}$$

with the spectral search direction given as

$$\begin{aligned} d_{k}=-\theta _{k}g_{k}+\beta _{k}^{ZDK}d_{k-1},\ d_{0}=-g_{0}, \end{aligned}$$

where

$$\begin{aligned} z_{k-1} = y_{k-1}+h_k\Vert g_{k-1}\Vert ^{r}s_{k-1},\quad s_{k-1}=x_k-x_{k-1} \quad {\text {and}}\quad h_k = D +\max \left\{ -\frac{s_{k-1}^Ty_{k-1}}{\Vert s_{k-1}\Vert ^2},0\right\} \Vert g_{k-1}\Vert ^{-r}, \end{aligned}$$

with r and D being positive constants. The authors suggested computing the spectral parameter \(\theta _{k}\) as

$$\begin{aligned} \theta _{k} = \left\{ \begin{array}{ll} \theta _{k}^{N+}, &{}\quad {\text {if}}\ \theta _{k}^{N+} \in [\frac{1}{4}+\eta ,\tau ], \\ 1, &{}\quad {\text {otherwise}} \end{array} \right. \end{aligned}$$

or

$$\begin{aligned} \theta _{k} = \left\{ \begin{array}{ll} \theta _{k}^{N-}, &{}\quad {\text {if}}\ \theta _{k}^{N-} \in [\frac{1}{4}+\eta ,\tau ], \\ 1, &{}\quad {\text {otherwise}}, \end{array} \right. \end{aligned}$$

where \(\eta\) and \(\tau\) are constants such that \(\frac{1}{4}+\eta \le \theta _{k}\le \tau ,\)

$$\begin{aligned} \theta _{k}^{N-}=1- \frac{\Vert {z}_{k-1}\Vert ^{2}d_{k-1}^{T}g_{k}}{(d_{k-1}^{T}{z}_{k-1})({z}_{k-1}^{T}g_{k})} \end{aligned}$$

and

$$\begin{aligned} \theta _{k}^{N+}=1-\frac{1}{{z}^{T}_{k-1}g_{k}} \bigg (\frac{\Vert {z}_{k-1}\Vert ^{2}d_{k-1}^{T}g_{k}}{d_{k-1}^{T}{z}_{k-1}} -s_{k-1}^{T}g_{k}\bigg ). \end{aligned}$$

Convergence of this method is established under the strong Wolfe conditions. For more conjugate gradient methods, the reader is referred to the works of [1, 2, 8, 15, 17, 18, 23,24,25, 27, 32, 33].

In this paper, we suggest another new hybrid conjugate gradient method that inherits the good computational efficiency of the \(\beta _{k}^{LS}\) and \(\beta _{k}^{HS}\) methods as well as the nice convergence properties of the \(\beta _{k}^{DY}\) and \(\beta _{k}^{CD}\) methods. The proposed method is presented in the next section, and the rest of the paper is structured as follows. In Sect. 3, we show that the proposed method satisfies the descent condition for any line search and also present its global convergence analysis under the strong Wolfe line search. A numerical comparison based on the performance profiles of Dolan and Moré [7] and the conclusion are presented in Sects. 4 and 5, respectively.

A new hybrid conjugate gradient method

In [32], a variant of the \(\beta _k^{PRP}\) method is proposed, where the coefficient \(\beta _{k}\) is computed as

$$\begin{aligned} \beta _{k}^{WYL}=\frac{\Vert g_{k}\Vert ^{2}-\frac{\Vert g_{k}\Vert }{\Vert g_{k-1}\Vert }g_{k}^{T}g_{k-1}}{\Vert g_{k-1}\Vert ^{2}}. \end{aligned}$$

This method inherits the good numerical performance of the PRP method. Moreover, Huang et al. [17] proved that the \(\beta _k^{WYL}\) method satisfies the sufficient descent property and established that the method is globally convergent under the strong Wolfe line search if the parameter \(\sigma\) in (5) satisfies \(\sigma < \frac{1}{4}.\) Yao et al. [34] extended this idea to the \(\beta _k^{HS}\) method and proposed the update

$$\begin{aligned} \beta _{k}^{YWH}=\frac{\Vert g_{k}\Vert ^{2}-\frac{\Vert g_{k}\Vert }{\Vert g_{k-1}\Vert } g_{k}^{T}g_{k-1}}{d_{k-1}^{T}(g_{k}-g_{k-1})}. \end{aligned}$$

The authors proved that the method is globally convergent under the strong Wolfe line search with the parameter \(\sigma <\frac{1}{3}\). In Jian et al. [18], a hybrid of \(\beta _{k}^{DY}, \beta _{k}^{FR}, \beta _{k}^{WYL}\ {\text {and}}\ \beta _{k}^{YWH}\) is proposed by introducing the update

$$\begin{aligned} \beta _{k}^{N}=\frac{\Vert g_{k}\Vert ^{2} -\max \{0,\frac{\Vert g_{k}\Vert }{\Vert g_{k-1}\Vert }g_{k}^{T}g_{k-1}\}}{\max \{\Vert g_{k-1}\Vert ^{2},d_{k-1}^{T}(g_{k}-g_{k-1})\}}, \end{aligned}$$

with

$$\begin{aligned} d_{k} = \left\{ \begin{array}{ll} -g_{k}, &{}\quad {\text {if}}\ \quad k=0, \\ -g_{k}+\beta _{k}^{N}d_{k-1}, &{}\quad {\text {if}}\ \quad k > 0. \end{array} \right. \end{aligned}$$

Independent of the line search, the method generates a descent direction at every iteration. Furthermore, its global convergence is established under the weak Wolfe line search.
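The three coefficients recalled in this section can be transcribed directly; the sketch below is ours (Python with NumPy, with the same vector-argument convention as before and hypothetical function names):

```python
import numpy as np

def beta_wyl(g_k, g_km1):
    gk, gkm1 = np.linalg.norm(g_k), np.linalg.norm(g_km1)
    return (gk**2 - (gk / gkm1) * (g_k @ g_km1)) / gkm1**2

def beta_ywh(g_k, g_km1, d_km1):
    gk, gkm1 = np.linalg.norm(g_k), np.linalg.norm(g_km1)
    return (gk**2 - (gk / gkm1) * (g_k @ g_km1)) / (d_km1 @ (g_k - g_km1))

def beta_n(g_k, g_km1, d_km1):
    # numerator and denominator of beta_k^N as displayed above
    gk, gkm1 = np.linalg.norm(g_k), np.linalg.norm(g_km1)
    num = gk**2 - max(0.0, (gk / gkm1) * (g_k @ g_km1))
    den = max(gkm1**2, d_km1 @ (g_k - g_km1))
    return num / den
```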

Now, motivated by the ideas of Jian et al. [18], in this paper we suggest a hybrid conjugate gradient method that inherits the strengths of the \(\beta _{k}^{LS},\ \beta _{k}^{HS},\ \beta _{k}^{DY}\) and \(\beta _{k}^{CD}\) methods by introducing \(\beta _{k}^{PKT}\) as

$$\begin{aligned} \beta _{k}^{PKT} = \left\{ \begin{array}{ll} \displaystyle {\frac{\Vert g_{k}\Vert ^{2}-g_{k}^{T}g_{k-1}}{\max \{d_{k-1}^{T}y_{k-1},-g_{k-1}^{T}d_{k-1}\}}}, &{}\quad {\text {if}}\ 0<g_{k}^{T}g_{k-1}<\Vert g_{k}\Vert ^{2}, \\ \displaystyle {\frac{\Vert g_{k}\Vert ^{2}}{\max \{d_{k-1}^{T}y_{k-1},-g_{k-1}^{T}d_{k-1}\}}}, &{}\quad {\text {otherwise}}, \end{array} \right. \end{aligned}$$
(7)

with direction \(d_{k}\) defined as

$$\begin{aligned} d_{k} = \left\{ \begin{array}{ll} -g_{k}, &{}\quad {\text {if}}\ \quad k=0 \ {\text {or}}\ |g_{k}^{T}g_{k-1} |\ge 0.2 \Vert g_{k}\Vert ^{2}, \\ -\left( 1+\beta _{k}^{PKT}\frac{d_{k-1}^{T}g_{k}}{\Vert g_{k}\Vert ^{2}}\right) g_{k}+\beta _{k}^{PKT}d_{k-1}, &{}\quad {\text {if}}\ \quad k > 0. \end{array} \right. \end{aligned}$$
(8)

Now, with \(\beta _{k}\) and \(d_{k}\) defined as in (7) and (8), respectively, we present our hybrid conjugate gradient algorithm below.

Algorithm 1 The proposed PKT hybrid conjugate gradient method
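Since the algorithm box itself is not reproduced above, the following Python sketch is our reconstruction of the main loop from (2), (7), (8) and the stopping rule of Sect. 4. SciPy's Wolfe line search stands in for the strong Wolfe search (5), and the parameter names, defaults and the fallback step size are our own choices rather than part of the method:

```python
import numpy as np
from scipy.optimize import line_search  # Wolfe line search, used as a stand-in

def pkt_cg(f, grad, x0, tol=1e-5, max_iter=10_000, delta=1e-4, sigma=0.05):
    """Sketch of the PKT hybrid CG method built from (2), (7) and (8)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # d_0 = -g_0
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:          # stopping test of Sect. 4
            return x, k
        alpha = line_search(f, grad, x, d, gfk=g, c1=delta, c2=sigma)[0]
        if alpha is None:                     # line search failed: small safeguard step
            alpha = 1e-4
        x_new = x + alpha * d                 # iteration (2)
        g_new = grad(x_new)
        y = g_new - g
        denom = max(d @ y, -(g @ d))          # denominator of (7)
        gg = g_new @ g                        # g_k^T g_{k-1} at the new iterate
        if 0.0 < gg < g_new @ g_new:
            beta = (g_new @ g_new - gg) / denom
        else:
            beta = (g_new @ g_new) / denom
        # direction (8), including the restart test |g_k^T g_{k-1}| >= 0.2 ||g_k||^2
        if abs(gg) >= 0.2 * (g_new @ g_new):
            d = -g_new
        else:
            d = -(1.0 + beta * (d @ g_new) / (g_new @ g_new)) * g_new + beta * d
        x, g = x_new, g_new
    return x, max_iter
```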

Global convergence of the proposed method

The following standard assumptions, which have been used extensively in the literature, are needed to analyse the global convergence properties of our hybrid method.

Assumption 3.1

The level set

$$\begin{aligned} X=\{x\in {\mathbb {R}}^{n}:f(x)\le f(x_{0})\}, \end{aligned}$$

is bounded, where \(x_{0}\in {\mathbb {R}}^n\) is the initial guess of the iterative method (2).

Assumption 3.2

In some neighbourhood N of X, the objective function f is continuously differentiable and its gradient is Lipschitz continuous, that is, there exists a constant \(L>0\) such that

$$\begin{aligned} \Vert g(x)-g(y)\Vert \le L\Vert x-y\Vert \ {\text {for all}} \ x,y \in N. \end{aligned}$$

If Assumptions 3.1 and 3.2 hold, then there exists a positive constant \(\varsigma\) such that

$$\begin{aligned} \Vert g(x)\Vert \le \varsigma \ {\text {for all}} \ x\in N. \end{aligned}$$
(9)

Lemma 3.1

Consider the sequences \(\{g_{k}\}\) and \(\{d_{k}\}\) generated by Algorithm 1. Then, the sufficient descent condition

$$\begin{aligned} d_{k}^{T}g_{k}= -\Vert g_{k}\Vert ^{2}, \ \forall k\ge 0, \end{aligned}$$
(10)

holds.

Proof

If \(k=0\) or \(|g_{k}^{T}g_{k-1}|\ge 0.2 \Vert g_{k}\Vert ^{2},\) then the search direction \(d_{k}\) is given by

$$\begin{aligned} d_{k} = - g_{k}. \end{aligned}$$

This gives

$$\begin{aligned} g^{T}_{k}d_{k} = -\Vert g_{k}\Vert ^{2}. \end{aligned}$$

Otherwise, the search direction \(d_{k}\) is given by

$$\begin{aligned} d_{k}=-\left( 1+\beta _{k}^{PKT}\frac{d_{k-1}^{T}g_{k}}{\Vert g_{k}\Vert ^{2}} \right) g_{k}+\beta _{k}^{PKT}d_{k-1}. \end{aligned}$$
(11)

Now, if we pre-multiply Eq. (11) by \(g_{k}^T,\) we get

$$\begin{aligned} g_{k}^{T}d_{k} &= -\Vert g_{k}\Vert ^{2}\left( 1+\beta _{k}^{PKT}\frac{d_{k-1}^{T}g_{k}}{\Vert g_{k}\Vert ^{2}}\right) +\beta _{k}^{PKT}g_{k}^{T}d_{k-1} \\ &= -\Vert g_{k}\Vert ^{2}-\beta _{k}^{PKT}d_{k-1}^{T}g_{k}+\beta _{k}^{PKT}g_{k}^{T}d_{k-1} \\ &= -\Vert g_{k}\Vert ^{2}. \end{aligned}$$

Therefore, the new method satisfies the sufficient descent property (10) for all k. \(\square\)
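Because the cancellation above is purely algebraic, it can also be checked numerically; the small test below (our own, using random data) confirms that the direction (8) satisfies (10) for any value of \(\beta _{k}\):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    g_k, d_km1 = rng.standard_normal(10), rng.standard_normal(10)
    beta = rng.standard_normal()              # the identity holds for any beta_k
    d_k = -(1.0 + beta * (d_km1 @ g_k) / (g_k @ g_k)) * g_k + beta * d_km1
    assert np.isclose(g_k @ d_k, -(g_k @ g_k))   # d_k^T g_k = -||g_k||^2
```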

Lemma 3.2

Suppose that Assumptions 3.1 and 3.2 hold. Let the sequence \(\{x_{k}\}\) be generated by (2) and the search direction \(d_{k}\) be a descent direction. If \(\alpha _{k}\) is obtained by the strong Wolfe line search, then the Zoutendijk condition

$$\begin{aligned} \sum _{k\ge 0} \frac{(g_{k}^{T}d_{k})^{2}}{\Vert d_{k}\Vert ^{2}}<+\infty \end{aligned}$$
(12)

holds.

Lemma 3.3

For any \(k\ge 1,\) the relation \(0< \beta _{k}^{PKT}\le \beta _{k}^{CD}\) always holds.

Proof

From (5) and (10), it follows that

$$\begin{aligned} d_{k-1}^{T}y_{k-1}\ge (1-\sigma )\Vert g_{k-1}\Vert ^{2}, \end{aligned}$$

and since \(0<\sigma <1,\) we have

$$\begin{aligned} d_{k-1}^{T}y_{k-1}>0. \end{aligned}$$
(13)

Also, by the descent condition (10), we get

$$\begin{aligned} -g_{k-1}^{T}d_{k-1}= \Vert g_{k-1}\Vert ^{2}, \end{aligned}$$

implying

$$\begin{aligned} -g_{k-1}^{T}d_{k-1}> 0. \end{aligned}$$
(14)

Therefore, from (7), (13) and (14), it is clear that \(\beta _{k}^{PKT}>0.\) Moreover, if \(0<g_{k}^{T}g_{k-1}<\Vert g_{k}\Vert ^{2},\) then

$$\begin{aligned} \beta _{k}^{PKT} &= \frac{\Vert g_{k}\Vert ^{2}-g_{k}^{T}g_{k-1}}{\max \{d_{k-1}^{T}y_{k-1},-g_{k-1}^{T}d_{k-1}\}} \\ &< \frac{\Vert g_{k}\Vert ^{2}}{\max \{d_{k-1}^{T}y_{k-1},-g_{k-1}^{T}d_{k-1}\}}, \end{aligned}$$

and since \(\max \{d_{k-1}^{T}y_{k-1},-g_{k-1}^{T}d_{k-1}\}\ge -g_{k-1}^{T}d_{k-1},\) we have

$$\begin{aligned} \beta _{k}^{PKT}\le \frac{\Vert g_{k}\Vert ^{2}}{-g_{k-1}^{T}d_{k-1}} = \beta _{k}^{CD}. \end{aligned}$$

Hence, the lemma is proved. \(\square\)

Theorem 3.1

Suppose that Assumptions 3.1 and 3.2 hold. Consider the sequences \(\{g_{k}\}\) and \(\{d_{k}\}\) generated by Algorithm 1. Then

$$\begin{aligned} \liminf _{k\rightarrow \infty } \Vert g_{k}\Vert = 0. \end{aligned}$$
(15)

Proof

Assume that (15) does not hold. Then there exists a constant \(r>0\) such that

$$\begin{aligned} \Vert g_{k}\Vert \ge r,\ \forall k\ge 0. \end{aligned}$$
(16)

Letting \(\xi _{k}= 1+\beta _{k}^{PKT}\frac{d_{k-1}^{T}g_{k}}{\Vert g_{k}\Vert ^{2}},\) it follows from (8) that

$$\begin{aligned} d_{k}+\xi _{k}g_{k}=\beta _{k}^{PKT}d_{k-1}. \end{aligned}$$

Taking the squared Euclidean norm of both sides gives

$$\begin{aligned} \Vert d_{k}\Vert ^{2}+ 2\xi _{k}d_{k}^{T}g_{k}+\xi _{k}^{2}\Vert g_{k}\Vert ^{2} = (\beta _{k}^{PKT})^{2}\Vert d_{k-1}\Vert ^{2}, \end{aligned}$$

that is,

$$\begin{aligned} \Vert d_{k}\Vert ^{2} = (\beta _{k}^{PKT})^{2}\Vert d_{k-1}\Vert ^{2} - 2\xi _{k}d_{k}^{T}g_{k} -\xi _{k}^{2}\Vert g_{k}\Vert ^{2}. \end{aligned}$$

Now, dividing by \((g_{k}^{T}d_{k})^{2}\) and applying the descent condition \(g_{k}^{T}d_{k}= -\Vert g_{k}\Vert ^{2}\) yields

$$\begin{aligned} \frac{\Vert d_{k}\Vert ^{2}}{(g_{k}^{T}d_{k})^{2}} = (\beta _{k}^{PKT})^{2}\frac{\Vert d_{k-1}\Vert ^{2}}{(g_{k}^{T}d_{k})^{2}}+ \frac{2\xi _{k}}{\Vert g_{k}\Vert ^{2}}-\frac{\xi _{k}^{2}}{\Vert g_{k}\Vert ^{2}}. \end{aligned}$$

Since \(\beta _{k}^{PKT}\le \beta _{k}^{CD}=\frac{\Vert g_{k}\Vert ^{2}}{-g_{k-1}^{T}d_{k-1}},\) we obtain

$$\begin{aligned} \frac{\Vert d_{k}\Vert ^{2}}{(g_{k}^{T}d_{k})^{2}} &\le \left( \frac{\Vert g_{k}\Vert ^{2}}{-g_{k-1}^{T}d_{k-1}}\right) ^{2} \frac{\Vert d_{k-1}\Vert ^{2}}{(g_{k}^{T}d_{k})^{2}}+ \frac{2\xi _{k}}{\Vert g_{k}\Vert ^{2}}-\frac{\xi _{k}^{2}}{\Vert g_{k}\Vert ^{2}} \\ &= \frac{\Vert g_{k}\Vert ^{4}}{(g_{k-1}^{T}d_{k-1})^{2}} \frac{\Vert d_{k-1}\Vert ^{2}}{(g_{k}^{T}d_{k})^{2}}+ \frac{2\xi _{k}}{\Vert g_{k}\Vert ^{2}} -\frac{\xi _{k}^{2}}{\Vert g_{k}\Vert ^{2}} \\ &= \frac{\Vert d_{k-1}\Vert ^{2}}{(g_{k-1}^{T}d_{k-1})^{2}} -\frac{1}{\Vert g_{k}\Vert ^{2}}(\xi _{k}^{2}-2\xi _{k}+1-1) \\ &= \frac{\Vert d_{k-1}\Vert ^{2}}{(g_{k-1}^{T}d_{k-1})^{2}} -\frac{(\xi _{k}-1)^{2}}{\Vert g_{k}\Vert ^{2}}+\frac{1}{\Vert g_{k}\Vert ^{2}} \\ &\le \frac{\Vert d_{k-1}\Vert ^{2}}{(g_{k-1}^{T}d_{k-1})^{2}}+\frac{1}{\Vert g_{k}\Vert ^{2}}. \end{aligned}$$
(17)

Noting that

$$\begin{aligned} \frac{\Vert d_{0}\Vert ^{2}}{(g_{0}^{T}d_{0})^{2}}= \frac{1}{\Vert g_{0}\Vert ^{2}}, \end{aligned}$$

and using (17) recursively yields

$$\begin{aligned} \frac{\Vert d_{k}\Vert ^{2}}{(g_{k}^{T}d_{k})^{2}}\le \sum _{i=0}^{k}\frac{1}{\Vert g_{i}\Vert ^{2}}. \end{aligned}$$
(18)

From (16), we have

$$\begin{aligned} \sum _{i=0}^{k } \frac{1}{\Vert g_{i}\Vert ^{2}} \le \frac{k+1}{r^{2}}. \end{aligned}$$

Thus,

$$\begin{aligned} \frac{(g_{k}^{T}d_{k})^{2}}{\Vert d_{k}\Vert ^{2}}\ge \frac{r^{2}}{k+1}, \end{aligned}$$

which implies that

$$\begin{aligned} \sum _{k=0}^{\infty }\frac{(g_{k}^{T}d_{k})^{2}}{\Vert d_{k}\Vert ^{2}}\ge r^{2}\sum _{k=0}^{\infty }\frac{1}{k+1}=+\infty . \end{aligned}$$

This contradicts the Zoutendijk condition (12), which concludes the proof. \(\square\)

Numerical results

In this section, we analyse the numerical efficiency of our proposed \(\beta _k^{PKT}\) method, herein denoted PKT, by comparing its performance to that of Jian et al. [18], herein denoted N, and that of Alhawarat et al. [3], herein denoted AZPRP, on a set of 55 unconstrained test problems selected from [4]. We stop the iterations if either \(\Vert g_{k}\Vert \le 10^{-5}\) or a maximum of 10,000 iterations is reached. All the algorithms are coded in MATLAB R2019a. The authors of the N and AZPRP methods suggested that their algorithms perform best when implemented with the generalized Wolfe line search conditions (6) with the following choice of parameters: \(\sigma =0.1, \ \delta = 0.0001 \ {\text {and}}\ \sigma _{1}=1-2\delta\) for the N method and \(\sigma =0.4, \ \delta = 0.0001 \ {\text {and}}\ \sigma _{1}=0.1\) for the AZPRP method. Hence, for our numerical experiments, we set the parameters of N and AZPRP as recommended in the respective papers. For the PKT method, we implemented the strong Wolfe line search conditions with \(\delta = 0.0001 \ {\text {and}}\ \sigma =0.05.\)
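As an illustration of this setup (not a reproduction of the reported MATLAB experiments), the Python sketch from Sect. 2 can be run with the same tolerance, iteration limit and PKT line-search parameters on a standard test function; here rosen and rosen_der are SciPy's Rosenbrock function and gradient, and the starting point is our own choice:

```python
import numpy as np
from scipy.optimize import rosen, rosen_der

# assumes pkt_cg from the sketch in Sect. 2 is in scope
x_star, iters = pkt_cg(rosen, rosen_der, x0=np.full(10, -1.2),
                       tol=1e-5, max_iter=10_000, delta=1e-4, sigma=0.05)
print(iters, np.linalg.norm(rosen_der(x_star)))
```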

Numerical results are presented in Table 1, where “Function” denotes the name of the test problem, “Dim” denotes the dimension of the test problem, “NI” denotes the number of iterations, “FE” denotes the number of function evaluations, “GE” denotes the number of gradient evaluations, “CPU” denotes the CPU time in seconds and “-” means that the method failed to solve the problem within 10,000 iterations. The bolded figures show the best performer for each problem. From Table 1, we observe that the PKT and AZPRP methods successfully solved all the problems, whereas the N method failed to solve one problem within 10,000 iterations. Moreover, the numerical results in the table indicate that the new PKT method is competitive, as it is the best performer for a significant number of problems.

Table 1 Numerical results of the methods

To further illustrate the performance of the three methods, we adopted the performance profile tool proposed by Dolan and Moré [7]. This tool evaluates and compares the performance of \(n_{s}\) solvers running on a set of \(n_{p}\) problems. The comparison between the solvers is based on the performance ratio

$$\begin{aligned} r_{p,s}=\frac{f_{p,s}}{\min (f_{p,i}: 1\le i \le n_{s})}, \end{aligned}$$
(19)

where \(f_{p,s}\) denotes either the number of function (or gradient) evaluations, the number of iterations or the CPU time required by solver s to solve problem p. The overall performance of the solvers is then evaluated by the performance profile function

$$\begin{aligned} \phi _{s}(\tau ) = \frac{1}{n_{p}} {\text {size}}\{p:1\le p \le n_{p},\ \ln (r_{p,s})\le \tau \}, \end{aligned}$$
(20)

where \(\tau \ge 0.\) If solver s fails to solve a problem p, we set the ratio \(r_{p,s}\) to some sufficiently large number.
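Such a profile can be computed in a few lines; the sketch below is ours and assumes a cost matrix with one row per problem and one column per solver, with np.inf marking failures:

```python
import numpy as np

def performance_profile(costs, taus):
    """costs: (n_p, n_s) array, e.g. iteration counts; np.inf marks a failure."""
    best = np.min(costs, axis=1, keepdims=True)        # denominator of (19)
    ratios = costs / best                               # r_{p,s}
    ratios[~np.isfinite(ratios)] = 1e16                 # sufficiently large ratio
    log_r = np.log(ratios)
    # phi_s(tau): fraction of problems with ln(r_{p,s}) <= tau, as in (20)
    return np.array([[np.mean(log_r[:, s] <= t) for s in range(costs.shape[1])]
                     for t in taus])
```

Plotting each column of the returned array against the grid of \(\tau\) values yields profiles of the type shown below.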

The corresponding profiles are plotted in Figs. 1, 2, 3 and 4, where Fig. 1 shows the performance profile for the number of iterations, Fig. 2 for the number of gradient evaluations, Fig. 3 for the number of function evaluations and Fig. 4 for the CPU time. The figures illustrate that the new method outperforms the AZPRP and N conjugate gradient methods.

Fig. 1 Iterations performance profile

Fig. 2 Gradient evaluations performance profile

Fig. 3 Function evaluations performance profile

Fig. 4 CPU performance profile

Conclusion

In this paper, we developed a new hybrid conjugate gradient method that inherits the features of the famous Liu and Storey (LS), Hestenes and Stiefel (HS), Dai and Yuan (DY) and Conjugate Descent (CD) conjugate gradient methods. The global convergence of the proposed method was established under the strong Wolfe line search conditions. We compared the performance of our method with those of Jian et al. [18] and Alhawarat et al. [3] on a number of benchmark unconstrained optimization problems. An evaluation based on the performance profiles of Dolan and Moré [7] showed that the proposed method is both efficient and effective.