
A self-adaptive inertial subgradient extragradient method for pseudomonotone equilibrium and common fixed point problems

Abstract

In this paper, we introduce a self-adaptive inertial subgradient extragradient method for solving a pseudomonotone equilibrium problem and a common fixed point problem in real Hilbert spaces. The algorithm combines an inertial extrapolation process, which speeds up its rate of convergence, a monotone nonincreasing stepsize rule, and a viscosity approximation method, which guarantees its strong convergence. Moreover, a strong convergence theorem is proved for the sequence generated by the algorithm under some mild conditions and without prior knowledge of the Lipschitz-like constants of the equilibrium bifunction. We further provide some numerical examples to illustrate the performance and accuracy of our method.

1 Introduction

Muu and Oettli [40] introduced the equilibrium problem (shortly, EP) as a generalization of many problems in nonlinear analysis, including variational inequalities, convex minimization, saddle point problems, fixed point problems, and Nash equilibrium problems; see, e.g., [13, 40]. The EP also has a great impact on the development of many mathematical models arising from several branches of the pure and applied sciences, such as physics, economics, finance, image reconstruction, ecology, transportation, network elasticity, and optimization. Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let \(f:C \times C \to \mathbb{R}\) be a bifunction such that \(f(x,x) = 0\) for all \(x \in C\). The EP in the sense of Muu and Oettli [40] (see also Blum and Oettli [13]) consists in finding a point \(x^{*} \in C\) such that

$$ f \bigl(x^{*},y \bigr)\geq 0, \quad \forall y \in C. $$
(1.1)

We denote the set of solutions of the EP by \(\operatorname{EP}(f)\). Due to its importance and applications, many authors have extensively studied several iterative methods for approximating the solutions of the EP, see for example [11, 12, 18, 22, 23, 29–31, 41, 47].
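For example, if \(f(x,y) = \langle Ax, y-x \rangle \) for a mapping \(A:C \to H\), then (1.1) reduces to the classical variational inequality of finding \(x^{*} \in C\) such that \(\langle Ax^{*}, y-x^{*} \rangle \geq 0\) for all \(y \in C\).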

Recently, researchers have considered solving EP (1.1) with a pseudomonotone bifunction f, since many applied models involve pseudomonotone bifunctions. Moreover, it is important to study the approximation of common solutions of the EP and the fixed point problem (i.e., find \(x \in C\) such that \(x \in \operatorname{EP}(f)\cap F(T)\), where \(F(T) = \{x\in H: x =Tx\}\)) because several mathematical models have constraints that can be expressed as fixed point and equilibrium problems. Such models arise in many practical problems, such as signal processing, network resource allocation, and image recovery; see for instance [26, 27, 36, 37].

Tada and Takahashi [46] first proposed the following hybrid method for approximating a common solution of an EP with a monotone bifunction and a fixed point of a nonexpansive mapping T in Hilbert spaces:

$$ \textstyle\begin{cases} x_{0}\in C_{0} = Q_{0} = C, \\ z_{n}\in C \quad \text{such that } f(z_{n}, w)+\frac{1}{\lambda _{n}} \langle w-z_{n}, z_{n}-x_{n}\rangle \geq 0, \forall w\in C, \\ w_{n} = \alpha _{n}x_{n}+(1-\alpha _{n})Tz_{n}, \\ C_{n} = \{v\in C: \Vert w_{n} - v \Vert \leq \Vert x_{n} - v \Vert \}, \\ Q_{n} = \{v\in C: \langle x_{0} - x_{n}, v - x_{n}\rangle \leq 0\}, \\ x_{n+1} = P_{C_{n}\cap Q_{n}}x_{0}. \end{cases} $$
(1.2)

It should be noted that the implementation of Algorithm (1.2) requires solving the strongly monotone regularized equilibrium problem for the point \(z_{n}\):

$$ \mbox{Find } z_{n}\in C \text{ such that } f(z_{n}, w)+ \frac{1}{\lambda _{n}}\langle w-z_{n}, z_{n}-x_{n} \rangle \geq 0, \quad \forall w\in C. $$
(1.3)

This approach breaks down if the bifunction is relaxed to being pseudomonotone, which makes the design of iterative methods for the pseudomonotone EP (1.1) more difficult. In 2008, Quoc et al. [43] introduced a new process, called the extragradient method (EM), by extending the work of Korpelevich [33] and Antipin [7] to the case of the pseudomonotone EP. In particular, the algorithm of Quoc et al. [43] is given as Algorithm 1.1.

Algorithm 1.1
(Extragradient method)
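The body of Algorithm 1.1 appears as a figure in the original article. Consistent with scheme (1.4) below (which augments it with a fixed-point step), the extragradient iteration of Quoc et al. [43] can be summarized as: given \(x_{0} \in C\) and a suitable stepsize \(\lambda >0\), compute

$$ y_{n} = \arg \min \biggl\{ \lambda f(x_{n}, y)+\frac{1}{2} \Vert x_{n} - y \Vert ^{2}: y\in C \biggr\} , \qquad x_{n+1} = \arg \min \biggl\{ \lambda f(y_{n}, y)+\frac{1}{2} \Vert x_{n} - y \Vert ^{2}: y\in C \biggr\} . $$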

Later, in 2013, Anh [1] introduced the following iterative scheme for approximating a common solution of a pseudomonotone EP and a fixed point of a nonexpansive mapping T:

$$ \textstyle\begin{cases} x_{0}\in C_{0} = Q_{0} = C, \\ y_{n} = \arg \min \{\lambda _{n}f(x_{n}, z)+\frac{1}{2} \Vert x_{n} - z \Vert ^{2}: z\in C\}, \\ t_{n} = \arg \min \{\lambda _{n}f(y_{n}, z)+\frac{1}{2} \Vert x_{n} - z \Vert ^{2}: z\in C\}, \\ z_{n} = \alpha _{n}x_{n}+(1-\alpha _{n})Tt_{n}, \\ C_{n} = \{u\in C: \Vert z_{n} - u \Vert \leq \Vert x_{n} - u \Vert \}, \\ Q_{n} = \{u\in C: \langle x_{0} - x_{n}, u - x_{n}\rangle \leq 0\}, \\ x_{n+1} = P_{C_{n}\cap Q_{n}}x_{0}. \end{cases} $$
(1.4)

It is evident that in Algorithm 1.1 and scheme (1.4) one needs to solve two strongly convex optimization problems over the feasible set C in each iteration. This task can be very difficult if C is not a simple (i.e., explicitly described) set. Hence, there is a need to improve on the extragradient method. Following Censor et al. [14, 15], Hieu [22] introduced a Halpern-type subgradient extragradient method (shortly, HSEM) which replaces C by a half-space in the second minimization problem, as in Algorithm 1.2.

Algorithm 1.2
(Halpern-type subgradient extragradient method)

Note that the half-space \(T_{n}\) is explicit, and thus the HSEM has a competitive advantage over the extragradient method in numerical computations. Also, it should be noted that the convergence of the HSEM depends on the Lipschitz-like constants \(c_{1}\) and \(c_{2}\), which are not easy to determine. In an attempt to provide an alternative method which does not require prior knowledge of \(c_{1}\) and \(c_{2}\), Yang and Liu [48] proposed the following modified HSEM with a nonincreasing stepsize and proved a strong convergence theorem for finding a common solution of a pseudomonotone EP and a fixed point of a quasi-nonexpansive mapping S in real Hilbert spaces (Algorithm 1.3).
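One reason half-space constraints are computationally cheap is that the metric projection onto a half-space \(T = \{z \in H: \langle a,z \rangle \leq b\}\) with \(a \neq 0\) admits the closed form

$$ P_{T}(x) = x - \frac{\max \{0, \langle a,x \rangle - b\}}{ \Vert a \Vert ^{2}}\,a, $$

so subproblems constrained to \(T_{n}\) are much easier to handle than those constrained to a general set C.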

Algorithm 1.3
(Modified Halpern-type subgradient extragradient method)

For more examples of extragradient and subgradient methods for finding a common element in the set of solutions of equilibrium and fixed point problems, see for instance [1–6, 20, 28, 32] and the references therein.

Motivated by the work of Hieu [22] and Yang and Liu [48], in this paper we introduce a new self-adaptive inertial subgradient extragradient method which does not depend on the Lipschitz-like constants \(c_{1}\) and \(c_{2}\) for finding solutions of the pseudomonotone EP and a common fixed point of a countable family of demicontractive mappings in real Hilbert spaces. The inertial extrapolation step in our algorithm serves to improve its speed of convergence (see for instance [8, 9, 17, 29, 31, 34, 39] on inertial-type algorithms). Our method also employs the viscosity approximation method (see [38]), which generalizes the Halpern-type algorithm. We prove a strong convergence theorem for the sequence generated by our method under some mild conditions and without prior knowledge of the Lipschitz constants. It is noted in [10] that strong convergence methods are more desirable than weak convergence ones because they translate the physically tangible property that the energy \(\Vert x_{n} -x^{\dagger }\Vert ^{2}\) of the error between the iterate \(x_{n}\) and a solution \(x^{\dagger }\) eventually becomes small. The importance of strong convergence was also underlined in Güler [21].

The paper is organized as follows. In Sect. 2, we recall some basic definitions and preliminary results which are necessary for our proof. Section 3 deals with proposing and analyzing the convergence of our iterative method for solving the EP and finding a fixed point of demicontractive mappings in real Hilbert spaces. Finally, in Sect. 4, we present some numerical examples to illustrate the performance of the proposed algorithms in comparison with related algorithms in the literature. Throughout this paper, we denote the strong (resp. weak) convergence of a sequence \(\{x_{n}\} \subseteq H\) to a point \(p \in H\) by \(x_{n} \rightarrow p\) (resp. \(x_{n}\rightharpoonup p\)). We also denote the index set \(\mathcal{N} = \mathbb{N}\backslash \{0\} = \{1,2,\dots \}\).

2 Preliminaries

In this section, we present some basic notions and results that are needed in the sequel.

Definition 2.1

Recall that the bifunction \(f:C \times C \rightarrow \mathbb{R}\) is said to be

  1. (i)

    η-strongly monotone on C if there exists a constant \(\eta >0\) such that

    $$ f(u,v) + f(v,u) \leq -\eta \Vert u-v \Vert ^{2},\quad \forall u,v \in C; $$
  2. (ii)

    monotone on C if

    $$ f(u,v) + f(v,u) \leq 0,\quad \forall u,v\in C; $$
  3. (iii)

    pseudomonotone on C if

    $$ f(u,v) \geq 0\quad \Rightarrow\quad f(v,u) \leq 0,\quad \forall u,v \in C; $$
  4. (iv)

    satisfying a Lipschitz-like condition if there exist constants \(c_{1} >0\) and \(c_{2} >0\) such that

    $$ f(u,v)+f(v,w) \geq f(u,w) - c_{1} \Vert u-v \Vert ^{2} - c_{2} \Vert v-w \Vert ^{2},\quad \forall u,v,w \in C. $$
    (2.1)

We note that (i) ⇒ (ii) ⇒ (iii), but the converse implications do not hold (see, e.g., [44]).

Throughout this paper, we assume that the following assumptions hold on f:

  1. (A1)

    f is pseudomonotone on C and \(f(u,u) = 0\ \forall u \in C\);

  2. (A2)

    \(f(\cdot ,v)\) is continuous on C for every \(v \in C\);

  3. (A3)

    \(f(u,\cdot )\) is convex, lower semicontinuous, and subdifferentiable on C for every \(u \in C\);

  4. (A4)

    f satisfies a Lipschitz-like condition.

For each \(u \in C\), the subdifferential of the convex function \(f(u,\cdot )\) at u is denoted by \(\partial _{2} f(u,u)\), i.e.,

$$\begin{aligned} \partial _{2} f(u,u) =& \bigl\{ w\in H: f(u,v) \geq f(u,u) + \langle w,v-u \rangle , \forall v \in C \bigr\} \\ =& \bigl\{ w \in H: f(u,v) \geq \langle w, v-u \rangle , \forall v \in C \bigr\} . \end{aligned}$$
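For example, if \(f(u,v) = \langle Au, v-u \rangle \) for a mapping \(A:C \to H\) (as in Example 4.3 below), then \(f(u,\cdot )\) is affine and \(\partial _{2} f(u,u) = \{Au\}\).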

The metric projection \(P_{C}:H\to C\) assigns to each \(x \in H\) the unique vector \(P_{C}x \in C\) such that

$$ \bigl\Vert P_{C}(x) - x \bigr\Vert = \min \bigl\{ \Vert y-x \Vert : y \in C \bigr\} . $$

The metric projection satisfies the following properties (see, e.g., [45]):

  1. (i)

    \(\langle x-y, P_{C} x - P_{C} y \rangle \geq \Vert P_{C} x-P_{C} y\Vert ^{2}\) for every \(x,y \in H\);

  2. (ii)

    for \(x \in H\) and \(z\in C\), \(z = P_{C} x \Leftrightarrow \)

    $$ \langle x-z, z-y \rangle \geq 0, \quad \forall y \in C; $$
    (2.2)
  3. (iii)

    for \(x \in H\) and \(y \in C\),

    $$ \bigl\Vert y-P_{C}(x) \bigr\Vert ^{2} + \bigl\Vert x- P_{C}(x) \bigr\Vert ^{2} \leq \Vert x-y \Vert ^{2}. $$
    (2.3)

The mapping \(\phi : H \to H\) is called a contraction if \(\Vert \phi (x) - \phi (y)\Vert \leq \alpha \Vert x-y\Vert \) for some \(\alpha \in [0,1)\) and all \(x,y \in H\). It is easy to check from (i) that \(P_{C}\) is nonexpansive, i.e., it satisfies this inequality with \(\alpha = 1\).

A subset D of H is called proximal if, for each \(x \in H\), there exists \(y \in D\) such that

$$ \Vert x-y \Vert = d(x,D). $$

In what follows, \(P(H)\) denotes the family of all nonempty proximal bounded subsets of H and \(\operatorname{CB}(H)\) denotes the family of nonempty, closed bounded subsets of H. The Hausdorff metric on \(\operatorname{CB}(H)\) is defined as

$$ \mathcal{H}(A,B):= \max \Bigl\{ \sup_{x\in A} d(x,B), \sup _{y\in B} d(y,A) \Bigr\} $$

for all \(A,B \in \operatorname{CB}(H)\). A point \(\bar{x} \in H\) is called a fixed point of a multivalued mapping \(S:H \rightarrow 2^{H}\) if \(\bar{x} \in S\bar{x}\). We say that S satisfies the endpoint condition if \(S\bar{x} = \{\bar{x}\}\) for all \(\bar{x} \in F(S)\). We recall some basic definitions of multivalued mappings.

Definition 2.2

A multivalued mapping \(S:H \rightarrow \operatorname{CB}(H)\) is said to be

  1. (1)

    nonexpansive if

    $$ \mathcal{H}(Su,Sv) \leq \Vert u-v \Vert ,\quad \forall u,v \in H; $$
  2. (2)

    quasi-nonexpansive if \(F(S) \neq \emptyset \) and

    $$ \mathcal{H}(Su,Sp) \leq \Vert u-p \Vert ,\quad \forall u \in H, p \in F(S); $$
  3. (3)

    κ-demicontractive for \(0 \leq \kappa <1\) if \(F(S) \neq \emptyset \), and

    $$ \mathcal{H}(Su,Sp)^{2} \leq \Vert u-p \Vert ^{2} + \kappa d(u,Su)^{2},\quad \forall u \in H, p \in F(S). $$

Note that the class of κ-demicontractive mappings includes the classes of quasi-nonexpansive mappings and of nonexpansive mappings with fixed points. Let \(S: H \rightarrow P(H)\) be a multivalued mapping. We define the best approximation operator of S as follows:

$$ P_{S}(x) := \bigl\{ y \in Sx: \Vert x-y \Vert = d(x,Sx) \bigr\} . $$

It is easy to show that \(F(S) = F(P_{S})\) and that \(P_{S}\) satisfies the endpoint condition. Moreover, \(I-S\) is said to be demiclosed at zero if, for any sequence \(\{x_{n}\} \subset H\) with \(x_{n} \rightharpoonup \bar{x}\) and \(x_{n} - u_{n} \rightarrow 0\) for some \(u_{n} \in Sx_{n}\), we have \(\bar{x} \in F(S)\).

The following inequality and identity hold in any Hilbert space:

$$\begin{aligned} \Vert x+y \Vert ^{2} \leq \Vert x \Vert ^{2} + 2 \langle y, x+y \rangle ,\quad \forall x,y \in H, \end{aligned}$$
(2.4)

and

$$\begin{aligned} \Vert x +y \Vert ^{2} = \Vert x \Vert ^{2} + 2 \langle x,y \rangle + \Vert y \Vert ^{2}, \quad \forall x,y \in H. \end{aligned}$$
(2.5)

Lemma 2.3

([16])

LetHbe a real Hilbert space, \(x_{i} \in H\) (\(1 \leq i \leq n\)) and\(\{\alpha _{i}\} \subset (0,1)\)with\(\sum_{i=1}^{n}\alpha _{i} =1\). Then

$$ \Biggl\Vert \sum_{i=1}^{n} \alpha _{i} x_{i} \Biggr\Vert ^{2} = \sum _{i=1}^{n} \alpha _{i} \Vert x_{i} \Vert ^{2} - \sum _{1\leq i< j\leq n} \alpha _{i} \alpha _{j} \Vert x_{i} -x_{j} \Vert ^{2}. $$
(2.6)
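In particular, for \(n=2\), identity (2.6) reduces to the familiar relation

$$ \bigl\Vert \alpha x + (1-\alpha ) y \bigr\Vert ^{2} = \alpha \Vert x \Vert ^{2} + (1-\alpha ) \Vert y \Vert ^{2} - \alpha (1-\alpha ) \Vert x - y \Vert ^{2}, \quad \alpha \in (0,1), x,y \in H. $$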

Lemma 2.4

([19])

LetCbe a convex subset of a real Hilbert spaceHand\(\varphi :C \rightarrow \mathbb{R}\)be a convex and subdifferentiable function onC. Then\(x^{*}\)is a solution to the convex problem: \(\textit{minimize}\{\varphi (x):x \in C\}\)if and only if\(0 \in \partial \varphi (x^{*}) + N_{C}(x^{*})\), where\(\partial \varphi (x^{*})\)denotes the subdifferential ofφand\(N_{C}(x^{*})\)is the normal cone ofCat\(x^{*}\).

Lemma 2.5

([35])

Let\(\{\alpha _{n}\}\)and\(\{\gamma _{n}\}\)be sequences of nonnegative real numbers such that

$$ \alpha _{n+1} \leq (1-\delta _{n})\alpha _{n} + \beta _{n} + \gamma _{n},\quad n\geq 1, $$

where\(\{\delta _{n}\}\)is a sequence in\((0,1)\)and\(\{\beta _{n}\}\)is a real sequence. Assume that\(\sum_{n=0}^{\infty }\gamma _{n} < \infty \). Then the following results hold:

  1. (i)

    If\(\beta _{n} \leq \delta _{n} M\)for some\(M \geq 0\), then\(\{\alpha _{n}\}\)is a bounded sequence.

  2. (ii)

    If\(\sum_{n=0}^{\infty }\delta _{n} = \infty \)and\(\limsup_{n\rightarrow \infty } \frac{\beta _{n}}{\delta _{n}} \leq 0\), then\(\lim_{n\rightarrow \infty }\alpha _{n} =0\).

Lemma 2.6

([36])

Let\(\{a_{n}\}\)be a sequence of real numbers such that there exists a subsequence\(\{a_{n_{i}}\}\)of\(\{a_{n}\}\)with\(a_{n_{i}} < a_{n_{i}+1}\)for all\(i \in \mathcal{N}\). Then there exists a nondecreasing sequence\(\{m_{k}\}\subset \mathbb{N}\)such that\(m_{k} \rightarrow \infty \)and the following properties are satisfied for all sufficiently large\(k\in \mathbb{N}\): \(a_{m_{k}} \leq a_{{m_{k}}+1}\)and\(a_{k} \leq a_{{m_{k}}+1}\). In fact, \(m_{k} = \max \{j \leq k: a_{j} \leq a_{j+1}\}\).

3 Main results

In this section, we present our iterative algorithm and its analysis.

In what follows, let C be a nonempty closed convex subset of a real Hilbert space H, \(f:C \times C \to \mathbb{R}\) be a bifunction satisfying (A1)–(A4). For \(i \in \mathcal{N}\), let \(S_{i}: H \to \operatorname{CB}(H)\) be \(\kappa _{i}\)-demicontractive mappings such that \(I-S_{i}\) are demiclosed at zero and \(S_{i}(x^{*}) = \{x^{*}\}\) for all \(x^{*} \in F(S_{i})\). Let ϕ be a fixed contraction on H with coefficient \(\tau \in (0,1)\). Suppose that the solution set

$$ Sol=\operatorname{EP}(f)\cap \bigcap_{i=1}^{\infty }F(S_{i}) \neq \emptyset . $$

The following conditions are assumed to be satisfied by the control parameters of our algorithm.

  1. (C1)

    \(\theta \in [0,1)\), \(\lambda _{1} >0\);

  2. (C2)

    \(\{\delta _{n}\}\subset (0,1)\) such that \(\sum_{n=1}^{\infty }\delta _{n} = \infty \) and \(\lim_{n \to \infty }\delta _{n} = 0\);

  3. (C3)

    \(\{\epsilon _{n}\}\subset [0,\infty )\) such that \(\epsilon _{n} = o(\delta _{n})\), i.e., \(\lim_{n\to \infty }\frac{\epsilon _{n}}{\delta _{n}} = 0\);

  4. (C4)

    \(\{\beta _{n,i}\}_{i=0}^{\infty }\subset (0,1)\) such that \(\sum_{i=0}^{n} \beta _{n,i}=1\);

  5. (C5)

\(\liminf_{n\to \infty }(\beta _{n,0}-\kappa )\beta _{n,i}>0\) for all \(i \in \mathcal{N}\), where \(\kappa = \sup \{\kappa _{i}: i \in \mathcal{N}\}\).

Next, we present our algorithm (Algorithm 3.1).

Algorithm 3.1
(Self-adaptive inertial subgradient extragradient method)
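The full statement of Algorithm 3.1 is given as a figure in the original article. For the reader's convenience, the following outline is reconstructed from (3.1), (3.5), Remark 3.2, and the proofs of Lemmas 3.3–3.6; the explicit bound \(\bar{\alpha }_{n}\) below is the standard inertial choice consistent with (3.1) and (C3) and should be read as our paraphrase rather than a verbatim quotation. Given \(x_{0}, x_{1} \in H\), choose \(\alpha _{n} \in [0,\bar{\alpha }_{n}]\), where

$$ \bar{\alpha }_{n} = \textstyle\begin{cases} \min \{\theta , \frac{\epsilon _{n}}{ \Vert x_{n} - x_{n-1} \Vert } \} & \text{if } x_{n} \neq x_{n-1}, \\ \theta & \text{otherwise}, \end{cases} $$

and compute

$$ \textstyle\begin{cases} w_{n} = x_{n} + \alpha _{n}(x_{n} - x_{n-1}), \\ y_{n} = \arg \min \{\lambda _{n} f(w_{n},y) + \frac{1}{2} \Vert w_{n} - y \Vert ^{2}: y \in C\}, \\ T_{n} = \{z \in H: \langle (w_{n} - \lambda _{n}\eta _{n}) - y_{n}, z - y_{n} \rangle \leq 0\}, \quad \eta _{n} \in \partial _{2} f(w_{n},y_{n}), \\ z_{n} = \arg \min \{\lambda _{n} f(y_{n},y) + \frac{1}{2} \Vert w_{n} - y \Vert ^{2}: y \in T_{n}\}, \\ u_{n} = \beta _{n,0}z_{n} + \sum_{i=1}^{n}\beta _{n,i}\nu _{n,i}, \quad \nu _{n,i} \in S_{i}z_{n}, \\ x_{n+1} = \delta _{n}\phi (x_{n}) + (1-\delta _{n})u_{n}, \end{cases} $$

with the stepsize updated by the rule analyzed in Lemma 3.3:

$$ \lambda _{n+1} = \textstyle\begin{cases} \min \{ \frac{\mu ( \Vert w_{n} - y_{n} \Vert ^{2} + \Vert z_{n} - y_{n} \Vert ^{2})}{2[f(w_{n},z_{n}) - f(w_{n},y_{n}) - f(y_{n},z_{n})]}, \lambda _{n} \} & \text{if } f(w_{n},z_{n}) - f(w_{n},y_{n}) - f(y_{n},z_{n}) > 0, \\ \lambda _{n} & \text{otherwise}. \end{cases} $$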

Remark 3.2

It is obvious that \(T_{n}\) is a half-space and \(C \subset T_{n}\) (see [22] for more details). Note that (3.1) can be implemented easily since the value of \(\Vert x_{n} -x_{n-1}\Vert \) is known before choosing \(\alpha _{n}\). Also, we deduce from (3.1) that \(\lim_{n\to \infty }\frac{\alpha _{n}}{\delta _{n}}\Vert x_{n} -x_{n-1}\Vert = 0\). Moreover, if \(w_{n} = y_{n}\) and \(w_{n} \in S_{i}(w_{n})\) for all \(i \in \mathcal{N}\), then \(w_{n}\) is a common solution of the equilibrium and fixed point problems.

We begin by proving the following necessary results.

Lemma 3.3

The sequence\(\{\lambda _{n}\}\)generated by (3.5) is monotonically nonincreasing and

$$ \lim_{n\rightarrow \infty }\lambda _{n} \geq \min \biggl\{ \frac{\mu }{2\max \{c_{1},c_{2}\}},\lambda _{1} \biggr\} . $$

Proof

Clearly, \(\lambda _{n}\) is nonincreasing. On the other hand, it follows from Assumption (A4) that

$$\begin{aligned} \frac{\mu ( \Vert w_{n} - y_{n} \Vert ^{2}+ \Vert z_{n}-y_{n} \Vert ^{2})}{2[f(w_{n},z_{n})-f(w_{n},y_{n})-f(y_{n},z_{n})]} \geq \frac{\mu ( \Vert w_{n}-y_{n} \Vert ^{2}+ \Vert z_{n}-y_{n} \Vert ^{2})}{2[c_{1} \Vert w_{n}-y_{n} \Vert ^{2} + c_{2} \Vert y_{n}-z_{n} \Vert ^{2}]} \geq \frac{\mu }{2\max \{c_{1},c_{2}\}}. \end{aligned}$$

So, the sequence \(\{\lambda _{n}\}\) is nonincreasing and bounded below by \(\min \{ \frac{\mu }{2\max \{c_{1},c_{2}\}},\lambda _{1} \} \). This implies that \(\lim_{n\rightarrow \infty }\lambda _{n} = \lambda \) exists and \(\lambda \geq \min \{ \frac{\mu }{2\max \{c_{1},c_{2}\}},\lambda _{1} \} \). □
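In computations, the stepsize rule analyzed above can be realized with a few lines of code. The following MATLAB sketch is an illustration, not the authors' script: f denotes a function handle for the bifunction, and w, y, z stand for the current iterates \(w_{n}\), \(y_{n}\), \(z_{n}\); the guard corresponds to the case split in (3.5).

    % Self-adaptive stepsize: decrease lambda only when the denominator is positive.
    den = f(w, z) - f(w, y) - f(y, z);
    if den > 0
        lambda = min(mu*(norm(w - y)^2 + norm(z - y)^2)/(2*den), lambda);
    end
    % otherwise lambda is kept unchanged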

Lemma 3.4

For each\(v \in Sol\)and all sufficiently large\(n\), we have

$$ \Vert z_{n} -v \Vert ^{2} \leq \Vert w_{n} -v \Vert ^{2} - (1-\mu ) \Vert w_{n} - y_{n} \Vert ^{2} - (1-\mu ) \Vert y_{n} -z_{n} \Vert ^{2}. $$
(3.6)

Proof

By the definition of \(z_{n}\) and Lemma 2.4, we get

$$ 0 \in \partial _{2} \biggl(\lambda _{n} f(y_{n},y)+\frac{1}{2} \Vert w_{n} - y \Vert ^{2} \biggr) (z_{n}) + N_{T_{n}}(z_{n}). $$

Thus, there exist \(q_{n} \in \partial _{2} f(y_{n},z_{n})\) and \(\xi \in N_{T_{n}}(z_{n})\) such that

$$ \lambda _{n} q_{n} + z_{n} - w_{n} + \xi = 0. $$
(3.7)

Since \(\xi \in N_{T_{n}}(z_{n})\), then \(\langle \xi , y -z_{n} \rangle \leq 0\) for all \(y \in T_{n}\). This together with (3.7) implies that

$$ \lambda _{n} \langle q_{n}, y -z_{n} \rangle \geq \langle w_{n} -z_{n}, y - z_{n} \rangle \quad \forall y \in T_{n}. $$
(3.8)

Since \(v \in Sol\), we have \(v \in \operatorname{EP}(f)\) and \(v \in \bigcap_{i=1}^{\infty }F(S_{i})\). Since \(\operatorname{EP}(f)\subset C \subseteq T_{n}\), we derive from (3.8) that

$$ \lambda _{n} \langle q_{n}, v - z_{n} \rangle \geq \langle w_{n} - z_{n}, v - z_{n} \rangle . $$
(3.9)

By the fact that \(q_{n} \in \partial _{2} f(y_{n},z_{n})\), we get

$$ f(y_{n},v) - f(y_{n},z_{n}) \geq \langle q_{n}, v - z_{n} \rangle . $$

This together with (3.9) implies that

$$ \lambda _{n} \bigl(f(y_{n},v)-f(y_{n},z_{n}) \bigr) \geq \langle w_{n} -z_{n}, v - z_{n} \rangle . $$
(3.10)

Since \(v \in \operatorname{EP}(f)\), we have \(f(v, y_{n}) \geq 0\). From the pseudomonotonicity of f, we get \(f(y_{n},v) \leq 0\). Hence from (3.10) we get

$$ -2 \lambda _{n} f(y_{n},z_{n}) \leq 2 \langle w_{n} -z_{n}, v -z_{n} \rangle . $$
(3.11)

Since \(z_{n} \in T_{n}\), the definition of \(T_{n}\) gives

$$ \bigl\langle (w_{n} - \lambda _{n} \eta _{n})-y_{n}, z_{n} - y_{n} \bigr\rangle \leq 0. $$

Thus

$$ \langle w_{n} - y_{n}, z_{n} -y_{n} \rangle \leq \lambda _{n} \langle \eta _{n}, z_{n} - y_{n} \rangle . $$
(3.12)

From the fact that \(\eta _{n} \in \partial _{2} f(w_{n},y_{n})\) and the definition of subdifferential, we get

$$ f(w_{n},y) - f(w_{n},y_{n}) \geq \langle \eta _{n}, y - y_{n} \rangle\quad \forall y \in H, $$

hence

$$ \lambda _{n} \bigl(f(w_{n},z_{n}) - f(w_{n},y_{n}) \bigr) \geq \lambda _{n} \langle \eta _{n},z_{n} - y_{n} \rangle \geq \langle w_{n} - y_{n},z_{n} - y_{n} \rangle . $$
(3.13)

Combining (3.11) and (3.13), we get

$$\begin{aligned} 2\lambda _{n} \bigl(f(w_{n},z_{n}) - f(w_{n},y_{n})- f(y_{n},z_{n}) \bigr) \geq & 2 \langle w_{n} - y_{n},z_{n} - y_{n} \rangle + 2 \langle w_{n} -z_{n}, v - z_{n} \rangle \\ =& \Vert w_{n} -y_{n} \Vert ^{2} + \Vert y_{n} -z_{n} \Vert ^{2} - \Vert w_{n} -z_{n} \Vert ^{2} \\ &{} - \Vert w_{n} -v \Vert ^{2} + \Vert w_{n} -z_{n} \Vert ^{2} + \Vert z_{n} -v \Vert ^{2}. \end{aligned}$$

Hence

$$\begin{aligned} \Vert z_{n} -v \Vert ^{2} \leq & \Vert w_{n} -v \Vert ^{2} - \Vert w_{n} - y_{n} \Vert ^{2} - \Vert y_{n} -z_{n} \Vert ^{2} \\ &{} + 2\lambda _{n} \bigl(f(w_{n},z_{n})-f(w_{n},y_{n}) - f(y_{n},z_{n}) \bigr). \end{aligned}$$

Using the definition of \(\lambda _{n}\) in the above inequality, we get

$$\begin{aligned} \Vert z_{n} -v \Vert ^{2} \leq & \Vert w_{n} -v \Vert ^{2} - \Vert w_{n} - y_{n} \Vert ^{2} - \Vert y_{n} -z_{n} \Vert ^{2} \\ & {}+ \frac{2\lambda _{n}}{\lambda _{n+1}}\lambda _{n+1} \bigl(f(w_{n},z_{n})-f(w_{n},y_{n}) - f(y_{n},z_{n}) \bigr) \\ \leq & \Vert w_{n} -v \Vert ^{2} - \Vert w_{n} - y_{n} \Vert ^{2} - \Vert y_{n} -z_{n} \Vert ^{2} \\ &{} + \frac{\lambda _{n}}{\lambda _{n+1}}\mu \bigl( \Vert w_{n}-y_{n} \Vert ^{2} + \Vert z_{n} -y_{n} \Vert ^{2} \bigr). \end{aligned}$$
(3.14)

Observe that \(\frac{\lambda _{n}}{\lambda _{n+1}} \to 1\), so that \(\frac{\lambda _{n}\mu }{\lambda _{n+1}} \to \mu < 1\). Hence, for all sufficiently large n (enlarging μ slightly if necessary), we obtain from (3.14)

$$\begin{aligned} \Vert z_{n} -v \Vert ^{2} \leq & \Vert w_{n} -v \Vert ^{2} - (1-\mu ) \Vert w_{n} - y_{n} \Vert ^{2} - (1-\mu ) \Vert y_{n} -z_{n} \Vert ^{2}. \end{aligned}$$

 □

Lemma 3.5

The sequence\(\{x_{n}\}\)generated by Algorithm 3.1 is bounded.

Proof

Put \(u_{n} = \beta _{n,0}z_{n} + \sum_{i=1}^{n} \beta _{n,i}\nu _{n,i}\) and let \(v \in Sol\). Then from Lemma 2.3 we have

$$\begin{aligned} \Vert u_{n} -v \Vert ^{2} =& \Biggl\Vert \beta _{n,0}z_{n} + \sum_{i=1}^{n} \beta _{n,i}\nu _{n,i} - v \Biggr\Vert ^{2} \\ \leq & \beta _{n,0} \Vert z_{n} -v \Vert ^{2} + \sum_{i=1}^{n} \beta _{n,i} \Vert \nu _{n,i} -v \Vert ^{2} - \sum_{i=1}^{n} \beta _{n,0} \beta _{n,i} \Vert z_{n} - \nu _{n,i} \Vert ^{2} \\ \leq & \beta _{n,0} \Vert z_{n} -v \Vert ^{2} + \sum_{i=1}^{n} \beta _{n,i}\mathcal{H}^{2} (S_{i}z_{n},S_{i}v) - \sum_{i=1}^{n} \beta _{n,0} \beta _{n,i} \Vert z_{n} - \nu _{n,i} \Vert ^{2} \\ \leq & \beta _{n,0} \Vert z_{n} -v \Vert ^{2} + \sum_{i=1}^{n} \beta _{n,i} \bigl( \Vert z_{n} - v \Vert ^{2} + \kappa _{i} \Vert z_{n} - \nu _{n,i} \Vert ^{2} \bigr) - \sum_{i=1}^{n} \beta _{n,0}\beta _{n,i} \Vert z_{n} - \nu _{n,i} \Vert ^{2} \\ \leq & \Vert z_{n} -v \Vert ^{2} - \sum _{i=1}^{n} (\beta _{n,0}-\kappa )\beta _{n,i} \Vert z_{n} - \nu _{n,i} \Vert ^{2}, \end{aligned}$$
(3.15)

and by condition (C5), we get

$$ \Vert u_{n} - v \Vert ^{2} \leq \Vert z_{n} -v \Vert ^{2}. $$

Therefore

$$\begin{aligned} \Vert x_{n+1}-v \Vert =& \bigl\Vert \delta _{n} \phi (x_{n}) + (1-\delta _{n})u_{n} - v \bigr\Vert \\ \leq & \delta _{n} \bigl\Vert \phi (x_{n})-v \bigr\Vert + (1-\delta _{n}) \Vert u_{n} -v \Vert \\ \leq & \delta _{n} \bigl\Vert \phi (x_{n})- \phi (v) \bigr\Vert + \delta _{n} \bigl\Vert \phi (v)-v \bigr\Vert + (1-\delta _{n}) \Vert z_{n} -v \Vert \\ \leq & \delta _{n} \tau \Vert x_{n} -v \Vert + \delta _{n} \bigl\Vert \phi (v)-v \bigr\Vert + (1- \delta _{n}) \Vert w_{n} -v \Vert \\ =& \delta _{n} \tau \Vert x_{n} -v \Vert + \delta _{n} \bigl\Vert \phi (v)-v \bigr\Vert + (1- \delta _{n}) \bigl\Vert x_{n} +\alpha _{n}(x_{n} -x_{n-1})-v \bigr\Vert \\ \leq & \delta _{n}\tau \Vert x_{n} -v \Vert + \delta _{n} \bigl\Vert \phi (v)-v \bigr\Vert + (1- \delta _{n}) \bigl[ \Vert x_{n} - v \Vert +\alpha _{n} \Vert x_{n} -x_{n-1} \Vert \bigr] \\ =& \bigl(1-(1-\tau )\delta _{n} \bigr) \Vert x_{n} -v \Vert \\ &{} + (1-\tau )\delta _{n} \biggl[\frac{ \Vert \phi (v)-v \Vert }{1-\tau }+ \frac{\alpha _{n}(1-\delta _{n}) \Vert x_{n} - x_{n-1} \Vert }{\delta _{n}(1-\tau )} \biggr]. \end{aligned}$$

Let \(M = \max \{ \frac{\Vert \phi (v)-v\Vert }{1-\tau }, \sup_{n\geq 1} (\frac{1-\delta _{n}}{1-\tau } ) \frac{\alpha _{n}}{\delta _{n}}\Vert x_{n} -x_{n-1}\Vert \} \). Then we have

$$ \Vert x_{n+1}-v \Vert \leq \bigl(1-(1-\tau )\delta _{n} \bigr) \Vert x_{n} -v \Vert + (1-\tau ) \delta _{n}M. $$

Hence, by induction and Lemma 2.5(i), we have that \(\{\Vert x_{n} -v\Vert \}\) is bounded. This implies that \(\{x_{n}\}\) is bounded, and consequently \(\{y_{n}\}\), \(\{z_{n}\}\), and \(\{S_{i}z_{n}\}\) are bounded. □

The following lemma will be used in proving the strong convergence of our Algorithm 3.1.

Lemma 3.6

Let\(\{x_{n}\}\)be the sequence generated by Algorithm 3.1. Then

$$\begin{aligned} a_{n+1} \leq (1-\theta _{n})a_{n} + \theta _{n}b_{n} + c_{n}, \end{aligned}$$
(3.16)

where\(a_{n} = \Vert x_{n} -v\Vert ^{2}\), \(\theta _{n} = \frac{2\delta _{n}(1-\tau )}{1-\delta _{n}\tau }\), \(b_{n} = \frac{\langle \phi (v)-v, x_{n+1} - v \rangle }{1-\tau }\), \(c_{n} = \frac{\delta _{n}^{2}}{1-\delta _{n}\tau }\Vert x_{n} -v\Vert ^{2} + \frac{\alpha _{n}M(1-\delta _{n})^{2}}{1-\delta _{n}\tau }\Vert x_{n} - x_{n-1}\Vert \), and\(v \in Sol\)for some\(M>0\).

Proof

From (3.1), we get

$$\begin{aligned} \Vert w_{n} -v \Vert ^{2} =& \bigl\Vert x_{n} + \alpha _{n}(x_{n} -x_{n-1}) - v \bigr\Vert ^{2} \\ \leq & \Vert x_{n} -v \Vert ^{2} + \alpha _{n}^{2} \Vert x_{n} -x_{n-1} \Vert ^{2} + 2 \alpha _{n} \Vert x_{n} -v \Vert \Vert x_{n} -x_{n-1} \Vert \\ =& \Vert x_{n} - v \Vert ^{2} + \alpha _{n} \Vert x_{n} -x_{n-1} \Vert \bigl( \alpha _{n} \Vert x_{n} -x_{n-1} \Vert +2 \Vert x_{n} -v \Vert \bigr) \\ \leq & \Vert x_{n} -v \Vert ^{2} + \alpha _{n}M \Vert x_{n} -x_{n-1} \Vert , \end{aligned}$$
(3.17)

where \(M = \sup_{n\geq 1}(\alpha _{n}\Vert x_{n} -x_{n-1}\Vert +2\Vert x_{n} -v\Vert )\). More so,

$$\begin{aligned} \Vert x_{n+1}-v \Vert ^{2} =& \bigl\Vert \delta _{n} \phi (x_{n}) + (1-\delta _{n})u_{n} - v \bigr\Vert ^{2} \\ \leq & (1-\delta _{n})^{2} \Vert u_{n} - v \Vert ^{2} + 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1} - v \bigr\rangle \\ \leq & (1-\delta _{n})^{2} \Vert w_{n} -v \Vert ^{2} + 2\delta _{n} \bigl\Vert \phi (x_{n})- \phi (v) \bigr\Vert \Vert x_{n+1}-v \Vert \\ &{} + 2\delta _{n} \bigl\langle \phi (v)-v, x_{n+1} - v \bigr\rangle \\ \leq & (1-\delta _{n})^{2} \bigl[ \Vert x_{n} -v \Vert ^{2} + \alpha _{n}M \Vert x_{n} -x_{n-1} \Vert \bigr] + 2\delta _{n} \tau \Vert x_{n} -v \Vert \Vert x_{n+1}-v \Vert \\ &{}+ 2\delta _{n} \bigl\langle \phi (v)-v, x_{n+1} - v \bigr\rangle \\ \leq & (1-\delta _{n})^{2} \bigl[ \Vert x_{n} -v \Vert ^{2} + \alpha _{n}M \Vert x_{n} -x_{n-1} \Vert \bigr] + \delta _{n} \tau \bigl( \Vert x_{n} -v \Vert ^{2} + \Vert x_{n+1}-v \Vert ^{2} \bigr) \\ &{}+ 2\delta _{n} \bigl\langle \phi (v)-v, x_{n+1} - v \bigr\rangle . \end{aligned}$$

Thus

$$\begin{aligned} \Vert x_{n+1}-v \Vert ^{2} \leq & \biggl( \frac{1-2\delta _{n}+\delta _{n} \tau }{1-\delta _{n}\tau } \biggr) \Vert x_{n} -v \Vert ^{2} + \frac{\delta _{n}^{2}}{1-\delta _{n}\tau } \Vert x_{n} -v \Vert ^{2} \\ &{} + \frac{\alpha _{n}M(1-\delta _{n})^{2}}{1-\delta _{n}\tau } \Vert x_{n} - x_{n-1} \Vert \\ &{}+ \frac{2\delta _{n}}{1-\delta _{n}\tau } \bigl\langle \phi (v)-v, x_{n+1} - v \bigr\rangle \\ =& \biggl(1 - \frac{2\delta _{n}(1-\tau )}{1-\delta _{n}\tau } \biggr) \Vert x_{n} - v \Vert ^{2} + \frac{\delta _{n}^{2}}{1-\delta _{n}\tau } \Vert x_{n} -v \Vert ^{2} \\ &{}+ \frac{\alpha _{n}M(1-\delta _{n})^{2}}{1-\delta _{n}\tau } \Vert x_{n} - x_{n-1} \Vert \\ &{}+ \frac{2\delta _{n}}{1-\delta _{n}\tau } \bigl\langle \phi (v)-v, x_{n+1} - v \bigr\rangle . \end{aligned}$$
(3.18)

Thus, we obtain the desired result. □

Next, we prove our strong convergence theorem.

Theorem 3.7

The sequence\(\{x_{n}\}\)generated by Algorithm 3.1 converges strongly tox̄, where\(\bar{x} = P_{Sol}\phi (\bar{x})\)is the unique solution of the variational inequality

$$ \bigl\langle (I-\phi )\bar{x},z-\bar{x} \bigr\rangle \geq 0,\quad \forall z \in Sol. $$
(3.19)

Proof

Let \(\bar{x} = P_{Sol}\phi (\bar{x})\) be the unique solution of (3.19), set \(v = \bar{x}\), and put \(\varGamma _{n} = \Vert x_{n} -v\Vert ^{2}\). We divide the proof into two cases.

Case 1: Suppose that there exists \(N \in \mathbb{N}\) such that \(\{\varGamma _{n}\}\) is monotonically decreasing for all \(n \geq N\). This implies that \(\lim_{n\to \infty }\varGamma _{n}\) exists, and since \(\{x_{n}\}\) is bounded, we have

$$ \varGamma _{n} - \varGamma _{n+1} \to 0, \quad \text{as } n \to \infty . $$

From (3.15), we have

$$\begin{aligned} \Vert x_{n+1} -v \Vert ^{2} \leq & (1-\delta _{n}) \Vert u_{n} -v \Vert ^{2} + 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1}-v \bigr\rangle \\ \leq & (1-\delta _{n}) \Biggl[ \Vert z_{n} -v \Vert ^{2} - \sum_{i=1}^{n} ( \beta _{n,0}-\kappa )\beta _{n,i} \Vert z_{n} - \nu _{n,i} \Vert ^{2} \Biggr] \\ &{} + 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1}-v \bigr\rangle \\ \leq & (1-\delta _{n}) \Biggl[ \Vert w_{n} -v \Vert ^{2} - \sum_{i=1}^{n} ( \beta _{n,0}-\kappa )\beta _{n,i} \Vert z_{n} - \nu _{n,i} \Vert ^{2} \Biggr] \\ &{}+ 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1}-v \bigr\rangle \\ \leq & (1-\delta _{n}) \Biggl[ \Vert x_{n} -v \Vert ^{2} + \alpha _{n}M \Vert x_{n} -x_{n-1} \Vert - \sum_{i=1}^{n} (\beta _{n,0}-\kappa )\beta _{n,i} \Vert z_{n} - \nu _{n,i} \Vert ^{2} \Biggr] \\ &+ 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1}-v \bigr\rangle . \end{aligned}$$

This implies that

$$\begin{aligned} (1-\delta _{n}) \sum_{i=1}^{n} (\beta _{n,0}-\kappa )\beta _{n,i} \Vert z_{n} - \nu _{n,i} \Vert ^{2} \leq & (1- \delta _{n}) \Vert x_{n} -v \Vert ^{2} - \Vert x_{n+1}-v \Vert ^{2} \\ &{}+ \alpha _{n}M \Vert x_{n} -x_{n-1} \Vert + 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1}-v \bigr\rangle . \end{aligned}$$

Since \(\delta _{n} \to 0\), \(\alpha _{n}\Vert x_{n} -x_{n-1}\Vert \to 0\), \(\varGamma _{n} - \varGamma _{n+1} \to 0\), and the sequences involved are bounded, then

$$ \lim_{n\rightarrow \infty } \sum_{i=1}^{n} (\beta _{n,0}-\kappa ) \beta _{n,i} \Vert z_{n} - \nu _{n,i} \Vert ^{2} = 0. $$

Using condition (C5), we obtain

$$ \lim_{n\rightarrow \infty } \Vert z_{n} - \nu _{n,i} \Vert = 0. $$
(3.20)

Similarly, using Lemma 3.4, we get

$$\begin{aligned} \Vert x_{n+1} -v \Vert ^{2} \leq & (1-\delta _{n}) \Vert u_{n} -v \Vert ^{2} + 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1}-v \bigr\rangle \\ \leq & (1-\delta _{n}) \Vert z_{n} -v \Vert ^{2}+ 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1}-v \bigr\rangle \\ \leq & (1-\delta _{n}) \bigl[ \Vert w_{n} -v \Vert ^{2} - (1-\mu ) \Vert w_{n} - y_{n} \Vert ^{2} - (1-\mu ) \Vert y_{n} -z_{n} \Vert ^{2} \bigr] \\ &{}+ 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1}-v \bigr\rangle \\ \leq & (1-\delta _{n}) \bigl[ \Vert x_{n} -v \Vert ^{2} + \alpha _{n}M \Vert x_{n} -x_{n-1} \Vert - (1-\mu ) \Vert w_{n} - y_{n} \Vert ^{2} \\ &{} - (1-\mu ) \Vert y_{n} -z_{n} \Vert ^{2} \bigr] \\ &{}+ 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1}-v \bigr\rangle , \end{aligned}$$

that is,

$$\begin{aligned} (1-\delta _{n}) (1-\mu ) \bigl[ \Vert w_{n} - y_{n} \Vert ^{2} + \Vert y_{n} -z_{n} \Vert ^{2} \bigr] \leq & (1-\delta _{n}) \Vert x_{n} -v \Vert ^{2} + \alpha _{n}M \Vert x_{n} -x_{n-1} \Vert \\ &{}+ 2\delta _{n} \bigl\langle \phi (x_{n})-v, x_{n+1}-v \bigr\rangle . \end{aligned}$$

This implies that

$$ \lim_{n\rightarrow \infty }(1-\mu ) \bigl[ \Vert w_{n} - y_{n} \Vert ^{2} + \Vert y_{n} -z_{n} \Vert ^{2} \bigr] = 0. $$

Since \(\mu \in (0,1)\), we have

$$\begin{aligned} \lim_{n\rightarrow \infty } \Vert w_{n} -y_{n} \Vert = 0 \quad \text{and}\quad \lim_{n\rightarrow \infty } \Vert y_{n} -z_{n} \Vert = 0, \end{aligned}$$
(3.21)

hence

$$ \lim_{n\to \infty } \Vert z_{n} -w_{n} \Vert = 0. $$
(3.22)

Clearly, from (3.1)

$$\begin{aligned}& \Vert x_{n} -w_{n} \Vert = \alpha _{n} \Vert x_{n} -x_{n-1} \Vert = \frac{\alpha _{n}}{\delta _{n}} \Vert x_{n} -x_{n-1} \Vert \cdot \delta _{n} \to 0, \end{aligned}$$
(3.23)
$$\begin{aligned}& \Vert u_{n} - z_{n} \Vert \leq \beta _{n,0} \Vert z_{n} - z_{n} \Vert + \sum _{i=1}^{n} \beta _{n,i} \Vert \nu _{n,i}-z_{n} \Vert \to 0, \end{aligned}$$
(3.24)

and

$$\begin{aligned} \Vert x_{n+1} - u_{n} \Vert = \delta _{n} \bigl\Vert \phi (x_{n}) - u_{n} \bigr\Vert \to 0. \end{aligned}$$
(3.25)

Therefore from (3.22)–(3.25), we obtain

$$ \lim_{n\rightarrow \infty } \Vert x_{n+1} -x_{n} \Vert = 0. $$
(3.26)

Since \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}} \rightharpoonup z \in H\) and

$$ \limsup_{n\to \infty } \bigl\langle \phi (v) -v, x_{n} -v \bigr\rangle = \lim_{k \to \infty } \bigl\langle \phi (v)-v,x_{n_{k}} - v \bigr\rangle = \bigl\langle \phi (v) - v, z -v \bigr\rangle . $$

Since \(z \in Sol\) (as we show below) and \(v = P_{Sol}\phi (v)\), it follows from (2.2) that

$$ \limsup_{n\to \infty } \bigl\langle \phi (v) -v, x_{n} -v \bigr\rangle = \bigl\langle \phi (v) - v, z -v \bigr\rangle \leq 0. $$
(3.27)

Combining (3.27) and (3.26), it is easy to see that

$$ \limsup_{n\to \infty } \bigl\langle \phi (v) -v, x_{n+1}-v \bigr\rangle \leq 0. $$
(3.28)

Since C is closed and convex (hence weakly closed), and, by (3.21) and (3.23), \(y_{n_{k}} \rightharpoonup z\) and \(w_{n_{k}} \rightharpoonup z\) with \(y_{n_{k}} \in C\), we have \(z \in C\). By the definition of the subdifferential of f and (3.12), we have

$$ \lambda _{n_{k}} \bigl(f(w_{n_{k}},y)-f(w_{n_{k}},y_{n_{k}}) \bigr) \geq \langle w_{n_{k}} - y_{n_{k}},y - y_{n_{k}} \rangle\quad \forall y \in C. $$

Passing to the limit in the last inequality as \(k \to \infty \) and using assumptions (A1)–(A3) together with the fact that \(\lim_{k\to \infty }\lambda _{n_{k}} = \lambda >0\), we get \(f(z,y) \geq 0\) for all \(y \in C\). Hence \(z \in \operatorname{EP}(f)\). Furthermore, from (3.22) and (3.23), we have \(z_{n_{k}} \rightharpoonup z\). Since \(I-S_{i}\) is demiclosed at zero for each \(i \in \mathcal{N}\), it follows from (3.20) that \(z \in F(S_{i})\) for each \(i \in \mathcal{N}\). Therefore \(z \in Sol = \operatorname{EP}(f)\cap \bigcap_{i=1}^{\infty }F(S_{i})\).

Using Lemma 2.5(ii), Lemma 3.6, and (3.28), we conclude that \(\Vert x_{n} - v\Vert \to 0\), that is, \(\{x_{n}\}\) converges strongly to \(v = \bar{x}\).

Case 2: Suppose that \(\{\varGamma _{n}\}\) is not monotonically decreasing, that is, there is a subsequence \(\{\varGamma _{n_{j}}\}\) of \(\{\varGamma _{n}\}\) such that \(\varGamma _{n_{j}}<\varGamma _{n_{j}+1}\) for all \(j \in \mathcal{N}\). Then, by Lemma 2.6, we can define an integer sequence \(\{\tau (n)\}\) for all \(n \geq n_{0}\) by

$$ \tau (n)=\max \{k\leq n: \varGamma _{k} < \varGamma _{k+1}\}. $$

Moreover, \(\{\tau (n)\}\) is a nondecreasing sequence such that \(\tau (n) \to \infty \) as \(n \to \infty \) and \(\varGamma _{\tau (n)} \leq \varGamma _{{\tau (n)}+1}\) for all \(n \geq n_{0}\). From (3.6) and (3.15), we can show that

$$ \lim_{n\rightarrow \infty } \Vert z_{\tau (n)} - \nu _{{\tau (n)},i} \Vert = 0, \qquad \lim_{n\rightarrow \infty } \Vert y_{\tau (n)} - w_{\tau (n)} \Vert = 0 \quad \text{and}\quad \lim_{n\rightarrow \infty } \Vert y_{\tau (n)} - z_{ \tau (n)} \Vert = 0. $$

By a similar argument as in Case 1, we can also show that

$$ \limsup_{n\to \infty } \bigl\langle \phi (v) -v, x_{{\tau (n)}+1}-v \bigr\rangle \leq 0. $$

Also from Lemma 3.6, we get

$$\begin{aligned} 0 \leq & \varGamma _{{\tau (n)}+1} - \varGamma _{\tau (n)} \\ \leq & \biggl(1 - \frac{2\delta _{\tau (n)}(1-\tau )}{1-\delta _{\tau (n)}\tau } \biggr) \Vert x_{\tau (n)} - v \Vert ^{2} + \frac{\delta _{\tau (n)}^{2}}{1-\delta _{\tau (n)}\tau } \Vert x_{\tau (n)} -v \Vert ^{2} \\ &{} + \frac{\alpha _{\tau (n)}M(1-\delta _{\tau (n)})^{2}}{1-\delta _{\tau (n)}\tau } \Vert x_{\tau (n)} - x_{{\tau (n)}-1} \Vert + \frac{2\delta _{\tau (n)}}{1-\delta _{\tau (n)}\tau } \bigl\langle \phi (v)-v, x_{{\tau (n)}+1} - v \bigr\rangle - \Vert x_{\tau (n)} - v \Vert ^{2}. \end{aligned}$$

Hence, we have

$$\begin{aligned} \frac{2(1-\tau )}{1-\delta _{\tau (n)}\tau } \Vert x_{\tau (n)} - v \Vert ^{2} \leq & \frac{\delta _{\tau (n)}}{1-\delta _{\tau (n)}\tau } \Vert x_{\tau (n)} -v \Vert ^{2} + \frac{\alpha _{\tau (n)}M(1-\delta _{\tau (n)})^{2}}{\delta _{\tau (n)}(1-\delta _{\tau (n)}\tau )} \Vert x_{ \tau (n)} - x_{{\tau (n)}-1} \Vert \\ & {}+\frac{2}{1-\delta _{\tau (n)}\tau } \bigl\langle \phi (v)-v, x_{{\tau (n)}+1} - v \bigr\rangle \\ \to& 0. \end{aligned}$$

This implies that \(\lim_{n\rightarrow \infty } \Vert x_{\tau (n)} - v \Vert ^{2} = 0\) and, since \(\Vert x_{{\tau (n)}+1} - x_{\tau (n)}\Vert \to 0\) as in Case 1, also \(\lim_{n\rightarrow \infty }\varGamma _{{\tau (n)}+1} = 0\). Thus, we have

$$ 0 \leq \varGamma _{n} \leq \max \{\varGamma _{\tau (n)}, \varGamma _{{\tau (n)}+1} \} \leq \varGamma _{{\tau (n)}+1} \to 0. $$

Therefore \(\Vert x_{n} -v\Vert \to 0\), and thus \(x_{n} \to v = \bar{x}\). This completes the proof. □

Remark 3.8

We highlight our contributions as follows:

  1. (i)

It is easy to see that if \(\phi (x_{n}) \equiv u\) for an arbitrary fixed \(u \in H\), then Algorithm 3.1 reduces to a modified Halpern-type inertial extragradient method and the result remains valid. This improves the results of Hieu [22] as well as Yang and Liu [48].

  2. (ii)

In [25], the authors presented a weak convergence theorem for approximating a solution of the pseudomonotone EP provided the stepsize satisfies a certain mild condition which requires a prior estimate of the Lipschitz-like constants \(c_{1}\) and \(c_{2}\). In this paper, we prove a strong convergence result without a prior estimate of the Lipschitz-like constants. This is very important since good estimates of the Lipschitz-like constants are generally unavailable.

  3. (iii)

Our result also extends the results of [23, 24, 42] to a self-adaptive inertial subgradient extragradient method with a viscosity approximation step.

4 Numerical examples

In this section, we give some numerical examples to support our main result. All the optimization subproblems are solved effectively with the function quadprog in MATLAB. The computations are carried out in MATLAB on a Lenovo X250 (Intel(R) Core i7 vPro, 8.00 GB RAM). We show the numerical behavior of the sequences generated by Algorithm 3.1 and also compare its performance with Algorithm 1.2 of Hieu [22] and Algorithm 1.3 of Yang and Liu [48].

Example 4.1

We consider the Nash–Cournot equilibrium model in [42] with the bifunction \(f:\mathbb{R}^{N} \times \mathbb{R}^{N} \to \mathbb{R}\) defined by

$$ f(x,y) = \langle Px+Qy+q, y-x \rangle , $$

where \(q \in \mathbb{R}^{N}\) and P, Q are two matrices of order N such that Q is symmetric positive semidefinite and \(Q-P\) is negative semidefinite. In this case, the bifunction f satisfies (A1)–(A4) with \(c_{1} =c_{2} = \frac{\Vert P-Q\Vert }{2}\); see [42], Lemma 6.2. The vector q is generated randomly and uniformly with entries in \([-2,2]\), and the two matrices P, Q are generated randomly so that these properties are satisfied. For each \(i \in \mathcal{N}\), we define the multivalued mapping \(S_{i} : \mathbb{R}^{N} \to 2^{\mathbb{R}^{N}}\) by

$$ S_{i}x = \textstyle\begin{cases} [ \frac{-(1+2i)}{2}x, - (1+i)x ], & x \leq 0, \\ [ -(1+i)x, - \frac{(1+2i)}{2}x ],& x >0. \end{cases} $$

It is easy to show that \(S_{i}\) is a \(\kappa _{i}\)-demicontractive mapping with \(\kappa _{i} = \frac{4i^{2}+8i}{4i^{2} + 12 i + 9}\in (0,1)\) and \(F(S_{i})=\{0\}\) for each \(i \in \mathcal{N}\). Let \(\lambda _{1} = 0.07\), \(\mu = 0.7\), \(\alpha = 3\), \(\phi (x) = \frac{x}{4}\), \(\epsilon _{n} = \frac{1}{(n+1)^{1.5}}\), \(\delta _{n} = \frac{1}{(n+1)^{0.5}}\), and for each \(n \in \mathcal{N}\), \(i \geq 1\), let \(\{\beta _{n,i}\}\) be defined by

$$ \beta _{n,i} = \textstyle\begin{cases} 0 & \text{if } n < i, \\ 1 - \frac{n}{n+1}\sum_{k=1}^{n} \frac{1}{2^{k}} & \text{if } n =i, \\ \frac{1}{2^{i+1}} (\frac{n}{n+1} ) & \text{if } n >i. \end{cases} $$
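As a quick check of (C4): since \(\beta _{n,0} = \frac{1}{2}\cdot \frac{n}{n+1}\) and \(\sum_{i=0}^{n-1}\frac{1}{2^{i+1}} = \sum_{k=1}^{n}\frac{1}{2^{k}}\), we have

$$ \sum_{i=0}^{n} \beta _{n,i} = \frac{n}{n+1}\sum_{k=1}^{n} \frac{1}{2^{k}} + 1 - \frac{n}{n+1}\sum_{k=1}^{n} \frac{1}{2^{k}} = 1. $$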

It is easy to verify that conditions (C1)–(C5) are satisfied. We implement the experiments for various values of N and m as follows:

  1. Case I:

    \(N = 50\) and \(m = 5\),

  2. Case II:

    \(N = 100\) and \(m = 20\),

  3. Case III:

    \(N= 200\) and \(m = 50\),

  4. Case IV:

    \(N = 500\) and \(m = 100\).
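For concreteness, the following MATLAB lines sketch one possible realization (an assumption on our part, not the authors' released script) of the data generation and of the quadprog call for the prox subproblems, assuming a polyhedral feasible set \(C = \{y \in \mathbb{R}^{N}: Ay \leq b\}\) and given the current iterate w and stepsize lambda:

    % Problem data: Q symmetric PSD and Q - P negative semidefinite.
    B = randn(N); Q = B*B';           % Q = B*B' is symmetric positive semidefinite
    E = randn(N); P = Q + E*E';       % Q - P = -E*E' is negative semidefinite
    q = -2 + 4*rand(N,1);             % entries uniformly distributed in [-2,2]

    % Prox subproblem y = argmin{ lambda*f(w,y) + 0.5*||w - y||^2 : A*y <= b }
    % for f(x,y) = <P*x + Q*y + q, y - x> expands to the convex QP below.
    H = 2*lambda*Q + eye(N);          % quadratic term (symmetric positive definite)
    c = lambda*(P*w + q - Q*w) - w;   % linear term
    y = quadprog(H, c, A, b);         % requires the MATLAB Optimization Toolbox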

For Algorithm 1.2, we take \(\lambda _{n} = 0.07\), \(\alpha _{n} = \frac{1}{n+1}\), while for Algorithm 1.3, we take \(\lambda _{1} = 0.07 \), \(\mu = 0.7\), and \(Sx = \frac{x}{2}\), which is quasi-nonexpansive. We use \(\Vert x_{n+1}-x_{n}\Vert <10^{-4}\) as the stopping criterion for the numerical computation. Figure 1 describes the behavior of \(\Vert x_{n+1}-x_{n}\Vert \) against the number of iterations for Example 4.1. We also show the CPU time and the number of iterations for each algorithm in Table 1.

Figure 1. Example 4.1, top left: Case I; top right: Case II; bottom left: Case III; bottom right: Case IV

Table 1 Computational result for Example 4.1

Example 4.2

Let \(H=\ell _{2}(\mathbb{R})\) be the real Hilbert space whose elements are all 2-summable sequences \(\{x_{j}\}_{j=1}^{\infty }\) of scalars in \(\mathbb{R}\), that is,

$$\begin{aligned} \ell _{2}(\mathbb{R}) := \Biggl\{ x=(x_{1},x_{2}, \ldots , x_{j}, \ldots ), x_{j} \in \mathbb{R} \text{ and } \sum_{j=1}^{ \infty } \vert x_{j} \vert ^{2} < \infty \Biggr\} \end{aligned}$$

with the inner product \(\langle \cdot ,\cdot \rangle : \ell _{2} \times \ell _{2} \to \mathbb{R}\) and the norm \(\Vert \cdot \Vert : \ell _{2} \to \mathbb{R}\) defined by \(\langle x,y \rangle :=\sum_{j=1}^{\infty }x_{j}y_{j}\) and \(\Vert x\Vert = (\sum_{j=1}^{\infty } |x_{j}|^{2} )^{ \frac{1}{2}}\), where \(x=\{x_{j}\}_{j=1}^{\infty }\), \(y=\{y_{j} \}_{j=1}^{\infty }\). Let \(C = \{x\in H : \Vert x\Vert \leq 1\}\). Define the bifunction \(f:C \times C \to \mathbb{R} \) by

$$ f(x,y) = \bigl(3- \Vert x \Vert \bigr)\langle x,y-x\rangle\quad \forall x,y \in C. $$

Following [44], it is easy to show that f is a pseudomonotone bifunction which is not monotone and that f satisfies the Lipschitz-like condition with constants \(c_{1} = c_{2} = \frac{5}{2}\). Also, f satisfies assumptions (A2) and (A3). For \(i \in \mathcal{N}\), we define \(S_{i} :C \rightarrow \operatorname{CB}(\ell _{2}(\mathbb{R}))\) by

$$\begin{aligned} S_{i}x = \biggl[0,\frac{x}{5i} \biggr]\quad \text{for all } i \in \mathcal{N}, x \in C . \end{aligned}$$
(4.1)
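As noted below, each \(S_{i}\) is 0-demicontractive; indeed, for every \(x \in C\), \(S_{i}0 = \{0\}\) and

$$ \mathcal{H}(S_{i}x, S_{i}0) = \sup_{t \in [0,1]} \biggl\Vert \frac{tx}{5i} \biggr\Vert = \frac{ \Vert x \Vert }{5i} \leq \Vert x - 0 \Vert , $$

so \(S_{i}\) is quasi-nonexpansive and hence 0-demicontractive.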

It can easily be seen that \(S_{i}\) is 0-demicontractive for all \(i \in \mathcal{N}\) and \(F(S_{i}) = \{0\}\); also \(Sol = \{0\}\). We define \(\phi : \ell _{2} \to \ell _{2}\) by \(\phi (x) = \frac{x}{2}\), where \(x = (x_{1},x_{2},\dots ,x_{j},\dots )\), \(x_{j} \in \mathbb{R}\), and take \(\lambda _{1} =0.01\), \(\mu = 0.5\), \(\delta _{n} = \frac{1}{10(n+1)} \), \(\epsilon _{n} = \delta _{n}^{2}\), and \(\{\beta _{n,i}\}\) as defined in Example 4.1. For Algorithm 1.2, we take \(\lambda _{n} = 0.01\), \(\alpha _{n} = \frac{1}{10(n+1)}\), and for Algorithm 1.3, we take \(\lambda _{1} = 0.01\) and \(\mu = 0.5\). We test the algorithms with the following initial values:

  1. Case I:

\(x_{0} = (1,1,0,\dots ,0,\dots )\), \(x_{1} = (1,0,1,\dots ,0,\dots )\),

  2. Case II:

\(x_{0} = (1,0,0,\dots ,0,\dots )\), \(x_{1} = (0,1,1,\dots ,0,\dots )\),

  3. Case III:

\(x_{0} = (1,0,1,\dots ,0,\dots )\), \(x_{1} = (1,1,0,\dots ,0,\dots )\),

  4. Case IV:

\(x_{0} = (1,1,0,\dots ,0,\dots )\), \(x_{1} = (1,0,0,\dots ,0,\dots )\).

We use the stopping criterion \(\Vert x_{n+1}-x_{n}\Vert < 10^{-5}\) for the numerical computation and plot the graphs of \(\Vert x_{n+1} -x_{n}\Vert \) against the number of iterations in each case. The numerical results can be seen in Table 2 and Fig. 2.

Figure 2. Example 4.2, top left: Case I; top right: Case II; bottom left: Case III; bottom right: Case IV

Table 2 Computational result for Example 4.2

Example 4.3

In this example, we take \(H = L_{2}([0,1])\) with the inner product \(\langle x,y \rangle = \int _{0}^{1}x(t)y(t)\,dt\) and the norm \(\Vert x\Vert = (\int _{0}^{1}x^{2}(t)\,dt )^{\frac{1}{2}}\) for all \(x,y \in L_{2}([0,1])\). The set C is defined by \(C = \{x \in H:\int _{0}^{1}(t^{2}+1)x(t)\,dt \leq 1 \}\), and the bifunction \(f:C \times C \to \mathbb{R}\) is given by \(f(x,y) = \langle Ax, y-x \rangle \), where \(Ax(t) = \max \{0, x(t)\}\), \(t \in [0,1]\), for all \(x \in H\). We define the mapping \(S_{i}:H \to 2^{H}\) by \(S_{i}(x)= \{ (-\frac{1+i}{i} )x \} \) for each \(i \in \mathcal{N}\). It is not difficult to show that \(S_{i}\) is demicontractive with \(\kappa _{i} = \frac{1}{1+2i}\), \(F(S_{i}) = \{0\}\), and that \(I-S_{i}\) is demiclosed at 0. We take \(\phi (x) = \frac{x}{2}\), \(\mu = 0.05\), \(\lambda _{1} = 0.25\), \(\epsilon _{n} = \frac{1}{(n+1)^{2}}\), \(\delta _{n} = \frac{1}{n+1}\), and \(\{\beta _{n,i}\}\) as defined in Example 4.1. We test Algorithms 3.1, 1.2, and 1.3 for the following initial values:

  1. Case I:

    \(x_{0} = t^{2} + 1\) and \(x_{1} = \sin (2t)\),

  2. Case II:

    \(x_{0} = \frac{\cos (2t)}{2}\) and \(x_{1} = \frac{\sin (3t)}{30}\),

  3. Case III:

    \(x_{0} = \frac{\exp (5t)}{80}\) and \(x_{1} = \frac{\exp (2t)}{40}\),

  4. Case IV:

    \(x_{0} = \cos (2t)\) and \(x_{1} = \exp (-3t^{2})\).
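Note that, since \(f(x,y) = \langle Ax, y-x \rangle \), the minimization step defining \(y_{n}\) reduces to the projection \(y_{n} = P_{C}(w_{n} - \lambda _{n}Aw_{n})\), and C is a half-space in \(L_{2}([0,1])\) generated by \(a(t) = t^{2}+1\), so this projection has the closed form recalled in Sect. 1:

$$ P_{C}(x) = x - \frac{\max \{0, \int _{0}^{1}(t^{2}+1)x(t)\,dt - 1 \}}{\int _{0}^{1}(t^{2}+1)^{2}\,dt} \bigl(t^{2}+1 \bigr), \quad \text{where } \int _{0}^{1} \bigl(t^{2}+1 \bigr)^{2}\,dt = \frac{28}{15}. $$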

We use \(\Vert x_{n+1}-x_{n}\Vert < 10^{-4}\) as a stopping criterion for the numerical computation and plot the graphs of \(\Vert x_{n+1}-x_{n}\Vert \) against the number of iterations in each case. The numerical results can be seen in Table 3 and Fig. 3.

Figure 3. Example 4.3, top left: Case I; top right: Case II; bottom left: Case III; bottom right: Case IV

Table 3 Computational result for Example 4.3

5 Conclusion

This paper presents a self-adaptive inertial subgradient extragradient method for finding a solution of a pseudomonotone equilibrium problem and a common fixed point of a countable family of κ-demicontractive mappings in Hilbert spaces. The algorithm combines an inertial extrapolation step and a viscosity approximation method, while the stepsize is defined by a monotone nonincreasing rule. A strong convergence result is proved without a prior estimate of the Lipschitz-like constants of the pseudomonotone bifunction. Finally, some numerical examples were given to show the efficiency and accuracy of the method with respect to related methods in the literature. The result in this paper improves and extends the corresponding results of [1–6, 22, 24, 25, 32, 42, 48].

References

  1. Anh, P.N.: A hybrid extragradient method for pseudomonotone equilibrium problems and fixed point problems. Bull. Malays. Math. Sci. Soc. 36(1), 107–116 (2013)


  2. Anh, P.N., An, L.T.H.: The subgradient extragradient method extended to equilibrium problems. Optimization 64(2), 225–248 (2015)


  3. Anh, P.N., An, L.T.H.: New subgradient extragradient methods for solving monotone bilevel equilibrium problems. Optimization 68(11), 2097–2122 (2019)


  4. Anh, P.N., Muu, L.D.: A hybrid subgradient algorithm for nonexpansive mappings and equilibrium problems. Optim. Lett. 8(2), 727–738 (2014)


  5. Anh, P.N., Quoc, T.D., Son, D.X.: Hybrid extragradient-type methods for finding a common solution of an equilibrium problem and a family of strict pseudo-contraction mappings. Appl. Math. 3, 1357–1367 (2012)


  6. Anh, P.N., Thuy, L.Q., Anh, T.H.: Strong convergence theorem for the lexicographic Ky Fan inequality. Vietnam J. Math. 46(3), 517–530 (2018)


  7. Antipin, A.S.: On a method for convex programs using a symmetrical modification of the Lagrange function. Èkon. Mat. Metody 12, 1164–1173 (1976)


Attouch, H., Goudou, X., Redont, P.: The heavy ball with friction. I. The continuous dynamical system. Commun. Contemp. Math. 2, 1–34 (2000)


  9. Attouch, H., Peypouquet, J., Redont, P.: A dynamical approach to an inertial forward—backward algorithm for convex minimization. SIAM J. Optim. 24(1), 232–256 (2014)


Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248–264 (2001)


Bigi, G., Castellani, M., Pappalardo, M., Passacantando, M.: Nonlinear Programming Techniques for Equilibria. Springer, Berlin (2019)


  12. Bigi, G., Passacantando, M.: Descent and penalization techniques for equilibrium problems with nonlinear constraints. J. Optim. Theory Appl. 164, 804–818 (2015)


  13. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)


  14. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 148, 318–335 (2011)


Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for variational inequality problems in Euclidean space. Optimization 61, 1119–1132 (2012)


  16. Chidume, C.E., Ezeora, J.N.: Krasnoselskii-type algorithm for family of multivalued strictly pseudocontractive mappings. Fixed Point Theory Appl. 2014, 111 (2014)


  17. Cholamjiak, W., Pholasa, N., Suantai, S.: A modified inertial shrinking projection method for solving inclusion problems and quasi nonexpansive multivalued mappings. Comput. Appl. Math. 37, 1–25 (2018)


  18. Dadashi, V., Iyiola, O.S., Shehu, Y.: The subgradient extragradient method for pseudomonotone equilibrium problem. Optimization (2019). https://doi.org/10.1080/02331934.2019.1625899


  19. Daniele, P., Giannessi, F., Maugeri, A.: Equilibrium Problems and Variational Models. Kluwer Academic, Dordrecht (2003)


  20. Dinh, B.V., Kim, D.S.: Projection algorithms for solving nonmonotone equilibrium problems in Hilbert space. J. Comput. Appl. Math. 302, 106–117 (2016)


  21. Güler, O.: On the convergence of the proximal point algorithms for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991)


  22. Hieu, D.V.: Halpern subgradient extragradient method extended to equilibrium problems. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 111, 823–840 (2017)


  23. Hieu, D.V.: Hybrid projection methods for equilibrium problems with non-Lipschitz type bifunctions. Math. Methods Appl. Sci. 40, 4065–4079 (2017)


  24. Hieu, D.V.: New extragradient method for a class of equilibrium problems in Hilbert spaces. Appl. Anal. 97(5), 811–824 (2018). https://doi.org/10.1080/00036811.2017.1292350


  25. Hieu, D.V., Cho, Y.-J., Xiao, Y.-B.: Modified extragradient algorithms for solving equilibrium problems. Optimization (2018). https://doi.org/10.1080/02331934.2018.1505886


  26. Iiduka, H.: A new iterative algorithm for the variational inequality problem over the fixed point set of a firmly nonexpansive mapping. Optimization 59, 873–885 (2010)


  27. Iiduka, H., Yamada, I.: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 19, 1881–1893 (2009)


Jolaoso, L.O., Alakoya, T.O., Taiwo, A., Mewomo, O.T.: A parallel combination extragradient method with Armijo line searching for finding common solution of finite families of equilibrium and fixed point problems. Rend. Circ. Mat. Palermo (2019). https://doi.org/10.1007/s12215-019-00431-2

  29. Jolaoso, L.O., Alakoya, T.O., Taiwo, A., Mewomo, O.T.: An inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert spaces. Optimization (2020). https://doi.org/10.1080/02331934.2020.1716752


  30. Jolaoso, L.O., Karahan, I.: A general alternative regularization method with line search technique for solving split equilibrium and fixed point problems in Hilbert spaces. Comput. Appl. Math. 39, 150 (2020) https://doi.org/10.1007/s40314-020-01178-8


  31. Jolaoso, L.O., Oyewole, K.O., Okeke, C.C., Mewomo, O.T.: A unified algorithm for solving split generalized mixed equilibrium problem and fixed point of nonspreading mapping in Hilbert space. Demonstr. Math. 51, 211–232 (2018)


  32. Kim, J.K., Anh, P.N., Nam, Y.-M.: Strong convergence of an extragradient method for equilibrium problems and fixed point problems. J. Korean Math. Soc. 49(1), 187–200 (2012)


  33. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747–756 (1976)


  34. Lorenz, D., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51(2), 311–325 (2015)


Maingé, P.E.: Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 219(1), 223–236 (2008)


  36. Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)


  37. Maingé, P.E.: Projected subgradient techniques and viscosity methods for optimization with variational inequality constraints. Eur. J. Oper. Res. 205, 501–506 (2010)


  38. Moudafi, A.: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 241(1), 46–55 (2000)


Moudafi, A.: Second-order differential proximal methods for equilibrium problems. J. Inequal. Pure Appl. Math. 4(1), 18 (2003)


  40. Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 18, 1159–1166 (1992)


  41. Okeke, C.C., Jolaoso, L.O., Isiogugu, F.O., Mewomo, O.T.: Solving split equality equilibrium and fixed point problems in Banach spaces without prior knowledge of operator norm. J. Nonlinear Convex Anal. 20(4), 661–683 (2019)


  42. Quoc, T.D., Anh, P.N., Muu, L.D.: Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 52, 139–159 (2012)


  43. Quoc, T.D., Muu, L.D., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)


  44. Rehman, H., Kumam, P., Cho, Y.J., Suleiman, Y.I., Kumam, W.: Popov’s extragradient explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. (2020). https://doi.org/10.1080/10556788.2020.1734805


  45. Rudin, W.: Functional Analysis. McGraw-Hill, New York (1991)


  46. Tada, A., Takahashi, W.: Strong convergence theorem for an equilibrium problem and a nonexpansive mapping. In: Takahashi, W., Tanaka, T. (eds.) Nonlinear Analysis and Convex Analysis. Yokohama Publishers, Yokohama (2006)


  47. Taiwo, A., Jolaoso, L.O., Mewomo, O.T.: Parallel hybrid algorithm for solving pseudomonotone equilibrium and split common fixed point problems. Bull. Malays. Math. Sci. Soc. 43, 1893–1918 (2020). https://doi.org/10.1007/s40840-019-00781-1


Yang, J., Liu, H.: The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space. Optim. Lett. (2019). https://doi.org/10.1007/s11590-019-01474-1



Acknowledgements

The authors acknowledge with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University for making their facilities available for the research.

Availability of data and materials

Not applicable.

Funding

The first author is supported by the postdoctoral research grant from the Sefako Makgatho Health Sciences University, South Africa.

Author information


Contributions

All authors worked equally on the results and approved the final manuscript.

Corresponding author

Correspondence to Lateef Olakunle Jolaoso.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Jolaoso, L.O., Aphane, M. A self-adaptive inertial subgradient extragradient method for pseudomonotone equilibrium and common fixed point problems. Fixed Point Theory Appl 2020, 9 (2020). https://doi.org/10.1186/s13663-020-00676-y

