Abstract

In this paper, we consider the split feasibility problem in Banach spaces. By applying the shrinking projection method, we propose an iterative method for solving this problem. We show that, under two different choices of the stepsize, the algorithm converges strongly to a solution of the problem.

1. Introduction

In this paper, we consider the split feasibility problem [1]. It is very useful in dealing with problems arising from various applied disciplines (see, e.g., [2–6]). More precisely, the split feasibility problem is to find a point $x$ satisfying the following property:
$$x \in C \quad \text{and} \quad Ax \in Q, \tag{1}$$
where $C$ and $Q$ are nonempty closed convex subsets of Banach spaces $E_1$ and $E_2$, respectively, and $A: E_1 \to E_2$ is a bounded linear operator.
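To fix ideas, the following is a minimal finite-dimensional instance of problem (1); the concrete sets $C$, $Q$ and the matrix $A$ below are illustrative assumptions made for this sketch, not data taken from the paper.

```python
import numpy as np

# A toy instance of (1) with E1 = R^2 and E2 = R: C is the unit box,
# Q = [2, 3], and A sums the coordinates.
A = np.array([[1.0, 1.0]])               # A : R^2 -> R

def P_C(x):                              # metric projection onto C = [0,1]^2
    return np.clip(x, 0.0, 1.0)

def P_Q(y):                              # metric projection onto Q = [2,3]
    return np.clip(y, 2.0, 3.0)

x = np.array([1.0, 1.0])                 # candidate point
print(np.allclose(P_C(x), x))            # True: x lies in C
print(np.allclose(P_Q(A @ x), A @ x))    # True: Ax = 2 lies in Q, so x solves (1)
```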

The split feasibility problem was first treated in Euclidean spaces and was recently extended to more general frameworks, including Hilbert spaces and Banach spaces. In Hilbert spaces, Byrne [7] introduced the CQ algorithm:
$$x_{n+1} = P_C\big(x_n - \tau A^*(I - P_Q)Ax_n\big), \tag{2}$$
where $\tau$ is a properly chosen stepsize, $A^*$ is the conjugate of $A$, $I$ is the identity operator, and $P_C$ and $P_Q$ denote the metric projections onto the respective sets. By using Polyak's gradient method, Wang [8] recently proposed another iterative algorithm:
$$x_{n+1} = P_C\big(x_n - \tau_n A^*(I - P_Q)Ax_n\big), \tag{3}$$
where $\tau_n$ is a properly chosen stepsize (see also [4, 7–13] for some related works). In the framework of Banach spaces, Schöpfer et al. [14] extended the CQ method as
$$x_{n+1} = \Pi_C J_{E_1}^{-1}\big[J_{E_1}(x_n) - \tau A^* J_{E_2}\big(Ax_n - P_Q(Ax_n)\big)\big], \tag{4}$$
where $\tau$ is a positive parameter, $J_{E_1}$ and $J_{E_2}$ are, respectively, the duality mappings on $E_1$ and $E_2$, $\Pi_C$ denotes the Bregman projection onto $C$, and $P_Q$ denotes the metric projection onto $Q$. The weak convergence of (4) is guaranteed if $E_1$ is $p$-uniformly convex and uniformly smooth and $J_{E_1}$ is sequentially weak-to-weak continuous. Recently, Takahashi [15] suggested a novel way for the split feasibility problem:
$$x_{n+1} = P_C J_{E_1}^{-1}\big(J_{E_1}(x_n) - \tau A^* J_{E_2}(Ax_n - P_Q Ax_n)\big). \tag{5}$$
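As a concrete illustration of (2), here is a minimal numerical sketch in a finite-dimensional Hilbert space; the restriction $\tau < 2/\|A\|^2$ is the classical stepsize condition for the CQ method, while the specific sets and operator are assumptions reused from the toy example above.

```python
import numpy as np

# Minimal sketch of the CQ iteration (2), reusing the toy problem above.
A = np.array([[1.0, 1.0]])
P_C = lambda x: np.clip(x, 0.0, 1.0)      # projection onto C = [0,1]^2
P_Q = lambda y: np.clip(y, 2.0, 3.0)      # projection onto Q = [2,3]

tau = 1.9 / np.linalg.norm(A, 2) ** 2     # stepsize below 2/||A||^2
x = np.zeros(2)
for _ in range(100):
    residual = A @ x - P_Q(A @ x)         # (I - P_Q)Ax
    x = P_C(x - tau * A.T @ residual)     # gradient step, then project onto C

print(x, A @ x)                           # approaches a point of C with Ax in Q
```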

By applying the shrinking projection method, he [16] also proposed another method:
$$\begin{cases} y_n = P_C J_{E_1}^{-1}\big(J_{E_1}(x_n) - \tau A^* J_{E_2}(Ax_n - P_Q Ax_n)\big),\\ C_{n+1} = \big\{z \in C_n : \langle y_n - z, J_{E_1}(x_n) - J_{E_1}(y_n)\rangle \ge 0\big\},\\ x_{n+1} = P_{C_{n+1}}(x_1). \end{cases} \tag{6}$$

Instead of weak convergence, Takahashi proved the strong convergence of both methods under the assumption that $E_1$ is uniformly convex and smooth, which is clearly weaker than that used in [14]. Following the above works, Wang [10] recently proposed a new method, which generates a sequence $\{x_n\}$ as
$$x_{n+1} = J_{E_1}^{-1}\big(J_{E_1}(x_n) - \tau_n\big[J_{E_1}(x_n - P_C x_n) + A^* J_{E_2}(Ax_n - P_Q Ax_n)\big]\big). \tag{7}$$

Our aim in this paper is to continue the above works by constructing new iterative methods in Banach spaces. By applying ideas (6) and (7), we introduce a new iterative algorithm and propose two different choices of the stepsize. We show that if the spaces involved are smooth and uniformly convex, then the algorithm converges strongly under both choices of the stepsize. It is also worth noting that one choice of the stepsize does not need any a priori knowledge of the operator norm.

2. Preliminaries

In what follows, we shall assume that the split feasibility problem is consistent, that is, its solution set, denoted by $S$, is nonempty. The notation “$\to$” stands for strong convergence, “$\rightharpoonup$” represents weak convergence, and $\omega_w(\{x_n\})$ is the set of weak cluster points of a sequence $\{x_n\}$. Let $S_E$ and $B_E$, respectively, be the unit sphere and the unit ball of a Banach space $E$. $T^{-1}(0)$ denotes the null-point set of an operator $T$ defined on $E$. For $x \in E$ and $x^* \in E^*$, we let $\langle x, x^*\rangle$ denote the value of $x^*$ at $x$, and we use $\|\cdot\|$ for the norms of both $E$ and $E^*$.

Definition 1. Let $E$ be a real Banach space.

(1) The modulus of convexity $\delta_E: [0, 2] \to [0, 1]$ is defined as
$$\delta_E(\epsilon) = \inf\left\{1 - \frac{\|x + y\|}{2} : x, y \in B_E,\ \|x - y\| \ge \epsilon\right\}. \tag{8}$$

(2) The modulus of smoothness $\rho_E: [0, \infty) \to [0, \infty)$ is defined by
$$\rho_E(\tau) = \sup\left\{\frac{\|x + \tau y\| + \|x - \tau y\|}{2} - 1 : x, y \in S_E\right\}. \tag{9}$$

(3) The duality mapping $J_E: E \to 2^{E^*}$ is defined by
$$J_E(x) = \big\{x^* \in E^* : \langle x, x^*\rangle = \|x\|^2 = \|x^*\|^2\big\}. \tag{10}$$

Definition 2. Let $E$ be a real Banach space.

(1) $E$ is called strictly convex if $\frac{\|x + y\|}{2} < 1$ for all $x, y \in S_E$ with $x \ne y$.

(2) $E$ is called smooth if the limit $\lim_{t \to 0}\frac{\|x + ty\| - \|x\|}{t}$ exists for each $x, y \in S_E$.

(3) $E$ is called uniformly convex if $\delta_E(\epsilon) > 0$ for any $\epsilon \in (0, 2]$.

(4) $E$ is called uniformly smooth if $\lim_{\tau \to 0}\frac{\rho_E(\tau)}{\tau} = 0$.

(5) $E$ is called $p$-uniformly convex if there exist $p \ge 2$ and a constant $c > 0$ such that $\delta_E(\epsilon) \ge c\epsilon^p$ for all $\epsilon \in [0, 2]$.
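As a standard worked example (added here only for illustration, not part of the paper's argument), the parallelogram law shows that every Hilbert space $H$ is uniformly convex, indeed $2$-uniformly convex, and uniformly smooth:
$$\delta_H(\epsilon) = 1 - \sqrt{1 - \frac{\epsilon^2}{4}} \ge \frac{\epsilon^2}{8}, \qquad \rho_H(\tau) = \sqrt{1 + \tau^2} - 1 \le \frac{\tau^2}{2}.$$
Indeed, for $x, y \in B_H$ with $\|x - y\| \ge \epsilon$, the identity $\|x + y\|^2 = 2\|x\|^2 + 2\|y\|^2 - \|x - y\|^2 \le 4 - \epsilon^2$ gives the first bound.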

Lemma 1 (see [17–19]). If $E$ is uniformly convex, then $E^*$ is uniformly smooth; moreover, $E$ is strictly convex and reflexive.

Lemma 2 (see [17–19]). Let $\{x_n\}$ be a sequence in $E$ such that $x_n \rightharpoonup x$ and $\|x_n\| \to \|x\|$ as $n \to \infty$. If $E$ is uniformly convex, then $x_n \to x$.

Lemma 3 (see [17–19]). Let $\{x_n\}$ and $\{y_n\}$ be two sequences in $E$ such that $\limsup_{n\to\infty}\|x_n\| \le c$, $\limsup_{n\to\infty}\|y_n\| \le c$, and $\|x_n + y_n\| \to 2c$ as $n \to \infty$. If $E$ is uniformly convex, then $\|x_n - y_n\| \to 0$.

Lemma 4 (see [17–19]). Let $J_E$ be the duality mapping on $E$.

(1) $J_E$ is surjective if and only if $E$ is reflexive.

(2) $J_E$ is injective if and only if $E$ is strictly convex.

(3) $J_E$ is single-valued if and only if $E$ is smooth.

(4) If $E$ is smooth, then $J_E$ is monotone, that is,
$$\langle x - y, J_E(x) - J_E(y)\rangle \ge 0, \quad x, y \in E. \tag{11}$$
Moreover, if $E$ is further strictly convex, then $J_E$ is strictly monotone, that is,
$$\langle x - y, J_E(x) - J_E(y)\rangle > 0, \quad x, y \in E,\ x \ne y. \tag{12}$$

(5) If $E$ is reflexive, smooth, and strictly convex, then $J_E$ is a one-to-one single-valued mapping and $J_E^{-1} = J_{E^*}$, where $J_{E^*}$ is the duality mapping of $E^*$.

The Bregman distance with respect to the convex function $\frac{1}{2}\|\cdot\|^2$ is given by
$$\Delta(x, y) = \frac{1}{2}\|y\|^2 - \frac{1}{2}\|x\|^2 - \langle y - x, J_E(x)\rangle, \quad x, y \in E. \tag{13}$$

This notion goes back to Bregman [20] and is now successfully used in various optimization problems in Banach spaces (see, e.g., [21, 22]). In general, the Bregman distance is not a metric due to the absence of symmetry, but it has some distance-like properties.
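For instance (a standard computation, recalled here only for illustration), in a Hilbert space $H$ one has $J_H = I$, so (13) collapses to the familiar squared distance:
$$\Delta(x, y) = \frac{1}{2}\|y\|^2 - \frac{1}{2}\|x\|^2 - \langle y - x, x\rangle = \frac{1}{2}\|x - y\|^2,$$
which is symmetric; outside the Hilbert setting, the symmetry $\Delta(x, y) = \Delta(y, x)$ generally fails.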

Definition 3. Let $K$ be a nonempty closed convex subset of $E$.

(1) The metric projection $P_K: E \to K$ is defined as
$$P_K(x) = \operatorname*{arg\,min}_{y \in K}\|x - y\|, \quad x \in E. \tag{14}$$

(2) The Bregman projection $\Pi_K: E \to K$ is defined as
$$\Pi_K(x) = \operatorname*{arg\,min}_{y \in K}\Delta(x, y), \quad x \in E. \tag{15}$$

In Hilbert spaces, the metric and Bregman projections coincide, but in general they are completely different. More importantly, in Banach spaces the metric projection does not share the descent property enjoyed by the Bregman projection. We now collect some properties of metric projections.
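For intuition, here are closed-form metric projections onto two simple convex sets in Euclidean space (illustrative helpers only; in this Hilbert setting they agree with the Bregman projections):

```python
import numpy as np

def proj_box(x, lo, hi):
    """Metric projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, b):
    """Metric projection onto the half-space {z : <a, z> <= b}."""
    gap = a @ x - b
    return x if gap <= 0 else x - (gap / (a @ a)) * a

x = np.array([3.0, -1.0])
print(proj_box(x, 0.0, 1.0))                         # -> [1. 0.]
print(proj_halfspace(x, np.array([1.0, 1.0]), 1.0))  # -> [2.5 -1.5]
```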

Lemma 5 (see [14]). Let $\{x_n\}$ be a sequence in $E$ and let $K$ be a nonempty closed convex subset of $E$. Then, for $x \in E$, the following hold.

(1) $\langle P_K(x) - y, J_E(x - P_K(x))\rangle \ge 0$ for all $y \in K$.

(2) $\langle x - y, J_E(x - P_K(x))\rangle \ge \|x - P_K(x)\|^2$ for all $y \in K$.

(3) If $x_n \rightharpoonup x$ and $\|x_n - P_K(x_n)\| \to 0$, then $x \in K$.

3. Convergence Analysis

To construct our algorithm, we need the following lemma. It converts the split feasibility problem to an equivalent null-point problem, which indeed amounts to a fixed point problem.

Lemma 6 (see [10]). Let $T: E_1 \to E_1^*$ be defined by
$$Tx = J_{E_1}(x - P_C x) + A^* J_{E_2}(Ax - P_Q Ax), \quad x \in E_1.$$
Then, $S = T^{-1}(0)$.
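For intuition, here is a short verification sketch of the nontrivial inclusion (stated under the consistency assumption; the argument only uses Lemma 5). For every $z \in S$ and $x \in E_1$,
$$\langle x - z, Tx\rangle = \langle x - z, J_{E_1}(x - P_C x)\rangle + \langle Ax - Az, J_{E_2}(Ax - P_Q Ax)\rangle \ge \|x - P_C x\|^2 + \|Ax - P_Q Ax\|^2,$$
so $Tx = 0$ forces $x = P_C x$ and $Ax = P_Q Ax$, that is, $x \in S$; the converse inclusion is immediate from the definition of $T$.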

By applying idea (5) to Lemma 6, we thus can propose the following algorithm for solving the split feasibility problem in Banach spaces. Choose $x_1 \in E_1$ and set $C_1 = E_1$. Given $C_n$ and $x_n$, update by the iteration formula:
$$\begin{cases} C_{n+1} = \big\{z \in C_n : \langle x_n - z, Tx_n\rangle \ge \tau_n\|Tx_n\|^2\big\},\\ x_{n+1} = P_{C_{n+1}}(x_1), \end{cases} \tag{16}$$
where $\tau_n > 0$ is the stepsize and $T$ is the operator defined in Lemma 6.
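The following toy script mimics this scheme in the Euclidean setting of Remark 1 below (where both duality mappings are the identity). Projecting onto the shrinking set $C_{n+1}$, an intersection of half-space cuts, is delegated here to Dykstra's alternating projection algorithm; that inner solver, as well as all concrete data, are implementation conveniences assumed for this sketch, not part of the paper's method.

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])
P_C = lambda x: np.clip(x, -1.0, 1.0)            # C = [-1,1]^2
P_Q = lambda y: y / max(1.0, np.linalg.norm(y))  # Q = unit ball

def T(x):
    """Residual operator of Lemma 6 in the Hilbert case (J = I)."""
    return (x - P_C(x)) + A.T @ (A @ x - P_Q(A @ x))

def project_onto_cuts(x1, cuts, iters=200):
    """Dykstra's algorithm for the intersection of half-spaces {z : <a,z> <= b}."""
    x = x1.copy()
    corr = [np.zeros_like(x1) for _ in cuts]
    for _ in range(iters):
        for i, (a, b) in enumerate(cuts):
            y = x + corr[i]
            gap = a @ y - b
            x = y - (gap / (a @ a)) * a if gap > 0 else y
            corr[i] = y - x
    return x

x1 = np.array([5.0, 5.0])
tau = 1.0 / (2.0 * (1.0 + np.linalg.norm(A, 2) ** 2))
x, cuts = x1.copy(), []
for _ in range(30):
    t = T(x)
    if np.linalg.norm(t) < 1e-12:
        break                                    # T x = 0: x already solves the problem
    # <x_n - z, t> >= tau ||t||^2  <=>  <t, z> <= <t, x_n> - tau ||t||^2
    cuts.append((t, t @ x - tau * (t @ t)))
    x = project_onto_cuts(x1, cuts)

print(x, np.linalg.norm(T(x)))                   # residual should become small
```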

Lemma 7. Assume that both $E_1$ and $E_2$ are reflexive, smooth, and strictly convex. If the stepsize is chosen so that $0 < \tau_n \le \frac{1}{2(1 + \|A\|^2)}$, then for each $n$, the set $C_n$ is nonempty, closed, and convex. Hence, the proposed algorithm is well defined.

Proof. It suffices to show that $C_n$ is nonempty since it is clearly closed and convex. To this end, we now show by induction that $S \subseteq C_n$ for all $n$, and that $S \subseteq C_1 = E_1$ is obvious. Suppose that $S \subseteq C_n$ for some $n$. Take any $z \in S$. Then, we have $P_C z = z$ and $P_Q Az = Az$. Furthermore, we have $z \in C$ and $Az \in Q$, which from Lemma 5 yields
$$\langle x_n - z, J_{E_1}(x_n - P_C x_n)\rangle \ge \|x_n - P_C x_n\|^2, \tag{17}$$
$$\langle Ax_n - Az, J_{E_2}(Ax_n - P_Q Ax_n)\rangle \ge \|Ax_n - P_Q Ax_n\|^2. \tag{18}$$
By a simple calculation, we have
$$\|Tx_n\| \le \|x_n - P_C x_n\| + \|A\|\,\|Ax_n - P_Q Ax_n\|. \tag{19}$$
Hence, we have
$$\|Tx_n\|^2 \le 2(1 + \|A\|^2)\big(\|x_n - P_C x_n\|^2 + \|Ax_n - P_Q Ax_n\|^2\big). \tag{20}$$
It then follows from (17) and (18) that
$$\langle x_n - z, Tx_n\rangle \ge \|x_n - P_C x_n\|^2 + \|Ax_n - P_Q Ax_n\|^2. \tag{21}$$
Substituting (20) into the above inequality, we have
$$\langle x_n - z, Tx_n\rangle \ge \frac{\|Tx_n\|^2}{2(1 + \|A\|^2)}. \tag{22}$$
By our choice of $\tau_n$, we see that
$$\langle x_n - z, Tx_n\rangle \ge \tau_n\|Tx_n\|^2. \tag{23}$$
This implies that $z \in C_{n+1}$. Since $z$ is chosen in $S$ arbitrarily, we conclude that $S \subseteq C_{n+1}$. Consequently, $S \subseteq C_n$ for all $n$. Now it is clear that the set $C_n$ is nonempty, closed, and convex. Thus, the proposed algorithm is well defined.
Now let us state the convergence of the proposed algorithm.

Theorem 1. Assume that $E_1$ is uniformly convex and smooth and that $E_2$ is reflexive, smooth, and strictly convex. If the stepsize is chosen so that $\liminf_{n\to\infty}\tau_n > 0$ and $\tau_n \le \frac{1}{2(1 + \|A\|^2)}$, then the sequence $\{x_n\}$ generated by (16) converges strongly to $x^\dagger$, where $x^\dagger = P_S(x_1)$.

Proof. We first show the following equality:
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0. \tag{24}$$
To this end, let $z \in S$. From the previous lemma, it is clear that $S \subseteq C_{n+1} \subseteq C_n$ and $x_n = P_{C_n}(x_1)$. Thus, we have for each $n$,
$$\|x_n - x_1\| \le \|x_{n+1} - x_1\| \le \|z - x_1\|. \tag{25}$$
This indicates that $\{\|x_n - x_1\|\}$ is nondecreasing and bounded above; thus, the limit of $\{\|x_n - x_1\|\}$ exists. Now set
$$r = \lim_{n\to\infty}\|x_n - x_1\|. \tag{26}$$
We have
$$\limsup_{n\to\infty}\|(x_n - x_1) + (x_{n+1} - x_1)\| \le 2r. \tag{27}$$
On the other hand, we have
$$\|(x_n - x_1) + (x_{n+1} - x_1)\| = 2\left\|\frac{x_n + x_{n+1}}{2} - x_1\right\| \ge 2\|x_n - x_1\|, \tag{28}$$
where the inequality follows from the fact that $\frac{x_n + x_{n+1}}{2} \in C_n$. Altogether, we have
$$\lim_{n\to\infty}\|(x_n - x_1) + (x_{n+1} - x_1)\| = 2r. \tag{29}$$
Since $E_1$ is uniformly convex, this yields, by Lemma 3, that
$$\lim_{n\to\infty}\|(x_n - x_1) - (x_{n+1} - x_1)\| = 0. \tag{30}$$
Hence, we have $\|x_{n+1} - x_n\| = \|(x_{n+1} - x_1) - (x_n - x_1)\| \to 0$, which yields (24).
We next show that every weak cluster point of $\{x_n\}$ is a solution of the split feasibility problem. To this end, let $\hat x$ be any weak cluster point of $\{x_n\}$ and take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ converging weakly to $\hat x$. Since $x_{n+1} \in C_{n+1}$, we have $\tau_n\|Tx_n\|^2 \le \langle x_n - x_{n+1}, Tx_n\rangle \le \|x_n - x_{n+1}\|\,\|Tx_n\|$; since $\liminf_{n\to\infty}\tau_n > 0$, this together with (24) implies $\|Tx_n\| \to 0$. In view of (21) and the boundedness of $\{x_n\}$, we thus have
$$\lim_{n\to\infty}\|x_n - P_C x_n\| = \lim_{n\to\infty}\|Ax_n - P_Q Ax_n\| = 0. \tag{31}$$
By Lemma 5, $\hat x \in C$. On the other hand, for any $y^* \in E_2^*$, it follows that
$$\langle Ax_{n_k} - A\hat x, y^*\rangle = \langle x_{n_k} - \hat x, A^* y^*\rangle \longrightarrow 0, \tag{32}$$
implying $Ax_{n_k} \rightharpoonup A\hat x$. By Lemma 5, $A\hat x \in Q$. Altogether, $\hat x \in S$. Since $\hat x$ is arbitrary, we obtain the desired conclusion.
Finally, we prove that $\{x_n\}$ converges strongly to $x^\dagger = P_S(x_1)$. Now take any $\hat x \in \omega_w(\{x_n\})$. Then, $\hat x \in S$, and there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ converging weakly to $\hat x$. It then follows that
$$\|x_1 - x^\dagger\| \le \|x_1 - \hat x\| \le \liminf_{k\to\infty}\|x_1 - x_{n_k}\| \le \|x_1 - x^\dagger\|, \tag{33}$$
where the first and the last inequalities follow from the property of metric projections and the second one follows from the weak lower semicontinuity of the norm. Hence,
$$\|x_1 - \hat x\| = \lim_{k\to\infty}\|x_1 - x_{n_k}\| = \|x_1 - x^\dagger\|. \tag{34}$$
Since $E_1$ is strictly convex, the nearest point of $S$ to $x_1$ is unique, and so $\hat x = x^\dagger$. Since $\hat x$ is chosen arbitrarily, this implies that $\omega_w(\{x_n\})$ is exactly the single-point set $\{x^\dagger\}$, that is, $\{x_n\}$ converges weakly to $x^\dagger$. Note that $x_n - x_1 \rightharpoonup x^\dagger - x_1$ and, by (26) and (34), $\|x_n - x_1\| \to \|x^\dagger - x_1\|$. By Lemma 2, the uniform convexity of $E_1$ implies $x_n - x_1 \to x^\dagger - x_1$. This yields $x_n \to x^\dagger$, as desired.
As we see from the previous theorem, the choice of $\tau_n$ is related to the operator norm $\|A\|$. Thus, to implement this algorithm, one has to compute the norm $\|A\|$, which is generally not easy in practice. In what follows, we introduce another choice of $\tau_n$, which ultimately has no relation with $\|A\|$. By applying an idea in [4], we can propose another choice of the parameter as follows:
$$\tau_n = \begin{cases} \dfrac{\|x_n - P_C x_n\|^2 + \|Ax_n - P_Q Ax_n\|^2}{\|Tx_n\|^2}, & Tx_n \ne 0,\\[2mm] 1, & Tx_n = 0. \end{cases} \tag{35}$$
Now let us state the convergence of $\{x_n\}$ under this choice of $\tau_n$.
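In the Euclidean toy setting used above, this self-adaptive rule is a one-liner; it needs only the current residuals, not $\|A\|$ (again, a hedged sketch of the stepsize (35) as reconstructed here, with all data illustrative):

```python
import numpy as np

def adaptive_stepsize(x, A, P_C, P_Q):
    """Stepsize in the spirit of (35): no knowledge of ||A|| required."""
    rC = x - P_C(x)                 # residual with respect to C
    rQ = A @ x - P_Q(A @ x)         # residual with respect to Q
    t = rC + A.T @ rQ               # T x in the Hilbert case
    denom = t @ t
    if denom == 0.0:                # T x = 0: x already solves the problem
        return 1.0, t
    return (rC @ rC + rQ @ rQ) / denom, t
```

Replacing the fixed stepsize in the earlier loop by this rule reproduces the second variant of the algorithm in that toy setting.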

Theorem 2. Assume that $E_1$ is uniformly convex and smooth and that $E_2$ is reflexive, smooth, and strictly convex. Then, the proposed algorithm with (35) is well defined. Moreover, the sequence $\{x_n\}$ generated by (16) converges strongly to $x^\dagger$, where $x^\dagger = P_S(x_1)$.

Proof. We first show that the algorithm under (35) is well defined. To this end, it suffices to show that for each $n$, $C_n$ is nonempty. We now prove by induction that $S \subseteq C_n$ for all $n$; the case $n = 1$ is obvious. Suppose that $S \subseteq C_n$ for some $n$. Take any $z \in S$. Then, we have $P_C z = z$ and $P_Q Az = Az$. Furthermore, by Lemma 5, we have
$$\langle x_n - z, J_{E_1}(x_n - P_C x_n)\rangle \ge \|x_n - P_C x_n\|^2 \tag{36}$$
and
$$\langle Ax_n - Az, J_{E_2}(Ax_n - P_Q Ax_n)\rangle \ge \|Ax_n - P_Q Ax_n\|^2. \tag{37}$$
Consequently, we have
$$\langle x_n - z, Tx_n\rangle \ge \|x_n - P_C x_n\|^2 + \|Ax_n - P_Q Ax_n\|^2 = \tau_n\|Tx_n\|^2, \tag{38}$$
where the equality follows from (35) (if $Tx_n = 0$, then (38) holds trivially). This implies that $z \in C_{n+1}$. Since $z$ is chosen in $S$ arbitrarily, we conclude that $S \subseteq C_{n+1}$. Consequently, $S \subseteq C_n$ for all $n$. Now it is clear that the set $C_n$ is nonempty, closed, and convex. Thus, the proposed algorithm is well defined.
We now prove that the sequence generated by (16) converges strongly to $x^\dagger = P_S(x_1)$. From the proof of the previous theorem, it suffices to verify that (31) still holds. Similarly, we obtain (24). Since $x_{n+1} \in C_{n+1}$, it follows from (35) that
$$\|x_n - P_C x_n\|^2 + \|Ax_n - P_Q Ax_n\|^2 = \tau_n\|Tx_n\|^2 \le \langle x_n - x_{n+1}, Tx_n\rangle \le \|x_n - x_{n+1}\|\,\|Tx_n\|. \tag{39}$$
On the other hand, as in (20), we see that
$$\|Tx_n\| \le \sqrt{2(1 + \|A\|^2)\big(\|x_n - P_C x_n\|^2 + \|Ax_n - P_Q Ax_n\|^2\big)}. \tag{40}$$
This together with (39) yields
$$\sqrt{\|x_n - P_C x_n\|^2 + \|Ax_n - P_Q Ax_n\|^2} \le \sqrt{2(1 + \|A\|^2)}\,\|x_n - x_{n+1}\|,$$
which together with (24) gives (31) as desired. Hence, the proof is complete.

Remark 1. It is worth noting that our algorithm is new even in Hilbert spaces. Indeed, in Hilbert spaces, the duality mappings reduce to the identity, and our algorithm is reduced to
$$\begin{cases} Tx_n = (x_n - P_C x_n) + A^*(Ax_n - P_Q Ax_n),\\ C_{n+1} = \big\{z \in C_n : \langle x_n - z, Tx_n\rangle \ge \tau_n\|Tx_n\|^2\big\},\\ x_{n+1} = P_{C_{n+1}}(x_1). \end{cases} \tag{41}$$
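The reduction rests on a standard fact, recalled here for completeness: in a Hilbert space $H$, identified with its dual via the Riesz representation, the duality mapping satisfies
$$J_H(x) = \big\{x^* : \langle x, x^*\rangle = \|x\|^2 = \|x^*\|^2\big\} = \{x\},$$
by the equality case of the Cauchy–Schwarz inequality, so both $J_{E_1}$ and $J_{E_2}$ in Lemma 6 disappear from the formulas.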

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (no. 11701154), Key Scientific Research Projects of Universities in Henan Province (nos. 19B110010 and 20A110029), and Peiyu Jijin of Luoyang Normal University (no. 2018-PYJJ-001).