Article

Projection Methods for Uniformly Convex Expandable Sets

by Stéphane Chrétien 1,2,3,* and Pascal Bondon 4

1 Laboratoire ERIC, Université Lyon 2, 69500 Bron, France
2 The Alan Turing Institute, London NW1 2DB, UK
3 Data Science Division, The National Physical Laboratory, Teddington TW11 0LW, UK
4 Laboratoire des Signaux et Systèmes, CentraleSupélec, CNRS, Université Paris-Saclay, 91190 Gif-sur-Yvette, France
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(7), 1108; https://doi.org/10.3390/math8071108
Submission received: 21 February 2020 / Revised: 19 June 2020 / Accepted: 22 June 2020 / Published: 6 July 2020
(This article belongs to the Special Issue New Trends in Machine Learning: Theory and Practice)

Abstract
Many problems in medical image reconstruction and machine learning can be formulated as nonconvex set-theoretic feasibility problems. Among the efficient methods that can be put to work in practice, successive projection algorithms have received a lot of attention in the case of convex constraint sets. In the present work, we provide a theoretical study of a general projection method in the case where the constraint sets are nonconvex and satisfy some additional structural properties. We apply our algorithm to image recovery in magnetic resonance imaging (MRI) and to signal denoising in the spirit of Cadzow's method.

1. Introduction

1.1. Background and Goal of the Paper

Many problems in applied mathematics, engineering, statistics and machine learning can be reduced to finding a point in the intersection of some subsets of a real separable Hilbert space $\mathcal{H}$. In mathematical terms, let $(S_i)_{i \in I}$ be a finite family of proximinal subsets of $\mathcal{H}$ (i.e., sets $S$ such that any $x \in \mathcal{H}$ admits a closest point in $S$) [1] with a non-empty intersection. We address the problem of finding a point in the intersection of the sets $(S_i)_{i \in I}$ using a successive projection method.
When the sets are closed and convex, the problem is known as the convex feasibility problem, and is the subject of extensive literature; see [2,3,4,5] and references therein. In this paper, we take one step further into the scarcely investigated topic of finding a common point in the intersection of nonconvex sets. Convex feasibility problems have been applied to an extremely wide range of topics in systems analysis and control, signal and image processing. Important examples are: model order reduction [6], controller design [7], tensor analysis [8], image recovery [9], electrical capacitance tomography [10], MRI [11,12], and stabilisation of quantum systems and application to quantum computation [13,14].
The extension of this problem to the nonconvex setting also has many applications, related to sparse estimation and, more specifically, low rank constraints, such as in control theory [15], signal denoising [16], phase retrieval [17] and structured matrix estimation [18,19], and it has great potential impact on the design of scalable algorithms for many machine learning problems such as deep neural networks [20]. Studies of projection methods in the nonconvex setting have been quite scarce in the literature [21,22,23] and a lot of work still remains to be done in order to understand the behavior of such methods for these difficult problems.
In the present paper, our goal is to investigate how the results in [22] can be improved in such a way that strong convergence can be obtained. The results proved in [22] assume that the sets involved in the feasibility problem can be written as a possibly uncountable union of closed convex sets, and that one of the sets is boundedly compact. Our study is based on the assumption that the convex sets in the family are uniformly convex, which allows us to remove the bounded compactness assumption. As will be seen in Section 4, uniform convexity can be obtained by simply modifying the algorithm, even in the case where the sets $S_i$ are not expandable into a family of uniformly convex sets.

1.2. Preliminary on Projections and Expansion into Convex Sets

The notation Card will denote the cardinality of a set. $\mathcal{H}$ will denote a real separable Hilbert space with scalar product $\langle \cdot \mid \cdot \rangle$, norm $|\cdot|$, and distance $d$. Let $S$ be a subset of $\mathcal{H}$. $\bar{S}$ denotes the closure of $S$. $S$ is proximinal if each point in $\mathcal{H}$ has at least one projection onto $S$. If $S$ is proximinal, then $S$ is closed. When $S$ is proximinal, $P_S$ is the projection point-to-set mapping onto $S$ defined by
$$(\forall x \in \mathcal{H}) \quad P_S(x) = \{ y \in S \mid |x - y| = \inf_{z \in S} |x - z| \}.$$
$p_S(x)$ denotes an arbitrary element of $P_S(x)$ and $d(x, S) = |x - p_S(x)|$. $B(x, \rho)$ denotes the closed ball of center $x$ and radius $\rho$. In the case of a projection onto a convex set $C$, Kolmogorov's criterion is
$$\langle x - P_C(x) \mid z - P_C(x) \rangle \le 0 \tag{1}$$
for all $z \in C$. Our work is a follow-on project to the work in [22], which introduced the notion of convex expandable sets in order to tackle certain feasibility problems involving rank-constrained matrices as described in [23]. We recall what it means for a nonconvex set to be expandable into a family of convex sets.
Definition 1.
A subset $S$ of $\mathcal{H}$ is said to be expandable into a family of convex sets when it is the union of non-trivial (i.e., not reduced to a single point) convex subsets $C_j$ of $\mathcal{H}$, i.e.,
$$S = \bigcup_{j \in J} C_j \tag{2}$$
where $J$ is a possibly uncountable index set. Any family $(C_j)_{j \in J}$ satisfying (2) is called a convex expansion of $S$.
Remark 1.
Uncountable unions often appear in practice. An important example is the set of matrices of rank $r$ in $\mathbb{R}^{n \times m}$ for $0 < r < \min\{n, m\}$. This set is the union of the rays passing through the null matrix and any matrix of rank $r$ different from zero. This constraint often appears in signal processing problems such as signal denoising [16] and, more generally, structured matrix estimation problems [19].
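As a quick numerical illustration of this remark (ours, not the paper's): the closest matrix of rank at most $r$ in Frobenius norm is obtained by truncating the SVD (the Eckart–Young theorem), and positive rescaling does not change the rank, which is exactly the ray structure underlying the expansion:

```python
import numpy as np

def project_rank(X, r):
    """Closest matrix of rank at most r in Frobenius norm, by truncating
    the SVD (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 5))
Y = project_rank(X, 2)
print(np.linalg.matrix_rank(Y))        # 2
print(np.linalg.matrix_rank(3.0 * Y))  # still 2: the rank-r set is a cone of rays
```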
Uniform convexity is an important concept which makes it possible to prove many strong convergence results in the framework of successive projection methods ([4], Section 5.3).
Definition 2.
A convex subset $C$ of $\mathcal{H}$ is uniformly convex with respect to a positive-valued nondecreasing function $b(t)$ if
$$\forall (x, y) \in C^2, \quad B\bigl((x+y)/2,\; b(|x-y|)\bigr) \subset C.$$
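For a concrete example (a standard fact, included here for illustration): a closed ball $B(0, R)$ is uniformly convex with $b(t) = t^2/(8R)$. Indeed, for $x, y \in B(0, R)$, the parallelogram identity gives
$$|(x+y)/2|^2 = \tfrac{1}{2}(|x|^2 + |y|^2) - \tfrac{1}{4}|x-y|^2 \le R^2 - \tfrac{1}{4}|x-y|^2,$$
so that $|(x+y)/2| \le R - |x-y|^2/(8R)$ and hence $B\bigl((x+y)/2,\, |x-y|^2/(8R)\bigr) \subset B(0, R)$.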
Uniform convexity will be instrumental in our analysis. We define the following condition, which is stronger than being expandable into a family of convex sets.
Definition 3.
A subset $S$ of $\mathcal{H}$ is said to be expandable into uniformly convex sets when the sets $C_j$ in (2) are moreover uniformly convex with the same function $b(\cdot)$. Any family $(C_j)_{j \in J}$ of uniformly convex sets satisfying (2) is called a uniformly convex expansion of $S$.

1.3. The Projection Algorithm

For any point $x$ in $\mathcal{H}$, and for the sake of simplicity, $p_{S_i}(x)$ will be denoted by $p_i(x)$. We will assume throughout that the family $\{S_i\}_{i \in I}$ is finite.
Definition 4.
(Method of Projection onto Uniformly Convex Expandable Sets)
Given a point $x_0$ in $\mathcal{H}$, two real numbers $\alpha$ and $\lambda$ in $(0, 1]$, a nonnegative integer $M$ and a sequence $(I_n)$ of subsets of $I$ satisfying the following conditions
$$C_1.\ (\forall n \in \mathbb{N}) \quad I_n \subset I, \quad I = \bigcup_{j=n}^{n+M} I_j \quad \text{and} \quad \mathrm{Card}(I_n) > 1$$
$$C_2.\ (\forall n \in \mathbb{N})(\forall i \in I_n) \quad \alpha \le \alpha_{i,n} \quad \text{and} \quad \sum_{j \in I_n} \alpha_{j,n} = 1,$$
the projection-like method considered in this paper is iteratively defined by
$$(\forall n \in \mathbb{N}) \quad \begin{array}{l} 1.\ \text{If } \exists j \in I_n \text{ such that } p_j(x_n) \in \bigcap_{i \in I_n} S_i \text{ then} \\ \qquad \text{if } I_n \ne I, \text{ set } I_n = I \text{ and go to } 1; \\ \qquad \text{if } I_n = I, \text{ set } \lambda_n = 1,\ \alpha_{j,n} = 1 \text{ and go to } 2; \\ \quad \text{else go to } 2. \\ 2.\ x_{n+1} = x_n + \lambda_n \sum_{i \in I_n} \alpha_{i,n}\,(p_i(x_n) - x_n) \end{array} \tag{3}$$
with $\lambda_n$ satisfying
$$C_3.\ (\forall n \in \mathbb{N}) \quad \lambda \le \lambda_n \le \mu_n \quad \text{where} \quad \mu_n = \begin{cases} \dfrac{\sum_{i \in I_n} \alpha_{i,n}\, |x_n - p_i(x_n)|^2}{\bigl|x_n - \sum_{i \in I_n} \alpha_{i,n}\, p_i(x_n)\bigr|^2} & \text{if } x_n \notin \bigcap_{i \in I_n} S_i \\[2mm] 1 & \text{if } x_n \in \bigcap_{i \in I_n} S_i. \end{cases}$$
Remark 2.
Using the assumption $\mathrm{Card}(I_n) > 1$, the coefficients $\alpha_{i,n}$, $i \in I_n$, can be chosen in such a way that the denominator in the definition of $\mu_n$ is not equal to zero. Note that one can relax the constraint $\mathrm{Card}(I_n) > 1$ in the case where $\mu_n$ is well defined at every iteration, thus allowing one to recover the cyclic projection method.
The numbers $\alpha$ and $\lambda$ ensure that the steps taken at each iteration are sufficiently large as compared to the distance of the iterates to their projections on each $S_i$, $i \in I$. The use of the integer $M$ ensures that each $S_i$ is involved at least every $M$ iterations. The "If" condition in Step 1 of the algorithm ensures that $x_n$ will move at each iteration by enforcing that it does not yet belong to all the selected sets $S_i$, $i \in I_n$. Our method is an extension of Ottavy's method [4] to the case of nonconvex sets. The idea of using a potentially variable index set $I_n$ at each iteration allows one to recover several variants, such as cyclic projections [2,21,22,24] or parallel projections [2,23,24]. Notice that, due to the finiteness of $I$, finding an index $j$ such that $p_j(x_n) \in \bigcap_{i \in I_n} S_i$ is easy if such an index exists. A minimal sketch of the iteration is given below.
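To fix ideas, here is a minimal Python sketch of iteration (3) under the simplest choices (our own, not the authors' implementation): $I_n = I$ at every step, equal weights $\alpha_{i,n} = 1/\mathrm{Card}(I)$, and $\lambda_n = \mu_n$, the largest relaxation allowed by $C_3$; the projections $p_i$ are supplied by the caller:

```python
import numpy as np

def mpuces(x0, projections, n_iter=200, tol=1e-12):
    """Minimal sketch of iteration (3) of Definition 4 with I_n = I
    (all sets used at every step) and equal weights alpha_{i,n} = 1/m.
    `projections` is a list of maps x -> p_i(x) onto each S_i."""
    x = np.asarray(x0, dtype=float)
    m = len(projections)
    for _ in range(n_iter):
        ps = [P(x) for P in projections]
        bary = sum(ps) / m                       # sum_i alpha_{i,n} p_i(x_n)
        den = np.linalg.norm(x - bary) ** 2
        if den < tol:                            # x_n numerically feasible
            break
        mu = sum(np.linalg.norm(x - p) ** 2 for p in ps) / (m * den)
        lam = mu                                 # largest step allowed by C3
        x = x + lam * (bary - x)                 # step 2 of (3)
    return x

# Toy example: two lines through the origin in the plane.
P1 = lambda z: np.array([z[0], 0.0])             # projection onto the x-axis
P2 = lambda z: np.array([(z[0] + z[1]) / 2] * 2) # projection onto {y = x}
print(mpuces([3.0, 1.0], [P1, P2]))              # close to [0, 0]
```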

1.4. Our Contributions

The main contributions of our work are a proof that strong convergence holds in the proposed setting, and a new constructive modification of the projection method which accommodates sets that are expandable into convex, but not necessarily uniformly convex, sets. Based on our findings, we thus provide a new algorithm which does not need uniform convexity, while preserving strong convergence to a feasible solution. Applications to an infinite-dimensional MRI problem and to low rank matrix estimation for signal denoising are presented in Section 5.

2. Ottavy’s Framework

2.1. Successive Projection Point-to-Set Mapping

Let $\alpha$ and $\lambda$ be two real numbers chosen in $(0, 1)$. Define the point-to-set mapping
$$D : x \in \mathcal{H} \mapsto D(x) = \{ \tilde{x} \in \mathcal{H} \mid \tilde{x} = x + \lambda(x)(\bar{x} - x) \}$$
where $\bar{x}$ is a weighted average,
$$\bar{x} = \sum_{i \in I(x)} \alpha_i(x)\, p_i(x),$$
$I(x)$ is a subset of selected constraints such that
$$I(x) \subset I \quad \text{and} \quad I(x) \ne \emptyset,$$
$\alpha_i(x)$, $i \in I(x)$, is a set of normalized nonvanishing weightings such that
$$\alpha_i(x) \in [0, 1], \quad \sum_{i \in I(x)} \alpha_i(x) = 1, \quad \text{and} \quad p_i(x) \ne x \Rightarrow \alpha_i(x) \ge \alpha,$$
and $\lambda(x)$ is the relaxation parameter such that
$$\lambda(x) \in [\lambda, +\infty) \quad \text{and} \quad x \ne \bar{x} \Rightarrow \lambda(x) \in [\lambda, \mu(\bar{x})]$$
with
$$\mu(\bar{x}) = \sum_{i \in I(x)} \alpha_i(x)\, |x - p_i(x)|^2 \Big/ |x - \bar{x}|^2.$$

2.2. Ottavy’s Lemma

In [4], Ottavy proved the following very useful lemma.
Lemma 1.
For any $x \in \mathcal{H}$, $\tilde{x} \in D(x)$, and $z \in \bigcap_{i \in I} S_i$, the following results hold:
$$(i)\quad |\tilde{x} - x|^2 \le \frac{2}{\lambda}\bigl(|x - z|^2 - |\tilde{x} - z|^2\bigr)$$
$$(ii)\quad 2\langle x - z \mid x - \tilde{x} \rangle \le \Bigl(1 + \frac{2}{\lambda}\Bigr)\bigl(|x - z|^2 - |\tilde{x} - z|^2\bigr)$$
$$(iii)\quad \max_{i \in I(x)} |p_i(x) - x|^2 \le \frac{(2 + \lambda)^2}{\alpha \lambda^2}\bigl(|x - z|^2 - |\tilde{x} - z|^2\bigr).$$

3. A Strong Convergence Result

Let $(x_n)_{n \in \mathbb{N}}$ be the sequence of iterates of the projection method defined by (3). In the present section, we show strong convergence of this sequence to a point in $\bigcap_{i \in I} S_i$.
For every $n$ in $\mathbb{N}$ and for each $i \in I$, define
$$T_{i,n} = \bigcup \bigl\{ C \subset S_i \mid C \text{ is convex and } p_i(x_n) \in C \bigr\}, \qquad T_n = \bigcap_{i \in I_n} T_{i,n}.$$
When the (finite) family $(S_i)_{i \in I}$ is expandable into a family of convex sets, $T_{i,n} \ne \emptyset$ for every $n$ in $\mathbb{N}$ and for each $i \in I$.
Lemma 2.
If the following conditions are satisfied
$$C_4.\ (\forall n \in \mathbb{N}) \quad T_n \ne \emptyset$$
$$C_5.\ \text{The finite family } (S_i)_{i \in I} \text{ is expandable into convex sets,}$$
then, for every $n$ in $\mathbb{N}$ and for every $z_n$ in $T_n$, we have
$$(i)\quad |x_{n+1} - x_n|^2 \le \frac{2}{\lambda}\bigl(|x_n - z_n|^2 - |x_{n+1} - z_n|^2\bigr)$$
$$(ii)\quad 2\langle x_n - z_n \mid x_n - x_{n+1} \rangle \le \Bigl(1 + \frac{2}{\lambda}\Bigr)\bigl(|x_n - z_n|^2 - |x_{n+1} - z_n|^2\bigr)$$
$$(iii)\quad \sup_{i \in I_n} |p_i(x_n) - x_n|^2 \le \frac{(2 + \lambda)^2}{\alpha \lambda^2}\bigl(|x_n - z_n|^2 - |x_{n+1} - z_n|^2\bigr).$$
Proof. 
Fix $n$ in $\mathbb{N}$ and $z_n$ in $T_n$. By the definition of $T_n$, each point $p_i(x_n)$ belongs, together with $z_n$, to a convex set $C_{i,j_n} \subset S_i$. Hence $p_i(x_n)$ is also the projection of $x_n$ onto $C_{i,j_n}$, and $z_n \in C_{i,j_n}$. Replacing respectively $x$, $\tilde{x}$ and $z$ by $x_n$, $x_{n+1}$ and $z_n$ in Ottavy's Lemma (Lemma 1, or [4], Lemma 7.1), we obtain the desired results. ☐
Lemma 3.
If the conditions $C_4$, $C_5$, and
$$C_6.\ \exists (z_n)_{n \in \mathbb{N}} \text{ such that } z_n \in T_n \text{ for all } n \in \mathbb{N} \text{ and } \sum_{n \in \mathbb{N}} |z_{n+1} - z_n| < +\infty$$
are satisfied, then
(i) the sequences $(|x_n - z_n|)_{n \in \mathbb{N}}$ and $(|x_{n+1} - z_n|)_{n \in \mathbb{N}}$ are bounded;
(ii) the series $\sum_{n \in \mathbb{N}} \bigl(|x_n - z_n|^2 - |x_{n+1} - z_n|^2\bigr)$ is convergent;
(iii) for each $i$ in $I$, the sequence $(|p_i(x_n) - x_n|)_{n \in \mathbb{N}}$ converges to zero;
(iv) the sequence $(|x_n - z_n|)_{n \in \mathbb{N}}$ is convergent;
(v) the sequence $(z_n)_{n \in \mathbb{N}}$ converges strongly to a point in $\bigcap_{i \in I} S_i$.
Proof. 
(i) Let $(z_n)_{n \in \mathbb{N}}$ be a sequence satisfying $C_6$, and let $k \in \mathbb{N}$. We deduce from Lemma 2(i) that $|x_{k+1} - z_k| \le |x_k - z_k|$. Then we have
$$|x_{k+1} - z_{k+1}| \le |x_{k+1} - z_k| + |z_{k+1} - z_k| \le |x_k - z_k| + |z_{k+1} - z_k|. \tag{4}$$
Therefore
$$|x_{n+1} - z_{n+1}| \le |x_0 - z_0| + \sum_{k=0}^{n} |z_{k+1} - z_k|$$
and then $C_6$ ensures that the sequences $(|x_n - z_n|)_{n \in \mathbb{N}}$ and $(|x_{n+1} - z_n|)_{n \in \mathbb{N}}$ are bounded.
(ii) Now, the cosine law, followed by the Cauchy–Schwarz inequality, gives
$$|x_{k+1} - z_{k+1}|^2 = |x_{k+1} - z_k|^2 + 2\langle x_{k+1} - z_k \mid z_k - z_{k+1} \rangle + |z_k - z_{k+1}|^2 \le |x_{k+1} - z_k|^2 + 2|x_{k+1} - z_k|\,|z_{k+1} - z_k| + |z_{k+1} - z_k|^2.$$
Using Lemma 2(i), we then obtain that
$$0 \le |x_k - z_k|^2 - |x_{k+1} - z_k|^2 \le |x_k - z_k|^2 - |x_{k+1} - z_{k+1}|^2 + 2|x_{k+1} - z_k|\,|z_{k+1} - z_k| + |z_{k+1} - z_k|^2.$$
Thus
$$0 \le \sum_{k=0}^{n} \bigl(|x_k - z_k|^2 - |x_{k+1} - z_k|^2\bigr) \le |x_0 - z_0|^2 + 2\sum_{k=0}^{n} |x_{k+1} - z_k|\,|z_{k+1} - z_k| + \sum_{k=0}^{n} |z_{k+1} - z_k|^2.$$
Since the sequence $(|x_{n+1} - z_n|)_{n \in \mathbb{N}}$ is bounded and the series $\sum_{n \in \mathbb{N}} |z_{n+1} - z_n|$ is convergent, the result (ii) follows at once.
(iii) Lemma 3(ii) and Lemma 2(iii) imply that $\lim_n \sup_{i \in I_n} |p_i(x_n) - x_n|^2 = 0$. Thus, for any $k \in [0, M]$,
$$\lim_n \sup_{i \in I_{n+k}} |p_i(x_n) - x_n| = 0.$$
Then we deduce from $C_1$ that $\lim_n \sup_{i \in I} |p_i(x_n) - x_n| = 0$, which completes the proof of (iii).
(iv) Squaring the triangle inequality gives
$$|x_{k+1} - z_k|^2 \le |x_{k+1} - z_{k+1}|^2 + 2|x_{k+1} - z_{k+1}|\,|z_{k+1} - z_k| + |z_{k+1} - z_k|^2 \le |x_{k+1} - z_{k+1}|^2 + 2|x_{k+1} - z_k|\,|z_{k+1} - z_k| + 3|z_{k+1} - z_k|^2.$$
Using (4), we deduce that
$$\bigl|\, |x_{k+1} - z_{k+1}|^2 - |x_{k+1} - z_k|^2 \,\bigr| \le 2|x_{k+1} - z_k|\,|z_{k+1} - z_k| + 3|z_{k+1} - z_k|^2.$$
Since the sequence $(|x_{n+1} - z_n|)_{n \in \mathbb{N}}$ is bounded and $\sum_{n \in \mathbb{N}} |z_{n+1} - z_n|$ is convergent, we deduce from the last inequality that $\sum_{n \in \mathbb{N}} \bigl(|x_{n+1} - z_{n+1}|^2 - |x_{n+1} - z_n|^2\bigr)$ is convergent. Then, using Lemma 3(ii), we obtain that the series $\sum_{n \in \mathbb{N}} \bigl(|x_{n+1} - z_{n+1}|^2 - |x_n - z_n|^2\bigr)$ is convergent, which is equivalent to the convergence of the sequence $(|x_n - z_n|)_{n \in \mathbb{N}}$.
(v) Since, by $C_6$, $(z_n)_{n \in \mathbb{N}}$ is a Cauchy sequence, it is strongly convergent to a point $z$ in the Hilbert space $\mathcal{H}$. For every $n$ in $\mathbb{N}$, $z_n \in T_n = \bigcap_{i \in I_n} T_{i,n} \subset \bigcap_{i \in I_n} S_i$. Fix $i$ in $I$. Due to condition $C_1$, there exists a subsequence $(z_{\sigma(n)})_{n \in \mathbb{N}}$ of $(z_n)_{n \in \mathbb{N}}$ satisfying $z_{\sigma(n)} \in S_i$ for all $n \in \mathbb{N}$. Therefore, $z \in \bar{S_i}$. Since this assertion is true for all $i$ in $I$, and each $S_i$ is closed, we obtain that $z \in \bigcap_{i \in I} S_i$. ☐
Remark 3.
In the convex case, Lemma 3(iii) is known in the simpler case where $(z_n)_{n \in \mathbb{N}}$ is chosen to be a constant sequence in $\bigcap_{i \in I} S_i$, and is a consequence of Fejér monotonicity [5].
We now state the main result of this section, namely strong convergence under the assumption that the sets $S_i$, $i \in I$, are expandable into uniformly convex sets.
Theorem 1.
If the conditions $C_4$, $C_6$, and
$$C_7.\ \text{The finite family } (S_i)_{i \in I} \text{ is expandable into uniformly convex sets with respect to a function } b$$
are satisfied, then the sequence $(x_n)_{n \in \mathbb{N}}$ converges strongly to a point in $\bigcap_{i \in I} S_i$.
Proof. 
Notice that $C_7$ implies $C_5$. Let $(z_n)_{n \in \mathbb{N}}$ be a sequence satisfying $C_6$. According to $C_7$, each point $p_i(x_n)$ belongs, together with $z_n$, to a uniformly convex set $C_{i,j_n} \subset S_i$ in the uniformly convex expansion of $S_i$, indexed by $j_n$. Hence $p_i(x_n)$ is also the projection of $x_n$ onto $C_{i,j_n}$, and $z_n \in C_{i,j_n}$. Fix $n$ in $\mathbb{N}$, and $i$ in $I$. Define
$$H(x_n, p_i(x_n)) = \{ x \in \mathcal{H} \mid \langle x - p_i(x_n) \mid p_i(x_n) - x_n \rangle \ge 0 \}.$$
According to $C_7$, $p_i(x_n)$ and $z_n$ belong to the uniformly convex set $C_{i,j_n} \subset S_i$. Thus
$$B\bigl((z_n + p_i(x_n))/2,\; \rho_{i,n}\bigr) \subset C_{i,j_n},$$
where $\rho_{i,n} = b(|z_n - p_i(x_n)|)$. Since $p_i(x_n)$ is also the projection of $x_n$ onto $C_{i,j_n}$, it results from the Kolmogorov criterion (1) that
$$C_{i,j_n} \subset H(x_n, p_i(x_n)).$$
Then
$$B\bigl((z_n + p_i(x_n))/2,\; \rho_{i,n}\bigr) \subset C_{i,j_n} \subset H(x_n, p_i(x_n)). \tag{5}$$
Now, consider the translation $T : x \mapsto y = x - (p_i(x_n) - z_n)/2$. Clearly,
$$x \in B\bigl((z_n + p_i(x_n))/2,\; \rho_{i,n}\bigr) \quad \text{if and only if} \quad y \in B(z_n, \rho_{i,n}). \tag{6}$$
On the other hand, using $z_n \in C_{i,j_n}$, we get that $x \in H(x_n, p_i(x_n))$ implies $y \in H(x_n, p_i(x_n))$. Indeed,
$$\langle y - p_i(x_n) \mid p_i(x_n) - x_n \rangle = \langle x - (p_i(x_n) - z_n)/2 - p_i(x_n) \mid p_i(x_n) - x_n \rangle = \langle x - p_i(x_n) \mid p_i(x_n) - x_n \rangle - \tfrac{1}{2}\langle p_i(x_n) - z_n \mid p_i(x_n) - x_n \rangle$$
and, since $z_n \in C_{i,j_n}$, Kolmogorov's criterion gives
$$-\tfrac{1}{2}\langle p_i(x_n) - z_n \mid p_i(x_n) - x_n \rangle \ge 0,$$
which implies
$$\langle y - p_i(x_n) \mid p_i(x_n) - x_n \rangle \ge \langle x - p_i(x_n) \mid p_i(x_n) - x_n \rangle$$
and, after recalling that $x \in H(x_n, p_i(x_n))$,
$$\langle y - p_i(x_n) \mid p_i(x_n) - x_n \rangle \ge 0,$$
which implies that $y \in H(x_n, p_i(x_n))$, as desired. Hence (5), together with (6), imply
$$B(z_n, \rho_{i,n}) \subset H(x_n, p_i(x_n)).$$
Now, assume $x_n \notin S_i$, and define
$$y_n = z_n + (p_i(x_n) - x_n)/|p_i(x_n) - x_n|, \qquad u_n = z_n + \rho_{i,n}(z_n - y_n).$$
One easily checks that $|u_n - z_n| = \rho_{i,n}$, i.e., $u_n \in B(z_n, \rho_{i,n})$. Thus, using the last inclusion, we get $u_n \in H(x_n, p_i(x_n))$, i.e., $\langle u_n - p_i(x_n) \mid p_i(x_n) - x_n \rangle \ge 0$. Therefore
$$\langle z_n - p_i(x_n) \mid p_i(x_n) - x_n \rangle \ge \langle z_n - u_n \mid p_i(x_n) - x_n \rangle = \rho_{i,n}\, |p_i(x_n) - x_n|. \tag{7}$$
Since this inequality is obviously satisfied when $x_n \in S_i$, it holds whether $x_n \in S_i$ or not. Now, let
$$\rho_n = \min_{i \in I_n} \rho_{i,n}$$
and
$$\rho = \inf_{n \in \mathbb{N}} \rho_n.$$
The case $\rho = 0$.
In the case where $\rho = 0$, we have
• either there exist $n_0$ in $\mathbb{N}$ and $i_0$ in $I_{n_0}$ such that $\rho_{i_0, n_0} = 0$,
• or $\liminf_n \rho_n = 0$.
In the first case, since $b(\cdot)$ vanishes only at zero, we have $z_{n_0} = p_{i_0}(x_{n_0})$ and $p_{i_0}(x_{n_0}) \in T_{n_0} \subset \bigcap_{i \in I_{n_0}} S_i$. According to (3), $p_{i_0}(x_{n_0}) \in \bigcap_{i \in I} S_i$, and for all $n > n_0$, $x_n = p_{i_0}(x_{n_0})$. Therefore, the sequence $(x_n)_{n \in \mathbb{N}}$ has converged to a solution in a finite number of steps. In the second case, there exists a subsequence $(\rho_{\sigma(n)})_{n \in \mathbb{N}}$ which converges to zero. Fix $i$ in $I$. Since $b(\cdot)$ is nondecreasing,
$$\rho_{\sigma(n)} \ge \min_{j \in I_{\sigma(n)}} b\bigl( |z_{\sigma(n)} - p_i(x_{\sigma(n)})| - |p_i(x_{\sigma(n)}) - p_j(x_{\sigma(n)})| \bigr). \tag{8}$$
According to Lemma 3(iii) and (iv), $(|p_i(x_n) - p_j(x_n)|)_{n \in \mathbb{N}}$ converges to zero for all $j$ in $I$, and $(|z_n - p_i(x_n)|)_{n \in \mathbb{N}}$ converges to a limit $c$ independent of $i$. Hence, for all $\epsilon > 0$ there exists $N$ in $\mathbb{N}$ such that, for all $n \ge N$, $|z_{\sigma(n)} - p_i(x_{\sigma(n)})| \ge c - \epsilon/2$ and $|p_i(x_n) - p_j(x_n)| \le \epsilon/2$ for all $j$ in $I$. Thus, for all $n \ge N$ and for all $j$ in $I$, $|z_{\sigma(n)} - p_i(x_{\sigma(n)})| - |p_i(x_{\sigma(n)}) - p_j(x_{\sigma(n)})| \ge c - \epsilon$.
Assume $c > 0$, and take $\epsilon = c/2$. Then, since $b(\cdot)$ is nondecreasing and vanishes only at zero, we deduce from (8) that, for all $n \ge N$, $\rho_{\sigma(n)} \ge b(c/2) > 0$, which contradicts $\lim_n \rho_{\sigma(n)} = 0$. Hence $c = 0$, and $\lim_n |z_n - p_i(x_n)| = 0$ for all $i$ in $I$. Using Lemma 3(iii), we deduce that $\lim_n |x_n - z_n| = 0$, and it results from Lemma 3(v) that $(x_n)_{n \in \mathbb{N}}$ converges strongly to a point in $\bigcap_{i \in I} S_i$.
The case $\rho \ne 0$.
In the case $\rho \ne 0$, we deduce from (7) that
$$\langle z_n - p_i(x_n) \mid p_i(x_n) - x_n \rangle \ge \rho\, |p_i(x_n) - x_n|$$
and, since $\langle p_i(x_n) - x_n \mid p_i(x_n) - x_n \rangle = |p_i(x_n) - x_n|^2 \ge 0$,
$$\langle z_n - x_n \mid p_i(x_n) - x_n \rangle \ge \rho\, |p_i(x_n) - x_n|. \tag{9}$$
Therefore, combining the definition of $x_{n+1}$ and (9), we have
$$\lambda_n \sum_{i \in I_n} \alpha_{i,n}\, |p_i(x_n) - x_n| \le \frac{\lambda_n}{\rho} \sum_{i \in I_n} \alpha_{i,n}\, \langle z_n - x_n \mid p_i(x_n) - x_n \rangle = \frac{1}{\rho} \Bigl\langle z_n - x_n \,\Big|\, \lambda_n \sum_{i \in I_n} \alpha_{i,n}\, (p_i(x_n) - x_n) \Bigr\rangle = \frac{1}{\rho} \langle z_n - x_n \mid x_{n+1} - x_n \rangle.$$
Using Lemma 2(ii) and Lemma 3(ii), we deduce that the series $\sum_{n \in \mathbb{N}} |x_{n+1} - x_n|$ is convergent. Then $(x_n)_{n \in \mathbb{N}}$ is a Cauchy sequence, and is therefore strongly convergent to a point $x^*$ in $\mathcal{H}$. Using Lemma 3(iii), we deduce that $(p_i(x_n))_{n \in \mathbb{N}}$ is also strongly convergent to $x^*$ for any $i$ in $I$. Since each set $S_i$ is closed, we immediately conclude that $x^* \in \bigcap_{i \in I} S_i$, which completes the proof. ☐

4. Projections onto Stepwise Generated Uniformly Convex Sets

In this section, we show how the results of Section 3 may be used to define a strongly convergent cyclic projection algorithm in the case where the sets $S_i$ are only expandable into convex sets, but not into uniformly convex sets. To our knowledge, this method is new, even in the convex case. The underlying idea of the algorithm is as follows. First, note that the condition $C_4$ is realistic for the type of nonconvex sets considered in this paper, and may be interpreted as a strong consistency condition. On the other hand, in many cases, a trivial sequence $(a_n)_{n \in \mathbb{N}}$ with $a_n \in T_n$ for all $n$ in $\mathbb{N}$ is already known, as in the applications presented in Section 5. Using this sequence, we define a projection method which converges strongly to a point in $\bigcap_{i \in I} S_i$. For every $n$ in $\mathbb{N}$, $i$ in $I$, and $a_n$ in $T_n$, $B_{i,n}$ will denote the closed ellipsoid with main axis $[a_n, p_i(x_n)]$ and center $\tfrac{1}{2}(a_n + p_i(x_n))$, which is rotationally invariant around this axis, with maximal radius equal to $|a_n - p_i(x_n)|/2$ and minimal radius equal to $\gamma\, |a_n - p_i(x_n)|/2$ for some $\gamma \in (0, 1)$. We denote by $p'_i(x_n)$ the projection of $x_n$ onto $B_{i,n}$; see Figure 1.
Definition 5.
(Method of Double Projection onto Convex Expandable Sets)
Assume $C_4$ is satisfied. Given a point $x_0$ in $\mathcal{H}$, two real numbers $\alpha$ and $\lambda$ in $(0, 1]$, a nonnegative integer $M$, a sequence $(I_n)$ of subsets of $I$ and a sequence $(a_n)_{n \in \mathbb{N}}$ with $a_n \in T_n$ for all $n$ in $\mathbb{N}$, the projection method onto stepwise generated uniformly convex sets is iteratively defined by
$$(\forall n \in \mathbb{N}) \quad \begin{array}{l} 1.\ (\forall i \in I_n) \quad p'_i(x_n) = p_{B_{i,n}}(x_n) \\ 2.\ \text{If } \exists j \in I_n \text{ such that } p'_j(x_n) \in \bigcap_{i \in I_n} S_i \text{ then} \\ \qquad \text{if } I_n \ne I, \text{ set } I_n = I \text{ and go to } 2; \\ \qquad \text{if } I_n = I, \text{ set } \lambda_n = 1,\ \alpha_{j,n} = 1 \text{ and go to } 3; \\ \quad \text{else go to } 3. \\ 3.\ x_{n+1} = x_n + \lambda_n \sum_{i \in I_n} \alpha_{i,n}\, (p'_i(x_n) - x_n) \end{array}$$
under the conditions $C_1$, $C_2$, and
$$C_3.\ (\forall n \in \mathbb{N}) \quad \lambda \le \lambda_n \le \mu_n \quad \text{where} \quad \mu_n = \begin{cases} \dfrac{\sum_{i \in I_n} \alpha_{i,n}\, |x_n - p'_i(x_n)|^2}{\bigl|x_n - \sum_{i \in I_n} \alpha_{i,n}\, p'_i(x_n)\bigr|^2} & \text{if } x_n \notin \bigcap_{i \in I_n} B_{i,n} \\[2mm] 1 & \text{if } x_n \in \bigcap_{i \in I_n} B_{i,n}. \end{cases}$$
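The only new computational ingredient compared to Definition 4 is the projection onto the ellipsoid $B_{i,n}$. Since $B_{i,n}$ is rotationally invariant around its main axis, this projection reduces to a two-dimensional ellipse projection in (axial, radial) coordinates, which can be solved by root-finding on the KKT multiplier. Below is a self-contained sketch under this reduction (our own parametrisation and function names, not the paper's):

```python
import numpy as np
from scipy.optimize import brentq

def project_ellipsoid(x, a, p, gamma):
    """Project x onto the closed ellipsoid with main axis [a, p],
    rotationally invariant around it, semi-axes A = |a - p|/2 (major)
    and B = gamma * A (minor), as in Section 4."""
    c = (a + p) / 2.0
    A = np.linalg.norm(p - a) / 2.0
    B = gamma * A
    e = (p - a) / (2.0 * A)                  # unit vector along the main axis
    d = x - c
    t = float(e @ d)                         # axial coordinate
    rvec = d - t * e                         # radial component
    s = np.linalg.norm(rvec)
    if (t / A) ** 2 + (s / B) ** 2 <= 1.0:   # x already inside the ellipsoid
        return x.copy()
    # Boundary projection: u = t A^2/(A^2+mu), v = s B^2/(B^2+mu), mu >= 0,
    # with u^2/A^2 + v^2/B^2 = 1; f below is decreasing in mu.
    f = lambda mu: (A * t / (A**2 + mu))**2 + (B * s / (B**2 + mu))**2 - 1.0
    hi = 1.0
    while f(hi) > 0:                         # bracket the root
        hi *= 2.0
    mu = brentq(f, 0.0, hi)
    u = t / (1.0 + mu / A**2)
    v = s / (1.0 + mu / B**2)
    r_unit = rvec / s if s > 0 else np.zeros_like(rvec)
    return c + u * e + v * r_unit
```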
In the remainder of this section, the sequence $(x_n)_{n \in \mathbb{N}}$ is defined by the algorithm of Definition 5. The main result is the following.
Theorem 2.
If the condition $C_5$ is satisfied, and the sequence $(a_n)_{n \in \mathbb{N}}$ introduced in Definition 5 satisfies $C_6$, then the sequence $(x_n)_{n \in \mathbb{N}}$ converges strongly to a point in $\bigcap_{i \in I} S_i$.
Proof. 
The method introduced in Definition 5 is a general projection algorithm onto specific uniformly convex sets constructed at each iteration. For every $n$ in $\mathbb{N}$ and each $i$ in $I$, $a_n$ and $p_i(x_n)$ belong to $B_{i,n}$. Then Lemma 2 and Lemma 3, where $p_i(x_n)$ and $(z_n)_{n \in \mathbb{N}}$ are respectively replaced with $p'_i(x_n)$ and $(a_n)_{n \in \mathbb{N}}$, hold true. In the same way, taking $(z_n)_{n \in \mathbb{N}} = (a_n)_{n \in \mathbb{N}}$ in the proof of Theorem 1, we deduce that the sequence $(x_n)_{n \in \mathbb{N}}$ converges strongly to some point $x^*$ in $\mathcal{H}$. Nevertheless, since $p_i(x_n)$ is replaced with $p'_i(x_n)$, it is still not clear why $x^* \in \bigcap_{i \in I} S_i$. Let us now address this specific point. Fix $n$ in $\mathbb{N}$, $i$ in $I$, and $a_n$ in $T_n$. According to $C_5$, $a_n$ and $p_i(x_n)$ belong to a convex set $C_{i,j_n} \subset S_i$, and $p_i(x_n)$ is also the projection of $x_n$ onto $C_{i,j_n}$. Hence the Kolmogorov criterion (1) gives $\langle p_i(x_n) - a_n \mid x_n - p_i(x_n) \rangle \ge 0$. Therefore
$$\langle p_i(x_n) - a_n \mid x_n - p'_i(x_n) \rangle + \langle p_i(x_n) - a_n \mid p'_i(x_n) - p_i(x_n) \rangle \ge 0,$$
which implies
$$\langle p_i(x_n) - a_n \mid x_n - p'_i(x_n) \rangle \ge \langle p_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle. \tag{10}$$
Moreover, since the segment $[a_n, p_i(x_n)]$ is the main axis of the ellipsoid $B_{i,n}$ and $p'_i(x_n) \in B_{i,n}$, we have $\langle a_n - p_i(x_n) \mid p'_i(x_n) - p_i(x_n) \rangle \ge 0$. Combining this with (10), we get
$$0 \le \langle p_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle \le \langle p_i(x_n) - a_n \mid x_n - p'_i(x_n) \rangle \le |p_i(x_n) - a_n|\, |x_n - p'_i(x_n)|.$$
We deduce from Lemma 3(iii), where $p_i(x_n)$ is replaced with $p'_i(x_n)$, that $\lim_n |p'_i(x_n) - x_n| = 0$. On the other hand, as $p_i(x_n)$ is the projection of $x_n$ onto $C_{i,j_n}$ and $a_n \in C_{i,j_n}$, we have $|p_i(x_n) - a_n| \le |x_n - a_n|$. Moreover, following the same steps as in the proof of Lemma 3(i) gives that $(|x_n - a_n|)_{n \in \mathbb{N}}$ is bounded, from which we deduce that $(|p_i(x_n) - a_n|)_{n \in \mathbb{N}}$ is bounded as well. Therefore, we deduce from the last inequality that
$$\lim_n \langle a_n - p_i(x_n) \mid p'_i(x_n) - p_i(x_n) \rangle = 0. \tag{11}$$
Let $x = a_n - p_i(x_n)$ and $y = p'_i(x_n) - p_i(x_n)$. Then we have
$$x - y = a_n - p'_i(x_n) \tag{12}$$
and
$$x - \omega y = \omega \Bigl( \tfrac{1}{\omega}\bigl(a_n + (\omega - 1)\, p_i(x_n)\bigr) - p'_i(x_n) \Bigr) \tag{13}$$
for all $\omega \in \mathbb{R} \setminus \{0\}$.
Assume that $p_i(x_n) \ne p'_i(x_n)$. Using Claim 1, we get that
$$\Bigl\langle a_n - p'_i(x_n) \,\Big|\, \tfrac{1}{\omega_{i,n}}\bigl(a_n + (\omega_{i,n} - 1)\, p_i(x_n)\bigr) - p'_i(x_n) \Bigr\rangle \le 0$$
as long as we enforce
$$\omega_{i,n} \ge \frac{\langle p'_i(x_n) - a_n \mid p_i(x_n) - a_n \rangle}{\langle p'_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle}.$$
In particular, we can take
$$\omega_{i,n} = \max\Biggl( \frac{\langle p'_i(x_n) - a_n \mid p_i(x_n) - a_n \rangle}{\langle p'_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle},\; \epsilon \Biggr) \tag{14}$$
for any appropriately chosen $\epsilon > 0$. Using Niculescu's Lemma 4 in Appendix B (applied with $\omega' = 1$, since $\langle x - y \mid x - \omega_{i,n}\, y \rangle \le 0$ by (12), (13) and (18)), we obtain that
$$\Bigl( \sqrt{1/\omega_{i,n}} + \sqrt{\omega_{i,n}} \Bigr)\, \langle a_n - p_i(x_n) \mid p'_i(x_n) - p_i(x_n) \rangle \ge 2\, |a_n - p_i(x_n)|\, |p'_i(x_n) - p_i(x_n)|. \tag{15}$$
Note that this inequality also holds, without resorting to Claim 1, if $p_i(x_n) = p'_i(x_n)$. According to Theorem 1, $(x_n)_{n \in \mathbb{N}}$ and $(p'_i(x_n))_{n \in \mathbb{N}}$ are strongly convergent to a point $x^*$ in $\mathcal{H}$. We now split the remainder of the argument into the following complementary cases, involving an increasing function $\sigma : \mathbb{N} \to \mathbb{N}$ which parametrises possible subsequences.
• If the sequence $(|a_n - p'_i(x_n)|)_{n \in \mathbb{N}}$ converges to zero, then, using Claim 2, $(|p'_i(x_n) - p_i(x_n)|)_{n \in \mathbb{N}}$ also converges to zero. Therefore, $(p_i(x_n))_{n \in \mathbb{N}}$ is also strongly convergent to $x^*$ for each $i$ in $I$. Since each set $S_i$ is closed, we conclude that $x^* \in \bigcap_{i \in I} S_i$.
• If the sequence $(|a_n - p'_i(x_n)|)_{n \in \mathbb{N}}$ does not converge to zero, and the sequence $(\langle p'_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle)_{n \in \mathbb{N}}$ converges to zero, then we must have $|a_{\sigma(n)} - p'_i(x_{\sigma(n)})| > \epsilon$ for some $\epsilon > 0$ and some subsequence indexed by an appropriately chosen increasing function $\sigma$. This implies in particular that
$$\lim_{n \to +\infty} |p_i(x_{\sigma(n)}) - p'_i(x_{\sigma(n)})| \,\bigl|\cos\bigl( p'_i(x_{\sigma(n)}) - a_{\sigma(n)},\; p_i(x_{\sigma(n)}) - p'_i(x_{\sigma(n)}) \bigr)\bigr| = 0.$$
As a result, either
$$\lim_{n \to +\infty} \cos\bigl( p'_i(x_{\sigma(n)}) - a_{\sigma(n)},\; p_i(x_{\sigma(n)}) - p'_i(x_{\sigma(n)}) \bigr) = 0 \tag{16}$$
or
$$\lim_{n \to +\infty} |p_i(x_{\sigma(n)}) - p'_i(x_{\sigma(n)})| = 0.$$
Notice further that (16) holds only if $(|p_i(x_{\sigma(n)}) - p'_i(x_{\sigma(n)})|)_{n \in \mathbb{N}}$ converges to zero. Thus, both cases simplify into the conclusion that $(|p_i(x_{\sigma(n)}) - p'_i(x_{\sigma(n)})|)_{n \in \mathbb{N}}$ converges to zero. Therefore, $(p_i(x_{\sigma(n)}))_{n \in \mathbb{N}}$ is also strongly convergent to $x^*$ for each $i$ in $I$. Since each set $S_i$ is closed, we again conclude that $x^* \in \bigcap_{i \in I} S_i$.
• If both the sequence $(|a_n - p'_i(x_n)|)_{n \in \mathbb{N}}$ and the sequence $(\langle p'_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle)_{n \in \mathbb{N}}$ do not converge to zero, then $|a_{\sigma(n)} - p'_i(x_{\sigma(n)})| > \epsilon$ and $\langle p'_i(x_{\sigma(n)}) - a_{\sigma(n)} \mid p_i(x_{\sigma(n)}) - p'_i(x_{\sigma(n)}) \rangle > \epsilon$ for some $\epsilon > 0$ and for some subsequence indexed by an appropriately chosen increasing function $\sigma$. Since $|a_n - p_i(x_n)|$ is the length of the main axis of $B_{i,n}$ and, as such, bounds from above the distance between any two points in $B_{i,n}$, we deduce that $|a_n - p'_i(x_n)| \le |a_n - p_i(x_n)|$. Furthermore, since the sequences $(|a_n - p_i(x_n)|)_{n \in \mathbb{N}}$ and $(|a_n - x_n|)_{n \in \mathbb{N}}$ are bounded, we deduce that
$$|a_n - p'_i(x_n)| \le |a_n - p_i(x_n)| \le |a_n - x_n| \le B$$
for some $B > 0$. Then, using the Cauchy–Schwarz inequality in the numerator in (14), we get
$$\omega_{i,\sigma(n)} \in [\epsilon,\, B^2/\epsilon]$$
for some $\epsilon \in (0, B)$. Since $|a_{\sigma(n)} - p'_i(x_{\sigma(n)})| > \epsilon$, we deduce from (11) and (15) that $(|p_i(x_{\sigma(n)}) - p'_i(x_{\sigma(n)})|)_{n \in \mathbb{N}}$ converges to zero. Therefore, $(p_i(x_{\sigma(n)}))_{n \in \mathbb{N}}$ is also strongly convergent to $x^*$ for each $i$ in $I$. Since each set $S_i$ is closed, we again conclude that $x^* \in \bigcap_{i \in I} S_i$.
These cases cover all the possibilities and all lead to the conclusion that $x^* \in \bigcap_{i \in I} S_i$. The proof is thus complete. ☐
The following two claims were instrumental in the proof of Theorem 2. We now provide their proofs.
Claim 1.
Assume that $p_i(x_n) \ne p'_i(x_n)$. Then
$$\frac{\langle p'_i(x_n) - a_n \mid p_i(x_n) - a_n \rangle}{\langle p'_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle} \tag{17}$$
is a well defined nonnegative real number. Moreover, we have
$$\Bigl\langle a_n - p'_i(x_n) \,\Big|\, \tfrac{1}{\omega_{i,n}}\bigl(a_n + (\omega_{i,n} - 1)\, p_i(x_n)\bigr) - p'_i(x_n) \Bigr\rangle \le 0 \tag{18}$$
for all
$$\omega_{i,n} \ge \frac{\langle p'_i(x_n) - a_n \mid p_i(x_n) - a_n \rangle}{\langle p'_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle}.$$
Proof. 
We first show that $p'_i(x_n) \notin [a_n, p_i(x_n)]$. Indeed, assume for contradiction that $p'_i(x_n) \in [a_n, p_i(x_n)]$. This would imply that $p'_i(x_n)$ lies in the interior of $B_{i,n}$, and therefore $x_n = p'_i(x_n)$. However, since, by convexity of $C_{i,j_n}$, $[a_n, p_i(x_n)] \subset C_{i,j_n}$, this would imply that $x_n \in C_{i,j_n}$, and therefore $p_i(x_n) = x_n = p'_i(x_n)$, the sought-after contradiction.
Consider now the 2D plane containing $a_n$, $p_i(x_n)$ and $p'_i(x_n)$. Set $g_n$ to be the intersection of the line perpendicular at $p'_i(x_n)$ to $[a_n, p'_i(x_n)]$ with the segment $[a_n, p_i(x_n)]$ (see Figure 1). First, we have
$$\langle p'_i(x_n) - a_n \mid p_i(x_n) - a_n \rangle = \cos(\alpha_{i,n})\, |p'_i(x_n) - a_n|\, |p_i(x_n) - a_n|$$
and
$$|p'_i(x_n) - a_n|^2 = \langle g_n - a_n \mid p'_i(x_n) - a_n \rangle = \cos(\alpha_{i,n})\, |g_n - a_n|\, |p'_i(x_n) - a_n|,$$
where $\alpha_{i,n}$ denotes the angle at $a_n$ between the two segments, from which we obtain
$$|g_n - a_n| = \frac{|p'_i(x_n) - a_n|^2\, |p_i(x_n) - a_n|}{\langle p'_i(x_n) - a_n \mid p_i(x_n) - a_n \rangle}. \tag{19}$$
Since $B_{i,n}$ is an ellipsoid with maximal axis $[a_n, p_i(x_n)]$, our next step is to express $g_n$ as the convex combination
$$g_n = \theta_{i,n}\, a_n + (1 - \theta_{i,n})\, p_i(x_n) \tag{20}$$
of $a_n$ and $p_i(x_n)$ for some $\theta_{i,n} \in [0, 1]$. Notice that (20) implies
$$|g_n - a_n| = (1 - \theta_{i,n})\, |p_i(x_n) - a_n|,$$
which itself gives
$$\theta_{i,n} = 1 - \frac{|g_n - a_n|}{|p_i(x_n) - a_n|}.$$
Using (19), we get
$$\theta_{i,n} = 1 - \frac{|p'_i(x_n) - a_n|^2}{\langle p'_i(x_n) - a_n \mid p_i(x_n) - a_n \rangle} = \frac{\langle p'_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle}{\langle p'_i(x_n) - a_n \mid p_i(x_n) - a_n \rangle}. \tag{21}$$
Notice further that, by construction, since $[a_n, p_i(x_n)]$ is the main axis of $B_{i,n}$ and since, by assumption, $p_i(x_n) \ne p'_i(x_n)$, we have (see Figure 1)
$$\langle p'_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle > 0.$$
Then, setting
$$d_n = \tfrac{1}{\omega_{i,n}}\bigl(a_n + (\omega_{i,n} - 1)\, p_i(x_n)\bigr),$$
we see that we need to take $\omega_{i,n} \ge 1/\theta_{i,n}$, i.e.,
$$\omega_{i,n} \ge \frac{\langle p'_i(x_n) - a_n \mid p_i(x_n) - a_n \rangle}{\langle p'_i(x_n) - a_n \mid p_i(x_n) - p'_i(x_n) \rangle}$$
(which, owing to (21), is well defined), in order to ensure that $d_n$ lies between $g_n$ and $p_i(x_n)$ on the segment $[a_n, p_i(x_n)]$, and hence that
$$\langle a_n - p'_i(x_n) \mid d_n - p'_i(x_n) \rangle \le 0,$$
i.e., (18) holds. Finally, (19) and (21) together ensure that (17) is nonnegative. ☐
Claim 2.
We have
$$|a_n - p'_i(x_n)| \ge |p'_i(x_n) - p_i(x_n)|.$$
Proof. 
By definition of $p_i(x_n)$, the fact that $a_n \in S_i$ implies that
$$|p_i(x_n) - x_n| \le |a_n - x_n|.$$
Thus $x_n$ belongs to the half space
$$H = \Bigl\{ x \in \mathcal{H} \,\Big|\, \bigl\langle x - \tfrac{1}{2}(a_n + p_i(x_n)) \,\big|\, p_i(x_n) - \tfrac{1}{2}(a_n + p_i(x_n)) \bigr\rangle \ge 0 \Bigr\},$$
which is the set of points in $\mathcal{H}$ that are closer to $p_i(x_n)$ than to $a_n$. Clearly, $p'_i(x_n) \in H$ as well (see Figure 1), so that $|p'_i(x_n) - p_i(x_n)| \le |p'_i(x_n) - a_n|$, which completes the proof. ☐

5. Applications

5.1. MRI Image Reconstruction: The Infinite-Dimensional Hilbert Space Setting

The field of inverse problems has been extensively investigated over the last five decades, fueled in particular by the many challenges arising in medical imaging. Projection methods have played an important role in the development of efficient reconstruction techniques [25], a well studied example being the method of projection onto convex sets (POCS) [2]. In recent years, techniques from penalised estimation have gained increasing popularity since the discovery of the compressed sensing paradigm, allowing fewer measurements to be collected whilst achieving remarkable reconstruction performance. One penalisation approach which has been a center of focus since its introduction is the Rudin–Osher–Fatemi functional [26], which can be described as follows: the reconstructed image $u$ is the solution of the minimization problem
$$\arg\min_{u \in BV(\Omega)} \ \|u\|_{TV(\Omega)} + \frac{\lambda}{2} \int_\Omega \bigl( y_{obs}(x) - A[u](x) \bigr)^2 dx,$$
where $\lambda$ is a positive relaxation parameter and the term $\|u\|_{TV(\Omega)}$ is defined in Appendix A. The term $\int_\Omega ( y_{obs}(x) - A[u](x) )^2 dx$ is called the fidelity term and the term $\|u\|_{TV(\Omega)}$ is called the regularisation term. This infinite-dimensional minimization problem enforces the solution, which is a priori in $L^2(\Omega)$, to have a finite bounded variation (BV) norm.
One important remark is that the optimisation problem can be turned into a pure feasibility problem by requiring the $TV$-norm and the fidelity term each to be less than or equal to a certain tolerance. Using this approach, one can define an alternating projection procedure as in Algorithm 1 below, where we use the forward operator $A = \mathcal{F}_{partial}$, the partial Fourier transform, which plays a central role in MRI. The usual Fourier transform will be denoted by $\mathcal{F}$, as usual.
In this example, we will use the projections onto the sets
$$S_1 = \Bigl\{ u \in L^2(\Omega) \,\Big|\, \int_\Omega \bigl( \lambda\, y_{obs}(x) - A[u](x) \bigr)^2 dx \le \lambda\, \eta_{fid},\ \lambda \in \mathbb{R}_+ \Bigr\},$$
$$S_2 = \bigl\{ u \in L^2(\Omega) \,\big|\, \|u\|_{p, TV(\Omega)} \le \eta_{TV} \bigr\},$$
$$S_3 = \bigl\{ u \in L^2(\Omega) \,\big|\, \mathcal{F}_{partial}[u] = \lambda\, y_{obs},\ \lambda \in \mathbb{R}_+ \bigr\}.$$
The set $S_2$ is not convex for $p \in (0, 1)$ but, since the epigraph of $\|\cdot\|_{p, TV(\Omega)}$ is a cone, the set $S_2$ is expandable into convex sets, as discussed in [22,23]. Notice that, when the scaling factor $\lambda$ is dropped in the definitions of $S_1$ and $S_3$, the problem is equivalent to the standard $TV$-penalised reconstruction approach. Introducing this scaling factor allows us to recover a solution up to a scaling, which can easily be undone by rescaling the solution using the constraint $\mathcal{F}_{partial}[u] = y_{obs}$ as a post-processing step. The introduction of the scaling $\lambda$ in these definitions allows the null function to belong to the intersection of the sets $S_1$, $S_2$ and $S_3$. Thus, we will set $a_n = 0$, $n \in \mathbb{N}$, in the sequel. In what follows, $\tilde{y}_{obs}$ will be the function with value equal to zero except for the observed frequencies, which are set to the observed values.
Our implementation is described in Algorithm 1. Experiments were made after incorporating the projection onto the $TV$ ball into the freely available code provided in [27]. The fidelity tolerance was set to $\eta_{fid} = 10^{-3}$ and the regularisation level $\eta_{TV}$ was tuned using cross-validation. The projection onto the $TV$-ball was computed using the method described in [28]. In order to avoid numerical instabilities, the iterates were rescaled every 10 iterations, which does not change the convergence analysis, due to the conic invariance of our formulation. We chose very thin ellipsoids $B_{i,n}$ by setting $\gamma = 10^{-6}$. A numerical illustration is given in Figure 2 below. Extensive numerical simulations will be presented in a forthcoming paper devoted to a thorough study of projection methods for MRI and X-ray computed tomography (XCT).
Algorithm 1: Alternating projection method for MRI for p = 1 .
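The listing of Algorithm 1 is reproduced as an image in the published version. For orientation only, here is a minimal Python sketch of an alternating-projection loop in its spirit (for $p = 1$, with the scaling factor $\lambda$ omitted): the data-consistency step is the standard POCS projection for partial Fourier data, while the $TV$-ball projection is passed in as a callable (Condat's method [28] is one way to realise it). All helper names are ours, not the paper's.

```python
import numpy as np

def project_data_consistency(u, y_obs, mask):
    """Projection onto an S3-like set {u : F[u] = y_obs on the observed
    frequencies}: replace the sampled Fourier coefficients by the data."""
    U = np.fft.fft2(u)
    U[mask] = y_obs[mask]
    return np.real(np.fft.ifft2(U))

def pocs_mri(y_obs, mask, project_tv_ball, n_iter=30):
    """Alternating projections between the TV ball (S2) and the
    data-consistency set (S3); `project_tv_ball` is supplied by the
    caller, e.g. an implementation of Condat's method [28]."""
    u = np.real(np.fft.ifft2(np.where(mask, y_obs, 0)))  # zero-filled start
    for _ in range(n_iter):
        u = project_tv_ball(u)                           # projection onto S2
        u = project_data_consistency(u, y_obs, mask)     # projection onto S3
    return u
```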
Since $S_1$ and $S_3$ are convex and $S_2$ is expandable into convex sets, we deduce from Theorem 2 that Algorithm 1 converges strongly to a point in $\bigcap_{i=1}^{3} S_i$.

5.2. A Uniformly Convex Version of Cadzow’s Method

The method presented in Section 4 is now applied to a denoising problem in the spirit of Cadzow’s method [16].
Let $x = [x_0, \ldots, x_{N-1}]$ be a complex-valued vector of length $N$. Let $\mathcal{H}$ be the linear operator which maps $x$ into the set of $L \times K$ Hankel matrices, with $L \le K$, $L + K = N + 1$, defined by setting the $(i, j)$ component to the value $x_{i+j-2}$, i.e.,
$$\mathcal{H}(x) = \begin{pmatrix} x_0 & x_1 & x_2 & \cdots & x_{K-1} \\ x_1 & x_2 & x_3 & \cdots & x_K \\ x_2 & x_3 & x_4 & \cdots & x_{K+1} \\ \vdots & \vdots & \vdots & & \vdots \\ x_{L-1} & x_L & x_{L+1} & \cdots & x_{N-1} \end{pmatrix}.$$
The adjoint operator associated with $\mathcal{H}$, denoted by $\mathcal{H}^*$, is a linear map from $L \times K$ matrices to vectors of length $N$, obtained by averaging each skew-diagonal.
The denoising problem we consider in this section is the one of estimating a signal x corrupted by random noise, as described in the observation model
$$y_t = x_t + \epsilon_t$$
for $t = 0, \ldots, N-1$, where
• $x_t$ is a sum of damped exponential components, i.e.,
$$x_t = \sum_{k=-\kappa}^{\kappa} c_k\, \rho_k^t\, \exp(2 \iota \pi f_k t),$$
with $2\kappa + 1 \le K$, $\rho_k$ a real damping factor satisfying $|\rho_k| \le 1$, $f_k$ a real number representing a frequency and $c_k$ a complex coefficient;
• $\epsilon_t$ is a random noise.
This denoising problem is of crucial importance in many applications and is also known as super-resolution in the literature; see [16,29,30,31,32,33]. The main motivation behind Cadzow's algorithm is that many signals of interest have a low rank Hankelization. In mathematical terms, we can often perform approximation and denoising for signals which satisfy the constraint $\mathrm{rank}(\mathcal{H}(x)) = r$ where $r \ge 2\kappa + 1$. Such constraints are important when the observed signal is corrupted with additive noise, which increases the rank of $\mathcal{H}(x)$. Consequently, an intuitive way of estimating $x$ from $y$ is to perform rank reduction: starting from $x^{(0)} = y = [y_0, \ldots, y_{N-1}]$, Cadzow's algorithm iteratively updates the estimate via the following rule
$$x^{(n+1)} = \mathcal{H}^*\bigl( T_r( \mathcal{H}(x^{(n)}) ) \bigr), \quad n = 0, 1, \ldots$$
Here $T_r$ computes the truncated singular value decomposition (SVD) of any $L \times K$ matrix $X$, that is,
$$T_r(X) = \sum_{j=1}^{r} \sigma_j u_j v_j^*,$$
with
$$X = \sum_{j=1}^{K} \sigma_j u_j v_j^*$$
being an SVD of $X$ and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_K$ being the singular values. As is now well known, Cadzow's algorithm is easily interpreted as a method of alternating projections in the matrix domain [23]. Denote by $P_{M_r}$ the projection onto the set $M_r$ of $L \times K$ matrices of rank $r$, and by $P_{M_H}$ the projection onto the space $M_H$ of $L \times K$ Hankel matrices. Then, Cadzow's method is easily seen to be equivalent to the matrix recursion
$$X^{(n+1)} = P_{M_H}\, P_{M_r}\bigl( X^{(n)} \bigr), \quad n = 0, 1, \ldots$$
Since $M_H$ is convex and $M_r$ is expandable into convex sets, we deduce from the main result of [22] that Cadzow's method has cluster points in $M_H \cap M_r$. On the other hand, applying the method of Section 4 can be advantageous for projection methods under low rank matrix constraints such as those arising in Cadzow's method. In particular, our method shows that the SVD need only be computed approximately in the first iterations, as long as the result still belongs to the ellipsoids $B_{i,n}$, which can yield substantial computational savings. Our new method converges to a point in $M_H \cap M_r$. A sketch of the classical iteration is given below.
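For concreteness, here is a numpy sketch of the classical Cadzow recursion just described (the plain alternating projections, not the uniformly convex variant of Section 4, which would additionally project onto the ellipsoids $B_{i,n}$); the helper names are ours:

```python
import numpy as np

def hankel(x, L):
    """H(x): L x K Hankel matrix with (i, j) entry x[i + j] (0-based),
    where K = N + 1 - L."""
    N = len(x)
    K = N + 1 - L
    return np.array([[x[i + j] for j in range(K)] for i in range(L)])

def hankel_average(M):
    """H^*-style step used by Cadzow: average each skew-diagonal of M
    back into a length-N vector."""
    L, K = M.shape
    N = L + K - 1
    x = np.zeros(N, dtype=M.dtype)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            x[i + j] += M[i, j]
            counts[i + j] += 1
    return x / counts

def truncated_svd(M, r):
    """T_r: keep the r leading singular triplets."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def cadzow(y, r, L, n_iter=50):
    """Alternate between the rank-r set and the Hankel structure."""
    x = np.asarray(y, dtype=complex)
    for _ in range(n_iter):
        x = hankel_average(truncated_svd(hankel(x, L), r))
    return x
```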
An example showing the efficiency of Cadzow's method is presented in Figure 3 below. In this experiment, we set the signal-to-noise ratio (SNR) to 3.5 dB and the rank constraint to 15 (an appropriate choice for the rank can be made using statistical techniques such as cross-validation). We took $K = N/2 - 10$.

6. Conclusions and Future Work

The work presented in this paper is a follow-on project to the work of [22]. We obtained strong convergence results in Hilbert spaces in the case of uniformly convex expandable sets, a notion which refines the definition of convex expandable sets introduced in [22,23]. We showed that the proposed methods apply to practical inverse and denoising problems which are instrumental in engineering, signal processing and data analytics.
Future work is planned on developing new and faster algorithms for nonconvex feasibility problems, using acceleration schemes such as Richardson extrapolation [34].

Author Contributions

S.C. and P.B. contributed to all the mathematical findings presented in the paper and to the various stages of the writing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the iCODE Institute, research project of the IDEX Paris-Saclay, and by the Hadamard Mathematics LabEx (LMH) through the grant number ANR-11-LABX-0056-LMH in the "Programme d'Investissement d'Avenir".

Acknowledgments

The authors would like to thank the reviewers for the high quality of their comments, which greatly helped improve the quality of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Definition of the Total Variation (TV) Norm

Recall that a function $u$ is in $BV(\Omega)$ for a bounded open set $\Omega \subset \mathbb{R}^d$ if it is integrable and there exists a Radon measure $Du$ such that
$$\int_\Omega u(x)\, \mathrm{div}\, g(x)\, dx = -\int_\Omega \langle g, Du(x) \rangle\, dx$$
for all $g \in C_c^1(\Omega, \mathbb{R}^d)$. This measure $Du$ is the distributional gradient of $u$. When $u$ is smooth, $Du(x) = \nabla u(x)\, dx$, and the $TV$-norm is equivalently the integral of the gradient magnitude,
$$\|u\|_{TV(\Omega)} = \int_\Omega |\nabla u|\, dx.$$
For $p \in [1, +\infty]$, the $L^p$-$TV$ norm is defined by
$$\|u\|_{p, TV(\Omega)} = \Bigl( \int_\Omega |\nabla u|^p\, dx \Bigr)^{1/p}. \tag{A1}$$
For $p \in (0, 1)$, formula (A1) defines a nonconvex quasi-norm, called the $L^p$-$TV$ norm by abuse of language, as is common in the image processing community.
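For illustration (our own discretisation; conventions vary, see [28] for a careful discrete definition), a forward-difference analogue of (A1) on a 2D image reads:

```python
import numpy as np

def tv_p(u, p=1.0):
    """Forward-difference analogue of the L^p-TV norm (A1) for a 2D
    image u; the boundary is handled by replicating the last row/column."""
    gx = np.diff(u, axis=0, append=u[-1:, :])   # vertical differences
    gy = np.diff(u, axis=1, append=u[:, -1:])   # horizontal differences
    mag = np.sqrt(gx ** 2 + gy ** 2)            # gradient magnitude field
    return (mag ** p).sum() ** (1.0 / p)
```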

Appendix B. Niculescu's Lemma

Niculescu’s Lemma [35] (Theorem 2), [36] is a converse of the Cauchy-Schwarz inequality. It can be stated as follows.
Lemma 4.
Assume there exist two positive real numbers $\omega$ and $\omega'$ such that
$$\langle x - \omega y \mid x - \omega' y \rangle \le 0.$$
Then we have
$$\langle x, y \rangle \ge \langle x, x \rangle^{1/2}\, \langle y, y \rangle^{1/2}\, \frac{2\sqrt{\omega \omega'}}{\omega + \omega'}.$$

References

1. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2011; Volume 408.
2. Escalante, R.; Raydan, M. Alternating Projection Methods; SIAM: Philadelphia, PA, USA, 2011.
3. Gurin, L.; Polyak, B.T.; Raik, È.V. The method of projections for finding the common point of convex sets. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki 1967, 7, 1211–1228.
4. Ottavy, N. Strong convergence of projection-like methods in Hilbert spaces. J. Optim. Theory Appl. 1988, 56, 433–461.
5. Combettes, P.L.; Puh, H. Iterations of parallel convex projections in Hilbert spaces. Numer. Funct. Anal. Optim. 1994, 15, 225–243.
6. Grigoriadis, K.M. Optimal H∞ model reduction via linear matrix inequalities: Continuous- and discrete-time cases. Syst. Control Lett. 1995, 26, 321–333.
7. Babazadeh, M.; Nobakhti, A. Direct synthesis of fixed-order H∞ controllers. IEEE Trans. Autom. Control 2015, 60, 2704–2709.
8. Li, Z.; Dai, Y.H.; Gao, H. Alternating projection method for a class of tensor equations. J. Comput. Appl. Math. 2019, 346, 490–504.
9. Combettes, P. The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics; Elsevier: Amsterdam, The Netherlands, 1996; Volume 95, pp. 155–270.
10. Krol, A.; Li, S.; Shen, L.; Xu, Y. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction. Inverse Probl. 2012, 28, 115005.
11. Herman, G.T. Fundamentals of Computerized Tomography: Image Reconstruction from Projections; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009.
12. McGibney, G.; Smith, M.; Nichols, S.; Crawley, A. Quantitative evaluation of several partial Fourier reconstruction algorithms used in MRI. Magn. Reson. Med. 1993, 30, 51–59.
13. Ticozzi, F.; Zuccato, L.; Johnson, P.D.; Viola, L. Alternating projections methods for discrete-time stabilization of quantum states. IEEE Trans. Autom. Control 2017, 63, 819–826.
14. Drusvyatskiy, D.; Li, C.K.; Pelejo, D.C.; Voronin, Y.L.; Wolkowicz, H. Projection methods for quantum channel construction. Quantum Inf. Process. 2015, 14, 3075–3096.
15. Grigoriadis, K.M.; Beran, E.B. Alternating projection algorithms for linear matrix inequalities problems with rank constraints. In Advances in Linear Matrix Inequality Methods in Control; SIAM: Philadelphia, PA, USA, 2000; pp. 251–267.
16. Cadzow, J.A. Signal enhancement—A composite property mapping algorithm. IEEE Trans. Acoust. Speech Signal Process. 1988, 36, 49–62.
17. Bauschke, H.H.; Combettes, P.L.; Luke, D.R. Phase retrieval, error reduction algorithm, and Fienup variants: A view from convex optimization. J. Opt. Soc. Am. A 2002, 19, 1334–1345.
18. Chu, M.T.; Funderlic, R.E.; Plemmons, R.J. Structured low rank approximation. Linear Algebra Appl. 2003, 366, 157–172.
19. Markovsky, I.; Usevich, K. Structured low-rank approximation with missing data. SIAM J. Matrix Anal. Appl. 2013, 34, 814–830.
20. Elser, V. Learning without loss. arXiv 2019, arXiv:1911.00493.
21. Combettes, P.L.; Trussell, H.J. Method of successive projections for finding a common point of sets in metric spaces. J. Optim. Theory Appl. 1990, 67, 487–507.
22. Chrétien, S.; Bondon, P. Cyclic projection methods on a class of nonconvex sets. Numer. Funct. Anal. Optim. 1996, 17, 37–56.
23. Chrétien, S. Méthodes de projection pour l'optimisation ensembliste non convexe. Ph.D. Thesis, Sciences Po, Paris, France, 1996.
24. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426.
25. Censor, Y.; Chen, W.; Combettes, P.L.; Davidi, R.; Herman, G.T. On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints. Comput. Optim. Appl. 2012, 51, 1065–1088.
26. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
27. Michael, V. MRI Partial Fourier Reconstruction with POCS. Available online: https://fr.mathworks.com/matlabcentral/fileexchange/39350-mri-partial-fourier-reconstruction-with-pocs?s_tid=prof_contriblnk (accessed on 13 February 2020).
28. Condat, L. Discrete total variation: New definition and minimization. SIAM J. Imaging Sci. 2017, 10, 1258–1290.
29. Plonka, G.; Potts, D.; Steidl, G.; Tasche, M. Numerical Fourier Analysis: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2018.
30. Al Sarray, B.; Chrétien, S.; Clarkson, P.; Cottez, G. Enhancing Prony's method by nuclear norm penalization and extension to missing data. Signal Image Video Process. 2017, 11, 1089–1096.
31. Barton, E.; Al-Sarray, B.; Chrétien, S.; Jagan, K. Decomposition of dynamical signals into jumps, oscillatory patterns, and possible outliers. Mathematics 2018, 6, 124.
32. Moitra, A. Super-resolution, extremal functions and the condition number of Vandermonde matrices. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, Chicago, IL, USA, 22–26 June 2015; pp. 821–830.
33. Chrétien, S.; Tyagi, H. Multi-kernel unmixing and super-resolution using the Modified Matrix Pencil method. J. Fourier Anal. Appl. 2020, 26, 18.
34. Bach, F. On the Unreasonable Effectiveness of Richardson Extrapolation. Available online: https://francisbach.com/richardson-extrapolation/ (accessed on 13 February 2020).
35. Dragomir, S.S. A generalisation of the Cassels and Greub–Reinboldt inequalities in inner product spaces. arXiv 2003, arXiv:math/0306352.
36. Niculescu, C.P. Converses of the Cauchy–Schwarz inequality in the C*-framework. RGMIA Research Report Collection 2001, Volume 4. Available online: https://rgmia.org/v4n1.php (accessed on 15 February 2020).
Figure 1. Section of the ellipsoid $B_{i,n}$.
Figure 2. POCS reconstruction with 30 iterations.
Figure 3. Application of our method to denoising a benchmark electrocardiogram (ECG) type signal.
