K-step analysis of orthogonal greedy algorithms for non-negative sparse representations
Introduction
Greedy schemes are standard techniques for sparse approximation, with a relatively low computational cost compared to exact methods [4], [28]; convex relaxation is another important branch of approximate methods [14], [33]. The principle of greedy schemes is to sequentially select atoms from a given dictionary so as to decrease the residual error. In orthogonal greedy algorithms, the decrease is maximal in the least-squares sense, and the approximated signal is computed as the orthogonal projection of the data vector onto the subspace spanned by the selected atoms. Orthogonal Matching Pursuit (OMP) [30] and Orthogonal Least Squares (OLS) [9] are two well-known instances of orthogonal greedy algorithms, which differ only in their atom selection rule. The OMP rule simply maximizes the magnitude of the inner product between the residual vector and the candidate atoms, assumed normalized. The OLS rule can be interpreted similarly, except that the involved atoms are renormalized, projected versions of the candidate atoms [3], [32]. OLS is known under many other names, e.g., ORMP [11] and OOMP [31]. Throughout this paper, we use the generic acronym Oxx in statements that refer to both OMP and OLS.
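To make the difference between the two selection rules concrete, here is a minimal numpy sketch (ours, for illustration only; the function names and the brute-force OLS loop are not from the paper): OMP picks the atom most correlated in magnitude with the residual, while OLS picks the atom whose inclusion in the support yields the smallest least-squares residual.

```python
import numpy as np

def omp_select(A, r):
    """OMP rule: index of the unit-norm atom maximizing |<a_i, r>|."""
    return int(np.argmax(np.abs(A.T @ r)))

def ols_select(A, y, S):
    """OLS rule: index i minimizing the least-squares residual obtained
    when atom i is appended to the current support S (brute force)."""
    best_i, best_err = -1, np.inf
    for i in range(A.shape[1]):
        if i in S:
            continue
        A_sub = A[:, S + [i]]
        x, *_ = np.linalg.lstsq(A_sub, y, rcond=None)
        err = np.linalg.norm(y - A_sub @ x)
        if err < best_err:
            best_i, best_err = i, err
    return best_i
```

For an orthonormal dictionary the two rules select the same atom; they start to differ once the atoms are mutually correlated.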
In many applications such as multispectral unmixing [20], machine learning [23], mass spectroscopy [10] and fluid mechanics [1], the sought solution is required to be sparse and non-negative. Non-negative sparse reconstruction can be addressed using iterative thresholding algorithms (including NNSP, NNCoSaMP, and NNHTP) [21], but also using orthogonal greedy algorithms. The latter were naturally extended to the non-negative setting [5], [21], [38]; the main impact of the sign constraint is that Non-Negative Least-Squares (NNLS) problems must be solved to update the sparse approximation coefficients. This increases the computation time, since NNLS subproblems have no closed-form solution and an iterative subroutine is needed. In [29], following the early work of [38], we proposed fully recursive implementations. We showed that non-negative greedy algorithms yield accurate empirical results and that their computational cost is of the same order of magnitude as that of Oxx for moderate-size problems.
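As a concrete illustration of the coefficient update, the NNLS subproblem min_{x ≥ 0} ‖Ax − y‖₂ can be solved with an off-the-shelf iterative routine such as scipy.optimize.nnls (toy data of our own, not taken from the paper):

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
y = np.array([1.0, -0.5])

# min ||Ax - y||_2  subject to  x >= 0  (solved iteratively: no closed form)
x, rnorm = nnls(A, y)
# The unconstrained least-squares solution (1.25, -0.5) has a negative
# entry; NNLS instead clips that coordinate to the boundary: x = (1, 0).
```

This is exactly why the non-negative variants cost more than Oxx: each support update triggers such an iterative solve unless a recursive scheme as in [29] is used.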
The primary motivation of this paper is to elaborate mathematical conditions guaranteeing that the support of non-negative K-sparse representations is exactly recovered in K steps. While there is a rich literature on K-step recovery analysis with Oxx (based on, e.g., mutual incoherence [2], [6], [19] and restricted isometry assumptions [22], [24], [35], [36]) and other greedy algorithms, much less attention has been paid to their non-negative versions. The existing analyses are scarce, and sharp worst-case exact recovery conditions are not even available. Our objective is to fill this gap in the literature and to derive sharp conditions for exact support recovery with non-negative versions of OMP and OLS.
Non-negative OMP was first introduced by Bruckstein et al. [5] (simply under the name OMP), and then renamed NNOMP in [38] (see also [21], [29]). It relies on the repeated maximization of the positive inner product between the residual vector and the dictionary atoms, followed by the resolution of an NNLS problem. Existing analyses of NNOMP are rare and somewhat discordant. On the one hand, Bruckstein et al. claimed that the Mutual Incoherence Property (MIP) analysis carries over to NNOMP [5] and that the proof should be similar to the one given in [13], [34] for OMP. Specifically, [5, Th. 3] states without proof that any non-negative K-sparse representation can be exactly recovered in K steps using NNOMP as long as the mutual coherence μ of the dictionary satisfies μ < 1/(2K−1). On the other hand, Kim et al. elaborated a unified MIP analysis of NNOMP and its generalized version in the multiple measurement vector setting [21, Th. 1]. In the specific case of NNOMP, i.e., for single measurement vectors, the related MIP turns out to be very restrictive, holding only in degenerate cases.
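The resulting NNOMP iteration can be sketched compactly as follows (our own didactic code; the efficient recursive implementations of [29] avoid re-solving NNLS from scratch at each step): the selection keeps the *signed* inner product, and the coefficient refit is an NNLS solve.

```python
import numpy as np
from scipy.optimize import nnls

def nnomp(A, y, K):
    """NNOMP sketch: greedily select the atom with the largest signed
    correlation with the residual (no absolute value, unlike OMP), then
    refit the coefficients on the support by non-negative least squares."""
    S, r, x_S = [], y.copy(), np.zeros(0)
    for _ in range(K):
        i = int(np.argmax(A.T @ r))
        if A[:, i] @ r <= 0:      # no positively correlated atom: stop
            break
        S.append(i)
        x_S, _ = nnls(A[:, S], y)     # iterative NNLS subproblem
        r = y - A[:, S] @ x_S
    x = np.zeros(A.shape[1])
    x[S] = x_S
    return x, S
```

The absence of the absolute value in the selection step is precisely what makes the classical MIP proof technique inapplicable, as discussed next.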
Indeed, we believe that [5, Th. 3] cannot be proved as a direct extension of [13], [34], as claimed by Bruckstein et al. The major obstacle is that the NNOMP selection rule compares signed inner products, whereas a small mutual coherence condition only yields a bound on the unsigned magnitude of inner products (see Section 2.4 for further details). This is precisely why Kim et al.'s analysis, which closely follows Donoho et al.'s approach [13], yields over-pessimistic guarantees instead of the expected result.
Our first contribution is to show that any non-negative K-sparse representation can be exactly recovered in K steps with NNOMP when the mutual coherence μ of the dictionary satisfies μ < 1/(2K−1). We further show that under the same condition, the non-negative extensions of OLS proposed in [37], named NNOLS and Suboptimal NNOLS (SNNOLS), are also guaranteed to recover the true support in K steps. To the best of our knowledge, these algorithms have never been analyzed before. The analysis of NNOMP, NNOLS and SNNOLS is carried out in a unified way, and extends to noisy cases with bounded noise.
Our second contribution is to unveil a sign preservation property satisfied by Oxx for non-negative sparse representations. It is well known that when μ < 1/(2K−1), Oxx algorithms achieve K-step exact support recovery [19], [34]. We further show that at any iteration, the nonzero coefficients found by Oxx are positive. This property is of standalone interest, and turns out to be the cornerstone of our recovery analysis of the non-negative extensions of Oxx. It enables us to prove that OMP and NNOMP coincide when μ < 1/(2K−1), so [5, Th. 3] becomes a byproduct of our sign preservation analysis. Under the same condition, we prove that OLS coincides with both NNOLS and SNNOLS [37].
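The sign preservation property is easy to probe numerically. The toy script below (ours, not from the paper) runs plain OMP on a non-negative sparse representation and records whether all intermediate least-squares coefficients are strictly positive at every iteration; an orthonormal dictionary is used so that the coherence condition trivially holds.

```python
import numpy as np

def omp_signs(A, y, K):
    """Run K iterations of OMP; for each iteration, record whether all
    current least-squares coefficients are strictly positive."""
    S, all_positive = [], []
    for _ in range(K):
        if S:
            x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
            r = y - A[:, S] @ x_S
        else:
            r = y
        S.append(int(np.argmax(np.abs(A.T @ r))))   # plain OMP rule
        x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        all_positive.append(bool(np.all(x_S > 0)))
    return S, all_positive
```

In this regime every flag comes out True, which is exactly the mechanism making OMP and NNOMP (and OLS and NNOLS/SNNOLS) coincide.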
The paper is organized as follows. Section 2 recalls known results about greedy algorithms and their non-negative versions. Section 3 contains our K-step analysis of non-negative greedy algorithms. The central sign preservation property of Oxx is stated as Theorem 3.1 and proved in the same section, most technical steps being postponed to the Appendix. The numerical simulations of Section 4 illustrate the average behavior of the algorithms outside the exact support recovery regime. In Section 5, an extensive discussion is provided on possible analyses for coherent dictionaries, and on other analysis techniques.
Notations
Let y ∈ ℝ^m denote the data signal and A = [a_1, …, a_n] ∈ ℝ^{m×n} the dictionary of elementary atoms a_i, i ∈ {1, …, n}. We are interested in the so-called K-sparse representation y = Ax, in which the vector x ∈ ℝ^n has K non-zero elements. Without loss of generality, the atoms are assumed to be normalized, that is, ‖a_i‖ = 1, where ‖·‖ denotes the ℓ2 norm. Notations (·)ᵀ and (·)† stand for the transpose and the Moore-Penrose pseudo-inverse, respectively. For any set of indices S, the subdictionary A_S and subvector x_S indexed
Exact recovery and sign preservation
This section contains our main results concerning the exact recovery analysis of non-negative greedy algorithms under the MIP assumption. The cornerstone of our study is Theorem 3.1; the subsequent exact recovery results are direct consequences of it. In Subsection 3.2, the proof of Theorem 3.1 is decomposed into distinct steps, most technical elements being postponed to the Appendix.
Comparison of Oxx and their non-negative versions
The previous section showed that in certain favorable situations, greedy algorithms such as OMP not only recover the support of the true solution, but also yield sparse representations with non-negative weights. In such conditions, according to Lemma 3.1, implementing non-negative versions of greedy algorithms brings no benefit. On the contrary, one can empirically observe that non-negative greedy algorithms reach superior performance for coherent dictionaries and for noisy scenarios [29],
Contributions and links with alternative algorithms
Our analysis of non-negative greedy algorithms essentially relies on the discovery of the sign preservation property for Oxx algorithms. Indeed, we showed that the Oxx algorithms yield the same iterates as their non-negative extensions under the MIP condition. Moreover, the latter condition is not only a sufficient but also a (worst-case) necessary condition for exact recovery. A strong feature of our analysis is that NNOMP, NNOLS and SNNOLS are analyzed in a unified way. In contrast, many
CRediT authorship contribution statement
Thanh T. Nguyen: Writing - original draft, Software. Charles Soussen: Methodology, Writing - review & editing. Jérôme Idier: Conceptualization, Investigation, Supervision. El-Hadi Djermoune: Formal analysis, Software, Visualization.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgment
This work is supported by the ANR project BECOSE (ANR-15-CE23-0021).
References (38)
- et al., Greedy algorithms for nonnegativity-constrained simultaneous sparse recovery, Signal Process. (2016)
- et al., Sufficient conditions for generalized orthogonal matching pursuit in noisy case, Signal Process. (2015)
- et al., A new approach for volume reconstruction in tomoPIV with the alternating direction method of multipliers, Meas. Sci. Technol. (2016)
- et al., Coherence-based performance guarantees for estimating a sparse vector under random noise, IEEE Trans. Signal Process. (2010)
- et al., On the Difference Between Orthogonal Matching Pursuit and Orthogonal Least Squares, Technical report (2007)
- et al., Exact sparse approximation problems via mixed-integer programming: formulations and computational performance, IEEE Trans. Signal Process. (2016)
- et al., On the uniqueness of nonnegative sparse solutions to underdetermined systems of equations, IEEE Trans. Inf. Theory (2008)
- et al., Orthogonal matching pursuit for sparse signal recovery with noise, IEEE Trans. Inf. Theory (2011)
- et al., Stable recovery of sparse signals using an oracle inequality, IEEE Trans. Inf. Theory (2010)
- et al., An improved RIP-based performance guarantee for sparse signal recovery via orthogonal matching pursuit, IEEE Trans. Inf. Theory (2014)
- Orthogonal least squares methods and their application to non-linear system identification, Int. J. Control
- Fast dictionary-based approach for mass spectrometry data analysis, Proc. IEEE ICASSP, Calgary, Canada
- Forward sequential algorithms for best basis selection, IEE Proc. Vis. Image Signal Process.
- Analysis of orthogonal matching pursuit using the restricted isometry property, IEEE Trans. Inf. Theory
- Stable recovery of sparse overcomplete representations in the presence of noise, IEEE Trans. Inf. Theory
- Fast solution of ℓ1-norm minimization problems when the solution may be sparse, IEEE Trans. Inf. Theory
- Least angle regression, Ann. Stat.
- A mathematical introduction to compressive sensing, Applied and Numerical Harmonic Analysis
- On sparse representations in arbitrary redundant bases, IEEE Trans. Inf. Theory