1 Introduction

The problem of minimizing or maximizing the spectral radius of a non-negative matrix over a general constrained set can be very hard. There are no efficient algorithms even for solving this problem over a compact convex (say, polyhedral) set of positive matrices. This is because the objective function is neither convex nor concave in the matrix coefficients and is, in general, non-Lipschitz. There may be many local extrema, which are, moreover, hard to compute. Nevertheless, for some special sets of matrices efficient methods do exist. In this paper we consider the so-called product families. Their spectral properties were discovered and analysed by Blondel and Nesterov in [6]; the first methods of optimizing the spectral radius over such families originated in [29]. One of them, the spectral simplex method, was further developed in [34]. This method is quite simple to implement and converges fast to the global minimum/maximum. Similar methods for optimising the spectral radius have appeared in the literature in various contexts: in stochastic and entropy games (see [1] and references therein) and in PageRanking [10, 14]. The relation to Markov Decision Processes is discussed in detail in Remark 1 below.

A modification of the spectral simplex method, the greedy method [1, 30], demonstrates very good efficiency. Within 3–4 iterations, it finds the global minimum/maximum with high precision. Moreover, the number of iterations does not grow significantly with the dimension. However, for sparse matrices this method often runs into trouble: it may either cycle or converge to a very rough estimate. We provide an example of cycling in Sect. 3: a product family of \(3\times 3\) matrices for which the greedy method cycles and the value of the spectral radius stabilises far from the true optimal value. For large sparse matrices, this phenomenon occurs quite often. This is a serious disadvantage, since most applications deal with sparse matrices. There is a standard perturbation technique, in which all zeros of a sparse non-negative matrix are replaced by small strictly positive entries. It is applied to avoid negative effects of sparsity: multiple Perron eigenvalues, divergence of the power method, etc. However, in our case it often causes computational problems: due to the high dimensions and the large number of matrices involved, we either obtain a rough approximation of the spectral radius or set the perturbation value too small, which leads to cycling again.

In this paper we develop the greedy method so that it works equally well for all product families of non-negative matrices, including sparse ones. Numerical results presented in Sect. 6 show that even in dimensions of several thousand, the greedy method finds the minimal and maximal spectral radii within a few iterations. We also estimate the rate of convergence and thus theoretically justify the efficiency of the method. Finally, we consider several applications.

Before introducing the main concepts, let us fix some notation. We denote vectors by bold letters and their components by standard letters, so \({\varvec{a}}= (a_1, \ldots , a_d) \in {\mathbb {R}}^d\). The support of a non-negative vector is the set of indices of its nonzero components. For a non-negative \(d\times d\) matrix A, we denote by \(a_{ij}\) its entries and by \(\rho (A)\) its spectral radius, i.e., the maximal modulus of its eigenvalues. By the Perron–Frobenius theorem, \(\rho (A)\) is equal to the maximal non-negative eigenvalue \(\lambda _{\max }\), called the leading eigenvalue of the matrix A. The corresponding non-negative eigenvector \({\varvec{v}}\) is also called leading. Note that the leading eigenvector may not be unique. Each non-negative \(d\times d\) matrix A is associated with a directed graph with d vertices numbered \(1, \ldots , d\) and defined as follows: there is an edge from i to j if and only if \(a_{ji} >0\). A non-negative matrix is reducible if, after some renumbering of the coordinates, it takes a block upper-triangular form. The matrix is irreducible precisely when its graph is strongly connected. An irreducible matrix has a unique (up to normalization) leading eigenvector. We usually normalize it so that \(\Vert {\varvec{v}}\Vert = 1\), where \(\Vert {\varvec{v}}\Vert = \sqrt{({\varvec{v}}, {\varvec{v}})}\) is the Euclidean norm.

Definition 1

Let \({\mathcal {F}}_i \subset {\mathbb {R}}^d_+, \, i = 1, \ldots , d\), be arbitrary nonempty compact sets referred to as uncertainty sets. Consider a matrix A such that for each \(i = 1, \ldots , d\), the ith row of A belongs to \({\mathcal {F}}_i\). The family of all such matrices A is denoted by \({\mathcal {F}}\) and called a product family or a family with product structure.

Thus, we compose a matrix A by choosing each row from the corresponding uncertainty set. The family of all such matrices \({\mathcal {F}}= {\mathcal {F}}_1\times \cdots \times {\mathcal {F}}_d\) is, in a sense, a product of uncertainty sets. Of course, not every compact set of non-negative matrices has a product structure. Nevertheless, the class of product families is important in many applications, for instance in dynamical systems and asynchronous systems [22,23,24, 34], Markov Decision Processes [2, 39], PageRanking [14], graph theory [12, 25, 29], mathematical economics [6, 9, 26], game theory [1, 4, 21, 36], matrix population models (see [26] and references therein), etc.

We consider the problems of optimising the spectral radius over a product family \({\mathcal {F}}\):

$$\begin{aligned} \left\{ \begin{array}{l} \rho (A)\ \rightarrow \ \min {/}\max \\ A \, \in \, {\mathcal {F}}\end{array} \right. \end{aligned}$$
(1)

Both minimization and maximization problems can be efficiently solved [29]. The spectral simplex method for finding the global minimum and maximum demonstrates fast convergence [34]. For example, if all the sets \({\mathcal {F}}_i\) are two-element (so the set \({\mathcal {F}}\) has the structure of a Boolean cube), then in dimension \(d=100\), the matrices with minimal and maximal spectral radii are found (precisely!) within 50–60 iterations, in 10–15 s on a standard laptop. Note that in this case \({\mathcal {F}}\) contains \(|{\mathcal {F}}| = 2^{100} > 10^{30}\) matrices. If for the same dimension \(d=100\), each uncertainty set consists of 100 rows (so, \(|{\mathcal {F}}| = 10^{200}\)), then the spectral simplex method performs about 200–300 iterations and solves problem (1) in less than a minute. We present the algorithm in the next section, but its idea can be described in a few lines. Let us consider the maximization problem (1). We start with an arbitrary matrix \(A_1 \in {\mathcal {F}}\) with rows \({\varvec{a}}_i \in {\mathcal {F}}_i, \, i = 1, \ldots , d\), and compute its leading eigenvector \({\varvec{v}}_1\). Then we solve the problem \(({\varvec{v}}_1, {\varvec{a}})\rightarrow \max , \, {\varvec{a}}\in {\mathcal {F}}_1\); in other words, we find an element of \({\mathcal {F}}_1\) that has the largest scalar product with the eigenvector \({\varvec{v}}_1\). If the row \({\varvec{a}}_1\) is already optimal, then we leave it and do the same with the second row and the set \({\mathcal {F}}_2\), etc. If all rows of A are optimal, then A has the maximal spectral radius. Otherwise, we replace the row \({\varvec{a}}_i\) by an optimal element from \({\mathcal {F}}_i\) and thus obtain the next matrix \(A_2\). Then we perform the next iteration, etc. Thus, by one iteration we mean one computation of the leading eigenvector of a matrix; all other operations are cheap.

If all the sets \({\mathcal {F}}_i\) consist of strictly positive rows, then the spectral radius increases at each iteration. So, we have a relaxation scheme which always converges to a global optimum. However, if the rows may have zeros, then the relaxation is non-strict (it may happen that \(\rho (A_{k+1}) = \rho (A_k)\)), and the algorithm may cycle or converge to a non-optimal matrix. In [34] this trouble was resolved and the method was modified to be applicable to sparse matrices. We formulate the idea in Theorem A below.

Recently, in [1, 30], the spectral simplex method was further improved: in each iteration we maximize not one row but all rows simultaneously. According to numerical experiments, this slight modification leads to a significant improvement: in all practical examples the algorithm terminates within 3–4 iterations. In the example above, with \(d=100, \, |{\mathcal {F}}_i|=100, i = 1, \ldots , d\), the absolute minimum/maximum is found within a few seconds, and the number of iterations rarely exceeds 5. Even in dimension \(d=2000\), the number of iterations never exceeds 10. The authors of [1] and of [30] came to problem (1) from different angles. The paper [1] studies problems from entropy games with a fixed number of states. The paper [30] solves a different problem: finding the closest stable matrix to a given matrix in the \(L_1\) norm. The authors derive a method similar to the policy iteration method and call it the greedy method. We will use the latter name. This modification of the spectral simplex method, although it converges much faster, inherits the main disadvantage: it works only for strictly positive matrices. For sparse matrices, it often gets stuck, cycles, or converges to a wrong solution. The standard perturbation technique leads to computational problems in high dimensions; we discuss this issue in Sects. 3 and 4.2. None of the ideas from [34] that successfully modified the spectral simplex method helps for the greedy method. This is shown by an example and discussed in Sect. 3. In some of the literature on entropy and stochastic games (see [1]) the positivity assumption is relaxed to irreducibility of all matrices \(A_k\) in each step, but this condition is sometimes very restrictive and is never satisfied in practice for sparse matrices. In [30] the greedy method was modified for sparse matrices, but only for the minimization problem and at the cost of a significant complication of the computational procedure. Therefore, in this paper we attack the three main problems:

The main problems:

  1.

    To extend the greedy method to sparse matrices while preserving its efficiency.

  2.

    To estimate the rate of convergence of the greedy method.

  3.

    To apply the greedy method to some known problems. We consider two applications: optimisation of the spectral radius of graphs and finding the closest stable/unstable linear system.

The first issue is solved in Sect. 4. We derive the selective greedy method, which works for all kinds of non-negative matrices. To this end we introduce and apply the notion of a selected leading eigenvector. The new greedy method is as simple to implement and as fast as the previous one.

In Sect. 5 we prove two main theorems about the rate of convergence. We show that both the simplex and the greedy methods have a global linear rate of convergence. Then we show that the greedy method (but not the spectral simplex!) has a local quadratic convergence, which explains its efficiency. We also estimate the constants and parameters of the convergence.

Numerical results for matrix families in various dimensions are reported in Sect. 6. Section 7 presents applications of the greedy method to problems of dynamical systems and of graph theory. Finally, in Sect. 8, we discuss the details of practical implementation.

Remark 1

The relation of spectral optimisation over product families to Markov Decision Processes (MDP) deserves a separate detailed discussion. MDP provide a mathematical framework for decision making when outcomes are partly random and partly under the control of a decision maker. The theory of MDP has been developed since the early 1960s and is widely known and popular nowadays; see, for instance, [2, 21, 33] and references therein.

Our spectral optimisation problem has a lot in common with MDP, from the statement of the problem to the methods of solution. First, MDP can be described by means of product families of non-negative matrices. Moreover, the policy iteration algorithm for finding the optimal strategy in MDP looks very similar to the spectral simplex method and the greedy method. That algorithm chooses the rows from the uncertainty sets that are optimal in some sense (maximise the reward), then computes a special vector \({\varvec{v}}\) (the value), then again chooses the rows using this new vector, etc.

Nevertheless, the problem of optimising the spectral radius over a product family cannot be derived from MDP, and the spectral simplex method and the greedy method are different from those existing in the MDP literature. The crucial difference is that MDP does not optimise the leading eigenvalue and does not deal with leading eigenvectors. Let us recall the main scheme of policy iteration, the algorithm which seems closest to the spectral simplex and greedy methods. For the sake of simplicity we describe this algorithm in terms of product families. We are given a number \(\gamma \in [0, 1]\) (the discount factor) and finite uncertainty sets \({\mathcal {F}}_i, \, i = 1, \ldots , d\). To each element \({\varvec{a}}_i^{(j)} \in {\mathcal {F}}_i\), one associates a given non-negative row vector \({\varvec{r}}_i^{(j)}\) (the reward). In each iteration of the algorithm, when we have a matrix \(A\in {\mathcal {F}}\) and the corresponding rewards \({\varvec{r}}_1, \ldots , {\varvec{r}}_d\), we find the vector \({\varvec{v}}= (v_1, \ldots , v_d)\in {\mathbb {R}}^d\) by solving the following linear non-homogeneous system:

$$\begin{aligned} v_i \ = \ ({\varvec{a}}_i, {\varvec{v}})\ + \ \gamma \, ({\varvec{a}}_i, {\varvec{r}}_i), \quad i = 1, \ldots , d. \end{aligned}$$
(2)

Then we choose new rows \({\varvec{a}}_i' \in {\mathcal {F}}_i, \, i = 1, \ldots , d\) (with the corresponding \({\varvec{r}}_i'\)) as a solution of the optimisation problem:

$$\begin{aligned} \max _{{\varvec{a}}_i' \in {\mathcal {F}}_i}\ \Bigl [\, ({\varvec{a}}_i', {\varvec{v}})\ + \ \gamma \, ({\varvec{a}}_i', {\varvec{r}}_i') \, \Bigr ] \end{aligned}$$
(3)

(now \({\varvec{v}}\) is fixed and the optimisation runs over the sets \({\mathcal {F}}_i\)). Then we find the new \({\varvec{v}}\) by solving the linear system (2) for the new matrix \(A'\), etc., until at some iteration we have \(A' = A\). In this case the matrix A is optimal, which corresponds to the optimal choice from the uncertainty sets. It is known that in the case of finite uncertainty sets, the policy iteration algorithm converges within finite time. Moreover, that time is actually polynomial [39]. Efficient estimates for the number of iterations in terms of \(d, |{\mathcal {F}}_i|,\) and \(\gamma \) are obtained in [20, 27, 32]. There are various versions of the policy iteration algorithm, see [17, 33]. The MDP problems can also be solved by different techniques, including linear programming [39].
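To make the comparison with the greedy method concrete, here is a minimal Python/NumPy sketch of the policy iteration scheme (2)–(3) for finite uncertainty sets. It follows the formulas above literally; the data layout (one 2D array of candidate rows per uncertainty set, with matching reward rows), the assumption that \(I - A\) is invertible for every selectable matrix, and the iteration cap are illustrative choices made here, not part of the standard MDP formulation.

```python
import numpy as np

def policy_iteration(F, R, gamma, max_iter=100):
    """Sketch of policy iteration in the notation of (2)-(3).

    F[i] is a 2D array whose rows are the candidates a_i^(j) in F_i,
    R[i] the 2D array of the corresponding reward rows r_i^(j).
    Assumes I - A is invertible for every selectable A.
    """
    d = len(F)
    choice = [0] * d                      # current row index in each F_i
    for _ in range(max_iter):
        A = np.array([F[i][choice[i]] for i in range(d)])
        c = np.array([F[i][choice[i]] @ R[i][choice[i]] for i in range(d)])
        # value vector from the linear system (2): v_i = (a_i, v) + gamma*(a_i, r_i)
        v = np.linalg.solve(np.eye(d) - A, gamma * c)
        # improvement step (3), performed row by row
        new_choice = [
            max(range(len(F[i])),
                key=lambda j: F[i][j] @ v + gamma * (F[i][j] @ R[i][j]))
            for i in range(d)
        ]
        if new_choice == choice:          # A' = A: the current policy is optimal
            return A, v
        choice = new_choice
    return A, v
```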

We see that the MDP problem is different from optimising the spectral radius, although it also deals with product families. Policy iteration, in spite of the structural similarity, is different from both the spectral simplex method and the greedy method: it solves a different problem and does not deal with leading eigenvectors, using instead another vector \({\varvec{v}}\) obtained from the linear system (2).

A natural question arises: are some of the MDP algorithms and the greedy method equivalent, i.e., can they be derived from each other? To the best of our knowledge, the answer is negative. One possible argument is that for algorithms from the MDP literature, in particular for policy iteration, there are guaranteed estimates for the rate of convergence, while the greedy algorithm and the spectral simplex method may not converge at all and may suffer from sparsity, cycling, etc., see [34] and Sect. 3 of this paper. So, in our opinion, the results of this paper are hardly applicable to MDP and vice versa.

Novelty. Let us formulate the main novelty of our results.

1.

We develop the greedy method of optimising the spectral radius over a product family of non-negative matrices. This method works equally well for infinite uncertainty sets. For polyhedral uncertainty sets given by linear inequalities, where the number of vertices can be very large, the method finds the precise optimal value within a few iterations (Sect. 6). For all other compact uncertainty sets, it finds approximate values.

2.

We prove linear convergence for the spectral simplex method and quadratic convergence for the greedy method. The parameters of linear and quadratic convergence are estimated in terms of the geometry of the uncertainty sets.

3.

We derive a simple procedure that extends the greedy method to sparse matrices without loss of efficiency. The extension is based on the use of selected leading eigenvectors.

2 Minimizing/maximizing the spectral radius over product families

2.1 The description of the methods

We have a product family \({\mathcal {A}}\) with compact uncertainty sets \({\mathcal {F}}_1, \ldots , {\mathcal {F}}_d\), which are assumed to be non-negative (i.e., to consist of entrywise non-negative vectors). The problem is to find the maximal and the minimal spectral radius of matrices from \({\mathcal {A}}\).

A matrix \(A \in {\mathcal {A}}\) is said to be minimal in each row if it has a leading eigenvector \({\varvec{v}}\ge 0\) such that \(({\varvec{a}}_i, {\varvec{v}}) = \min _{{\varvec{b}}_i \in {\mathcal {F}}_i}({\varvec{b}}_i , {\varvec{v}})\) for all \(i = 1, \ldots , d\). It is maximal in each row if it has a leading eigenvector \({\varvec{v}}> 0\) such that \(({\varvec{a}}_i, {\varvec{v}}) = \max _{{\varvec{b}}_i \in {\mathcal {F}}_i}({\varvec{b}}_i , {\varvec{v}}), \, i = 1, \ldots , d\). These two notions are not completely analogous: minimality in each row is defined with respect to an arbitrary leading eigenvector \({\varvec{v}}\ge 0\), while maximality requires a strictly positive \({\varvec{v}}\).

We first briefly describe the ideas of the methods and then give formal procedures. Optimisation of the spectral radius over a product family is done by a relaxation scheme. We consider the maximization problem first.

Starting with some matrix \(A_1 \in {\mathcal {F}}\), we build a sequence of matrices \(A_1, A_2, \ldots \) such that \(\rho (A_1) \le \rho (A_2) \le \cdots \) Each next matrix \(A_{k+1}\) is constructed from \(A_k\) by the same rule. The rule depends on the method:

Spectral simplex method. \(A_{k+1}\) is obtained from \(A_k\) by changing one of its rows \({\varvec{a}}_i^{(k)}\) so that \(({\varvec{a}}_i^{(k+1)}, {\varvec{v}}_k) > ({\varvec{a}}_i^{(k)}, {\varvec{v}}_k)\), where \({\varvec{v}}_k\) is a leading eigenvector of \(A_k\). We fix i and take as \({\varvec{a}}_i^{(k+1)}\) the element of \({\mathcal {F}}_i\) which maximizes the scalar product with the leading eigenvector \({\varvec{v}}_k\) over all \({\varvec{a}}_i \in {\mathcal {F}}_i\). The algorithm terminates when no further step is possible, i.e., when the matrix \(A_k\) is maximal in each row with respect to \({\varvec{v}}_k\).

Thus, each iteration makes a one-line correction of the matrix \(A_k\) to increase the projection of this line (i.e., row) onto the leading eigenvector \({\varvec{v}}_k\). Of course, such a row \({\varvec{a}}_i^{(k+1)}\) may not be unique. In [34] the smallest index rule was used: each time we take the smallest i for which the row \({\varvec{a}}_i^{(k)}\) is not maximal with respect to \({\varvec{v}}_k\). This strategy provides a very fast convergence to the optimal matrix. In [1] the convergence was further sped up by the pivoting rule: i is the index for which the ratio \(\frac{({\varvec{a}}_i^{(k+1)}, {\varvec{v}}_k)}{({\varvec{a}}_i^{(k)}, {\varvec{v}}_k)}\) is maximal. Thus, we always choose the steepest increase of the scalar product. One iteration of this method requires examining all indices \(i=1, \ldots , d\), which may be a disadvantage in the case of complicated sets \({\mathcal {F}}_i\).

The greedy method. \(A_{k+1}\) is obtained by replacing all rows of \(A_k\) with the maximal rows in their sets \({\mathcal {F}}_i\) with respect to the leading eigenvector \({\varvec{v}}_k\).

So, in contrast to the spectral simplex method, we change not just one row of \(A_k\) but all non-optimal rows, i.e., all rows where the scalar product \(({\varvec{a}}_i^{(k)}, {\varvec{v}}_k)\) can be increased. If the row \({\varvec{a}}_i^{(k)}\) is already optimal, we do not change it and set \({\varvec{a}}_i^{(k+1)} = {\varvec{a}}_i^{(k)}\). Otherwise we replace it by the row \({\varvec{a}}_i^{(k+1)} \in {\mathcal {F}}_i\) that gives the maximal scalar product \(({\varvec{a}}_i, {\varvec{v}}_k)\) over all \({\varvec{a}}_i \in {\mathcal {F}}_i\).

The idea behind the methods. Both the spectral simplex and the greedy methods are actually based on the well-known minimax formula for the Perron eigenvalue (see, for instance, [7]):

$$\begin{aligned} \rho (A) \ = \ \min _{i=1, \ldots , d} \, \sup _{{\varvec{x}}> 0}\, \frac{({\varvec{a}}_i, {\varvec{x}})}{x_i}, \end{aligned}$$

where A is a non-negative matrix, \({\varvec{a}}_1, \ldots , {\varvec{a}}_d\) are its rows and \({\varvec{x}}= (x_1, \ldots , x_d)\) is an arbitrary strictly positive vector. This formula makes it possible to use the product structure of a family of matrices and to actually work with rows separately. See [2, 29] for more details.
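As a quick numerical sanity check of this formula (a sketch in Python with NumPy; the random test matrix is only an illustration), note that at \({\varvec{x}}= {\varvec{v}}\), the leading eigenvector of a positive matrix, every ratio \(({\varvec{a}}_i, {\varvec{x}})/x_i\) equals \(\rho (A)\), while any other positive \({\varvec{x}}\) gives \(\min _i ({\varvec{a}}_i, {\varvec{x}})/x_i \le \rho (A)\):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(5, 5))          # a strictly positive matrix

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.abs(eigvals))
rho = eigvals[k].real
v = np.abs(eigvecs[:, k].real)                   # Perron eigenvector, positive

# at x = v every ratio (a_i, x)/x_i equals rho, so the min-sup is attained there
print(np.allclose(A @ v / v, rho))               # True

# for an arbitrary positive x the minimal ratio never exceeds rho
x = rng.uniform(0.1, 1.0, size=5)
print(np.min(A @ x / x) <= rho + 1e-12)          # True
```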

Realization. The maximal scalar product \(({\varvec{a}}, {\varvec{v}}_k)\) over all \({\varvec{a}}\in {\mathcal {F}}_i\) is found by solving the corresponding convex problem over \(\mathrm{co}({\mathcal {F}}_i)\). If \({\mathcal {F}}_i\) is finite, this is done merely by exhaustion of all elements of \({\mathcal {F}}_i\). If \({\mathcal {F}}_i\) is a polyhedron, then we find \({\varvec{a}}_i^{(k+1)}\) among its vertices by solving an LP problem with the (usual) simplex method.

Convergence. Denote \(\rho (A_k) = \rho _k\). It was shown in [34] that both methods are actually relaxations: \(\rho _{k+1} \ge \rho _k\) for all k. Moreover, if \(A_k\) is irreducible, then this inequality is strict. Hence, if in all iterations the matrices are irreducible (this is the case, for instance, when all the uncertainty sets \({\mathcal {F}}_i\) are strictly positive), then both methods produce a strictly increasing sequence of spectral radii \(\{\rho _k\}_{k\in {\mathbb {N}}}\). Hence, the algorithm never cycles. If all the sets \({\mathcal {F}}_i\) are finite or polyhedral, then the set of extreme points of the family \({\mathcal {F}}\) is finite; therefore, the algorithm arrives at the maximal spectral radius \(\rho _{\max }\) within finite time. This occurs at a matrix \(A_k\) that is maximal in each row.

If all \({\mathcal {F}}_i\) are general compact sets, then it may happen that, although a matrix maximal in each row exists, it is not reached within finite time. In this case the algorithm can be interrupted at an arbitrary step, after which we apply the a posteriori estimates for \(\rho _{\max }\) defined below. We denote \({\varvec{v}}_k \, = \, \bigl (v_{k, 1}, \ldots , v_{k, d}\bigr )\), so \(\ ({\varvec{a}}_i^{(k)}, {\varvec{v}}_k) = \rho _k v_{k, i}\). Then, for a matrix A with leading eigenvector \({\varvec{v}}_k\), we define the following values:

$$\begin{aligned} s_i(A)\ = \ \left\{ \begin{array}{lll} \max \limits _{{\varvec{b}}\in {\mathcal {F}}_i} \frac{({\varvec{b}}, {\varvec{v}}_k)}{v_{k, i}}&{}; &{} {v_{k, i}} > 0; \\ +\infty &{}; &{} {v_{k, i}} = 0; \end{array} \right. \qquad s(A)\ = \ \max _{i = 1, \ldots , d} s_i(A) \end{aligned}$$
(4)

Thus, \(s_i(A)\) is the maximal ratio between the value \(({\varvec{a}}, {\varvec{v}}_k)\) over all \({\varvec{a}}\in {\mathcal {F}}_i\) and the ith component of \({\varvec{v}}_k\). Similarly, for the minimum:

$$\begin{aligned} t_i(A)\ = \ \left\{ \begin{array}{lll} \min \limits _{{\varvec{b}}\in {\mathcal {F}}_i} \frac{({\varvec{b}}, {\varvec{v}}_k)}{v_{k, i}}&{}; &{} v_{k, i} > 0; \\ +\infty &{}; &{} {v_{k, i}} = 0; \end{array} \right. \qquad t(A)\ = \ \min _{i = 1, \ldots , d} t_i(A) \end{aligned}$$
(5)

Then the following obvious estimates for \(\rho _{\max }\) and \(\rho _{\min }\) hold:

Proposition 1

 [34] For both the spectral simplex method and the greedy method (for maximization and for minimization, respectively), we have at the kth iteration:

$$\begin{aligned} \rho _k \ \le \ \rho _{\max }\ \le \ s(A_k)\ ; \qquad t(A_k) \ \le \ \rho _{\min }\ \le \ \rho _k. \end{aligned}$$
(6)

In Sect. 5 we show that at least for strictly positive matrices, estimates (6) converge to \(\rho _{\max }\) and to \(\rho _{\min }\) respectively with linear rate. This means that for the maximisation problem \(|\rho _k - \rho _{\max }| \, \le \, C\, q^k\, \) and for the minimisation problem \(\, |\rho _k - \rho _{\min }| \, \le \, C\, q^k\), where \(q \in (0, 1)\). Moreover, for the greedy method the convergence is locally quadratic, which explains its efficiency in all practical examples.
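For finite uncertainty sets, the bounds (4)–(6) are cheap to evaluate. The following Python/NumPy helper is a minimal sketch; the data layout and the use of np.linalg.eig to extract a leading eigenvector are illustrative assumptions, not prescriptions of the method.

```python
import numpy as np

def a_posteriori_bounds(F, A):
    """Bounds (6): rho_max <= s(A) and t(A) <= rho_min, following (4)-(5).

    F[i] is a 2D array whose rows form the uncertainty set F_i; A is the
    matrix of the current iteration, with rows taken from the sets F_i.
    Sketch only: the Perron root is assumed to be the eigenvalue of
    maximal modulus returned by np.linalg.eig.
    """
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(np.abs(eigvals))
    v = np.abs(eigvecs[:, k].real)                # a leading eigenvector v_k >= 0
    s_list, t_list = [], []
    for i in range(A.shape[0]):
        if v[i] > 0:
            products = F[i] @ v                   # (b, v_k) over all b in F_i
            s_list.append(products.max() / v[i])
            t_list.append(products.min() / v[i])
        else:                                     # v_{k,i} = 0: both s_i and t_i are +infinity
            s_list.append(np.inf)
            t_list.append(np.inf)
    return max(s_list), min(t_list)               # s(A), t(A)
```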

However, for sparse matrices the situation is more difficult. The inequalities \(\rho _{k+1} \ge \rho _k\) are not necessarily strict. The algorithm may stay at the same value of \(\rho _k\) for many iterations and may even cycle. Before addressing this issue, we give formal procedures for both algorithms.

2.2 The algorithms

The spectral simplex method.

Initialization. Taking arbitrary \({\varvec{a}}_i \in {\mathcal {F}}_i, \ i = 1, \ldots , d\), we form a matrix \(A_1 \in {\mathcal {A}}\). Denote its rows by \({\varvec{a}}_1^{(1)}, \ldots , {\varvec{a}}_d^{(1)}\) and its leading eigenvector \({\varvec{v}}_1\).

Main loop. The kth iteration. We have a matrix \(A_{k} \in {\mathcal {A}}\) composed of the rows \({\varvec{a}}_i^{(k)} \in {\mathcal {F}}_i, i = 1, \ldots , d\). Compute its leading eigenvector \({\varvec{v}}_{k}\) (if it is not unique, take any of them) and, for \(i = 1, \ldots , d\), find a solution \(\bar{\varvec{a}}_i \in {\mathcal {F}}_i\) of the following problem:

$$\begin{aligned} \left\{ \begin{array}{l} ({\varvec{a}}_i ,{\varvec{v}}_{k}) \ \rightarrow \ \max \\ {\varvec{a}}_i \in {\mathcal {F}}_i \end{array} \right. \end{aligned}$$
(7)

If \({\mathcal {F}}_i\) is finite, then this problem is solved by exhaustion; if \({\mathcal {F}}_i\) is polyhedral, then it is solved as an LP problem, and \(\bar{\varvec{a}}_i\) is found among its vertices.

If \((\bar{\varvec{a}}_i, {\varvec{v}}_{k}) = ({\varvec{a}}_i^{(k)}, {\varvec{v}}_{k} )\) and \(i \le d-1\), then we set \(i = i+1\) and solve problem (7) for the next row. If \(i = d\), then the algorithm terminates. In case \({\varvec{v}}_{k} > 0\), the matrix \(A_{k}\) is maximal in each row, and \(\rho (A_{k}) = \rho _{\max }\), i.e., \(A_{k}\) is a solution. If \((\bar{\varvec{a}}_i, {\varvec{v}}_{k}) > ({\varvec{a}}_i^{(k)}, {\varvec{v}}_{k} )\), then we set \({\varvec{a}}^{(k+1)}_i = \bar{\varvec{a}}_i\), \({\varvec{a}}_j^{(k+1)} = {\varvec{a}}_j^{(k)}\) for all \(j\ne i\), and form the corresponding matrix \(A_{k+1}\). Thus, \(A_{k+1}\) is obtained from \(A_k\) by replacing its ith row by \(\bar{\varvec{a}}_i\). Then we start the \((k+1)\)st iteration.

If we follow the pivoting rule, we solve the d problems (7) for all \(i = 1, \ldots , d\) and take the row for which the value \(s_i(A_k)\) defined in (4) is maximal. Then \(A_{k+1}\) is obtained from \(A_k\) by replacing its ith row by \(\bar{\varvec{a}}_i\).

Termination. If \(A_k\) is maximal in each row and \({\varvec{v}}_k>0\), then \(\rho _{\max } = \rho (A_k)\) is a solution. Otherwise, we get an estimate for \(\rho _{\max }\) by inequality (6).

End.

Remark 2

If each row of \(A_k\) makes the maximal scalar product with the eigenvector \({\varvec{v}}_k\), but \({\varvec{v}}_k\) is not strictly positive (i.e., \(A_k\) is “almost” maximal in each row), then we cannot conclude that \(\rho _{\max } = \rho (A_k)\). However, in this case the family \({\mathcal {F}}\) is reducible: the space spanned by the vectors with the same support as \({\varvec{v}}_k\) is invariant with respect to all matrices from \({\mathcal {F}}\). Hence, the problem of maximizing the spectral radius is reduced to two problems of smaller dimensions. Therefore, before running the algorithm we can factorize \({\mathcal {F}}\) into several irreducible families. The procedure is written in detail in [34]. However, this approach is not very efficient, since the case when the final leading eigenvector is not strictly positive occurs rarely. Another way is to factorize the family after termination into two families, as described above, and then run the algorithm separately on each of them.

The spectral simplex method for the minimization problem is the same, with maximum replaced by minimum in problem (7) and the positivity assumption \({\varvec{v}}_{k}>0\) omitted in the termination step.
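For finite uncertainty sets, the whole procedure fits in a few lines. The following Python/NumPy sketch implements the maximization version with the smallest index rule; the data layout, the eigenpair computation via np.linalg.eig, the tolerance, and the iteration cap are illustrative choices, and the favourable situation of strictly positive uncertainty sets (where cycling cannot occur) is assumed.

```python
import numpy as np

def leading_eigenpair(A):
    """Leading eigenvalue and a non-negative leading eigenvector of a matrix A >= 0."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    return eigvals[k].real, np.abs(eigvecs[:, k].real)

def spectral_simplex_max(F, max_iter=10**4):
    """Spectral simplex method for maximization with the smallest index rule.

    F[i] is a 2D array whose rows form the finite uncertainty set F_i.
    Illustrative sketch assuming strictly positive uncertainty sets.
    """
    d = len(F)
    rows = [F[i][0] for i in range(d)]          # initial matrix A_1
    for _ in range(max_iter):
        A = np.array(rows)
        rho, v = leading_eigenpair(A)
        changed = False
        for i in range(d):                      # smallest index rule
            best = F[i][np.argmax(F[i] @ v)]
            if best @ v > rows[i] @ v + 1e-12:  # a strictly better row exists
                rows[i] = best                  # one-row correction of A_k
                changed = True
                break
        if not changed:                         # A_k is maximal in each row
            return rho, A
    return rho, A
```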

The greedy method

Initialization. Taking arbitrary \({\varvec{a}}_i \in {\mathcal {F}}_i, \ i = 1, \ldots , d\), we form a matrix \(A_1 \in {\mathcal {A}}\) with rows \({\varvec{a}}_1, \ldots , {\varvec{a}}_d\). Take its leading eigenvector \({\varvec{v}}_1\). Denote \({\varvec{a}}_i^{(1)}\, = \, {\varvec{a}}_i, \ i = 1, \ldots , d\).

Main loop. The kth iteration. We have a matrix \(A_{k} \in {\mathcal {A}}\) composed of the rows \({\varvec{a}}_i^{(k)} \in {\mathcal {F}}_i, i = 1, \ldots , d\). Compute its leading eigenvector \({\varvec{v}}_{k}\) (if it is not unique, take any of them) and, for \(i = 1, \ldots , d\), find a solution \(\bar{\varvec{a}}_i \in {\mathcal {F}}_i\) of problem (7). For each \(i = 1, \ldots , d\), we do the following:

If \((\bar{\varvec{a}}_i, {\varvec{v}}_{k}) = ({\varvec{a}}_i^{(k)}, {\varvec{v}}_{k} )\), then we set the ith row of \(A_{k+1}\) to be equal to that of \(A_k\), i.e., \({\varvec{a}}_i^{(k+1)} = {\varvec{a}}_i^{(k)}\).

Otherwise, if \((\bar{\varvec{a}}_i, {\varvec{v}}_{k}) > ({\varvec{a}}_i^{(k)}, {\varvec{v}}_{k} )\), then we set \({\varvec{a}}^{(k+1)}_i = \bar{\varvec{a}}_i\).

We form the corresponding matrix \(A_{k+1}\). If the first case took place for all i, i.e., \(A_{k+1} = A_k\), and if \({\varvec{v}}_k > 0\), then \(A_k\) is maximal in each row. Thus, the answer is \(\rho _{\max } = \rho (A_k)\). If \({\varvec{v}}_k\) has some zero components, then the family \({\mathcal {F}}\) is reducible; we stop the algorithm and factorize \({\mathcal {F}}\) (see Remark 2). Otherwise, we pass to the \((k+1)\)st iteration.

Termination. If \(A_k\) is maximal in each row, then \(\rho _{\max } = \rho (A_k)\) is a solution. Otherwise, we get an estimate for \(\rho _{\max }\) by inequality (6).

End.

The greedy method for the minimization problem is the same, with maximum replaced by minimum in problem (7) and the positivity assumption \({\varvec{v}}_{k}>0\) omitted in the termination step.
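A Python/NumPy sketch of the maximization version for finite uncertainty sets is given below; as in the previous sketch, the data layout, the eigenvector computation, the tolerance, and the iteration cap are illustrative assumptions. In particular, the leading eigenvector is taken directly from np.linalg.eig, so for sparse families this version can exhibit exactly the cycling described in Sect. 3; the selective variant of Sect. 4 replaces this choice by the selected leading eigenvector.

```python
import numpy as np

def greedy_max(F, max_iter=100):
    """Greedy method (maximization) for a finite product family.

    F[i] is a 2D array whose rows form the uncertainty set F_i.
    In each iteration all non-optimal rows are replaced at once.
    """
    d = len(F)
    rows = [F[i][0] for i in range(d)]             # initial matrix A_1
    for _ in range(max_iter):
        A = np.array(rows)
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        rho, v = eigvals[k].real, np.abs(eigvecs[:, k].real)
        new_rows, changed = [], False
        for i in range(d):
            best = F[i][np.argmax(F[i] @ v)]
            if best @ v > rows[i] @ v + 1e-12:     # strictly better row found
                new_rows.append(best)
                changed = True
            else:                                  # keep the old row a_i^(k)
                new_rows.append(rows[i])
        if not changed:                            # A_{k+1} = A_k: terminate
            return rho, A
        rows = new_rows
    return rho, A
```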

As we mentioned, in the case of strictly positive uncertainty sets \({\mathcal {F}}_i\) both algorithms work efficiently and, if all \({\mathcal {F}}_i\) are finite or polyhedral, they find the optimal matrix within finite time. However, if the rows from \({\mathcal {F}}_i\) have some zero components, the situation may be different. We begin with the corresponding example; then we modify the methods so that they converge for arbitrary non-negative uncertainty sets. Then, in Sect. 5, we estimate the rate of convergence.

3 The cycling and the divergence phenomena

Both the spectral simplex method and the greedy method work efficiently for strictly positive product families. However, if some matrices have zero entries, the methods may not work at all. For the spectral simplex method, this phenomenon was observed in [34]. Here is an example for the greedy method.

The example is for maximising the spectral radius in dimension 3. The starting matrix is strictly positive, so the family is irreducible. Moreover, since the starting matrix has a simple leading eigenvalue, it follows that the spectral simplex method will not cycle for this family [34, Theorem 1]. Nevertheless, the greedy method cycles. Moreover, it stabilises at the value of the spectral radius 10, while the real maximum is 12.

Thus, this example shows that the greedy method may cycle even for an irreducible family and for a totally positive starting matrix, and that the value of the objective function at which the algorithm stabilises may be far from the optimal value.

We consider the product family \({\mathcal {F}}\) of \(3\times 3\) matrices defined by the following three uncertainty sets:

$$\begin{aligned} {\mathcal {F}}_1 \ = \ \left\{ \begin{array}{lcr} (1 , &{} 1, &{} 1)\\ (0 , &{} 5, &{} 10)\\ (0 , &{} 10, &{} 5)\\ (12 , &{} 0, &{} 0) \end{array} \right\} \ \qquad {\mathcal {F}}_2 \ = \ \left\{ \begin{array}{lcr} (1 , &{} 1, &{} 1)\\ (0 , &{} 10, &{} 0) \end{array} \right\} \ ; \qquad {\mathcal {F}}_3 \ = \ \left\{ \begin{array}{lcr} (1 , &{} 1, &{} 3)\\ (0 , &{} 0, &{} 10) \end{array} \right\} \end{aligned}$$

So, \({\mathcal {F}}_1, {\mathcal {F}}_2\), and \({\mathcal {F}}_3\) consist of 4, 2 and 2 rows, respectively. Hence, we have in total \(4\times 2\times 2 = 16\) matrices. Taking the first row in each set, we compose the first matrix \(A_1\), and the algorithm runs as follows: \(A_1 \, \rightarrow \, A_2 \rightleftarrows \, A_3\), where

$$\begin{aligned} \left( \begin{array}{lcr} 1 &{} \quad 1 &{} \quad 1\\ 1 &{} \quad 1 &{} \quad 1\\ 1 &{} \quad 1 &{} \quad 3 \end{array} \right) \xrightarrow {{\varvec{v}}_1 = (1, 1, 2)^T} \left( \begin{array}{lcr} 0 &{} \quad 5 &{} \quad 10\\ 0 &{} \quad 10 &{} \quad 0\\ 0 &{} \quad 0 &{} \quad 10 \end{array} \right) \begin{array}{c} \xrightarrow {{\varvec{v}}_2 = (2, 2, 1)^T}\\ {}\\ \xleftarrow {{\varvec{v}}_3 = (2, 1, 2)^T} \end{array} \left( \begin{array}{lcr} 0 &{} \quad 10 &{}\quad 5\\ 0 &{} \quad 10 &{}\quad 0\\ 0 &{} \quad 0 &{}\quad 10 \end{array} \right) \end{aligned}$$

The matrix \(A_1\) has a unique leading eigenvector \({\varvec{v}}_1 = (1, 1, 2)^T\). Taking the maximal row in each set \({\mathcal {F}}_i\) with respect to this eigenvector, we compose the next matrix \(A_2\). It has a multiple leading eigenvalue \(\lambda = 10\). Choosing one of its leading eigenvectors, \({\varvec{v}}_2 = (2, 2, 1)^T\), we make the next iteration and obtain the matrix \(A_3\). It also has a multiple leading eigenvalue \(\lambda = 10\). Choosing one of its leading eigenvectors, \({\varvec{v}}_3 = (2, 1, 2)^T\), we make the next iteration and come back to the matrix \(A_2\). The algorithm cycles. The cycling starts at the second iteration; the length of the cycle is 2.

Some conclusions. The greedy method gets stuck on the cycle \(A_2 \rightleftarrows \, A_3\) and on the value of the spectral radius \(\rho = 10\). However, the maximal spectral radius \(\rho _{\max }\) of the family \({\mathcal {F}}\) is not 10, but 12. The value  \(\rho _{\max }\) is realized for the matrix

$$\begin{aligned} A_4 \ = \ \left( \begin{array}{lcr} 12 &{}\quad 0 &{}\quad 0\\ 1 &{} \quad 1 &{}\quad 1\\ 1 &{} \quad 1 &{}\quad 3 \end{array} \right) \end{aligned}$$

which is never attained by the algorithm.

The algorithm never terminates and, when stopped, gives a wrong solution: \(\rho (A_2) = \rho (A_3) = 10\) instead of \(\rho _{\max } = 12\). The a posteriori estimate (6) gives \(10\le \rho _{\max }\le 12.5\). The error is \(25\, \%\), and this is the best the greedy method can achieve for this family.
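The computations of this example are easy to check by machine. The following Python/NumPy snippet verifies that \(\rho (A_2) = \rho (A_3) = 10\), that one greedy step maps \(A_2\) to \(A_3\) and back under the stated choice of leading eigenvectors from the two-dimensional Perron subspaces, and, by exhausting all 16 matrices, that \(\rho _{\max } = 12\):

```python
import numpy as np
from itertools import product

F1 = np.array([[1, 1, 1], [0, 5, 10], [0, 10, 5], [12, 0, 0]])
F2 = np.array([[1, 1, 1], [0, 10, 0]])
F3 = np.array([[1, 1, 3], [0, 0, 10]])

def rho(A):
    return np.abs(np.linalg.eigvals(A)).max()

def greedy_step(A, v):
    """One greedy step for the given choice of the leading eigenvector v."""
    new_rows = []
    for F, a in zip((F1, F2, F3), A):
        b = F[np.argmax(F @ v)]
        new_rows.append(b if b @ v > a @ v else a)   # replace only if strictly better
    return np.array(new_rows)

A2 = np.array([[0, 5, 10], [0, 10, 0], [0, 0, 10]])
A3 = np.array([[0, 10, 5], [0, 10, 0], [0, 0, 10]])
v2 = np.array([2, 2, 1])          # the leading eigenvector of A2 chosen above
v3 = np.array([2, 1, 2])          # the leading eigenvector of A3 chosen above

print(rho(A2), rho(A3))                               # 10.0 10.0
print(np.array_equal(greedy_step(A2, v2), A3))        # True: A2 -> A3
print(np.array_equal(greedy_step(A3, v3), A2))        # True: A3 -> A2, the cycle

# exhaustive search over all 4*2*2 = 16 matrices confirms rho_max = 12
print(max(rho(np.array(rows)) for rows in product(F1, F2, F3)))   # 12.0, attained at A_4
```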

Of course, the cycling in this example can easily be overcome by the standard perturbation technique: it suffices to add a small positive \(\varepsilon \) to each component of the vectors in the uncertainty sets. However, we have to keep in mind that this is an illustrative example with small matrices. Numerical experiments show that for large sparse matrices cycling occurs quite often. In this case the perturbation may lead to a significant error in the computed value of \(\rho _{\max }\). Standard methods to control the change of the spectral radius are hardly applicable, because we deal not with one matrix but with all matrices of the family \({\mathcal {A}}\), and some of them can have ill-conditioned leading eigenvectors. As a result, to guarantee a small change of the spectral radii of matrices from \({\mathcal {A}}\), one needs to take \(\varepsilon \) too small, which leads to cycling again due to rounding errors at each step. We discuss this issue in more detail in Sect. 4.2.

4 Is the greedy method applicable to sparse matrices?

Can the greedy method be modified to work with sparse matrices? We need a unified approach for both the minimisation and the maximisation problems which, moreover, would not reduce the efficiency of the method. To attack this problem we first reveal the two main difficulties caused by sparsity, and then we will see how to treat them. We also mention that the same troubles occur for the spectral simplex method, but there they can be overcome [34]. However, those modifications are totally inapplicable to the greedy method. Note that the problem of sparsity is different from that of reducibility: the latter can easily be handled by reduction to several problems of smaller dimensions [1, 29], while sparsity is more serious.

4.1 Two problems: cycling and multiple choice of \({\varvec{v}}_k\)

In Sect. 3 we saw that if some vectors from the uncertainty sets have zero components, then the greedy method may cycle and, what is worse, may miss the optimal matrix and give quite a rough final estimate of the optimal value. Indeed, numerical experiments show that cycling often occurs for sparse matrices, especially for the minimization problem (for maximization it is rare but also possible). Moreover, the sparser the matrices, the more often cycling occurs.

The reason for cycling lies in multiple leading eigenvectors. Recall that an eigenvalue is called simple if it is a simple (of multiplicity one) root of the characteristic polynomial. The geometric multiplicity of an eigenvalue is the dimension of the subspace spanned by all eigenvectors corresponding to that eigenvalue. It never exceeds the algebraic multiplicity (the multiplicity of the corresponding root of the characteristic polynomial). We call the subspace spanned by the leading eigenvectors the Perron subspace. If at some iteration we get a matrix \(A_k\) with a multiple leading eigenvector, then there is an ambiguity in choosing \({\varvec{v}}_k\) from the Perron subspace. An “unsuccessful” choice may cause cycling. In the example in Sect. 3, both matrices \(A_2\) and \(A_3\) have Perron subspaces of dimension 2, and this is the reason for the trouble. On the other hand, if in all iterations the leading eigenvectors \({\varvec{v}}_k\) are simple, i.e., the corresponding geometric multiplicities are equal to one, then cycling never occurs, as the following proposition guarantees.

Proposition 2

If in all iterations of the greedy method, the leading eigenvectors \({\varvec{v}}_k\) are simple, then the method does not cycle and (in case of finite or polyhedral sets \({\mathcal {F}}_i\)) terminates within finite time. The same is true for the spectral simplex method.

This fact follows from Theorem 2 proved below. Of course, if all matrices \(A_k\) are totally positive, or at least irreducible, then, by well-known results of the Perron–Frobenius theory [16, chapter 13, §2], all \({\varvec{v}}_k\) are simple, and cycling never occurs. But how can one guarantee the simplicity of the leading eigenvectors in the case of sparse matrices? For the spectral simplex method, this problem was solved by the following curious fact:

Theorem A

[34] Consider the spectral simplex method in the maximization problem. If the initial matrix \(A_1 \in {\mathcal {F}}\) has a simple leading eigenvector, then so do all successive matrices \(A_2, A_3, \ldots \); the method does not cycle and finds the solution within finite time.

So, in the spectral simplex method, the issue is solved merely by choosing a proper initial matrix \(A_1\). In this case there is no ambiguity in choosing the leading eigenvectors \({\varvec{v}}_k\) at any iteration. Moreover, the leading eigenvectors depend continuously on the matrices; hence the algorithm is stable under small perturbations. Note that this holds only for the maximization problem! For minimizing the spectral radius, cycling can be avoided as well, but in a totally different way [34]. Let us stress that Theorem A implies the existence of a matrix in \({\mathcal {F}}\) with a simple leading eigenvector that gives the global maximum of the spectral radius in \({\mathcal {F}}\), provided \({\mathcal {F}}\) has at least one matrix with a simple leading eigenvector. In general, the global maximum can be attained at several matrices, but at least one of them has a simple leading eigenvector.

However, Theorem A does not hold for the greedy method. This is already seen in the example from Sect. 3: the totally positive matrix \(A_1\) (which therefore has a simple leading eigenvector) produces the matrix \(A_2\) with multiple leading eigenvectors. So, a proper choice of the initial matrix \(A_1\) does not guarantee that the greedy method will not cycle!

In practice, because of rounding errors, not only multiple but also “almost multiple” leading eigenvectors (when the second largest eigenvalue is very close to the first one) may cause cycling, or at least an essential slowdown of the algorithm.

Nevertheless, for the greedy method, a way to avoid cycling does exist even for sparse matrices. We suggest a strategy and show its efficiency both theoretically and numerically. Even for very sparse matrices, the algorithm works as fast as for positive ones.

4.2 Anti-cycling by selection of the leading eigenvectors

A naive idea to avoid cycling would be to slightly change all the uncertainty sets \({\mathcal {F}}_i\), making them strictly positive. For example, one could replace them by the perturbed sets \({\mathcal {F}}_{i, \varepsilon } = {\mathcal {F}}_i + \varepsilon {\varvec{e}}\), where \({\varvec{e}}= (1, \ldots , 1)\in {\mathbb {R}}^d\) and \(\varepsilon > 0\) is a small constant. So, we add a positive number \(\varepsilon \) to each entry of the vectors from the uncertainty sets. All matrices \(A \in {\mathcal {F}}\) become totally positive: \(A\, \mapsto \, A_{\varepsilon } = A + \varepsilon E\), where \(E = {\varvec{e}}{\varvec{e}}^T\) is the matrix of ones. By Proposition 2, the greedy method for the perturbed family \({\mathcal {F}}_{\varepsilon }\) does not cycle. However, this perturbation can significantly change the answer to the problem. For example, if a matrix A has \(d-1\) ones under the main diagonal and all other entries equal to zero (\(a_{ij} = 1\) if \(i - j = 1\) and \(a_{ij} = 0\) otherwise), then \(\rho (A) = 0\), while \(\rho (A_{\varepsilon }) > \varepsilon ^{1/d}\). Even if \(\varepsilon \) is very small, say \(10^{-20}\), then for \(d=50\) we have \(\rho (A_{\varepsilon }) > 0.4\), while \(\rho (A) = 0\). Hence, we cannot merely replace the family \({\mathcal {F}}\) by \({\mathcal {F}}_{\varepsilon }\) without the risk of a big error. The standard methods to control the change of the spectral radius [37] are not applicable, because we have to control it for all matrices of the family \({\mathcal {A}}\) simultaneously. Since some of the matrices from \({\mathcal {A}}\) have ill-conditioned leading eigenvectors (the matrix of eigenvectors has a large condition number), this leads to a negligibly small \(\varepsilon \), smaller than the rounding errors.
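A quick check of the numbers in this example (a Python sketch; the dimension and \(\varepsilon \) are those quoted above): the unperturbed matrix is nilpotent, so \(\rho (A) = 0\) exactly, while already the lower bound \(\varepsilon ^{1/d}\) for \(\rho (A_{\varepsilon })\) is close to 0.4. We deliberately do not compute \(\rho (A_{\varepsilon })\) with a dense eigensolver here, since the eigenvalues of such an almost defective matrix are themselves extremely sensitive to rounding.

```python
import numpy as np

d, eps = 50, 1e-20
A = np.diag(np.ones(d - 1), -1)        # ones under the diagonal: a_{ij} = 1 iff i - j = 1

# A is nilpotent (A^d = 0), hence rho(A) = 0 exactly
print(np.count_nonzero(np.linalg.matrix_power(A, d)))     # 0

# the lower bound eps**(1/d) for rho(A + eps*E) already exceeds 0.39
print(eps ** (1 / d))                                      # approx. 0.398
```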

Our crucial idea is to deal with the original family \({\mathcal {F}}\) but to use the eigenvectors \({\varvec{v}}_k\) coming from the perturbed family \({\mathcal {F}}_{\varepsilon }\).

Definition 2

The selected leading eigenvector of a non-negative matrix A is the limit \(\lim \nolimits _{\varepsilon \rightarrow 0}{\varvec{v}}_{\varepsilon }\), where \({\varvec{v}}_{\varepsilon }\) is the normalized leading eigenvector of the perturbed matrix \(A_{\varepsilon } \, = \, A\, +\, \varepsilon E\).

There is a closed formula for the selected eigenvector, which is, however, difficult to use in practice [35]. Nevertheless, the selected eigenvector can be found explicitly due to the next theorem. Before formulating it we make some observations.

If the leading eigenvector \({\varvec{v}}\) is simple, then it coincides with the selected eigenvector. If \({\varvec{v}}\) is multiple, then the selected eigenvector is a particular vector from the Perron subspace. Thus, Definition 2 provides a way to select one vector from the Perron subspace of every non-negative matrix. We are going to see that this selection works well for the greedy method.

Consider the power method for computing leading eigenvectors: \(\, {\varvec{x}}_{k+1} \, = \, A{\varvec{x}}_k, \, k \ge 0\). In most practical cases it converges to a leading eigenvector \({\varvec{v}}\) linearly, at the rate \(O \bigl ( \bigl | \frac{\lambda _2}{\lambda _1}\bigr |^k\bigr )\), where \(\lambda _1, \lambda _2\) are the largest and the second largest (by modulus) eigenvalues of A, or at the rate \(O\bigl ( \frac{1}{k} \bigr )\) if the leading eigenvalue has non-trivial (i.e., of size bigger than one) Jordan blocks. In the rare case when there are several eigenvalues of maximal modulus, the “averaged” sequence \(\frac{1}{r}\, \sum _{j=0}^{r-1} x_{k+j}\) converges to \({\varvec{v}}\), where r is the imprimitivity index (see [16, chapter 13]). In all these cases we say that the power method converges and, for the sake of simplicity, we assume that there is only one eigenvalue of maximal modulus (maybe multiple), i.e., that \(r=1\).

Theorem 1

The selected leading eigenvector of a non-negative matrix A is proportional to the limit of the power method applied to the matrix A with the initial vector \({\varvec{x}}_0 = {\varvec{e}}\).

Proof

It may be assumed that \(\rho (A) = 1\). The spectral radius of the matrix \(A_{\varepsilon }\) strictly increases in \(\varepsilon \), so \(\rho (A_{\varepsilon }) = 1 + \delta \) for some \(\delta > 0\). We have

$$\begin{aligned} \bigl ( A\, + \, \varepsilon \, {\varvec{e}}\, {\varvec{e}}^T \bigr )\, {\varvec{v}}_{\varepsilon } \ = \ (1+\delta ){\varvec{v}}_{\varepsilon }. \end{aligned}$$

Denoting by \(S_{\varepsilon } = ({\varvec{e}}, {\varvec{v}}_{\varepsilon })\) the sum of entries of the vector \({\varvec{v}}_{\varepsilon }\), we obtain

$$\begin{aligned} A{\varvec{v}}_{\varepsilon }\ + \ \varepsilon \, S_{\varepsilon }\, {{\varvec{e}}} \ = \ (1+\delta )\, {\varvec{v}}_{\varepsilon }, \end{aligned}$$

and hence

$$\begin{aligned} \left( I \, - \, \frac{1}{(1+\delta )}\, A \right) \, {\varvec{v}}_{\varepsilon } \quad = \quad \frac{\varepsilon \, S_{\varepsilon }}{1+\delta } \, {\varvec{e}}. \end{aligned}$$

Since \(\rho (A) = 1\), we have \(\rho \bigl (\frac{1}{(1+\delta )}\, A \bigr )\, = \, \frac{1}{1+\delta } \, < \, 1\). Therefore, we can apply the power series expansion:

$$\begin{aligned} \left( I \, - \, \frac{1}{(1+\delta )}\, A \right) ^{-1} \ = \ I \, + \, (1+\delta )^{-1}A \, + \, (1+\delta )^{-2}A^2+\cdots \end{aligned}$$

and obtain

$$\begin{aligned} {\varvec{v}}_{\varepsilon } \quad = \quad \frac{\varepsilon \, S_{\varepsilon }}{1+\delta }\ \sum _{k=0}^{\infty } \ (1+\delta )^{-k}A^k {{\varvec{e}}}. \end{aligned}$$

Assume now that the power method for the matrix A with \({\varvec{x}}_0 = {\varvec{e}}\) converges to some vector \(\tilde{\varvec{v}}\). In the case \(r=1\) (r is the imprimitivity index), this means that \(A^k{\varvec{e}}\rightarrow \tilde{\varvec{v}}\) as \(k \rightarrow \infty \). Then the direction of the vector \(\sum _{k=0}^{\infty } (1+\delta )^{-k}A^k {{\varvec{e}}}\) converges, as \(\delta \rightarrow 0\), to the direction of the vector \(\tilde{\varvec{v}}\). Since \(\delta \rightarrow 0\) as \(\varepsilon \rightarrow 0\), the theorem follows. If \(r>1\), then \(\frac{1}{r}\, \sum _{j=0}^{r-1} A^{k+j}{\varvec{e}}\rightarrow \tilde{\varvec{v}}\). Arguing as above, we conclude again that the direction of the vector \(\sum _{k=0}^{\infty } (1+\delta )^{-k}A^k {{\varvec{e}}}\) converges, as \(\delta \rightarrow 0\), to the direction of the vector \(\tilde{\varvec{v}}\). This completes the proof. \(\square \)

Definition 3

The greedy method that uses the selected leading eigenvectors \({\varvec{v}}_k\) in all iterations is called the selective greedy method.

Now we arrive at the main result of this section. We show that using selected eigenvectors \({\varvec{v}}_k\) avoids cycling.

Theorem 2

The selective greedy method does not cycle.

Proof

Consider the case of maximization; the case of minimization is treated in the same way. At the kth iteration of the algorithm we have a matrix \(A_k\) and its selected leading eigenvector \({\varvec{v}}_k\). Assume the algorithm cycles: \(A_{1} = A_{n+1}\), and let \(n\ge 2\) be the minimal length of the cycle (for \(n=1\), the equality \(A_{1} = A_{2}\) means that \(A_1\) is the optimal matrix, so this is not cycling). Consider the chain of perturbed matrices \(A_{1, \varepsilon }\, \rightarrow \, \cdots \rightarrow A_{n+1, \varepsilon }\), where \(A_{k, \varepsilon }\, = \, A_{k} \, +\, \varepsilon \, E\) and \(\, \varepsilon >0\) is a small number. For the matrix \(A_{k}\), in each row \({\varvec{a}}^{(k)}_i\), we have one of two cases:

  1.

    if \(\bigl ( {\varvec{a}}^{(k)}_i, {\varvec{v}}_k\bigr ) \, = \, \max \nolimits _{{\varvec{a}}_i \in {\mathcal {F}}_i}({\varvec{a}}_i, {\varvec{v}}_k)\), then this row is not changed in this iteration: \({\varvec{a}}^{(k+1)}_i \, = \, {\varvec{a}}^{(k)}_i\). In this case \({\varvec{a}}^{(k+1)}_i\, +\, \varepsilon {\varvec{e}}\, = \, {\varvec{a}}^{(k)}_i + \varepsilon {\varvec{e}}\).

  2.

    if \(\bigl ({\varvec{a}}^{(k)}_i, {\varvec{v}}_k\bigr ) \, < \, \max \nolimits _{{\varvec{a}}_i \in {\mathcal {F}}_i}({\varvec{a}}_i, {\varvec{v}}_k)\), then the row \({\varvec{a}}^{(k)}_i\) is not optimal, and \(\bigl ({\varvec{a}}^{(k)}_i, {\varvec{v}}_k\bigr ) \, < \, \bigl ( {\varvec{a}}^{(k+1)}_i, {\varvec{v}}_k \bigr )\).

Hence the same inequality is true for the perturbed matrices \(A_{k, \varepsilon }, A_{k+1, \varepsilon }\) and for the eigenvector \({\varvec{v}}_{k, \varepsilon }\) of \(A_{k, \varepsilon }\), whenever \(\varepsilon \) is small enough.

Thus, in both cases, each row of \(A_{k+1, \varepsilon }\) has a scalar product with \({\varvec{v}}_{k, \varepsilon }\) that is greater than or equal to that of the corresponding row of \(A_{k, \varepsilon }\). Moreover, in at least one row this scalar product is strictly bigger; otherwise \(A_{k} = A_{k+1}\), which contradicts the minimality of the cycle. This, in view of the strict positivity of \(A_{k, \varepsilon }\), implies that \(\rho _{k+1, \varepsilon } \, > \, \rho _{k, \varepsilon }\) [34, lemma 2]. We see that in the chain \(A_{1, \varepsilon }\rightarrow \cdots \rightarrow A_{n+1, \varepsilon }\) the spectral radius strictly increases. Therefore \(A_{1, \varepsilon }\ne A_{n+1, \varepsilon }\), and hence \(A_1\ne A_{n+1}\). This contradiction completes the proof. \(\square \)

Example 1

We apply the selective greedy method to the family from Sect. 3, on which the (usual) greedy method cycles and arrives at a wrong solution. Now we have:

$$\begin{aligned}&A_1 \ \xrightarrow {\ } \ A_2 \ \xrightarrow {{\varvec{v}}_2 = (3, 2, 2)^T}\ \left( \begin{array}{lcr} 12 &{} \quad 0 &{}\quad 0\\ 0 &{} \quad 10 &{}\quad 0\\ 0 &{} \quad 0 &{}\quad 10 \end{array} \right) \ \xrightarrow {{\varvec{v}}_3 = (1, 0, 0)^T}\\&A_4 \ = \ \left( \begin{array}{lcr} 12 &{}\quad 0 &{} \quad 0\\ 1 &{} \quad 1 &{} \quad 1\\ 1 &{}\quad 1 &{}\quad 3 \end{array} \right) \ \xrightarrow {{\varvec{v}}_4 = (49, 5, 6)^T} \ A_4 \end{aligned}$$

Thus, the selective greedy method arrives at the optimal matrix \(A_4\) at the third iteration. This matrix is maximal in each row with respect to its leading eigenvector \({\varvec{v}}_4\), as the last iteration demonstrates.

The selective greedy method repeats the first iteration of the (usual) greedy method, but in the second and third iterations it chooses different leading eigenvectors (the selected eigenvectors), thanks to which it does not cycle and goes directly to the optimal solution.

4.3 Realization of the selective greedy method

Thus, the greedy method does not cycle, provided it uses selected leading eigenvectors \({\varvec{v}}_k\) in all iterations. If the leading eigenvector is unique, then it coincides with the selected leading eigenvector, and there is no problem. The difficulty occurs if the leading eigenvector is multiple. The following crucial fact is a straightforward consequence of Theorem 1.

Corollary 1

The power method \({\varvec{x}}_{k+1} = A{\varvec{x}}_k, \, k \ge 0,\) applied to the initial vector \({\varvec{x}}_0 = {\varvec{e}}\) converges to the selected leading eigenvector.

Therefore, to realize the selective greedy method we compute all the leading eigenvectors \({\varvec{v}}_k\) by the power method starting always with the same vector \({\varvec{x}}_0 = {\varvec{e}}\) (the vector of ones). After a sufficient number of iterations we come close to the limit, which is, by Corollary 1, the selected eigenvector (up to normalization). So, actually we perform the greedy method with approximations for the selected eigenvectors, which are, in view of Theorem 2, the leading eigenvectors of perturbed matrices \(A_{\varepsilon }\) for some small \(\varepsilon \).

Thus, to avoid cycling we can compute all eigenvectors \({\varvec{v}}_k\) by the power method starting with the same initial vector \({\varvec{x}}_0 = {\varvec{e}}\). Because of computer approximations, actually we deal with totally positive matrices of the family \({\mathcal {F}}_{\varepsilon }\) for some small \(\varepsilon \).

Remark 3

The usual method for the non-symmetric eigenvalue/eigenvector problem is the QR algorithm, and the case of non-negative matrices is not an exception. However, for large and sparse non-negative matrices, the Perron eigenvalue problem is in many cases solved more efficiently by the power method. We use the power method to avoid cycling of the greedy algorithm: namely, the power method allows us to find the selected leading eigenvector, see [35].

4.4 The procedure of the selective greedy method

We precisely follow the algorithm of the greedy method presented in Sect. 2.2, and in each iteration we compute the corresponding leading eigenvector \({\varvec{v}}_k\) by the power method started from the vector of ones \({\varvec{e}}\). The precision parameter \(\varepsilon \) of the power method is chosen in advance and is the same in all iterations. By Theorem 2, if \(\varepsilon > 0\) is small enough, then the greedy method does not cycle and finds the solution within finite time. In practice, if the algorithm cycles for a given \(\varepsilon \), then we reduce it, taking, say, \(\varepsilon /10\), and restart the algorithm.

It is known that the power method may run into difficulties for non-primitive matrices. In this case, the eigenvalue of maximal modulus may not be unique, which can cause a slowdown or even divergence of the algorithm; in particular, this explains possible difficulties of the power method for sparse matrices, see, for example, [28]. Here, to avoid any trouble with the power method for sparse matrices, we can use the following well-known fact: if \(A\ge 0\), then the matrix \(A+I\) has a unique (possibly multiple) eigenvalue of maximal modulus, equal to \(\rho (A)+1\). This implies that the power method applied to the matrix \(A+I\) always converges to its leading eigenvector, i.e., \((A+I)^k{\varvec{e}}\rightarrow {\varvec{v}}\) as \(k\rightarrow \infty \) (up to normalization). Of course, \(A+I\) has the same leading eigenvector as A. Hence, we can replace each uncertainty set \({\mathcal {F}}_i\) by the set \({\varvec{e}}_i + {\mathcal {F}}_i\), where \({\varvec{e}}_i\) is the ith canonical basis vector. After this, each matrix \(A_k\) is replaced by \(A_k+I\), and we no longer have trouble with the convergence of the power method.
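The eigenvector subroutine of the selective greedy method thus reduces to a plain power iteration started from \({\varvec{e}}\) and applied to \(A+I\). Below is a minimal Python/NumPy sketch; the stopping rule, the tolerance, and the iteration cap are illustrative choices. As a check, it recovers the selected eigenvector of the matrix \(A_2\) from Sect. 3, which is proportional to \((3, 2, 2)^T\), the vector used in Example 1, rather than to the vector \((2, 2, 1)^T\) that led to cycling.

```python
import numpy as np

def selected_leading_eigenvector(A, eps=1e-12, max_iter=10**6):
    """Selected leading eigenvector of a non-negative matrix A (Definition 2).

    Power method of Corollary 1, started from the vector of ones and applied
    to A + I so that convergence does not suffer from non-primitivity.
    """
    d = A.shape[0]
    B = A + np.eye(d)                     # same leading eigenvector as A
    x = np.ones(d)
    for _ in range(max_iter):
        y = B @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < eps:   # the direction has stabilised
            return y
        x = y
    return x

A2 = np.array([[0., 5, 10], [0, 10, 0], [0, 0, 10]])
v = selected_leading_eigenvector(A2)
print(v / v[1])                           # approx. [1.5, 1.0, 1.0], i.e. proportional to (3, 2, 2)
```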

5 The rate of convergence

In this section we address two issues: explaining the fast convergence of the greedy method and understanding why the spectral simplex method converges more slowly. We restrict ourselves to the maximization problem, where we find the maximal spectral radius \(\rho _{\max }\). For the minimization problem, the results and the proofs are the same.

First of all, let us remark that, due to possible cycling, no general estimates for the rate of convergence exist. Indeed, in Sect. 3 we saw that the greedy method may not converge to the solution at all. That is why we estimate the rate of convergence only under some favorable assumptions (positivity of the limit matrix, positivity of its leading eigenvector, etc.). Actually, these assumptions are not very restrictive, since the selective greedy method has the same convergence rate as the greedy method for a strictly positive family \({\mathcal {F}}_{\varepsilon }\).

We are going to show that, under those assumptions, both methods have global linear convergence and that the greedy method, moreover, has local quadratic convergence. In both cases we estimate the parameters of the linear and quadratic convergence.

5.1 Global linear convergence

The following result is formulated for both the greedy and the spectral simplex methods. The same holds for the method with the pivoting rule. We denote by \(A_k\) the matrix obtained by the kth iteration, by \(\rho _k\) its spectral radius and by \({\varvec{u}}_k, {\varvec{v}}_k\) its left and right leading eigenvectors (if there are multiple ones, we take any of them). The jth components of the vectors \({\varvec{u}}_k, {\varvec{v}}_k\) are \(u_{k, j}, \, v_{k, j}\) respectively. We also denote \(\rho _{\max } = \max _{A \in {\mathcal {F}}}\rho (A)\). As usual, \(\{{\varvec{e}}_j\}_{j=1}^d\) is the canonical basis in \({\mathbb {R}}^d\) and \({\varvec{e}}\) is the vector of ones.

Theorem 3

In both the greedy method and the spectral simplex method for maximizing the spectral radius, in the kth iteration, we have

$$\begin{aligned} \rho _{k+1} \quad \ge \quad \rho _k \ + \ (\rho _{\max } - \rho _k)\, \max \limits _{j = 1,\ldots ,d}\frac{u_{k+1, j} \, v_{k, j} }{({\varvec{u}}_{k+1}, {\varvec{v}}_k)}\ , \qquad k \in {\mathbb {N}}, \end{aligned}$$
(8)

provided \({\varvec{v}}_k > 0\).

Proof

Consider the value \(s = s(A)\) defined in (4) and let the maximum in formula (4) be attained in the jth row. Then for every matrix \(A\in {\mathcal {F}}\), we have \(A{\varvec{v}}_k \, \le \, s\, {\varvec{v}}_k\), and therefore \(\rho (A) \le s\). Thus, \(\rho _{\max } \le s\). On the other hand, in both the greedy and the simplex methods, the jth component of the vector \(A_{k+1}{\varvec{v}}_k\) is equal to the jth component of the vector \(\, s\, {\varvec{v}}_k\). In all other components, we have \(A_{k+1}{\varvec{v}}_k \ge A_{k}{\varvec{v}}_k = \rho _k\, {\varvec{v}}_k\). Therefore,

$$\begin{aligned} A_{k+1}{\varvec{v}}_k \ \ge \ \rho _k {\varvec{v}}_k \ + \ (s-\rho _k) v_{k, j}{\varvec{e}}_j. \end{aligned}$$

Multiplying this inequality by the vector \({\varvec{u}}_{k+1}\) from the left, we obtain

$$\begin{aligned} {\varvec{u}}_{k+1}^TA_{k+1}{\varvec{v}}_k \ \ge \ \rho _k ({\varvec{u}}_{k+1}, {\varvec{v}}_k) \ + \ (s-\rho _k)\, v_{k, j}\, ({\varvec{u}}_{k+1}, {\varvec{e}}_j). \end{aligned}$$

Since \(\, {\varvec{u}}_{k+1}^TA_{k+1} \, = \, \rho _{k+1}{\varvec{u}}_{k+1}^T\, \) and \(({\varvec{u}}_{k+1}, {\varvec{e}}_j) \, = \, u_{k+1, j}\), we have

$$\begin{aligned} \rho _{k+1}({\varvec{u}}_{k+1}, {\varvec{v}}_k) \ \ge \ \rho _k ({\varvec{u}}_{k+1}, {\varvec{v}}_k) \ + \ (s-\rho _k) v_{k, j} u_{k+1, j}. \end{aligned}$$

Dividing by \(({\varvec{u}}_{k+1}, {\varvec{v}}_k)\) and taking into account that \(s\ge \rho _{\max }\), we arrive at (8). \(\square \)

Rewriting (8) in the form

$$\begin{aligned} \rho _{\max }\ - \ \rho _{k+1} \quad \le \quad \ (\rho _{\max }\ - \ \rho _{k})\, \left( 1 \, - \, \frac{u_{k+1, j} \, v_{k, j}}{({\varvec{u}}_{k+1}, {\varvec{v}}_k)}\right) , \end{aligned}$$

we see that the kth iteration multiplies the distance to the maximal spectral radius by a factor of at most \(\left( 1 \, - \, \frac{u_{k+1,j}\, v_{k,j}}{({\varvec{u}}_{k+1}, {\varvec{v}}_k)}\right) \). Thus, we have established the following.

Corollary 2

Each iteration of the greedy method and of the spectral simplex method multiplies the distance to the optimal spectral radius by a factor of at most \(\left( 1 \, - \, \frac{u_{k+1, j}\, v_{k,j}}{({\varvec{u}}_{k+1}, {\varvec{v}}_k)}\right) \).

If \(A_k > 0\), then the contraction coefficient from Corollary 2 can be roughly estimated from above. Let m and M be the smallest and the biggest entries, respectively, of all vectors from the uncertainty sets. For each \(A\in {\mathcal {F}}\), we denote by \({\varvec{v}}\) its leading eigenvector and by \(\rho \) its spectral radius. Let also S be the sum of the entries of \({\varvec{v}}\). Then for each i, we have \(v_i = \rho ^{-1}(A{\varvec{v}}, {\varvec{e}}_i) \, \le \, \rho ^{-1}MS\). Similarly, \(v_i \ge \rho ^{-1}mS\). Hence, for every matrix from \({\mathcal {F}}\), the ratio between the smallest and the biggest entries of its leading eigenvector is at least m/M. The same is true for the left eigenvector. Therefore,

$$\begin{aligned} \frac{u_{k+1, j}v_{k, j}}{({\varvec{u}}_{k+1}, {\varvec{v}}_k)} \quad \ge \quad \frac{m^2}{m^2 + (d-1)M^2}. \end{aligned}$$

We have arrived at the following theorem on global linear convergence.

Theorem 4

If all uncertainty sets are strictly positive, then the rate of convergence of both the greedy method and of the spectral simplex method is at least linear with the factor

$$\begin{aligned} q \quad \le \quad 1 \ - \ \frac{m^2}{m^2 + (d-1)M^2}, \end{aligned}$$
(9)

where m and M are the smallest and the biggest entries respectively of vectors from the uncertainty sets.

Thus, in each iteration of the greedy method and of the spectral simplex method, we have

$$\begin{aligned} \rho _{\max }\,- \, \rho _{k+1} \ \le \ q^{k}\, (\rho _{\max } \, - \, \rho _0). \end{aligned}$$
(10)
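For illustration (a numerical evaluation of the bound (9), not taken from the original experiments): for \(m = 0.1\), \(M = 1\) and \(d = 100\), the bound gives \(q \, \le \, 1 - \frac{0.01}{0.01 + 99} \approx 0.9999\). The guaranteed linear factor can thus be very close to one when \(M \gg m\) or when the dimension is large, which is why the local analysis of the next subsection is needed to explain the fast convergence observed in practice.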

5.2 Local quadratic convergence

Now we are going to see that the greedy method actually has a quadratic rate of convergence in a small neighbourhood of the optimal matrix. This explains its fast convergence. We establish this result under the following two assumptions: (1) the optimal matrix \(\bar{A}\), for which \(\rho (\bar{A}) = \rho _{\max }\), is strictly positive; (2) in a neighbourhood of \(\bar{A}\), the uncertainty sets \({\mathcal {F}}_i\) are bounded by \(C^2\) smooth strictly convex surfaces. The latter assumption is, of course, not satisfied for polyhedral sets. Nevertheless, the quadratic convergence for sets with a smooth boundary explains the fast convergence for finite or polyhedral sets as well. Indeed, a generic polyhedron defined by sufficiently many linear inequalities (or a generic polytope with sufficiently many vertices) can be well approximated by a smooth strictly convex surface. Hence, the fast convergence of our algorithm on uncertainty sets bounded by smooth convex surfaces explains this phenomenon on large finite sets or on polyhedral sets defined by many inequalities.

The local quadratic convergence means that there are constants \(B > 0\) and \(\varepsilon \in \bigl (0, \frac{1}{B} \bigr )\) for which the following holds: if \(\Vert A_k-\bar{A}\Vert \le \varepsilon \), then \(\Vert A_{k+1}-\bar{A}\Vert \, \le \, B\, \Vert A_k-\bar{A}\Vert ^2\). In this case, the algorithm converges whenever \(\Vert A_1-\bar{A}\Vert \le \varepsilon \, \), in which case for every iteration k, we have

$$\begin{aligned} \Vert A_k-\bar{A}\Vert \ \le \ \frac{1}{B}\, \left( B\, \Vert A_1-\bar{A}\Vert \right) ^{2^k-1} \end{aligned}$$

(see, for instance, [5]).
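For illustration (a numerical evaluation of this bound, not from the original text): if \(B\, \Vert A_1-\bar{A}\Vert = 10^{-1}\), then already \(\Vert A_4-\bar{A}\Vert \, \le \, 10^{-15}/B\), which agrees with the observation that the greedy method typically terminates within 3–4 iterations.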

As usual, we use Euclidean norm \(\Vert {\varvec{x}}\Vert = \sqrt{({\varvec{x}}, {\varvec{x}})}\). We also need the \(L_{\infty }\)-norm \(\Vert {\varvec{x}}\Vert _{\infty } = \max _{i=1, \ldots , d}|x_i|\). We denote by I the identity \(d\times d\) matrix; \(A \preceq B\) means that the matrix \(B-A\) is positive semidefinite. A set \({\mathcal {M}}\) is strongly convex if its boundary does not contain segments.

Theorem 5

Suppose the optimal matrix \(\bar{A}\) is strictly positive and all uncertainty sets \({\mathcal {F}}_i\) are strongly convex and \(C^2\) smooth in a neighbourhood of \(\bar{A}\); then the greedy algorithm has a local quadratic convergence to \(\bar{A}\).

In the proof we use the following auxiliary fact

Lemma 1

If A is a strictly positive matrix with \(\rho (A) = 1\) and with the leading eigenvector \({\varvec{v}}\), then for every \(\varepsilon > 0\) and for every vector \({\varvec{v}}'\ge 0, \, \Vert {\varvec{v}}'\Vert = 1\), such that \(\Vert A{\varvec{v}}' - {\varvec{v}}'\Vert _{\infty } \le \varepsilon \), we have \(\Vert {\varvec{v}}- {\varvec{v}}'\Vert \, \le \, C(A) \, \varepsilon \), where \(C(A) \, = \, \frac{1}{m} \, \bigl (1 + (d-1)\frac{M^2}{m^2}\bigr )^{1/2}\), and m and M are the minimal and the maximal entries of A, respectively.

Proof

Let \({\varvec{v}}= (v_1, \ldots , v_d)\). Denote \({\varvec{v}}' = (v_1s_1, \ldots , v_ds_d)\), where \(s_i\) are some non-negative numbers. Choose an index for which the value \(|s_i - 1|\) is maximal. Let it be \(i = 1\) and let \(s_1 - 1 >0\) (the opposite case is considered in the same way). Since \(\Vert {\varvec{v}}'\Vert = \Vert {\varvec{v}}\Vert \) and \(s_1> 1\), it follows that there exists q for which \(s_q< 1\). Since \(A{\varvec{v}}= {\varvec{v}}\), we have \(\sum _{j=1}^d a_{1j}v_j = v_1\). Therefore,

$$\begin{aligned} ({\varvec{v}}' - A{\varvec{v}}')_{1} \ = \ s_1v_1 \ - \ \sum _{j=1}^d a_{1j}v_js_j \ = \ \sum _{j=1}^d a_{1j}v_js_1 \ - \ \sum _{j=1}^d a_{1j}v_js_j \ = \ \sum _{j=1}^d a_{1j}v_j(s_1 - s_j) \end{aligned}$$
(11)

The last sum is greater than or equal to its term with \(j = q\), and \(a_{1q}v_q(s_1 - s_q) \, \ge \, a_{1q}v_q(s_1 - 1) \, \ge \, m (v_1' - v_1)\). Note that for all other i, we have \(|s_i - 1| \le |s_1 - 1|\), hence \(|v_i' - v_i| \le \frac{v_i}{v_1}|v_1' - v_1| \le \frac{M}{m}|v_1' - v_1|\). Therefore, \(\Vert {\varvec{v}}' - {\varvec{v}}\Vert \, \le \, \bigl (1 + (d-1)\frac{M^2}{m^2}\bigr )^{1/2}|v_1' - v_1|\). Substituting into (11), we get

$$\begin{aligned} ({\varvec{v}}' - A{\varvec{v}}')_{1} \ \ge \ m |v_1' - v_1| \ \ge \ m \left( 1 + (d-1)\frac{M^2}{m^2}\right) ^{-1/2}\Vert {\varvec{v}}' - {\varvec{v}}\Vert \end{aligned}$$

Hence, \(\Vert A{\varvec{v}}' - {\varvec{v}}'\Vert _{\infty } \ge \frac{1}{C(A)} \Vert {\varvec{v}}' - {\varvec{v}}\Vert \), which completes the proof. \(\square \)

Proof of Theorem 5

Without loss of generality it can be assumed that \(\rho (\bar{A}) = 1\). It is known that any strongly convex smooth surface \(\Gamma \) can be defined by a smooth convex function f so that \({\varvec{x}}\in \Gamma \Leftrightarrow f({\varvec{x}})=0\) and \(\Vert f'({\varvec{x}})\Vert = 1\) for all \({\varvec{x}}\in \Gamma \). For example, one can set \(f({\varvec{x}})\) equal to the distance from \({\varvec{x}}\) to \(\Gamma \) for \({\varvec{x}}\) outside \(\Gamma \) or in some neighbourhood inside it. For each \(i = 1, \ldots , d\), we set \(\Gamma _i = \partial {\mathcal {F}}_i\) and denote by \(f_i\) such a function that defines the surface \(\Gamma _i\). Fix an index \(i = 1, \ldots , d\) and denote by \({\varvec{a}}^{(k)}, {\varvec{a}}^{(k+1)},\) and \(\bar{\varvec{a}}\) the ith rows of the matrices \(A_k, A_{k+1},\) and \(\bar{A}\) respectively (so, we omit the subscript i to simplify the notation). Since \(\bar{\varvec{a}}\, = \, \mathrm{argmax}_{{\varvec{a}}\in \Gamma _i}\, ({\varvec{a}}, \bar{\varvec{v}})\), it follows that \(f_i'(\bar{\varvec{a}}) = \bar{\varvec{v}}\). Denote by \(\ell \) and L the lower and upper bounds of the quadratic form \(f_i''({\varvec{a}})\) in some neighbourhood of \(\bar{\varvec{a}}\), i.e., \(\, \ell \, I \preceq f_i''({\varvec{a}}) \preceq L\, I\) at all points \({\varvec{a}}\) such that \(\Vert {\varvec{a}}- \bar{\varvec{a}}\Vert < \varepsilon \). Writing the Taylor expansion of the function \(f_i\) at the point \(\bar{\varvec{a}}\), we have for small \({\varvec{h}}\in {\mathbb {R}}^d\):

$$\begin{aligned} f_i(\bar{\varvec{a}}+ {\varvec{h}})\ = \ f_i(\bar{\varvec{a}})\, + \, \bigl ( f_i'(\bar{\varvec{a}}) , {\varvec{h}}\bigr ) \, + \, r({\varvec{h}})\Vert {\varvec{h}}\Vert ^2, \end{aligned}$$

where the remainder \(r({\varvec{h}}) \le L\). Substituting \({\varvec{h}}= {\varvec{a}}^{(k)} - \bar{\varvec{a}}\) and taking into account that \(f_i({\varvec{a}}^{(k)})\, = \, f_i(\bar{\varvec{a}}) \, = \, 0\), because both \({\varvec{a}}^{(k)}\) and \(\bar{\varvec{a}}\) belong to \(\Gamma _i\), we obtain \(\bigl ( f_i'(\bar{\varvec{a}}) , {\varvec{h}}\bigr ) \, + \, r({\varvec{h}})\Vert {\varvec{h}}\Vert ^2\, = \, 0\). Hence \(\, |(\bar{\varvec{v}}, {\varvec{h}})|\, \le \, L \Vert {\varvec{h}}\Vert ^2\). Thus,

$$\begin{aligned} |(\bar{\varvec{v}}, {\varvec{a}}^{(k)}) \, - \, (\bar{\varvec{v}}, \bar{\varvec{a}})| \ \le \ L\, \Vert {\varvec{h}}\Vert ^2. \end{aligned}$$

This holds for each row of \(A_k\), therefore \(\Vert (A_k - I)\bar{\varvec{v}}\Vert _{\infty }\, \le \, L \Vert {\varvec{h}}\Vert ^2\), and hence \(\Vert {\varvec{v}}_k - \bar{\varvec{v}}\Vert \, \le \, C(A_k)\, L\, \Vert {\varvec{h}}\Vert ^2\), where the value \(C(A_k)\) is defined in Lemma 1.

Further, \({\varvec{a}}^{(k+1)} \, = \, \mathrm{argmax}_{{\varvec{a}}\in \Gamma _i}\, ({\varvec{a}}, {\varvec{v}}_k)\), hence \(f_i'({\varvec{a}}^{(k+1)}) \, = \, {\varvec{v}}_k\). Consequently,

$$\begin{aligned} \Vert {\varvec{v}}_k - \bar{\varvec{v}}\Vert \ = \ \Vert f_i'({\varvec{a}}^{(k+1)}) - f_i'(\bar{\varvec{a}})\Vert \ \ge \ \ell \, \Vert {\varvec{a}}^{(k+1)} - \bar{\varvec{a}}\Vert . \end{aligned}$$

Therefore, \(\Vert {\varvec{a}}^{(k+1)} - \bar{\varvec{a}}\Vert \, \le \, \frac{C(A_k)L}{\ell } \Vert {\varvec{h}}\Vert ^2\, = \, \frac{C(A_k)L}{\ell } \Vert {\varvec{a}}^{(k)} - \bar{\varvec{a}}\Vert ^2\). This holds for each row of the matrices, hence the theorem follows. \(\square \)

In the proof of Theorem 5 we estimated the parameter B of the quadratic convergence in terms of the parameters \(\ell \) and L of the functions \(f_i''\). This estimate, however, is difficult to evaluate, because the functions \(f_i\) are a priori unknown. To estimate B effectively, we need another idea based on geometrical properties of the uncertainty sets \({\mathcal {F}}_i\). Below we obtain such an estimate in terms of radii of curvature of the boundary of \({\mathcal {F}}_i\).

Assume that at each point of the intersection of the surface \(\Gamma _i = \partial {\mathcal {F}}_i\) with the corresponding \(\varepsilon \)-neighbourhood of the point \(\bar{\varvec{a}}_i\), the maximal radius of curvature of a two-dimensional cross-section passing through the normal vector to the surface does not exceed R and the minimal radius of curvature is at least \(r > 0\). Denote also by C the maximal value of C(A) over the corresponding neighbourhood of the matrix \(\bar{A}\), where C(A) is defined in Lemma 1.

Theorem 6

For the greedy method, we have \(\, B \, \le \, \frac{RC}{2r \rho (\bar{A})}\).

Proof

As in the proof of Theorem 5, we assume that \(\rho (\bar{A}) = 1\) and denote by \({\varvec{a}}^{(k)}, {\varvec{a}}^{(k+1)},\) and \(\bar{\varvec{a}}\) the ith rows of the matrices \(A_k, A_{k+1},\) and \(\bar{A}\) respectively. Since \(\bar{\varvec{a}}\, = \, \mathrm{argmax}_{{\varvec{a}}\in \Gamma _i}\, ({\varvec{a}}, \bar{\varvec{v}})\), it follows that \(\bar{\varvec{v}}\) is the outer unit normal vector to \(\Gamma _i\) at the point \(\bar{\varvec{a}}\). Denote \({\varvec{h}}= {\varvec{a}}^{(k)} - \bar{\varvec{a}}\). Draw a Euclidean sphere \({\mathcal {S}}_i\) of radius r tangent to \(\Gamma _i\) from inside at the point \(\bar{\varvec{a}}\). Take a point \({\varvec{b}}^{(k)}\) on \({\mathcal {S}}_i\) such that \(\Vert {\varvec{b}}^{(k)} - \bar{\varvec{a}}\Vert = \Vert {\varvec{h}}\Vert \). Since the minimal radius of curvature of \(\Gamma _i\) is at least r, this part of the sphere lies inside \({\mathcal {F}}_i\), hence the vector \({\varvec{a}}^{(k)} - \bar{\varvec{a}}\) forms a smaller angle with the normal vector \(\bar{\varvec{v}}\) than the vector \({\varvec{b}}^{(k)} - \bar{\varvec{a}}\). The vectors \({\varvec{a}}^{(k)} - \bar{\varvec{a}}\) and \({\varvec{b}}^{(k)} - \bar{\varvec{a}}\) are of the same length, therefore

$$\begin{aligned} \bigl ( {\varvec{a}}^{(k)} - \bar{\varvec{a}}, \, \bar{\varvec{v}}\bigr ) \ \ge \ \bigl ( {\varvec{b}}^{(k)} - \bar{\varvec{a}}, \, \bar{\varvec{v}}\bigr )\ = \ - \frac{\Vert {\varvec{h}}\Vert ^2}{2r} \end{aligned}$$

On the other hand, \(\bigl ( {\varvec{a}}^{(k)} - \bar{\varvec{a}}, \bar{\varvec{v}}\bigr ) \le 0\). Therefore,

$$\begin{aligned} \Bigl |\, \bigl ( {\varvec{a}}^{(k)} , \, \bar{\varvec{v}}\bigr ) \ - \ \bigl (\bar{\varvec{a}}, \, \bar{\varvec{v}}\bigr )\, \Bigl | \ \le \ \frac{\Vert {\varvec{h}}\Vert ^2}{2r} \end{aligned}$$

This inequality holds for all rows of the matrix \(A_k\), therefore

$$\begin{aligned} \Vert (A_k - I)\bar{\varvec{v}}\Vert _{\infty }\ = \ \Vert (A_k - \bar{A})\bar{\varvec{v}}\Vert _{\infty }\, \le \, \frac{\Vert {\varvec{h}}\Vert ^2}{2r}, \end{aligned}$$

and hence, by Lemma 1,

$$\begin{aligned} \Vert {\varvec{v}}_k - \bar{\varvec{v}}\Vert \quad \le \quad \frac{C }{2r}\ \Vert {\varvec{h}}\Vert ^2. \end{aligned}$$
(12)

Further, \({\varvec{a}}^{(k+1)} \, = \, \mathrm{argmax}_{{\varvec{a}}\in \Gamma _i}\, ({\varvec{a}}, {\varvec{v}}_k)\), hence \({\varvec{v}}_k\) is a normal vector to \(\Gamma _i\) at the point \({\varvec{a}}^{(k+1)}\). Consequently

$$\begin{aligned} R\, \Vert {\varvec{v}}_k - \bar{\varvec{v}}\Vert \ \ge \ \ \Vert {\varvec{a}}^{(k+1)} - \bar{\varvec{a}}\Vert . \end{aligned}$$

Combining this inequality with (12) we obtain \(\Vert {\varvec{a}}^{(k+1)} - \bar{\varvec{a}}\Vert \, \le \, \frac{C R}{2r} \, \Vert {\varvec{h}}\Vert ^2\), which concludes the proof. \(\square \)

6 Numerical results

We now demonstrate and analyse numerical results of the selective greedy method. Several types of problems are considered: maximizing and minimizing the spectral radius, finite and polyhedral uncertainty sets, positive and sparse matrices. In all tables d denotes the dimension of the matrix and N denotes the number of elements in each uncertainty set \({\mathcal {F}}_i, \, i = 1, \ldots , d\). For the sake of simplicity, in each experiment all uncertainty sets \({\mathcal {F}}_i\) are of the same cardinality. For each pair (d, N) we made ten experiments and put the arithmetic mean in the table. All computations are done on a standard laptop (Dell XPS 13 with Intel Core i7-6500U CPU @ 2.50 GHz and 8 GB RAM). The algorithms are coded in Python.

6.1 Product families with finite uncertainty sets

We first test the selective greedy method on positive product families, and then on non-negative product families with density parameters \(\gamma _i\in (0, 1)\) (the fraction of nonzero entries of the vectors belonging to the uncertainty set \(\mathcal {F}_i\)).

The results of tests on positive product families are given in Tables 1 and 2, for maximization and minimization respectively. The numbers shown represent the average number of iterations performed by the algorithm to find the optimal matrix.

Table 1 Average number of iterations for maximization, for positive families
Table 2 Average number of iterations for minimization, for positive families

In Tables 3 and 4 we report the behaviour of the selective greedy algorithm as the dimension d and the size N of the uncertainty sets vary. For each uncertainty set \(\mathcal {F}_i\) the density parameter \(\gamma _i\) is randomly chosen in the interval (0.09, 0.15) and the elements of the set \(\mathcal {F}_i\) are randomly generated in accordance with the parameter \(\gamma _i\). The tables present the average number of iterations and the computation time in which the algorithm terminates and finds the solution. Table 3 contains the data for the maximization problem, while Table 4 presents the results for the minimization problem.

Table 3 Average number of iterations (a) and computing time (b) for maximization, for non-negative families
Table 4 Average number of iterations (a) and computing time (b) for minimization, for non-negative families

Table 5 shows how the number of iterations and the computing time vary as the density parameter is changed. The dimension is kept fixed at \(d = 600\) and the cardinality of each product set at \(\vert \mathcal {F}_i\vert =200\), while we vary the interval I (in the first line of the table) from which the density parameter takes its value. For example, the first column of Table 5a shows results for matrices that have the ratio of nonzero elements between 0.09 and 0.15. The first line contains the results of the problem of maximization of the spectral radius (MAX), the second those of minimization (MIN).

Table 5 Effects of sparsity, number of iterations (a) and computing time (b) for \(d=600, N = 200\)

6.2 Some conclusions

The main conclusion from the numerical results is rather surprising. The number of iterations seems to depend neither on the dimension, nor on the cardinality of the uncertainty sets, nor on the sparsity of the matrices. The usual number of iterations is 3–4. The robustness to sparsity can be explained by the fact that the selective greedy method actually works with positive perturbations of matrices. The independence of the cardinality of the uncertainty sets (in the tables we see that the number of iterations may even decrease with the cardinality) is, most likely, explained by the quadratic convergence. The distance to the optimal matrix quickly decays to the tolerance parameter \(\varepsilon \), which is usually set between \(10^{-8}\) and \(10^{-6}\), and the algorithm terminates. The tolerance parameter is a small number chosen as the precision of our computations. We use the tolerance parameter in the exit condition of the algorithm: if each row \({\varvec{a}}_i, \, i = 1, \ldots , d\), of the current matrix \(A_k\) is a solution of the problem (7) up to the precision \(\varepsilon \), then the algorithm terminates.

6.3 Polyhedral product families

Here we test the selective greedy method on product families with uncertainty sets given by systems of \(2d + N\) linear constraints of the form:

$$\begin{aligned} {\left\{ \begin{array}{ll} ({\varvec{x}},{\varvec{b}}_j)\leqslant 1 &{} j=1,\ldots ,N\\ 0\leqslant x_i\leqslant 1 &{} i=1,\ldots ,d \end{array}\right. } \end{aligned}$$

where the vectors \({\varvec{b}}_j\in \mathbb {R}^d_+\) are randomly generated and normalized, and \({\varvec{x}}=(x_1,\ldots ,x_d)\). As before, d is the dimension and, for each pair (d, N), ten experiments have been made and the average is put in the table.
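For illustration, the row subproblem \(({\varvec{x}}, {\varvec{v}}_k)\rightarrow \max \) over such a polyhedral set can be handed to any LP solver; below is a minimal sketch of our own using SciPy, with randomly generated data standing in for the actual test sets.

import numpy as np
from scipy.optimize import linprog

d, N = 50, 20
rng = np.random.default_rng(0)
B = rng.random((N, d))
B /= np.linalg.norm(B, axis=1, keepdims=True)   # normalized constraint vectors b_j
v_k = rng.random(d)                             # current leading eigenvector

# maximize (x, v_k) subject to (x, b_j) <= 1 and 0 <= x_i <= 1
res = linprog(c=-v_k, A_ub=B, b_ub=np.ones(N), bounds=(0, 1), method="highs")
row = res.x                                     # optimal i-th row for this iteration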

Table 6 shows the test results for maximizing the spectral radius.

Table 6 Average number of iterations (a) and computing time (b) for polyhedral sets

For polyhedral uncertainty sets the algorithm still searches for the optimal rows among the vertices, and the number of vertices can be huge. Nevertheless, the selective greedy method needs fewer than 5 iterations to find an optimal matrix.

7 Applications

We consider two possible applications of the greedy method: optimising spectral radius of graphs and finding the closest stable matrix.

7.1 Optimizing the spectral radius of a graph

The spectral radius of a graph is the spectral radius of its adjacency matrix A. The problem of maximizing/minimizing the spectral radius under some restrictions on the graph is well known in the literature (see [8, 11,12,13, 15, 25, 31] and the references therein). Some of these problems deal with product families of adjacency matrices and hence can be solved by the greedy method. For example, consider the problem of finding the maximal spectral radius of a directed graph with d vertices and with prescribed numbers of incoming edges \(n_1, \ldots , n_d\). For this problem, all the uncertainty sets \({\mathcal {F}}_i\) are polyhedral and are given by the systems of inequalities:

$$\begin{aligned} {\mathcal {F}}_i \ = \ \left\{ x \, \in \, {\mathbb {R}}^d\ \Bigl | \ \sum _{k=1}^d x_k \, \le \, n_i, \, 0\le x_k \le 1, \, k = 1, \ldots , d\, \right\} . \end{aligned}$$
(13)

The minimal and maximal spectral radii are both attained at extreme points of the uncertainty sets, i.e., precisely when the ith row has \(n_i\) ones and all other entries are zeros. If we need to minimize the spectral radius, we define \({\mathcal {F}}_i\) in the same way, with only one difference: \(\sum _{k=1}^d x_k \, \ge \, n_i\).

When performing the greedy method, in each iteration we solve an LP (linear programming) problem \(({\varvec{a}}_i , {\varvec{v}}_k) \rightarrow \max , \ {\varvec{a}}_i \in {\mathcal {F}}_i, \ i = 1, \ldots , d\), with the sets \({\mathcal {F}}_i\) defined in (13). In fact, this LP problem can be solved explicitly: the (0, 1)-vector \({\varvec{a}}_i\) has ones precisely at the positions of the \(n_i\) maximal entries of the vector \({\varvec{v}}_k\), and all other components of \({\varvec{a}}_i\) are zeros.
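A sketch of this explicit solution (our own illustration; v_k is the current leading eigenvector and n_i the prescribed number of incoming edges):

import numpy as np

def best_row(v_k, n_i):
    # 0/1 row maximizing (a, v_k) over the set (13):
    # ones at the positions of the n_i largest entries of v_k.
    a = np.zeros_like(v_k)
    a[np.argpartition(v_k, -n_i)[-n_i:]] = 1.0
    return a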

Thus, in each iteration, instead of solving d LP problems, which can become computationally demanding even for moderate d, we only need to find the corresponding number of the largest entries of the eigenvector \({\varvec{v}}_k\). In Table 7 we present the results of numerical simulations of the selective greedy method applied to this problem; the number of iterations and the time required for the algorithm to finish are shown.

Table 7 Row sums are randomly selected integers from the segment [75, 100]

For example, we apply the greedy method to find the maximal spectral radius of a graph with 7 vertices and with the numbers of incoming edges \( (n_1, \ldots , n_7)= (3,2,3,2,4,1,1)\). The algorithm finds the optimal graph with the adjacency matrix:

$$\begin{aligned} A = \left( \begin{array}{l c c c c c r} 1 {\ } &{} \quad 0 {\ } &{} \quad 1 {\ } &{}\quad 0 {\ } &{} \quad 1 {\ } &{}\quad 0 {\ } &{}\quad 0\\ 0 {\ } &{} \quad 0 {\ } &{}\quad 1 {\ } &{}\quad 0 {\ } &{} \quad 1 {\ } &{}\quad 0 {\ } &{} \quad 0\\ 1 {\ } &{} \quad 0 {\ } &{} \quad 1 {\ } &{}\quad 0 {\ } &{} \quad 1 {\ } &{} \quad 0 {\ } &{} \quad 0\\ 0 {\ } &{} \quad 0 {\ } &{} \quad 1 {\ } &{}\quad 0 {\ } &{} \quad 1 {\ } &{} \quad 0 {\ } &{}\quad 0\\ 1 {\ } &{} \quad 0 {\ } &{}\quad 1 {\ } &{}\quad 1 {\ } &{}\quad 1 {\ } &{} \quad 0 {\ } &{}\quad 0\\ 0 {\ } &{} \quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 1 {\ } &{} \quad 0 {\ } &{}\quad 0\\ 0 {\ } &{} \quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{} \quad 1 {\ } &{}\quad 0 {\ } &{}\quad 0\\ \end{array} \right) \end{aligned}$$

with \(\rho (A) = 3.21432\).

7.2 Finding the closest stable non-negative matrix

A matrix is called stable (in the sense of Schur stability) if its spectral radius is smaller than one. The matrix stabilization problem, i.e., the problem of finding the closest stable matrix to a given matrix, is well known and has been studied in the literature. Finding the closest stable non-negative matrix is important in applications such as dynamical systems, mathematical economy, population dynamics, etc. (see [3, 18, 26, 30]). Usually the closest matrix is defined in the Euclidean or in the Frobenius norm. In both cases the problem is hard. It was first noted in [30] that in the \(L_{\infty }\)-norm there are efficient methods for finding the global minimum. Let us recall that \(\Vert A\Vert _{\infty } = \max _{i=1, \ldots , d}\sum _{j=1}^d|a_{ij}|\). For a given matrix \(A\ge 0\) such that \(\rho (A) > 1\), we consider the problem

$$\begin{aligned} \left\{ \begin{array}{ll} \Vert X-A\Vert _{\infty } \rightarrow \min \\ \text{ subject } \text{ to } \ X\ge 0, \, \rho (X) \le 1. \end{array} \right. \end{aligned}$$
(14)

The key observation is that for every positive r, the ball of radius r in the \(L_{\infty }\)-norm centered at A, i.e., the set of non-negative matrices

$$\begin{aligned} {\mathcal {B}}(r, A)\ = \ \Bigl \{X \ge 0\ \Bigl | \ \Vert X-A\Vert _{\infty } \le r \Bigr \} \end{aligned}$$

is a product family with the polyhedral uncertainty sets \({\mathcal {F}}_i = \{{\varvec{x}}\in {\mathbb {R}}^d_+, \ \sum _{j=1}^{d} |x_{j} - a_{ij}| \le r\}\). Hence, for each r, the problem \(\rho (X)\rightarrow \min , \ X \in {\mathcal {B}}(r, A)\), can be solved by the selective greedy method. Then the minimal r for which \(\min _{X \in {\mathcal {B}}(r, A)}\rho (X) \le 1\) gives the solution of problem (14). This r can be found merely by bisection.
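A sketch of this bisection (our own illustration; min_rho_in_ball is a stand-in for the selective greedy minimizer of \(\rho (X)\) over \({\mathcal {B}}(r, A)\) and is assumed to be supplied by the user):

import numpy as np

def closest_stable_radius(A, min_rho_in_ball, tol=1e-6):
    # Bisection on r for problem (14).
    lo = 0.0
    hi = np.linalg.norm(A, np.inf)        # X = 0 lies in B(hi, A) and rho(0) <= 1
    while hi - lo > tol:
        r = 0.5 * (lo + hi)
        if min_rho_in_ball(A, r) <= 1.0:  # a stable matrix exists within distance r
            hi = r
        else:
            lo = r
    return hi                             # approximately the minimal feasible r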

We found the closest stable matrices to randomly generated matrices A in various dimensions up to \(d=500\). The results are given in Table 8. We remark that for this set of experiments we set the precision of the power method to \(10^{-8}\), since in higher dimensions the procedure becomes quite sensitive: the entries of the leading eigenvector get very close to one another, and the procedure depends on the ordering of these entries.

Table 8 Average computing time for finding the closest stable matrix

Below is one practical example with an unstable matrix A of size \(d=10\) with \(\rho (A) = 9.139125\):

$$\begin{aligned} A = \left( \begin{array}{l c c c c c c c c r} 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 3 {\ } &{}\quad 5 {\ } &{}\quad 0 {\ } &{}\quad 8 {\ }&{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 \\ 8 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ }&{}\quad 0 {\ } &{}\quad 8 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 2 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 4 {\ } &{}\quad 0 {\ }&{}\quad 5 {\ } &{}\quad 0 {\ } &{}\quad 7 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ }&{}\quad 0 {\ } &{}\quad 8 {\ } &{}\quad 0 \\ 1 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ }&{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 7 {\ } &{}\quad 0 {\ }&{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 6 {\ } &{}\quad 2 {\ } &{}\quad 2 {\ }&{}\quad 0 {\ } &{}\quad 1 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 1 {\ }&{}\quad 0 {\ } &{}\quad 7 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 9 {\ } &{}\quad 5 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ }&{}\quad 0 {\ } &{}\quad 1 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 3 {\ }&{}\quad 4 {\ } &{}\quad 8 {\ } &{}\quad 9 \\ \end{array} \right) . \end{aligned}$$

The algorithm finds the closest (in \(L_{\infty }\)-norm) stable matrix:

$$\begin{aligned} X = \left( \begin{array}{l l l l l l l l l r} 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 3 {\ } &{}\quad 5 {\ } &{}\quad 0 {\ } &{}\quad 0.125 {\ }&{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 \\ 0.125 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ }&{}\quad 0 {\ } &{}\quad 8 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 2 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 4 {\ } &{}\quad 0 {\ }&{}\quad 5 {\ } &{}\quad 0 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ }&{}\quad 0 {\ } &{}\quad 0.125 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ }&{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ }&{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 6 {\ } &{}\quad 2 {\ } &{}\quad 2 {\ }&{}\quad 0 {\ } &{}\quad 1 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 1 {\ }&{}\quad 0 {\ } &{}\quad 7 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 2.125 {\ } &{}\quad 5 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ }&{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 \\ 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 0 {\ } &{}\quad 3 {\ }&{}\quad 4 {\ } &{}\quad 8 {\ } &{}\quad 1 \\ \end{array} \right) . \end{aligned}$$

8 Implementation details of the selective greedy method

8.1 Rounding errors: the tolerance parameter

Even though the selective greedy method can, in theory, safely and effectively be used on non-negative product families, cycling might still occur due to computational errors, especially in the case of very sparse matrices. The following example sheds some light on this issue.

We apply the selective greedy algorithm to the product family \(\mathcal {F} = \mathcal {F}_1\times \cdots \times \mathcal {F}_7\) of (0, 1)-matrices of dimension 7 for solving the minimization problem. Each uncertainty set \(\mathcal {F}_i\) consists of (0, 1)-vectors containing from 1 to 4 ones. The selective greedy algorithm cycles between the following two matrices:

$$\begin{aligned} A = \begin{pmatrix} 0 &{}\quad 1 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 1\\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 1\\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad 0\\ 0 &{}\quad 1 &{}\quad 1 &{}\quad 0 &{}\quad 1 &{}\quad 1\\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad 0\\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \end{pmatrix} \quad \rightleftarrows \quad A' \ = \ \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad 1\\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 1\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad 1\\ 0 &{}\quad 1 &{}\quad 1 &{}\quad 0 &{}\quad 1 &{}\quad 1\\ 0 &{}\quad 0 &{} \quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 1\\ 0 &{}\quad 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 1 \end{pmatrix} \end{aligned}$$

Both matrices have the same leading eigenvalue \(\lambda _{\max } = 1\), and the algorithm computes the same leading eigenvector

$$\begin{aligned} {\varvec{v}}\ = \ (0.00106389,\ 0.14841278,\ 0.73568055,\ 0.44311056,\ 0.14841278,\ 0.14841278,\ 0.44311056). \end{aligned}$$

Both matrices are minimal in the second, fifth, and sixth rows (the rows that get changed) with respect to the vector \({\varvec{v}}\). In addition, we have \(({\varvec{a}}_i,{\varvec{v}}) = ({\varvec{a}}'_i,{\varvec{v}})\) for \(i\in \{2,5,6\}\). According to the algorithm, those rows should remain unchanged, and the algorithm should actually terminate with the matrix A. The explanation of this contradictory behaviour is that the machine, due to calculation imprecision, keeps miscalculating the above dot products, first taking the “problematic” rows \({\varvec{a}}'_i\) of the second matrix as optimal, and then rolling back to the rows \({\varvec{a}}_i\), seeing them as optimal.

One of the most straightforward strategies to resolve this issue is simply to round all calculations to some given number of decimals. This approach is particularly efficient if we are not concerned with high precision. However, if we want our calculations to have a higher order of precision, we need to resort to other strategies.

Another idea to deal with this issue is to modify the part of the algorithm that dictates the conditions under which a row is changed. Here we use the notation for maximization, although everything is completely analogous in the minimization case.

Let \({\varvec{a}}_i^{(k)}\) be the i-th row of a matrix \(A_k\) obtained in the kth iteration, \({\varvec{v}}_k\) its leading eigenvector, and \({\varvec{a}}_i^{(k+1)} = \mathrm{argmax}_{{\varvec{a}}\in \mathcal {F}_i}({\varvec{a}},{\varvec{v}}_k)\), different from \({\varvec{a}}_i^{(k)}\). Unless the computed dot products \(({\varvec{a}}_i^{(k)},{\varvec{v}}_k)\) and \(({\varvec{a}}_i^{(k+1)},{\varvec{v}}_k)\) are the same, the row \({\varvec{a}}_i^{(k)}\) gets replaced by \({\varvec{a}}_i^{(k+1)}\). However, as can be seen from the example above, the change may occur even if those dot products are the same. The undesired change may also occur if the vector \({\varvec{a}}_i^{(k+1)}\) is not truly optimal but the computed dot products are very close to each other, which leads to rolling back to the vector \({\varvec{a}}_i^{(k)}\) in the next iteration, and thus to cycling. To counter this, we impose the rule that the row \({\varvec{a}}_i^{(k)}\) gets replaced by the row \({\varvec{a}}_i^{(k+1)}\) if and only if \({\varvec{a}}_i^{(k+1)} = \mathrm{argmax}_{{\varvec{a}}\in \mathcal {F}_i}({\varvec{a}},{\varvec{v}}_k)\) and \(\vert ({\varvec{a}}_i^{(k)},{\varvec{v}}_k) - ({\varvec{a}}_i^{(k+1)},{\varvec{v}}_k)\vert \geqslant \delta \), where \(\delta \) is a small tolerance parameter. In other words, in the \((k+1)\)st iteration, the row \({\varvec{a}}_i^{(k)}\) is not replaced by the new row \({\varvec{a}}_i^{(k+1)}\) if their computed scalar products with the vector \({\varvec{v}}_k\) are very close to each other. This modification of the algorithm converges to a point of approximate maximum/minimum. The proof of convergence along with the estimates for the rates of convergence from Sect. 5 can be adapted to this modification in a straightforward manner.
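A sketch of this modified update rule for a finite uncertainty set (our own illustration; F_i is a 2-D array whose rows are the candidate rows):

import numpy as np

def updated_row(a_old, F_i, v_k, delta=1e-12):
    # Replace the row only if the computed improvement of the scalar product exceeds delta.
    a_new = F_i[np.argmax(F_i @ v_k)]     # candidate maximizer over F_i
    if (a_new @ v_k) - (a_old @ v_k) >= delta:
        return a_new
    return a_old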

8.2 Dealing with reducible families

As stated in Sect. 2.2, positivity of the leading eigenvector \({\varvec{v}}_k\) obtained in the final iteration of the selective greedy algorithm for the maximization problem is a guarantee that the obtained solution is correct. However, if \({\varvec{v}}_k\) contains some zero entries, our computed solution might not be the optimal one. This is not an issue when working with irreducible product families, since in that case the leading eigenvector of the computed matrix is always positive [34, Lemma 4].

One way to resolve this issue is to compute the Frobenius factorization of the family \({\mathcal {F}}\), after which the maximization problem is reduced to several similar problems in smaller dimensions (see [34, Section 3.3]). The algorithm of Frobenius factorization is fast [19]; it can be found in [38]. In practice it is often possible to manage without the Frobenius factorization, since having only one irreducible matrix is enough to render the whole family irreducible. A simple strategy when working with reducible families is to include in the family a cyclic permutation matrix P, which is irreducible, multiplied by a small parameter \(\alpha \), as sketched below.
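A minimal illustration of this perturbation (our own sketch; the way the rows of \(\alpha P\) are injected into the uncertainty sets is one possible realization, not prescribed by the text):

import numpy as np

d, alpha = 5, 1e-6
P = np.roll(np.eye(d), 1, axis=1)    # cyclic permutation matrix: irreducible

# Perturb every candidate row of the i-th uncertainty set by the i-th row of
# alpha * P, so that every matrix of the family becomes A + alpha * P and is
# therefore irreducible.
rng = np.random.default_rng(0)
F = [rng.random((3, d)) * (rng.random((3, d)) < 0.2) for _ in range(d)]   # sparse example data
F_perturbed = [F_i + alpha * P[i] for i, F_i in enumerate(F)]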