1 Introduction

Let V be a finite dimensional vector space over a field K. Denote by span \(\mathcal {S}\) the subspace of V generated by a system \(\mathcal {S}\) of vectors in V. Let \(\mathcal {A}=\left (a_{1}, \ldots , a_{r} \right )\) and \({\mathcal{B}}=\left (b_{1}, \ldots , b_{s} \right )\) be two systems of vectors in V such that \(\mathcal {A}\) is linearly independent and \(\mathrm {span } \mathcal {A} \subset \mathrm {span } {\mathcal{B}}\). The latter condition can be written in terms of matrix multiplication as

$$ \mathcal{A}^{T} = M \mathcal{B}^{T}, $$
(1)

where superscript T denotes the matrix transpose operation and M is an appropriate matrix in \(M_{r, s}(K)\). The Exchange Lemma, attributed by van der Waerden [8, p. 96] to Steinitz [7, p. 133], but already discovered by Grassmann many decades earlier [3, p. 30], says that \(r \le s\) and that there exists a system \(\mathcal {C}\) consisting of the vectors of \(\mathcal {A}\) and of \(s-r\) vectors from \({\mathcal{B}}\) such that \(\mathrm {span } \mathcal {C} = \mathrm {span } {\mathcal{B}}\). The proofs and the resulting algorithms for constructing \(\mathcal {C}\) presented in the cited works, as well as in our contemporary textbooks, use somewhat tedious procedures by induction on r, called the exchange method or pivoting (cf. Section 3.3 in [5]).

The main aim of this note is to present two new concise algorithms for constructing \(\mathcal {C}\) (Section 2) and, in consequence, two new alternative proofs of the Exchange Lemma. The second algorithm, which uses basic minors, is in a sense more general than the first, which uses row echelon forms; the relation between the two algorithms is described in Remark 3. As to the proofs, we comment on them in Remark 4 (for the computational complexities, see Remarks 1 and 6).

The organization of the note is as follows. In Section 2, we present the algorithms and, in an example, juxtapose them with one by pivoting. In Section 3, we prove correctness of the algorithm via row echelon forms. Prior to proving correctness of the algorithm via basic minors in Section 6, we recall some properties of basic minors in Section 4, and in Section 5, we obtain a certain spanning criterion. In Section 7, we deal with this criterion in more detail. We establish two additional algorithms in Section 8.

2 The algorithms

Input: Two systems of vectors \(\mathcal {A}=\left (a_{1}, \ldots , a_{r} \right )\) and \({\mathcal{B}}=\left (b_{1}, \ldots , b_{s} \right )\), and a matrix M such that \(\mathcal {A}\) is linearly independent and \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\).

Output: A system \(\mathcal {C}\) consisting of the vectors of \(\mathcal {A}\) and of \(s-r\) vectors of \({\mathcal{B}}\) such that \(\text {span } \mathcal {C} = \text {span } {\mathcal{B}}\).

Algorithm via row echelon forms.

  1. Transform M into a matrix G in row echelon form.

  2. Let R be the set of the labels of those columns of G which do not contain the leading coefficients.

  3. Set \(\mathcal {C}\) to be the concatenation of the systems \(\mathcal {A}\) and \((b_{l})_{l \in R}\).
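
For readers who want to experiment, here is a minimal computational sketch of this algorithm. It is our illustration, not part of the original note: the function name exchange_via_rref is ours, vectors are assumed to be given as coordinate lists over the rationals, and sympy's exact rref() is used (passing to the reduced form does not change the set R).

```python
# A sketch of the algorithm via row echelon forms (our illustration).
# Assumes A^T = M B^T with A linearly independent, as in the input above.
from sympy import Matrix

def exchange_via_rref(A_vectors, B_vectors, M):
    G, pivots = Matrix(M).rref()   # pivots: 0-based labels of pivot columns
    R = [l for l in range(len(B_vectors)) if l not in pivots]
    return list(A_vectors) + [B_vectors[l] for l in R]
```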

Algorithm via basic minors.

  1. Find a basic minor \({\Delta }= \det (\tilde {M})\) of M.

  2. Let R be the set of the labels of those columns of M which do not intersect with \(\tilde {M}\).

  3. Set \(\mathcal {C}\) to be the concatenation of the systems \(\mathcal {A}\) and \((b_{l})_{l \in R}\).
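
A matching sketch of the second algorithm (again ours), with the bordering minors method of Remark 2 implemented naively by exact determinant evaluation; the matrix is assumed nonzero and the helper names are hypothetical.

```python
# A sketch of the algorithm via basic minors (our illustration).
from sympy import Matrix

def basic_minor_columns(M):
    """Return 0-based (rows, cols) of a basic minor of the nonzero matrix M,
    found by the bordering minors method of Remark 2."""
    M = Matrix(M)
    k, l = M.shape
    # Step 1: any nonzero entry gives a nonzero first-order minor.
    rows, cols = next(([i], [j]) for i in range(k) for j in range(l)
                      if M[i, j] != 0)
    while len(rows) < min(k, l):
        ext = next((([*rows, i], [*cols, j])
                    for i in range(k) if i not in rows
                    for j in range(l) if j not in cols
                    if M[[*rows, i], [*cols, j]].det() != 0), None)
        if ext is None:        # all bordering minors vanish: minor is basic
            break
        rows, cols = ext
    return rows, cols

def exchange_via_basic_minor(A_vectors, B_vectors, M):
    _, cols = basic_minor_columns(M)
    R = [l for l in range(len(B_vectors)) if l not in cols]
    return list(A_vectors) + [B_vectors[l] for l in R]
```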

Example 1

Let \((e_{1}, e_{2}, e_{3}, e_{4})\) be the standard basis in \({\mathbb {R}}^{4}\). Denote

$$\begin{array}{ccc} a_{1} & = & [-1, -1, 1,0],\\ a_{2} & = & [1, -1, -1,0],\\ a_{3} & = & [1, 1, -1,-2], \end{array} $$

\(\mathcal {A}=(a_{1}, a_{2}, a_{3})\), and \({\mathcal{B}}=(b_{1},\ldots , b_{5})=(e_{1}, e_{2}, e_{3}, e_{4}, -e_{4})\). For

$$M=[m_{ij}]= \left[ \begin{array}{rrrrr} -1 & -1 & 1 & 1 & 1\\ 1 & -1 & -1 & 1 & 1\\ 1 & 1 & -1 & -1 & 1 \end{array}\right]$$

we have \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\). One can check that \(\mathcal {A}\) is linearly independent.

First, we find \(\mathcal {C}\) using the algorithm via row echelon forms.

  1. Add row 1 to rows 2 and 3 to get

     $$G= \left[ \begin{array}{rrrrr} -1 & -1 & 1 & 1 & 1\\ 0 & -2 & 0 & 2 & 2\\ 0 & 0 & 0 & 0 & 2 \end{array}\right].$$

  2. \(R=\left \{3,4\right \}\).

  3. \(\mathcal {C}=(a_{1}, a_{2}, a_{3}, b_{3}, b_{4})=(a_{1}, a_{2}, a_{3}, e_{3}, e_{4})\).

Second, we find \(\mathcal {C}\) using the algorithm via basic minors.

  1. Apply the method described in Remark 2. Select \({\Delta }(1) = m_{35} = 1 \ne 0\). Minor \({\Delta }(2)=\det \left [ \begin {array}{cc} m_{2 4} & m_{2 5}\\ m_{3 4} & m_{3 5} \end {array}\right ]= \det \left [ \begin {array}{cc} 1 & 1\\ -1 & 1 \end {array}\right ]=2\) contains \({\Delta }(1)\) and is nonzero. Minor \({\Delta }(3)=\det (\tilde {M})=\det ([m_{ij}]_{1\le i\le 3,\, 3 \le j \le 5})=4\) contains \({\Delta }(2)\) and is nonzero, thus is a basic minor.

  2. \(R=\left \{1,2\right \}\).

  3. \(\mathcal {C}=(a_{1}, a_{2}, a_{3}, b_{1}, b_{2})=(a_{1}, a_{2}, a_{3}, e_{1}, e_{2})\).

Now we find \(\mathcal {C}\) using pivoting.

  1. Since \(m_{11} = -1 \ne 0\), we can exchange vectors \(a_{1}\) and \(b_{1}\), and get \([b_{1}\, a_{2}\, a_{3}]^{T}=M^{\prime } [a_{1}\, b_{2}\, b_{3}\, b_{4}\, b_{5}]^{T}\) with

     $$M^{\prime}=[m^{\prime}_{ij}]= \left[ \begin{array}{rrrrr} -1 & -1 & 1 & 1 & 1\\ -1 & -2 & 0 & 2 & 2\\ -1 & 0 & 0 & 0 & 2 \end{array}\right].$$

  2. Since \(m^{\prime }_{2 2}=-2\ne 0\), we can exchange vectors \(a_{2}\) and \(b_{2}\), and get \([b_{1}\, b_{2}\, a_{3}]^{T}=M^{\prime \prime } [a_{1}\, a_{2}\, b_{3}\, b_{4}\, b_{5}]^{T}\) with

     $$M^{\prime\prime}=[m^{\prime\prime}_{ij}]= \left[ \begin{array}{ccrrr} -1/2 & 1/2 & 1 & 0 & 0\\ -1/2 & -1/2 & 0 & 1 & 1\\ -1 & 0 & 0 & 0 & 2 \end{array}\right].$$

  3. Since \(m^{\prime \prime }_{3 5}=2\ne 0\), we can exchange vectors \(a_{3}\) and \(b_{5}\), and get \([b_{1}\, b_{2}\, b_{5}]^{T}=M^{\prime \prime \prime } [a_{1}\, a_{2}\, b_{3}\, b_{4}\, a_{3}]^{T}\) with some matrix \(M^{\prime \prime \prime }\).

  4. \(\mathcal {C}=(a_{1}, a_{2}, a_{3}, b_{3}, b_{4})=(a_{1}, a_{2}, a_{3}, e_{3}, e_{4})\).
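
Running the two sketches from the beginning of this section on the data of Example 1 (a quick check we add; since both the echelon form and the basic minor are non-unique, a different scan order may select different columns than the hand computation):

```python
# Example 1 data (our check): b5 = -e4, so B is not linearly independent.
A = [[-1, -1, 1, 0], [1, -1, -1, 0], [1, 1, -1, -2]]
B = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, -1]]
M = [[-1, -1, 1, 1, 1], [1, -1, -1, 1, 1], [1, 1, -1, -1, 1]]

print(exchange_via_rref(A, B, M))         # keeps b3, b4, as in the text
print(exchange_via_basic_minor(A, B, M))  # this scan also keeps b3, b4;
                                          # the text's run (starting from
                                          # m35) keeps b1, b2 instead
```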

Remark 1

The number of arithmetic operations (additions, subtractions, multiplications, and divisions) in the algorithms we consider is as follows (recall that \(s \ge r = \text {rank}(M)\)).

  (i) Algorithm via row echelon forms, provided M is transformed using Gaussian elimination: \(r^{2}(s-\frac {r}{3}) + O(r(s+r))\).

  (ii) Algorithm via basic minors, provided Δ is found using the method described in Remark 2 with the intermediate minors computed by elementary operations: the same as in (i).

  (iii) Algorithm via the exchange of vectors: \(2r^{2}s + O(r(s + r))\).

3 Correctness of the algorithm via row echelon forms

We may assume that the matrix G is in reduced row echelon form. Indeed, if this is not the case, we transform G further into reduced row echelon form using Gauss-Jordan elimination. This does not affect R.

M is transformed to G by means of a finite sequence of elementary operations. Apply these operations simultaneously to both sides of (1), namely, on the left-hand side to the column matrix \(\mathcal {A}^{T}\) and on the right-hand side to the matrix M. We get

$$ \left[ \begin{array}{c} a_{1}^{\prime}\\ a_{2}^{\prime}\\ \vdots\\ a_{r}^{\prime} \end{array}\right] =G \left[ \begin{array}{c} b_{1}\\ b_{2}\\ \vdots\\ b_{s} \end{array}\right]. $$
(2)

Denote \(\mathcal {A}^{\prime }=(a_{1}^{\prime }, \ldots , a_{r}^{\prime })\). Since the elementary operations on a system of vectors do not affect either its span or its linear independence, we conclude that

$$ \text{span } \mathcal{A}^{\prime} = \text{span } \mathcal{A} $$
(3)

and \(\mathcal {A}^{\prime }\) is linearly independent. In particular, this means that the system \(\mathcal {A}^{\prime }\) consists of nonzero vectors. Thus, all the rows of the matrix G are nonzero. The number of nonzero rows of any matrix in (reduced) row echelon form is less than or equal to the number of its columns. The inequality \(r \le s\) follows.

Put \(G = [g_{ij}]\). For \(i = 1,\ldots , r\), denote by \(j_{i}\) the label of the column of G which contains the leading coefficient of the i th row. Let E be the set of these labels, i.e., \(E=\left \lbrace j_{1}, \ldots , j_{r}\right \rbrace \). So \(R=\left \lbrace l_{1}, \ldots , l_{s-r} \right \rbrace =\left \lbrace 1, \ldots , s\right \rbrace \setminus E\) and

$$\mathcal{C}=(a_{1}, \ldots, a_{r}, b_{l_{1}}, \dots, b_{l_{s-r}}).$$

Since \( \text {span } \mathcal {A} \subset \text {span } {\mathcal{B}}\), we have \(\text {span } \mathcal {C} \subset \text {span } {\mathcal{B}}\). In order to prove the converse inclusion \(\text {span } {\mathcal{B}} \subset \text {span } \mathcal {C}\), it suffices to check that \(b_{j} \in \text {span } \mathcal {C}\) for each \(j \in E\). By the definition of the reduced row echelon form, the submatrix of G consisting of the columns labeled by \(j \in E\) is the identity matrix. Thus, by (2), we have for \(i = 1,\ldots , r\)

$$ a_{i}^{\prime} = b_{j_{i}}+\sum\limits_{l \in R} g_{i l} b_{l}. $$

Hence, for each \(j \in E\),

$$b_{j} \in \text{span } \mathcal{A}^{\prime} + \text{span } (b_{l})_{l \in R}.$$

Thus, \(\text {span } {\mathcal{B}} \subset \text {span } \mathcal {C}\) by (3) and the definition of \(\mathcal {C}\).
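
To see the argument at work on Example 1 (an illustration we add here), Gauss-Jordan elimination carries the G found there into

$$\left[ \begin{array}{rrrrr} 1 & 0 & -1 & 0 & 0\\ 0 & 1 & 0 & -1 & 0\\ 0 & 0 & 0 & 0 & 1 \end{array}\right], \qquad E=\left \{1,2,5\right \}, \qquad R=\left \{3,4\right \},$$

so \(a_{1}^{\prime } = b_{1}-b_{3}\), \(a_{2}^{\prime } = b_{2}-b_{4}\), and \(a_{3}^{\prime } = b_{5}\); hence \(b_{1}=a_{1}^{\prime }+b_{3}\), \(b_{2}=a_{2}^{\prime }+b_{4}\), and \(b_{5}=a_{3}^{\prime }\) all lie in \(\text {span } \mathcal {C}\), as the proof asserts.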

4 Basic minors

From now on, we use the following notation. For a matrix \(A \in M_{k, l}(K)\), we denote by \(A_{i}\) (resp. \(A^{j}\)) the i th row (resp. the j th column) of A. If \(S \subset \left \lbrace 1, \ldots , l\right \rbrace \), then \(A_{S}\) denotes the submatrix of A consisting of the columns \(A^{j}\), where \(j \in S\).

Let \(D = [d_{ij}] \in M_{k, l}(K)\) and let Δ be a nonzero minor of D of order p. We call Δ a basic minor of D if either \( p = \min (k,l)\) or all of the minors of order \(p + 1\) that contain Δ, i.e., from which Δ is obtained by crossing out a row and a column, vanish.

Lemma 1

Let \(\tilde {D}\) be a square submatrix of D. If \({\Delta } = \det (\tilde {D})\) is a basic minor of D, then the system consisting of the rows of D intersecting \(\tilde {D}\) is a basis of the row space \(\text {span} (D_{1}, D_{2},\ldots , D_{k})\) of D.

For a proof using the Laplace expansion, see [6, pp. 9–10]. Although in [6] the definition of a basic minor differs from the one we use by demanding the maximality of the order of a nonzero minor, the proof referred to only uses the conditions we impose. The equivalence of these two definitions is irrelevant to our note; actually, it is implied by the equinumerosity of bases.

Remark 2

There is a concise inductive procedure for finding a basic minor of a matrix, called the bordering minors method. Since this method is presented in very few textbooks available in English (e.g., [4, pp. 70–73]), let us sketch it here. Given a nonzero matrix in \(M_{k, l}(K)\), select any nonzero first-order minor \({\Delta }(1)\) of it. In the inductive n th step, the input is the nonzero minor \({\Delta }(n)\). If \(n=\min (k,l)\), then \({\Delta }(n)\) is a basic minor and the procedure terminates. Else, check only those minors of order \(n + 1\) that contain \({\Delta }(n)\), i.e., from which \({\Delta }(n)\) is obtained by crossing out a row and a column. If all of these minors are zero, then \({\Delta }(n)\) is a basic minor and the procedure terminates. Else, choose the first encountered nonzero one as \({\Delta }(n + 1)\) and proceed to the next step.
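
As a small self-contained illustration (ours, not from the original note), the sketch basic_minor_columns from Section 2 traces exactly this procedure; on the rank-2 matrix below, the order-1 start borders to a nonzero order-2 minor, and every bordering order-3 minor vanishes:

```python
# Bordering minors on a small rank-2 example (our illustration).
from sympy import Matrix

D = Matrix([[1, 2, 3],
            [2, 4, 6],     # = 2 * row 1
            [0, 1, 1]])
print(basic_minor_columns(D))   # ([0, 2], [0, 1]): rows 1,3 / columns 1,2
                                # (0-based in code, 1-based in the text)
```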

5 Spanning criterion

Let \(\mathcal {A}=\left (a_{1}, \ldots , a_{r}\right )\) and \({\mathcal{B}}=\left (b_{1}, \ldots , b_{s}\right )\) be two systems of vectors, where \(r \le s\). Note well that we do not assume that the system \(\mathcal {A}\) is linearly independent. Suppose that \(\text {span } \mathcal {A} \subset \text {span } {\mathcal{B}}\). This means that there exists a matrix \(M \in M_{r, s}(K)\) such that \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\). For a given set \(R \subset \left \lbrace 1, \ldots , s\right \rbrace \) of cardinality \(s-r\), define \(\mathcal {C}\) to be the concatenation of the systems \(\mathcal {A}\) and \((b_{l})_{l \in R}\). Define \({\Delta }=\det (M_{E})\), where \(E= \left \lbrace 1, \ldots , s\right \rbrace \setminus R\). We have the following spanning criterion.

Lemma 2

If Δ≠ 0, then \(\text {span } \mathcal {C} = \text {span } {\mathcal{B}}\).

Proof

It simplifies the exposition of our argument, and causes no loss of generality, to assume that \(E=\left \{1, \ldots , r\right \}\). Indeed, it is enough to permute appropriately the vectors of \({\mathcal{B}}\) and the columns of M. Then \(R=\left \{r+1, \ldots , s\right \}\) and the new M equals the block matrix \(\left [ \begin {array}{cc} M_{E} & M_{R} \end {array}\right ]\).

The condition \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\) reads

$$ \left[ \begin{array}{c} a_{1}\\ \vdots\\ a_{r} \end{array}\right]=M_{E} \left[ \begin{array}{c} b_{1}\\ \vdots\\ b_{r} \end{array}\right] +M_{R}\left[ \begin{array}{c} b_{r+1}\\ \vdots\\ b_{s} \end{array}\right]. $$
(4)

Since \({\Delta }=\det (M_{E})\ne 0 \), the matrix ME is invertible. On multiplying (4) by \(M_{E}^{-1}\) we get

$$ \left[ \begin{array}{c} b_{1}\\ \vdots\\ b_{r} \end{array}\right]= M_{E}^{-1} \left[ \begin{array}{c} a_{1}\\ \vdots\\ a_{r} \end{array}\right]- M_{E}^{-1} M_{R}\left[ \begin{array}{c} b_{r+1}\\ \vdots\\ b_{s} \end{array}\right], $$

so \(\text {span } {\mathcal{B}} \subset \text {span } \mathcal {C}\). The converse inclusion is clear. □
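
A quick numerical instance of the criterion (a check we add), using the M of Example 1 with E = {1, 2, 5}, the columns of the basic minor found there:

```python
# Checking Lemma 2 on Example 1 (our illustration).
from sympy import Matrix

M = Matrix([[-1, -1, 1, 1, 1], [1, -1, -1, 1, 1], [1, 1, -1, -1, 1]])
E, R = [0, 1, 4], [2, 3]          # 0-based labels; E = {1,2,5} in the text
ME, MR = M[:, E], M[:, R]
print(ME.det())                   # 4 != 0, so span C = span B by Lemma 2
print(ME.inv() * MR)              # the coefficients through which b1, b2, b5
                                  # are recovered from A and b3, b4
```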

6 Correctness of the algorithm via basic minors

Let I be the set of the labels of those rows of M which intersect with \(\tilde {M}\). Suppose that \(I\ne \left \{1, \ldots , r\right \}\). Take any \(i_{0} \in \left \{1, \ldots , r\right \} \setminus I\). By Lemma 1 on basic minors, the row \(M_{i_{0}}\) is a linear combination of the system \((M_{i})_{i \in I}\). Hence, by (1), the vector \(a_{i_{0}}\) is the corresponding linear combination of the system \((a_{i})_{i \in I}\). This contradicts the assumption that \(\mathcal {A}\) is linearly independent. Thus, \(I= \left \{1, \ldots , r\right \}\), so the order of Δ equals r. The inequality \(r \le s\) follows.

From Lemma 2, we get that \(\text {span } \mathcal {C} = \text {span } {\mathcal{B}}\).

Remark 3

Let \(E= \left \lbrace 1, \ldots , s\right \rbrace \setminus R\), where R is as in the first algorithm. Then \(\det (M_{E})\) is one of the basic minors of M: indeed, \(\det (G_{E}) \ne 0\), elementary row operations change \(\det (M_{E})\) only by a nonzero factor, and a nonzero minor of order \(r=\min (r,s)\) is basic by definition. If one chooses \(\tilde {M}=M_{E}\) in the second algorithm, then the corresponding system \(\mathcal {C}\) equals the one from the first.

Remark 4

The part of the above proof concerning the inequality \(r \le s\) is essentially the same as in [2, pp. 30–31]. This inequality can also be deduced, alternatively to our direct argument of Section 3 and to the inductive procedure, from the theorem on the existence of nontrivial solutions to systems of homogeneous linear equations in which the number of variables exceeds the number of equations (see, e.g., [1], Chapter 3, Proposition 3.16 and Chapter 1, Corollary 2.17). This theorem is a corollary of the fact that every matrix can be brought to the reduced row echelon form by Gauss-Jordan elimination.

7 More on the spanning criterion

The condition in Lemma 2 is in general not necessary, even when \(\mathcal {A}\) is linearly independent. Indeed, we have the following example.

Example 2

Let \((e_{1}, e_{2}, e_{3})\) be the standard basis in \(\mathbb {R}^{3}\). Denote \(a_{1} = [1,-1,0]\) and \(a_{2} = [1,1,0]\). Define \(\mathcal {A}=(a_{1}, a_{2})\) and \({\mathcal{B}}=(e_{1}, e_{2}, e_{3}, -e_{3})\). Thus, for \(M= \left [ \begin {array}{rrrr} 1 & -1 & 0 & 0\\ 1 & 1 & 0 & 0 \end {array}\right ]\), we have \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\). Set \(R=\left \{2,4\right \}\). Then \(\mathcal {C}=(a_{1}, a_{2}, e_{2}, -e_{3})\), so \(\text {span } \mathcal {C}=\text {span } {\mathcal{B}}\) but \({\Delta } = 0\).
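
This can be confirmed mechanically (a check we add): stacking the coordinates of \(\mathcal {C}\) and \({\mathcal{B}}\) and comparing ranks shows that the spans coincide, even though Δ = 0.

```python
# Verifying Example 2 (our illustration): span C = span B although
# Delta = det(M_E) = 0 for E = {1, 3}.
from sympy import Matrix

C = Matrix([[1, -1, 0], [1, 1, 0], [0, 1, 0], [0, 0, -1]])   # a1, a2, e2, -e3
B = Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, -1]])    # e1, e2, e3, -e3
print(C.rank(), B.rank(), C.col_join(B).rank())  # 3 3 3: equal spans
print(Matrix([[1, 0], [1, 0]]).det())            # M_E, determinant 0
```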

However, under the assumption that \({\mathcal{B}}\) is linearly independent, the condition turns out to be necessary.

Proposition 3

\({\Delta } \ne 0\) if and only if \(\text {span } \mathcal {C}=\text {span } {\mathcal{B}}\), provided \({\mathcal{B}}\) is linearly independent.

Proof

Since we have Lemma 2, we only need to prove that the assumption \(\text {span } \mathcal {C} = \text {span } {\mathcal{B}}\) implies \({\Delta }\ne 0\). As in the proof of Lemma 2, we assume that \(E=\left \{1, \ldots , r\right \}\). Write \( \text {span } {\mathcal{B}} \subset \text {span } \mathcal {C}\) and \(\text {span } \mathcal {C} \subset \text {span } {\mathcal{B}}\) as

$$ \mathcal{B}^{T} = X \mathcal{C}^{T} \quad \text{and} \quad \mathcal{C}^{T} = Y \mathcal{B}^{T}, $$
(5)

with \(X \in M_{s, s}(K)\) and

$$ Y=\left[ \begin{array}{cc} M_{E} & M_{R}\\ 0 & I \end{array}\right], $$
(6)

where \(0 \in M_{s-r, r}(K)\) is the zero matrix and \(I \in M_{s-r, s-r}(K)\) is the identity matrix. By (5),

$$ \mathcal{B}^{T} = X Y \mathcal{B}^{T}. $$

Since \({\mathcal{B}}\) is linearly independent, it follows that XY is the identity matrix, so \(\det (X Y) = 1\). Thus \({\Delta }=\det (M_{E})= \det (Y) \ne 0\) by (6) and the multiplicativity of the determinant. □

Remark 5

If \({\mathcal{B}}\) is linearly independent and \({\Delta } \ne 0\), then both \(\mathcal {A}\) and \(\mathcal {C}\) are linearly independent. Indeed, by Lemma 2, the system \(\mathcal {C}\), which consists of s vectors, spans the s-dimensional space \(\text {span } {\mathcal{B}}\); by the equinumerosity of bases, \(\mathcal {C}\) is then a basis, and in particular its subsystem \(\mathcal {A}\) is linearly independent.

8 Two additional algorithms

Assuming the linear independence of \({\mathcal{B}}\) rather than of \(\mathcal {A}\), we have two algorithms analogous to those in Section 2. Their correctness can easily be established by combining our arguments in Sections 3 and 6 with Remark 5.

Input: Two systems of vectors \(\mathcal {A}=\left (a_{1}, \ldots , a_{r} \right )\) and \({\mathcal{B}}=\left (b_{1}, \ldots , b_{s} \right )\), and a matrix M such that \({\mathcal{B}}\) is linearly independent and \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\).

Algorithm I.

Output: A system \(\mathcal {A}^{*}\) which is a basis of \(\text {span } \mathcal {A}\), and a system \(\mathcal {C}\) which is the completion of \(\mathcal {A}^{*}\) by some vectors of \({\mathcal{B}}\) to a basis of \(\text {span } {\mathcal{B}}\).

  1. Transform M into a matrix G in row echelon form.

  2. Let R be the set of the labels of those columns of G which do not contain the leading coefficients.

  3. Set \(\mathcal {A}^{*}=(G_{i}{\mathcal{B}}^{T})_{i \in \left \{1, \ldots , \text {rank} (G)\right \}}\).

  4. Set \(\mathcal {C}\) to be the concatenation of the systems \(\mathcal {A}^{*}\) and \((b_{l})_{l \in R}\).
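
A sketch of Algorithm I in the same style as the sketches of Section 2 (ours; it uses the reduced echelon form, so the resulting \(\mathcal {A}^{*}\) differs from one produced by a non-reduced G only by invertible row operations):

```python
# A sketch of Algorithm I (our illustration); assumes B linearly independent.
from sympy import Matrix

def algorithm_I(B_vectors, M):
    G, pivots = Matrix(M).rref()
    B = Matrix(B_vectors)                       # rows are the vectors of B
    A_star = [list(G[i, :] * B) for i in range(len(pivots))]  # G_i B^T
    R = [l for l in range(B.rows) if l not in pivots]
    return A_star, A_star + [B_vectors[l] for l in R]
```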

Algorithm II.

Output: A subsystem \(\tilde {\mathcal {A}}\) of \(\mathcal {A}\) which is a basis of \(\text {span } \mathcal {A}\), and a system \(\mathcal {C}\) which is the completion of \(\tilde {\mathcal {A}}\) by some vectors of \({\mathcal{B}}\) to a basis of \(\text {span } {\mathcal{B}}\).

  1. Find a basic minor \({\Delta }= \det (\tilde {M})\) of M.

  2. Let I be the set of the labels of those rows of M which intersect with \(\tilde {M}\), and R the set of the labels of those columns of M which do not intersect with \(\tilde {M}\).

  3. Set \(\tilde {\mathcal {A}}=(a_{i})_{i \in I}\).

  4. Set \(\mathcal {C}\) to be the concatenation of the systems \(\tilde {\mathcal {A}}\) and \((b_{l})_{l \in R}\).
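
And a matching sketch of Algorithm II, reusing basic_minor_columns from Section 2 (ours; which basic minor is found depends on the scan order):

```python
# A sketch of Algorithm II (our illustration); assumes B linearly independent.
def algorithm_II(A_vectors, B_vectors, M):
    rows, cols = basic_minor_columns(M)
    A_tilde = [A_vectors[i] for i in rows]
    R = [l for l in range(len(B_vectors)) if l not in cols]
    return A_tilde, A_tilde + [B_vectors[l] for l in R]
```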

Example 3

Let \({\mathcal{B}}=(e_{1}, e_{2}, e_{3}, e_{4})\) and \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\) where

$$M=[m_{ij}]= \left[ \begin{array}{rrrr} -1 & -1 & 1 & 1\\ 1 & -1 & -1 & 1\\ 0 & 1 & 0 & -1 \\ 1 & 0 & -1 & 0 \end{array}\right].$$

First, we find \(\mathcal {A}^{*}\) and \(\mathcal {C}\) using Algorithm I.

  1. Add row 1 to rows 2 and 4; add row 2 multiplied by 1/2 to row 3; subtract row 2 multiplied by 1/2 from row 4. The result is

     $$G= \left[ \begin{array}{rrrr} -1 & -1 & 1 & 1 \\ 0 & -2 & 0 & 2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right].$$

  2. \(R=\left \{3,4\right \}\).

  3. \(\mathcal {A}^{*}=(G_{1}, G_{2})\).

  4. \(\mathcal {C}=(G_{1}, G_{2}, e_{3}, e_{4})\).

Second, we find \(\tilde {\mathcal {A}}\) and \(\mathcal {C}\) using Algorithm II.

  1. Select \({\Delta }(1) = m_{43} = -1 \ne 0\). Minor \({\Delta }(2)=\det (\tilde {M})= \det \left [ \begin {array}{cc} m_{3 3} & m_{3 4}\\ m_{4 3} & m_{4 4} \end {array}\right ]= \det \left [ \begin {array}{cc} 0 & -1\\ -1 & 0 \end {array}\right ]=-1\) contains \({\Delta }(1)\) and is nonzero. All four minors of order 3 that contain \({\Delta }(2)\) are zero. Thus \({\Delta }(2)\) is a basic minor.

  2. \(I=\left \{3,4\right \}\), \(R=\left \{1,2\right \}\).

  3. \(\tilde {\mathcal {A}}=(a_{3},a_{4})=(M_{3}, M_{4})\).

  4. \(\mathcal {C}=(a_{3}, a_{4}, e_{1}, e_{2})\).
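
Running these sketches on the data of Example 3 (our check; since the echelon form and the basic minor are non-unique, the outputs are valid answers that differ from the hand computation above):

```python
# Example 3 data (our check): B is the standard basis, so M has the a's as rows.
A = [[-1, -1, 1, 1], [1, -1, -1, 1], [0, 1, 0, -1], [1, 0, -1, 0]]
B = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

print(algorithm_I(B, A))      # A* from rref(M); C completes it by e3, e4
print(algorithm_II(A, B, A))  # this scan keeps (a1, a2) and e3, e4; the
                              # text's run keeps (a3, a4) and e1, e2 instead
```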

Remark 6

Given a vector space and a finite system of generating vectors, one can find a basis by successively examining the vectors in the system, and retaining only those that are not linear combinations of their predecessors. This common idea, presented in standard (linear) algebra textbooks (see, e.g., [1], Chapter 3, Proposition 3.13), can be turned into the following algorithm with the same input and output as in Algorithm II.

Standard algorithm.

  1. Form the block matrix \(\left [\begin {array}{cc} M^{T} & I \end {array}\right ]\), where \(I \in M_{s, s}(K)\) is the identity matrix, and transform it into a matrix H in row echelon form.

  2. Let E be the set of the labels of those columns of H which contain the leading coefficients. Put \(F= \left \{j \in E: j \leq r\right \}\) and \(R = \left \{j-r : j \in E \setminus F\right \}\).

  3. Set \(\tilde {\mathcal {A}}=(a_{i})_{i \in F}\).

  4. Set \(\mathcal {C}\) to be the concatenation of the systems \(\tilde {\mathcal {A}}\) and \((b_{l})_{l \in R}\).
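
A sketch of the standard algorithm (ours): the pivot columns of \(\left [\begin {array}{cc} M^{T} & I \end {array}\right ]\) are exactly the labels of the vectors of the system \((a_{1}, \ldots , a_{r}, b_{1}, \ldots , b_{s})\) that are not linear combinations of their predecessors.

```python
# A sketch of the standard algorithm of Remark 6 (our illustration);
# assumes B linearly independent, so the columns of [M^T | I] are the
# B-coordinates of a1, ..., ar, b1, ..., bs.
from sympy import Matrix, eye

def standard_algorithm(A_vectors, B_vectors, M):
    M = Matrix(M)
    r, s = M.shape
    H, pivots = M.T.row_join(eye(s)).rref()   # pivot labels, 0-based
    F = [j for j in pivots if j < r]          # labels pointing into A
    R = [j - r for j in pivots if j >= r]     # labels pointing into B
    return ([A_vectors[i] for i in F],
            [A_vectors[i] for i in F] + [B_vectors[l] for l in R])
```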

The number of arithmetic operations in the standard algorithm is

$$2 \sum\limits_{n=1}^{s} n(r+n) + O(s(s+r))= s^{2}(r+\frac{2}{3}s) + O(s(s+r)). $$

For large r = s, the number of arithmetic operations in the algorithms we consider in this section is approximately as follows.

  (i) Algorithm I: \(\frac {2}{3}s^{3}\).

  (ii) Algorithm II: the same as in (i).

  (iii) Standard algorithm: \(\frac {5}{3}s^{3}\).

  (iv) Algorithm via the exchange of vectors: \(2s^{3}\).