1 Introduction

Iterative procedures, among them those approximating various irrational numbers with the aid of some means, have been known for a long time. One of the classical iterative algorithms is the Newton process

$$\begin{aligned} x_{k+1}=\frac{1}{2}\left( x_k+\frac{a}{x_k}\right) , \qquad k \in {{\mathbb {N}}}_0, \end{aligned}$$
(1.1)

being a formalization of the Babylonian method of extracting the square root of a positive number a. Starting with an arbitrary positive \(x_0\), the sequence \(\left( x_k\right) _{k \in {{\mathbb {N}}}}\) is strictly decreasing and bounded below, hence convergent: it approximates the number \(\sqrt{a}\) (see, for instance, [4] by Carlson). Putting \(y_k:=a/x_k\) we see that (1.1) is equivalent to

$$\begin{aligned} x_{k+1}=\frac{1}{2}\left( x_k+y_k\right) \quad \text{ and } \quad 1/y_{k+1}=\frac{1}{2}\left( 1/x_k+1/y_k\right) , \quad k\in {{\mathbb {N}}}_0, \end{aligned}$$

or

$$\begin{aligned} x_{k+1}=A\left( x_k,y_k\right) \quad \text{ and } \quad y_{k+1}=H\left( x_k,y_k\right) , \quad k\in {{\mathbb {N}}}_0, \end{aligned}$$
(1.2)

where A and H denote, respectively, the arithmetic and harmonic means (cf. [8, p. 190] by Foster and Phillips, also [6] by Daróczy). It follows from the definition of \(\left( y_k\right) _{k \in {{\mathbb {N}}}_0}\) that it strictly increases to \(\sqrt{a}\). Consequently,

$$\begin{aligned} y_k<y_{k+1}<x_{k+1}<x_k, \quad k \in {{\mathbb {N}}}_0, \end{aligned}$$

and

$$\begin{aligned} \lim _{k\rightarrow \infty } x_k=\lim _{k\rightarrow \infty } y_k=\sqrt{a}. \end{aligned}$$
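The coupled form (1.2) is easy to try out numerically. The following minimal Python sketch (the function name is ours) iterates the arithmetic and harmonic means simultaneously and illustrates the common limit \(\sqrt{a}\):

```python
import math

def newton_pair(a, x0, steps=12):
    """Iterate (1.2): x <- A(x, y) and y <- H(x, y), starting from y_0 = a / x_0.

    Both updates use the previous pair (x_k, y_k); note that the product
    x_k * y_k = a is preserved, since A(x, y) * H(x, y) = x * y.
    """
    x, y = x0, a / x0
    for _ in range(steps):
        x, y = 0.5 * (x + y), 2.0 / (1.0 / x + 1.0 / y)
    return x, y

x, y = newton_pair(a=2.0, x0=1.7)
print(y, x)   # both very close to sqrt(2) = 1.41421356...
```

The convergence is quadratic, so a handful of steps already reaches machine precision.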

Clearly (1.2) is a particular case of the recurrent process

$$\begin{aligned} x_{k+1}=M\left( x_k,y_k\right) \quad \text{ and }\quad y_{k+1}=N\left( x_k,y_k\right) , \qquad k\in {{\mathbb {N}}}_0, \end{aligned}$$
(1.3)

where M and N are means on an interval I. Here \(M:I^2\rightarrow I\) is called a (bivariate) mean on I provided it satisfies the inequalities

$$\begin{aligned} \min \{x,y\} \le M(x,y) \le \max \{x,y\}, \qquad x,y \in I. \end{aligned}$$

Another important example of a recurrent algorithm (1.3) is that with \(M=A\) and \(N=G\), i.e. the arithmetic and geometric means, respectively:

$$\begin{aligned} x_{k+1}=\frac{1}{2}\left( x_k+y_k\right) \quad \text{ and }\quad y_{k+1}=\left( x_ky_k\right) ^{1/2}, \qquad k\in {{\mathbb {N}}}_0. \end{aligned}$$
(1.4)

Both these sequences have the common limit called the arithmetic-geometric mean (medium arithmeticum-geometricum) of \(x_0\) and \(y_0\), denoted by \(A~\otimes ~G\) \(\left( x_0,y_0\right) \). The algorithm (1.4) occurred first in 1784 in the work [17] by Lagrange in connection with the reduction and evaluation of elliptic integrals (see also [18, pp. 253–312, especially pp. 267, 272]). However, it was Gauss who discovered that this algorithm provides an iterative solution to the problem of rectifying an arc of the Bernoulli lemniscate. In particular, this gives a brief demonstration of Fagnano’s duplication theorem from 1718, showing how to double a lemniscate arc with a ruler and compass (cf. [23, pp. 1–7] by Siegel). In 1799 Gauss noted (see [12, p. 542]) that

$$\begin{aligned} A\otimes G\left( 1,\sqrt{2}\right) =\frac{\pi }{2}\left( \int ^{\pi /2}_0 \frac{1}{\left( 1+\sin ^2\vartheta \right) ^{1/2}}d\vartheta \right) ^{-1}. \end{aligned}$$

In general, the value of the arithmetic-geometric mean at an arbitrary point \(\left( x_0,y_0\right) \in (0, +\infty )^2\) was determined by Gauss in 1818:

$$\begin{aligned} A\otimes G\left( x_0,y_0\right) =\frac{\pi }{2}\left( \int ^{\pi /2}_0 \frac{1}{\left( x^2_0 \cos ^2\vartheta +y^2_0\sin ^2\vartheta \right) ^{1/2}}d\vartheta \right) ^{-1} \end{aligned}$$

(cf. [10], also [11, pp. 352–355]). For a systematic description of Gauss’ theory we refer to the comprehensive article [5] by Cox. The reader interested in other compound means like the arithmetic-geometric mean is referred to the book [2] by the Borweins.
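Although the closed form of the limit involves an elliptic integral, the algorithm itself is elementary to run. The Python sketch below (function names are ours) iterates (1.4) and checks Gauss’s 1799 identity against a crude midpoint quadrature of the integral:

```python
import math

def agm(x, y, tol=1e-15):
    """Arithmetic-geometric mean: iterate (1.4) until the gap closes."""
    while abs(x - y) > tol * max(x, y):
        x, y = 0.5 * (x + y), math.sqrt(x * y)
    return 0.5 * (x + y)

# Gauss's observation: A⊗G(1, √2) = (π/2) / ∫_0^{π/2} (1 + sin²θ)^{-1/2} dθ.
# Midpoint rule with n subintervals approximates the integral.
n = 100_000
h = (math.pi / 2) / n
integral = sum(h / math.sqrt(1.0 + math.sin((j + 0.5) * h) ** 2) for j in range(n))

print(agm(1.0, math.sqrt(2.0)))   # ≈ 1.19814023...
print((math.pi / 2) / integral)   # agrees to many digits
```

The quadratic convergence of (1.4) means only a few iterations are needed, in sharp contrast to the quadrature, which is the expensive part of this check.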

In 1800 Gauss suggested studying the process

$$\begin{aligned} x_{k+1}=\frac{1}{2}\left( x_k+y_k\right) \quad \text{ and }\quad y_{k+1}=\left( x_{k+1}y_k\right) ^{1/2}, \qquad k\in {{\mathbb {N}}}_0, \end{aligned}$$
(1.5)

which is superficially similar to (1.4). Apparently he realized that both \(\left( x_k\right) _{k \in {\mathbb {N}}}\) and \(\left( y_k\right) _{k \in {\mathbb {N}}}\) approach a common limit. Unexpectedly, it is expressed not by elliptic functions as in the case of (1.4) but by trigonometric or hyperbolic functions (cf. [4]). As follows from [12, pp. 234, 284], the same was known to Pfaff at the very beginning of the 19th century. In 1880 algorithm (1.5) and its fundamental properties were rediscovered by Borchardt [1] (see also [24] and [4]). Since then it has occasionally borne his name.
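The trigonometric character of the limit can be seen numerically. For \(0<x_0<y_0\) the common limit of (1.5) equals \(\sqrt{y_0^2-x_0^2}/\arccos \left( x_0/y_0\right) \) (cf. [4]); the Python sketch below (function name ours) confirms this to machine precision:

```python
import math

def borchardt(x, y, steps=60):
    """Iterate (1.5): x <- A(x, y), then y <- G(new x, y)."""
    for _ in range(steps):
        x = 0.5 * (x + y)
        y = math.sqrt(x * y)
    return x, y

x0, y0 = 1.0, 2.0
x, y = borchardt(x0, y0)

# For 0 < x0 < y0 the limit is trigonometric (not elliptic):
closed_form = math.sqrt(y0**2 - x0**2) / math.acos(x0 / y0)
print(x, y, closed_form)   # all three ≈ 1.65398...
```

Unlike (1.4), the convergence here is only linear (the gap shrinks by roughly a factor of 4 per step), which is why more iterations are taken.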

If we modify the Borchardt algorithm by replacing the arithmetic mean in the first equality with the harmonic one, then we arrive at the process

$$\begin{aligned} x_{k+1}=\frac{2}{1/x_k+1/y_k} \quad \text{ and }\quad y_{k+1}=\left( x_{k+1}y_k\right) ^{1/2}, \qquad k\in {{\mathbb {N}}}_0. \end{aligned}$$
(1.6)

Also here both sequences \(\left( x_k\right) _{k \in {\mathbb {N}}}\) and \(\left( y_k\right) _{k \in {\mathbb {N}}}\) tend to a common limit. In particular, starting with \(x_0=2\sqrt{3}\) and \(y_0=3\) we obtain the algorithm attributed to Archimedes (see [13], also [9, 22]) for estimating the number \(\pi \). For a comprehensive account of the Archimedean approximations to \(\pi \) the reader is referred to the book [2] by the Borweins.
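With these initial values, process (1.6) reproduces Archimedes’ doubling of the regular hexagon: \(x_k\) and \(y_k\) are the half-perimeters of the circumscribed and inscribed regular \(6\cdot 2^k\)-gons of the unit circle. A short Python sketch (function name ours):

```python
import math

def archimedes(x, y, steps=40):
    """Iterate (1.6): x <- H(x, y), then y <- G(new x, y)."""
    for _ in range(steps):
        x = 2.0 / (1.0 / x + 1.0 / y)
        y = math.sqrt(x * y)
    return x, y

# x_0 = 2*sqrt(3) and y_0 = 3 are the half-perimeters of the circumscribed
# and inscribed regular hexagons of the unit circle; both sequences tend to π.
x, y = archimedes(2.0 * math.sqrt(3.0), 3.0)
print(x, y)   # both ≈ 3.14159265...
```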

Observe that the sequences \(\left( x_k\right) _{k \in {\mathbb {N}}}\), \(\left( y_k\right) _{k \in {\mathbb {N}}}\) satisfy the Borchardt process if and only if the sequences \(\left( 1/x_k\right) _{k \in {\mathbb {N}}}, \left( 1/y_k\right) _{k \in {\mathbb {N}}}\) satisfy the Archimedean one. Thus it suffices to consider only one of these two algorithms, for instance algorithm (1.5).

Both processes, Borchardt’s (1.5) and Archimedes’ (1.6), are particular cases of the algorithm

$$\begin{aligned} x_{k+1} =M\left( x_k, y_k\right) \quad \text{ and } \quad y_{k+1} =N\left( x_{k+1}, y_k\right) , \qquad k \in {{\mathbb {N}}}_0, \end{aligned}$$
(1.7)

where M and N are bivariate means on an interval I and \(x_0,y_0 \in I\). Observe that if M is any of the means A, G, H on \(I=(0,+\infty )\), then M is strict:

$$\begin{aligned} \min \{x,y\}<M(x,y)<\max \{x,y\}, \quad x,y \in I, \, x\ne y, \end{aligned}$$

symmetric:

$$\begin{aligned} M(x,y)=M(y,x), \quad x,y \in I, \end{aligned}$$

and continuous. Foster and Phillips [9, Theorem] gave a simple argument for the following convergence property of the Archimedes–Borchardt algorithm:

Theorem FP

If \(x_0, y_0\) are points of an interval I and \(M,N : I^2\rightarrow I\) are continuous, strict and symmetric means, then the sequences \(\left( x_k\right) _{k \in {\mathbb {N}}}\) and \(\left( y_k\right) _{k \in {\mathbb {N}}}\), defined by (1.7), converge monotonically to a common limit.

For a discussion about some other examples of the Archimedes–Borchardt process (1.7) see, for instance, [8].

2 Reduction to generalized Gauss algorithm

In the present paper we study the generalized Archimedes–Borchardt algorithm and prove a convergence result extending Theorem FP. Given an interval I of reals and a positive integer p, a function \(M:I^p\rightarrow I\) is called a mean (in p variables) on I when

$$\begin{aligned} \min \left\{ x_1, \ldots , x_p\right\} \le M(x_1, \ldots , x_p)\le \max \left\{ x_1, \ldots , x_p\right\} \end{aligned}$$

for all \(\left( x_1, \ldots , x_p\right) \in I^p\) (see, for instance, [3]). It is said to be strict when

$$\begin{aligned} \min \left\{ x_1, \ldots , x_p\right\}< M(x_1, \ldots , x_p)< \max \left\{ x_1, \ldots , x_p\right\} , \end{aligned}$$

whenever \(\left( x_1, \ldots , x_p\right) \in I^p\) and \(\min \left\{ x_1, \ldots , x_p\right\} < \max \left\{ x_1, \ldots , x_p\right\} \). If

$$\begin{aligned} M\left( x_1, \ldots , x_p\right) =M \left( x_{\sigma (1)}, \ldots , x_{\sigma (p)}\right) \end{aligned}$$

for all \(x_1, \ldots , x_p \in I\) and every permutation \(\sigma \) of the set \(\{1, \ldots , p\}\), then the mean M is called symmetric.

Given means \(M_1, \ldots , M_p:I^p\rightarrow I\) and points \(x_{0,1}, \ldots , x_{0,p}\in I\), consider the recurrent process

$$\begin{aligned} x_{k+1, 1}&=M_1 \left( x_{k,1}, \ldots , x_{k,p}\right) , \,\, k \in {{\mathbb {N}}}_0,\nonumber \\ x_{k+1, i}&=M_i \left( x_{k+1,1}, \ldots ,x_{k+1,i-1}, x_{k,i} ,\ldots , x_{k,p}\right) , \,\, i=2,\ldots , p, \,\, k \in {{\mathbb {N}}}_0, \end{aligned}$$
(2.1)

which extends both algorithms (1.5) and (1.7). Our aim is to prove, among other things, the following result.

Theorem 2.1

If \(x_{0,1}, \ldots , x_{0,p}\) are points of an interval I and \(M_1, \ldots , M_p:I^p\rightarrow I\) are continuous strict means, then the sequences \(\left( x_{k,1}\right) _{k \in {{\mathbb {N}}}}, \ldots , \left( x_{k,p}\right) _{k \in {{\mathbb {N}}}}\), defined by (2.1), converge to a common limit. The convergence is uniform with respect to the initial point \(\mathbf{x}_0=\left( x_{0,1},\ldots , x_{0,p}\right) \) running through any compact subset of the set \(I^p\).

As we can see, Theorem 2.1 considerably generalizes the convergence part of Theorem FP. The generalization comes in three respects:

(i) the two bivariate means have been replaced by p means in p variables;

(ii) the convergence is uniform with respect to \(x_{0,1}, \ldots ,x_{0,p}\) on each compact subset of the cube \(I^p\);

(iii) the assumption of symmetry turns out to be superfluous.

However, unlike Theorem FP, Theorem 2.1 says nothing about the monotonicity of the sequences considered there. So the problem below seems to be of interest.

Problem 2.2

Are the sequences defined by (2.1), under the assumptions of Theorem 2.1 (and possibly the symmetry of the means \(M_1,\ldots , M_p\)), monotonic?

We prove Theorem 2.1 by reducing the Archimedes–Borchardt process (2.1) to an appropriate Gauss algorithm. First we need an auxiliary result.

In what follows, given an interval I and mappings \(M_1, \ldots , M_p:I^p\rightarrow I\), we recurrently define functions \(N_1, \ldots , N_p:I^p\rightarrow I\) by

$$\begin{aligned} N_1(\mathbf{x})&=M_1(\mathbf{x}), \nonumber \\ N_i(\mathbf{x})&=M_i\left( N_1(\mathbf{x}), \ldots , N_{i-1}(\mathbf{x}), x_i, \ldots , x_p\right) , \,\,\, i=2,\ldots ,p, \end{aligned}$$
(2.2)

for each \(\mathbf{x}=\left( x_1, \ldots , x_p\right) \in I^p\).

Proposition 2.3

Let \( N_1, \ldots , N_p:I^p\rightarrow I\) be given by (2.2). Then:

(i) if \(M_1, \ldots , M_p\) are means, then so are \( N_1, \ldots , N_p\);

(ii) if the means \(M_1, \ldots , M_p\) are strict, then so are \( N_1, \ldots , N_p\);

(iii) if the functions \(M_1, \ldots , M_p\) are continuous, then so are \( N_1, \ldots , N_p\);

(iv) for every point \(\mathbf{x}_0=\left( x_{0,1}, \ldots , x_{0,p}\right) \in I^p\) the sequences \(\left( x_{k,i}\right) _{k \in {{\mathbb {N}}}_0}\), \(i=1,\ldots , p\), satisfy process (2.1) if and only if

$$\begin{aligned} x_{k+1,i}=N_i\left( x_{k,1}, \ldots , x_{k,p}\right) , \quad i=1,\ldots ,p, \,\, k \in {{\mathbb {N}}}_0. \end{aligned}$$
(2.3)

Proof

(i) and (ii). Assume that \(M_1, \ldots , M_p\) are means. Then, of course, so is \(N_1\). If \(i \in \{2, \ldots ,p\}\) and \(N_1, \ldots , N_{i-1}\) are means, then taking any \(\mathbf{x}=\left( x_1, \ldots , x_p\right) \in I^p\) we have

$$\begin{aligned} N_i(\mathbf{x})&=M_i\left( N_1(\mathbf{x}), \ldots , N_{i-1}(\mathbf{x}), x_i, \ldots , x_p\right) \\ &\le \max \left\{ N_1(\mathbf{x}), \ldots , N_{i-1}(\mathbf{x}), x_i, \ldots , x_p\right\} \le \max \left\{ x_1, \ldots , x_p\right\} \end{aligned}$$

and, similarly, \(N_i(\mathbf{x})\ge \min \left\{ x_1, \ldots , x_p\right\} \).

Assume that, in addition, the means \(M_1, \ldots , M_p\) are strict. Clearly \(N_1\) is strict. Fix any \(\mathbf{x}=\left( x_1, \ldots , x_p\right) \in I^p\) with \(\min \left\{ x_1, \ldots , x_p\right\} <\max \left\{ x_1, \ldots , x_p\right\} \). Take any \(i \in \{2, \ldots ,p\}\) such that \(N_1, \ldots , N_{i-1}\) are strict. If

$$\begin{aligned} N_1(\mathbf{x}) =\ldots =N_{i-1}(\mathbf{x})=x_i=\ldots =x_p, \end{aligned}$$

then we have

$$\begin{aligned} N_i(\mathbf{x})=M_i\left( N_1(\mathbf{x}), \ldots , N_{i-1}(\mathbf{x}), x_i, \ldots , x_p\right) =N_1(\mathbf{x}), \end{aligned}$$

and thus, since

$$\begin{aligned} \min \left\{ x_1, \ldots , x_p\right\}<N_1(\mathbf{x})<\max \left\{ x_1, \ldots , x_p\right\} , \end{aligned}$$

we get

$$\begin{aligned} \min \left\{ x_1, \ldots , x_p\right\}<N_i(\mathbf{x})<\max \left\{ x_1, \ldots , x_p\right\} . \end{aligned}$$
(2.4)

Otherwise

$$\begin{aligned} \min \left\{ N_1(\mathbf{x}), \ldots , N_{i-1}(\mathbf{x}), x_i, \ldots , x_p\right\} <\max \left\{ N_1(\mathbf{x}), \ldots , N_{i-1}(\mathbf{x}), x_i, \ldots , x_p\right\} , \end{aligned}$$

and thus, as \(M_i\) is strict, we have

$$\begin{aligned} N_i(\mathbf{x})&=M_i\left( N_1(\mathbf{x}), \ldots , N_{i-1}(\mathbf{x}), x_i, \ldots , x_p\right) \\ &<\max \left\{ N_1(\mathbf{x}), \ldots , N_{i-1}(\mathbf{x}),x_i, \ldots , x_p\right\} \le \max \left\{ x_1, \ldots , x_p\right\} \end{aligned}$$

and, similarly, \(N_i(\mathbf{x})>\min \left\{ x_1, \ldots , x_p\right\} \). Thus we again come to (2.4).

(iii) This is obvious by definition (2.2).

(iv) Fix an arbitrary point \(\left( x_{0,1}, \ldots , x_{0,p}\right) \in I^p\) and assume that the sequences \(\left( x_{k,i}\right) _{k \in {{\mathbb {N}}}_0}\), \(i=1,\ldots , p\), satisfy (2.1). Then, by the first equalities of (2.1) and (2.2),

$$\begin{aligned} x_{k+1,1}=M_1\left( x_{k,1}, \ldots , x_{k,p}\right) =N_1\left( x_{k,1}, \ldots , x_{k,p}\right) \end{aligned}$$

for all \(k \in {{\mathbb {N}}}_0\). Taking any \(i \in \{2,\ldots ,p\}\) and assuming inductively that (2.3) holds for all \(j=1, \ldots , i-1\), we see that again (2.1) and (2.2) give

$$\begin{aligned} x_{k+1,i}&=M_i\left( x_{k+1,1}, \ldots , x_{k+1,i-1},x_{k,i}, \ldots , x_{k,p}\right) \\ &=M_i\left( N_1\left( x_{k,1}, \ldots , x_{k,p}\right) ,\ldots , N_{i-1}\left( x_{k,1}, \ldots , x_{k,p} \right) ,x_{k,i}, \ldots , x_{k,p}\right) \\ &=N_i\left( x_{k,1}, \ldots , x_{k,p}\right) \end{aligned}$$

for all \(k \in {{\mathbb {N}}}_0\).

The converse implication can be obtained similarly. \(\square \)

The simple observation made in Proposition 2.3 (iv) shows that the investigation of the Archimedes–Borchardt process (2.1) can be reduced to the generalized Gauss algorithm (2.3), which is relatively well studied (see [15, Sect. 2] and the references therein, also [7]). Notice that equalities (2.3) are equivalent to the condition

$$\begin{aligned} \mathbf{x}_k=\left( N_1, \ldots , N_p\right) ^k\left( \mathbf{x}_0\right) , \qquad k \in {{\mathbb {N}}}_0. \end{aligned}$$

Here, as usual, \(\mathbf{x}_k=\left( x_{k,1}, \ldots , x_{k,p}\right) \) and the symbol \(\left( N_1, \ldots , N_p\right) ^k\) denotes the k-th iterate of the mapping \(\left( N_1, \ldots , N_p\right) :I^p\rightarrow I^p\). So the sequence \(\left( \mathbf{x}_k\right) _{k \in {{\mathbb {N}}}_0}\), satisfying algorithm (2.1), is the sequence of successive iterates of the mapping \(\left( N_1, \ldots , N_p\right) \), given recurrently by formulas (2.2), starting from the point \(\mathbf{x}_0\). Observe, however, that at first glance no iterative structure is apparent in the Archimedes–Borchardt algorithm (2.1).
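The equivalence stated in Proposition 2.3 (iv) can be checked numerically. In the Python sketch below (our illustrative choice: \(p=3\) with the arithmetic, geometric and harmonic means as \(M_1, M_2, M_3\)), one step of (2.1) is compared, coordinate by coordinate, with one step of the Gauss algorithm (2.3):

```python
import math

# Three strict continuous means on (0, ∞), chosen purely for illustration.
def A(*v): return sum(v) / len(v)                    # arithmetic
def G(*v): return math.prod(v) ** (1.0 / len(v))     # geometric
def H(*v): return len(v) / sum(1.0 / t for t in v)   # harmonic

M = [A, G, H]

def step_archimedes_borchardt(x):
    """One step of (2.1): each coordinate uses the already-updated ones."""
    x = list(x)
    for i, Mi in enumerate(M):
        x[i] = Mi(*x)
    return tuple(x)

def step_gauss(x):
    """One step of the equivalent Gauss algorithm (2.3) via N_i of (2.2)."""
    n = []
    for i, Mi in enumerate(M):
        n.append(Mi(*(n + list(x[i:]))))
    return tuple(n)

x = (1.0, 4.0, 9.0)
for _ in range(30):
    assert step_archimedes_borchardt(x) == step_gauss(x)  # Proposition 2.3 (iv)
    x = step_gauss(x)
print(x)   # three nearly equal coordinates: the common limit
```

The two step functions perform the same arithmetic in the same order, so they agree exactly, not merely up to rounding.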

To prove Theorem 2.1 we will make use of the following result.

Theorem M

Let I be an interval and let \(N_1,\ldots , N_p:I^p\rightarrow I\) be continuous means such that the equalities

$$\begin{aligned} \min \left\{ N_1(\mathbf{x}), \ldots , N_p(\mathbf{x})\right\} =\min \left\{ x_1, \ldots , x_p\right\} \end{aligned}$$
(2.5)

and

$$\begin{aligned} \max \left\{ N_1(\mathbf{x}), \ldots , N_p(\mathbf{x})\right\} =\max \left\{ x_1, \ldots , x_p\right\} \end{aligned}$$
(2.6)

together imply \(x_1=\ldots =x_p\) for all \(\mathbf{x}=\left( x_1,\ldots ,x_p\right) \in I^p\). Then there exists a continuous mean \(L: I^p\rightarrow I\) such that

$$\begin{aligned} \lim _{k \rightarrow \infty } \left( N_1,\ldots , N_p\right) ^k=(L, \ldots , L) \end{aligned}$$

uniformly on every compact subset of \(I^p\); moreover, L is the unique continuous \(\left( N_1,\ldots , N_p\right) \)-invariant mean:

$$\begin{aligned} L\circ \left( N_1,\ldots , N_p\right) =L. \end{aligned}$$

The description of the generalized Gauss algorithm and its limit behaviour, presented in Theorem M, originates in the research of Matkowski in his paper [21] (see also [19, 20] and, for some further details, [15, Section 2]). In some earlier publications Matkowski assumed that at most one of the means \(N_1,\ldots , N_p\) is not strict (see [19] for the case \(p=2\), also [20], where a gap from [19] was filled). The remark below shows how easily the assumptions imposed on the means \(N_1, \ldots , N_p\) in Theorem M can be satisfied.

Remark 2.4

If at most one of the means \(N_1, \ldots , N_p: I^p\rightarrow I\) is not strict, then for every \(\mathbf{x}=\left( x_1,\ldots , x_p\right) \in I^p\) equalities (2.5) and (2.6) together imply \(x_1=\ldots = x_p\).

Proof

The only mean in one variable is the identity function, so the assertion clearly holds when \(p=1\). So assume that \(p\ge 2\). We may also assume that the means \(N_1, \ldots , N_{p-1}\) are strict. Take any \(\mathbf{x}\in I^p\) such that equalities (2.5) and (2.6) hold and suppose that \(x_1=\ldots = x_p\) is false. Then \(\min \left\{ x_1,\ldots , x_p\right\} <\max \left\{ x_1,\ldots , x_p\right\} \), hence

$$\begin{aligned} \min \left\{ x_1,\ldots , x_p\right\}<N_i(\mathbf{x})<\max \left\{ x_1,\ldots , x_p\right\} , \qquad i=1,\ldots ,p-1. \end{aligned}$$

Thus (2.5) and (2.6) imply

$$\begin{aligned} N_p(\mathbf{x})=\min \left\{ x_1,\ldots , x_p\right\} \quad \text{ and }\quad N_p(\mathbf{x})=\max \left\{ x_1,\ldots , x_p\right\} , \end{aligned}$$

which is impossible. Consequently, \(x_1=\ldots =x_p\), contrary to the supposition. \(\square \)

Theorem M has been extended in a number of directions; among other things, its versions for parametrized means and random means were proved in [14] and [16], respectively.

Now we are in a position to prove Theorem 2.1.

Proof of Theorem 2.1

Take any point \(\mathbf{x}_0=\left( x_{0,1}, \ldots , x_{0,p}\right) \in I^p\) and let \(\left( \mathbf{x}_k\right) _{k \in {{\mathbb {N}}}_0}\) with \(\mathbf{x}_k=\left( x_{k,1}, \ldots , x_{k,p}\right) \) be the sequence defined by recurrences (2.1), where \(M_1, \ldots , M_p:I^p\rightarrow I\) are continuous strict means. Define functions \(N_1, \ldots , N_p:I^p\rightarrow I\) by equalities (2.2). According to Proposition 2.3 they are continuous strict means and the sequence \(\left( \mathbf{x}_k\right) _{k \in {{\mathbb {N}}}_0}\) satisfies equalities (2.3). Thus, by Remark 2.4 and Theorem M, there exists a mean L on I such that

$$\begin{aligned} \mathbf{x}_k=\left( N_1, \ldots , N_p\right) ^k\left( \mathbf{x}_0\right) \longrightarrow (L, \ldots , L)\left( \mathbf{x}_0\right) \end{aligned}$$

uniformly on every compact subset of \(I^p\) with respect to \(\mathbf{x}_0\). Consequently, \(L\left( \mathbf{x}_0\right) \) is the limit of each of the sequences \(\left( x_{k,i}\right) _{k \in {{\mathbb {N}}}_0}\), \(i=1, \ldots , p\). \(\square \)

Theorem M, used as the main tool in the above proof of Theorem 2.1, demands weaker assumptions regarding the means \(N_1, \ldots , N_p\) than the strictness of the means \(M_1, \ldots , M_p\) assumed in Theorem 2.1. So the following question arises naturally.

Problem 2.5

Does the assertion of Theorem 2.1 remain true if we assume that:

(i) at most one of the means \(M_1,\ldots , M_p\) is not strict?

(ii) for every \(\mathbf{x}=\left( x_1,\ldots ,x_p\right) \in I^p\) the equalities

$$\begin{aligned} \min \left\{ M_1(\mathbf{x}),\ldots , M_p(\mathbf{x})\right\} =\min \left\{ x_1,\ldots ,x_p\right\} \end{aligned}$$

and

$$\begin{aligned} \max \left\{ M_1(\mathbf{x}),\ldots , M_p(\mathbf{x})\right\} =\max \left\{ x_1,\ldots ,x_p\right\} \end{aligned}$$

together imply \(x_1=\ldots =x_p\)?

3 Invariants

When passing from geometry to analysis we often make use of calculations based on algorithms. Then some invariants associated with them turn out to be important and useful. Notice the following result, which is another consequence of Proposition 2.3, Remark 2.4 and Theorem M.

Theorem 3.1

Let \(x_{0,1}, \ldots , x_{0,p}\) be points of an interval I and let \(M_1, \ldots , M_p\) be continuous strict means on I. Let \(N_1, \ldots , N_p:I^p\rightarrow I\) be given by equalities (2.2). Then the common limit L of the sequences \(\left( x_{k,1}\right) _{k \in {\mathbb {N}}},\ldots , \left( x_{k,p}\right) _{k \in {\mathbb {N}}}\), defined by (2.1), is the unique continuous \(\left( N_1, \ldots , N_p\right) \)-invariant mean:

$$\begin{aligned} L\circ \left( N_1, \ldots , N_p\right) =L. \end{aligned}$$
(3.1)

It is usually difficult to find the form of the invariant mean L (cf., for instance, [4, 8, 22]). Recall that even in the seemingly simple case of the classical Gauss algorithm (1.4) the unique continuous \((A,G)\)-invariant mean is defined by means of an elliptic integral. In the case of the generalized Archimedes–Borchardt algorithm the situation seems to be even more complicated, as the mean L existing on account of Theorem 2.1 is invariant with respect to the auxiliary means \(\left( N_1, \ldots , N_p\right) \), not to the original means \(\left( M_1, \ldots , M_p\right) \). In this way we arrive at the next problem, which ends this note.
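Even when no closed form for L is available, the invariance (3.1) is easy to observe numerically. In the Python sketch below (our illustrative choice of \(M_1, M_2, M_3\) as the arithmetic, geometric and harmonic means), L is approximated by iterating the mapping \(\left( N_1,\ldots ,N_p\right) \) of (2.2) to convergence, and (3.1) is verified up to rounding:

```python
import math

def A(*v): return sum(v) / len(v)                    # arithmetic mean
def G(*v): return math.prod(v) ** (1.0 / len(v))     # geometric mean
def H(*v): return len(v) / sum(1.0 / t for t in v)   # harmonic mean

M = [A, G, H]   # illustrative choice of continuous strict means M_1, M_2, M_3

def N(x):
    """The mapping (N_1, ..., N_p) of (2.2) built from M_1, ..., M_p."""
    n = []
    for i, Mi in enumerate(M):
        n.append(Mi(*(n + list(x[i:]))))
    return tuple(n)

def L(x, steps=60):
    """Common limit of (2.1), i.e. the (N_1,...,N_p)-invariant mean of Theorem 3.1,
    approximated by iterating N until the coordinates coincide."""
    for _ in range(steps):
        x = N(x)
    return x[0]

x = (2.0, 5.0, 11.0)
print(L(x), L(N(x)))   # equal up to rounding: L ∘ (N_1,...,N_p) = L
```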

Problem 3.2

Find a class of processes (2.1) for which the common limit L of the sequences \(\left( x_{k,1}\right) _{k \in {\mathbb {N}}},\ldots , \left( x_{k,p}\right) _{k \in {\mathbb {N}}}\) can be determined using the invariance equation (3.1).