1 INTRODUCTION

Back in 1964, Zeldovich drew attention to the fact that light in the Universe propagates in such a way that an observer measuring angular distances between objects by means of standard cosmological tests obtains a curvature of space slightly less than the curvature corresponding to the average density of the Universe [1]. In particular, if the average density of the world is exactly equal to the critical density, the observer should come to the conclusion that the Universe is open. At a critical density, the spatial section is flat; at a lower density, it has a negative curvature. In a space of zero curvature, the distance between close geodesics increases linearly with the lengths of geodesics, while in a space of negative curvature it grows exponentially. This phenomenon arises from the collective effect of small spatial curvature inhomogeneities. The observer should notice this phenomenon, if the observation conditions allow the difference between linear and exponential growth to be detected. The physical effect itself is fully acknowledged by modern cosmology and is mentioned in various contexts in studies on gravitational lensing; however, quantitatively, it is rather small due to the fact that the density fluctuations are minor and the density is close to critical [2].

The effect noted by Zeldovich is geometric in nature and reduces to the problem of geodesic divergence in a Riemannian space whose curvature along the geodesic can be considered a random process (see [3] for more details). The nature of this effect is not related to the four-dimensionality of space-time and the presence of the time coordinate. Since we do not plan to examine direct cosmological consequences of Zeldovich’s concepts (they are exhausted in his paper) in this study, we are talking simply about the geodesic divergence in a curved two-dimensional Riemannian space. It should be noted that the original work was written by Zeldovich before the formation of modern physics of random media, so he solved the problem using a very specific technique. For its solution by modern standard methods, see [3].

The growth rate of the distance between geodesic lines (geodesic deviation) can be defined in various terms. In random media, we can talk about the Lyapunov exponent (selective growth rate) and the growth rates of the normalized statistical moments of geodesic deviation. In his paper, Zeldovich suggested that these growth rates should not coincide; the growth rate of the statistical moments should exceed the Lyapunov exponent, and the higher moments should increase faster than the lower ones.

Over time, it was realized that Zeldovich’s problem is a convenient model example for studying the development of various instabilities in random media, and the relation between the growth rates of various characteristics of geodesic deviation noted by Zeldovich is a manifestation of a general property of instability development in random media, which eventually became known as intermittency [4].

In his study, Zeldovich arrived at correct answers using reasoning that did not require explicit calculation of the growth rates. Under certain assumptions about the properties of curvature as a random process (see below for more details), it is possible to calculate the Lyapunov exponent using the mathematical theory of products of random matrices; however, these methods do not allow calculating the mean-square growth rate and the growth rates of the higher moments [5]. This has been accomplished only for the model in which the curvature fluctuates as white noise [6]. That way of specifying the curvature-defining random process is poorly compatible with the known properties of gravitational forces, and it is desirable to replace it with more physical assumptions. This is the content of the present study.

In his study, Zeldovich expressed a number of fruitful thoughts, which are only partly related to the idea that is of interest to us. They were developed in subsequent years by a number of authors, including Zeldovich himself (see, e.g., [7–10]). Although a detailed review of the modern scientific perception of Zeldovich’s ideas would certainly be of interest, this is not the aim of our study.

2 BASIC EQUATIONS

One of the important technical discoveries in Zeldovich’s work was the realization that the phenomenon under study can be reduced to the behavior of geodesics in a two-dimensional Riemannian space spanned by the projection of two close light rays onto a spatial section of the cosmological model, and the results should be rescaled for a non-stationary multidimensional cosmological model.

Let us consider two geodesics in a two-dimensional Riemannian space, intersecting at one point; the angle \(\theta \) between the geodesics at the intersection point is considered a small value. Let us mark the distance \(x\) (in the metric of a Riemannian manifold) on both geodesics in the same direction from the point of their intersection. The distance (in the same metric) between the obtained points in the first approximation with respect to \(\theta \) is \(\theta y\), where the \(y\) value is known in Riemannian geometry as the Jacobi field (in physics \(\theta y\) is known as the geodesic deviation). It turns out (see, e.g., [11]) that the Jacobi field satisfies the equation

$$y{\kern 1pt} '{\kern 1pt} '\; + K(x)y = 0,$$
(1)

which is called the Jacobi equation. Here, the derivatives are taken with respect to the variable \(x\), and \(K\) is the Gaussian curvature of the manifold, which determines the only independent component of the Riemann tensor and is considered a random process. Assuming that the points move along the geodesics at a constant speed, the distance \(x\) can also be understood as the time over which the points travel a given distance; thus, we are dealing with the evolution of the \(y\) value in a random medium \(K\). We assume that the random process \(K\) is arranged as follows. On each segment of the form \(n\Delta \leqslant x < (n + 1)\Delta \) (these are called update intervals), the curvature \(K\) does not depend on \(x\) and is a random variable with zero mean value and finite variance \({{\sigma }^{2}}\); on different segments, these random variables are statistically independent and identically distributed (for example, according to the Gaussian law). This model of a stochastic process is called an update model. Other assumptions about the structure of the random process \(K\) are also possible, and they can be studied by the same methods (see, e.g., [12, 13]). The proposed method does not require a special type of statistical distribution of the curvature; however, to obtain a specific result, it is necessary to fix some specific distribution, as shown below.
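As a concrete illustration (our own numerical sketch, not part of the original derivation), the update model is easy to simulate: on each update interval Eq. (1) has constant coefficients, so the pair \((y,\;y{\kern 1pt} '\Delta )\) is advanced exactly by a \(2 \times 2\) propagator. The function names and the Bernoulli choice \(K = \pm \sigma \) are our illustrative assumptions.

```python
import numpy as np

def propagator(k):
    """Exact one-interval propagator of z = (y, y'*Delta); k = K*Delta^2.
    A complex square root handles both signs of the curvature at once."""
    s = np.sqrt(complex(k))
    sinc = np.sin(s) / s if s != 0 else 1.0  # sin(s)/s -> 1 as s -> 0
    return np.array([[np.cos(s), sinc],
                     [-s * np.sin(s), np.cos(s)]]).real

def jacobi_realization(n_intervals, delta, sigma, rng):
    """One realization of the Jacobi field at x = n*Delta; the curvature is
    K = +/-sigma, drawn independently on each interval (illustrative choice)."""
    z = np.array([0.0, delta])  # initial conditions y(0) = 0, y'(0) = 1
    ys = np.empty(n_intervals)
    for n in range(n_intervals):
        K = sigma if rng.random() < 0.5 else -sigma
        z = propagator(K * delta**2) @ z
        ys[n] = z[0]
    return ys

rng = np.random.default_rng(1)
y = jacobi_realization(50, 0.5, 1.0, rng)
```

For \(\sigma = 0\) the routine reproduces the flat-space answer \(y(x) = x\), and each propagator has unit determinant (conservation of the Wronskian), which serves as a simple correctness check.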

To solve Eq. (1), it is necessary to set the initial conditions. We will assume that \(y(0) = 0\), \(y{\kern 1pt} '(0) = 1\).

Let us rewrite Eq. (1) as a system of first-order equations by introducing a two-dimensional vector \({\mathbf{z}}\), the first coordinate of which is \({{z}_{1}} = y\) and the second coordinate is \({{z}_{2}} = y{\kern 1pt} '\Delta \) (the constant factor \(\Delta \) equal to the length of the update interval is introduced to match the dimensions of the vector \({\mathbf{z}}\) components). Then, in the matrix form

$$\frac{d}{{dx}}\left( {\begin{array}{*{20}{c}} {{{z}_{1}}} \\ {{{z}_{2}}} \end{array}} \right) = \left( {\begin{array}{*{20}{c}} 0&{1{\text{/}}\Delta } \\ { - K\Delta }&0 \end{array}} \right)\left( {\begin{array}{*{20}{c}} {{{z}_{1}}} \\ {{{z}_{2}}} \end{array}} \right).$$

Naturally, vector \({\mathbf{z}}(x)\) is also a (vector) random process.

3 EQUATION FOR THE CORRELATION TENSOR

Let us now construct the correlation tensor for the random process \({\mathbf{z}}\). To do this, we introduce the two-index tensor

$${{z}_{{ij}}} = {{z}_{i}}{{z}_{j}},$$
(2)

whose mean value (calculated over the \(K\) distribution) is the correlation tensor. Differentiating the product (2) and using the first-order system for \({\mathbf{z}}\), it is easy to show that \({{z}_{{ij}}}\) satisfies the equation

$$z_{{ij}}^{'} = {{A}_{{ijkl}}}{{z}_{{kl}}},$$
(3)

where the tensor components \({{A}_{{ijkl}}}\) are

$$\begin{gathered} {{A}_{{1111}}} = {{A}_{{1122}}} = {{A}_{{1212}}} = {{A}_{{1221}}} \\ \, = {{A}_{{2112}}} = {{A}_{{2121}}} = {{A}_{{2211}}} = {{A}_{{2222}}} = 0, \\ \end{gathered} $$
$${{A}_{{1112}}} = {{A}_{{1121}}} = {{A}_{{1222}}} = {{A}_{{2122}}} = 1{\text{/}}\Delta ,$$
$${{A}_{{1211}}} = {{A}_{{2111}}} = {{A}_{{2212}}} = {{A}_{{2221}}} = - K\Delta .$$

At each update interval, Eq. (3) is an equation with constant coefficients, so its solution is the value of the tensor \({{z}_{{ij}}}\) at the point \(n\Delta \) acted upon by the matrix exponential of the tensor \({{A}_{{ijkl}}}\Delta \), which we denote by \(B_{{ijkl}}^{n}\); it is, of course, a fourth-rank tensor as well. The superscript \(n\) indicates the update interval on which the calculation is performed. Naturally, the specific values of the tensor depend on the interval number and are random.

Now, we can average Eq. (3). First, let us do this for the first interval \(0 \leqslant x < \Delta \). It is sufficient to calculate the mean value of the tensor \(B_{{ijkl}}^{0}\), which is calculated component by component.

Since the \(K\) values are replaced by independent values at the ends of the update intervals, we can carry out the same calculation sequentially at the next update intervals; the mean value of the tensor \(B\) does not depend on \(n\) since the \(K\) distributions are the same at different update intervals.

For the convenience of further reasoning, we will renumber the variables. Let us introduce an auxiliary four-dimensional quantity \({\mathbf{w}}\) with components \({{w}_{1}} = {{z}_{{11}}}\), \({{w}_{2}} = {{z}_{{12}}}\), \({{w}_{3}} = {{z}_{{21}}}\), and \({{w}_{4}} = {{z}_{{22}}}\), respectively. The quantity \({\mathbf{w}}\) satisfies the equation

$$\frac{d}{{dx}}{\mathbf{w}} = \hat {A}{\mathbf{w}},$$

where

$$\hat {A} = \left( {\begin{array}{*{20}{c}} 0&{1{\text{/}}\Delta }&{1{\text{/}}\Delta }&0 \\ { - K\Delta }&0&0&{1{\text{/}}\Delta } \\ { - K\Delta }&0&0&{1{\text{/}}\Delta } \\ 0&{ - K\Delta }&{ - K\Delta }&0 \end{array}} \right).$$

The solution to this equation at each update interval is expressed through the matrix \(\hat {B} = \exp(\hat {A}\Delta )\). Therefore, the evolution of the averaged value \(\left\langle {\mathbf{w}} \right\rangle \) is expressed through the matrix \(\left\langle {\hat {B}} \right\rangle \). Moving from one update interval to another, \(\left\langle {\mathbf{w}} \right\rangle \) approaches the eigenvector of the matrix \(\left\langle {\hat {B}} \right\rangle \) corresponding to the highest eigenvalue, which, in turn, determines the growth rate of the \(\left\langle {\mathbf{w}} \right\rangle \) components. Since the mean value of the component \({{w}_{1}} = {{z}_{{11}}}\) is equal to the mean value of \({{y}^{2}}\), the highest eigenvalue of the matrix \(\left\langle {\hat {B}} \right\rangle \) also determines the growth rate of the second moment of the Jacobi field.
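The scheme just described is straightforward to implement. The sketch below (our illustration; the helper names are our own) assembles \(\hat A\) for a given curvature and exponentiates it over one update interval; for the two-point (Bernoulli) distribution \(K = \pm \sigma \), used here as an example, the average \(\left\langle {\hat B} \right\rangle \) is an exact half-sum.

```python
import numpy as np
from scipy.linalg import expm

def A_hat(K, delta):
    """The 4x4 matrix of the system for w = (z11, z12, z21, z22)."""
    a, b = 1.0 / delta, -K * delta
    return np.array([[0.0, a, a, 0.0],
                     [b, 0.0, 0.0, a],
                     [b, 0.0, 0.0, a],
                     [0.0, b, b, 0.0]])

delta, sigma = 0.2, 1.0
# Bernoulli curvature K = +/-sigma with equal probabilities: <B> is a half-sum.
B_avg = 0.5 * (expm(A_hat(sigma, delta) * delta)
               + expm(A_hat(-sigma, delta) * delta))
lam1 = max(np.linalg.eigvals(B_avg).real)  # governs the growth of <y^2>
```

For a general distribution the half-sum is replaced by a Monte Carlo or quadrature average over samples of \(K\). Since \(w = {\mathbf{z}} \otimes {\mathbf{z}}\), the matrix \(\exp(\hat A\Delta )\) acting on \(w\) must reproduce the action of the \(2 \times 2\) propagator on each factor, which gives an exact internal check.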

4 CALCULATION OF THE EIGENVALUE

First, let us find the explicit form of the matrix \(\hat {B} = \exp(\hat {A}\Delta )\) using the definition of the matrix exponential. Due to the simple structure of the matrix \(\hat {A}\) itself, its powers can be written out explicitly. The form of the matrix \(\hat {B}\) depends on the sign of \(K\). For convenience, we temporarily introduce the dimensionless curvature \(k = K{{\Delta }^{2}}\). Then at \(k \geqslant 0\), we have

$$\hat {B} = \left( {\begin{array}{*{20}{c}} {{{\cos}^{2}}\sqrt k }&{\frac{{\sin 2\sqrt k }}{{2\sqrt k }}}&{\frac{{\sin 2\sqrt k }}{{2\sqrt k }}}&{\frac{{{{\sin}^{2}}\sqrt k }}{k}} \\ {\frac{{ - \sqrt k \sin 2\sqrt k }}{2}}&{{{\cos}^{2}}\sqrt k }&{ - {{\sin}^{2}}\sqrt k }&{\tfrac{{\sin 2\sqrt k }}{{2\sqrt k }}} \\ {\frac{{ - \sqrt k \sin 2\sqrt k }}{2}}&{ - {{\sin}^{2}}\sqrt k }&{{{\cos}^{2}}\sqrt k }&{\tfrac{{\sin 2\sqrt k }}{{2\sqrt k }}} \\ {k{{\sin}^{2}}\sqrt k }&{\tfrac{{ - \sqrt k \sin 2\sqrt k }}{2}}&{\tfrac{{ - \sqrt k \sin 2\sqrt k }}{2}}&{{{\cos}^{2}}\sqrt k } \end{array}} \right),$$
(4)

and at \(k < 0\) the matrix \(\hat {B}\) has the form

$$\hat {B} = \left( {\begin{array}{*{20}{c}} {{{\cosh }^{2}}\sqrt { - k} }&{\frac{{\sinh 2\sqrt { - k} }}{{2\sqrt { - k} }}}&{\frac{{\sinh 2\sqrt { - k} }}{{2\sqrt { - k} }}}&{\frac{{{{\sinh }^{2}}\sqrt { - k} }}{{ - k}}} \\ {\frac{{\sqrt { - k} \sinh 2\sqrt { - k} }}{2}}&{{{\cosh }^{2}}\sqrt { - k} }&{{{\sinh }^{2}}\sqrt { - k} }&{\tfrac{{\sinh 2\sqrt { - k} }}{{2\sqrt { - k} }}} \\ {\frac{{\sqrt { - k} \sinh 2\sqrt { - k} }}{2}}&{{{\sinh }^{2}}\sqrt { - k} }&{{{\cosh }^{2}}\sqrt { - k} }&{\tfrac{{\sinh 2\sqrt { - k} }}{{2\sqrt { - k} }}} \\ { - k{{\sinh }^{2}}\sqrt { - k} }&{\tfrac{{\sqrt { - k} \sinh 2\sqrt { - k} }}{2}}&{\tfrac{{\sqrt { - k} \sinh 2\sqrt { - k} }}{2}}&{{{\cosh }^{2}}\sqrt { - k} } \end{array}} \right).$$
(5)

Note that it would be sufficient to write out only one matrix, for example, for \(k \geqslant 0\); the form of the second matrix for negative \(k\) would then follow from the relations \(\sqrt k = i\sqrt { - k} \) and \(\sin\sqrt k = \sin(i\sqrt { - k} ) = i\sinh \sqrt { - k} \).
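This continuation remark also suggests a compact implementation (our sketch): evaluating the entries of (4) with a complex square root reproduces both matrices, with the \(k \to 0\) limit handled separately.

```python
import numpy as np

def B_closed_form(k):
    """Matrices (4) and (5) in a single formula: s = sqrt(k) is taken complex,
    so negative k automatically yields the hyperbolic entries."""
    if k == 0:  # limiting values as k -> 0
        return np.array([[1.0, 1, 1, 1],
                         [0, 1, 0, 1],
                         [0, 0, 1, 1],
                         [0, 0, 0, 1]])
    s = np.sqrt(complex(k))
    c2, s2 = np.cos(s) ** 2, np.sin(s) ** 2
    h = np.sin(2 * s) / (2 * s)      # sin(2 sqrt k) / (2 sqrt k)
    g = -s * np.sin(2 * s) / 2       # -sqrt(k) sin(2 sqrt k) / 2
    return np.array([[c2, h, h, s2 / k],
                     [g, c2, -s2, h],
                     [g, -s2, c2, h],
                     [k * s2, g, g, c2]]).real
```

A quick check of the formula: for \(\Delta = 1\) it must coincide with the matrix exponential \(\exp(\hat A)\) for either sign of \(k\).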

Let us further show how in the case of small \(\Delta \), we can obtain approximate estimates for the eigenvalues of the matrix \(\left\langle {\hat {B}} \right\rangle \). The exact answer will depend on the distribution of the \(K\) value. Our goal will be to obtain a series expansion of the eigenvalues in powers of \(\Delta \).

Let us start with a simple case in which the matrix \(\left\langle {\hat B} \right\rangle \) is calculated explicitly. To do this, we assume that \(K\) takes only two values, \( \pm \sigma \), with equal probability. Then, \(\left\langle {\hat B} \right\rangle \) is the half-sum of matrices (4) and (5), and an explicit characteristic equation can be written out. The coefficients consist of combinations of \({{\cos }^{2}}\sqrt {\sigma {{\Delta }^{2}}} \) and \({{\cosh }^{2}}\sqrt {\sigma {{\Delta }^{2}}} \). We will write out the result of their expansion in powers of \(\Delta \). The approximate characteristic equation, accurate to the terms of the order of \({{\Delta }^{4}}\), has the form

$$\begin{gathered} {{\lambda }^{4}} - \left( {4 + \frac{4}{3}{{\sigma }^{2}}{{\Delta }^{4}}} \right){{\lambda }^{3}} + \left( {6 + \frac{1}{3}{{\sigma }^{2}}{{\Delta }^{4}}} \right){{\lambda }^{2}} \\ \, + \left( {\frac{4}{3}{{\sigma }^{2}}{{\Delta }^{4}} - 4} \right)\lambda + 1 - \frac{1}{3}{{\sigma }^{2}}{{\Delta }^{4}} = 0. \\ \end{gathered} $$
(6)

Below, we will explain why it is more correct to view the resulting expansion as a series in powers of \(\Delta \) and not \(\sigma \). For now, we will deal with the solution of Eq. (6). It is easy to see that one of the roots is \(\lambda = 1\). The remaining cubic equation has the form

$$\begin{gathered} {{\lambda }^{3}} - \left( {3 + \frac{4}{3}{{\sigma }^{2}}{{\Delta }^{4}}} \right){{\lambda }^{2}} + (3 - {{\sigma }^{2}}{{\Delta }^{4}})\lambda \\ \, - 1 + \frac{1}{3}{{\sigma }^{2}}{{\Delta }^{4}} = 0. \\ \end{gathered} $$
(7)

Let \(\gamma = {{\sigma }^{2}}{{\Delta }^{4}}{\text{/}}3\), then (7) will take the form

$${{\lambda }^{3}} - (3 + 4\gamma ){{\lambda }^{2}} + (3 - 3\gamma )\lambda - 1 + \gamma = 0.$$

A standard change of variables \(\lambda = u + (3 + 4\gamma ){\text{/}}3\) (we write \(u\) instead of \(y\) to avoid confusion with the Jacobi field) brings the cubic equation to the canonical form; note that all the transformations are now performed with accuracy to \(\gamma \):

$${{u}^{3}} - 11\gamma u - 6\gamma = 0.$$

Another standard substitution, \(u = t + 11\gamma {\text{/}}(3t)\), known as the Vieta substitution, with the same accuracy leads to the equation

$${{t}^{3}} - 6\gamma = 0.$$

The roots of the last equation are expressed easily; considering all the replacements, we can write out the solutions of (7):

$$\begin{gathered} {{\lambda }_{1}} = 1 + 2\left( {\delta + \frac{{11}}{9}{{\delta }^{2}}} \right) + \frac{{16}}{9}{{\delta }^{3}}, \\ {{\lambda }_{{2,3}}} = 1 - \left( {\delta + \frac{{11}}{9}{{\delta }^{2}}} \right) + \frac{{16}}{9}{{\delta }^{3}} \pm i\sqrt 3 \left( {\delta - \frac{{11}}{9}{{\delta }^{2}}} \right), \\ \end{gathered} $$
(8)

where \(\delta = {{(\sigma {{\Delta }^{2}}{\text{/}}2)}^{{2/3}}}\). The highest eigenvalue \({{\lambda }_{1}}\) is what determines the growth of the solutions of the extended system (3).
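As a quick consistency check (ours, not part of the paper), expansion (8) can be compared with a direct numerical solution of the cubic equation (7):

```python
import numpy as np

sigma, delta = 1.0, 0.1
g = sigma**2 * delta**4 / 3  # the gamma of Eq. (7)
lam_num = max(np.roots([1.0, -(3 + 4 * g), 3 - 3 * g, -1 + g]).real)

d = (sigma * delta**2 / 2) ** (2 / 3)  # the delta of Eq. (8)
lam1 = 1 + 2 * (d + 11 / 9 * d**2) + 16 / 9 * d**3
```

For these parameter values the two numbers agree to within a few units of \({{10}^{{ - 5}}}\), well inside the stated accuracy of the expansion.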

Note that it was possible to arrive at Eq. (6) in a slightly different way, which is applicable to a wider class of \(K\) distributions. The idea is to replace the mathematical expectation of a function of a random variable with the mathematical expectation of the first few terms of its Taylor series. The question of the accuracy of such an approximation is rather subtle, since it is necessary to consider not only the local behavior of the function near the expansion point, but also the character of the growth of the moments of the random variable. In our situation, the issue is simplified, since we consider the product of a random variable and a small parameter \(\Delta \). Then, with accuracy to the terms of the order of \({{\Delta }^{4}}\), for an arbitrary function \(f\) that is sufficiently smooth in a neighborhood of zero and a symmetric \(K\) distribution, we can write

$$\left\langle {f(K{{\Delta }^{2}})} \right\rangle \approx f(0) + f{\kern 1pt} '(0)\left\langle K \right\rangle {{\Delta }^{2}} + \frac{1}{2}f{\kern 1pt} '{\kern 1pt} '(0)\left\langle {{{K}^{2}}} \right\rangle {{\Delta }^{4}}.$$
(9)

Using this approximation to calculate the elements of the matrix \(\left\langle {\hat B} \right\rangle \) and considering the fact that the first moment for a symmetric \(K\) distribution is zero, we arrive exactly at expression (6), in which \({{\sigma }^{2}} = \langle {{K}^{2}}\rangle \) is the variance of \(K\). Note that the subsequent refining corrections in (9) contain the moments of the random variable \(K\) of higher orders, which are generally not expressed in terms of its variance \({{\sigma }^{2}}\). For this reason, (6) is generally not an expansion in powers of \(\sigma \).
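A small numerical illustration of (9) (our own; we take \(f = \exp\) and a Gaussian law for \(K\), for which the left-hand side is also known in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, delta = 1.0, 0.3
K = rng.normal(0.0, sigma, size=200_000)  # symmetric K distribution

lhs = np.exp(K * delta**2).mean()         # <f(K Delta^2)> for f = exp
rhs = 1.0 + 0.5 * sigma**2 * delta**4     # Eq. (9); the <K> term vanishes
exact = np.exp(sigma**2 * delta**4 / 2)   # closed form for the Gaussian law
```

The Monte Carlo value, the truncated expansion, and the closed-form answer agree to within the statistical error for these parameters.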

Thus, only in the first approximation for a symmetric \(K\) distribution, the highest eigenvalue of the matrix \(\left\langle {\hat {B}} \right\rangle \) is determined by only two parameters, the variance of the \(K\) distribution and the length of the update interval \(\Delta \).

It remains to explicitly recalculate the highest eigenvalue in terms of the growth rate of the second statistical moment of the Jacobi field (1).

5 RELATIONSHIP OF THE EIGENVALUE AND GROWTH RATE

Recall (see [4]) that the growth rate of the \(p\)th statistical moment is the value

$${{\gamma }_{p}} = \mathop {\lim}\limits_{x \to \infty } \frac{{\ln\left\langle {{\text{|}}y(x){\kern 1pt} {{{\text{|}}}^{p}}} \right\rangle }}{{px}},$$
(10)

where \(p\) is the number of the statistical moment, and the growth rate \({{\gamma }_{p}}\) is normalized to the moment number \(p\) in the denominator of Eq. (10) to make the results comparable for moments of different orders.
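Definition (10) can be illustrated numerically, and in the process one can observe the intermittency gap between the sample (Lyapunov) exponent and the moment growth rate. The sketch below is our illustration (the Bernoulli curvature and all parameter values are arbitrary choices); it estimates the sample exponent from simulated realizations and compares it with \({{\gamma }_{2}} = \ln {{\lambda }_{1}}{\text{/}}(2\Delta )\), the relation derived just below.

```python
import numpy as np
from scipy.linalg import expm

sigma, delta, n_steps, n_real = 1.0, 0.3, 200, 10_000
rng = np.random.default_rng(2)

def propagator(k):
    """2x2 one-interval propagator of z = (y, y'*Delta); k = K*Delta^2."""
    s = np.sqrt(complex(k))
    return np.array([[np.cos(s), np.sin(s) / s],
                     [-s * np.sin(s), np.cos(s)]]).real

Mp, Mm = propagator(sigma * delta**2), propagator(-sigma * delta**2)

# sample (Lyapunov) exponent: average of ln|y| over many realizations
z = np.zeros((n_real, 2)); z[:, 1] = delta  # y(0) = 0, y'(0) = 1
for _ in range(n_steps):
    signs = rng.random(n_real) < 0.5
    z = np.where(signs[:, None], z @ Mp.T, z @ Mm.T)
lyap = np.log(np.abs(z[:, 0])).mean() / (n_steps * delta)

# second-moment growth rate from the highest eigenvalue of <B>
def A_hat(K):
    a, b = 1.0 / delta, -K * delta
    return np.array([[0.0, a, a, 0.0], [b, 0.0, 0.0, a],
                     [b, 0.0, 0.0, a], [0.0, b, b, 0.0]])

B_avg = 0.5 * (expm(A_hat(sigma) * delta) + expm(A_hat(-sigma) * delta))
gamma2 = np.log(max(np.linalg.eigvals(B_avg).real)) / (2 * delta)
```

In agreement with Zeldovich's prediction, the second-moment rate exceeds the sample exponent: the mean square is dominated by rare, rapidly growing realizations.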

The highest eigenvalue \({{\lambda }_{1}}\) found in (8) characterizes, by construction, the growth of the mean square of the Jacobi field \(y\). Specifically, this means that the value \(\left\langle {{{y}^{2}}} \right\rangle \) grows by a factor of \(\lambda _{1}^{n}\) over \(n\) update intervals. On the other hand, according to the definition of the growth rate (10), the same change will be written as \(\exp (2n\Delta {{\gamma }_{2}})\). Hence the relation between the eigenvalue and the growth rate of the second moment:

$${{\gamma }_{2}} = \frac{1}{{2\Delta }}\ln{{\lambda }_{1}}.$$

Substituting the expression for \({{\lambda }_{1}}\) from (8), with accuracy to terms of the order of \({{\Delta }^{3}}\), we obtain

$${{\gamma }_{2}} = \mathop {\left( {\frac{\sigma }{2}} \right)}\nolimits^{2/3} {{\Delta }^{{1/3}}} + \frac{2}{9}\mathop {\left( {\frac{\sigma }{2}} \right)}\nolimits^{4/3} {{\Delta }^{{5/3}}} + \frac{2}{9}\mathop {\left( {\frac{\sigma }{2}} \right)}\nolimits^2 {{\Delta }^{3}}.$$
(11)

Note that, in passing to expansion (11), one must take into account the limited region of convergence of the series for the logarithm; hence the restrictions on the possible values of the product \(\sigma {{\Delta }^{2}}\). Numerical estimates show that the condition \(\sigma {{\Delta }^{2}} < 0.37\) should be fulfilled for the logarithm expansion to be valid.

For more complex distributions, the highest eigenvalue of our matrix and the mean-square growth rate must, of course, be calculated numerically.

To demonstrate the agreement of the estimates with the numerical experiment, we consider three families of \(K\) distributions: Bernoulli with values \( \pm \sigma \), uniform on the interval \([ - \sigma \sqrt 3 \), \(\sigma \sqrt 3 ]\), and Gaussian with zero mean and variance \({{\sigma }^{2}}\). The distribution parameters are chosen so that the variance of \(K\) is exactly \({{\sigma }^{2}}\). Furthermore, we consider \(\sigma = 0.1\), 1, and 10, and let the \(\Delta \) values span the interval from \({{10}^{{ - 4}}}\) to 10. For each \(\sigma \) and \(\Delta \), we simulate \({{10}^{4}}\) realizations of the random variable \(K\), numerically estimate the matrix \(\left\langle {\hat B} \right\rangle \), and find its highest eigenvalue. The results are plotted as the highest eigenvalue versus \(\Delta \) and compared with the approximation obtained in (8). Figure 1 shows that the results for small \(\Delta \) are nearly identical and closely agree with the approximation; for large \(\Delta \) values, a discrepancy appears, since the higher moments of the \(K\) distribution begin to dominate.
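This experiment can be sketched as follows (our code; the reduced sample size and the symmetrization of the sample by the pairs \(K, - K\), which removes the odd sample moments, are our implementation choices):

```python
import numpy as np
from scipy.linalg import expm

def A_hat(K, delta):
    a, b = 1.0 / delta, -K * delta
    return np.array([[0.0, a, a, 0.0], [b, 0.0, 0.0, a],
                     [b, 0.0, 0.0, a], [0.0, b, b, 0.0]])

def top_eigenvalue(K_samples, delta):
    K_sym = np.concatenate([K_samples, -K_samples])  # enforce a symmetric sample
    B = np.mean([expm(A_hat(K, delta) * delta) for K in K_sym], axis=0)
    return max(np.linalg.eigvals(B).real)

rng = np.random.default_rng(3)
sigma, delta, n = 1.0, 0.2, 2_000
families = {
    "Bernoulli": rng.choice([-sigma, sigma], size=n),
    "uniform": rng.uniform(-sigma * np.sqrt(3), sigma * np.sqrt(3), size=n),
    "Gaussian": rng.normal(0.0, sigma, size=n),
}
lams = {name: top_eigenvalue(K, delta) for name, K in families.items()}

d = (sigma * delta**2 / 2) ** (2 / 3)
lam_approx = 1 + 2 * (d + 11 / 9 * d**2) + 16 / 9 * d**3  # expansion (8)
```

For this small \(\Delta \), the three families give nearly identical eigenvalues, all close to the approximation, as in Fig. 1.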

Fig. 1.

Dependence of the highest eigenvalue on the length of the update interval for some types of distributions in comparison with the approximation. The axes are shown in a logarithmic scale.

Finally, Fig. 2 shows the dependence of the growth rate on the length of the update interval \(\Delta \) for different variances of the \(K\) distribution. Note that at small \(\Delta \), the growth rate is determined by the first term \({{(\sigma {\text{/}}2)}^{{2/3}}}{{\Delta }^{{1/3}}}\). The \(\Delta \) value at which the expansion of the logarithm loses accuracy approximately coincides with the \(\Delta \) value at which the approximation accuracy for the eigenvalues is lost (see Fig. 1). Apparently, a good approximation for small values of the product \(\sigma {{\Delta }^{2}}\) is \({{\gamma }_{2}} \approx {{(\sigma {\text{/}}2)}^{{2/3}}}{{\Delta }^{{1/3}}}\).

Fig. 2.

Growth rate dependences of the second moment of the Jacobi field on the length of the update interval and the variance of the curvature parameter (σ = 10, 1, 0.1, and 0.01, from top to bottom, respectively). The axes are shown in a logarithmic scale. The dashed lines are located in the region beyond the limits of applicability of the logarithm expansion (\(\sigma {{\Delta }^{2}} > 0.37\)).

Let us consider one particular case, studied earlier in [5]: \(\Delta = 1\) and \(K\) uniformly distributed on the interval \([ - 1,\;1]\). For it, we obtain the approximate value of the mathematical expectation of the matrix \(\hat {B}\),

$$\left\langle {\hat {B}} \right\rangle \approx \left( {\begin{array}{*{20}{c}} {1.11}&{1.04}&{1.04}&{1.01} \\ {0.22}&{1.11}&{0.11}&{1.04} \\ {0.22}&{0.11}&{1.11}&{1.04} \\ {0.34}&{0.22}&{0.22}&{1.11} \end{array}} \right)$$

and the highest eigenvalue \({{\lambda }_{1}} \approx 2.53\) (formula (8) also gives a close value, \({{\lambda }_{1}} \approx 2.49\)). In terms of the growth rate, we obtain \({{\gamma }_{2}} \approx 0.46\). Note that the value \(\lambda \approx 0.2132\) found in [5] for the Lyapunov exponent turns out, as it should, to be smaller than the growth rate \({{\gamma }_{2}}\) of the second moment. It should also be noted that the direct application of formula (11) gives an overestimated value, \({{\gamma }_{2}} \approx 0.50\), because this case lies outside the domain of applicability of the logarithm expansion.
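This particular case is easy to reproduce (our sketch; the quadrature grid replacing Monte Carlo sampling is an implementation choice):

```python
import numpy as np
from scipy.linalg import expm

def A_hat(K):
    # Delta = 1: the matrix of the w-system for a given curvature K
    return np.array([[0.0, 1, 1, 0], [-K, 0, 0, 1],
                     [-K, 0, 0, 1], [0, -K, -K, 0]])

ks = np.linspace(-1.0, 1.0, 2001)  # K uniform on [-1, 1]
B_avg = np.mean([expm(A_hat(k)) for k in ks], axis=0)
lam1 = max(np.linalg.eigvals(B_avg).real)  # the text gives ~2.53
gamma2 = np.log(lam1) / 2.0                # the text gives ~0.46
```

The quadrature average reproduces the matrix written out above to the displayed accuracy, together with the quoted eigenvalue and growth rate.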

6 RESULTS AND DISCUSSION

We have demonstrated how to calculate the mean-square growth rate of the geodesic deviation in the Zeldovich problem. The difficulty of the calculation consists in the fact that we are interested in a quadratic quantity (the square of the geodesic deviation), for which it was previously not possible to obtain a linear equation. We solve this issue by introducing a bilinear quantity (second-order tensor), one component of which is the square of the geodesic deviation. This technique is widely used in the theory of turbulence. For example, in the study of the magnetic energy evolution in a flow of a conductive fluid, it is unclear how to obtain an equation for this quadratic quantity, but it is possible to construct an equation for the correlation tensor of the magnetic field [14].

The technique we use is in no way limited to the specific form of the geodesic deviation equation, but is applicable to the transfer of an arbitrary physical quantity described by a system of ordinary linear differential equations with random coefficients. Many problems that study the development of instabilities in random media by means of a transition to the Lagrangian reference frame are reduced to such equations (see, e.g., [4]).

With the help of this technique, it is also possible to study the growth of statistical moments of higher order. For example, to study the growth rate of the fourth moment of the geodesic deviation, it is necessary to introduce the fourth-rank tensor \({{Z}_{{ijkl}}} = {{z}_{i}}{{z}_{j}}{{z}_{k}}{{z}_{l}}\). Naturally, this will require consideration of matrices of a higher order and will make the calculations more cumbersome, but will not fundamentally change their nature.
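For the update model, this program can be sketched without writing out the fourth-rank analog of \(\hat A\) explicitly (our illustration; the Bernoulli curvature and the parameter values are arbitrary choices): over one interval, the propagator of \({\mathbf{z}} \otimes {\mathbf{z}} \otimes {\mathbf{z}} \otimes {\mathbf{z}}\) is the fourth Kronecker power of the \(2 \times 2\) propagator of \({\mathbf{z}}\), just as \(\hat B\) is its Kronecker square.

```python
import numpy as np

def propagator(k):
    """2x2 one-interval propagator of z = (y, y'*Delta); k = K*Delta^2."""
    s = np.sqrt(complex(k))
    return np.array([[np.cos(s), np.sin(s) / s],
                     [-s * np.sin(s), np.cos(s)]]).real

def kron_power(M, p):
    out = M
    for _ in range(p - 1):
        out = np.kron(out, M)
    return out

def moment_rate(p, sigma, delta):
    """gamma_p from the top eigenvalue of <M^(kron p)>; K = +/-sigma (Bernoulli)."""
    Bp = 0.5 * (kron_power(propagator(sigma * delta**2), p)
                + kron_power(propagator(-sigma * delta**2), p))
    return np.log(max(np.linalg.eigvals(Bp).real)) / (p * delta)

sigma, delta = 1.0, 0.5
g2, g4 = moment_rate(2, sigma, delta), moment_rate(4, sigma, delta)
```

The inequality obtained in this way, with the normalized fourth-moment rate exceeding the second-moment one, is precisely the intermittency of the Jacobi field discussed in the Introduction.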

With regard to the prospects for the application of the results and the entire body of knowledge about the Zeldovich effect, we note once again that at the stages of the evolution of the Universe close to us in time, this effect is quantitatively very small. In principle, it is possible that the combined action of multiple celestial bodies located near one line of sight, which individually do not lead to gravitational lensing, can collectively cause the occurrence of a gravitational lens. To estimate the probability of the occurrence of such a lens, one must be able to calculate various moments of the Jacobi field. However, this line of research does not appear particularly promising. The Zeldovich effect seems much more important for the study of the earliest Universe, since it clearly says that the influence of inhomogeneities blurs the concept of critical density for the cosmological model and makes one think how to correctly formulate the Einstein equations for fluctuating space-time. It seems that the problem formulated by Zeldovich successfully isolates a fragment of this large problem; this fragment can be researched without consistent mathematical elaboration of the concept of a pseudo-Riemannian manifold with a random metric and, at the same time, indicates a possible direction of the fundamental development of general relativity. The authors hope to make a feasible contribution to this development in the future.