Abstract
For an arbitrary transient random walk \((S_n)_{n\ge 0}\) in \({\mathbb {Z}}^d\), \(d\ge 1\), we prove a strong law of large numbers for the spatial sum \(\sum _{x\in {\mathbb {Z}}^d}f(l(n,x))\) of a function f of the local times \(l(n,x)=\sum _{i=0}^n{\mathbb {I}}\{S_i=x\}\). Particular cases are the number of
(a) visited sites [first considered by Dvoretzky and Erdős (Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pp 353–367, 1951)], which corresponds to the function \(f(i)={\mathbb {I}}\{i\ge 1\}\);
(b) \(\alpha \)-fold self-intersections of the random walk [studied by Becker and König (J Theor Probab 22:365–374, 2009)], which corresponds to \(f(i)=i^\alpha \);
(c) sites visited by the random walk exactly j times [considered by Erdős and Taylor (Acta Math Acad Sci Hung 11:137–162, 1960) and Pitt (Proc Am Math Soc 43:195–199, 1974)], where \(f(i)={\mathbb {I}}\{i=j\}\).
1 Introduction and Main Results
Let \(X_1\), \(X_2\), ... be a sequence of independent identically distributed random vectors taking values in \({\mathbb {Z}}^d\), \(d\ge 1\). Consider a random walk generated by the \(X_n\)'s, \(S_0:=0\), \(S_n:=X_1+\cdots +X_n\), and the number of visits to a site \(x\in {\mathbb {Z}}^d\) up to time n, which is called the local time of x,
Define random variables
In particular, \(L_n(0)=|\{S_0,\ldots ,S_n\}|\) represents the number of distinct sites visited by the random walk up to time n, called the range of \((S_n)_{n\ge 0}\). The case \(\alpha =1\) is trivial because \(L_n(1)=n+1\). The value of \(L_n(2)\) is the number of so-called self-intersections of the random walk. More generally, for an integer \(\alpha \), the value of \(L_n(\alpha )\) is the number of \(\alpha \)-fold self-intersections up to time n.
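These quantities are easy to compute along a simulated trajectory. The following sketch (an illustration only and not part of the formal development; the choice of the simple random walk in \({\mathbb {Z}}^3\), the time horizon and the function names are ours) computes the local times and the sums \(L_n(\alpha )\).

```python
# Illustrative sketch (not part of the formal development): local times,
# range and alpha-fold self-intersections for a simple random walk in Z^3.
import random
from collections import Counter

def local_times(n, d=3):
    """Return the dictionary {x: l(n, x)} of local times of S_0, ..., S_n."""
    pos = (0,) * d
    counts = Counter({pos: 1})            # S_0 = 0 is counted, so l(n, 0) >= 1
    for _ in range(n):
        axis = random.randrange(d)
        pos = pos[:axis] + (pos[axis] + random.choice((-1, 1)),) + pos[axis + 1:]
        counts[pos] += 1
    return counts

def L(counts, alpha):
    """Spatial sum L_n(alpha) = sum over visited sites x of l(n, x)^alpha."""
    return sum(c ** alpha for c in counts.values())

if __name__ == "__main__":
    n = 100_000
    lt = local_times(n)
    print("range               L_n(0):", L(lt, 0))   # distinct visited sites
    print("trivial check       L_n(1):", L(lt, 1), "= n + 1 =", n + 1)
    print("self-intersections  L_n(2):", L(lt, 2))
```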
It is known that for a recurrent random walk the quotient \(L_n(0)/n\) tends to 0 as \(n\rightarrow \infty \) (see, e.g. Spitzer [12, Ch. 1, Sect. 4, Theorem 1]), so a more slowly growing normalising sequence is needed to obtain a non-trivial limit in the law of large numbers. As shown by Dvoretzky and Erdős [8, Theorem 3] for a simple random walk and by Černý [3] for a general one with zero drift and finite covariance matrix, this sequence is \(n/\log n\) in two dimensions.
In the present article, we show that the law of large numbers for \(L_n(\alpha )\) with a non-zero limit and normalising sequence n holds in any dimension d for any transient random walk, that is, whenever the probability of no return to the origin,
is strictly positive, \(\gamma >0\). We assume in addition that \(\gamma <1\), which excludes the trivial case where, with probability 1, \(l(n,x)\) equals either 0 or 1 for all x, and hence \(L_n(\alpha )=n+1\).
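For orientation, \(\gamma \) is also easy to estimate by simulation. The sketch below (illustrative only; with a finite horizon it in fact estimates the probability of avoiding the origin during the first n steps, which decreases towards \(\gamma \) as n grows) treats the simple random walk in \({\mathbb {Z}}^3\), for which \(\gamma \approx 0.66\).

```python
# Monte Carlo sketch: probability of avoiding the origin during the first n
# steps of a simple random walk in Z^3; it decreases to gamma > 0 (transience).
import random

def avoids_origin(n, d=3):
    pos = [0] * d
    for _ in range(n):
        pos[random.randrange(d)] += random.choice((-1, 1))
        if not any(pos):                  # returned to the origin
            return False
    return True

def estimate_gamma(n=5_000, trials=1_000, d=3):
    return sum(avoids_origin(n, d) for _ in range(trials)) / trials

if __name__ == "__main__":
    print("estimate of gamma:", estimate_gamma())   # roughly 0.66 in Z^3
```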
The result for \(L_n(\alpha )\) we are interested in is a consequence of the following more general statement. Consider a function \(f:{\mathbb {Z}}^+\rightarrow {\mathbb {R}}\) and the spatial sum
In particular, for a power function \(f(i)=i^\alpha \), we get \(L_n(\alpha )=G_n(f)\).
Theorem 1
Let the random walk \((S_n)_{n\ge 0}\) be transient and \(f:{\mathbb {Z}}^+\rightarrow {\mathbb {R}}\) be a function satisfying
Then,
in mean square and with probability 1.
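As a quick numerical sanity check of Theorem 1 (an illustration only; the function \(f(i)=\sqrt{i}\), the dimension \(d=3\) and the checkpoints are our arbitrary choices), one may watch the ratio \(G_n(f)/n\) stabilise along a single trajectory of the simple random walk:

```python
# Numerical illustration of Theorem 1: G_n(f)/n stabilises along one trajectory
# of a transient walk.  Here f(i) = sqrt(i), i.e. L_n(1/2) of Corollary 2.
import math
import random
from collections import Counter

def ratios(n_max, checkpoints, d=3, f=math.sqrt):
    pos = (0,) * d
    counts = Counter({pos: 1})
    out = {}
    for n in range(1, n_max + 1):
        axis = random.randrange(d)
        pos = pos[:axis] + (pos[axis] + random.choice((-1, 1)),) + pos[axis + 1:]
        counts[pos] += 1
        if n in checkpoints:
            out[n] = sum(f(c) for c in counts.values()) / n   # G_n(f)/n
    return out

if __name__ == "__main__":
    for n, r in ratios(400_000, {50_000, 100_000, 200_000, 400_000}).items():
        print(f"n = {n:7d}   G_n(f)/n = {r:.4f}")
```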
The proof of Theorem 1 is given in Sect. 4. In Sects. 2 and 3, we discuss the asymptotic behaviour of the expectation and of the variance of \(G_n(f)\) as \(n\rightarrow \infty \), respectively, which is needed later in the proofs.
The following corollaries are immediate.
Corollary 2
For any \(\alpha \ge 0\), it holds that
in mean square and with probability 1.
The case \(\alpha =0\) was considered by Spitzer in [12, Theorem 1.4.1], where convergence in probability is proven. Earlier, a strong law of large numbers for \(\alpha =0\) was proven for a simple random walk by Dvoretzky and Erdős in [8]. In Becker and König [1], the strong convergence (5) is proven for all \(\alpha \ge 0\) (up to a gap in the proof of Proposition 2.1; see a comment on it in the proof of Lemma 5, following equation (13)) without any further conditions in the case \(d\ge 3\); in the cases \(d\in \{1,2\}\), however, it is assumed there that either the steps \(X_i\) are square integrable or, for some \(\eta >0\) and \(C<\infty \),
Corollary 3
Let \(J\subset {\mathbb {N}}\). Then, with probability 1,
If J is a singleton \(\{j\}\), then we get the strong law of large numbers for the number of sites visited exactly j times up to time n. For these statistics, the last corollary generalises Theorem 12 in Erdős and Taylor [4] from a simple random walk in \(d\ge 3\) dimensions to an arbitrary transient random walk; a general result for transient random walks on a countable Abelian group was proven by induction on j by Pitt in [13]. Notice that, for an arbitrary J, say the set of all odd numbers, Corollary 3 can be reduced to the singleton case once we know the strong law of large numbers for the range of \(S_n\): the singleton case provides almost sure lower bounds for the frequencies of visit counts in J and in its complement, and the convergence of their sum, the range, turns these into two-sided convergence via a Scheffé-type argument.
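The statistics of Corollary 3 can be simulated in the same way. The sketch below (again purely illustrative; the horizon and dimension are our choices) estimates the proportion of sites visited exactly j times and of sites visited an odd number of times for the simple random walk in \({\mathbb {Z}}^3\).

```python
# Illustration of Corollary 3: proportion of sites visited exactly j times,
# and of sites visited an odd number of times, for a simple random walk in Z^3.
import random
from collections import Counter

def visit_counts(n, d=3):
    pos = (0,) * d
    counts = Counter({pos: 1})
    for _ in range(n):
        axis = random.randrange(d)
        pos = pos[:axis] + (pos[axis] + random.choice((-1, 1)),) + pos[axis + 1:]
        counts[pos] += 1
    return counts

if __name__ == "__main__":
    n = 200_000
    hist = Counter(visit_counts(n).values())          # hist[j] = Q_n(j)
    for j in (1, 2, 3):
        print(f"Q_n({j})/n = {hist[j] / n:.4f}")
    odd = sum(q for j, q in hist.items() if j % 2 == 1)
    print(f"sites visited an odd number of times, divided by n: {odd / n:.4f}")
```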
The growth condition (3) is satisfied for all subexponential functions f(i) of order \(e^{o(i)}\) as \(i\rightarrow \infty \), and also for exponentially growing functions of order \(O(e^{ci})\) with exponent coefficient \(c<\lambda _*/2\), where \(\lambda _*:=\log \frac{1}{1-\gamma }\).
It is very likely that the condition (3) may be relaxed to the condition (9) below because under the latter condition we have
and since the number of visited sites up to time n is not greater than \(n+1\), this clearly indicates that the family \(\{G_n(f)/n,n\ge 1\}\) is stochastically bounded. But if we only assume (9), then a much more delicate analysis is required than the estimation of the variance carried out in Lemma 6, just as, for the strong law of large numbers for the random walk itself, the existence of the second moment of the jumps essentially simplifies the argument. In the result below, we show how this can be done under some additional technical assumptions.
Theorem 4
Let, for some \(C<\infty \) and \(\varepsilon >0\),
(i) either the condition
hold for some \(\eta \in (0,1)\) and \(|f(i)|\le Ce^{i\lambda _*}/i^{2+\varepsilon }\) for all i,
(ii) or the condition
hold and \(|f(i)|\le Ce^{i\lambda _*}/(i\log ^{2+\varepsilon }i)\) for all \(i>1\).
Then, the convergence (4) holds with probability 1.
For the proof, see Sect. 5. It is based on a truncation technique and on a strong limit theorem for the maximal local time, \(l(n):=\max \{l(n,x),\ x\in {\mathbb {Z}}^d\}\); see Proposition 8 there.
Notice that the condition (7) is equivalent to (6). Indeed, on the one hand, it follows from (6) that
On the other hand, it follows from (7) that, for all m,
hence
Also notice that, in \(d\ge 3\) dimensions, if a random walk is not concentrated in some three-dimensional subspace, then the condition (7) is valid because \({\mathbb {P}}\{S_n=0\}=O(1/n^{d/2})\), due to an upper bound for the concentration function of a sum of random vectors, see, e.g. Corollary of Theorem 6.2 in Esseen [6]. For the same reason, in \(d\ge 4\) dimensions, the condition (8) is valid for any random walk not concentrated in some three-dimensional subspace.
If the function f grows faster than assumed in Theorem 4, say if the condition (9) fails, then \(G_n(f)\) requires a stronger normalisation than just n in order to have a proper limit as \(n\rightarrow \infty \). The answer may be conjectured as follows: let \(\tau =\inf \{n\ge 1: S_n=S_0\}\) be the first return time to the origin; then
where \({\widetilde{\tau }}_1\), \({\widetilde{\tau }}_2\), ... are independent copies of \(\tau \) conditioned on \(\{\tau <\infty \}\). For example, consider f such that \(f(k)\sim c_1/(1-\gamma )^k\), then
As shown in [7, Theorem 4], in the case where \({\mathbb {E}}X_1=0\), \({\mathbb {E}}\Vert X_1\Vert ^2<\infty \) and \(d\ge 3\), we have an asymptotic relation \({\mathbb {P}}\{{\widetilde{\tau }}=n\}\sim c_3/n^{d/2}\) as \(n\rightarrow \infty \).
Hence, in the case \(d\ge 5\), \({\mathbb {E}}{\widetilde{\tau }}_1<\infty \) and it follows from the renewal theorem that \({\mathbb {E}}f(l(n,0))\sim c_2n/{\mathbb {E}}{\widetilde{\tau }}_1\), which, together with the asymptotic size of the range (of order n), indicates that the right normalisation for \(G_n(f)\) should be \(n^2\).
In the cases \(d=3\) and \(d=4\), \({\mathbb {E}}{\widetilde{\tau }}_1=\infty \) and it follows from Erickson’s renewal theorem [5, Theorem 5] that then \({\mathbb {E}}f(l(n,0))\sim c_4n^{1/2}\) and \(c_5n/\log n\), respectively, which in turn indicates that the right normalisation for \(G_n(f)\) should be \(n^{3/2}\) and \(n^2/\log n\), respectively.
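The arithmetic behind these conjectured normalisations can be summarised as follows (a heuristic sketch in our own notation, with \(U(n)\) standing for the expected number of returns to the origin within the first n steps and with unspecified constants):

```latex
% Since f(k) ~ c_1/(1-gamma)^k compensates the geometric cost of order
% (1-gamma)^{k-1} of making k-1 returns, one expects E f(l(n,0)) to be of
% the order of U(n).  The local asymptotics P{tau_1 = k} ~ c_3 k^{-d/2} of [7]
% then give, up to constant factors,
U(n)\;\asymp\;
\begin{cases}
n, & d\ge 5, \text{ by the renewal theorem, since } {\mathbb {E}}{\widetilde{\tau }}_1<\infty ,\\
n/\log n, & d=4, \text{ since } \sum _{k\le n}{\mathbb {P}}\{{\widetilde{\tau }}_1>k\}\asymp \log n,\\
n^{1/2}, & d=3, \text{ by Erickson's theorem, the tail index being } 1/2,
\end{cases}
% and multiplying by the order n of the range recovers the normalisations
% n^2, n^2/log n and n^{3/2} quoted above.
```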
2 Asymptotics for Expectation of \(G_n(f)\)
In this section, we discuss the asymptotic behaviour of \({\mathbb {E}}G_n(f)\) as \(n\rightarrow \infty \). We prove the following result.
Lemma 5
Let \(f:{\mathbb {Z}}^+\rightarrow {\mathbb {R}}\) be a function satisfying
Then,
Proof
Following Dvoretzky and Erdős [8], we introduce
the probability that the site visited by the random walk at the nth step has not been visited before; \(\gamma _0=1\). As noticed in [8],
so \(\gamma _n\) equals the probability that the random walk does not return to the origin in n steps:
We observe the following monotone convergence
Consider the following spatial sum
which represents the number of sites visited exactly j times up to time n, hence
As Becker and König [1, Eq. (2.2)] do, we use the following equality, for \(j\ge 1\):
due to the Markov property of the random walk. In [1], the asymptotic behaviour of \({\mathbb {E}}Q_n(j)\) as \(n\rightarrow \infty \) is derived by considering the generating function of \(\{{\mathbb {E}}Q_n(j),n\ge 1\}\) and then referring to the Tauberian theorem [9, Theorem XIII.5]. Notice that this approach requires the sequence \(\{{\mathbb {E}}Q_n(j),n\ge 1\}\) to be ultimately increasing (see Sects. 1.7.3 and 1.7.4 in [2]), which is not guaranteed a priori and probably fails; at least such a discussion is missing in [1]. Notice that this problem can be fixed by first looking at the sum of \(Q_n(j)\) over \(j\ge {\widetilde{j}}\) for some \({\widetilde{j}}\) (this sum, being the number of sites visited at least \({\widetilde{j}}\) times, is monotonic in n) and then looking at the differences. See also Pitt [13] for an alternative proof.
Below we suggest another argument which does not require the Tauberian theorem and is only based on the transience of the random walk. It follows from (13) that
where \(\tau _1\), \(\tau _2\), ... are independent copies of \(\tau \), the first return time to the origin. Thus,
hence
In view of the convergence (11), for any fixed \(s\ge j-1\),
and, moreover,
Therefore, by the dominated convergence theorem, as \(n\rightarrow \infty \),
owing to independence of \(\tau _k\)’s. In addition,
Then, the condition (9) makes it possible to apply dominated convergence again and to conclude that
which completes the proof of (10). Also notice that (15) implies an upper bound
\(\square \)
3 Estimation of Variance of \(G_n(f)\)
The proof of the strong law of large numbers for \(L_n(\alpha )\) for a transient random walk given by Becker and König in [1] is based on the following upper bound for the variance of \(L_n(\alpha )\):
where \(C=C(\alpha )\) is a constant. Notice that the proof of this bound provided in [1] starts with an analysis of a representation for the variance of \(L_n(\alpha )\) which is only available for integer \(\alpha \), whereas the bound itself is needed in the further arguments for the strong law of large numbers for \(L_n(\alpha )\) also in the case of a non-integer \(\alpha \).
For this reason, we suggest below a different bound which works not only for \(L_n(\alpha )\) with a non-integer \(\alpha \), but also for \(G_n(f)\) with a function f other than power. This bound provides a straightforward way for proving the strong law of large numbers for \(G_n(f)\) with f satisfying the growth condition (3).
Lemma 6
For any non-decreasing function f with \(f(0)=0\),
for all n where \(\varDelta f(i):=f(i)-f(i-1)\ge 0\).
Proof
In view of the representation (12),
hence
because \({\mathbb {P}}\{l(n,x)=i,l(n,y)=j\}=0\) if \(x=y\) and \(i\not = j\). Thus, due to \(f\ge 0\),
and it only remains to estimate the difference of sums \(\varSigma ^1_n-\varSigma ^2_n\) on the right hand side. Since \(f(0)=0\),
and similar equalities hold for ordinary sums. Therefore,
where \(\varDelta f(i)\ge 0\) for all i because f is non-decreasing, and the tail probabilities do not decrease as n grows, which makes it possible to perform a required analysis of the double sum. Let us decompose the event \(B=B(x,y,i,j):=\{l(n,x)\ge i,l(n,y)\ge j\}\) for \(x\not = y\) as a union of four disjoint events \(B\cap B_{xy}\), \(B\cap B_{yx}\), \(B\cap B_{xyx}\) and \(B\cap B_{yxy}\), where
Denote by \(\tau _x(i)\) the time of ith visit to x by the random walk \((S_n)_{n\ge 0}\). Then, the event \(B\cap B_{xy}\) implies the event:
Altogether, these imply the following upper bound
Let us estimate every probability on the right hand side here. Since \(\tau _x(i)\) is a Markov time,
because the event \(\{l(n,y-x)\ge j\}\) can only increase as n grows. Therefore,
Then, summation over all \(x\not = y\) implies that
Together with non-negativity of increments of the function f, it implies that
Further, the event \(B_{xyx}\) may be described as follows: firstly, the site x is visited at least once, say \(t\ge 1\) times, then the site y is visited one or more times, say \(s\ge 1\) times, and then again the site x is visited, which is followed by visits to x and y in an arbitrary order. Thus, for \(i\ge j\),
and similarly for \(j\ge i\)
Summing up for all x and y, we arrive at the following upper bound
in the case \(i\ge j\) and similarly with coefficient \((1-\gamma )^{j-1}\) in the case \(j\ge i\). Since
we get, for \(i\ge j\),
which together with (17), (18) and (19) shows that the variance of \(G_n(f)\) does not exceed
The sum of \(\varDelta f(j)\) from \(j=1\) to i equals f(i), hence the desired upper bound for \({\mathbb {V}\mathrm{ar}}G_n(f)\). \(\square \)
4 Proof of Theorem 1
Without loss of generality, we assume \(f(0)=0\). Any function \(f:{\mathbb {Z}}^+\rightarrow {\mathbb {R}}\) with \(f(0)=0\) is decomposable into a difference of two non-decreasing functions, \(f=f_1-f_2\), where
Since
we get the following upper bound
Therefore, the condition (3) implies that
Hence, without loss of generality, we assume that f is a non-decreasing function satisfying (21) and \(f(0)=0\).
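For definiteness, a standard choice of such a decomposition (given here only as an illustration; it need not coincide with the exact form of (20)) is

```latex
f_1(i):=\sum _{j=1}^{i}\max \{\varDelta f(j),\,0\},\qquad
f_2(i):=\sum _{j=1}^{i}\max \{-\varDelta f(j),\,0\},\qquad
\varDelta f(j):=f(j)-f(j-1),
```

so that \(f_1(0)=f_2(0)=0\), both functions are non-decreasing, \(f_1-f_2=f\) and \(f_1(i)+f_2(i)=\sum _{j=1}^i|\varDelta f(j)|\).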
The transience of the random walk \((S_n)_{n\ge 0}\) is equivalent to the convergence of the series
The condition (21) allows us to apply the upper bound (16) to \(f^2\) and to conclude that \({\mathbb {E}}G_n(f^2)\le c_1n\) for some \(c_1<\infty \). Since f is non-decreasing and \(f(0)=0\), \(\varDelta f(i)\le f(i)\). Therefore, by Lemma 6,
again by the condition (21). In view of (22),
and hence
which is equivalent to the convergence \((G_n(f)-{\mathbb {E}}G_n(f))/n\rightarrow 0\) in \(L_2\). Together with the convergence (10), this completes the proof of \(L_2\)-convergence stated in Theorem 1.
For the proof of the almost sure convergence, first let us notice that (22) yields
Hence, we can apply Lemma 7 proven below to the sequence \(\{a_n\}_{n\ge 1}\), so, for any fixed \(\delta >0\), there is an increasing subsequence \(\{n_r\}_{r\ge 1}\) such that \(\sum _{r=1}^\infty a_{n_r}<\infty \) and \(\sqrt{1+\delta }n_{r-1}\le n_{r+1}\le (1+\delta )n_r\) for all r.
Using Chebyshev’s inequality, the upper bound (23) and the convergence (10), we conclude that, for any \(\varepsilon >0\),
Then, it follows from the Borel–Cantelli lemma that
Further, for any n, there exists r such that \(n_r\le n\le n_{r+1}\) and, hence
It follows from (10) that
Moreover, \(n_r<n_{r+1}\le (1+\delta )n_r\) for all r. Then, (24) and (25) imply that
Due to arbitrary choice of \(\delta >0\), the a.s. convergence \(G_n(f)/{\mathbb {E}}G_n(f)\rightarrow 1\) follows.
\(\square \)
In the last proof, we have made use of the following auxiliary result.
Lemma 7
Let \(v_n\ge 0\) and \(\sum _{n=1}^\infty \frac{v_n}{n}<\infty \). Then, for any fixed \(\delta >0\), there exists an increasing subsequence \(\{n_r\}_{r\ge 1}\) such that \(\sum _{r=1}^\infty v_{n_r}<\infty \) and \(\sqrt{1+\delta }n_{r-1}\le n_{r+1}\le (1+\delta )n_r\) for all \(r\ge 1\).
Proof
Let us fix an arbitrary \(b\in (1,2)\) and identify a \(K=K(b)\) such that \([b^K]-[b^{K-1}]\ge 2\). For \(r\ge 1\), choose
By this construction,
Since
the convergence of the series \(\sum _n\frac{v_n}{n}\) guarantees convergence of the series \(\sum _r v_{n_r}\). Also, for all r,
so the lemma conclusion follows if we take \(b=\sqrt{1+\delta }\). \(\square \)
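A small numerical illustration of Lemma 7 is given below (a sketch of one possible construction, choosing within consecutive geometric blocks an index with the smallest \(v_n\); the test sequence, the parameters and the block boundaries are our arbitrary choices and need not coincide with those in the proof).

```python
# Illustration of Lemma 7: pick, inside consecutive geometric blocks
# ( [b^{K+r-1}], [b^{K+r}] ], an index n_r minimising v_n.  With b = sqrt(1+delta)
# this gives sum_r v_{n_r} < infty together with the stated gap conditions.
# Sketch of one possible construction only, not necessarily the paper's.
import math

def subsequence(v, delta, K):
    """v: list with v[n] = v_n for n = 0..N; returns the chosen indices n_r."""
    b = math.sqrt(1 + delta)
    out, r = [], 1
    while True:
        lo, hi = int(b ** (K + r - 1)), int(b ** (K + r))
        if hi >= len(v):
            return out
        out.append(min(range(lo + 1, hi + 1), key=lambda n: v[n]))
        r += 1

if __name__ == "__main__":
    N = 200_000
    v = [0.0] + [1.0 / math.log(n + 2) ** 2 for n in range(1, N + 1)]  # sum v_n/n < infty
    ns = subsequence(v, delta=0.5, K=12)
    print("sum over subsequence :", sum(v[n] for n in ns))
    print("max n_{r+1}/n_r      :", max(m / k for k, m in zip(ns, ns[1:])))  # <= 1 + delta
```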
5 Proof of Theorem 4
To prove Theorem 4, let us first consider the maximal local time, \(l(n):=\max \{l(n,x),\ x\in {\mathbb {Z}}^d\}\). Theorem 13 in Erdős and Taylor [4] states a strong limit theorem for l(n): for a simple random walk in \(d\ge 3\) dimensions,
The proof in [4] is split into two parts, dealing with upper and lower bounds. There is some issue with the proof of the upper bound, that is,
The proof suggested in [4] is based on the inequality for tails
and on the observation that the number of returns to the origin, l(n, 0), is dominated by a geometrically distributed random variable with parameter \(1-\gamma \). Notice that the justification of (28) in [4] is not complete because it is based there on the assumption that all sites visited by the random walk (clearly not more than n of them) can be treated in the same way as the origin. This point requires further justification because the set of visited sites is random. The same issue occurs in the proof of Theorem 1 in Révész [11]. Notice that this set is contained in the ball of radius n, which leads to the coefficient \(n^d\) instead of n on the right hand side of (28), which in its turn leads to the constant \(d/\lambda _*\) on the right hand side of (27) instead of \(1/\lambda _*\).
The last issue may be resolved in different ways; in particular, we may condition on the non-zero value of \(S_1\),
followed by an induction argument on n. Hence, the upper bound (28) holds for any transient random walk in any dimension. Therefore,
which implies, for all \(\varepsilon >0\) and \(m\in \{0,1,2,\ldots \}\), the following upper bound
for the events
hereinafter, \(\log _{(m)}x\) denotes the m-fold iterated logarithm, that is,
Therefore, the series \(\sum _{k=1}^\infty {\mathbb {P}}\{C(2^k,\varepsilon ,m)\}\) converges; hence, by the Borel-Cantelli lemma, only finitely many of \(C(2^k,\varepsilon ,m)\) occur, with probability 1. For any \(n\in [2^k,2^{k+1})\) and the event
we have the inclusion \(B(n,\varepsilon ,m)\subseteq C(2^k,\varepsilon ,m)\), and thus only finitely many of the events \(B(n,\varepsilon ,m)\) occur, with probability 1. In other words, we arrive at the following result.
Proposition 8
For all \(\varepsilon >0\) and \(m\in \{0,1,2,\ldots \}\),
for all \(n\ge N\) where N is finite with probability 1.
Notice that, for a simple random walk in \({\mathbb {Z}}^d\), \(d\ge 3\), an upper a.s. bound \(\lambda _*^{-1}(\log n+(1+\varepsilon )\log \log n)\) and, in the case \(d\ge 4\), a lower a.s. bound \(\lambda _*^{-1}(\log n-(3+\varepsilon )\log \log n)\) are derived by Révész in [11] following a different technique; he has also proved that \(l(n)\ge \lambda _*^{-1}(\log n+(1-2/(d-2)-\varepsilon )\log \log n)\) infinitely often a.s. The maximal local time for a zero-drift random walk on \({\mathbb {Z}}\) with finite variance, which is clearly recurrent, was studied by Kesten in [10].
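For a numerical impression of Proposition 8 (an informal illustration only; we use the simple random walk in \({\mathbb {Z}}^3\), whose return probability \(1-\gamma \approx 0.34\) gives \(\lambda _*\approx 1.08\) under the reading \(\lambda _*=\log \frac{1}{1-\gamma }\)), one may compare the maximal local time with \(\log n/\lambda _*\):

```python
# Maximal local time l(n) versus (log n)/lambda_* for a simple random walk in Z^3.
# Illustration of Proposition 8; we take lambda_* = log(1/(1-gamma)) (our reading
# of its definition) with return probability 1-gamma ~ 0.3405 (Polya's constant).
import math
import random
from collections import Counter

def max_local_time(n, d=3):
    pos = (0,) * d
    counts = Counter({pos: 1})
    for _ in range(n):
        axis = random.randrange(d)
        pos = pos[:axis] + (pos[axis] + random.choice((-1, 1)),) + pos[axis + 1:]
        counts[pos] += 1
    return max(counts.values())

if __name__ == "__main__":
    lam = -math.log(0.3405)
    for n in (10_000, 100_000, 1_000_000):
        print(n, max_local_time(n), round(math.log(n) / lam, 2))
```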
Let us proceed with the proof of Theorem 4; we start with the case (ii). Introducing two non-decreasing functions \(f_1\) and \(f_2\) as in (20), we notice that then \(f_k(i)\le {\widetilde{C}} e^{i\lambda _*}/(i\log ^{2+\varepsilon }i)\) for \(k=1,2\), because
Hence, without loss of generality, we assume that f is non-decreasing with \(f(0)=0\).
The function f satisfies the condition (9), but not (21), and this generates a certain difficulty we need to overcome. To this end, let us introduce two sequences of truncated functions
where \(b_n=\log _{(2)}n+2\log _{(3)}n\) and make use of the following decomposition
Since f satisfies the condition (9), we get equivalences
and, by (16), the following upper bound
Further, it follows from the condition (8) that
In addition,
Substituting the last three bounds into the right hand side of the inequality provided by Lemma 6, we derive that
Choose a subsequence \(n_r=[e^{r/\log ^{1/4}r}]\), then, by Chebyshev’s inequality, the last upper bound and (29), we conclude that
which allows us to apply the Borel–Cantelli lemma, hence obtaining
Similar to (25), if \(n_r\le n\le n_{r+1}\) then
In addition, by (29),
Therefore,
Further, it follows from (16) that, for all \(m\le n\),
if \(m\ge n/2\). Applying Chebyshev’s inequality to non-negative random variables \(G_{n_{k+1}}(f_{n_k}^+)/n_{k+1}\) with \(n_k=2^k\), we get the following series convergence
which in its turn implies by the Borel–Cantelli lemma that
In addition, for \(n_k\le n\le n_{k+1}\),
hence
Finally, since l(n) is the largest local time,
which implies the a.s. convergence \(G_n(f-f_n-f_n^+)\rightarrow 0\) as \(n\rightarrow \infty \), due to Proposition 8 with \(m=3\). Together with (30), (31) and (29), it implies the desired convergence (4) in the case (ii).
In the case (i), the proof requires some alterations. Consider a sequence of truncated functions
and make use of the following decomposition
As above, the equivalences (29) hold and, by (16),
Further, it follows from the condition (7) that
In addition,
Substituting the last three bounds into the right hand side of the inequality provided by Lemma 6, we derive that
since \(\eta <1\). Then, similar to the case (ii) we deduce (30). Further, it follows from (16) that, for all \(m\le n\),
if \(m\ge n/2\). Again similar to the case (ii), we deduce from the last bound that
which together with (30) and equality \(G_n(f)=G_n(f_n)+G_n(f-f_n)\) implies (4) in the case (i). The proof of Theorem 4 is complete. \(\square \)
References
1. Becker, M., König, W.: Moments and distribution of the local times of a transient random walk on \({\mathbb{Z}}^d\). J. Theor. Probab. 22, 365–374 (2009)
2. Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Cambridge University Press, Cambridge (1987)
3. Černý, J.: Moments and distribution of the local time of a two-dimensional random walk. Stoch. Proc. Appl. 117, 262–270 (2007)
4. Erdős, P., Taylor, S.J.: Some problems concerning the structure of random walk paths. Acta Math. Acad. Sci. Hung. 11, 137–162 (1960)
5. Erickson, K.B.: Strong renewal theorems with infinite mean. Trans. Am. Math. Soc. 151, 263–291 (1970)
6. Esseen, C.G.: On the concentration function of a sum of independent random variables. Z. Wahrscheinlichkeitstheorie Verw. Gebiete 9, 290–308 (1968)
7. Doney, R., Korshunov, D.: Local asymptotics for the time of first return to the origin of transient random walk. Stat. Probab. Lett. 81, 1419–1424 (2011)
8. Dvoretzky, A., Erdős, P.: Some problems on random walk in space. In: Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pp. 353–367 (1951)
9. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 2. Wiley, New York (1971)
10. Kesten, H.: An iterated logarithm law for local times. Duke Math. J. 32, 447–456 (1965)
11. Révész, P.: The maximum of the local time of a transient random walk. Stud. Sci. Math. Hung. 41, 379–390 (2004)
12. Spitzer, F.: Principles of Random Walk. Van Nostrand, Princeton (1964)
13. Pitt, J.H.: Multiple points of transient random walks. Proc. Am. Math. Soc. 43, 195–199 (1974)
Acknowledgements
The authors are very thankful to the referee for valuable comments.