Open Access. Published by De Gruyter, January 16, 2021. Licensed under CC BY 4.0.

Fixed point of some Markov operator of Frobenius-Perron type generated by a random family of point-transformations in ℝ^d

Peter Bugiel, Stanisław Wędrychowicz and Beata Rzepka

Abstract

Existence of a fixed point of a Frobenius-Perron type operator P : L^1 → L^1 generated by a family {φ_y}_{y∈Y} of nonsingular Markov maps defined on a σ-finite measure space (I, Σ, m) is studied. Two fairly general conditions are established and it is proved that they imply, for any g ∈ G = {f ∈ L^1 : f ≥ 0 and ∥f∥ = 1}, the convergence (in the norm of L^1) of the sequence {P^j g}_{j=1}^∞ to a unique fixed point g_0. The general result is applied to a family of C^{1+α}-smooth Markov maps in ℝ^d.

MSC 2010: 37A30

1 Introduction

Consider a randomly perturbed semi-dynamical system that evolves according to the rule:

x_j = φ_{ξ_j}(x_{j−1}) for j = 1, 2, …,

where {φ_y}_{y∈Y} is a family of nonsingular Markov maps defined on a subset I ⊆ ℝ^d (bounded or not), d ≥ 1, and {ξ_j}_{j=1}^∞ is a sequence of independent, identically distributed Y-valued random elements, where Y is a Polish metric space (i.e., a complete separable metric space).
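For orientation, such a system is straightforward to simulate. The following minimal sketch (Python) iterates x_j = φ_{ξ_j}(x_{j−1}) for an illustrative family φ_y(x) = yx mod 1 on I = [0, 1) and i.i.d. indices ξ_j drawn from a finite set Y; all concrete choices here (the maps, Y and the distribution p) are assumptions of the sketch, not taken from the general setting above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative family of Markov maps on I = [0, 1): phi_y(x) = y*x mod 1,
# with the random index xi_j drawn i.i.d. from the finite set Y below.
Y = np.array([2, 3])            # assumed parameter set
p = np.array([0.5, 0.5])        # assumed distribution of the i.i.d. indices xi_j

def phi(y, x):
    return (y * x) % 1.0

def trajectory(x0, n_steps):
    """Iterate x_j = phi_{xi_j}(x_{j-1}) for j = 1, ..., n_steps."""
    xs = [x0]
    for _ in range(n_steps):
        y = rng.choice(Y, p=p)
        xs.append(phi(y, xs[-1]))
    return np.array(xs)

print(trajectory(x0=0.1234, n_steps=10))
```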

Investigation of the asymptotic properties of such a semi-dynamical system leads to the study of the convergence of the sequence {P^j}_{j=1}^∞ of iterates of a Frobenius-Perron type operator P acting in L^1 which is a Markov operator, i.e., ∥Pf∥ = ∥f∥ and Pf ≥ 0 whenever f ≥ 0 (a Markov operator of F-P type, in short). More precisely, let g ≥ 0 with ∥g∥ = 1; if Prob(x_0 ∈ B) := ∫_B g dm (m denotes the Lebesgue measure on I), then Prob(x_j ∈ B) = ∫_B P^j g dm, where P is the Markov operator of F-P type defined by (3.1) (see Proposition 3.1).

We establish two fairly general conditions, (3.H1) and (3.H2), and prove that under these conditions the system in question evolves to a stationary distribution. That is, the sequence {P^j g}_{j=1}^∞ converges (in the norm of L^1) to a unique fixed point g_0 ∈ G (Th. 3.3). The two conditions are probabilistic analogues of conditions (3.H1) and (3.H2) in [1], respectively. In particular, if φ_y = φ for all y ∈ Y, that is, if x_j = φ_{ξ_j}(x_{j−1}) = φ(x_{j−1}) is a deterministic semi-dynamical system (φ a fixed Markov map), then we recover the two conditions given in [1].

As an application of this general result we show that the randomly perturbed semi-dynamical system generated by a family of C^{1+α}-smooth Markov maps in ℝ^d evolves to a unique stationary density (Th. 4.2).

Similar problems were considered by several authors; see, e.g., [2, 3, 4] and the references therein.

2 Preliminaries

Let (I, Σ, m) be a σ-finite atomless (non-negative) measure space. Quite often the notions or relations occurring in this paper (in particular, the considered transformations) are defined or hold only up to the sets of m-measure zero. Henceforth we do not mention this explicitly.

The restriction of a mapping τ : X → Y to a subset A ⊆ X is denoted by τ_A, and the indicator function of a set A by 1_A.

Let τ : I → I be a measurable transformation, i.e., τ^{−1}(A) ∈ Σ for each A ∈ Σ. It is called nonsingular iff m ∘ τ^{−1} ∼ m, i.e., for each A ∈ Σ, m(τ^{−1}(A)) = 0 ⇔ m(A) = 0.

We give a few definitions. The following kind of transformations is considered in this paper:

Definition 2.1

A nonsingular transformation φ from I into itself is said to be piecewise invertible iff

  (2.M1) one can find a finite or countable partition π = {I_k : k ∈ K} of I, which consists of measurable subsets (of I) such that m(I_k) > 0 for each k ∈ K, and sup{m(I_k) : k ∈ K} < ∞; here and in what follows K is an arbitrary countable index set;

  (2.M2) for each I_k ∈ π, the mapping φ_k = φ_{I_k} is a one-to-one mapping of I_k onto J_k = φ_k(I_k) and its inverse φ_k^{−1} is measurable.

Definition 2.2

A piecewise invertible transformation φ is said to be a Markov map iff its corresponding partition π satisfies the following two conditions:

  (2.M3) π is a Markov partition, i.e., for each k ∈ K,

    φ(I_k) = ⋃{I_j : m(φ(I_k) ∩ I_j) > 0};

  (2.M4) φ is indecomposable (irreducible) with respect to π, i.e., for each (j, k) ∈ K^2 there exists an integer n > 0 such that I_k ⊆ φ^n(I_j).

In what follows we denote by ∥ ⋅ ∥ the norm in L^1 = L^1(I, Σ, m) and by G = G(m) the set of all (probabilistic) densities, i.e.,

G := {g ∈ L^1 : g ≥ 0 and ∥g∥ = 1}.

Let τ : II be a nonsingular transformation. Then the formula

P_τ f := (d/dm)(m_f ∘ τ^{−1}) for f ∈ L^1,  (2.1)

where dm_f = f dm and d/dm denotes the Radon-Nikodym derivative, defines a linear operator from L^1 into itself. It is called the Frobenius-Perron operator (F-P operator, in short) associated with τ [5, 6].

Formula (2.1) is equivalent to the following one:

∫_A P_τ f dm = m_f(τ^{−1}(A)) = ∫_{τ^{−1}(A)} f dm.

From the definition of P_τ it follows that it is a Markov operator, i.e., P_τ is a linear operator and, for any f ∈ L^1(m) with f ≥ 0, P_τ f ≥ 0 and ∥P_τ f∥ = ∥f∥.

The last equality follows immediately from the formula equivalent to (2.1), upon putting A = I. Further, P_τ G ⊆ G, and P_τ is a contraction, i.e., ∥P_τ∥ ≤ 1.
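As a concrete illustration of (2.1) and of the Markov property, consider the doubling map τ(x) = 2x mod 1 on I = [0, 1) with Lebesgue measure, for which the F-P operator has the well-known explicit form (P_τ f)(x) = ½[f(x/2) + f((x+1)/2)]. The following sketch (Python, on a uniform grid) checks positivity and norm preservation numerically; it is an illustration only, with the map and discretisation chosen for simplicity.

```python
import numpy as np

N = 1000
x = (np.arange(N) + 0.5) / N              # grid midpoints on I = [0, 1)

def sample(f, pts):
    """Nearest-lower-grid-point lookup of the grid function f at the points pts."""
    idx = np.clip((pts * N).astype(int), 0, N - 1)
    return f[idx]

def P_tau(f):
    """F-P operator of the doubling map tau(x) = 2x mod 1 w.r.t. Lebesgue measure."""
    return 0.5 * (sample(f, x / 2) + sample(f, (x + 1) / 2))

f = 2.0 * x                                # a density: f >= 0 with integral 1
g = P_tau(f)
# Markov operator properties: P_tau f >= 0 and the L^1 norm is preserved
print(g.min() >= 0.0, np.isclose(g.mean(), f.mean()))
```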

Let τ_1, …, τ_j (j ≥ 2) be some nonsingular transformations. Denote by P_{(j,…,1)}, P_{(j,…,i+1)} and P_{(i,…,1)}, 1 ≤ i < j, the F-P operators associated with the transformations τ_{(j,…,1)} = τ_j ∘ … ∘ τ_1, τ_{(j,…,i+1)} = τ_j ∘ … ∘ τ_{i+1} and τ_{(i,…,1)} = τ_i ∘ … ∘ τ_1, respectively. Then

P_{(j,…,1)} = P_{(j,…,i+1)} ∘ P_{(i,…,1)},  (2.2)

in particular P_{(j,…,1)} = P_j ∘ … ∘ P_1, where P_i denotes the F-P operator of τ_i.

Definition 2.3

In what follows we consider a family {φ_y}_{y∈Y} of Markov maps such that:

  (2.My5) Y is a Polish metric space and the map I × Y ∋ (x, y) ↦ φ_y(x) ∈ I is Σ(I × Y)/Σ(I)-measurable;

  (2.My6) there exists a partition π of I such that π_y = π for each y ∈ Y, where π_y is the Markov partition associated with φ_y.

For j ≥ 1 and y_1, …, y_j ∈ Y, we denote y(j) = (y_j, …, y_1) and then we set

φ_{y(j)} := φ_{y_j} ∘ … ∘ φ_{y_1}.  (2.3)

Clearly, φ_{y(j)} : I → I is a Markov map. Its Markov partition is given by

π_{y(j)} := π ∨ φ_{y(1)}^{−1}(π) ∨ φ_{y(2)}^{−1}(π) ∨ … ∨ φ_{y(j−1)}^{−1}(π), provided j ≥ 2.

It consists of the sets of the form:

I_{k(j)}^{y(j−1)} := I_{k_0} ∩ φ_{y(1)}^{−1}(I_{k_1}) ∩ φ_{y(2)}^{−1}(I_{k_2}) ∩ … ∩ φ_{y(j−1)}^{−1}(I_{k_{j−1}}),  (2.4)

where k(j) = (k_0, k_1, …, k_{j−1}) ∈ K^j.

Let

φ_{y(j)k(j)} := (φ_{y(j)})|_{I_{k(j)}^{y(j−1)}},

then, by condition (2.M2), φ_{y(j)k(j)} is a one-to-one mapping of I_{k(j)}^{y(j−1)} onto

J_{k(j)}^{y(j)} := φ_{y(j)k(j)}(I_{k(j)}^{y(j−1)}) = φ_{y_j}(I_{k_{j−1}}).

It is nonsingular, and φ_{y(j)k(j)}^{−1}, the mapping inverse to φ_{y(j)k(j)}, is measurable.

By Def. 2.3, π_{y(1)} = π_{y_1} = π and therefore I_{k(1)}^{y(0)} = I_{k_0}; consequently φ_{y(1)k(1)} = (φ_{y_1})_{I_{k_0}} = φ_{y_1 k_0} and, according to (2.M2), φ_{yk} is a one-to-one mapping of I_k onto J_k^y = φ_{yk}(I_k).

We have to adjust the indecomposability condition (2.M4) to the new case, when a single Markov map φ is replaced by a family {φ_y}_{y∈Y} of Markov maps. We propose the following condition (see the note at the end of Rem. 2.4):

  (2.My4) for each (j, k) ∈ K^2 there exist an integer s > 0 and a subset Ỹ_s ⊆ Y^s with p^s(Ỹ_s) > 0 such that I_k ⊆ φ_{y(s)}(I_j) for all y(s) ∈ Ỹ_s. Here p is a probability measure on Σ(Y), and p^s = p × … × p (s times).

From the properties of φ_{y(r)} it follows that the formula

m_{y(r)k(r)}(A) := m ∘ φ_{y(r)k(r)}^{−1}(A) = m(φ_{y(r)k(r)}^{−1}(A)) for A ∈ Σ,  (2.5)

defines an absolutely continuous measure which is concentrated on J_{k(r)}^{y(r)} (i.e., m_{y(r)k(r)}(A) = m_{y(r)k(r)}(A ∩ J_{k(r)}^{y(r)})), and whose Radon-Nikodym derivative satisfies dm_{y(r)k(r)}/dm > 0 a.e. on J_{k(r)}^{y(r)}.

To see the latter property of the measure m_{y(r)k(r)}, note first that if dm_{y(r)k(r)}/dm = 0 on A ⊆ J_{k(r)}^{y(r)}, then φ_{y(r)k(r)}^{−1}(A) ⊆ I ∖ I_{k(r)}^{y(r−1)} a.e., because

m(φ_{y(r)k(r)}^{−1}(A) ∩ I_{k(r)}^{y(r−1)}) = ∫_{A ∩ J_{k(r)}^{y(r)}} (dm_{y(r)k(r)}/dm) dm = 0. Therefore, A = ∅ a.e.

We put (r = 1, 2, …)

σ_{y(r)k(r)} := dm_{y(r)k(r)}/dm on J_{k(r)}^{y(r)}, and σ_{y(r)k(r)} := 0 on I ∖ J_{k(r)}^{y(r)},  (2.6)

next,

f̄_{y(r)k(r)} := f ∘ φ_{y(r)k(r)}^{−1} on J_{k(r)}^{y(r)}, and f̄_{y(r)k(r)} := 0 on I ∖ J_{k(r)}^{y(r)},  (2.7)

and finally

P_{y(r)} f := (d/dm)(m_f ∘ φ_{y(r)}^{−1}), where dm_f = f dm.

Then the F-P operator P_{y(r)} of the Markov map φ_{y(r)} can be written in the following form:

P_{y(r)} f = ∑_{k(r)} f̄_{y(r)k(r)} · σ_{y(r)k(r)}.  (2.8)

Indeed, from Def. 2.3, (2.3) and (2.5) it follows that for any f ∈ L^1, f ≥ 0, the following equalities hold:

∫_A P_{y(r)} f dm = ∫_A (d/dm)(m_f ∘ φ_{y(r)}^{−1}) dm = m_f(φ_{y(r)}^{−1}(A)) = ∑_{k(r)} ∫_{A_{y(r)k(r)}} f dm = ∑_{k(r)} ∫_A f ∘ φ_{y(r)k(r)}^{−1} dm_{y(r)k(r)} = ∫_A ∑_{k(r)} f̄_{y(r)k(r)} σ_{y(r)k(r)} dm,

where A_{y(r)k(r)} = φ_{y(r)k(r)}^{−1}(A), and m_{y(r)k(r)} is given by (2.5). Hence (2.8) follows.
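For orientation, in the classical one-dimensional piecewise C^1 case (with m the Lebesgue measure) the densities σ_{y(r)k(r)} in (2.6) are simply the absolute derivatives of the inverse branches, and (2.8) reduces to the familiar explicit form (a standard special case, recorded here only as an illustration):

σ_{y(r)k(r)}(x) = |(φ_{y(r)k(r)}^{−1})′(x)| on J_{k(r)}^{y(r)},  so that  P_{y(r)}f(x) = ∑_{k(r)} f(φ_{y(r)k(r)}^{−1}(x)) |(φ_{y(r)k(r)}^{−1})′(x)| 1_{J_{k(r)}^{y(r)}}(x).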

Remark 2.4

The results of this paper can be extended to a family {φ_y}_{y∈Y} which satisfies condition (2.My5) of Def. 2.3 and, instead of (2.My6), the following less restrictive condition:

  1. there is a Markov map φ such that for each y ∈ Y:

    1. π_y is finer than π, i.e., for each V ∈ π_y there exists U ∈ π which contains V, and

    2. for each V ∈ π_y, φ_y(V) is a union of a number of the cells U ∈ π.

In this situation each φ_{y(j)} = φ_{y_j} ∘ φ_{y_{j−1}} ∘ … ∘ φ_{y_1} is defined on a set of the form:

I_{k(j)}^{y(j)} := I_{k_0}^{y_1} ∩ φ_{y(1)}^{−1}(I_{k_1}^{y_2}) ∩ φ_{y(2)}^{−1}(I_{k_2}^{y_3}) ∩ … ∩ φ_{y(j−1)}^{−1}(I_{k_{j−1}}^{y_j}) ∈ π_{y(j)}.

The following family can serve as a simple example: {φ_i}_{i=1}^∞, where φ_i = φ^i and φ is a Markov map. Note that conditions (2.M4) and (2.My4) are equivalent in this case, if p({i}) > 0 for every i ≥ 1.

We close this section with the following criterion for the convergence in L^1 of the iterates P^n of a Markov operator. It is used in the proof of Th. 3.3.

Theorem 2.5

Let there exist h ∈ L^1, h ≥ 0 with ∥h∥ > 0, and a dense subset G_0 ⊆ G such that lim_{j→∞} ∥(P^j g − h)^−∥ = 0 for g ∈ G_0, where (P^j g − h)^− = max{0, −(P^j g − h)}. Then there exists exactly one P-fixed point g_0 ∈ G such that

lim_{j→∞} P^j g = g_0, for all g ∈ G.

Proof

We refer to [8], Theorem 3.□

3 Convergence theorem

Let {φ_y}_{y∈Y} be a family of Markov maps in the sense of Def. 2.3, Σ(Y) the σ-algebra of all Borel-measurable subsets of Y (where Y is a Polish metric space), p a probability measure on (Y, Σ(Y)), and P_y the F-P operator of the Markov map φ_y.

We put

Pf := ∫_Y P_y f dp(y) for f ∈ L^1(m).  (3.1)

It follows from the definition, and the fact that the F-P operator P_y : L^1(m) → L^1(m) (given by (2.8)) is a Markov operator, that P : L^1(m) → L^1(m) is also a Markov operator.

Indeed, from (3.1) and Fubini’s Theorem we have for f ≥ 0:

∫ Pf dm = ∫_I ∫_Y P_y f dp(y) dm = ∫_Y ∫_I P_y f dm dp(y) = ∫_Y (∫_I f dm) dp(y) = ∫_I f dm.

In the general case we write f = f^+ − f^−, where f^+ = max{0, f} and f^− = max{0, −f}; this concludes the proof.

In this note, the operator P is called the Markov operator of F-P type.

Let P^j = P ∘ P^{j−1} (j ≥ 2); then from (3.1), (2.1) and (2.8) it follows that

P^j f = ∫_{Y^j} P_{y(j)} f dp^j(y_1, …, y_j),  (3.2)

where P_{y(j)} is the F-P operator corresponding to the map φ_{y(j)} defined by (2.3), and p^j = p × … × p (j times).
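When Y is finite, the integral in (3.1) reduces to the weighted sum Pf = ∑_{y∈Y} p({y}) P_y f, and its iterates P^j f are easy to compute. The following sketch (Python) does this for the illustrative family φ_y(x) = yx mod 1, y ∈ {2, 3}, on a uniform grid; all concrete choices are assumptions of the sketch, and in this particular example the constant density 1 happens to be the invariant density.

```python
import numpy as np

N = 1000
x = (np.arange(N) + 0.5) / N

def sample(f, pts):
    idx = np.clip((pts * N).astype(int), 0, N - 1)
    return f[idx]

def P_y(f, y):
    """F-P operator of phi_y(x) = y*x mod 1: average over the y inverse branches."""
    return sum(sample(f, (x + i) / y) for i in range(y)) / y

Y, p = [2, 3], [0.5, 0.5]                  # assumed finite parameter set and weights

def P(f):
    """Markov operator of F-P type (3.1): P f = sum_y p({y}) P_y f for finite Y."""
    return sum(w * P_y(f, y) for y, w in zip(Y, p))

g = 2.0 * x                                # initial density g(x) = 2x
for _ in range(50):
    g = P(g)
print(np.allclose(g, 1.0, atol=1e-2))      # iterates approach the invariant density
```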

The Markov operator P of F-P type given by (3.1) has the following probabilistic interpretation:

Let ξ_1, ξ_2, … be a sequence of Y-valued random elements (indices) defined on a probability space (Ω, Σ(Ω), p_1). For each (x, ω) ∈ I × Ω we put

x_0(x, ω) := x, and x_j(x, ω) := φ_{ξ_j(ω)}(x_{j−1}) = φ_{ξ(j)(ω)}(x) for j ≥ 1,  (3.3)

where ξ(j)(ω) = (ξ_j(ω), …, ξ_1(ω)).

We assume that Ω = Y^∞ is the direct product space, Σ(Ω) the σ-algebra of all Borel measurable subsets of Y^∞, p_1 = p^∞ = p × p × … the direct product measure, and ξ_j(ω) = ω_j for ω ∈ Ω, where ω_j is the j-th coordinate of ω = (ω_1, ω_2, …) ∈ Y^∞. Thus {ξ_j}_{j=1}^∞ is a sequence of independent, identically p-distributed Y-valued random elements (indices).

Let (Ω̃, Σ(Ω̃), Prob) be a probability space with Ω̃ = I × Ω and Prob = μ × p_1, where μ is a probability measure on Σ(I). Then the sequence {x_j}_{j=0}^∞ defined by (3.3) is a sequence of random vectors over (Ω̃, Σ(Ω̃), Prob). Note that {x_0 ∈ B} = B × Ω, hence Prob(x_0 ∈ B) = μ(B) for any B ∈ Σ(I).

It turns out that if the initial probability distribution is absolutely continuous, then the probability distribution of each random vector x_j defined by (3.3) is also absolutely continuous:

Proposition 3.1

If Prob(x_0 ∈ B) := ∫_B g dm for all B ∈ Σ(I), where g ∈ L^1(m), g ≥ 0 and ∥g∥ = 1, then

Prob(x_j ∈ B) = ∫_B P^j g dm  (j = 1, 2, …)

for all B ∈ Σ(I), where P^j is the j-th iterate of the Markov operator P of F-P type defined by (3.1).

Proof

We refer to [7], Prop. 3.1.□
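Proposition 3.1 can also be checked empirically: a histogram of many independently simulated trajectories after j steps should approximate the density P^j g. A rough sketch (Python), under the same illustrative choices as in the earlier sketches (finite Y, maps φ_y(x) = yx mod 1, initial density g(x) = 2x):

```python
import numpy as np

rng = np.random.default_rng(1)
N, j_steps, n_traj = 200, 5, 200_000
x_grid = (np.arange(N) + 0.5) / N
Y, p = np.array([2, 3]), np.array([0.5, 0.5])

def sample(f, pts):
    return f[np.clip((pts * N).astype(int), 0, N - 1)]

def P(f):
    # Markov operator of F-P type (3.1) for this illustrative finite family
    out = np.zeros_like(f)
    for y, w in zip(Y, p):
        out += w * sum(sample(f, (x_grid + i) / y) for i in range(y)) / y
    return out

# initial density g(x) = 2x and a matching sample x_0 ~ g dm (inverse-CDF sampling)
g = 2.0 * x_grid
xj = np.sqrt(rng.random(n_traj))

for _ in range(j_steps):                    # simulate x_j = phi_{xi_j}(x_{j-1})
    ys = rng.choice(Y, size=n_traj, p=p)
    xj = (ys * xj) % 1.0

gj = g.copy()
for _ in range(j_steps):                    # operator iteration: P^j g
    gj = P(gj)

hist, _ = np.histogram(xj, bins=N, range=(0.0, 1.0), density=True)
print(np.max(np.abs(hist - gj)))            # discrepancy shrinks as n_traj grows
```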

The convergence of the sequence {P^j} of the iterates of the Markov operator P of F-P type associated with {x_j} is established under two general conditions. We are now going to formulate the first of them.

Let φ_{y(j)} be a Markov map given by (2.3) and let P_{y(j)} be its F-P operator given by (2.8). We put

A_{y(j)k}(g) := ess sup{P_{y(j)}g(x) : x ∈ I_k ∩ spt(P_{y(j)}g)},
a_{y(j)k}(g) := ess inf{P_{y(j)}g(x) : x ∈ I_k ∩ spt(P_{y(j)}g)},

where spt(g) := {x : g(x) > 0}.

Now we give the following:

Definition 3.2

A density g ∈ G belongs to G(C*), 0 < C* < ∞, iff there exist constants C_{y(j)}(g) ≥ 1, y(j) ∈ Y^j, j ≥ j_1(g), such that the following two conditions are satisfied:

  (a) A_{y(j)k}(g) ≤ C_{y(j)}(g) a_{y(j)k}(g) a.e. [p^j], j ≥ j_1(g), and

  (b) lim sup_{j→∞} ∫ ln C_{y(j)}(g) dp^j < C*.

Having defined the set G(C*) we are in a position to formulate the first condition:

  (3.H1) (Distortion Inequality for the family P_{y(j)}, y(j) ∈ Y^j) There exists a constant 0 < C* < ∞ such that the set G(C*) defined by Def. 3.2 contains a subset dense in G.

    To formulate the second condition we define first the following auxiliary function:

    u_{w(2r)}(x) := inf{g_{w(2r)k(r)}(x) : k(r) ∈ K^r and I_{k(r)}^{w(r−1)} ≠ ∅},  (3.4)

    where

    g_{w(2r)k(r)} := ∑_{k̃(r)} σ̃_{w̃(r)k̃(r)} ∫_{I_{k̃(r)}^{w̃(r−1)}} σ̃_{w(r)k(r)} dm,  (3.5)

    w(2r) = (w̃(r), w(r)) ∈ Y^r × Y^r (in (3.5) we put w̃(r) = (w̃_r, …, w̃_1) and k̃(r) = (k̃_0, …, k̃_{r−1})); further

    σ̃_{w(r)k(r)} := σ_{w(r)k(r)} / m(I_{k(r)}^{w(r−1)}),  (3.6)

    where I_{k(r)}^{w(r−1)} and σ_{w(r)k(r)} are defined by (2.4) and (2.6), respectively.

    The second condition reads as follows:

    (3.H2) There exists r ≥ 1 such that 0 < ∥∫ u_{w(2r)} dp^{2r}∥ < ∞.

    The theorem below states that the semi-dynamical system given by (3.3) evolves to a stationary distribution under the above two conditions.

Theorem 3.3

(Convergence Theorem) Assume that a family {φ_y}_{y∈Y} of Markov maps satisfies (3.H1) and (3.H2). Then there exists exactly one P-fixed point g_0 ∈ G, that is Pg_0 = g_0, such that

lim_{j→∞} P^j g = g_0, for all g ∈ G.

Proof

The point is to show that for each r ≥ 1, the function

u_{2r} := Ĉ · exp(∫ ln u_{w(2r)} dp^{2r}), where Ĉ = exp(−2C*),  (3.7)

is a function for P which, under condition (3.H1), plays the role of h from Theorem 2.5. That is, it satisfies the relation

lim_{j→∞} ∥(P^{j+2r} g − u_{2r})^−∥ = 0 for all g ∈ G.  (3.8)

To this end, note that by condition (3.H1) the set G(C*) contains a subset dense in G. Thus, for any g ∈ G(C*), there exists, by Def. 3.2 (a), j_1 = j_1(g) such that the following inequalities hold:

C_{y(j)}^{−1}(g) ≤ P_{y(j)}g(y) / P_{y(j)}g(x) ≤ C_{y(j)}(g)  (3.9)

for each j ≥ j_1, all y(j) ∈ Y^j, any I_k ∈ π, and m × m-a.e. (x, y) ∈ I_k × I_k.

These inequalities imply the following estimate:

C_{y(j)}^{−1}(g) F_{r w(r)}(P_{y(j)}g) ≤ P_{w(r)} P_{y(j)}g ≤ C_{y(j)}(g) F_{r w(r)}(P_{y(j)}g)  (3.10)

for every r ≥ 1, j ≥ j_1, and all w(r) ∈ Y^r, y(j) ∈ Y^j, where the C_{y(j)}(g) are the constants involved in Def. 3.2 and F_{r w(r)} is defined by the following formula:

F_{r w(r)}(g) := ∑_{k(r)} σ̃_{w(r)k(r)} ∫_{I_{k(r)}^{w(r−1)}} g dm.  (3.11)

In the last formula, σ̃_{w(r)k(r)} and I_{k(r)}^{w(r−1)} are defined by (3.6) and (2.4), respectively.

To see this, note that from (3.9) we obtain

C_{y(j)}^{−1}(g) \overline{(P_{y(j)}g)}_{w(r)k(r)}(x) σ_{w(r)k(r)}(x) ≤ \overline{(P_{y(j)}g)}_{w(r)k(r)}(y) σ_{w(r)k(r)}(x) ≤ C_{y(j)}(g) \overline{(P_{y(j)}g)}_{w(r)k(r)}(x) σ_{w(r)k(r)}(x),

for each J_{k(r)}^{w(r)} = φ_{w(r)k(r)}(I_{k(r)}^{w(r−1)}), all x, y ∈ J_{k(r)}^{w(r)}, and j ≥ j_1(g), where

\overline{(P_{y(j)}g)}_{w(r)k(r)}(x) := (P_{y(j)}g) ∘ φ_{w(r)k(r)}^{−1}(x), or 0, according as x ∈ J_{k(r)}^{w(r)} or x ∈ I ∖ J_{k(r)}^{w(r)}.

Integrating the above inequalities with respect to x on J_{k(r)}^{w(r)} and multiplying by σ̃_{w(r)k(r)}(y), then summing the resulting inequalities with respect to all k(r) and finally using equality (2.8), one gets the desired double inequality (3.10).

Let w(2r) = (w̃(r), w(r)) ∈ Y^r × Y^r; then, iterating the first of the double inequalities (3.10), by using the equalities

P_{w(2r)} = P_{w̃(r)} P_{w(r)},
∑_{k(r)} ∥1_{I_{k(r)}^{w(r−1)}} P_{y(j)}g∥ = 1,  (3.12)

and formula (3.11), one gets for every r ≥ 1, j ≥ j_1(g), and all w̃(r), w(r) ∈ Y^r and y(j) ∈ Y^j:

P_{w(2r)} P_{y(j)}g ≥ C_{z(r+j)}^{−1}(g) C_{y(j)}^{−1}(g) F_{r w̃(r)}(F_{r w(r)} P_{y(j)}g) ≥ C_{z(r+j)}^{−1}(g) C_{y(j)}^{−1}(g) u_{w(2r)},  (3.13)

where z(r + j) = (w(r), y(j)) and u_{w(2r)} is defined by formulas (3.4) and (3.5).

Integrating the above inequalities with respect to w(2r) = (w̃(r), w(r)) and y(j), using Jensen's inequality and condition (b) of Def. 3.2, and applying formula (2.2) together with (3.2), gives

P^{j+2r} g ≥ u_{2r} for all sufficiently large j,

where u_{2r} is defined by (3.7).

The last inequality implies that (3.8) holds. This is so because G(C*) contains a subset dense in G, and P is a contraction.

Thus we have proved that for each r ≥ 1, u_{2r} indeed plays the role of h from Theorem 2.5 for P; possibly the trivial one, if ∥∫ u_{w(2r)} dp^{2r}∥ = 0. To exclude this trivial possibility we have to assume the existence of a nontrivial function u_{2r} for P, for some r ≥ 1; that is condition (3.H2). Then by Theorem 2.5 we have lim_{j→∞} P^j g = g_0 for all g ∈ G. From this and the inequality

∥g_0 − Pg_0∥ ≤ ∥g_0 − P^{j+1} g∥ + ∥P^j g − g_0∥ for all g ∈ G,

it follows that Pg_0 = g_0, i.e., the density g_0 is P-invariant. This finishes the proof of the theorem.□

4 An application to a family {φ_y}_{y∈Y} of C^{1+α}-smooth Markov maps, 0 < α ≤ 1, in ℝ^d

We use the following notation: ℝ^d – the d-dimensional Euclidean space (d ≥ 1); |⋅| – the Euclidean norm; I – a domain in ℝ^d, i.e. an open, connected subset of ℝ^d; Σ – the σ-algebra of all Borel-measurable subsets of I; m – the Lebesgue measure on ℝ^d; diam(A) – the diameter of the set A.

A C^{1+α}-smooth Markov map φ, 0 < α ≤ 1, means a Markov map in the sense of Def. 2.2 such that the partition π of φ consists of domains, and the restriction φ_k of φ to any I_k ∈ π is a C^{1+α}-diffeomorphism.

In this section we consider a family {φ_y}_{y∈Y} of C^{1+α}-smooth Markov maps which satisfies the following C^{1+α}-variant of the so-called Rényi Condition (see e.g. [9] or [10]):

  (4.Hy4) Let {φ_y}_{y∈Y} be a family of C^{1+α}-smooth Markov maps. There exist constants C_{10,y(r)} > 0, y(r) ∈ Y^r, such that for k(r) ∈ K^r, r = 1, 2, …, and all I_k ∈ π one has:

    [a] |σ_{y(r)k(r)}(x) − σ_{y(r)k(r)}(y)| ≤ C_{10,y(r)} σ_{y(r)k(r)}(y) |x − y|^α

      for all x, y ∈ J_{k(r)}^{y(r)} ∩ I_k, where σ_{y(r)k(r)} is defined by (2.6) and J_{k(r)}^{y(r)} = φ_{y(r)k(r)}(I_{k(r)}^{y(r−1)}).

      Furthermore, the constants C_{10,y(r)} > 0 satisfy the following condition:

    [b] lim sup_{j→∞} ∫ C_{10,y(j)} dp^j < ∞.

    Let {φ_y}_{y∈Y} be a given family of C^{1+α}-smooth Markov maps, and let {π_{y(r)} : y(r) ∈ Y^r, r = 1, 2, …} be a family of partitions whose elements are defined by (2.4). We assume that this family has the following generating property:

    (4.Hy7) (Generating Condition on {π_{y(r)} : y(r) ∈ Y^r, r = 1, 2, …})

      lim_{j→∞} ∫ {sup_{k(j+1)} diam(I_{k(j+1)}^{y(j)})^α} dp^j = 0.
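    For orientation, condition (4.Hy7) holds, for instance, under a uniform expansion assumption (an illustrative sufficient condition, not required in what follows): if every inverse branch φ_{yk}^{−1}, y ∈ Y, k ∈ K, is Lipschitz with a common constant λ < 1, then the inverse branches of the compositions φ_{y(j)} are Lipschitz with constant λ^j, so that

      sup_{k(j+1)} diam(I_{k(j+1)}^{y(j)})^α ≤ λ^{jα} · sup_{k∈K} diam(I_k)^α → 0 as j → ∞,

    uniformly in y(j), provided sup_{k∈K} diam(I_k) < ∞.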

    We are now going to examine the convergence of {P^j g} under conditions (3.H2), (4.Hy4[a, b]), and (4.Hy7). We show that condition (4.Hy4[a, b]) together with condition (4.Hy7) implies condition (3.H1); then, under (3.H2), one gets the thesis of Th. 3.3. It turns out that one can take as the dense subset occurring in condition (3.H1) the following class:

Definition 4.1

We denote by G^α, 0 < α ≤ 1, the class of all densities g ∈ G satisfying the following three conditions:

  1. spt(g) := {x ∈ I : g(x) > 0} is a union of a number of the cells I_k ∈ π;

  2. for each I_k ∈ π, g_{I_k} ∈ C^{0+α}(I_k); and

  3. |g(x) − g(y)| ≤ C(g) g(y) |x − y|^α for all x, y ∈ spt(g) ∩ I_k, I_k ∈ π,

    where C(g) is a constant depending on g.

The following theorem is a consequence of Th. 3.3:

Theorem 4.2

Let a family {φ_y}_{y∈Y} of C^{1+α}-smooth Markov maps satisfy conditions (4.Hy4[a, b]), (4.Hy7), and (3.H2). Then there exists exactly one P-fixed point g_0 ∈ G such that

lim_{j→∞} P^j g = g_0, for all g ∈ G.

Proof

We show that

G^α ⊆ G(C*) for an arbitrary C* > lim sup_{j→∞} ∫ ln C_{y(j)}(g) dp^j,  (4.1)

where in (4.1), for g ∈ G^α, we define

C_{y(j)}(g) := {1 + C(g) sup_{k(j+1)∈K^{j+1}} diam(I_{k(j+1)}^{y(j)})^α} · {1 + C̃_0^α C_{10,y(j)}},  (4.2)

where C̃_0^α = sup{(diam(I_k))^α : k ∈ K}.

Note that by conditions (4.Hy4[b]) and (4.Hy7) we have

lim sup_{j→∞} ∫ ln C_{y(j)}(g) dp^j ≤ C̃_0^α lim sup_{j→∞} ∫ C_{10,y(j)} dp^j < ∞,  (4.3)

that is condition (b) of Def. 3.2 holds.

It remains to show that the first condition of that definition holds. Let g ∈ G^α; then for any y(j) ∈ Y^j, k(j) ∈ K^j, j = 1, 2, …, and for any x, z ∈ I_k the following inequality holds:

g ∘ φ_{y(j)k(j)}^{−1}(x) / g ∘ φ_{y(j)k(j)}^{−1}(z) ≤ 1 + C(g) sup_{k(j+1)} diam(I_{k(j+1)}^{y(j)})^α.

Next, by condition (4.Hy4[a]), we have the following inequality (for any y(j) ∈ Y^j, k(j) ∈ K^j, j = 1, 2, …, and for any x, z ∈ I_k):

σ_{y(j)k(j)}(x) / σ_{y(j)k(j)}(z) ≤ 1 + C̃_0^α C_{10,y(j)},

where C̃_0^α = sup{(diam(I_k))^α : k ∈ K}.

Therefore, for any y(j) ∈ Y^j, j = 1, 2, …, I_k ∈ π, and for any x, z ∈ I_k we have

P_{y(j)}g(x) ≤ C_{y(j)}(g) P_{y(j)}g(z),

where the C_{y(j)}(g) are defined by (4.2). Hence condition (a) of Def. 3.2 holds for g ∈ G^α too.

The last inequality and relations (4.2), (4.3) show that (4.1) holds. This implies condition (3.H1), because G^α is dense in G.□

Remark 4.3

(Final Remark) We present two particular cases of the system (3.3) (for more details see [7], Examples (5.1) and (5.3)).

Example 4.4

The first particular case is the following: x_j(x, ω) = φ^{ξ_j(ω)}(x_{j−1}), for j = 1, 2, … (i.e., φ_i = φ^i; cf. the example in Remark 2.4). The stochastic perturbation of the system arises from not knowing the precise number of iterations. That kind of stochastic perturbation has no influence on the statistical behaviour of the deterministic system x_j = φ(x_{j−1}), for j = 1, 2, ….

Example 4.5

The second case of stochastic perturbation is the following: x_j(x, ω) = ζ_j(ω) φ(x_{j−1}), for j = 1, 2, …. In this case the stochastic perturbation enters in a multiplicative way (the so-called parametric noise). Such a perturbation can change the statistical behaviour of the system essentially, as the following example illustrates. Let φ_y(x) = y tan(x), y ∈ Y = {b, 1}, and p_1(ζ_j = b) = 1 − a, p_1(ζ_j = 1) = a, for j = 1, 2, …, where b > 1 and 0 < a < 1.

Here φ_1(x) = tan(x) is a Markov map without any invariant density [11]. However, the considered random system does have a P-invariant density, whereas its deterministic counterpart, i.e. the case Y = {1} with p_1(ζ_j = 1) = 1, has none.
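The parametric-noise mechanism of this example is easy to write down in simulation form. A sketch only (Python): the parameter values below are illustrative, and long floating-point iterations of tan are purely qualitative, since rounding errors grow quickly.

```python
import numpy as np

rng = np.random.default_rng(2)
b, a = 2.0, 0.5                  # illustrative values with b > 1 and 0 < a < 1

def step(x):
    # parametric (multiplicative) noise: x_j = zeta_j * tan(x_{j-1}),
    # with p(zeta_j = b) = 1 - a and p(zeta_j = 1) = a
    zeta = b if rng.random() < 1.0 - a else 1.0
    return zeta * np.tan(x)

x, orbit = 0.7, []
for _ in range(20):
    x = step(x)
    orbit.append(x)
print(orbit)
```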

Acknowledgement

The authors thank the referees for their valuable remarks and comments on this paper.

References

[1] P. Bugiel, Distortion inequality for the Frobenius-Perron operator and some of its consequences in ergodic theory of Markov maps in ℝ^d, Annal. Polonici Math. LXVIII.2 (1998), 125–157. doi:10.4064/ap-68-2-125-157.

[2] T. Bogenschütz and Z.S. Kowalski, A condition for mixing of skew products, Aequationes Math. 59 (2000), 222–234. doi:10.1007/s000100050122.

[3] A. Lasota and M.C. Mackey, Stochastic perturbation of dynamical systems: The weak convergence of measures, J. Math. Anal. Appl. 138 (1989), 232–248. doi:10.1016/0022-247X(89)90333-8.

[4] S. Wędrychowicz and A. Wiśnicki, On some results on the stability of Markov operators, Studia Math. 241(1) (2018), 41–55. doi:10.4064/sm8584-3-2017.

[5] O. Rechard, Invariant measures for many-one transformations, Duke Math. J. 23 (1956), 477–488. doi:10.1215/S0012-7094-56-02344-4.

[6] S.M. Ulam, A Collection of Mathematical Problems, Interscience Tracts in Pure and Appl. Math. no. 8, Interscience, New York, 1960.

[7] P. Bugiel, Ergodic properties of a randomly perturbed family of piecewise C²-diffeomorphisms in ℝ^d, Math. Z. 224 (1997), 289–311. doi:10.1007/PL00004585.

[8] A. Lasota and J. Yorke, Exact dynamical systems and the Frobenius-Perron operator, Trans. Amer. Math. Soc. 273 (1982), 375–384. doi:10.1090/S0002-9947-1982-0664049-X.

[9] R. Mañé, Ergodic Theory and Differentiable Dynamics, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer, Berlin and New York, 1987. doi:10.1007/978-3-642-70335-5.

[10] A. Rényi, Representation of real numbers and their ergodic properties, Acta Math. Acad. Sci. Hungar. 8 (1957), 477–493. doi:10.1007/BF02020331.

[11] F. Schweiger, tan x is ergodic, Proc. Amer. Math. Soc. 71 (1978), 54–56.

Received: 2020-07-26
Accepted: 2020-11-22
Published Online: 2021-01-16

© 2021 Peter Bugiel et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
