
A fractional generalization of the Dirichlet distribution and related distributions

Elvira Di Nardo, Federico Polito and Enrico Scalas

Abstract

This paper is devoted to a fractional generalization of the Dirichlet distribution. The form of the multivariate distribution is derived assuming that the n partitions of the interval [0, Wn] are independent and identically distributed random variables following the generalized Mittag-Leffler distribution. The expected value and variance of the one-dimensional marginal are derived as well as the form of its probability density function. A related generalized Dirichlet distribution is studied that provides a reasonable approximation for some values of the parameters. The relation between this distribution and other generalizations of the Dirichlet distribution is discussed. Monte Carlo simulations of the one-dimensional marginals for both distributions are presented.

MSC 2010: 60E05; 33E12; 60G22

1 Introduction

Let us consider a finite sequence of n positive random variables Z1, ..., Zn. For instance, these variables can represent the wealth of n economic agents if indebtedness is not allowed. Let us denote the sum as Wn = Z1 + … + Zn. In the wealth interpretation this is the total wealth. If we define the wealth fraction of the i-th agent as Qi = Zi/Wn, we get a partition of the interval [0, 1] represented by the sequence Q = (Q1, …, Qn) such that Q1 + … + Qn = 1 almost surely. We are particularly interested in multivariate distributions for the sequence Q whose one-dimensional marginals have heavy tails. If we further assume that the random variables Z1, …, Zn are independent and identically distributed, there is a nice and immediate relationship with point processes of renewal type. In this case, the variables Zi can be interpreted as inter-event intervals and the partial sums Wk = Z1 + … + Zk, with k ≤ n, are the epochs of the events.

In order to clarify the relationship, we start by recalling some basic facts on the time-fractional Poisson process as we are going to use and generalize it in the next section. From [1, 14] we know that the time-fractional Poisson process Nν = (Nν(t))t≥0, ν ∈ (0, 1], can be defined as a renewal process with independent and identically distributed inter-event waiting times 𝓣j, j ∈ ℕ* = {1, 2, …}, with probability density function (pdf)

\[ P(\mathcal{T}_j \in \mathrm{d}t) = \lambda t^{\nu-1} E_{\nu,\nu}(-\lambda t^{\nu})\,\mathrm{d}t, \qquad \lambda > 0,\ t > 0, \tag{1.1} \]

where

\[ E_{\alpha,\beta}(z) = \sum_{r=0}^{\infty} \frac{z^{r}}{\Gamma(\alpha r + \beta)}, \qquad z, \alpha, \beta \in \mathbb{C},\ \Re(\alpha) > 0, \tag{1.2} \]

is the two-parameter Mittag–Leffler function. Note that for ν = 1, the waiting times 𝓣j are exponentially distributed and N1 is the homogeneous Poisson process. Moreover, the Laplace transform of the pdf (1.1) takes a very compact form. Indeed, we have

\[ \int_0^{\infty} e^{-zt}\, P(\mathcal{T}_j \in \mathrm{d}t) = \frac{\lambda}{\lambda + z^{\nu}}, \qquad z > 0. \tag{1.3} \]
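For readers who want to experiment numerically, the following sketch (our own illustration, not part of the original paper; the function names and parameter values are ours) evaluates the two-parameter Mittag-Leffler function by truncating the series (1.2) and checks that for ν = 1 the waiting-time pdf (1.1) reduces to the exponential density λe^(−λt).

```python
import math

def mittag_leffler(alpha, beta, z, tol=1e-14, max_terms=200):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z), eq. (1.2),
    summed term by term; adequate only for moderate |z|."""
    total = 0.0
    for r in range(max_terms):
        term = z**r / math.gamma(alpha * r + beta)
        total += term
        if abs(term) < tol:
            break
    return total

def waiting_time_pdf(t, lam, nu):
    """Waiting-time pdf (1.1): lam * t^(nu-1) * E_{nu,nu}(-lam * t^nu)."""
    return lam * t ** (nu - 1) * mittag_leffler(nu, nu, -lam * t**nu)

t, lam = 0.8, 1.3
print(waiting_time_pdf(t, lam, nu=1.0), lam * math.exp(-lam * t))  # for nu = 1 the two values coincide
```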

Let us now indicate with Tk, k ∈ ℕ*, the random occurrence time of the k-th event of the stream of events defining Nν. From the renewal structure of Nν we readily obtain that the Laplace transform of Tk reads

\[ \int_0^{\infty} e^{-zt}\, P(T_k \in \mathrm{d}t) = \left( \frac{\lambda}{\lambda + z^{\nu}} \right)^{\!k}, \qquad z > 0, \tag{1.4} \]

which in turn corresponds to the Laplace transform of a function involving the three-parameter Mittag–Leffler function (also known as the Prabhakar function – see [7]). In particular, the three-parameter Mittag–Leffler function is defined as

\[ E_{\alpha,\beta}^{\delta}(z) = \sum_{r=0}^{\infty} \frac{z^{r}}{\Gamma(\alpha r+\beta)}\, \frac{\Gamma(\delta+r)}{r!\,\Gamma(\delta)}, \qquad z,\alpha,\beta,\delta \in \mathbb{C},\ \Re(\alpha)>0, \tag{1.5} \]

and we know by direct calculation that (see e.g. [15], formula (2.3.24))

\[ \int_0^{\infty} t^{\beta-1} e^{-zt}\, E_{\alpha,\beta}^{\delta}(\zeta t^{\alpha})\,\mathrm{d}t = \frac{z^{-\beta}}{\bigl(1-\zeta z^{-\alpha}\bigr)^{\delta}}, \tag{1.6} \]

where ℜ(α) > 0, ℜ(β) > 0, ℜ(z) > 0, and |z| > |ζ|^{1/ℜ(α)}. Using (1.6), we obtain

\[ P(T_k \in \mathrm{d}t) = \lambda^{k} t^{\nu k-1} E_{\nu,\nu k}^{k}(-\lambda t^{\nu})\,\mathrm{d}t, \qquad \lambda>0,\ t>0,\ \nu\in(0,1],\ k\in\mathbb{N}^{*}. \tag{1.7} \]

Remark 1.1

Note that, for ν = 1, the above density reduces to that of an Erlang(λ, k) distributed random variable. This can be seen by simply noticing that

\[ P(T_k \in \mathrm{d}t) = \mathrm{d}t\, \lambda^{k} t^{k-1} \sum_{r=0}^{\infty} \frac{(-\lambda t)^{r}}{\Gamma(r+k)}\, \frac{\Gamma(r+k)}{\Gamma(k)\, r!} = \mathrm{d}t\, \frac{\lambda^{k} t^{k-1} e^{-\lambda t}}{(k-1)!}, \qquad k\in\mathbb{N}^{*}. \tag{1.8} \]

Remark 1.2

The Erlang(λ, k) distribution is a special case of the Gamma(a, c) distribution. Consider a sequence Z1, …, Zn of independent random variables following Gamma distributions of parameters (a1, c), …, (an, c), respectively. It is well known that their sum Wn is still Gamma distributed, with parameters (a1 + … + an, c). Then the sequence of fractions Q1, …, Qn has a joint (n − 1)-dimensional Dirichlet distribution of parameters a1, …, an with density

\[ f_{Q}(q_1,\dots,q_{n-1}) = \frac{\Gamma(a_1+\dots+a_n)}{\Gamma(a_1)\cdots\Gamma(a_n)}\, q_1^{a_1-1}\cdots q_{n-1}^{a_{n-1}-1} \Bigl(1-\sum_{i=1}^{n-1} q_i\Bigr)^{a_n-1} \tag{1.9} \]

where q_n = 1 − (q_1 + … + q_{n−1}); moreover, the vector (Q1, …, Qn) is independent of Wn.

The proof of the results in Remark 1.2 can be found in several textbooks and lecture notes (see e.g. [2], Lemma 1.5).
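The following short sketch (ours, not from the paper; the parameter values are arbitrary) illustrates Remark 1.2 by simulation: normalized independent Gamma variables with a common rate have Dirichlet marginal means a_i/(a_1 + … + a_n), and the fractions are empirically uncorrelated with the total W_n.

```python
import numpy as np

rng = np.random.default_rng(0)
a, c, n_samples = np.array([1.5, 2.0, 3.5]), 2.0, 200_000   # arbitrary shapes a_i and rate c

z = rng.gamma(shape=a, scale=1.0 / c, size=(n_samples, a.size))  # independent Gamma(a_i, c)
w = z.sum(axis=1)                                                # W_n ~ Gamma(a_1+...+a_n, c)
q = z / w[:, None]                                               # fractions Q_i on the simplex

print(q.mean(axis=0), a / a.sum())      # sample means vs. Dirichlet means a_i / (a_1+...+a_n)
print(np.corrcoef(q[:, 0], w)[0, 1])    # close to 0, consistent with Q being independent of W_n
```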

Remark 1.3

The random variables 𝓣j have the following asymptotic behaviour for t → ∞ [8]:

\[ P(\mathcal{T}_1 > t) \sim \frac{\sin(\nu\pi)}{\pi}\, \frac{\Gamma(\nu)}{t^{\nu}}, \qquad t \gg 1; \tag{1.10} \]

therefore, their sums Tk belong to the basin of attraction of the ν-stable subordinator.

Remark 1.4

The distributions considered in the present paper belong to the class of distributions on the simplex discussed in [5] (see (2.2) below and [3]).

This paper contains the following material. Section 2 concerns the definition and properties of the fractional Dirichlet distribution. Section 3 mirrors Section 2 and is devoted to the generalized Dirichlet distribution. Section 4 explains how to simulate the fractional Dirichlet distribution and presents the results of Monte Carlo simulations in order to illustrate the relation between the fractional Dirichlet distribution and the generalized Dirichlet distribution.

2 Construction of the fractional Dirichlet distribution

Based on Remark 1.1 and Remark 1.2, we now define a generalization of the Gamma distribution and we immediately present a fractional generalization of the Dirichlet distribution.

Definition 2.1

(Fractional Gamma distribution). Let X be a positive real valued random variable with distribution

\[ \mu(\mathrm{d}x) = P(X \in \mathrm{d}x) = \lambda^{\beta} x^{\nu\beta-1} E_{\nu,\nu\beta}^{\beta}(-\lambda x^{\nu})\,\mathrm{d}x, \tag{2.1} \]

where λ > 0, x > 0, β > 0, ν ∈ (0, 1]. Then X is said to be distributed as a fractional Gamma of parameters λ, β, ν (we write X ∼ FG(λ, β, ν)); see [19]; for applications to renewal processes see [4, 16, 17, 18].

Remark 2.1

The Laplace transform of μ reads

\[ \int_0^{\infty} e^{-zx}\, \mu(\mathrm{d}x) = \left( \frac{\lambda}{\lambda+z^{\nu}} \right)^{\!\beta}, \qquad z>0. \tag{2.2} \]
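As a numerical sanity check of Definition 2.1 and Remark 2.1 (our own sketch, not part of the paper; parameter values are arbitrary and the series evaluation is reliable only for the moderate arguments used here), one can evaluate the Prabhakar function (1.5) by its series, assemble the fractional Gamma pdf (2.1), and verify that its numerical Laplace transform matches the closed form (2.2).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def ml3(alpha, beta, delta, z, terms=150):
    """Three-parameter Mittag-Leffler (Prabhakar) function (1.5), by its series."""
    r = np.arange(terms)
    log_c = gammaln(delta + r) - gammaln(delta) - gammaln(r + 1) - gammaln(alpha * r + beta)
    return float(np.sum(z**r * np.exp(log_c)))

def frac_gamma_pdf(x, lam, beta, nu):
    """Fractional Gamma pdf (2.1): lam^beta * x^(nu*beta-1) * E^beta_{nu,nu*beta}(-lam*x^nu)."""
    return lam**beta * x ** (nu * beta - 1) * ml3(nu, nu * beta, beta, -lam * x**nu)

lam, beta, nu, z = 1.0, 0.8, 0.7, 1.3
lhs = sum(quad(lambda x: np.exp(-z * x) * frac_gamma_pdf(x, lam, beta, nu), a, b)[0]
          for a, b in [(0.0, 1.0), (1.0, 20.0)])                 # e^(-zx) makes the tail negligible
print(lhs, (lam / (lam + z**nu)) ** beta)                        # the two numbers should agree, cf. (2.2)
```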

By means of (2.1), we will construct a generalization of the Dirichlet distribution. We consider n independent random variables Zi, i = 1, …, n, distributed as fractional Gamma random variables of parameters (1, βi, ν), ν ∈ (0, 1], βi > 0, i = 1, …, n, respectively. Furthermore, define the sum W = Z1 + … + Zn, set Qi = Zi/W, i = 1, …, n, and consider the transformation

\[ (Z_1,\dots,Z_n) \mapsto \Bigl( W Q_1, \dots, W Q_{n-1},\, W\bigl(1-\textstyle\sum_{i=1}^{n-1} Q_i\bigr) \Bigr). \tag{2.3} \]

Note that, from (2.2), the distribution of W is fractional Gamma as well, i.e. W ∼ FG(1, β̄, ν), where β̄ = β1 + … + βn. The joint pdf of the vector (W, Q) = (W, Q1, …, Qn–1) reads

\[
\begin{aligned}
f_{(W,Q)}(y,q_1,\dots,q_{n-1})
&= \prod_{i=1}^{n-1} (y q_i)^{\nu\beta_i-1} E_{\nu,\nu\beta_i}^{\beta_i}\!\bigl(-(y q_i)^{\nu}\bigr)\; \Bigl(y\bigl(1-\sum_{i=1}^{n-1} q_i\bigr)\Bigr)^{\nu\beta_n-1} E_{\nu,\nu\beta_n}^{\beta_n}\!\Bigl(-\bigl(y - y\sum_{i=1}^{n-1} q_i\bigr)^{\nu}\Bigr)\, y^{n-1} \\
&= y^{\nu\bar{\beta}-1} \prod_{i=1}^{n-1} q_i^{\nu\beta_i-1} E_{\nu,\nu\beta_i}^{\beta_i}\!\bigl(-(y q_i)^{\nu}\bigr)\; \Bigl(1-\sum_{i=1}^{n-1} q_i\Bigr)^{\nu\beta_n-1} E_{\nu,\nu\beta_n}^{\beta_n}\!\Bigl(-\bigl(y - y\sum_{i=1}^{n-1} q_i\bigr)^{\nu}\Bigr).
\end{aligned} \tag{2.4}
\]

The joint pdf of Q = (Q1, …, Qn–1) is then obtained by marginalization. Hence,

\[ f_{Q}(q_1,\dots,q_{n-1}) = \prod_{i=1}^{n-1} q_i^{\nu\beta_i-1}\, \Bigl(1-\sum_{i=1}^{n-1} q_i\Bigr)^{\nu\beta_n-1} \int_0^{\infty} y^{\nu\bar{\beta}-1} \prod_{i=1}^{n-1} E_{\nu,\nu\beta_i}^{\beta_i}\!\bigl(-(y q_i)^{\nu}\bigr)\; E_{\nu,\nu\beta_n}^{\beta_n}\!\Bigl(-\bigl(y-y\sum_{i=1}^{n-1} q_i\bigr)^{\nu}\Bigr)\,\mathrm{d}y. \tag{2.5} \]

Remark 2.2

On the n-dimensional simplex Δn the probability density of the random vector (Q1, …, Qn), where Q1 + … + Qn = 1 a.s., reads

\[ P\bigl((Q_1,\dots,Q_n) \in \mathrm{d}(q_1,\dots,q_n)\bigr) = \prod_{i=1}^{n} q_i^{\nu\beta_i-1} \int_0^{\infty} y^{\nu\bar{\beta}-1} \prod_{i=1}^{n} E_{\nu,\nu\beta_i}^{\beta_i}\!\bigl(-(y q_i)^{\nu}\bigr)\,\mathrm{d}y. \tag{2.6} \]

Notice that for ν = 1 the integral in the rhs of (2.6) can be easily solved and the Dirichlet(β = (β1, …, βn)) is obtained. In this case (Q1, …, Qn) is uniformly distributed on Δn for βi = 1, i = 1, …, n.

If ν ∈ (0, 1) with βi = 1, we have

\[ P\bigl((Q_1,\dots,Q_n) \in \mathrm{d}(q_1,\dots,q_n)\bigr) = \prod_{i=1}^{n} q_i^{\nu-1} \int_0^{\infty} y^{\nu n-1} \prod_{i=1}^{n} E_{\nu,\nu}\bigl(-(y q_i)^{\nu}\bigr)\,\mathrm{d}y, \tag{2.7} \]

which is symmetric but not uniform.

If we let instead βi = 1/ν (again symmetric), we obtain

\[ P\bigl((Q_1,\dots,Q_n) \in \mathrm{d}(q_1,\dots,q_n)\bigr) = \int_0^{\infty} y^{\,n-1} \prod_{i=1}^{n} E_{\nu,1}^{1/\nu}\bigl(-(y q_i)^{\nu}\bigr)\,\mathrm{d}y. \tag{2.8} \]

2.1 Properties

The derivation of the marginal moments can be done explicitly using the formulas in Section 2.2 of [5].

Proposition 2.1

Let Q = (Q1, …, Qn−1) be a random vector distributed with pdf (2.5). For each j = 1, …, n − 1, we have

\[ \mathbb{E}\,Q_j = \frac{\beta_j}{\bar{\beta}}, \tag{2.9} \]
\[ \operatorname{Var} Q_j = \frac{\beta_j(\bar{\beta}-\beta_j)}{\bar{\beta}^{2}(\bar{\beta}+1)}\,\bigl[1+\bar{\beta}(1-\nu)\bigr]. \tag{2.10} \]

Proof

By Proposition 2 of [5] we have

\[ \mathbb{E}\,Q_j = \int_0^{\infty} \Bigl[-\frac{\mathrm{d}}{\mathrm{d}z} \Bigl( \frac{1}{1+z^{\nu}} \Bigr)^{\!\beta_j}\Bigr] \Bigl( \frac{1}{1+z^{\nu}} \Bigr)^{\!\bar{\beta}-\beta_j} \mathrm{d}z = \int_0^{\infty} \frac{\beta_j \nu z^{\nu-1}}{1+z^{\nu}}\, (1+z^{\nu})^{-\bar{\beta}}\,\mathrm{d}z = \beta_j \int_0^{\infty} \frac{\mathrm{d}w}{(1+w)^{\bar{\beta}+1}} = \frac{\beta_j}{\bar{\beta}}. \tag{2.11} \]

Similarly, the second moment writes

\[
\begin{aligned}
\mathbb{E}\,Q_j^{2} &= \int_0^{\infty} z\, \frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}} \Bigl( \frac{1}{1+z^{\nu}} \Bigr)^{\!\beta_j} \Bigl( \frac{1}{1+z^{\nu}} \Bigr)^{\!\bar{\beta}-\beta_j} \mathrm{d}z
= \int_0^{\infty} z \left[ \frac{\nu^{2} z^{2\nu-2}\, \beta_j(\beta_j+1)}{(1+z^{\nu})^{\beta_j+2}} - \frac{\beta_j \nu(\nu-1) z^{\nu-2}}{(1+z^{\nu})^{\beta_j+1}} \right] (1+z^{\nu})^{-\bar{\beta}+\beta_j}\,\mathrm{d}z \\
&\overset{z^{\nu}=w}{=} \nu\,\beta_j(\beta_j+1) \int_0^{\infty} \frac{w}{(1+w)^{\bar{\beta}+2}}\,\mathrm{d}w + (1-\nu)\,\beta_j \int_0^{\infty} \frac{\mathrm{d}w}{(1+w)^{\bar{\beta}+1}}
= \frac{\nu\,\beta_j(\beta_j+1)}{\bar{\beta}(\bar{\beta}+1)} + \frac{(1-\nu)\,\beta_j}{\bar{\beta}},
\end{aligned} \tag{2.12}
\]

and hence after some computation

\[ \operatorname{Var} Q_j = \frac{\beta_j(\bar{\beta}-\beta_j)}{\bar{\beta}^{2}(\bar{\beta}+1)}\,\bigl[1+\bar{\beta}(1-\nu)\bigr]. \qquad\Box \tag{2.13} \]
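As a quick numerical cross-check of Proposition 2.1 (a sketch of ours, with arbitrary parameter values), the first-moment integral in (2.11) can be computed by quadrature and compared with the closed forms (2.9)–(2.10).

```python
import numpy as np
from scipy.integrate import quad

nu, beta_j, beta_bar = 0.7, 2.0, 5.0     # arbitrary example values (beta_bar = sum of all beta_i)

# After differentiating, the integrand of (2.11) is beta_j*nu*z^(nu-1)*(1+z^nu)^(-beta_bar-1).
integrand = lambda z: beta_j * nu * z ** (nu - 1) * (1 + z**nu) ** (-beta_bar - 1)
mean_num = sum(quad(integrand, a, b)[0] for a, b in [(0.0, 1.0), (1.0, np.inf)])

mean_cf = beta_j / beta_bar                                                  # (2.9)
var_cf = (beta_j * (beta_bar - beta_j) / (beta_bar**2 * (beta_bar + 1))
          * (1 + beta_bar * (1 - nu)))                                       # (2.10)
print(mean_num, mean_cf, var_cf)         # mean close to 0.4 from both routes; variance 0.1
```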

Remark 2.3

Notice that the first factor of the variance (2.10) is in fact the variance of a one-dimensional marginal of a Dirichlet(β) distribution. Since the second factor 1 + β̄(1 − ν) is at least 1 for ν ∈ (0, 1], it follows that the marginals are overdispersed with respect to those of a Dirichlet(β) distribution.

We now proceed by analyzing the aggregation property and therefore the marginal distributions.

Proposition 2.2

(Aggregation property). Consider the pdf defined in equation (2.5) and the random variable 𝓠 = Q_{i_1} + … + Q_{i_k}, where 1 < k < n and i_1, …, i_k are distinct indices. Then the random variable 𝓩 = W𝓠 has pdf (2.1) with β = β_{i_1} + … + β_{i_k}.

Proof

The proof is immediate considering that 𝓠 comes from the sum of independent positive random variables, each with Laplace transform given by (2.2), divided by W. Therefore 𝓩 = W𝓠 has pdf given by (2.1) with β = β_{i_1} + … + β_{i_k}.□

An immediate corollary of this result is

Corollary 2.1

(Marginal pdf). Consider the pdf in equation (2.5). Then its marginal on Qi is given by

\[ f_{Q_i}(q_i) = q_i^{\nu\beta_i-1}\, (1-q_i)^{\nu(\bar{\beta}-\beta_i)-1} \int_0^{\infty} y^{\nu\bar{\beta}-1}\, E_{\nu,\nu\beta_i}^{\beta_i}\!\bigl(-(y q_i)^{\nu}\bigr)\; E_{\nu,\nu(\bar{\beta}-\beta_i)}^{\bar{\beta}-\beta_i}\!\bigl(-(y(1-q_i))^{\nu}\bigr)\,\mathrm{d}y. \tag{2.14} \]

As the three-parameter Mittag-Leffler function has a representation as an H-function [15],

\[ H_{p,r}^{m,n}(z) = H_{p,r}^{m,n}\!\left[ z \,\middle|\, \begin{matrix} (a_i, A_i)_{1,p} \\ (b_i, B_i)_{1,r} \end{matrix} \right], \tag{2.15} \]

for suitable choices of (ai, Ai) and (bi, Bi), the marginal pdf (2.14) can be expressed in terms of an H-function too.

Proposition 2.3

If Qi is the random variable with pdf (2.14), then

\[ f_{Q_i}(q_i) = \frac{q_i^{\nu\beta_i-1}\,(1-q_i)^{-(\nu\beta_i+1)}}{\nu\,\Gamma(\beta_i)\,\Gamma(\bar{\beta}-\beta_i)}\; \lim_{\varepsilon\downarrow 0} H_{3,3}^{2,2}\!\left[ \Bigl(\frac{q_i}{1-q_i}\Bigr)^{\!\nu} \,\middle|\, \begin{matrix} (1-\beta_i,1) & (1-\bar{\beta}+\varepsilon,1) & (\nu(\varepsilon-\beta_i),\nu) \\ (0,1) & (\varepsilon-\beta_i,1) & (1-\nu\beta_i,\nu) \end{matrix} \right]. \tag{2.16} \]

Proof

In the integral (2.14) set

\[ \delta_1 = \beta_i, \qquad \delta_2 = \bar{\beta}-\beta_i, \qquad \bar{q}_1 = q_i^{\nu}, \qquad \bar{q}_2 = (1-q_i)^{\nu}. \tag{2.17} \]

Denote by I^{νβ̄}_{δ1,δ2} the resulting integral and observe that

\[ I_{\delta_1,\delta_2}^{\nu\bar{\beta}} = \frac{1}{\nu} \int_0^{\infty} y^{\delta_1+\delta_2-1}\, E_{\nu,\nu\delta_1}^{\delta_1}(-y\bar{q}_1)\, E_{\nu,\nu\delta_2}^{\delta_2}(-y\bar{q}_2)\,\mathrm{d}y. \tag{2.18} \]

For δ > 0 and ν ∈ (0, 1] we have

\[ E_{\nu,\nu\delta}^{\delta}(-z) = \frac{1}{\Gamma(\delta)}\, H_{1,2}^{1,1}\!\left[ z \,\middle|\, \begin{matrix} (1-\delta,1) \\ (0,1) \quad (1-\nu\delta,\nu) \end{matrix} \right]. \tag{2.19} \]

For η ∈ (0, β̄), by using (2.19) and Theorem 2.9 in [12], we have

\[ I_{\delta_1,\delta_2}^{\eta} = \int_0^{\infty} y^{\eta-1}\, E_{\nu,\nu\delta_1}^{\delta_1}(-y\bar{q}_1)\, E_{\nu,\nu\delta_2}^{\delta_2}(-y\bar{q}_2)\,\mathrm{d}y = \frac{\bar{q}_2^{-\eta}}{\Gamma(\delta_1)\Gamma(\delta_2)}\, H_{3,3}^{2,2}\!\left[ \frac{\bar{q}_1}{\bar{q}_2} \,\middle|\, \begin{matrix} (1-\delta_1,1) & (1-\eta,1) & (\nu(\delta_2-\eta),\nu) \\ (0,1) & (\delta_2-\eta,1) & (1-\nu\delta_1,\nu) \end{matrix} \right]. \tag{2.20} \]

Set η = β̄ − ε in (2.20) with ε ∈ (0, β̄), and use (2.17) to recover I^{β̄−ε}_{βi, β̄−βi}. If ε is sufficiently small, the poles −l, βi − ε − l, (νβi − 1 − l)/ν, l = 0, 1, 2, …, do not coincide with the poles βi + k, β̄ − ε + k, (ν(ε − βi) + k)/ν, k = 0, 1, 2, …. Then, according to Theorem 1.1 in [12], the H-function in (2.16) makes sense for all qi ∈ (0, 1) as A1 + A2 − A3 + B1 + B2 − B3 = 4 − 2ν > 0. The claim follows by taking the limit as ε ↓ 0 of I^{β̄−ε}_{βi, β̄−βi}.□

Remark 2.4

By using Properties 2.1, 2.3 and 2.5 of [12], the H-function in (2.16) can be rewritten interchanging βi with β̄βi and qi with 1–qi, which corresponds to commuting the two Mittag-Leffler functions in (2.14).

According to Theorems 1.3 and 1.4 in [12], since

\[ \sum_{i=1}^{3}(B_i-A_i)=0, \qquad \prod_{i=1}^{3} B_i^{B_i} A_i^{-A_i} = 1, \qquad \sum_{i=1}^{3}(b_i-a_i) = \bar{\beta}(1-\nu)+\nu(\bar{\beta}-\varepsilon)-1 > -1, \tag{2.21} \]

the H-function in (2.16) has a power series expansion. The following propositions rely on this property.

Proposition 2.4

For qi < 1/2 and βi not a positive integer

\[ f_{Q_i}(q_i) = \frac{q_i^{\nu\beta_i-1}\,(1-q_i)^{-(\nu\beta_i+1)}}{\nu\,\Gamma(\beta_i)\,\Gamma(\bar{\beta}-\beta_i)} \left[ \frac{\Gamma(\bar{\beta})\,\Gamma(\beta_i)\,\Gamma(-\beta_i)}{\Gamma(\nu\beta_i)\,\Gamma(-\nu\beta_i)} + \sum_{k=1}^{\infty} (-1)^{k} D_k \Bigl( \frac{q_i}{1-q_i} \Bigr)^{\!\nu k} \right], \tag{2.22} \]

where

\[ D_k = \frac{\Gamma(\bar{\beta}+k)\,\Gamma(-\beta_i-k)\,\Gamma(\beta_i+k)}{k!\,\Gamma(\nu(\beta_i+k))\,\Gamma(-\nu(\beta_i+k))} + \frac{(1-q_i)^{\nu\beta_i}}{q_i^{\nu\beta_i}}\, \frac{\Gamma(\beta_i-k)\,\Gamma(\bar{\beta}-\beta_i+k)}{\Gamma(-\nu k)\,\Gamma(\nu k)}. \tag{2.23} \]

Proof

Consider the H-function H^{2,2}_{3,3} in (2.16). If βi is not a positive integer, we have B1(b2 + l) ≠ B2(b1 + k) for l, k = 0, 1, 2, …. Thus, thanks to (2.21), from Theorem 1.3 of [12], H^{2,2}_{3,3} is an analytic function of (qi/(1 − qi))^ν and has the following power series expansion for qi < 1/2:

\[
\begin{aligned}
&\sum_{k=0}^{\infty} \frac{\Gamma(b_2-k)\,\Gamma(1-a_1+k)\,\Gamma(1-a_2+k)}{\Gamma(1-b_3+\nu k)\,\Gamma(a_3-\nu k)}\, \frac{(-1)^{k}}{k!} \Bigl(\frac{q_i}{1-q_i}\Bigr)^{\!\nu k} \\
&\quad + \Bigl(\frac{q_i}{1-q_i}\Bigr)^{\!\nu b_2} \sum_{k=0}^{\infty} \frac{\Gamma(-b_2-k)\,\Gamma(1-a_1+b_2+k)\,\Gamma(1-a_2+b_2+k)}{\Gamma(1-b_3+\nu(b_2+k))\,\Gamma(a_3-\nu(b_2+k))}\, \frac{(-1)^{k}}{k!} \Bigl(\frac{q_i}{1-q_i}\Bigr)^{\!\nu k}.
\end{aligned} \tag{2.24}
\]

The claim follows replacing a1 = 1 − βi, a2 = 1 − β̄ + ε, a3 = ν(ε − βi), b2 = ε − βi and b3 = 1 − νβi in (2.24) and taking the limit as ε ↓ 0.□

Remark 2.5

If ν(βi + k) is not a positive integer for k = 0, 1, 2, …, then, thanks to the reflection formula for the gamma function [12], we can simplify the expansion in (2.22) using

\[ \frac{\Gamma(-\beta_i-k)\,\Gamma(\beta_i+k)}{\Gamma(\nu(\beta_i+k))\,\Gamma(-\nu(\beta_i+k))} = \nu\, \frac{\sin\bigl(\pi\nu(\beta_i+k)\bigr)}{\sin\bigl(\pi(\beta_i+k)\bigr)}. \tag{2.25} \]

Similarly we get Γ(−νk)Γ(νk) = −π/(νk sin(πνk)) if νk is not a positive integer.

Proposition 2.5

For qi > 1/2 and β̄ − βi not a positive integer

\[ f_{Q_i}(q_i) = \frac{q_i^{-(\nu(\bar{\beta}-\beta_i)+1)}\,(1-q_i)^{\nu(\bar{\beta}-\beta_i)-1}}{\nu\,\Gamma(\beta_i)\,\Gamma(\bar{\beta}-\beta_i)} \left[ \frac{\Gamma(\bar{\beta})\,\Gamma(-(\bar{\beta}-\beta_i))\,\Gamma(\bar{\beta}-\beta_i)}{\Gamma(\nu(\bar{\beta}-\beta_i))\,\Gamma(-\nu(\bar{\beta}-\beta_i))} + \sum_{k=1}^{\infty} (-1)^{k} D_k \Bigl( \frac{1-q_i}{q_i} \Bigr)^{\!\nu k} \right], \tag{2.26} \]

where

\[ D_k = \frac{\Gamma(\bar{\beta}+k)}{k!}\, \frac{\Gamma(-(\bar{\beta}-\beta_i+k))\,\Gamma(\bar{\beta}-\beta_i+k)}{\Gamma(\nu(\bar{\beta}-\beta_i+k))\,\Gamma(-\nu(\bar{\beta}-\beta_i+k))} + \Bigl( \frac{q_i}{1-q_i} \Bigr)^{\!\nu(\bar{\beta}-\beta_i)} \frac{\Gamma(\beta_i+k)\,\Gamma(\bar{\beta}-\beta_i-k)}{\Gamma(-\nu k)\,\Gamma(\nu k)}. \tag{2.27} \]

Proof

Consider again the H-function H^{2,2}_{3,3} in (2.16). If β̄ − βi is not a positive integer, we have A1(1 − a2 + l) ≠ A2(1 − a1 + k) for l, k = 0, 1, 2, …. From (2.21) and Theorem 1.4 of [12], H^{2,2}_{3,3} is an analytic function of (qi/(1 − qi))^ν and has the following power series expansion for qi > 1/2:

\[
\begin{aligned}
&\sum_{k=0}^{\infty} \frac{\Gamma(1-a_1+k)\,\Gamma(b_2+1-a_1+k)\,\Gamma(a_1-a_2-k)}{\Gamma(a_3+\nu(1-a_1+k))\,\Gamma(1-b_3-\nu(1-a_1+k))}\, \frac{(-1)^{k}}{k!} \Bigl(\frac{q_i}{1-q_i}\Bigr)^{\!\nu(a_1-1-k)} \\
&\quad + \sum_{k=0}^{\infty} \frac{\Gamma(1-a_2+k)\,\Gamma(b_2+1-a_2+k)\,\Gamma(a_2-a_1-k)}{\Gamma(a_3+\nu(1-a_2+k))\,\Gamma(1-b_3-\nu(1-a_2+k))}\, \frac{(-1)^{k}}{k!} \Bigl(\frac{q_i}{1-q_i}\Bigr)^{\!\nu(a_2-1-k)}.
\end{aligned} \tag{2.28}
\]

The claim follows replacing a1 = 1 − βi, a2 = 1 − β̄ + ε, a3 = ν(ε − βi), b2 = ε − βi and b3 = 1 − νβi in (2.28) and taking the limit as ε ↓ 0.□

By using the reflection formula for the gamma function, the expansion (2.26) can also be simplified, in the same way as described in Remark 2.5.

3 An alternative generalization

We now give an alternative generalization with desirable properties which in addition approximates the fractional Dirichlet distribution with density (2.5) for appropriate values of the parameters.

Let us thus consider a random vector Q = (Q1, …, Qn–1), n ≥ 2, with the following probability density function:

\[ f_{Q}(q_1,\dots,q_{n-1}) = \prod_{i=1}^{n-1} q_i^{\nu\beta_i-1}\, \Bigl(1-\sum_{i=1}^{n-1} q_i\Bigr)^{\nu\beta_n-1}\; \frac{\nu^{n-1}\,\Gamma(\bar{\beta})}{\Gamma(\beta_1)\cdots\Gamma(\beta_n)}\, \Bigl( q_1^{\nu}+\dots+q_{n-1}^{\nu}+\bigl(1-\textstyle\sum_{i=1}^{n-1} q_i\bigr)^{\nu} \Bigr)^{-\bar{\beta}}, \tag{3.1} \]

for q1, …, qn–1 ∈ (0, 1), q1 + … + qn–1 < 1, ν > 0, βi > 0, i = 1, …, n, β̄ = β1 + … + βn.

For the sake of clarity we check that fQ(q1, …, qn–1) as given in (3.1) is a genuine probability density function. This will follow by proving that

\[ \frac{\nu^{n-1}\,\Gamma(\bar{\beta})}{\Gamma(\beta_1)\cdots\Gamma(\beta_n)} \tag{3.2} \]

in the rhs of (3.1) plays the role of a normalization coefficient.

Theorem 3.1

We have

\[ \int_0^{1} \mathrm{d}q_1 \cdots \int_0^{1-q_1-\dots-q_{n-2}} \mathrm{d}q_{n-1}\, \prod_{i=1}^{n-1} q_i^{\nu\beta_i-1} \Bigl(1-\sum_{i=1}^{n-1} q_i\Bigr)^{\nu\beta_n-1} \Bigl( q_1^{\nu}+\dots+\bigl(1-\textstyle\sum_{i=1}^{n-1} q_i\bigr)^{\nu} \Bigr)^{-\bar{\beta}} = \frac{\Gamma(\beta_1)\cdots\Gamma(\beta_n)}{\nu^{n-1}\,\Gamma(\bar{\beta})}. \tag{3.3} \]

Proof

Observe that the lhs of (3.3) can be rewritten as

\[ I = \int_0^{1} \mathrm{d}q_1 \cdots \int_0^{1-q_1-\dots-q_{n-2}} \mathrm{d}q_{n-1}\, \prod_{i=1}^{n-1} q_i^{-1}\, (1-\bar{q})^{-1} \prod_{i=1}^{n-1} \Bigl( \frac{q_i}{1-\bar{q}} \Bigr)^{\!\nu\beta_i} \Bigl[ 1+\sum_{i=1}^{n-1} \Bigl( \frac{q_i}{1-\bar{q}} \Bigr)^{\!\nu} \Bigr]^{-\bar{\beta}}, \tag{3.4} \]

where q̄ = q1 + ⋯ + qn−1. Apply the change of variables (1 − q̄)/qi = zi for i = 1, …, n − 1, in multivariate integration. Thus, we have

\[ q_i = \frac{\prod_{j\in\mathcal{I}_{n-1,i}} z_j}{\prod_{j=1}^{n-1} z_j + \sum_{k=1}^{n-1} \prod_{j\in\mathcal{I}_{n-1,k}} z_j}, \quad i=1,\dots,n-1, \qquad J = \frac{\Bigl(\prod_{j=1}^{n-1} z_j\Bigr)^{n-2}}{\Bigl(\prod_{j=1}^{n-1} z_j + \sum_{k=1}^{n-1} \prod_{j\in\mathcal{I}_{n-1,k}} z_j\Bigr)^{n}}, \tag{3.5} \]

where 𝓘n–1,k = {1, …, k – 1, k + 1, …, n – 1} for k = 1, …, n – 1, and J is the Jacobian of the transformation. Note that

\[ 1-\bar{q} = \frac{\prod_{j=1}^{n-1} z_j}{\prod_{j=1}^{n-1} z_j + \sum_{k=1}^{n-1} \prod_{j\in\mathcal{I}_{n-1,k}} z_j}. \tag{3.6} \]

By putting (3.5) and (3.6) in (3.4) we have

\[ I = \int_0^{\infty} \mathrm{d}z_1 \cdots \int_0^{\infty} \mathrm{d}z_{n-1}\, \prod_{j=1}^{n-1} z_j^{-\nu\beta_j-1} \Bigl[ 1 + \sum_{j=1}^{n-1} \Bigl(\frac{1}{z_j}\Bigr)^{\!\nu} \Bigr]^{-\bar{\beta}}. \tag{3.7} \]

Apply the change of variables z_i^ν = t_i for i = 1, ..., n − 1, in multivariate integration. Then, we have z_i = t_i^{1/ν} for i = 1, ..., n − 1, and ν^{1−n} ∏_{i=1}^{n−1} t_i^{1/ν−1} is the Jacobian of this transformation. From (3.7)

\[ I = \frac{1}{\nu^{n-1}} \int_0^{\infty} \mathrm{d}t_1 \cdots \int_0^{\infty} \mathrm{d}t_{n-2}\; I_{n-2}(t_1,\dots,t_{n-2}), \tag{3.8} \]

where

\[ I_{n-2}(t_1,\dots,t_{n-2}) = \int_0^{\infty} \mathrm{d}t_{n-1}\, \prod_{j=1}^{n-1} t_j^{-\beta_j-1} \Bigl[ 1 + \sum_{j=1}^{n-1} \frac{1}{t_j} \Bigr]^{-\bar{\beta}}. \tag{3.9} \]

Observe that In–2(t1, …, tn–2) in (3.9) can be rewritten as

\[
\begin{aligned}
I_{n-2}(t_1,\dots,t_{n-2}) &= \int_0^{\infty} \mathrm{d}t_{n-1}\, \prod_{j=1}^{n-1} t_j^{\bar{\beta}-\beta_j-1} \Bigl[ \prod_{j=1}^{n-1} t_j + \sum_{k=1}^{n-1} \prod_{j\in\mathcal{I}_{n-1,k}} t_j \Bigr]^{-\bar{\beta}} \\
&= \prod_{i=1}^{n-2} t_i^{-\beta_i-1} \int_0^{\infty} \mathrm{d}t_{n-1}\, \Bigl[ 1 + t_{n-1}\, \frac{\prod_{i=1}^{n-2} t_i + \sum_{k=1}^{n-2} \prod_{i\in\mathcal{I}_{n-2,k}} t_i}{\prod_{i=1}^{n-2} t_i} \Bigr]^{-\bar{\beta}}\, t_{n-1}^{\bar{\beta}-\beta_{n-1}-1}.
\end{aligned} \tag{3.10}
\]

With the change of variable $z = t_{n-1}\bigl(\prod_{i=1}^{n-2} t_i + \sum_{k=1}^{n-2}\prod_{i\in\mathcal{I}_{n-2,k}} t_i\bigr)/\prod_{i=1}^{n-2} t_i$ and by recalling the Mellin transform of (1 + z)^{−β̄}, we recover

\[ I_{n-2}(t_1,\dots,t_{n-2}) = \frac{\prod_{i=1}^{n-2} t_i^{\bar{\beta}-\beta_i-\beta_{n-1}-1}}{\Bigl(\prod_{i=1}^{n-2} t_i + \sum_{k=1}^{n-2} \prod_{i\in\mathcal{I}_{n-2,k}} t_i\Bigr)^{\bar{\beta}-\beta_{n-1}}}\; \frac{\Gamma(\bar{\beta}-\beta_{n-1})\,\Gamma(\beta_{n-1})}{\Gamma(\bar{\beta})}. \tag{3.11} \]

Now, replace In–2(t1, …, tn–2) in (3.8) with the closed form (3.11). This leads us to

\[ I = \frac{1}{\nu^{n-1}}\, \frac{\Gamma(\bar{\beta}-\beta_{n-1})\,\Gamma(\beta_{n-1})}{\Gamma(\bar{\beta})} \int_0^{\infty} \mathrm{d}t_1 \cdots \int_0^{\infty} \mathrm{d}t_{n-3}\; I_{n-3}(t_1,\dots,t_{n-3}), \tag{3.12} \]

where

\[ I_{n-3}(t_1,\dots,t_{n-3}) = \int_0^{\infty} \mathrm{d}t_{n-2}\, \prod_{j=1}^{n-2} t_j^{\bar{\beta}-\beta_{n-1}-\beta_j-1} \Bigl[ \prod_{j=1}^{n-2} t_j + \sum_{k=1}^{n-2} \prod_{j\in\mathcal{I}_{n-2,k}} t_j \Bigr]^{-(\bar{\beta}-\beta_{n-1})}. \tag{3.13} \]

By comparing the integral in (3.13) with that in (3.10), we observe that the former has the same expression as the latter with β̄ replaced by β̄ − βn−1. Thus, by resorting to the same arguments employed to compute In−2(t1, …, tn−2) we recover

\[ I_{n-3}(t_1,\dots,t_{n-3}) = \frac{\prod_{i=1}^{n-3} t_i^{\bar{\beta}-\beta_{n-1}-\beta_{n-2}-\beta_i-1}}{\Bigl(\prod_{i=1}^{n-3} t_i + \sum_{k=1}^{n-3} \prod_{i\in\mathcal{I}_{n-3,k}} t_i\Bigr)^{\bar{\beta}-\beta_{n-1}-\beta_{n-2}}}\; \frac{\Gamma(\bar{\beta}-\beta_{n-1}-\beta_{n-2})\,\Gamma(\beta_{n-2})}{\Gamma(\bar{\beta}-\beta_{n-1})}. \tag{3.14} \]

Replacing In–3(t1, …, tn–3) in (3.12) with the closed form (3.14) we get

\[ I = \frac{1}{\nu^{n-1}}\, \frac{\Gamma(\bar{\beta}-\beta_{n-1}-\beta_{n-2})\,\Gamma(\beta_{n-1})\,\Gamma(\beta_{n-2})}{\Gamma(\bar{\beta})} \int_0^{\infty} \mathrm{d}t_1 \cdots \int_0^{\infty} \mathrm{d}t_{n-4}\; I_{n-4}(t_1,\dots,t_{n-4}), \tag{3.15} \]

where

\[ I_{n-4}(t_1,\dots,t_{n-4}) = \int_0^{\infty} \mathrm{d}t_{n-3}\, \prod_{j=1}^{n-3} t_j^{\bar{\beta}-\beta_{n-1}-\beta_{n-2}-\beta_j-1} \Bigl[ \prod_{j=1}^{n-3} t_j + \sum_{k=1}^{n-3} \prod_{j\in\mathcal{I}_{n-3,k}} t_j \Bigr]^{-(\bar{\beta}-\beta_{n-1}-\beta_{n-2})}, \tag{3.16} \]

which indeed has the same form as In−2 and In−3 with suitable updates of β̄. The result follows by iterating, for i = 4 up to i = n − 1, the computation of

\[ I = \frac{1}{\nu^{n-1}}\, \frac{\Gamma\bigl(\bar{\beta}-\sum_{k=n-i+2}^{n-1}\beta_k\bigr) \prod_{k=n-i+2}^{n-1}\Gamma(\beta_k)}{\Gamma(\bar{\beta})} \int_0^{\infty}\mathrm{d}t_1 \cdots \int_0^{\infty}\mathrm{d}t_{n-i}\; I_{n-i}(t_1,\dots,t_{n-i}) \tag{3.17} \]

with

\[ I_{n-i}(t_1,\dots,t_{n-i}) = \int_0^{\infty}\mathrm{d}t_{n-i+1}\, \prod_{j=1}^{n-i+1} t_j^{\bar{\beta}-\sum_{k=n-i+2}^{n-1}\beta_k-\beta_j-1} \Bigl[ \prod_{j=1}^{n-i+1} t_j + \sum_{k=1}^{n-i+1} \prod_{j\in\mathcal{I}_{n-i+1,k}} t_j \Bigr]^{-\bigl(\bar{\beta}-\sum_{k=n-i+2}^{n-1}\beta_k\bigr)}. \tag{3.18} \]

We obtain the closed form expression

\[ I_{n-i}(t_1,\dots,t_{n-i}) = \frac{\prod_{j=1}^{n-i} t_j^{\bar{\beta}-\sum_{k=n-i+1}^{n-1}\beta_k-\beta_j-1}}{\Bigl(\prod_{j=1}^{n-i} t_j + \sum_{k=1}^{n-i} \prod_{j\in\mathcal{I}_{n-i,k}} t_j\Bigr)^{\bar{\beta}-\sum_{k=n-i+1}^{n-1}\beta_k}}\; \frac{\Gamma\bigl(\bar{\beta}-\sum_{k=n-i+1}^{n-1}\beta_k\bigr)\,\Gamma(\beta_{n-i+1})}{\Gamma\bigl(\bar{\beta}-\sum_{k=n-i+2}^{n-1}\beta_k\bigr)} \tag{3.19} \]

with the convention $\sum_{k=1}^{n-i}\prod_{j\in\mathcal{I}_{n-i,k}} t_j = 1$ for i = n − 1. The last replacement, with I1(t1), gives

\[ I = \frac{1}{\nu^{n-1}}\, \frac{\Gamma(\beta_1+\beta_n)\,\Gamma(\beta_2)\cdots\Gamma(\beta_{n-1})}{\Gamma(\bar{\beta})} \int_0^{\infty} \mathrm{d}t_1\, (1+t_1)^{-(\beta_1+\beta_n)}\, t_1^{\beta_n-1}, \tag{3.20} \]

from which the claimed result follows by observing that

\[ \int_0^{\infty} \mathrm{d}t_1\, t_1^{\beta_n-1}\, (1+t_1)^{-(\beta_1+\beta_n)} = \Gamma(\beta_n)\,\Gamma(\beta_1)/\Gamma(\beta_1+\beta_n). \qquad\Box \]

Remark 3.1

Alternatively, in (3.7) use the transformation z_1^{−1} = t_1, …, z_{n−1}^{−1} = t_{n−1}. Then, we have (cf. [9], no. 4.638/3, p. 649)

\[ I = \int_0^{\infty}\mathrm{d}t_1\cdots\int_0^{\infty}\mathrm{d}t_{n-1}\, \prod_{j=1}^{n-1} t_j^{\nu\beta_j-1}\, \bigl(1+t_1^{\nu}+\dots+t_{n-1}^{\nu}\bigr)^{-\bar{\beta}} = \frac{\Gamma(\beta_1)\cdots\Gamma(\beta_{n-1})}{\nu^{n-1}}\, \frac{\Gamma(\bar{\beta}-\beta_1-\dots-\beta_{n-1})}{\Gamma(\bar{\beta})}, \tag{3.21} \]

which is in agreement with (3.3).

Remark 3.2

On the n-dimensional simplex Δn the pdf of the random vector (Q1, …, Qn), where Q1 + … + Qn = 1 a.s., reads

\[ P\bigl((Q_1,\dots,Q_n)\in\mathrm{d}(q_1,\dots,q_n)\bigr) = \frac{\nu^{n-1}\,\Gamma(\bar{\beta})}{\Gamma(\beta_1)\cdots\Gamma(\beta_n)}\, \bigl(q_1^{\nu}+\dots+q_n^{\nu}\bigr)^{-\bar{\beta}} \prod_{i=1}^{n} q_i^{\nu\beta_i-1} = \frac{\nu^{n-1}}{B(\boldsymbol{\beta})} \prod_{i=1}^{n} q_i^{-1} \prod_{i=1}^{n} \Bigl( \frac{q_i^{\nu}}{\sum_{j=1}^{n} q_j^{\nu}} \Bigr)^{\!\beta_i}, \tag{3.22} \]

with B(β) = Γ(β1)⋯Γ(βn)/Γ(β1 + … + βn). In short we write (Q1, …, Qn) ∼ GDIR(ν, β).

Notice that for ν = 1 the Dirichlet(β) is obtained. In this case the random vector (Q1, …, Qn) is uniformly distributed on Δn for βi = 1, i = 1, …, n. If instead we only let βi = 1,

\[ P\bigl((Q_1,\dots,Q_n)\in\mathrm{d}(q_1,\dots,q_n)\bigr) = \nu^{n-1}\,(n-1)!\, \bigl(q_1^{\nu}+\dots+q_n^{\nu}\bigr)^{-n} \prod_{i=1}^{n} q_i^{\nu-1}, \tag{3.23} \]

which is symmetric but clearly not uniform. If βi = 1/ν (again symmetric) we obtain

\[ P\bigl((Q_1,\dots,Q_n)\in\mathrm{d}(q_1,\dots,q_n)\bigr) = \nu^{n-1}\, \frac{\Gamma(n/\nu)}{\Gamma(1/\nu)^{n}}\, \bigl(q_1^{\nu}+\dots+q_n^{\nu}\bigr)^{-n/\nu}. \tag{3.24} \]

Remark 3.3

The alternative generalized Dirichlet distribution considered in this section (i.e. that with pdf (3.1)) can be derived by the same procedure described in Section 2 with (Zi)^ν distributed as Gamma(βi, 1), i = 1, …, n. Note that the random variable X such that X^ν, ν > 0, is Gamma(α, 1)-distributed, α > 0, is a special case of the generalized Gamma distribution (see e.g. [11], Section 8.7). In particular, X has pdf

\[ f_X(x) = \frac{\nu\, x^{\nu\alpha-1}\, e^{-x^{\nu}}}{\Gamma(\alpha)}\, \mathbb{1}_{\mathbb{R}^{+}}(x), \tag{3.25} \]

and Laplace transform (from (2.3.23) of [15] and the definition of Wright functions)

\[ \mathbb{E}\, e^{-zX} = \frac{\nu\, z^{-\nu\alpha}}{\Gamma(\alpha)} \sum_{k=0}^{\infty} \frac{(-z^{-\nu})^{k}}{k!}\, \Gamma\bigl(\nu(k+\alpha)\bigr). \tag{3.26} \]

Remark 3.4

The generalized Dirichlet pdf (3.1) turns out to be a reasonably good approximation of the fractional Dirichlet pdf (2.5) for βi < 1 (see for example Fig. 3). A partial explanation is that for λ = 1, β < 1, and ν ∈ (0, 1] the fractional Gamma pdf (2.1) has a rather similar shape to the generalized Gamma pdf (3.25), as Fig. 1 shows for β = 0.2 and β = 0.4. For βi > 1, the fractional Dirichlet pdf exhibits a behaviour different from the generalized Dirichlet pdf (see for example Fig. 2). Indeed, Fig. 1 shows a different shape of the fractional Gamma pdf compared to the generalized Gamma pdf for β = 2 and β = 3.

Figure 1: Comparison of the fractional Gamma pdf (2.1) (red line) versus the generalized Gamma pdf (3.25) (blue line) for ν = 0.7 and β = 0.2 in (a) and β = 0.4 in (b), as in Fig. 3, β = 2 in (c) and β = 3 in (d), as in Fig. 2.

Figure 2: Pdf’s for ν = 0.7, β1 = 2, β2 = 3. The black circles represent the results of a Monte Carlo simulation for the fractional Dirichlet distribution and the blue line is the corresponding theoretical value from equation (2.5). The red circles come from a Monte Carlo simulation of the generalized Dirichlet distribution and the green curve is the plot of the theoretical pdf.

Proposition 3.1

(Conjugate distribution). The generalized Dirichlet distribution GDIR(ν, β) (with pdf (3.22)) is the conjugate prior to a re-parametrized Multinomial distribution with pmf

\[ \frac{N!}{x_1!\cdots x_n!}\, \frac{\prod_{i=1}^{n} q_i^{\nu x_i}}{\bigl(q_1^{\nu}+\dots+q_n^{\nu}\bigr)^{\sum_{i=1}^{n} x_i}}, \tag{3.27} \]

where N ∈ ℕ+, xi ∈ {0, …, N} with x1 + … + xn = N, i = 1, …, n, n ∈ ℕ+, q1 + … + qn = 1, ν > 0. In particular, if the prior is GDIR(ν, β) and the likelihood is as in (3.27), then the posterior becomes GDIR(ν, β + x).

Proof

The proof is a straightforward application of Bayes' theorem. The reparametrization in (3.27) is such that pi = qi^ν/(q1^ν + … + qn^ν), i = 1, …, n, are the event probabilities (i.e. p1 + … + pn = 1).□
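A minimal numerical illustration of the conjugacy (our own sketch; parameter and count values are arbitrary): the log-prior GDIR(ν, β) density (3.22) plus the log-likelihood (3.27) differs from the log-density of GDIR(ν, β + x) only by a constant that does not depend on q, so the quantity printed below is the same for every q on the simplex.

```python
import numpy as np
from scipy.special import gammaln

def log_gdir(q, nu, beta):
    """Log of the GDIR(nu, beta) density (3.22) on the simplex."""
    q, beta = np.asarray(q, float), np.asarray(beta, float)
    log_norm = (beta.size - 1) * np.log(nu) + gammaln(beta.sum()) - gammaln(beta).sum()
    return log_norm - beta.sum() * np.log((q**nu).sum()) + ((nu * beta - 1) * np.log(q)).sum()

def log_lik(x, q, nu):
    """Log of the re-parametrized multinomial pmf (3.27), omitting the multinomial coefficient."""
    x, q = np.asarray(x, float), np.asarray(q, float)
    return nu * (x * np.log(q)).sum() - x.sum() * np.log((q**nu).sum())

nu, beta, x = 0.7, np.array([2.0, 3.0, 1.5]), np.array([4.0, 1.0, 2.0])
for q in ([0.2, 0.5, 0.3], [0.6, 0.1, 0.3]):
    print(log_gdir(q, nu, beta) + log_lik(x, q, nu) - log_gdir(q, nu, beta + x))  # same constant
```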

3.1 Representation in terms of Dirichlet random variables

In order to derive a meaningful representation in terms of Dirichlet random variables for the random vector Q, we first recall the definitions of two related classes of random vectors (see [10]).

Definition 3.1

(Liouville distribution of the first kind). Let X = (X1, …, Xn) be an absolutely continuous random vector supported on the n-dimensional positive orthant ℝ+^n = {(x1, …, xn) : xi > 0 for each i = 1, …, n}. It is said to have Liouville distribution of the first kind if its joint pdf reads

\[ f_{\boldsymbol{X}}(x_1,\dots,x_n) \propto f\Bigl(\sum_{i=1}^{n} x_i\Bigr) \prod_{i=1}^{n} x_i^{a_i-1}, \tag{3.28} \]

where ai > 0, i = 1, …, n, and f is a positive continuous function satisfying ∫_{ℝ+} y^{a−1} f(y) dy < ∞, with a = a1 + … + an. Further, we write X ∼ L_n^{(1)}[f(⋅); a1, …, an].

Definition 3.2

(Liouville distribution of the second kind). Let Z = (Z1, …, Zn) be an absolutely continuous random vector supported on 𝒮n = {(z1, …, zn) : zi > 0 for each i = 1, …, n, z1 + … + zn < 1}. It is said to have Liouville distribution of the second kind if its joint pdf reads

\[ f_{\boldsymbol{Z}}(z_1,\dots,z_n) \propto g\Bigl(\sum_{i=1}^{n} z_i\Bigr) \prod_{i=1}^{n} z_i^{c_i-1}, \tag{3.29} \]

where ci > 0, i = 1, …, n, and g is a positive continuous function satisfying ∫_0^1 y^{c−1} g(y) dy < ∞, with c = c1 + … + cn. Further, we write Z ∼ L_n^{(2)}[g(⋅); c1, …, cn].

Remark 3.5

If we let f(t) = (1 + t)^{−(a+a_{n+1})}, t > 0, a_{n+1} > 0, in (3.28), then X is distributed as an inverted Dirichlet.

If, in (3.29), we choose g(t) = (1 − t)^{c_{n+1}−1}, 0 < t < 1, c_{n+1} > 0, we have that Z is distributed as a Dirichlet.

Proposition 3.1 of [10] tells us the relationship between Liouville distributions of the first and of the second kind (and hence between the Dirichlet and the inverted Dirichlet). Specifically, if Z ∼ L_n^{(2)}[g(⋅); c1, …, cn] and we consider the transformation

\[ X_i = \frac{Z_i}{1-\sum_{j=1}^{n} Z_j}, \qquad i=1,\dots,n, \tag{3.30} \]

then X ∼ L_n^{(1)}[f(⋅); c1, …, cn], where

\[ f(t) = (1+t)^{-(c+1)}\, g\Bigl( \frac{t}{1+t} \Bigr), \qquad t>0. \tag{3.31} \]

Plainly, the converse relation is true as well: inverting (3.31) (letting h = t/(1 + t)) we have

\[ g(h) = \frac{1}{(1-h)^{c+1}}\, f\Bigl( \frac{h}{1-h} \Bigr), \qquad 0<h<1. \tag{3.32} \]

As a simple example, considering f(t) = (1 + t)^{−(c+c_{n+1})}, t > 0 (inverted Dirichlet), we readily obtain g(h) = (1 − h)^{c_{n+1}−1} (Dirichlet).

Now, by exploiting the above definition we prove the following distributional representation for Q.

Proposition 3.2

Let Q = (Q1, …, Qn−1) be distributed with pdf (3.1). Then the random vector M = (M1, …, Mn−1) such that

\[ M_i = \frac{\Bigl( \dfrac{Q_i}{1-\sum_{j=1}^{n-1} Q_j} \Bigr)^{\!\nu}}{1+\sum_{j=1}^{n-1} \Bigl( \dfrac{Q_j}{1-\sum_{l=1}^{n-1} Q_l} \Bigr)^{\!\nu}}, \qquad i=1,\dots,n-1, \tag{3.33} \]

is distributed as a Dirichlet(β = (β1, …, βn)).

Conversely, if M ∼ Dirichlet(β) we have that Q = (Q1, …, Qn−1) such that

\[ Q_i = \frac{\Bigl( \dfrac{M_i}{1-\sum_{j=1}^{n-1} M_j} \Bigr)^{\!1/\nu}}{1+\sum_{j=1}^{n-1} \Bigl( \dfrac{M_j}{1-\sum_{l=1}^{n-1} M_l} \Bigr)^{\!1/\nu}}, \qquad i=1,\dots,n-1, \tag{3.34} \]

is distributed with pdf (3.1).

Proof

Let us define the random vector Y = (Y1, …, Yn–1) such that

\[ Y_i = \Bigl( \frac{Q_i}{1-\sum_{j=1}^{n-1} Q_j} \Bigr)^{\!\nu}, \qquad i=1,\dots,n-1, \tag{3.35} \]

and let β* = β1 + … + βn−1. Combining the transformations in the proof of Theorem 3.1 and of Remark 3.1 we see that Y has pdf

\[ f_{\boldsymbol{Y}}(y_1,\dots,y_{n-1}) = \frac{\Gamma(\beta^{*}+\beta_n)}{\Gamma(\beta_1)\cdots\Gamma(\beta_n)}\, \Bigl( 1+\sum_{i=1}^{n-1} y_i \Bigr)^{-\beta^{*}-\beta_n} \prod_{i=1}^{n-1} y_i^{\beta_i-1}, \tag{3.36} \]

and hence Y ∼ L_{n−1}^{(1)}[(1 + ⋅)^{−(β*+βn)}; β1, …, βn−1] (i.e. an inverted Dirichlet distribution).

By using the mentioned transformations between Liouville distributions of the first and second kind we have that M ∼ L_{n−1}^{(2)}[(1 − ⋅)^{βn−1}; β1, …, βn−1] (Dirichlet), where

\[ M_i = \frac{Y_i}{1+\sum_{j=1}^{n-1} Y_j}, \qquad i=1,\dots,n-1, \tag{3.37} \]

and (3.33) follows.

Finally, rewriting the components of Q in terms of those of Y leads easily to (3.34).□
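Proposition 3.2 also yields a direct sampler for GDIR(ν, β): draw M from a Dirichlet(β) distribution and push it through (3.34). The sketch below (our own code; parameter values are illustrative) does exactly this with numpy.

```python
import numpy as np

def gdir_sample(beta, nu, size, rng):
    """Draws from GDIR(nu, beta) by mapping Dirichlet(beta) samples through (3.34)."""
    m = rng.dirichlet(np.asarray(beta, dtype=float), size)   # rows lie on the full simplex
    y = (m[:, :-1] / m[:, -1:]) ** (1.0 / nu)                # (M_i / (1 - sum_{j<n} M_j))^(1/nu)
    q_head = y / (1.0 + y.sum(axis=1, keepdims=True))        # Q_1, ..., Q_{n-1}, as in (3.34)
    return np.column_stack([q_head, 1.0 - q_head.sum(axis=1)])

rng = np.random.default_rng(1)
q = gdir_sample([0.2, 0.4], nu=0.7, size=100_000, rng=rng)   # n = 2 example
print(q.mean(axis=0))                                        # empirical means of (Q_1, Q_2)
```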

4 Monte Carlo simulations

The simulation of the random variables Q for the fractional Dirichlet distribution is straightforward based on the construction presented in Section 2. First, one needs to generate random variables with density (2.1) and one can use the mixture representation discussed in [4]

\[ X_i \stackrel{d}{=} U_i^{1/\nu}\, V_{\nu}, \]

where Ui is Gamma(βi, λ)-distributed and Vν is strictly positive ν-stable distributed with exp(−s^ν) as the Laplace transform of its pdf. Summing the Xi to get W and dividing Xi by W gives Qi.
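A compact implementation of this recipe might look as follows (our own sketch; the positive ν-stable variates are generated with the standard Kanter/Chambers–Mallows–Stuck formula, and all names and parameter values are ours). For n = 2, ν = 0.7, β1 = 2, β2 = 3 the sample mean of Q1 should be close to β1/(β1 + β2) = 0.4, in agreement with (2.9).

```python
import numpy as np

def positive_stable(nu, size, rng):
    """One-sided nu-stable variates with Laplace transform exp(-s**nu), 0 < nu < 1
    (Kanter / Chambers-Mallows-Stuck representation)."""
    u = rng.uniform(0.0, np.pi, size)
    w = rng.exponential(1.0, size)
    return (np.sin(nu * u) / np.sin(u) ** (1.0 / nu)
            * (np.sin((1.0 - nu) * u) / w) ** ((1.0 - nu) / nu))

def fractional_gamma(lam, beta, nu, size, rng):
    """Fractional Gamma variates via the mixture X = U**(1/nu) * V_nu of [4],
    with U ~ Gamma(beta, rate lam); for nu = 1 this is an ordinary Gamma."""
    u = rng.gamma(beta, 1.0 / lam, size)          # numpy's gamma uses the scale parametrization
    return u if nu == 1.0 else u ** (1.0 / nu) * positive_stable(nu, size, rng)

def fractional_dirichlet(betas, nu, size, rng, lam=1.0):
    """Samples Q = (Q_1, ..., Q_n) with Q_i = Z_i / W, as in Section 2."""
    z = np.column_stack([fractional_gamma(lam, b, nu, size, rng) for b in betas])
    return z / z.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
q = fractional_dirichlet([2.0, 3.0], nu=0.7, size=100_000, rng=rng)
print(q[:, 0].mean())        # close to beta_1 / (beta_1 + beta_2) = 0.4, cf. (2.9)
```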

Remark 4.1

For β = 1, there is an alternative representation [13, 6]:

\[ X \stackrel{d}{=} \Xi\, Z^{1/\nu}, \]

where Ξ is Exp(λ)-distributed and Z is Cauchy-distributed.

The behaviour of the fractional Dirichlet distribution in the case n = 2 is shown in Fig. 2 for ν = 0.7, β1 = 2, β2 = 3. In this case, the generalized Dirichlet distribution is not a good approximation.

This is not the case for n = 2, ν = 0.7, β1 = 0.2 and β2 = 0.4, where the generalized Dirichlet distribution is a reasonably good approximation of the fractional Dirichlet distribution. This is represented in Fig. 3.

Figure 3: Pdf’s for ν = 0.7, β1 = 0.2, β2 = 0.4. Circles and curves have the same meaning as in Fig. 2.

For larger values of the parameters βi, one gets a unimodal distribution in both cases, as shown in Fig. 4 for n = 2, ν = 0.95, β1 = 10, β2 = 30.

Figure 4: Pdf’s for ν = 0.95, β1 = 10, β2 = 30. Circles and curves have the same meaning as in Fig. 2.

In Fig. 5, the heavy character of the right tail of the generalized Dirichlet distribution is highlighted by the log-log plot.

Figure 5: Double logarithmic plot of the pdf’s for ν = 0.95, β1 = 10, β2 = 30. Circles and curves have the same meaning as in Fig. 2.

Acknowledgements

F. Polito has been partially supported by the project “Memory in Evolving Graphs” (Compagnia di San Paolo/Università degli Studi di Torino).

References

[1] L. Beghin, E. Orsingher, Fractional Poisson processes and related planar random motions. Electron. J. Probab. 14, No 61 (2009), 1790–1827; DOI: 10.1214/EJP.v14-675.

[2] J. Bertoin, Exchangeable Coalescents. ETH Zurich, 2010.

[3] L. Bondesson, A general result on infinite divisibility. Ann. Probab. 7, No 6 (1979), 965–979; DOI: 10.1214/aop/1176994890.

[4] D. O. Cahoy, F. Polito, Renewal processes based on generalized Mittag-Leffler waiting times. Comm. Nonlinear Sci. Numer. Simulat. 18, No 3 (2013), 639–650; DOI: 10.1016/j.cnsns.2012.08.013.

[5] S. Favaro, G. Hadjicharalambous, I. Prünster, On a class of distributions on the simplex. J. Statist. Plann. Inference 141, No 9 (2011), 2987–3004; DOI: 10.1016/j.jspi.2011.03.015.

[6] D. Fulger, E. Scalas, G. Germano, Monte Carlo simulation of uncoupled continuous-time random walks yielding a stochastic solution of the space-time fractional diffusion equation. Phys. Rev. E 77 (2008), Art. 021122; DOI: 10.1103/PhysRevE.77.021122.

[7] A. Giusti, I. Colombaro, R. Garra, R. Garrappa, F. Polito, M. Popolizio, F. Mainardi, A practical guide to Prabhakar fractional calculus. Fract. Calc. Appl. Anal. 23, No 1 (2020), 9–54; DOI: 10.1515/fca-2020-0002; https://www.degruyter.com/view/journals/fca/23/1/fca.23.issue-1.xml.

[8] R. Gorenflo, F. Mainardi, Fractional Calculus: Integral and Differential Equations of Fractional Order. In: Fractals and Fractional Calculus in Continuum Mechanics, A. Carpinteri and F. Mainardi (Eds.), Springer, New York and Wien (1997), 223–276; DOI: 10.1007/978-3-7091-2664-6_5.

[9] I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series, and Products. 8th Edition, Elsevier/Academic Press, Amsterdam (2015).

[10] R. D. Gupta, D. S. P. Richards, Multivariate Liouville distributions. J. Multivariate Anal. 23, No 2 (1987), 233–256; DOI: 10.1016/0047-259X(87)90155-2.

[11] N. L. Johnson, S. Kotz, N. Balakrishnan, Continuous Univariate Distributions, Vol. 1. 2nd Edition, Wiley Ser. in Probability and Mathematical Statistics: Applied Probability and Statistics, John Wiley & Sons, Inc., New York (1994).

[12] A. A. Kilbas, M. Saigo, H-Transforms. Theory and Applications. Ser. Analytical Methods and Special Functions, Vol. 9, Chapman & Hall/CRC, Boca Raton, FL (2004).

[13] T. Kozubowski, Mixture representation of Linnik distributions revisited. Stat. Probab. Lett. 38 (1998), 157–160; DOI: 10.1016/S0167-7152(97)00167-3.

[14] F. Mainardi, R. Gorenflo, E. Scalas, A fractional generalization of the Poisson processes. Vietnam J. Math. 32 (Special Issue) (2004), 53–64.

[15] A. M. Mathai, H. J. Haubold, Special Functions for Applied Scientists. Springer, New York (2008); DOI: 10.1007/978-0-387-75894-7.

[16] T. M. Michelitsch, A. P. Riascos, Generalized fractional Poisson process and related stochastic dynamics. Fract. Calc. Appl. Anal. 23, No 3 (2020), 656–693; DOI: 10.1515/fca-2020-0034; https://www.degruyter.com/view/journals/fca/23/3/fca.23.issue-3.xml.

[17] T. M. Michelitsch, A. P. Riascos, Continuous time random walk and diffusion with generalized fractional Poisson process. Phys. A 545 (2020), Art. 123294; DOI: 10.1016/j.physa.2019.123294.

[18] T. M. Michelitsch, F. Polito, A. P. Riascos, Biased continuous-time random walks with Mittag-Leffler jumps. Fractal and Fractional 4, No 4 (2020), Art. 51; DOI: 10.3390/fractalfract4040051.

[19] R. N. Pillai, On Mittag-Leffler functions and related distributions. Ann. Inst. Statist. Math. 42, No 1 (1990), 157–161; DOI: 10.1007/BF00050786.

Received: 2020-11-08
Published Online: 2021-01-29
Published in Print: 2021-02-23

© 2021 Diogenes Co., Sofia
