1 Introduction

Einstein’s explanation of Brownian motion provided the cornerstone for the deep connections between stochastic processes and evolution equations. Namely, the function \(\ p_t(x):=(2\pi t)^{-d/2}\exp \left( -\frac{|x|^2}{2t}\right) ,\ \) which is the probability density function (PDF) of a d-dimensional Brownian motion \((B_t)_{t\geqslant 0}\), is also the fundamental solution of the heat equation \(\ \frac{\partial u}{\partial t}(t,x)=\frac{1}{2}\varDelta u(t,x)\ \) with the Laplace operator \(\varDelta \). In other words, the heat equation is the governing equation for Brownian motion, and the solution of the Cauchy problem for the heat equation with initial data \(u_0\) has the stochastic representation

$$\begin{aligned} u(t,x)={\mathbb {E}}[u_0(x+B_t)]. \end{aligned}$$

Itô calculus for Brownian motion makes it possible to extend the above relation to a wider class of evolution equations. Using Itô calculus (together with martingale theory or with the theory of Markov processes and operator semigroups), one can prove, under suitable assumptions, e.g., the Feynman–Kac formula [29, 50]

$$\begin{aligned} u(t,x):={\mathbb {E}}\left[ u_0(x+B_t)\,e^{\,\int _0^t b(x+B_s)\cdot d B_s-\frac{1}{2}\int _0^t|b(x+B_s)|^2ds +\int _0^t c(x+B_s)ds}\right] , \end{aligned}$$

for the standard diffusion equation with drift b and a “potential” (or killing) term c

$$\begin{aligned} \frac{\partial u}{\partial t}(t,x)=\frac{1}{2}\varDelta u(t,x)+b(x)\cdot \nabla u(t,x)+ c(x) u(t,x) \end{aligned}$$
(1.1)

as well as (quite analogous) Feynman–Kac formulae for Schrödinger-type counterparts of equation (1.1), see e.g. [1, 7, 13].

Standard (Brownian) diffusion is characterized by the linear growth in time of the variance and by the Gaussian shape of the displacement distribution. However, many natural phenomena show a diffusive behaviour that exhibits a non-linear growth in time of the variance and/or a non-Gaussian shape of the displacement distribution; such phenomena are generally labeled as anomalous (or fractional) diffusion. Anomalous diffusion is ubiquitously observed in many complex systems, ranging from turbulence and plasma physics to soft matter (e.g., cell cytoplasm, membrane and nucleus) and neuro-physiological systems (see, e.g., [34, 51] and references therein). There are many different mathematical models describing anomalous diffusion. Usually, these models lead to evolution equations generalizing equation (1.1) by substituting the partial derivatives with respect to space and/or time by non-local integro-differential operators (in particular, by operators corresponding to fractional derivatives).

One of the earliest and most well-studied models of anomalous diffusion is based on the Continuous Time Random Walk (CTRW) approach (see, e.g., [16, 26, 27, 33, 35, 46]). The trajectory of each diffusing particle is considered to be governed by the PDF \(p_t(x)\) (of finding the particle at position x at time t), which solves the Montroll–Weiss equation [35]. Under some assumptions on the jumps and waiting times of a CTRW, one obtains in the proper scaling limit a symmetric \(\gamma \)-stable Lévy process time-changed (or subordinated) by an independent inverse \(\beta \)-stable subordinator, \(\beta \in (0,1]\), \(\gamma \in (0,2]\). The Montroll–Weiss equation then leads to the time- and space-fractional heat equation

$$\begin{aligned} u(t,x)=u_0(x)-\frac{1}{\varGamma (\beta )}\int _0^t (t-s)^{\beta -1}\left( -\frac{1}{2}\varDelta \right) ^{\gamma /2}u(s,x)ds \end{aligned}$$
(1.2)

as the governing equation for this process. Here \(-\left( -\frac{1}{2}\varDelta \right) ^{\gamma /2}\) is the fractional Laplacian (up to a constant), i.e. a pseudo-differential operator with symbol \(-2^{-\gamma /2}|p|^{\gamma }\), and the integral operator, which is applied to \( - \left( -\frac{1}{2}\varDelta \right) ^{\gamma /2} u(\cdot ,x)\), is the Riemann–Liouville fractional integral of order \(\beta \). Equation (1.2) can also be rewritten in terms of Caputo fractional derivatives as

$$\begin{aligned} \partial ^\beta _t u(t,x)=-\left( -\frac{1}{2}\varDelta \right) ^{\gamma /2} u(t,x), \end{aligned}$$

where \(\partial ^\beta _t f(t):= \int _0^t\frac{(t-s)^{-\beta }}{\varGamma (1-\beta )}\frac{d f(s)}{d s}ds\) is the Caputo fractional derivative of order \(\beta \). Many authors have contributed to various generalizations of these results, showing that Markov processes, subordinated by independent inverse subordinators, provide stochastic representations of solutions of suitable time- (and, possibly, space-) fractional evolution equations (see, e.g., [5, 27, 28] and references therein).
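For illustration, the Caputo derivative is easy to evaluate numerically; a minimal sketch in Python (the quadrature parameters are ad hoc choices) checks the classical identity \(\partial ^\beta _t\, t^2=\frac{2}{\varGamma (3-\beta )}t^{2-\beta }\) directly from the integral definition:

```python
from math import gamma

def caputo_t_squared(t, beta, n=100_000):
    """Caputo derivative of f(s) = s**2 of order beta in (0,1), at time t.

    partial_t^beta f(t) = (1/Gamma(1-beta)) * int_0^t (t-s)**(-beta) f'(s) ds.
    The substitution v = (t-s)**(1-beta) removes the endpoint singularity at
    s = t, leaving a smooth integrand handled by the midpoint rule.
    """
    vmax = t ** (1.0 - beta)
    dv = vmax / n
    acc = 0.0
    for i in range(n):
        v = (i + 0.5) * dv
        s = t - v ** (1.0 / (1.0 - beta))  # back-substitution s(v)
        acc += 2.0 * s * dv                # f'(s) = 2 s
    return acc / ((1.0 - beta) * gamma(1.0 - beta))

# closed form for comparison: Gamma(3)/Gamma(3-beta) * t**(2-beta)
```

The same substitution works for any smooth f; the closed form above is the standard power-function rule of the Caputo calculus.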

A new direction in the theoretical modelling of diffusion in complex media interprets the anomalous character of the diffusion as a consequence of a very heterogeneous environment [8, 9, 14, 22, 51, 52]. One class of such models is based on randomly scaled Gaussian processes (RSGP) and is sometimes referred to as Generalized Grey Brownian Motion (GGBM), or GGBM-like models. These models originate from Schneider’s grey Brownian motion [48, 49], whose PDF is the fundamental solution of the time-fractional heat equation of order \(\beta \) (i.e., equation (1.2) with \(\gamma :=2\)). The GGBM \((X^{\alpha ,\beta }_t)_{t\geqslant 0}\), \(\alpha \in (0,2)\), \(\beta \in (0,1]\), was introduced in works of Mainardi, Mura and their coauthors [36,37,38], and can be realized as

$$\begin{aligned} X^{\alpha ,\beta }_t:=\sqrt{A_\beta }B^{\alpha /2}_t, \end{aligned}$$
(1.3)

where \(B^{\alpha /2}_t\) is a (1-dimensional) fractional Brownian motion (FBM) with Hurst parameter \(\alpha /2\) and \(A_\beta \) is a nonnegative random variable which is independent of \((B^{\alpha /2}_t)_{t\geqslant 0}\) and whose distribution has Laplace transform \(E_\beta (-\cdot )\), where \(E_\beta (z):=\sum _{n=0}^\infty \frac{z^n}{\varGamma (\beta n+1)}\) is the Mittag–Leffler function. For \(\beta \in (0,1)\), the PDF of \(A_\beta \) is given by \({\mathcal {M}}_\beta (x)\mathbbm {1}_{\{x\geqslant 0\}}\), where \({\mathcal {M}}_\beta \) is the Mainardi-Wright function \({\mathcal {M}}_\beta (x):=\sum _{n=0}^\infty \frac{(-x)^n}{n!\varGamma (1-\beta -\beta n)}\). The GGBM includes Brownian motion (for \(\alpha =\beta =1\)), FBM (for \(\alpha \ne \beta =1\)) and Schneider’s grey Brownian motion (for \(\alpha =\beta \ne 1\)). These processes are self-similar with stationary increments, which makes the GGBM attractive for modeling. Generally, GGBM-like models deal with processes of the form \(\sqrt{A} G_t\), or \(\sqrt{A_t}G_t\), where \(G_t\) is a Gaussian process and A (or \(A_t\)) is an independent nonnegative random variable (or process) [39]. The PDF of GGBM is the fundamental solution of the following evolution equation (cf., e.g., [19, 36]):

$$\begin{aligned} u(t,x)=u_0(x)+\frac{\alpha }{\beta \varGamma (\beta )}\int _0^t s^{\frac{\alpha }{\beta }-1}\left( t^{\frac{\alpha }{\beta }}-s^{\frac{\alpha }{\beta }} \right) ^{\beta -1}\frac{1}{2}\frac{\partial ^2 u(s,x)}{\partial x^2}ds, \end{aligned}$$
(1.4)

which is the time-stretched time-fractional heat equation. For \(\alpha :=\beta \), equation (1.4) reduces to the time-fractional heat equation of order \(\beta \). The mathematical theory of GGBM is currently under active development [6, 10,11,12, 15, 19]. Moreover, another GGBM-like process of the form \(\sqrt{A} G_t\) has been shown to have a PDF which solves the (1-dimensional) time- and space-fractional heat equation (1.2) (see [40]).
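The pair \((E_\beta ,{\mathcal {M}}_\beta )\) introduced above can be probed numerically in the special case \(\beta =1/2\), where both admit elementary closed forms, \({\mathcal {M}}_{1/2}(x)=e^{-x^2/4}/\sqrt{\pi }\) and \(E_{1/2}(-\lambda )=e^{\lambda ^2}\mathrm {erfc}(\lambda )\) (standard identities). A minimal Python sketch (added for illustration; truncation parameters are ad hoc) checks that the Laplace transform of \({\mathcal {M}}_{1/2}\) equals \(E_{1/2}(-\cdot )\):

```python
from math import gamma, exp, erfc, sqrt, pi

def mittag_leffler(z, beta, terms=120):
    """Partial sum of the series E_beta(z) = sum_n z**n / Gamma(beta*n + 1)."""
    return sum(z ** n / gamma(beta * n + 1) for n in range(terms))

def laplace_of_M_half(lam, xmax=40.0, n=200_000):
    """Midpoint-rule Laplace transform of M_{1/2}(x) = exp(-x**2/4)/sqrt(pi)."""
    dx = xmax / n
    acc = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        acc += exp(-lam * x) * exp(-x * x / 4.0) / sqrt(pi) * dx
    return acc
```

Both quantities agree with \(e^{\lambda ^2}\mathrm {erfc}(\lambda )\) up to series-truncation and quadrature error.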

The fact that the time- and space-fractional heat equation (1.2) serves as the governing equation for such different stochastic processes as a RSGP and a stable Lévy process subordinated by an inverse stable subordinator motivates the following questions: What do these two classes of processes have in common? What other classes of processes can be used to solve such evolution equations? How general can evolution equations admitting such stochastic solutions be?

The present paper gives our answers to these questions. We consider a general class of evolution equations of the form

$$\begin{aligned} u(t,x)&=u_0(x)+\int _0^t k(t,s)Lu(s,x)ds,\qquad t>0,\quad x\in {{\mathbb {R}}}^d, \nonumber \\ \lim _{t\searrow 0} u(t,x)&=u_0(x),\qquad x\in {{\mathbb {R}}}^d, \end{aligned}$$
(1.5)

where L is a pseudo-differential operator associated to a Lévy process and k(t,s), \(0<s<t<\infty \), is a general kernel. This setting largely extends equations (1.2) and (1.4). In Section 2, we present a general relation between the parameters of the equation (1.5) and the distribution of any stochastic process which provides a stochastic solution of Feynman–Kac type. The proof of this relation is presented in Section 3. More precisely, we derive a series representation, in terms of the time kernel k and the symbol \(-\psi \) of the pseudo-differential operator L, for the characteristic function of the one-dimensional marginals of any stochastic solution. We explain how this series simplifies in the important case of homogeneous kernels, which includes the kernel \(k(t,s)=(t-s)^{\beta -1}/\varGamma (\beta )\) for time-fractional evolution equations and, more generally, kernels corresponding to the Marichev-Saigo-Maeda fractional differintegral operators. The connection between the Marichev-Saigo-Maeda operators of fractional calculus and positive random variables with Laplace transform given by Prabhakar’s three-parameter generalization of the Mittag-Leffler function is established in Section 4. The results of Section 4 yield a stochastic representation for (1.5) with a Marichev-Saigo-Maeda kernel in terms of a randomly slowed-down / speeded-up Lévy process \((Y_{At^\beta })_{t\geqslant 0}\), where Y is a Lévy process with infinitesimal generator L, A is an independent random variable with Laplace transform given by the three-parameter Mittag-Leffler function, and \(\beta \) corresponds to the degree of homogeneity of the kernel.
If Y has a stable distribution (e.g., in the case when L is a symmetric fractional Laplacian in space), the randomly slowed-down / speeded-up Lévy process can be replaced by a randomly scaled linear fractional stable motion, providing a stochastic solution in terms of a self-similar process with stationary increments, as demonstrated in Section 5.

2 Generalized time-fractional evolution equations and stochastic processes providing stochastic representations of their solutions

In this section, we state our main results on the existence of stochastic representations for generalized time-fractional evolution equations of the form (1.5), where L is a pseudo-differential operator satisfying Assumption 1 below. We first consider general time kernel functions k(t,s), \(0<s<t<\infty ,\) and then specialize to the cases of homogeneous kernels and convolution kernels, respectively.

2.1 General time kernels

We first fix some notation. Let

$$\begin{aligned} S({{\mathbb {R}}}^d):=\left\{ \varphi \in C^\infty ({{\mathbb {R}}}^d)\,:\,\lim \limits _{|x|\rightarrow \infty } x^\alpha \partial ^\beta \varphi (x)=0 \ \ \forall \alpha ,\beta \in {\mathbb {N}}_0^d \right\} \end{aligned}$$

be the Schwartz space. We consider a continuous negative definite function (CNDF) \(\psi \,:{{\mathbb {R}}}^d\rightarrow {{\mathbb {C}}}\). Note that each CNDF \(\psi \,:{{\mathbb {R}}}^d\rightarrow {{\mathbb {C}}}\) is uniquely determined by its Lévy-Khintchine representation

$$\begin{aligned} \psi (p)=c-ip\cdot b+\frac{1}{2} p\cdot Qp+\int _{{{\mathbb {R}}}^d\setminus \{0\}}\left( 1-e^{ip\cdot y}+\frac{ip\cdot y}{1+|y|^2} \right) \nu (dy), \end{aligned}$$

where \(c\geqslant 0\), \(b\in {{\mathbb {R}}}^d\), Q is a symmetric positive semidefinite \(d\times d\)-matrix, \(\nu \) is a measure on \({{\mathbb {R}}}^d\setminus \{0\}\) such that \(\int _{{{\mathbb {R}}}^d\setminus \{0\}}\min (|y|^2,1)\nu (dy)<\infty \). Any such quadruple \((c,b,Q,\nu )\) defines a CNDF. It follows that each CNDF \(\psi \) satisfies the estimate \(|\psi (p)|\leqslant C_\psi (1+|p|^2)\) for all \(p\in {{\mathbb {R}}}^d\) and some \(C_\psi \geqslant 0\).
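For orientation, we recall two textbook instances of this representation (an added illustration; the normalizing constant \(C_{d,\gamma }\) is left unspecified):

```latex
\psi(p)=\tfrac{1}{2}|p|^2
  \quad\text{for }(c,b,Q,\nu)=(0,0,\mathrm{Id},0)
  \quad\text{(Brownian motion)},
\qquad
\psi(p)=|p|^{\gamma}
  \quad\text{for }c=0,\ b=0,\ Q=0,\ \nu(dy)=C_{d,\gamma}|y|^{-d-\gamma}dy
  \quad\text{(rotation-invariant }\gamma\text{-stable process, }\gamma\in(0,2)).
```

In both cases \(\psi (0)=0\), so these examples are compatible with Assumption 1 below.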

Further, we consider a pseudo-differential operator \((L,S({{\mathbb {R}}}^d))\) with symbol \(-\psi \) in the Banach space \(C_\infty ({{\mathbb {R}}}^d)\) of continuous functions vanishing at infinity with supremum-norm \(\Vert \varphi \Vert _\infty :=\sup _{x\in {{\mathbb {R}}}^d}|\varphi (x)|\) (cf. Example 4.1.16 of [21]), i.e. for each \(\varphi \in S({{\mathbb {R}}}^d)\)

$$\begin{aligned} L\varphi (x):=\left( {\mathfrak {F}}^{-1}\circ (-\psi )\circ {\mathfrak {F}}\varphi \right) (x)\equiv -(2\pi )^{-d}\int _{{{\mathbb {R}}}^d} \int _{{{\mathbb {R}}}^d}e^{ip\cdot (x-q)} \psi (p)\varphi (q)dqdp, \end{aligned}$$
(2.1)

where \({\mathfrak {F}}\) is the Fourier transform such that \({\mathfrak {F}}\varphi (p)=(2\pi )^{-d/2}\int _{{{\mathbb {R}}}^d} e^{-ip\cdot x}\varphi (x)dx\). Note that the operator \((L,S({{\mathbb {R}}}^d))\) is closable in the space \(C_\infty ({{\mathbb {R}}}^d)\) and the closure \((L,\text {Dom}(L))\), \(\text {Dom}(L):=\left\{ \varphi \in C_\infty ({{\mathbb {R}}}^d)\,:\,L\varphi \in C_\infty ({{\mathbb {R}}}^d)\right\} \), generates a strongly continuous semigroup on \(C_\infty ({{\mathbb {R}}}^d)\). This semigroup (or the operator \((L,\text {Dom}(L))\) itself) corresponds to a Lévy process \(Y:=(Y_t)_{t\geqslant 0}\) with characteristic exponent \(\psi \), i.e. \({\mathbb {E}}\left[ \exp (ip\cdot Y_t) \right] =\exp (-t\psi (p))\) (see, e.g. [3, 21]).

Assumption 1

We assume that the initial data \(u_0\) is an arbitrary function from \(S({{\mathbb {R}}}^d)\) and that the symbol \(-\psi \) of the operator L satisfies \(\psi (0)=0\), i.e. there is no killing term (i.e. \(c=0\)) in the Lévy-Khintchine representation of \(\psi \) (and hence the underlying Lévy process \((Y_t)_{t\geqslant 0}\) has an infinite lifetime).

Concerning the general kernels, we impose the following condition:

Assumption 2

We consider a Borel-measurable kernel \(k\,:\,(0,\infty )\times (0,\infty )\rightarrow {{\mathbb {R}}}\) satisfying the following condition: \(\exists \,\alpha ^*\in [0,1)\) and \(\exists \,\varepsilon >0\) such that for each \(T>0\)

$$\begin{aligned} K_T:=\sup \limits _{0<t\leqslant T}t^{\alpha ^*-\frac{1}{1+\varepsilon }}\Vert k(t,\cdot )\Vert _{L^{1+\varepsilon }((0,t))}<\infty . \end{aligned}$$

Continuity of stochastic representations for (1.5) can be obtained under the following continuity assumption on the kernel:

Assumption 3

For a.e. \(s\in (0,1)\) the mapping

$$\begin{aligned} (0,\infty )\rightarrow {\mathbb {R}},\quad t\mapsto k(t,ts) \end{aligned}$$

is continuous.

Let \((\varOmega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space and \(X:=(X_t)_{t\geqslant 0}\) be an \({{\mathbb {R}}}^d\)-valued stochastic process on it such that \(X_0=0\) \(\,\,{\mathbb {P}}\)-almost surely. We shall say that X provides a stochastic solution to (1.5), if the function u defined by

$$\begin{aligned} u(t,x)={\mathbb {E}}\left[ u_0\left( x+X_t\right) \right] ,\qquad x\in {{\mathbb {R}}}^d,\quad t\geqslant 0, \end{aligned}$$
(2.2)

is a solution to (1.5) for every \(u_0\in S({{\mathbb {R}}}^d)\). Theorem 1 below characterizes when such a stochastic solution exists. Before we formulate this theorem, let us recall some facts about continuous positive definite functions, completely monotone functions and Bernstein functions. First, by Bochner's theorem, a function \(f\,:\,{{\mathbb {R}}}^d\rightarrow {{\mathbb {C}}}\) is the Fourier transform of a bounded measure on \({{\mathbb {R}}}^d\) if and only if f is continuous and positive definite (see, e.g., [21]). Second, a function \(f : (0, \infty ) \rightarrow [0, \infty )\) is said to be a completely monotone function (CMF) if \((-1)^n f^{(n)}(x) \geqslant 0\) for \(x > 0\) and \(n\in {\mathbb {N}}_0\). A function f is a CMF if and only if it is the Laplace transform of a measure on the half-line \([0,\infty )\) (Bernstein’s theorem). Third, a continuous function \(f : (0, \infty ) \rightarrow [0, \infty )\) is said to be a Bernstein function (BF) if \((-1)^n f^{(n)}(x) \leqslant 0\) for \(x > 0\) and \(n\in {\mathbb {N}}\). Note that the composition \(f\circ \psi \) of a BF f and a CNDF \(\psi \) is again a CNDF, by the extension of Bernstein functions to complex numbers with nonnegative real part described in [47, Proposition 3.5].
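Complete monotonicity is equivalent to the sign condition \((-1)^n\varDelta _h^n f\geqslant 0\) on forward differences for every step \(h>0\), which makes it easy to probe numerically. The following Python sketch (an added illustration, not part of the formal development) checks this for \(f(x)=e^{-\sqrt{x}}\), the composition of the CMF \(e^{-x}\) with the BF \(\sqrt{x}\):

```python
from math import exp, sqrt

def forward_differences(f, x0, h, order):
    """Yield the n-th forward differences (n = 1..order) of f at x0.

    f is completely monotone iff (-1)**n * Delta_h^n f(x) >= 0 for every
    step h > 0 (Hausdorff-Bernstein-Widder theorem), so the yielded values
    must alternate in sign for a CMF.
    """
    vals = [f(x0 + k * h) for k in range(order + 1)]
    for _ in range(order):
        vals = [vals[k + 1] - vals[k] for k in range(len(vals) - 1)]
        yield vals[0]

# e^{-x} (a CMF) composed with the Bernstein function sqrt(x) is again a CMF:
f = lambda x: exp(-sqrt(x))
```

By the composition result cited above, \(f\) is completely monotone, and the computed differences indeed alternate in sign.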

Theorem 1

Let Assumptions 1, 2 hold. For each \(t\geqslant 0\), the function \(\varPhi (t,\cdot )\,:\,{{\mathbb {C}}}\rightarrow {{\mathbb {C}}}\) given by

$$\begin{aligned}&\varPhi (t,\lambda ):=\sum _{n=0}^\infty c_n(t)\lambda ^n, \end{aligned}$$
(2.3)
$$\begin{aligned}&c_0(t):=1 \qquad \forall \,\,t\geqslant 0 \qquad \text {and}\nonumber \\&\quad c_n(t):=\left\{ \begin{array}{ll} \int _0^t k(t,s)c_{n-1}(s)ds, &{} \quad \forall \,\,t>0,\\ 0 &{} \quad t=0, \end{array}\right. \qquad n\in {\mathbb {N}}, \end{aligned}$$
(2.4)

is well-defined (i.e., the integrals in the recursion formula exist) and entire.

Moreover:

(i) A stochastic solution to (1.5) exists if and only if the function \(\varPhi (t,-\psi (\cdot ))\) is positive definite for all \(t\geqslant 0\). In this case, \((X_t)_{t\geqslant 0}\) provides a stochastic solution to (1.5) if and only if

$$\begin{aligned} {\mathbb {E}}\left[ e^{ip\cdot X_t} \right] =\varPhi (t,-\psi (p)),\qquad p\in {{\mathbb {R}}}^d,\quad t\geqslant 0. \end{aligned}$$

(ii) If the restriction of the function \(\varPhi (t,-\cdot )\) to \((0,\infty )\) is a CMF for all \(t\geqslant 0\), the process \((Y_{A(t)})_{t\geqslant 0}\) provides a stochastic solution to (1.5), where, for each \(t\geqslant 0\), A(t) is a non-negative random variable whose distribution \({\mathcal {P}}_{A(t)}\) has the Laplace transform given by \(\varPhi (t,-\cdot )\), i.e. \(\int _0^\infty e^{-\lambda a}{\mathcal {P}}_{A(t)}(da)=\varPhi (t,-\lambda )\), and \((Y_t)_{t\geqslant 0}\) is a Lévy process with characteristic exponent \(\psi \) which is independent of \((A(t))_{t\geqslant 0}\).

(iii) If, additionally, the characteristic exponent \(\psi \) is given by \(\psi :=f\circ {\widetilde{\psi }}\) for some other CNDF \({\widetilde{\psi }}\) and some BF f, then the process \((\tilde{Y}_{{\tilde{A}}(t)})_{t\geqslant 0}\) provides a stochastic solution to (1.5), where, for each \(t\geqslant 0\), \({\widetilde{A}}(t)\) is a non-negative random variable whose distribution \({\mathcal {P}}_{{\widetilde{A}}(t)}\) has the Laplace transform given by the function \(\varPhi (t,-f(\cdot ))\), and \(({\widetilde{Y}}_t)_{t\geqslant 0}\) is a Lévy process with characteristic exponent \({\widetilde{\psi }}\) which is independent of \(({\widetilde{A}}(t))_{t\geqslant 0}\).

The proof of Theorem 1 as well as of the other results of this section will be provided in Section 3. The following remark briefly explains how to deal with time-stretched equations.

Remark 1

Consider the following class of time-stretchings (or “dressings”, cf. Sec. 3.3 of [28]) \({\mathcal {G}}:=\big \{g\,:[0,\infty )\rightarrow [0,\infty )\) such that \(g(\tau )>0 \,\,\forall \,\tau >0\), \(g(\tau )\nearrow \infty \) as \(\tau \nearrow \infty \), \(g(\tau )=\int _0^\tau {\dot{g}}(\theta )d\theta \) for some \({\dot{g}}\in L^1_{loc}([0,\infty ))\) with \({\dot{g}}(\tau )>0\,\,\forall \,\tau >0 \big \}\). Using the change of variables \(t=g(\tau )\), we obtain the analogue of Theorem 1 for the whole class of time-stretched equations

$$\begin{aligned} v(\tau ,x)=u_0(x)+\int _0^\tau \kappa _g(\tau ,\theta ) Lv(\theta ,x)d\theta ,\qquad \tau >0,\quad x\in {{\mathbb {R}}}^d,\quad g\in {\mathcal {G}}, \end{aligned}$$
(2.5)

where the kernel \(\kappa _g\) is defined via

$$\begin{aligned} \kappa _g(\tau ,\theta ):=k(g(\tau ),g(\theta )){\dot{g}}(\theta ). \end{aligned}$$
(2.6)

Obviously, a function v solves the evolution equation (2.5) if and only if \(v(\tau ,x)=u(g(\tau ),x)\), where u solves the evolution equation (1.5) with the corresponding kernel k. Hence \(v(\tau ,x)={\mathbb {E}}\left[ u_0\left( x+X_{g(\tau )}\right) \right] \) solves (2.5) if \((X_t)_{t\geqslant 0}\) provides a stochastic solution to (1.5). Note that the kernel in the GGBM-equation (1.4) can be obtained from the time-fractional kernel of equation (1.2) via the time-stretching \(g(\tau ):=\tau ^{\alpha /\beta }\).
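The last observation is easy to verify pointwise; the following Python sketch (added for illustration) checks numerically that applying the time-stretching \(g(\tau )=\tau ^{\alpha /\beta }\) to the kernel of (1.2) via (2.6) reproduces the kernel of the GGBM-equation (1.4):

```python
from math import gamma

def rl_kernel(t, s, beta):
    """Riemann-Liouville kernel of the time-fractional equation (1.2)."""
    return (t - s) ** (beta - 1) / gamma(beta)

def stretched_kernel(tau, theta, alpha, beta):
    """kappa_g(tau, theta) = k(g(tau), g(theta)) * g'(theta), g(r) = r**(alpha/beta)."""
    r = alpha / beta
    return rl_kernel(tau ** r, theta ** r, beta) * r * theta ** (r - 1)

def ggbm_kernel(tau, theta, alpha, beta):
    """Kernel of the GGBM equation (1.4)."""
    r = alpha / beta
    return (alpha / (beta * gamma(beta))) * theta ** (r - 1) \
        * (tau ** r - theta ** r) ** (beta - 1)
```

The two expressions are algebraically identical, so the numerical agreement holds to machine precision.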

We close this subsection by establishing a sufficient condition for continuity of \(\varPhi \) and u in both variables.

Proposition 1

Suppose Assumptions 2 and 3 are in force. Then \(t\mapsto c_n(t)\) is continuous on \([0,\infty )\) for every \(n\in {\mathbb {N}}\) and \(\varPhi \) is continuous on \([0,\infty )\times {\mathbb {C}}\). If, moreover, Assumption 1 holds and if a process \((X_t)_{t\geqslant 0}\) provides a stochastic solution to (1.5), then the solution u given by (2.2) is continuous.

2.2 Homogeneous kernels

We now explain how the general results simplify in the case of a homogeneous kernel. Recall that a kernel function k is homogeneous of degree \(\beta -1\) if

$$\begin{aligned} k(t,ts)=t^{\beta -1}k(1,s) \end{aligned}$$

for every \(t\in (0,\infty )\) and \(s\in (0,1)\).

Theorem 2

Suppose k is homogeneous of degree \(\beta -1\) for some \(\beta >0\) and \(k(1,\cdot )\in L^{1+\varepsilon }((0,1))\) for some \(\varepsilon >0\). Then, Assumptions 2 and 3 are satisfied and \(\varPhi \) takes the form

$$\begin{aligned} \varPhi (t,\lambda )={\hat{\varPhi }}(\lambda t^\beta ), \qquad t\geqslant 0, \quad \lambda \in {\mathbb {C}}, \end{aligned}$$

where \({\hat{\varPhi }}(\lambda )=\sum _{n=0}^\infty {\hat{c}}_n \lambda ^n\) and

$$\begin{aligned} {\hat{c}}_n:= {\hat{c}}_{n-1} \int _0^1 k(1,s)s^{\beta (n-1)}ds, \qquad n\in {\mathbb {N}},\qquad {\hat{c}}_0=1. \end{aligned}$$

Additionally, suppose that Assumption 1 holds and \(\psi =({\tilde{\psi }})^\gamma \) for some \(\gamma \in (0,1]\) and a CNDF \({\tilde{\psi }}\), and denote by \({\tilde{Y}}\) a Lévy process with characteristic exponent \({\tilde{\psi }}\). If \(x\mapsto {\hat{\varPhi }}(-x)\) is a CMF on \((0,\infty )\), then there is a nonnegative random variable \({\tilde{A}}\) independent of \({\tilde{Y}}\) with Laplace transform \({\hat{\varPhi }}(-(\cdot )^\gamma )\), and \(( {\tilde{Y}}_{{\tilde{A}} t^{\beta /\gamma }})_{t\geqslant 0}\) provides a stochastic solution to (1.5).

We again postpone the proof to Section 3, but first explain how this result yields conditionally Gaussian representations for the heat equation which is fractional in both time and space.

Example 1

The fractional time kernel

$$\begin{aligned} k(t,s)=\frac{1}{\varGamma (\beta )} (t-s)^{\beta -1},\qquad \beta \in (0,1], \end{aligned}$$

where \(\varGamma \) denotes the gamma function, is homogeneous of degree \(\beta -1\). Since

$$\begin{aligned} \int _0^1 k(1,s)s^{\beta (n-1)}ds=\frac{\varGamma ((n-1)\beta +1)}{\varGamma (n\beta +1)}, \end{aligned}$$

we obtain

$$\begin{aligned} {\hat{c}}_n=\frac{1}{\varGamma (n\beta +1)} \end{aligned}$$

and, thus,

$$\begin{aligned} {\hat{\varPhi }}(\lambda )=E_{\beta }(\lambda ):=\sum _{n=0}^\infty \frac{\lambda ^n}{\varGamma (n\beta +1)} \end{aligned}$$

is the Mittag-Leffler function. Hence \(\varPhi (t,\lambda )=E_\beta (t^\beta \lambda )\). In a similar way, the GGBM-kernel

$$\begin{aligned} k(t,s):=\frac{\alpha }{\beta \varGamma (\beta )}s^{\frac{\alpha }{\beta }-1} \left( t^{\frac{\alpha }{\beta }} -s^{\frac{\alpha }{\beta }} \right) ^{\beta -1},\qquad \beta \in (0,1],\,\,\alpha \in (0,2), \end{aligned}$$

is homogeneous of degree \(\alpha -1\) and the corresponding \( \varPhi (t,\lambda )=E_{\beta }(t^{\alpha }\lambda )\) (cf. Remark 1). The function \(E_{\beta }(-\cdot )\) has been known to be completely monotone for \(\beta \in (0,1]\) since the work of Pollard [41], and we denote by \(A_\beta \) a nonnegative random variable which has \(E_{\beta }(-\cdot )\) as Laplace transform. Suppose B is a d-dimensional standard Brownian motion independent of \(A_\beta \). By Theorem 2 with \(\gamma =1\), the solution to the time-stretched time-fractional heat equation

$$\begin{aligned} u(t,x)=u_0(x)+\frac{\alpha }{\beta \varGamma (\beta )}\int _{0}^t s^{\frac{\alpha }{\beta }-1}\left( t^{\frac{\alpha }{\beta }} -s^{\frac{\alpha }{\beta }} \right) ^{\beta -1}\frac{1}{2} \varDelta u(s,x) ds \end{aligned}$$
(2.7)

(which is the governing equation for the GGBM) has the stochastic representation

$$\begin{aligned} {\mathbb {E}}[u_0(x+B_{A_\beta t^\alpha })]. \end{aligned}$$
(2.8)
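Representation (2.8) can be probed by Monte Carlo in the special case \(\beta =1/2\), \(d=1\): there the Mainardi-Wright density is half-normal, \({\mathcal {M}}_{1/2}(x)=e^{-x^2/4}/\sqrt{\pi }\), so \(A_{1/2}\) can be sampled as \(|Z|\) with \(Z\sim N(0,2)\), and the characteristic function of \(B_{A_{1/2}t^\alpha }\) must equal \(E_{1/2}(-p^2t^\alpha /2)=e^{x^2}\mathrm {erfc}(x)\) with \(x=p^2t^\alpha /2\). A minimal Python sketch (added for illustration; the sampling shortcut is specific to \(\beta =1/2\)):

```python
import random
from math import exp, sqrt

def char_fun_mc(p, t, alpha, n=400_000, seed=1):
    """Monte Carlo estimate of E[exp(i p B_{A t**alpha})] for beta = 1/2, d = 1.

    Conditionally on A, B_{A t**alpha} ~ N(0, A t**alpha), so the conditional
    characteristic function is exp(-p**2 * A * t**alpha / 2); only A needs to
    be sampled.  For beta = 1/2 the Mainardi-Wright density
    M_{1/2}(x) = exp(-x**2/4)/sqrt(pi) is half-normal, hence A = |Z|, Z ~ N(0, 2).
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        a = abs(rng.gauss(0.0, sqrt(2.0)))
        acc += exp(-p ** 2 * a * t ** alpha / 2.0)
    return acc / n
```

The estimate should match \(e^{x^2}\mathrm {erfc}(x)\), \(x=p^2t^\alpha /2\), up to Monte Carlo error.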

Similarly, for \(\beta \in (0,1]\) and \(\gamma \in (0,1)\) denote by \(A_\beta ^{(\gamma )}\) a random variable with Laplace transform \(E_\beta (-(\cdot )^\gamma )\) independent of B. Then, by Theorem 2,

$$\begin{aligned} {\mathbb {E}}\left[ u_0\big (x+B_{A_\beta ^{(\gamma )} t^{\beta /\gamma }}\big )\right] \end{aligned}$$
(2.9)

provides a stochastic representation for the time-space fractional heat equation (with symbol \(\psi (p)=|p|^{2\gamma }/2^{\gamma }\))

$$\begin{aligned} u(t,x)=u_0(x)-\frac{1}{\varGamma (\beta )}\int _{0}^t \left( t-s \right) ^{\beta -1} \left( -\frac{1}{2} \varDelta \right) ^\gamma u(s,x) ds. \end{aligned}$$
(2.10)

As the distribution of a Gaussian random vector is determined by its mean and covariance matrix, we may replace \(B_{A_\beta t^\alpha }\) in the stochastic representation (2.8) for the solution of the GGBM-equation (2.7) with \(\alpha \in (0,2)\) by a multivariate extension of generalized grey Brownian motion

$$\begin{aligned} X^{\alpha ,\beta }_t:=\sqrt{A_\beta } B^{\alpha /2}_t \end{aligned}$$

where \(B^H\) is a d-dimensional fractional Brownian motion with Hurst parameter \(H\in (0,1)\), independent of \(A_\beta \); we thus obtain the time-stretched time-fractional heat equation as the governing equation for generalized grey Brownian motion also in the multivariate case. We recall that a 1-dimensional fractional Brownian motion is a centred Gaussian process with covariance structure

$$\begin{aligned} {\mathbb {E}}\left[ B^H_t B^H_s\right] =\frac{1}{2}\left( t^{2H}+s^{2H}-|t-s|^{2H}\right) ,\quad t,s\in [0,\infty ), \end{aligned}$$

and that for \(d>1\) the components of \(B^H\) are independent 1-dimensional fractional Brownian motions with Hurst parameter H.

While this representation is appealing from a modeling point of view, because fractional Brownian motion (and, thus, generalized grey Brownian motion) is self-similar with stationary increments, simple representations in terms of a Brownian motion, such as \(B_{A_\beta t^\alpha }\) or \(\sqrt{A_\beta } B_{t^\alpha }\), are mathematically convenient, as they allow one to apply tools from martingale theory and from the theory of Markov processes.

Analogously, for representing the solution to the time-space fractional heat equation (2.10), we may replace \(B_{A_\beta ^{(\gamma )} t^{\beta /\gamma }}\) in (2.9) by \(\sqrt{A_\beta ^{(\gamma )}} B^{\beta /(2\gamma )}_t\) (for \(\beta < 2\gamma \)), extending the stochastic representation with stationary increments of [40] beyond the univariate case, or by \(\sqrt{A_\beta ^{(\gamma )}} B_{t^{\delta }}^{\frac{\beta }{2\gamma \delta }}\) for any \(\delta >0\) such that \(\frac{\beta }{2\gamma \delta }<1\) without any additional restrictions on the relation between \(\beta \) and \(\gamma \).
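The coefficient recursion of Theorem 2 can also be evaluated purely numerically, without the Beta-integral identity used above; the following Python sketch (added as an illustration) computes \({\hat{c}}_n\) for the fractional kernel by quadrature and compares with \(1/\varGamma (n\beta +1)\):

```python
from math import gamma

def hat_c(n_max, beta, quad_n=50_000):
    """Coefficients hat_c_0, ..., hat_c_{n_max} of Theorem 2 for the kernel
    k(t,s) = (t-s)**(beta-1)/Gamma(beta), via the recursion
    hat_c_n = hat_c_{n-1} * int_0^1 k(1,s) s**(beta*(n-1)) ds.
    The substitution v = (1-s)**beta removes the endpoint singularity."""
    cs = [1.0]
    dv = 1.0 / quad_n
    for n in range(1, n_max + 1):
        integral = 0.0
        for i in range(quad_n):
            v = (i + 0.5) * dv
            s = 1.0 - v ** (1.0 / beta)      # back-substitution s(v)
            integral += s ** (beta * (n - 1)) * dv
        integral /= beta * gamma(beta)
        cs.append(cs[-1] * integral)
    return cs
```

The result reproduces \({\hat{c}}_n=1/\varGamma (n\beta +1)\), the coefficients of the Mittag-Leffler function, up to quadrature error.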

Remark 2

The results of the previous example will be generalized in various directions in Sections 4 and 5 below. In Section 4, we consider kernels corresponding to Marichev-Saigo-Maeda fractional differintegral operators and demonstrate how \(\varPhi \) relates to Prabhakar’s three parameter generalization of the Mittag-Leffler function in this case. In Section 5 we discuss how to obtain stochastic representations with stationary increments beyond the Gaussian case in terms of fractional stable motion.

2.3 Convolution kernels

For convolution kernels, we can derive the following result from Theorem 1.

Theorem 3

Suppose \(k(t,s)={\mathfrak {K}}(t-s)\), where \({\mathfrak {K}}:(0,\infty )\rightarrow {\mathbb {R}}\) is continuous and satisfies

$$\begin{aligned} |{\mathfrak {K}}(t)|\leqslant M t^{\beta -1}e^{\gamma t},\quad t>0, \end{aligned}$$

for some constants \(M,\gamma \geqslant 0\) and \(\beta \in (0,1]\). Let \(({\mathcal {L}}{\mathfrak {K}})(\cdot )\) be the Laplace transform of \({\mathfrak {K}}\). If there exists a nonnegative stochastic process \((A(t))_{t\geqslant 0}\) such that almost all its paths are right-continuous with left limits (RCLL) and such that

$$\begin{aligned} \int _0^\infty e^{-\sigma t} {\mathbb {E}}\left[ e^{-\lambda A(t)}\right] dt= \frac{1}{\sigma }\frac{1}{1+\lambda ({\mathcal {L}}{\mathfrak {K}})(\sigma )} \end{aligned}$$

for every \(\lambda \geqslant 0\) and sufficiently large \(\sigma \geqslant \sigma _0(\lambda )\), then \(\varPhi (t,-\cdot )\), constructed from this kernel k by formulas (2.3), (2.4), is a CMF for every \(t\geqslant 0\) and the above process \((A(t))_{t\geqslant 0}\) satisfies \(\int _0^\infty e^{-\lambda a}{\mathcal {P}}_{A(t)}(da)=\varPhi (t,-\lambda )\). If, moreover, Assumption 1 is in force, then \((Y_{A(t)})_{t\geqslant 0}\) provides a stochastic solution to (1.5), where \((Y_{t})_{t\geqslant 0}\) is a Lévy process with characteristic exponent \(\psi \) independent of \((A(t))_{t\geqslant 0}\).

Example 2

(i) Suppose \(k(t,s)={\mathfrak {K}}(t-s)\) is as in the previous theorem and the Laplace transform of \({\mathfrak {K}}\) equals 1/h for some BF h. Then, h is the Laplace exponent of some Lévy subordinator \((\eta ^h_t)_{t\geqslant 0}\). The corresponding inverse subordinator \(({\mathcal {E}}^h_t)_{t\geqslant 0}\) is defined via \({\mathcal {E}}^h_t:=\inf \left\{ s>0\,:\,\eta ^h_s>t\right\} \). It has been shown in [32] (formula (3.14)) that, in the case when the Lévy measure \(\nu \) of \((\eta ^h_t)_{t\geqslant 0}\) satisfies \(\nu (0,\infty )=\infty \), the double Laplace transform of the distribution \({\mathcal {P}}_{{\mathcal {E}}^h_t}(da)\) with respect to both time and space variables is equal to

$$\begin{aligned} \int _0^\infty e^{-\sigma t} {\mathbb {E}}\left[ e^{-\lambda {\mathcal {E}}^h(t)}\right] dt=\frac{h(\sigma )}{\sigma (h(\sigma )+\lambda )}= \frac{1}{\sigma }\frac{1}{1+\lambda ({\mathcal {L}}{\mathfrak {K}})(\sigma )}. \end{aligned}$$

Note that the assumption \(\nu (0,\infty )=\infty \) guarantees that the sample paths of \((\eta ^h_t)_{t\geqslant 0}\) are a.s. strictly increasing, i.e. almost all paths of \(({\mathcal {E}}^h_t)_{t\geqslant 0}\) are continuous [30]. Hence, one may take \(A(t):={\mathcal {E}}^h_t\), \(t\geqslant 0\), in Theorem 3. Therefore, Theorem 3 recovers the well-known result that \({\mathbb {E}}[u_0(x+Y_{{\mathcal {E}}^h(t)})]\) solves (1.5), where Y is a Lévy process with characteristic exponent \(\psi \) which is independent of \(({\mathcal {E}}^h_t)_{t\geqslant 0}\), see e.g. [27].

(ii) Let \({\mathfrak {K}}(t)=t^{\beta -1}/\varGamma (\beta )\) for some \(\beta \in (0,1)\); hence we again consider the time-fractional evolution equation. Then,

$$\begin{aligned} ({\mathcal {L}}{\mathfrak {K}})(\sigma )=\frac{1}{\sigma ^\beta },\qquad \sigma >0, \end{aligned}$$

where \(h(\sigma ):=\sigma ^\beta \) is the Laplace exponent of the \(\beta \)-stable subordinator, recovering the representation for the solution of time-fractional evolution equations in terms of a Lévy process time-changed by an inverse stable subordinator, see e.g. [5]. We have seen in Example 1 that, in this situation, the time change can alternatively be done by \(A(t)=A_\beta t^\beta \). This can also be verified by Theorem 3, because for \(\sigma >0\) and \(\lambda \geqslant 0\),

$$\begin{aligned} \int _0^\infty e^{-\sigma t} {\mathbb {E}}[e^{-\lambda A_\beta t^\beta }]dt= \int _0^\infty e^{-\sigma t} E_\beta (-\lambda t^\beta )dt=\frac{\sigma ^{\beta -1}}{\sigma ^\beta +\lambda }=\frac{1}{\sigma } \frac{1}{1+\lambda ({\mathcal {L}}{\mathfrak {K}})(\sigma )}, \end{aligned}$$

e.g. by Eq. (7.1) in [20].
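The last identity can also be confirmed by direct quadrature in the case \(\beta =1/2\), where \(E_{1/2}(-x)=e^{x^2}\mathrm {erfc}(x)\) in closed form. A Python sketch (for illustration; the truncation point and step size are ad hoc choices):

```python
from math import exp, erfc, sqrt

def lhs(sigma, lam, tmax=50.0, n=200_000):
    """Midpoint-rule approximation of int_0^inf e**(-sigma t) E_{1/2}(-lam sqrt(t)) dt,
    using the closed form E_{1/2}(-x) = exp(x**2) * erfc(x); truncated at tmax."""
    dt = tmax / n
    acc = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        x = lam * sqrt(t)
        acc += exp(-sigma * t) * exp(x * x) * erfc(x) * dt
    return acc

def rhs(sigma, lam, beta=0.5):
    """sigma**(beta-1) / (sigma**beta + lam), the claimed double Laplace transform."""
    return sigma ** (beta - 1.0) / (sigma ** beta + lam)
```

For \(\sigma =\lambda =1\) the integrand reduces to \(\mathrm {erfc}(\sqrt{t})\), whose integral over \((0,\infty )\) is indeed \(1/2\).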

Remark 3

Inverse subordinators are actively used to produce stochastic representations for solutions of evolution equations of the form (1.5) with convolution kernels also in the case when the generator \((L,\text {Dom}(L))\) of a Lévy process is replaced by an arbitrary generator of a strongly continuous semigroup (see, e.g., [5, 27, 28] and the works of other authors). Statements (ii)-(iii) of Theorem 1 can also be generalized to this case by applying different tools. This topic will be presented in our next paper.

3 Proofs for Section 2

3.1 Proof of Theorem 1

The proof of Theorem 1 requires several auxiliary results. We first connect the characteristic function of the 1-dimensional marginals of any process X, which provides a stochastic solution to (1.5), to a family of Volterra equations of second kind.

Proposition 2

(General Relation) Let Assumptions 1, 2 hold. Then \((X_t)_{t\geqslant 0}\) provides a stochastic solution to the evolution equation (1.5) if and only if

$$\begin{aligned} 1-\varphi _{X_t}(p)=\psi (p)\int _0^t k(t,s)\varphi _{X_s}(p)ds,\qquad \forall \,\, p\in {{\mathbb {R}}}^d,\quad \forall \,\,t>0, \end{aligned}$$
(3.1)

where \(\varphi _{X_t}(p):={\mathbb {E}}\left[ e^{ip\cdot X_t}\right] \) is the characteristic function of \(X_t\), \(t>0\).

Remark 4

The General Relation (3.1) relates the parameters \(\psi \) and k of the equation to the stochastic process X.

Remark 5

If relation (3.1) holds for each \(p\in {{\mathbb {R}}}^d\), then it holds in particular for \(p=0\). Since \(\varphi _{X_t}(0)=1\) for each \(t\geqslant 0\), the right-hand side of (3.1) must be zero at \(p=0\) for each \(t\geqslant 0\). This is possible only if \(\psi (0)=0\), i.e. there is no killing term in the Lévy-Khintchine representation of \(\psi \) (and the underlying Lévy process has infinite lifetime).

Proof of Proposition 2

Let \(u(t,x)\) be given by (2.2). Then \(\Vert u(t,\cdot )\Vert _\infty \leqslant \Vert u_0\Vert _\infty \) for each \(t\geqslant 0\). Moreover,

$$\begin{aligned} {\mathbb {E}}\left[ \int _{{{\mathbb {R}}}^d}|u_0(x+X_t)|dx \right] ={\mathbb {E}}\left[ \int _{{{\mathbb {R}}}^d}|u_0(y)|dy \right] =\Vert u_0\Vert _{L^1({{\mathbb {R}}}^d)}<\infty \end{aligned}$$

since \(u_0\in S({{\mathbb {R}}}^d)\subset L^1({{\mathbb {R}}}^d)\). Therefore, we have by the Fubini theorem

$$\begin{aligned} \int _{{{\mathbb {R}}}^d}\!|u(t,x)|dx&\leqslant \int _{{{\mathbb {R}}}^d}\!{\mathbb {E}}\left[ |u_0(x+X_t) | \right] dx\\&={\mathbb {E}}\left[ \int _{{{\mathbb {R}}}^d}\!|u_0(x+X_t)|dx \right] =\Vert u_0\Vert _{L^1({{\mathbb {R}}}^d)}<\infty , \end{aligned}$$

i.e. \(u(t,\cdot )\in L^1({{\mathbb {R}}}^d)\) for all \(t\geqslant 0\) and \(\Vert u(t,\cdot )\Vert _{L^1({{\mathbb {R}}}^d)}\leqslant \Vert u_0\Vert _{L^1({{\mathbb {R}}}^d)}\). Hence we can apply the Fourier transform to \(u(t,\cdot )\) with respect to the space variable, and the Fubini theorem yields:

$$\begin{aligned} {\mathfrak {F}}\left[ u(t,\cdot ) \right] (p)&=(2\pi )^{-d/2}\int _{{{\mathbb {R}}}^d}e^{-ix\cdot p}{\mathbb {E}}\left[ u_0(x+X_t) \right] dx\nonumber \\&={\mathbb {E}}\left[ (2\pi )^{-d/2}\int _{{{\mathbb {R}}}^d}e^{-i(y-X_t)\cdot p}u_0(y)dy \right] ={\mathfrak {F}}[u_0](p)\varphi _{X_t}(p). \end{aligned}$$
(3.2)

Note that since \(u(t,\cdot )\in L^1({{\mathbb {R}}}^d)\), we have \({\mathfrak {F}}\left[ u(t,\cdot ) \right] \in C_\infty ({{\mathbb {R}}}^d)\) for all \(t\geqslant 0\). Since \(|\varphi _{X_t}(p)|\leqslant 1\) for all \(p\in {{\mathbb {R}}}^d\), \(t\geqslant 0\), we have

$$\begin{aligned} \left| {\mathfrak {F}}[u(t,\cdot )](p) \right| \leqslant \left| {\mathfrak {F}}[u_0](p) \right| \qquad \forall \,\,p\in {{\mathbb {R}}}^d,\quad t\geqslant 0. \end{aligned}$$
(3.3)

Since \(u_0\in S({{\mathbb {R}}}^d)\), also \({\mathfrak {F}}[u_0]\in S({{\mathbb {R}}}^d)\). Hence, by (3.3), \({\mathfrak {F}}[u(t,\cdot )]\) is a bounded function which decays at least as fast as \({\mathfrak {F}}[u_0]\) as \(|p|\rightarrow \infty \). Therefore, \(u(t,\cdot )\) is a smooth function for each \(t\geqslant 0\) by the properties of the Fourier transform. Moreover, \(\lim _{|x|\rightarrow \infty }u(t,x)=0\) for all \(t\geqslant 0\) by the dominated convergence theorem. Hence \(u(t,\cdot )\in C_\infty ({{\mathbb {R}}}^d)\). Further, again by (3.3), the function \(-\psi {\mathfrak {F}}[u(t,\cdot )]\) belongs to \(L^1({{\mathbb {R}}}^d)\) and is bounded, since the symbol \(\psi \) grows at most quadratically. Hence \({\mathfrak {F}}^{-1}\circ \psi \circ {\mathfrak {F}}[u(t,\cdot )] \in C_\infty ({{\mathbb {R}}}^d)\) by the properties of the Fourier transform. So, \(u(t,\cdot )\in \text {Dom}(L)\) for all \(t\geqslant 0\).

The integral \(\int _0^t k(t,s)\varphi _{X_s}(p)\, ds\) is finite for all \(t\geqslant 0\) due to Assumption 2, since \(|\varphi _{X_s}(p)|\leqslant 1\) for all \(p\in {{\mathbb {R}}}^d\) and \(s\geqslant 0\). Analogously, the integrals \(\int _0^t k(t,s)Lu(s,x)ds\) and \(\int _0^t k(t,s)\psi (p){\mathfrak {F}}[u(s,\cdot )](p)\,ds\) are well-defined and finite for all \(t\geqslant 0\) by Assumption 2, equality (3.2) and estimate (3.3), since \(-\psi {\mathfrak {F}}[u_0]\in L^1({{\mathbb {R}}}^d)\) and is bounded. Therefore, both sides of equation (1.5) make sense for u given by (2.2). Further, by the Fubini theorem (since \(\left| \psi {\mathfrak {F}}[u(t,\cdot )]\right| \) is bounded and decays fast at infinity) and by (3.2), it holds

$$\begin{aligned}&\int _0^t k(t,s)Lu(s,x)ds=\int _0^t k(t,s){\mathfrak {F}}^{-1}\left[ (-\psi ){\mathfrak {F}}[u(s,\cdot )]\right] (x)ds\\&\quad ={\mathfrak {F}}^{-1}\!\left[ \int _0^t k(t,s)(-\psi ){\mathfrak {F}}[u(s,\cdot )]ds\right] (x)={\mathfrak {F}}^{-1}\! \left[ (-\psi ){\mathfrak {F}}[u_0]\int _0^t k(t,s)\varphi _{X_s}ds\right] \!(x). \end{aligned}$$

Assume now that X is such that u given by (2.2) solves equation (1.5). Applying the Fourier transform to both sides of equation (1.5), we obtain by (3.2), for \(p\in {{\mathbb {R}}}^d\), \(t\geqslant 0\),

$$\begin{aligned} {\mathfrak {F}}[u_0](p)\varphi _{X_t}(p) ={\mathfrak {F}}[u_0](p)-\psi (p){\mathfrak {F}}[u_0](p)\int _0^t k(t,s)\varphi _{X_s}(p)ds. \end{aligned}$$
(3.4)

Since \(u_0\) can be an arbitrary function from \(S({{\mathbb {R}}}^d)\), the equality (3.4) is equivalent to (3.1). Conversely, if X is such that \(\varphi _{X_t}\) solves equation (3.1), then equation (3.4) holds for any \(u_0\in S({{\mathbb {R}}}^d)\), and hence u given by (2.2) solves equation (1.5). \(\square \)
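As a concrete illustration of (3.2) (not needed for the proof): for one-dimensional Brownian motion \(X_t=B_t\) and Gaussian initial data \(u_0(x)=e^{-x^2/2}\), the expectation \({\mathbb {E}}[u_0(x+B_t)]\) is a convolution of Gaussians, and inverting \({\mathfrak {F}}[u_0]\varphi _{B_t}\) gives the closed form \((1+t)^{-1/2}e^{-x^2/(2(1+t))}\). A short stdlib-only Python sketch (illustrative parameter values) compares the quadrature-computed convolution with this closed form:

```python
import math

def u(t, x, m=20000, L=10.0):
    """E[u0(x + B_t)] for u0(y) = exp(-y^2/2): midpoint-rule convolution of u0
    with the heat kernel p_t(y) = (2*pi*t)^{-1/2} exp(-y^2/(2t)) on [-L, L]."""
    h = 2 * L / m
    total = 0.0
    for j in range(m):
        y = -L + (j + 0.5) * h
        total += (math.exp(-(x + y) ** 2 / 2)
                  * math.exp(-y * y / (2 * t)) / math.sqrt(2 * math.pi * t))
    return total * h

t, x = 0.5, 0.3  # illustrative evaluation point
closed_form = math.exp(-x * x / (2 * (1 + t))) / math.sqrt(1 + t)
# u(t, x) and closed_form agree to quadrature accuracy
```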

We now discuss the family of Volterra equations in the General Relation on the space \(B_b([0,T],{{\mathbb {C}}})\) of bounded Borel-measurable complex-valued functions defined on the segment [0, T], \(T>0\). It is a Banach space with the supremum-norm \(\Vert \cdot \Vert _\infty \).

Lemma 1

Let Assumption 2 hold. Then for any \(\lambda \in {{\mathbb {C}}}\) and any \(T>0\) there exists \(n_T\in {\mathbb {N}}\) such that the \(n_T\)-th power of the operator \({\mathcal {R}}_\lambda \,:\,B_b([0,T],{{\mathbb {C}}})\rightarrow B_b([0,T],{{\mathbb {C}}})\),

$$\begin{aligned} \left( {\mathcal {R}}_\lambda g\right) (t):= \left\{ \begin{array}{ll} 1-\lambda \int _0^t k(t,s)g(s)ds, &{} t\in (0,T],\\ 1, &{} t=0, \end{array}\right. \quad g\in B_b([0,T],{{\mathbb {C}}}) \end{aligned}$$
(3.5)

is a strict contraction.

Proof

Let us fix \(\lambda \in {{\mathbb {C}}}\) and \(T>0\). Due to Assumption 2, it holds for any \(g\in B_b([0,T],{{\mathbb {C}}})\), \(n\in {\mathbb {N}}\), and any \(t\in (0,T]\)

$$\begin{aligned}&t^{(\alpha ^*-1)n}\left| \int _0^tk(t,s)g(s)ds \right| \nonumber \\ {}&\quad \leqslant \left( \sup _{0<s\leqslant T} |s^{(\alpha ^*-1)(n-1)}g(s)|\right) t^{(\alpha ^*-1)n} \int _0^t|k(t,s)|s^{(1-\alpha ^*)(n-1)}ds\nonumber \\ {}&\quad \leqslant \left( \sup _{0<s\leqslant T} |s^{(\alpha ^*-1)(n-1)}g(s)|\right) \frac{\Vert k(t,\cdot )\Vert _{L^{1+\varepsilon }((0,t))}t^{\alpha ^*-1}t^{\varepsilon /(1+\varepsilon )}}{((1-\alpha ^*) (n-1)(1+1/\varepsilon )+1)^{\varepsilon /(1+\varepsilon )}} \nonumber \\ {}&\quad \leqslant \frac{K_T}{((1-\alpha ^*)(n-1)(1+1/\varepsilon )+1)^{\varepsilon /(1+\varepsilon )}} \left( \sup _{0<s\leqslant T} |s^{(\alpha ^*-1)(n-1)}g(s)|\right) . \end{aligned}$$
(3.6)

Choosing \(n=1\), we have \(\Vert {\mathcal {R}}_\lambda g\Vert _\infty \leqslant 1+ K_T|\lambda | T^{1-\alpha ^*}\Vert g\Vert _\infty <\infty \), i.e. the operator \({\mathcal {R}}_\lambda \) maps \(B_b([0,T],{{\mathbb {C}}})\) into itself. Since for all f, \(g\in B_b([0,T],{{\mathbb {C}}})\) and all \(t\in [0,T]\)

$$\begin{aligned} \left| {\mathcal {R}}_\lambda g(t)-{\mathcal {R}}_\lambda f(t)\right| \leqslant |\lambda | K_T T^{1-\alpha ^*}\Vert g-f\Vert _\infty , \end{aligned}$$

\({\mathcal {R}}_\lambda \) is a continuous operator on \(B_b([0,T],{{\mathbb {C}}})\). Further, for each \(n\in {\mathbb {N}}\) and \(t\in (0,T]\), we obtain due to (3.6)

$$\begin{aligned}&t^{(\alpha ^*-1)(n+1)} \left| {\mathcal {R}}^{n+1}_\lambda g(t)-{\mathcal {R}}^{n+1}_\lambda f(t)\right| \\&\quad = t^{(\alpha ^*-1)(n+1)} |\lambda | \left| \int _0^t k(t,s) ({\mathcal {R}}^{n}_\lambda g(s)-{\mathcal {R}}^{n}_\lambda f(s)) ds\right| \\&\quad \leqslant \frac{K_T|\lambda |}{((1-\alpha ^*)n(1+1/\varepsilon )+1)^{\varepsilon /(1+\varepsilon )}} \sup _{0<s \leqslant T} |s^{(\alpha ^*-1)n} ({\mathcal {R}}^{n}_\lambda g(s)-{\mathcal {R}}^{n}_\lambda f(s))|. \end{aligned}$$

Proceeding inductively, we arrive at

$$\begin{aligned}&\sup _{0\leqslant t\leqslant T} \left| {\mathcal {R}}^{n+1}_\lambda g(t)-{\mathcal {R}}^{n+1}_\lambda f(t)\right| \nonumber \\&\quad \leqslant (|\lambda |K_T T^{1-\alpha ^*})^{n+1}\prod _{l=1}^n \big (l(1-\alpha ^*)(1+1/\varepsilon )+1 \big )^{\frac{-\varepsilon }{1+\varepsilon }} \Vert g-f\Vert _\infty \; . \end{aligned}$$
(3.7)

Since the factor in front of \(\Vert g-f\Vert _\infty \) tends to 0 as \(n\rightarrow \infty \), there exists \(n_T\in {\mathbb {N}}\) such that \({\mathcal {R}}^{n_T}_\lambda \) is a strict contraction on \(B_b([0,T],{{\mathbb {C}}})\). \(\square \)
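The speed at which the prefactor in (3.7) decays can be seen numerically. The following sketch (with illustrative, hypothetical values for \(|\lambda |\), \(K_T\), T, \(\alpha ^*\) and \(\varepsilon \); stdlib-only Python) evaluates that prefactor and shows that it eventually drops below 1 and tends to 0:

```python
import math

lam_abs, K_T, T = 2.0, 1.0, 1.5     # illustrative values, not from the paper
alpha_star, eps = 0.5, 1.0          # hypothetical choice as in Assumption 2

def prefactor(n):
    """(|lam| K_T T^{1-alpha*})^{n+1} * prod_{l=1}^{n} (l(1-alpha*)(1+1/eps)+1)^{-eps/(1+eps)},
    i.e. the factor in front of ||g - f||_inf in (3.7)."""
    base = (lam_abs * K_T * T ** (1 - alpha_star)) ** (n + 1)
    prod = 1.0
    for l in range(1, n + 1):
        prod *= (l * (1 - alpha_star) * (1 + 1 / eps) + 1) ** (-eps / (1 + eps))
    return base * prod

values = [prefactor(n) for n in range(1, 60)]
# the prefactor may first grow, but then decays to 0 faster than any geometric sequence
```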

Corollary 1

Let Assumption 2 hold. Then, for each \(\lambda \in {{\mathbb {C}}}\), there exists a unique solution \(\varPhi (\cdot ,-\lambda )\in B_b([0,T],{{\mathbb {C}}})\), \(\forall \,\, T>0\), of the following Volterra equation of the second kind

$$\begin{aligned} \varPhi (t,-\lambda )=1-\lambda \int _0^t k(t,s)\varPhi (s,-\lambda )ds,\qquad t>0. \end{aligned}$$
(3.8)

Moreover, \(\lim _{t\searrow 0}\varPhi (t,-\lambda )=1\) locally uniformly with respect to \(\lambda \in {{\mathbb {C}}}\), \(\varPhi (t,\cdot )\) is an entire function for all \(t\geqslant 0\) and equalities (2.3) and (2.4) hold.

Proof

Fix any \(T>0\). By the Banach fixed-point theorem, the strict contraction \({\mathcal {R}}_\lambda ^{n_T}\) from Lemma 1 has exactly one fixed point \(\varPhi (\cdot ,-\lambda )\in B_b([0,T],{{\mathbb {C}}})\). Hence \(\varPhi (\cdot ,-\lambda )\) is also the unique fixed point of the operator \({\mathcal {R}}_\lambda \), i.e. equation (3.8) has the unique solution \(\varPhi (\cdot ,-\lambda )\in B_b([0,T],{{\mathbb {C}}})\), which can be obtained by Picard iteration

$$\begin{aligned} \varPhi (t,-\lambda )=\lim _{n\rightarrow \infty }\varPhi _n(t,-\lambda ), \qquad t\in [0,T], \end{aligned}$$

where

$$\begin{aligned}&\varPhi _0(t,-\lambda ):=1,\quad t\in [0,T],\\&\quad \varPhi _n(t,-\lambda ):=\left( {\mathcal {R}}_\lambda \varPhi _{n-1}(\cdot ,-\lambda )\right) (t)=\\&\quad = \left\{ \begin{array}{ll} 1-\lambda \int _0^t k(t,s)\varPhi _{n-1}(s,-\lambda )ds, &{} t\in (0,T], \\ 1, &{} t=0, \end{array}\right. \quad n\in {\mathbb {N}}. \end{aligned}$$

Using auxiliary functions \(\phi _0(t,\lambda ):=1\) and \(\phi _n(t,\lambda ):=\varPhi _n(t,-\lambda )-\varPhi _{n-1}(t,-\lambda )\) for \(n\in {\mathbb {N}}\) (i.e. \(\phi _n(t,\lambda )= -\lambda \int _0^t k(t,s)\phi _{n-1}(s,\lambda )ds\) for \(t\in (0,T]\)), we get

$$\begin{aligned} \varPhi (t,-\lambda )=\sum _{n=0}^\infty \phi _n(t,\lambda )=\sum _{n=0}^\infty c_n(t)(-\lambda )^n, \end{aligned}$$

where the coefficients \(c_n(t)\), \(n\in {\mathbb {N}}\), \(t\in [0,T]\), are given by (2.4). Therefore, for each \(\lambda \in {{\mathbb {C}}}\) and each \(T>0\), the function \(\varPhi \) given by (2.3) belongs to \(B_b([0,T],{{\mathbb {C}}})\). In particular, for all \(t\geqslant 0\), the series \(\sum _{n=0}^\infty c_n(t)\lambda ^n\) converges in \({{\mathbb {C}}}\), i.e. \(\varPhi (t,\cdot )\) is an entire function for all \(t\geqslant 0\). Moreover, for each \(n\in {\mathbb {N}}\) and each \(t\in [0,T]\), iterating (3.6) analogously to the derivation of (3.7) yields

$$\begin{aligned} |c_n(t)|\leqslant K_T^{n}T^{n(1-\alpha ^*)}\prod _{l=1}^{n-1} \left[ \big (l(1-\alpha ^*)(1+1/\varepsilon )+1 \big )^{\frac{-\varepsilon }{1+\varepsilon }} \right] \rightarrow 0,\quad T\rightarrow 0, \end{aligned}$$

and for any \(R>0\) and any \(\lambda \in \{z\in {{\mathbb {C}}}\,:\,|z|\leqslant R\}\)

$$\begin{aligned} \left| \varPhi (t,-\lambda ) \right|&\leqslant \sum _{n=0}^\infty R^n|c_n(t)|\\&\leqslant \sum _{n=0}^\infty R^n K_T^{n} T^{n(1-\alpha ^*)}\prod _{k=1}^{n-1} \left[ \big (k(1-\alpha ^*)(1+1/\varepsilon )+1 \big )^{\frac{-\varepsilon }{1+\varepsilon }} \right] <\infty . \end{aligned}$$

Hence \(\lim \limits _{t\searrow 0}\varPhi (t,-\lambda )=1\) locally uniformly w.r.t. \(\lambda \in {{\mathbb {C}}}\). \(\square \)

We are now ready to present the proof of Theorem 1.

Proof of Theorem 1

(i) By Corollary 1, the family of Volterra equations in the General Relation (3.1) has \(\varPhi (t,-\psi (p))\) as its unique locally bounded solution, where \(\varPhi \) is given by (2.3), (2.4). Thus, by Proposition 2, \((X_t)_{t\geqslant 0}\) provides a stochastic solution to the evolution equation (1.5) if and only if \(\varphi _{X_t}(\cdot )=\varPhi (t,-\psi (\cdot ))\) holds for every \(t\geqslant 0\). Now, if a stochastic solution exists, then \(\varPhi (t,-\psi (\cdot ))\) is positive definite for every \(t\geqslant 0\), since every characteristic function has this property. Conversely, if \(\varPhi (t,-\psi (\cdot ))\) is a positive definite function, then, for each \(t\geqslant 0\), there exists a random variable \({\tilde{X}}_t\) such that \(\varPhi (t,-\psi (\cdot ))=\varphi _{{\tilde{X}}_t}\) (recalling that \(\varPhi (t,0)=1\)). We may now choose any stochastic process \((X_t)_{t\geqslant 0}\) such that \(X_t\) has the same distribution as \({\tilde{X}}_t\) for every \(t\geqslant 0\). For example, one can construct an independent family \((X_t)_{t\geqslant 0}\) with these one-dimensional marginal distributions on an infinite product space. Then \(\varphi _{X_t}(\cdot )=\varPhi (t,-\psi (\cdot ))\) for every \(t\geqslant 0\), and hence a stochastic solution exists.

(ii) Let the restriction of \(\varPhi (t,-\cdot )\) to \((0,\infty )\) be a completely monotone function for each \(t\geqslant 0\). Then, for each \(t\geqslant 0\), there exists a non-negative random variable A(t) whose distribution \({\mathcal {P}}_{A(t)}\) has Laplace transform \(\varPhi (t,-\cdot )\) by the Bernstein theorem. Let \((Y_t)_{t\geqslant 0}\) be a d-dimensional Lévy process with characteristic exponent \(\psi \) which is independent of \((A(t))_{t\geqslant 0}\). Then we have

$$\begin{aligned} \varPhi (t,-\psi (p))&=\int _0^\infty e^{-a\psi (p)}{\mathcal {P}}_{A(t)}(da)=\int _0^\infty {\mathbb {E}}\left[ e^{ip\cdot Y_a}\right] {\mathcal {P}}_{A(t)}(da)\\&={\mathbb {E}}\left[ e^{ip\cdot Y_{A(t)}}\right] =\varphi _{Y_{A(t)}}(p). \end{aligned}$$

Therefore, the function \(\varPhi (t,-\psi (\cdot ))\) is positive definite and \((Y_{A(t)})_{t\geqslant 0}\) provides a stochastic solution to (1.5) by statement (i) of Theorem 1.

(iii) Let now, additionally, the symbol \(\psi \) be given by \(\psi :=f\circ {\widetilde{\psi }}\) for some other CNDF \({\widetilde{\psi }}\) and some Bernstein function f. Then the function \(\varPhi (t,-f(\cdot ))\) is completely monotone, being a composition CMF\(\,\circ \,\)BF. Hence, for each \(t\geqslant 0\), there exists a non-negative random variable \({\widetilde{A}}(t)\) whose distribution \({\mathcal {P}}_{{\widetilde{A}}(t)}\) has Laplace transform \(\varPhi (t,-f(\cdot ))\). Taking a d-dimensional Lévy process \(({\widetilde{Y}}_t)_{t\geqslant 0}\) with characteristic exponent \({\widetilde{\psi }}\) which is independent of \(({\widetilde{A}}(t))_{t\geqslant 0}\), we obtain

$$\begin{aligned} \varPhi (t,-\psi (p))&=\varPhi (t,-f({\widetilde{\psi }}(p))) =\int _0^\infty e^{-a{\widetilde{\psi }}(p)}{\mathcal {P}}_{{\widetilde{A}}(t)}(da)\\&=\int _0^\infty {\mathbb {E}}\left[ e^{ip\cdot {\widetilde{Y}}_a}\right] {\mathcal {P}}_{{\widetilde{A}}(t)}(da)={\mathbb {E}}\left[ e^{ip\cdot {\widetilde{Y}}_{{\widetilde{A}}(t)}}\right] =\varphi _{{\widetilde{Y}}_{{\widetilde{A}}(t)}}(p), \end{aligned}$$

and we may conclude as in (ii). \(\square \)

Remark 6

Let \(\varPhi \) and f be as in Theorem 1 (iii). Then the completely monotone function \(\varPhi (t,-f(\cdot ))\) extends to an analytic function in \(\varPi _+:=\{z\in {{\mathbb {C}}}\,:\,\mathrm{Re\,}z>0\}\). This function need not be analytic in the whole complex plane. It follows from Corollary 1 (with \(\lambda =f(z)\)) that \( {\widetilde{\varPhi }}(t,\cdot ):= \varPhi (t,-f(\cdot ))\) solves the following version of the Volterra equation (3.8):

$$\begin{aligned} {\widetilde{\varPhi }}(t,z)=1-f(z)\int _0^tk(t,s){\widetilde{\varPhi }}(s,z)ds,\qquad t>0,\quad z\in \varPi _+. \end{aligned}$$

Remark 7

Suppose that Assumptions 1, 2 are in force. If \((X_t)_{t\geqslant 0}\) provides a stochastic solution to (1.5), then by Theorem 1 and (3.2),

$$\begin{aligned} {\mathbb {E}}\left[ u_0\left( x+X_t\right) \right] ={\mathfrak {F}}^{-1}\big [{\mathfrak {F}}[u_0](\cdot ) \varPhi (t,-\psi (\cdot ))\big ](x). \end{aligned}$$

The function on the right-hand side may be well-defined even if no stochastic solution exists. Adapting the arguments in the proof of Proposition 2 in the obvious way, one can, e.g., show that

$$\begin{aligned} u(t,x):={\mathfrak {F}}^{-1}\big [{\mathfrak {F}}[u_0](\cdot )\varPhi (t,-\psi (\cdot ))\big ](x) \end{aligned}$$

solves (1.5) for every initial condition \(u_0\in S({{\mathbb {R}}}^d)\), if \(\varPhi (t,-\psi (p))\) satisfies a polynomial growth condition in p which is locally uniform in t.

3.2 Proof of Proposition 1

The continuity of \(c_n\) at \(t=0\) has already been shown in the proof of Corollary 1. We next prove inductively that \(c_n\) is continuous on \((0,\infty )\). The claim is trivial for the constant function \(c_0\). We write

$$\begin{aligned} c_n(t)=t\int _0^1 k(t,ts)c_{n-1}(st) ds, \end{aligned}$$

and show that \(c_n\) is continuous on \((1/T,T)\) for every \(T>0\). In view of the induction hypothesis and the continuity assumption on k, we only need to argue that we can interchange the limit (in the t-variable) and integration in this expression. To this end, we apply the de la Vallée-Poussin criterion for uniform integrability of the family \((k(t,ts)c_{n-1}(st))_{t\in (1/T,T)}\) for arbitrary \(T>0\). Thanks to Assumption 2, with the same choice of \(\varepsilon , \alpha ^*\) as there,

$$\begin{aligned}&\sup _{1/T\leqslant t\leqslant T} \int _0^1 |k(t,ts)c_{n-1}(st)|^{1+\varepsilon } ds\\&\quad \leqslant \sup _{0\leqslant u \leqslant T} |c_{n-1}(u)|^{1+\varepsilon } \sup _{1/T\leqslant t \leqslant T} t^{-1}t^{\alpha ^*(1+\varepsilon )} \int _0^t |k(t,s)|^{1+\varepsilon }ds\;T^{\alpha ^*(1+\varepsilon )} \\&\quad \leqslant T^{\alpha ^*(1+\varepsilon )} \sup _{0\leqslant u \leqslant T} |c_{n-1}(u)|^{1+\varepsilon } K_T^{1+\varepsilon }< \infty . \end{aligned}$$

This argument finishes the proof of continuity of \(c_n\). Recall that

$$\begin{aligned} \varPhi (t,\lambda )=\sum _{n=0}^\infty c_n(t)\lambda ^n. \end{aligned}$$

We have shown that all the summands are continuous in \((t,\lambda )\). Now, for every \(T>0\), \(t\in [0,T]\) and \(\lambda \in {\mathbb {C}}\) with \(|\lambda |\leqslant T\),

$$\begin{aligned} |c_n(t)\lambda ^n|\leqslant K_T^{n}T^{n(2-\alpha ^*)}\prod _{l=1}^{n-1} \left[ \big (l(1-\alpha ^*)(1+1/\varepsilon )+1 \big )^{\frac{-\varepsilon }{1+\varepsilon }} \right] , \end{aligned}$$

by the proof of Corollary 1, and the right-hand side is summable. We may thus interchange limits in the \((t,\lambda )\)-variables and summation, yielding the continuity of \(\varPhi \). For the continuity of the function u, recall that by (3.2)

$$\begin{aligned} u(t,x)={\mathfrak {F}}^{-1}[{\mathfrak {F}}[u_0](\cdot )\varphi _{X_t}(\cdot )](x), \end{aligned}$$

and thus continuity is inherited from \(\varPhi \) as a consequence of the dominated convergence theorem with integrable majorant \(|{\mathfrak {F}}[u_0]|\), since characteristic functions are bounded by 1.

3.3 Proof of Theorem 2

Suppose that k is homogeneous of degree \(\beta -1\) for some \(\beta >0\). Since \(k(t,ts)=t^{\beta -1}k(1,s)\), Assumption 3 is obviously satisfied. We next check Assumption 2. To this end note that

$$\begin{aligned} t^{-1/(1+\varepsilon )} \Vert k(t,\cdot )\Vert _{L^{1+\varepsilon }((0,t))}= & {} t^{-1/(1+\varepsilon )}\left( t \int _0^1 |k(t,ts)|^{1+\varepsilon } ds\right) ^{1/(1+\varepsilon )}\\= & {} t^{\beta -1} \Vert k(1,\cdot )\Vert _{L^{1+\varepsilon }((0,1))}. \end{aligned}$$

Hence, Assumption 2 is satisfied with the choice \(\alpha ^*:=1-\beta \in [0,1)\) in the case \(\beta \in (0,1]\) and with the choice \(\alpha ^*:=0\) in the case \(\beta >1\).

We next observe, inductively, that \(c_n(t)=c_n(1)t^{n\beta }\), because

$$\begin{aligned} c_n(t)&=\int _0^t k(t,s)c_{n-1}(s)ds=t^{\beta }\int _0^1k(1,s)c_{n-1}(ts)ds\\&=t^{n\beta }\int _0^1k(1,s)c_{n-1}(s)ds=t^{n\beta }c_n(1). \end{aligned}$$

Let \({\hat{c}}_n=c_n(1)\). Then, by the previous considerations, \({\hat{c}}_n\) satisfies the recursion

$$\begin{aligned} {\hat{c}}_n= {\hat{c}}_{n-1} \int _0^1k(1,s)s^{\beta (n-1)} ds,\quad {\hat{c}}_0=1 \end{aligned}$$

and

$$\begin{aligned} \varPhi (t,\lambda )=\sum _{n=0}^\infty {\hat{c}}_n (t^\beta \lambda )^n={\hat{\varPhi }}(t^\beta \lambda ). \end{aligned}$$

Assume now that the restriction of the function \({\hat{\varPhi }}(-\cdot )\) on \((0,\infty )\) is completely monotone. Then, for \(\gamma \in (0,1]\), \(x \mapsto {\hat{\varPhi }}(-x^\gamma )\) is completely monotone on \((0,\infty )\), because \((\cdot )^\gamma \) is a BF. Thus, there is a nonnegative random variable \({\tilde{A}}\) with Laplace transform given by \({\hat{\varPhi }}(-(\cdot )^\gamma )\). Then, for every \(t\geqslant 0\), the Laplace transform of \({\tilde{A}} t^{\beta /\gamma }\) is given by

$$\begin{aligned} {\mathbb {E}}\left[ e^{-\lambda {\tilde{A}} t^{\beta /\gamma }} \right] ={\hat{\varPhi }}(-\lambda ^\gamma t^\beta )=\varPhi (t,-\lambda ^\gamma ),\quad \lambda >0. \end{aligned}$$

Now, Theorem 1, (iii), applies.

3.4 Proof of Theorem 3

We first note that Assumption 2 is satisfied with \(\alpha ^*=1-\beta \in [0,1)\) for any sufficiently small \(\varepsilon >0\), because

$$\begin{aligned}&\sup \limits _{0<t\leqslant T}t^{\alpha ^* -\frac{1}{1+\varepsilon }}\Vert k(t,\cdot ) \Vert _{L^{1+\varepsilon }((0,t))}\\&\quad \leqslant Me^{\gamma T} \sup \limits _{0<t\leqslant T}\left( t^{\alpha ^*-\frac{1}{1+\varepsilon }} \left( \int _0^t (t-s)^{-\alpha ^*(1+\varepsilon )} ds\right) ^{1/(1+\varepsilon )}\right) \\&\quad =\frac{Me^{\gamma T}}{(1-\alpha ^*(1+\varepsilon ))^{1/(1+\varepsilon )}}. \end{aligned}$$

We next prove inductively that

$$\begin{aligned} e^{-\gamma t} |c_n(t)| \leqslant (M \varGamma (\beta ) t^\beta )^n \frac{1}{\varGamma (n\beta +1)}. \end{aligned}$$

This is obvious for \(n=0\) and the induction step follows by

$$\begin{aligned} e^{-\gamma t} |c_n(t)|\leqslant & {} \int _0^t M(t-s)^{\beta -1} e^{-\gamma s} |c_{n-1}(s)|ds\\\leqslant & {} M^n \varGamma (\beta )^{n-1} \frac{1}{\varGamma ((n-1)\beta +1)}\int _0^t (t-s)^{\beta -1} s^{\beta (n-1)} ds \\= & {} M^n \varGamma (\beta )^{n-1} t^{\beta n} \frac{1}{\varGamma ((n-1)\beta +1)}\int _0^1 (1-s)^{\beta -1} s^{\beta (n-1)} ds\\= & {} (M \varGamma (\beta ) t^\beta )^n \frac{1}{\varGamma (n\beta +1)}. \end{aligned}$$

Then, for every \(\lambda \in {\mathbb {C}}, \;t\geqslant 0\),

$$\begin{aligned} \sum _{n=0}^\infty |c_n(t)| |\lambda |^n\leqslant e^{\gamma t} E_\beta (M\varGamma (\beta )|\lambda | t^\beta )\leqslant const.\; e^{(\gamma + (M\varGamma (\beta )|\lambda |)^{1/\beta })t}, \end{aligned}$$

using the asymptotics for the Mittag-Leffler function, which can be found e.g. in [20], Eq. (6.4). We conclude that for every \(\lambda \geqslant 0\) and \(\sigma > \gamma + (M\varGamma (\beta )|\lambda |)^{1/\beta }\), the Laplace transform of \(\varPhi (\cdot ,-\lambda )\) exists and can be interchanged with the summation (by Fubini’s theorem with Lebesgue measure and counting measure), i.e.,

$$\begin{aligned} ({\mathcal {L}}\varPhi (\cdot ,-\lambda ))(\sigma )=\sum _{n=0}^\infty ({\mathcal {L}}c_n)(\sigma )(-\lambda )^n. \end{aligned}$$

By the convolution theorem for the Laplace transform and induction

$$\begin{aligned} ({\mathcal {L}}c_n)(\sigma )=({\mathcal {L}}{\mathfrak {K}})(\sigma )({\mathcal {L}}c_{n-1}) (\sigma )=\frac{1}{\sigma }[({\mathcal {L}}{\mathfrak {K}})(\sigma )]^n \end{aligned}$$

for \(\sigma >\gamma \), since \(({\mathcal {L}}1)(\sigma )=\sigma ^{-1}\). Thus, for \(\sigma > \gamma + (M\varGamma (\beta )|\lambda |)^{1/\beta }\),

$$\begin{aligned} ({\mathcal {L}}\varPhi (\cdot ,-\lambda ))(\sigma )=\frac{1}{\sigma }\frac{1}{1+\lambda ({\mathcal {L}}{\mathfrak {K}})(\sigma )}. \end{aligned}$$

Since \(\varPhi \) inherits continuity from \({\mathfrak {K}}\) by Proposition 1 and \(t\mapsto E\left[ e^{-\lambda A(t)}\right] \) is RCLL by dominated convergence, Lerch’s uniqueness theorem implies that

$$\begin{aligned} \varPhi (t,-\lambda )= E\left[ e^{-\lambda A(t)}\right] \end{aligned}$$

for every \(\lambda \geqslant 0\) and \(t\geqslant 0\). Hence, \(\varPhi (t,-\cdot )\) is a CMF for every \(t\geqslant 0\) and part (ii) of Theorem 1 applies for the assertion concerning the stochastic solution.

4 Stochastic solutions for generalized time-fractional evolution equations with Marichev-Saigo-Maeda operators

In this section, we consider generalized time-fractional evolution equations of the form (1.5), where the kernel k is the kernel of the Marichev-Saigo-Maeda integral operator of generalized fractional calculus. The Marichev-Saigo-Maeda (M-S-M) operator was introduced by Marichev [31, 43] and later also studied by Saigo and Maeda [42]. Note that this operator includes as particular cases several operators of fractional calculus such as the Riemann-Liouville, Weyl, Erdélyi-Kober and Saigo operators. On the other hand, the M-S-M integral itself appears as a special case of the multiple Erdélyi-Kober integrals (of multiplicity \(m>1\)) introduced by Kiryakova as operators of Generalized Fractional Calculus (GFC) in the book [24] (see also the recent survey [25]). In the setting of this GFC, the M-S-M operators correspond to the case of multiplicity \(m=3\), since they are actually compositions of three commuting classical Erdélyi-Kober integrals. In this section, we construct the function \(\varPhi \) corresponding to a Marichev-Saigo-Maeda kernel k and discuss stochastic solutions of the considered evolution equations in terms of randomly sped up or slowed down Lévy processes.

Let us first recall that the third Appell function \(F_3\), a two-variable generalization of the Gauss hypergeometric function, is defined in the following way:

$$\begin{aligned} F_3\left( \alpha ,\alpha ',\beta ,\beta ',\gamma ,x,y\right) =\sum _{m,n\geqslant 0} \frac{(\alpha )_m(\beta )_m(\alpha ')_n(\beta ')_n}{(\gamma )_{m+n} n!m!} x^my^n, \end{aligned}$$
(4.1)

where \(\alpha ,\alpha ',\beta ,\beta ',\gamma \in {{\mathbb {C}}}\), \(\gamma \notin -{\mathbb {N}}\), and the general Pochhammer symbol \((\lambda )_\nu \) is defined as follows:

$$\begin{aligned} (\lambda )_\nu :=\left\{ \begin{array}{lll} 1, &{} \nu =0, &{} \lambda \in {{\mathbb {C}}},\\ \lambda (\lambda +1)\cdot \ldots \cdot (\lambda +n-1), &{} \nu =n\in {\mathbb {N}}, &{} \lambda \in {{\mathbb {C}}}. \end{array}\right. \end{aligned}$$

The series in (4.1) converges for \(|x|,|y| <1\) and can be analytically extended to real \(x,y<1\).

Theorem 4

Let \(b>0\), \(a>0\), \(\mu >-1\), and \(\nu >\max \{-b,-a\mu \}\). Consider the kernel

$$\begin{aligned}&k(t,s):=\nonumber \\&\quad =\frac{a}{\varGamma (b/a)}(t^a-s^a)^{\frac{b}{a}-1}t^{a-\nu }s^{\nu -1} F_3\left( \frac{\nu }{a}-1,\frac{b}{a},1,\mu , \frac{b}{a}, 1-\left( \frac{s}{t}\right) ^a,1-\left( \frac{t}{s}\right) ^a\right) , \end{aligned}$$
(4.2)

where \(0<s<t\). Then the kernel k is homogeneous of degree \(b-1\) and satisfies \(k(1,\cdot )\in L^{1+\varepsilon }((0,1))\) for some \(\varepsilon >0\). The corresponding function \({\hat{\varPhi }}\) in Theorem 2 has the following form:

$$\begin{aligned} {\hat{\varPhi }}(z)= \varGamma (\lambda _2) E_{\lambda _1,\lambda _2}^{\lambda _3}(z), \end{aligned}$$
(4.3)

where

$$\begin{aligned} \lambda _1=\frac{b}{a},\quad \lambda _2=\frac{\nu }{a}+\mu ,\quad \lambda _3= 1+\frac{\nu -a}{b}, \end{aligned}$$
(4.4)

and \(E_{\lambda _1,\lambda _2}^{\lambda _3}\) is the three parameter Mittag-Leffler (or Prabhakar) function

$$\begin{aligned} E_{\lambda _1,\lambda _2}^{\lambda _3}(z):=\sum _{n=0}^\infty \frac{\left( \lambda _3\right) _n}{\varGamma \left( \lambda _1 n+\lambda _2\right) n!}\,z^n, \end{aligned}$$

which, for \(\mathrm{Re\,}\lambda _1>0\), is well-defined on the whole of \({{\mathbb {C}}}\) and is an entire function.
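A minimal series implementation of the Prabhakar function (stdlib-only Python; an illustrative sketch, not part of the proofs) with a sanity check against the classical special case \(E^{1}_{\beta ,1}=E_\beta \), using \(E_{1/2}(-x)=e^{x^2}\mathrm{erfc}(x)\):

```python
import math

def prabhakar(l1, l2, l3, z, terms=120):
    """Three-parameter Mittag-Leffler function: sum over n of
    (l3)_n * z^n / (Gamma(l1*n + l2) * n!), with (l3)_n the rising factorial."""
    s, p = 0.0, 1.0  # p carries (l3)_n / n!, updated term by term
    for n in range(terms):
        s += p * z ** n / math.gamma(l1 * n + l2)
        p *= (l3 + n) / (n + 1)
    return s

# special case: (1)_n = n!, so E^1_{beta,1} = E_beta; and E_{1/2}(-x) = exp(x^2)*erfc(x)
x = 0.8
approx = prabhakar(0.5, 1.0, 1.0, -x)
exact = math.exp(x * x) * math.erfc(x)
```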

Proof

First note that

$$\begin{aligned}&k(t,ts) \\&\quad =\frac{a}{\varGamma (b/a)} t^{b-1} (1-s^a)^{b/a-1}s^{\nu -1} F_3\left( \nu /a-1,b/a,1,\mu , b/a, 1-s^a,1-1/s^a\right) , \end{aligned}$$

and thus k is homogeneous of degree \(b-1\). For \(0<s<1\), let

$$\begin{aligned} \kappa (s):=\frac{1}{\varGamma (b/a)} (1-s)^{b/a-1}s^{(\nu -1)/a} F_3(\nu /a-1,b/a,1,\mu , b/a, 1-s,1-1/s). \end{aligned}$$

Then, \(k(t,ts)=t^{b-1}k(1,s)=t^{b-1}a\kappa (s^a)\). In particular,

$$\begin{aligned} \Vert k(1,\cdot )\Vert ^{1+\varepsilon }_{L^{1+\varepsilon }((0,1))}&= \int _0^1 (a\kappa (s^a))^{1+\varepsilon } ds =a^\varepsilon \int _0^1 \kappa (s)^{1+\varepsilon } s^{1/a-1}ds . \end{aligned}$$

Inserting the definition of \(\kappa \), the integral converges if and only if the integral

$$\begin{aligned} \int _0^1 (1-s)^{\left( \frac{b}{a}-1\right) (1+\varepsilon )}s^{\frac{(1+\varepsilon )(\nu -1)}{a}+\frac{1}{a}-1} F_3\left( \frac{\nu }{a}-1,\frac{b}{a},1,\mu , \frac{b}{a}, 1-s,1-\frac{1}{s}\right) ^{1+\varepsilon } ds \end{aligned}$$

does. Recall the asymptotic behavior (see, e.g., Lemma 3.1.2 in [45])

$$\begin{aligned} F_3(\nu /a-1,b/a,1,\mu , b/a, 1-s,1-1/s)=O\left( (1-s)^{\min \{0,1-\frac{b}{a}\}}\right) ,\quad s\rightarrow 1, \end{aligned}$$

which shows convergence at \(s=1\) for small \(\varepsilon >0\), and

$$\begin{aligned} F_3(\nu /a-1,b/a,1,\mu , b/a, 1-s,1-1/s)=O\left( s^{\min \{\frac{b}{a},\mu ,\frac{b-\nu }{a}\}}\right) ,\quad s\rightarrow 0, \end{aligned}$$

which implies convergence at \(s=0\) for small \(\varepsilon \) due to our conditions on the parameters a, b, \(\mu \), \(\nu \).

Let us now determine the corresponding function \(\varPhi \). The recursion formula for the coefficients \({\hat{c}}_n\) (initialized with \({\hat{c}}_0=1\)) reads

$$\begin{aligned} {\hat{c}}_n&=\hat{c}_{n-1} \int _0^1 k(1,s)s^{(n-1)b} ds= \hat{c}_{n-1} a \int _0^1 \kappa (s^a)(s^a)^{(n-1)b/a} ds \\&= \hat{c}_{n-1} \int _0^1 \kappa (\theta ) \theta ^{(n-1)b/a+1/a-1}d\theta . \end{aligned}$$

Then,

$$\begin{aligned}&\frac{{\hat{c}}_{n}}{{\hat{c}}_{n-1}}\\&\quad = \frac{1}{\varGamma (b/a)} \int _0^1 \theta ^{\frac{nb+\nu }{a}-1} (1-\theta )^{\frac{b}{a}-1}\theta ^{-\frac{b}{a}} F_3\left( \frac{\nu }{a}-1, \frac{b}{a}, 1, \mu ,\frac{b}{a}, 1-\theta ,1- \frac{1}{\theta }\right) d\theta . \end{aligned}$$

Recall that the Marichev-Saigo-Maeda operator \(I^{\alpha ,\alpha ',\beta ,\beta ',\gamma }_{0+}\) is defined in the following way (see [42, 43]):

$$\begin{aligned}&\left( I^{\alpha ,\alpha ',\beta ,\beta ',\gamma }_{0+} f\right) (x)\\&\quad := \frac{x^{-\alpha }}{\varGamma (\gamma )}\int _0^x (x-\theta )^{\gamma -1}\theta ^{-\alpha '}F_3\left( \alpha ,\alpha ',\beta ,\beta ',\gamma , 1-\frac{\theta }{x}, 1-\frac{x}{\theta } \right) f(\theta )d\theta . \end{aligned}$$

Hence with \(f(\theta ):=\theta ^{\frac{n b+\nu }{a}-1}\) we have:

$$\begin{aligned} \frac{{\hat{c}}_{n}}{{\hat{c}}_{n-1}}= \left( I^{\frac{\nu }{a}-1,\frac{b}{a}, 1,\mu ,\frac{b}{a}}_{0+} f\right) (1). \end{aligned}$$

Note that the integral \(I^{\alpha ,\alpha ',\beta ,\beta ',\gamma }_{0+} f\) converges for \(f(\theta ):=\theta ^{\rho -1}\), and in this case we have

$$\begin{aligned} \left( I^{\alpha ,\alpha ',\beta ,\beta ',\gamma }_{0+} f\right) (x)=\frac{\varGamma (\rho )\varGamma (\rho +\gamma -\alpha -\alpha '-\beta ) \varGamma (\rho -\alpha '+\beta ')}{\varGamma (\rho +\beta ')\varGamma (\rho +\gamma -\alpha -\alpha ') \varGamma (\rho +\gamma -\alpha '-\beta )} \end{aligned}$$

in the case when \(\gamma >0\) and \(\rho >\max \{0,\alpha +\alpha '+\beta -\gamma , \alpha '-\beta ' \}\) (see [42]). Our choice of parameters

$$\begin{aligned} \alpha =\frac{\nu }{a}-1,\quad \alpha '=\gamma =\frac{b}{a}, \quad \beta =1,\quad \beta '=\mu , \quad \rho =\frac{nb+\nu }{a},\;n\in {\mathbb {N}}, \end{aligned}$$

leads, therefore, to the conditions

$$\begin{aligned} a,b>0, \quad b>-\nu ,\quad \nu >-a\mu , \end{aligned}$$
(4.5)

which we have postulated in the statement of Theorem 4. Assuming (4.5), we thus obtain, on the one hand,

$$\begin{aligned} \frac{{\hat{c}}_{n}}{{\hat{c}}_{n-1}}= & {} \frac{\varGamma (\frac{nb+\nu }{a})\varGamma (\frac{nb}{a})\varGamma (\frac{(n-1)b+\nu }{a} +\mu )}{\varGamma (\frac{nb+\nu }{a}+\mu ) \varGamma (\frac{nb}{a}+1)\varGamma (\frac{nb+\nu }{a}-1)} = \frac{\varGamma (\frac{(n-1)b +\nu }{a}+\mu )}{\varGamma (\frac{nb+\nu }{a}+\mu )}\cdot \frac{\frac{nb+\nu }{a}-1}{\frac{nb}{a}} \\= & {} \frac{\varGamma (\frac{(n-1)b+\nu }{a}+\mu )}{\varGamma (\frac{nb+\nu }{a}+\mu )} \cdot \frac{(n-1)+1+\frac{\nu -a}{b}}{n}. \end{aligned}$$
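The two simplification steps above only use the functional equation \(\varGamma (x+1)=x\,\varGamma (x)\); a minimal numerical confirmation (the parameter values are arbitrary choices satisfying (4.5)):

```python
import math

g = math.gamma
a, b, mu, nu = 1.3, 0.9, 0.7, 0.5  # arbitrary parameters satisfying (4.5)

for n in range(1, 8):
    # unsimplified ratio of Gamma factors
    lhs = (g((n * b + nu) / a) * g(n * b / a) * g(((n - 1) * b + nu) / a + mu)) / (
        g((n * b + nu) / a + mu) * g(n * b / a + 1) * g((n * b + nu) / a - 1)
    )
    # fully simplified form
    rhs = (g(((n - 1) * b + nu) / a + mu) / g((n * b + nu) / a + mu)) * (
        ((n - 1) + 1 + (nu - a) / b) / n
    )
    assert abs(lhs - rhs) < 1e-12 * abs(rhs)
```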

On the other hand, the coefficients of the three-parameter Mittag-Leffler function \(E^{\lambda _3}_{\lambda _1,\lambda _2}\) satisfy:

$$\begin{aligned}&\left( \frac{(\lambda _3)_n}{\varGamma (\lambda _1n+\lambda _2) n!}\right) \bigg / \left( \frac{(\lambda _3)_{n-1}}{\varGamma (\lambda _1(n-1)+\lambda _2) (n-1)!}\right) \\&\quad =\frac{\varGamma (\lambda _1(n-1)+\lambda _2) }{\varGamma (\lambda _1n+\lambda _2)}\cdot \frac{(n-1)+\lambda _3}{n}. \end{aligned}$$

Since \(E^{\lambda _3}_{\lambda _1,\lambda _2}(0)=\frac{1}{\varGamma (\lambda _2)}\), we obtain with \(\lambda _1\), \(\lambda _2\), \(\lambda _3\) as in (4.4)

$$\begin{aligned} {\hat{\varPhi }}(z)=\varGamma (\lambda _2)E^{\lambda _3}_{\lambda _1,\lambda _2}(z),\qquad z\in {{\mathbb {C}}}. \end{aligned}$$

\(\square \)
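The coefficient identification in the proof above can also be checked numerically: iterating the ratio of the \({\hat{c}}_n\) obtained from the Marichev-Saigo-Maeda formula reproduces \(\varGamma (\lambda _2)\) times the series coefficients of \(E^{\lambda _3}_{\lambda _1,\lambda _2}\), with the parameters of (4.4) read off from Remark 9 as \(\lambda _1=b/a\), \(\lambda _2=\nu /a+\mu \), \(\lambda _3=1+(\nu -a)/b\); the numerical parameter values are arbitrary choices satisfying (4.5):

```python
import math

g = math.gamma
a, b, mu, nu = 1.3, 0.9, 0.7, 0.5                    # arbitrary choice satisfying (4.5)
l1, l2, l3 = b / a, nu / a + mu, 1.0 + (nu - a) / b  # parameters of (4.4), cf. Remark 9

def poch(x, n):
    # Pochhammer symbol (x)_n
    r = 1.0
    for k in range(n):
        r *= x + k
    return r

# iterate the ratio c_n / c_{n-1} obtained from the Marichev-Saigo-Maeda formula
cs = [1.0]
for m in range(1, 8):
    ratio = (g(((m - 1) * b + nu) / a + mu) / g((m * b + nu) / a + mu)) * (
        ((m - 1) + 1 + (nu - a) / b) / m
    )
    cs.append(cs[-1] * ratio)

# Gamma(l2) times the series coefficients of the three-parameter Mittag-Leffler function
mls = [g(l2) * poch(l3, n) / (g(l1 * n + l2) * math.factorial(n)) for n in range(8)]

assert all(abs(c - m_) < 1e-10 * abs(m_) for c, m_ in zip(cs, mls))
```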

Remark 8

Note that sufficient conditions for the complete monotonicity of the function \(z\mapsto \varGamma (\lambda _2) E_{\lambda _1,\lambda _2}^{\lambda _3}(-z)\) are given (see, e.g., [18] or the Appendix in [2]) by

$$\begin{aligned} 0<\lambda _1\leqslant 1,\quad 0<\lambda _3\leqslant \frac{\lambda _2}{\lambda _1}. \end{aligned}$$

This translates into the following additional assumptions on the parameters a, b, \(\mu \), \(\nu \):

$$\begin{aligned} b\leqslant a,\quad \nu >a-b,\quad \mu \geqslant \frac{b}{a}-1. \end{aligned}$$

Hence the following statement is a direct consequence of Theorem 1, Theorem 4, Remark 1 and Remark 8.

Corollary 2

Let \(b>0\), \(a\geqslant b\), \(\mu \geqslant \frac{b}{a}-1\), \(\nu >\max \left\{ a-b,-a\mu \right\} \). Let the kernel k be given by (4.2), \(g\in {\mathcal {G}}\) from Remark 1 and the corresponding kernel \(\kappa _g\) be given by (2.6). Let Assumption 1 hold. Let A denote a non-negative random variable, whose distribution has Laplace transform \(z\mapsto \varGamma (\lambda _2) E_{\lambda _1,\lambda _2}^{\lambda _3}(-z)\) with parameters \(\lambda _1\), \(\lambda _2\), \(\lambda _3\) as in (4.4). Let \((Y_t)_{t\geqslant 0}\) be an \({{\mathbb {R}}}^d\)-valued Lévy process with generator \((L,\text {Dom}(L))\) which is independent of A. Then

$$\begin{aligned} v(\tau ,x):={\mathbb {E}}\left[ u_0\left( x+Y_{Ag^b(\tau )} \right) \right] \end{aligned}$$
(4.6)

solves the corresponding generalized time-fractional evolution equation (2.5).

Example 3

(i) Let us consider the case \(\lambda _3=1\), i.e. \(\nu =a\). Then \(E^{\lambda _3}_{\lambda _1,\lambda _2}\) reduces to the two-parameter Mittag-Leffler function \(E_{\lambda _1,\lambda _2}\) and we have

$$\begin{aligned} \varPhi (t,z)=\varGamma (\mu +1)E_{\frac{b}{a},\mu +1}(zt^b). \end{aligned}$$

Further,

$$\begin{aligned} F_3(0,b/a,1,\mu , b/a, 1-(s/t)^a,1-(t/s)^a)=\sum _{n=0}^\infty \frac{(\mu )_n}{n!}\left( 1-(t/s)^a \right) ^n =(s/t)^{\mu a}. \end{aligned}$$
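Since \((0)_m=0\) for \(m\geqslant 1\), the double series of \(F_3\) collapses to the binomial series above, which sums to \((1-y)^{-\mu }\) for \(|y|<1\); this is easy to confirm numerically for a ratio \(s/t\) close enough to 1 (the parameter values below are arbitrary):

```python
import math

a, mu = 1.3, 0.7   # arbitrary parameter choices
r = 0.9            # r = s/t; need |1 - (t/s)**a| < 1 for convergence

y = 1.0 - (1.0 / r)**a

# partial sum of the binomial series sum_n (mu)_n / n! * y**n
total, term = 0.0, 1.0
for n in range(200):
    total += term
    term *= (mu + n) / (n + 1) * y   # (mu)_{n+1}/(mu)_n = mu + n

# closed form: (1 - y)**(-mu) = (s/t)**(mu * a)
assert abs(total - r**(mu * a)) < 1e-12
```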

And the corresponding kernel k simplifies to

$$\begin{aligned} k(t,s)= \frac{a}{\varGamma (b/a)}(t^a-s^a)^{b/a-1}t^{-a\mu }s^{a(\mu +1)-1}. \end{aligned}$$

(ii) Let us consider the case \(\lambda _2=\lambda _3=1\), i.e. \(\nu =a\) and \(\mu =0\). Then \(E^{\lambda _3}_{\lambda _1,\lambda _2}\) reduces to the classical Mittag-Leffler function \(E_{\lambda _1}\) and we have

$$\begin{aligned} \varPhi (t,z)=E_{\frac{b}{a}}(zt^b). \end{aligned}$$

The corresponding kernel k simplifies to

$$\begin{aligned} k(t,s)= \frac{a}{\varGamma (b/a)}(t^a-s^a)^{b/a-1}s^{a-1}. \end{aligned}$$

Let now \(\beta \in (0,1]\), \(\alpha \in (0,2)\). Choosing \(b:=\alpha \) and \(a:=\frac{\alpha }{\beta }\) we obtain the kernel of the governing equation (1.4) of the GGBM and \(\varPhi (t,z)=E_\beta (z t^\alpha )\). If additionally \(\alpha =\beta \), we get \(k(t,s)=\frac{1}{\varGamma (\beta )}(t-s)^{\beta -1}\) and \(\varPhi (t,z)=E_\beta (z t^\beta )\), cf. Example 1.

Remark 9

It follows immediately from Theorem 1, Theorem 4 and Corollary 1 that the three-parameter Mittag-Leffler function \(E^{1+\frac{\nu -a}{b}}_{\frac{b}{a},\,\frac{\nu }{a}+\mu }\) with \(b>0\), \(a>0\), \(\mu >-1\), and \(\nu >\max \{-b,-a\mu \}\) satisfies the following relation (which is nothing but the Volterra equation (3.8) for \({\hat{\varPhi }}=\varGamma \left( \frac{\nu }{a}+\mu \right) E^{1+\frac{\nu -a}{b}}_{\frac{b}{a},\,\frac{\nu }{a}+\mu }\)):

$$\begin{aligned}&\varGamma \left( \frac{\nu }{a}+\mu \right) E^{1+\frac{\nu -a}{b}}_{\frac{b}{a},\,\frac{\nu }{a}+\mu }\left( t^bz \right) =1+\frac{az\,\varGamma \left( \frac{\nu }{a}+\mu \right) }{\varGamma \left( \frac{b}{a}\right) }\int \limits _0^t (t^a-s^a)^{\frac{b}{a}-1}t^{a-\nu }s^{\nu -1}\\&\quad \times F_3\left( \frac{\nu }{a}-1,\frac{b}{a},1,\mu , \frac{b}{a}, 1-\left( \frac{s}{t}\right) ^a,1-\left( \frac{t}{s}\right) ^a\right) E^{1+\frac{\nu -a}{b}}_{\frac{b}{a},\,\frac{\nu }{a}+\mu }\left( s^bz \right) \,ds,\,\, t>0,\, z\in {{\mathbb {C}}}. \end{aligned}$$

The special case of this relation for the classical Mittag-Leffler function is well known and can be found, e.g., in Lemma 3.24 of [17].
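That classical special case (\(a=\nu =1\), \(\mu =0\), \(b=\beta \), so that \(F_3\equiv 1\)) is the identity \(E_\beta (t^\beta z)=1+\frac{z}{\varGamma (\beta )}\int _0^t (t-s)^{\beta -1}E_\beta (s^\beta z)\,ds\), which can be verified numerically; the sketch below uses the substitution \(u=(t-s)^\beta \) to remove the endpoint singularity of the integrand:

```python
import math

beta, z, t = 0.5, -0.8, 1.0   # arbitrary test values with |z * t**beta| < 1

# classical Mittag-Leffler function E_beta via a truncated power series
coeff = [1.0 / math.gamma(beta * n + 1.0) for n in range(60)]
def E(x):
    return sum(c * x**n for n, c in enumerate(coeff))

# integral_0^t (t-s)**(beta-1) * E(s**beta * z) ds, with the substitution
# u = (t-s)**beta: the factor (t-s)**(beta-1) ds becomes du / beta
N = 20000
h = t**beta / N
integral = (h / beta) * sum(
    E((t - ((i + 0.5) * h)**(1.0 / beta))**beta * z) for i in range(N)
)

lhs = E(t**beta * z)
rhs = 1.0 + (z / math.gamma(beta)) * integral
assert abs(lhs - rhs) < 1e-5
```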

Remark 10

(i) With the choice of parameters \(\lambda _1\), \(\lambda _2\), \(\lambda _3\) as in Corollary 2 (via formula (4.4)) and assuming additionally that \(\lambda _1<1\), one can show that the random variable \(A(t):=t^bA\), \(t>0\), with A from Corollary 2 has a PDF \(f_{A(t)}\). This PDF can be represented in terms of the Fox H-function (see, e.g., [23,24,25]):

$$\begin{aligned} f_{A(t)}(a)=t^{-b}\varGamma (\lambda _2) H^{1,0}_{1,1}\left[ at^{-b}\,\left| \begin{array}{c} (\lambda _2-\lambda _1, \lambda _1)\\ (\lambda _3-1,1) \end{array} \right. \right] \mathbbm {1}_{\{a>0\}}. \end{aligned}$$
(4.7)

Indeed (cf. Sec. 10.6 and formulas (5), (10), (44) in [25]), applying the Laplace transform to the function \(f_{A(t)}\), representing the exponential kernel as an H-function, i.e. \(\exp (-z)=H^{1,0}_{0,1}\left[ z\,\left| \begin{array}{c} --\\ (0,1) \end{array} \right. \right] \), and using the general formula for the integral of a product of two Fox H-functions, one obtains \(\varGamma (\lambda _2)H^{1,1}_{1,2}\left[ zt^b\,\left| \begin{array}{c} (1-\lambda _3,1)\\ (0,1),(1-\lambda _2,\lambda _1) \end{array} \right. \right] =\varGamma (\lambda _2)E^{\lambda _3}_{\lambda _1, \lambda _2}(-zt^b)\). In the special case \(\lambda _2=\lambda _3=1\), the Fox H-function in formula (4.7) reduces to the Mainardi-Wright function \({\mathcal {M}}_{\lambda _1}\).

(ii) In a similar way, one may find further examples of kernels k satisfying Assumption 2 by using multiple Erdélyi-Kober integrals of Kiryakova (with multiplicity \(m>3\)) instead of Marichev-Saigo-Maeda operators. The corresponding \(\varPhi \) is conjectured to be given in terms of Fox H-functions.

5 Stochastic solutions with stationary increments

Some of the stochastic solutions that we derived (e.g., the ones in Theorem 2) were provided by randomly slowed-down / speeded-up Lévy processes \((Y_{At^\beta })_{t\geqslant 0}\), where the positive random variable A is independent of the Lévy process Y. These processes may lack statistical properties which are appealing from a modeling point of view, such as stationarity of the increments (which only holds for \(\beta =1\)) or self-similarity. Extending results of [40] beyond the Gaussian case by the techniques explained in Example 1 above, we derive in this section stochastic solutions in terms of linear fractional stable motion. For the sake of exposition, we restrict ourselves to one space dimension.

We first need to fix some notation. For the index of stability \(\delta \in (0,2]\) and for the skewness parameter \(\rho \in [-1,1]\) (with the restriction to the ‘symmetric case’ \(\rho =0\) for \(\delta =1\)), we consider the symbol \(-\psi _{\delta ,\rho }\), where

$$\begin{aligned} \psi _{\delta ,\rho }(p)=|p|^\delta (1-i\rho \text {sign}(p)\tan (\pi \delta /2)),\quad p\in {\mathbb {R}}, \end{aligned}$$

covering the fractional Laplacian in space. The corresponding stable random measure \(M_{\delta ,\rho }\) is a \(\sigma \)-additive mapping from the Borel field \({\mathcal {B}}\) on \({\mathbb {R}}\) to the space of real-valued random variables which is independently scattered in the sense that

$$\begin{aligned} (M_{\delta ,\rho }(A_1),\ldots , M_{\delta ,\rho }(A_n)) \end{aligned}$$

are independent, whenever \(A_1,\ldots , A_n\) are pairwise disjoint, and such that \(M_{\delta ,\rho }(A)\) follows a stable law; precisely,

$$\begin{aligned} {\mathbb {E}}[e^{ipM_{\delta ,\rho }(A)}]=e^{-\text {Leb}(A)\psi _{\delta ,\rho }(p)},\qquad p\in {\mathbb {R}}, \quad A\in {\mathcal {B}}. \end{aligned}$$

Here \(\text {Leb}\) denotes the Lebesgue measure on the real line. More details on stable random measures and integration with respect to them can be found in Chapter 3 of [44]. The stable Lévy motion corresponding to the characteristic exponent \(\psi _{\delta ,\rho }\) can be realized as

$$\begin{aligned} Y^{({\delta ,\rho })}_t=M_{\delta ,\rho }([0,t]),\quad t\geqslant 0. \end{aligned}$$

Note that, in the Gaussian case \(\delta =2\), the normalization is chosen such that \( (\frac{1}{\sqrt{2}} Y^{({2,\rho })}_t)_{t\geqslant 0} \) is a standard Brownian motion for any choice of \(\rho \in [-1,1]\).

We may now consider a linear fractional Lévy motion of the form

$$\begin{aligned} Y^{(\delta ,\rho ,H)}_t=\frac{1}{K_{\delta ,H}} \int _{\mathbb {R}} \left( (t-x)_+^{H-1/\delta } -(-x)_+^{H-1/\delta }\right) \;M_{\delta ,\rho }(dx),\quad t\geqslant 0 \end{aligned}$$

for \(H\in (0,1)\setminus \{1/\delta \}\), which contains the Mandelbrot-Van Ness representation of fractional Brownian motion as a special case for \(\delta =2\) (up to the factor \(1/\sqrt{2}\), as explained above). Here, the normalizing constant is

$$\begin{aligned} K_{\delta ,H}=\left( \int _0^\infty \left| (1+x)^{H-1/\delta } -x^{H-1/\delta }\right| ^{\delta } dx +\frac{1}{\delta H}\right) ^{1/\delta }. \end{aligned}$$

By Proposition 7.4.2 in [44], linear fractional stable motion \((Y^{(\delta ,\rho ,H)}_t)_{t\geqslant 0}\) has stationary increments and is H-self-similar. Moreover, the characteristic function of its one-dimensional marginals is given by

$$\begin{aligned} \varphi _{Y^{(\delta ,\rho ,H)}_t}(p)={\mathbb {E}}\left[ e^{ipY^{(\delta ,\rho ,H)}_t} \right] =e^{-t^{\delta H}\psi _{\delta ,\rho _0}(p)},\qquad t\geqslant 0,\quad p\in {\mathbb {R}}, \end{aligned}$$
(5.1)

for

$$\begin{aligned} \rho _0=\left\{ \begin{array}{ll} \rho ,&{} H>1/\delta \\ \rho \; \frac{\frac{1}{H\delta } - \int _0^\infty \left( x^{H-1/\delta }-(1+x)^{H-1/\delta } \right) ^{\delta } dx}{\frac{1}{H\delta } + \int _0^\infty \left( x^{H-1/\delta }-(1+x)^{H-1/\delta } \right) ^{\delta } dx}, &{} H<1/\delta \end{array} \right. , \end{aligned}$$
(5.2)

which can be obtained from Proposition 3.4.1 in [44] by elementary computations.

Let us denote by \(L_{\delta ,\rho }\) the pseudo-differential operator associated to the symbol \(-\psi _{\delta ,\rho }\) via (2.1). In the case \(\rho =0\) of the symmetric fractional Laplacian, we also write \(-\left( -\varDelta \right) ^{\delta /2}:=L_{\delta ,0}\).

Theorem 5

Suppose that k is homogeneous of degree \(\beta -1\) for some \(\beta \in (0,2)\) and \(k(1,\cdot )\in L^{1+\varepsilon }((0,1))\) for some \(\varepsilon >0\), and that \(x\mapsto {\hat{\varPhi }}(-x)\) is completely monotone on \((0,\infty )\), where \({\hat{\varPhi }}\) is defined in Theorem 2. Then:

(i) Let \(\gamma \in (\beta ,2]\) and \(\delta \in [\gamma ,2]\setminus \{\gamma /\beta \}\). If \(A_{\gamma /\delta }\) is a nonnegative random variable with Laplace transform \({\hat{\varPhi }}(-(\cdot )^{\gamma /\delta })\) and \(\big (Y^{(\delta ,0,\beta /\gamma )}_t\big )_{t\geqslant 0}\) is a symmetric linear fractional stable motion independent of \(A_{\gamma /\delta }\), then \(\left( A_{\gamma /\delta }^{1/\delta } Y^{(\delta ,0,\beta /\gamma )}_t\right) _{t\geqslant 0}\) provides a stochastic solution to

$$\begin{aligned} u(t,x)&=u_0(x)-\int _0^t k(t,s) \left( -\varDelta \right) ^{\gamma /2} u(s,x)ds, \qquad t>0,\quad x\in {{\mathbb {R}}}, \\ \lim _{t\searrow 0} u(t,x)&=u_0(x),\qquad x\in {{\mathbb {R}}}. \end{aligned}$$

(ii) Let \(\beta \ne 1\), \(\delta \in (\beta ,2]\setminus \{1\}\), and \(\rho \in [-1,1]\). If A is a nonnegative random variable with Laplace transform \({\hat{\varPhi }}(-(\cdot ))\) and \(\big (Y^{(\delta ,\rho ,\beta /\delta )}_t\big )_{t\geqslant 0}\) is a linear fractional stable motion independent of A, then \(\left( A^{1/\delta } Y^{(\delta ,\rho ,\beta /\delta )}_t\right) _{t\geqslant 0}\) provides a stochastic solution to

$$\begin{aligned} u(t,x)&=u_0(x)+\int _0^t k(t,s) L_{\delta ,\rho _0}u(s,x)ds,\qquad t>0,\quad x\in {{\mathbb {R}}}, \\ \lim _{t\searrow 0} u(t,x)&=u_0(x),\qquad x\in {{\mathbb {R}}}, \end{aligned}$$

for

$$\begin{aligned} \rho _0=\left\{ \begin{array}{ll} {\rho ,} &{} \beta \in (1,2),\\ \rho \; \frac{\frac{1}{\beta } - \int _0^\infty \left( x^{(\beta -1)/\delta }-(1+x)^{(\beta -1)/\delta } \right) ^{\delta } dx}{\frac{1}{\beta } + \int _0^\infty \left( x^{(\beta -1)/\delta }-(1+x)^{(\beta -1)/\delta } \right) ^{\delta } dx}, &{} \beta \in (0,1). \end{array}\right. \end{aligned}$$

Proof

(i) Let \(H=\beta /\gamma \). In view of (5.1)–(5.2) and the independence assumption, we obtain

$$\begin{aligned} {\mathbb {E}}\left[ e^{ip A_{\gamma /\delta }^{1/\delta } Y^{(\delta ,0,H)}_t}\right]= & {} {\mathbb {E}}\left[ e^{-t^{\delta H}|p A_{\gamma /\delta }^{1/\delta } |^\delta }\right] = {\mathbb {E}}\left[ e^{-t^{\delta H}A_{\gamma /\delta }|p |^\delta }\right] ={\hat{\varPhi }}(-(|p |^\delta t^{\delta H})^{\gamma /\delta })\\= & {} {\hat{\varPhi }}(-|p|^\gamma t^\beta ) =\varPhi (t,-\psi _{\gamma ,0}(p)),\qquad p\in {\mathbb {R}},\quad t\geqslant 0, \end{aligned}$$

applying Theorem 2 for the last identity. The latter theorem also implies that all assumptions of Theorem 1 are satisfied. Thus, Theorem 1, (i), concludes the proof.

(ii) The proof is analogous to the one of part (i), making use of the computation

$$\begin{aligned} {\mathbb {E}}\left[ e^{ip A^{1/\delta } Y^{(\delta ,\rho ,H)}_t}\right]= & {} {\mathbb {E}}\left[ e^{-t^{\delta H} \psi _{\delta ,\rho _0}(p A^{1/\delta })} \right] ={\mathbb {E}}\left[ e^{-t^{\beta }A \psi _{\delta ,\rho _0}(p)} \right] ={\hat{\varPhi }}(-t^\beta \psi _{\delta ,\rho _0}(p)) \\= & {} \varPhi (t, -\psi _{\delta ,\rho _0}(p)),\qquad p\in {\mathbb {R}},\quad t\geqslant 0. \end{aligned}$$

\(\square \)
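The middle identity in the computation for part (ii) relies on the positive homogeneity \(\psi _{\delta ,\rho }(cp)=c^\delta \psi _{\delta ,\rho }(p)\) for \(c>0\) (scaling by \(c=A^{1/\delta }>0\) leaves the sign factor unchanged); a minimal numerical check with arbitrary parameter values:

```python
import math

def psi(delta, rho, p):
    # characteristic exponent |p|**delta * (1 - i*rho*sign(p)*tan(pi*delta/2))
    sign = (p > 0) - (p < 0)
    return abs(p)**delta * (1 - 1j * rho * sign * math.tan(math.pi * delta / 2))

delta, rho = 1.5, 0.4            # arbitrary choices, delta != 1
for p in (-2.0, -0.3, 0.7, 3.0):
    for c in (0.5, 2.0, 7.3):    # c plays the role of A**(1/delta) > 0
        assert abs(psi(delta, rho, c * p) - c**delta * psi(delta, rho, p)) < 1e-9
```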

Remark 11

(i) If \(\delta =\gamma /\beta \) in the setting of Theorem 5, (i), then a stochastic solution is provided by the process \(({\mathcal {A}}^{1/\delta }_\beta Y^{(\delta ,0)}_t)_{t\geqslant 0}\), where the symmetric stable Lévy motion \( Y^{(\delta ,0)}\) (with characteristic exponent \(\psi _{\delta ,0}\)) is independent of the positive random variable \({\mathcal {A}}_\beta \) with Laplace transform \({\hat{\varPhi }}(-(\cdot )^{\beta })\). The proof remains valid without any changes.

(ii) The name ‘linear fractional Lévy motion’ usually refers to a larger family of processes which have stationary increments, are H-self-similar and have stable laws, see Definition 7.4.1 in [44]. Here we chose the parametrization corresponding to the Mandelbrot-Van Ness representation of fractional Brownian motion. We note that other parameter choices work equally well but, except in the Gaussian case, may lead to different processes which merely share the same one-dimensional marginal distributions.

We conclude the paper with a summarizing example, combining some of the results of Sections 4 and 5.

Example 4

Suppose k is a Marichev-Saigo-Maeda kernel

$$\begin{aligned}&k(t,s)\\&\quad := \frac{a}{\varGamma (b/a)}(t^a-s^a)^{\frac{b}{a}-1}t^{a-\nu }s^{\nu -1} F_3\left( \frac{\nu }{a}-1,\frac{b}{a},1,\mu , \frac{b}{a}, 1-\left( \frac{s}{t}\right) ^a,1-\left( \frac{t}{s}\right) ^a\right) \end{aligned}$$

for parameters \(b\in (0,2)\), \(a\geqslant b\), \(\mu \geqslant \frac{b}{a}-1\), \(\nu >\max \left\{ a-b,-a\mu \right\} \). For \(\gamma \in (b,2]\) and \(\delta \in [\gamma ,2]\setminus \{\gamma /b\}\) denote by \(A_{\gamma /\delta ,a,b,\mu ,\nu }\) a random variable with Laplace transform given in terms of the three-parameter Mittag-Leffler function by

$$\begin{aligned} {[}0,\infty )\rightarrow {\mathbb {R}},\quad x\mapsto \varGamma \left( \frac{\nu }{a}+\mu \right) E_{\frac{b}{a},\frac{\nu }{a}+\mu }^{1+\frac{\nu -a}{b}}(-x^{\gamma /\delta }), \end{aligned}$$

which is a completely monotone function by Remark 8. Let \((Y^{(\delta ,0,b/\gamma )}_t)_{t\geqslant 0}\) be a symmetric linear fractional stable motion independent of \(A_{\gamma /\delta ,a,b,\mu ,\nu }\). Then, for every initial condition \(u_0\in S({{\mathbb {R}}})\), the function

$$\begin{aligned} u(t,x)={\mathbb {E}}\left[ u_0\left( x+A_{\gamma /\delta ,a,b,\mu ,\nu }^{1/\delta } Y^{(\delta ,0,b/\gamma )}_t\right) \right] , \qquad t \geqslant 0, \quad x \in {{\mathbb {R}}}\end{aligned}$$

is a solution to

$$\begin{aligned} u(t,x)&=u_0(x)-\int _0^t k(t,s)\left( - \varDelta \right) ^{\gamma /2} u(s,x)ds, \qquad t>0,\quad x\in {{\mathbb {R}}}, \\ \lim _{t\searrow 0} u(t,x)&=u_0(x),\quad x\in {{\mathbb {R}}}. \end{aligned}$$

As \(\delta \) is not a parameter of the equation, we thus obtain a whole family of stochastic representations in terms of \(b/\gamma \)-self-similar processes with stationary increments, parametrized by the index of stability \(\delta \in [\gamma ,2]\). Here, \(\gamma \) is the order of the space derivative and \(b-1\) is the degree of homogeneity of the Marichev-Saigo-Maeda kernel. This example extends the results of [40] beyond the Gaussian case (\(\delta =2\)) and the time-fractional case of order \(b\in (0,1]\) (\(a=\nu =1\), \(\mu =0\)), cf. Example 3, (ii).