1 Introduction

Measuring exit times, or functionals of exit times, with existing numerical methods is notoriously difficult, even when updates of the process are generated with high accuracy. The resulting errors are generally large because the exit event may occur between the computational nodes, as discussed in [16, 22, 23]. Our goal is to estimate the expectation of \(f(S_{\tau _H},\tau _H)\) for a given function f representing the discounted payoff of a barrier option in finance. The process \(S_t,t\ge 0\) is a one-dimensional diffusion process satisfying the stochastic differential equation

$$\begin{aligned} \mathrm{{d}}S_t=\mu (S_t)\mathrm{{d}}t+\sigma (S_t)\mathrm{{d}}W_t,\quad S_0=s\in \mathbb {R}, \end{aligned}$$
(1)

where \(W_t\) is a standard one-dimensional Brownian motion defined on a filtered probability space \((\varOmega ,\mathfrak {F},\mathfrak {F_t},\mathbb {P})\). The functions \(\mu ,\sigma :\mathbb {R}\rightarrow \mathbb {R}\), with \(\sigma ^{2}(s)>0\), represent the drift and diffusion parameters of the diffusion process \(S_t\), respectively, and are assumed to satisfy regularity conditions, which are sufficient to guarantee the existence and uniqueness of the solution to Eq. (1) (see, e.g. [26, Theorem 4.5.3] for sufficient conditions). Additionally, \(\tau _{H}\) is the first time for the diffusion process \(S_t\) to hit or cross a barrier H and can be written as

$$\begin{aligned} \tau _H=\left\{ \begin{array}{ll} \inf \{t\ge 0: S_t\le H|S_0=s\} \quad \text{ if } \quad s>H,\\ \inf \{t\ge 0: S_t\ge H|S_0=s\} \quad \text{ if } \quad s<H. \end{array} \right. \end{aligned}$$
(2)

A common application of this computational problem in financial mathematics is the pricing of barrier options, whose payoff at maturity depends on whether the underlying asset price crosses a predefined barrier level during the lifetime of the option. Since 1973, barrier options have become extremely popular and are traded in the over-the-counter market, where trades take place directly between two parties without the supervision of an exchange [29]. There are two main types of such options: knock-in (up-and-in and down-and-in) and knock-out (up-and-out and down-and-out), each available with either a put or a call feature. Much of the literature on pricing barrier options is based on the Black–Scholes analysis, where the underlying asset price follows a geometric Brownian motion with constant volatility [17, 20]. However, this assumption of constant volatility contradicts the accumulated empirical observations in the market for barrier options. Empirically, the implied volatility observed from market data for such options is not constant but varies with the strike price, a variation known as the volatility smile or skew phenomenon observed in equity markets [11]. To capture this effect and provide a better fit to empirical observations, the underlying asset price is assumed to follow an alternative model, the constant elasticity of variance (CEV) diffusion model, defined as

$$\begin{aligned} \mathrm{{d}}S_t=(r-q)S_t\mathrm{{d}}t+\gamma S_t^{\beta +1}\mathrm{{d}}W_t,\quad t\ge 0, \quad S_0=s>0, \end{aligned}$$
(3)

where \(\gamma \) is a positive constant and \(\beta \le 0\) is a parameter representing the elasticity of the local volatility. The parameters r and q represent the continuously compounded annual risk-free interest rate and the dividend yield, respectively. In addition, \(W_t\) is a standard one-dimensional Brownian motion with respect to the risk-neutral probability measure Q. For \(\beta <0\), the local volatility \(\sigma (S_t)=\gamma S_t^{\beta }\) is a monotonically decreasing function of \(S_t\), and the model allows for bankruptcy. Moreover, many empirical studies have shown that the correlation between the asset price and its return volatility is negative, which motivates the choice \(\beta <0\). The pricing of derivatives under the CEV model was first proposed by Cox and Ross [8], who obtained closed-form European option prices for \(-1\le \beta \le 0\). Schroder [33] relaxed the range of \(\beta \) to \(\beta \le 0\) and established a connection between these pricing formulae and the non-central chi-squared distribution. Emanuel and MacBeth [12] extended the analysis of Cox and Ross to the case \(\beta >0\) by constructing the relevant transition density function and examining the martingale property of the price process. Boyle and Tian [7] used a trinomial lattice to approximate the value of barrier and lookback options under the CEV model. Davydov and Linetsky [9] obtained closed-form solutions for Laplace transforms of CEV barrier option prices with respect to the time remaining until expiration and inverted them numerically via the Abate–Whitt Laplace inversion algorithm. Davydov and Linetsky [10] also inverted these Laplace transforms analytically, expressing barrier option prices in terms of spectral expansions associated with the infinitesimal generator of the CEV diffusion.

However, these closed-form solutions for CEV barrier options are available only in a few particular settings, and many other cases, such as options on multiple assets, admit no analytical expressions. Therefore, much work has focused on accurate numerical and Monte Carlo (MC) simulation procedures for such situations. The simplest method for approximating the mean of a function of the first exit time, such as the expected payoff of a barrier option, is to employ a time-discretization method to generate an approximate path and to step until the path crosses the barrier. The basic time-discretization algorithm for approximating the process \(S_t\) defined by (1) on [0, T] is the Euler–Maruyama scheme with a fixed time step \(\Delta t=\frac{T}{N}\), where N is the number of time steps. The result is an approximation of the function of the exit time, to which an MC method is then applied to approximate the mean of the output [15]. This procedure has been applied, e.g. in [4], to two specific models. However, in simulating barrier options this method becomes inefficient and produces two main errors: the exit time (or hitting time) error inherent in the timestepping method, and the MC statistical error. The hitting time error is of weak order \(\frac{1}{\sqrt{N}}\) with N time steps, converging slowly compared with the \(\frac{1}{N}\) rate of other Euler-type approximations [15]. This slow convergence in simulating continuously monitored barrier options is due to the fact that the option may be knocked out between the discrete computational nodes, as discussed in [16]. To recover the first-order convergence of the Euler method, one can apply a simple hitting test after each time step, using the distribution of the Brownian bridge pinned between the discrete computational nodes to check whether the barrier was hit during the time step; this approach is known as the Brownian bridge technique [15, 28].
For more analysis on using this technique in barrier options, see [2, 14, 16, 30] and the references quoted therein.
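To make the procedure concrete, a minimal Python sketch of the Euler–Maruyama scheme with a Brownian bridge hitting test is given below. The function name and interface are ours; an upper barrier H with \(S_0<H\) is assumed, and the one-step crossing probability of the pinned Brownian bridge, \(\exp (-2(H-S_n)(H-S_{n+1})/(\sigma ^2(S_n)\Delta t))\), is used as in [15, 28].

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_barrier_hit(mu, sigma, s0, H, T, N):
    """Euler-Maruyama path on [0, T] with a Brownian-bridge hitting test.

    mu, sigma: drift and diffusion coefficient functions of the state.
    Returns (hit, tau): whether the upper barrier H (s0 < H) was crossed,
    and the approximate hitting time (T if no crossing was detected).
    """
    dt = T / N
    s = s0
    for n in range(N):
        s_next = s + mu(s) * dt + sigma(s) * np.sqrt(dt) * rng.standard_normal()
        if s_next >= H:                      # crossed at a grid point
            return True, (n + 1) * dt
        # Probability that the bridge pinned at (s, s_next) exceeded H in between
        p_cross = np.exp(-2.0 * (H - s) * (H - s_next) / (sigma(s) ** 2 * dt))
        if rng.uniform() < p_cross:          # barrier hit inside the step
            return True, (n + 1) * dt
        s = s_next
    return False, T
```

Without the bridge test (the two `p_cross` lines), the scheme reverts to the plain grid-point hitting rule and loses the first-order convergence discussed above.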

The \(95\%\) confidence interval for the MC estimate of the barrier option price is defined as

$$\begin{aligned} \tilde{V}\pm 1.96\left( \sqrt{\frac{{\text {Var}}}{M}}\right) , \end{aligned}$$

where M is the number of simulations, \(\tilde{V}\) is the MC estimate of the option price and Var denotes the computed variance. The quantity \(\sqrt{\frac{{\text {Var}}}{M}}\) represents the statistical error, which is of order \(O(\frac{1}{\sqrt{M}})\). To reduce this error efficiently and to speed up the MC simulation, one can concentrate on reducing the variance rather than adding more samples; such procedures are known as variance reduction techniques [14, 18]. One of the most successful and broadly applicable variance reduction techniques is the control variate (CV) method, which exploits information about the statistical error in estimates of known quantities to reduce the error in the estimate of an unknown quantity [14]. The earliest application of the CV technique to option pricing was given in 1977 by Boyle [5], where the Black–Scholes price of a standard call is used as a control in estimating European call option values on a dividend-paying stock. In 1997, Boyle et al. [6] applied the CV idea to barrier options, using the vanilla European option as a control in estimating a down-and-out call option with discrete barriers. The CV technique can lead to very substantial reductions in statistical error if a suitable control is chosen; the selected CV should be highly correlated with the original output. For example, a suitable CV for pricing an up-and-out put barrier option is its vanilla put counterpart, as we will see below in our numerical experiments. The CV technique can also be used for pricing options without closed-form formulas when analytical expressions exist for similar options [18]. For more details on the use of CVs in option pricing, see [5, 6, 14, 18] as a brief sampling.
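In code, the MC estimate and its \(95\%\) confidence half-width can be computed directly from the simulated payoffs (a minimal sketch; the function name is ours):

```python
import numpy as np

def mc_confidence_interval(samples):
    """95% confidence interval for the MC estimate of a mean.

    Returns (estimate, half_width), where half_width = 1.96 * sqrt(Var / M)
    and Var is the sample variance of the payoffs.
    """
    samples = np.asarray(samples, dtype=float)
    M = samples.size
    estimate = samples.mean()
    half_width = 1.96 * np.sqrt(samples.var(ddof=1) / M)
    return estimate, half_width
```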

Jansons and Lythe [22, 23] presented an efficient approach that is analogous to the usual Euler–Maruyama technique but with exponentially distributed random time steps; they referred to it as the exponential timestepping Euler algorithm and subsequently employed it to simulate the first hitting time of one-dimensional diffusion models. In this method, instead of a fixed time step, \(\Delta t\), we use i.i.d. exponentially distributed random variables, \(\delta t\), as the time steps, with \(\mathbb {P}(\delta t>t)=\exp (-\lambda t)\), \(t\ge 0\), where the rate \(\lambda > 0\) acts as the discretization parameter. The random time step \(\delta t\) thus has expectation \(E[\delta t]=\frac{1}{\lambda }\), corresponding to the fixed time step \(\Delta t\). Unlike the Gaussian distribution, the exponential distribution has the memoryless property and is highly peaked near the origin [31]. Provided that the process hits a given barrier during the time step, the diffusion at the end of the exponential time step \(\delta t\) is distributed as if the time step had started at the barrier [22, 23]. This introduces a certain degree of uncertainty because the exact value of the time step \(\delta t\) is not explicitly known; it has, however, mean value \(\frac{1}{\lambda }=\Delta t\), so the total time elapsed after N time steps is a random variable with mean \(\frac{N}{\lambda }\) [23]. For the purposes of this paper, the exponential timestepping Euler method will be referred to as the standard Exp algorithm. As with the Brownian bridge approach, one can also take into account the probability that the barrier was hit during the time step by applying an effective boundary-hitting test after each step.
Numerical experiments for the double-well potential problem in physics [23], for neural diffusion models [4], and for financial derivatives with barrier features [3] reveal that combining a boundary test with exponential timestepping achieves first-order convergence, \(O(\Delta t)\), in approximating the hitting time. For brevity, we call the exponential timestepping Euler algorithm with boundary test the original Exp algorithm. Recently, Lejay et al. [27] extended the Jansons and Lythe method to deal with discontinuous or locally non-constant coefficients. Our paper contributes by incorporating the CV technique into the original Exp algorithm to reduce the high MC statistical error that arises in estimating the exit time. In addition, we adapt this algorithm to simulate functions of the exit time, such as CEV barrier option payoffs. We call this method the improved Exp algorithm for brevity.

The remainder of this paper is structured as follows. In Sect. 2, we describe in detail the improved Exp algorithm, which combines the original method of Jansons and Lythe with the CV technique to improve the efficiency of the simulation, and explain its implementation for simulating the exit time of a one-dimensional diffusion. In Sect. 3, the problem of the CEV barrier option under the risk-neutral probability measure is formulated and the necessary notation is introduced. In Sect. 4, we present numerical experiments on CEV barrier options, comparing the original Exp algorithm of Jansons and Lythe with the improved Exp algorithm in terms of efficiency and accuracy. Section 5 contains our conclusions and some ideas for future research.

2 The exponential timestepping algorithm with boundary test and control variate technique

We present a stochastic Euler algorithm with random time steps, introduced by Jansons and Lythe [22, 23], for simulating the hitting time of one-dimensional diffusion models, and we adapt it to handle functions of the hitting time, such as the payoff of a barrier option. Moreover, we combine this algorithm with the control variate technique to reduce the high MC statistical error that arises in estimating the exit time.

2.1 A review of Jansons and Lythe exponential timestepping algorithm

In this method, instead of a fixed time step, \(\Delta t\), we use i.i.d. exponentially distributed random variables, \(\delta t\), as the time steps, with \(\mathbb {P}(\delta t>t)=\exp (-\lambda t)\), \(t\ge 0\), where the rate \(\lambda > 0\) acts as the discretization parameter. The random time step \(\delta t\) thus has expectation \(E[\delta t]=\frac{1}{\lambda }\), corresponding to the fixed time step \(\Delta t\) in the usual Euler–Maruyama method. To carry out the algorithm, we need the density of the increment \(S_{t+\delta t}-S_t\) and the conditional probability of the barrier being hit during the exponential time step. Since all these quantities are independent of the starting time, we take the process to start at \(t = 0\) (i.e. \(S_0=s\)). Both quantities can be expressed in terms of fundamental solutions (nonzero, linearly independent solutions) of the linear second-order ordinary differential equation:

$$\begin{aligned} \mu (s)\psi '(s)+\frac{1}{2}\sigma ^2(s)\psi ''(s)-\lambda \psi (s)=0, \end{aligned}$$
(4)

where the functions \(\mu \) and \(\sigma ^2\) represent the drift and diffusion parameters of the diffusion process \(S_t\), respectively [23, 25, 32]. The first task is thus to calculate the density of the increment \(S_{\delta t}\):

$$\begin{aligned} g(s,y)=\frac{{\text {d}}}{{\text {d}}y}\mathbb {P}(S_{\delta t}<y|S_0=s), \end{aligned}$$
(5)

and then to write it in terms of fundamental solutions of Eq. (4). To this end, define first, for each \(\lambda >0\), \(-\infty \le l<r\le \infty \), the functions \(\psi ^{\uparrow }(s)\): \([l,r)\rightarrow (0,\infty )\) and \(\psi ^{\downarrow }(s)\): \((l,r]\rightarrow (0,\infty )\) by [32]

$$\begin{aligned} \psi ^{\uparrow }(s)\equiv \left\{ \begin{array}{c} E_s[\exp (-\lambda H_b)], \quad s\le b, \quad s\in (l,r)\\ \\ \displaystyle \frac{1}{E_b[\exp (-\lambda H_s)]},\quad s\ge b, \quad s\in (l,r) \\ \end{array}\right. \end{aligned}$$

and

$$\begin{aligned} \psi ^{\downarrow }(s)\equiv \left\{ \begin{array}{c} E_s[\exp (-\lambda H_b)], \quad s\ge b, \quad s\in (l,r)\\ \\ \displaystyle \frac{1}{E_b[\exp (-\lambda H_s)]},\quad s\le b, \quad s\in (l,r), \\ \end{array}\right. \end{aligned}$$

where \(b\in (l,r)\), \(\psi ^{\uparrow }(l)=\psi ^{\downarrow }(r)=0\) and \(E_s\) represents the expectation with respect to the conditional probability \(\mathbb {P}(.|S_0=s)\).

Lemma 2.1

The increasing function \(\psi ^{\uparrow }(s)\) and the decreasing function \(\psi ^{\downarrow }(s)\) defined above are a pair of strictly positive and linearly independent solutions to Eq. (4), and are defined up to a multiplicative constant by

$$\begin{aligned} \lim _{s\rightarrow -\infty }\psi ^\uparrow (s)=0,\quad \frac{{\text {d}}\psi ^\uparrow (s)}{{\text {d}}s}>0,\quad \text{ and }\quad \lim _{s\rightarrow \infty }\psi ^\downarrow (s)=0,\quad \frac{{\text {d}}\psi ^\downarrow (s)}{{\text {d}}s}<0. \end{aligned}$$

Proof

See [1, 25] for the proof. \(\square \)

Proposition 2.2

The density g(sy) defined by Eq. (5) can be written in terms of \(\psi ^{\uparrow }(s)\) and \(\psi ^{\downarrow }(s)\) as

$$\begin{aligned} g(s,y)=c_{\lambda }\psi ^\downarrow (\max (s,y))\psi ^\uparrow (\min (s,y))m(y), \end{aligned}$$
(6)

where the speed density m(y) is defined by

$$\begin{aligned} m(y)=2\sigma ^{-2}(y)\exp \left( \int _{0}^{y}2\mu (z)\sigma ^{-2}(z){\text {d}}z\right) \end{aligned}$$

and \(c_{\lambda }\) is a constant depending on \(\lambda \).

Proof

For the proof, see [1, Theorem 3.3.3] and also Sect. 2 and Appendix A in [23]. \(\square \)

It remains only to calculate the conditional probability of the barrier being hit during the exponential time step to carry out the underlying algorithm. To this end, we begin with the following lemma:

Lemma 2.3

$$\begin{aligned} \mathbb {P}( \tau _H<\delta t)=E_{s}[\exp (-\lambda \tau _H)]=\left\{ \begin{array}{ll} \frac{\psi ^{\uparrow }(s)}{\psi ^{\uparrow }(H)} \quad {\text {if}} \quad s<H,\\ \frac{\psi ^{\downarrow }(s)}{\psi ^{\downarrow }(H)}\quad {\text {if}} \quad s>H, \end{array} \right. \end{aligned}$$
(7)

where \(E_s\) represents the expectation with respect to the conditional probability \(\mathbb {P}(.|S_0=s)\).

Proof

For the proof, see [1, 25]. \(\square \)

Corollary 2.4

Based on Eqs. (6) and (7), the desired conditional probability can thus be written as

$$\begin{aligned} \mathbb {P}( \tau _H<\delta t|s,S_{\delta t})=\left\{ \begin{array}{lll} 1 &{} \quad \text{ if } \quad s<H<S_{\delta t} \quad {\text {or}} \quad s>H>S_{\delta t} ,\\ \frac{\psi ^\uparrow (\max (s,S_{\delta t})) \psi ^\downarrow (H)}{\psi ^\uparrow (H)\psi ^\downarrow (\max (s,S_{\delta t}))} &{} \quad {\text {if}} \quad s,S_{\delta t}<H,\\ \frac{\psi ^\downarrow (\min (s,S_{\delta t})) \psi ^\uparrow (H)}{\psi ^\downarrow (H)\psi ^\uparrow (\min (s,S_{\delta t}))} &{} \quad {\text {if}} \quad s,S_{\delta t}>H. \end{array} \right. \end{aligned}$$
(8)

Proof

See [1] for the proof. \(\square \)

When both \(\mu \) and \(\sigma \) are constant, the quantities in Eqs. (6) and (8) can be calculated exactly.

The strategy of the algorithm is thus to approximate the diffusion process \(S_t\) during each time step by a Brownian motion with constant drift, \(Y_t=\mu t+\sigma W_t\), with \(Y_0=0\), where \(\mu \) and \(\sigma \) are constants. Solving Eq. (4) yields explicit expressions for \(\psi ^\uparrow (s)\) and \(\psi ^\downarrow (s)\) for the process \(Y_t\) as [23]

$$\begin{aligned} \psi ^\uparrow (s)=\exp ((N-F)s), \quad \psi ^\downarrow (s)=\exp (-(N+F)s), \end{aligned}$$

where \(F=\frac{\mu }{\sigma ^2}\) and \(N=\sqrt{F^2+\frac{2\lambda }{\sigma ^2}}\). The density of \(Y_{\delta t}\) can thus be written as [1, 23, 25]

$$\begin{aligned} h(s)=\int _{0}^{\infty }\lambda \exp \left( -\lambda t\right) \frac{1}{\sqrt{(2\pi \sigma ^2t)}}\exp \left( \frac{-(s-t\mu )^2}{2\sigma ^2t}\right) {\text {d}}t=\frac{\lambda }{\sigma ^2}N^{-1} \left\{ \begin{array}{ll} \exp (-s(-N-F)) \quad \text{ if } \quad s<0,\\ \exp (-s(N-F)) \quad \text{ otherwise }. \end{array} \right. \end{aligned}$$
(9)
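As a quick sanity check (illustrative only, with arbitrarily chosen parameter values), one can verify numerically that the density in Eq. (9) integrates to one and has mean \(\mu /\lambda \), the drift accumulated over an average time step:

```python
import numpy as np

# Parameters of the drifted Brownian motion and the exponential step rate
# (arbitrary illustrative values).
mu, sigma, lam = 0.3, 0.7, 2.0
F = mu / sigma ** 2
N = np.sqrt(F ** 2 + 2.0 * lam / sigma ** 2)

# Density h(s) of Y_{delta t} from Eq. (9): two exponential tails glued at 0,
# with decay rate N + F on the left and N - F on the right.
s = np.linspace(-40.0, 40.0, 800_001)
h = (lam / (sigma ** 2 * N)) * np.exp(-np.abs(s) * np.where(s < 0.0, N + F, N - F))

ds = s[1] - s[0]
total = np.sum(h) * ds        # Riemann sum; the tails are negligible at +/- 40
mean = np.sum(s * h) * ds     # should be close to mu / lam
```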

Integrating the density of \(Y_{\delta t}\) defined in Eq. (9) yields, for \(s\ge 0\),

$$\begin{aligned} \mathbb {P}(Y_{\delta t}>s)=\frac{N+F}{2N}\exp (-(N-F)s), \end{aligned}$$
(10)

and by setting \(s=0\), we get

$$\begin{aligned} \mathbb {P}(Y_{\delta t}>0)=\frac{N+F}{2N}. \end{aligned}$$
(11)

Now define a two-point random variable Z taking the values \(+1\) and \(-1\) with probabilities

$$\begin{aligned} \mathbb {P}(Z=1) = \mathbb {P}(Y_{\delta t}>0)= \frac{N+F}{2N}, \end{aligned}$$

and

$$\begin{aligned} \mathbb {P}(Z=-1) = \mathbb {P}(Y_{\delta t}<0)= 1-\frac{N+F}{2N}. \end{aligned}$$

Therefore, Z can be generated using a uniformly distributed random variable \(U\sim \mathscr {U}([0,1])\) as

$$\begin{aligned} Z=\left\{ \begin{array}{ll} 1 &{} \quad \text{ if } \quad 0\le U\le \frac{N+F}{2N},\\ -1 &{} \quad \text{ if } \quad \frac{N+F}{2N}<U\le 1. \end{array}\right. \end{aligned}$$

Thus,

$$\begin{aligned} Z=\left\{ \begin{array}{ll} 1 &{} \quad \text{ if } \quad U=\frac{N+F}{2N},\\ \text{ sign }\left( \frac{N+F}{2N}-U\right) , &{} \quad \text{ otherwise }. \end{array}\right. \end{aligned}$$
(12)

Now, by Eqs. (9) and (12), \(Y_{\delta t}\) can easily be sampled using an exponentially distributed random variable as

$$\begin{aligned} Y_{\delta t}=\left\{ \begin{array}{ll} \frac{p}{N-F} &{} \quad {\text {if}} \quad Z=1,\\ \frac{-p}{N+F} &{} \quad {\text {if}} \quad Z=-1, \end{array}\right. \end{aligned}$$

where p is an exponential random variable with the distribution:

$$\begin{aligned} \mathbb {P}(p>s)=e^{-s}, \end{aligned}$$

and can be generated as

$$\begin{aligned} p=-\mathrm{{ln}} V, \end{aligned}$$

where \(V\sim \mathscr {U}([0,1])\) is independent of U. Thus,

$$\begin{aligned} Y_{\delta t}=\frac{Zp}{N-ZF}. \end{aligned}$$
(13)

Using these calculations, we generate updates for the diffusion process \(S_t\) on the time interval \([t_n,t_n+\delta t]\) with \(\mu =\mu (S_n)\) and \(\sigma =\sigma (S_n)\) as [23]

$$\begin{aligned} S_{n+1}=S_n+\frac{Z(S_n) p_n}{(N(S_n)-Z(S_n)F(S_n))}, \end{aligned}$$
(14)

where \(F(S_n)=\frac{\mu (S_n)}{\sigma ^2(S_n)}\) and \(N(S_n)=\sqrt{F^2(S_n)+\frac{2\lambda }{\sigma ^2(S_n)}}\). An exit event occurs during the time interval \([t_n,t_n+\delta t]\) if \(S_{n+1}\) crosses the barrier H [23]. We call this algorithm the standard Exp algorithm. Under this method, the hitting time error is proportional to \(\sqrt{\frac{1}{\lambda }}\).
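A single update of the standard Exp algorithm, combining Eqs. (12)–(14), might be sketched as follows (illustrative Python; the function name and interface are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_step(s, mu, sigma, lam):
    """One update of the standard Exp algorithm, Eq. (14).

    mu, sigma: drift and diffusion coefficient functions, frozen at the
    current state s; lam: rate of the exponential time step (mean 1/lam).
    Returns the new state after one exponential time step.
    """
    F = mu(s) / sigma(s) ** 2
    N = np.sqrt(F ** 2 + 2.0 * lam / sigma(s) ** 2)
    U, V = rng.uniform(), rng.uniform()
    Z = 1.0 if U <= (N + F) / (2.0 * N) else -1.0   # two-point sign variable, Eq. (12)
    p = -np.log(V)                                   # unit-rate exponential sample
    return s + Z * p / (N - Z * F)                   # Eqs. (13)-(14)
```

For \(\mu =0\) the increment is symmetric with mean zero, and in general its mean is \(\mu /\lambda \), the drift accumulated over an average time step.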

To improve this slow convergence, the conditional probability of a given barrier H being hit during the time step is computed, and a simple a posteriori test is performed after each step. This probability can be calculated exactly for \(Y_t\) by substituting the explicit expressions for \(\psi ^\uparrow \) and \(\psi ^\downarrow \) into Eq. (8). The required conditional probability that \(S_t\) crosses H for some \(t_n\le t \le t_n+\delta t\), given \(S_{n}\) and \(S_{n+1}\), is therefore approximated by [23]

$$\begin{aligned} \mathbb {P_H}=\mathbb {P}( \tau _H<\delta t|S_{n},S_{n+1})=\left\{ \begin{array}{ll} \exp (-2N(S_{n})(H-\max (S_{n},S_{n+1}))) &{} \quad \text{ if } \quad S_{n},S_{n+1}<H,\\ \exp (-2N(S_{n})(\min (S_{n},S_{n+1})-H)) &{} \quad \text{ if } \quad S_{n},S_{n+1}>H. \end{array} \right. \end{aligned}$$
(15)

Based on this conditional probability, we say that an excursion is detected in \([t_n,t_n+\delta t]\) if [23]

  1. 1.

    for \(S_n<H\),

    $$\begin{aligned} S_{n+1} >H \quad \text{ or }\quad W_n<\mathbb {P_H}. \end{aligned}$$
    (16)
  2. 2.

    for \(S_n>H\),

    $$\begin{aligned} S_{n+1}<H \quad \text{ or }\quad W_n<\mathbb {P_H}, \end{aligned}$$
    (17)

where \(W_n\sim \mathscr {U}([0,1])\).

The timestepping is repeated until a hitting event is detected or the maximum number of exponential time steps is reached. The hitting time is approximated as \(\tilde{\tau }_H=\frac{n}{\lambda }\), where n is the number of time steps taken. An MC procedure is then used to estimate the mean of the function of the first hitting time as a sample average over M independent simulations. For brevity, we call this technique the original Exp algorithm. Numerical experiments [3, 4, 23] show that incorporating this boundary test into the standard Exp algorithm recovers first-order convergence, \(O(\frac{1}{\lambda })\), in approximating the hitting time, which coincides with our numerical observations for CEV barrier options.
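Putting the update (14) and the boundary test (15)–(17) together, the original Exp algorithm for the first hitting time of an upper barrier \(H>S_0\) can be sketched as follows (an illustrative Python sketch under our own naming; the lower-barrier case is symmetric):

```python
import numpy as np

rng = np.random.default_rng(2)

def exp_hitting_time(s0, H, mu, sigma, lam, max_steps=10**6):
    """Original Exp algorithm: exponential time steps plus the boundary test.

    mu, sigma: drift and diffusion coefficient functions; lam: step rate.
    Returns the approximate hitting time n/lam of the upper barrier H (s0 < H).
    """
    s, n = s0, 0
    while n < max_steps:
        F = mu(s) / sigma(s) ** 2
        N = np.sqrt(F ** 2 + 2.0 * lam / sigma(s) ** 2)
        Z = 1.0 if rng.uniform() <= (N + F) / (2.0 * N) else -1.0
        s_next = s + Z * (-np.log(rng.uniform())) / (N - Z * F)   # Eq. (14)
        n += 1
        if s_next > H:                       # crossed at the end of the step
            return n / lam
        # Boundary test: probability the excursion crossed H inside the step,
        # Eq. (15) with both endpoints below H.
        p_hit = np.exp(-2.0 * N * (H - max(s, s_next)))
        if rng.uniform() < p_hit:
            return n / lam
        s = s_next
    return n / lam
```

For a Brownian motion with constant drift \(\mu >0\), the exact mean hitting time of an upper barrier is \((H-S_0)/\mu \), which a sample average of the returned values approaches as \(\lambda \) grows.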

2.2 Control variate technique

The original Exp algorithm also suffers from a high MC statistical error, of order \(O(\frac{1}{\sqrt{M}})\) with M simulations. To reduce this error efficiently and to speed up the MC method, the control variate (CV) technique is used [13, 14]. The CV method is one of the most successful and broadly applicable variance reduction techniques; it exploits information about the statistical error in estimates of known quantities to reduce the error in the estimate of an unknown quantity [13, 14, 18]. To describe this technique, we begin with the following proposition:

Proposition 2.5

Suppose we wish to estimate \(\mu _{X}=E[X]\), where X is an output of a simulation experiment. Suppose that Y is another output of the simulation, easily computed alongside X, with known mean \(\mu _{Y}=E[Y]\) and correlated with X. Then

  1. 1.

    The random variable

    $$\begin{aligned} Z_c=X+c(\mu _{Y}-Y), \end{aligned}$$
    (18)

    where \(c\in \mathbb {R}\), is an unbiased controlled estimator for \(\mu _{X}\) with

    $$\begin{aligned} \mathrm{{Var}}(Z_c)=\mathrm{{Var}}(X)+c^2\mathrm{{Var}}(Y)-2c\mathrm{{Cov}}(X,Y), \end{aligned}$$
    (19)

    where Var and Cov denote the variance and covariance, respectively.

  2. 2.

    The optimal value of c, given by

    $$\begin{aligned} c_\mathrm{{min}}=\frac{\mathrm{{Cov}}(X,Y)}{\mathrm{{Var}}(Y)}, \end{aligned}$$
    (20)

    minimizes the variance in Eq. (19), with

    $$\begin{aligned} \mathrm{{Var}}(Z_{c_\mathrm{{min}}})=\mathrm{{Var}}(X)-\frac{\mathrm{{Cov}}^2(X,Y)}{\mathrm{{Var}}(Y)}=\mathrm{{Var}}(X)(1-\rho ^2), \end{aligned}$$
    (21)

    where \(\rho \) is the correlation between X and Y.

Proof

For the proof, see [13]. \(\square \)

From the above proposition, we first note that for any choice of c, \(Z_c\) remains an unbiased estimator of \(\mu _{X}\), since \(E[Z_c]=E[X]=\mu _{X}\). Therefore, we can use \(Z_c\) instead of X to estimate \(\mu _{X}\). As \(|\rho |\) approaches 1, the minimized variance approaches zero: the more strongly X and Y are correlated, the greater the variance reduction, which suggests how to choose the control variate Y. Here, Y is called a control variate since the observed error \(\mu _{Y}-Y\) serves as a control in the MC simulation. In practice, \(\mathrm{{Var}}(Y)\) and \(\mathrm{{Cov}}(X,Y)\) are unknown and are therefore estimated from the MC simulation itself [13, 14, 18].

The CV technique improves the constant in the rate of convergence and can lead to very substantial error reductions if a suitable CV is chosen [13, 14]. We therefore combine it with the original exponential timestepping algorithm to exploit this reduction and speed up the MC simulation [13].
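A minimal sketch of the controlled estimator, with the optimal coefficient estimated from the same samples, is given below (with the convention \(Z_c=X+c(\mu _Y-Y)\), the variance-minimizing coefficient is \(\mathrm{{Cov}}(X,Y)/\mathrm{{Var}}(Y)\); the function name is ours):

```python
import numpy as np

def control_variate_estimate(x, y, mu_y):
    """Controlled MC estimator Z_c = X + c (mu_Y - Y), c estimated from the samples.

    x: samples of the target output; y: samples of the control; mu_y: known E[Y].
    Returns (estimate, variance_of_one_controlled_sample).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.cov(x, y, ddof=1)            # 2x2 sample covariance matrix
    c = cov[0, 1] / cov[1, 1]             # sample estimate of Cov(X,Y)/Var(Y)
    z = x + c * (mu_y - y)
    return z.mean(), z.var(ddof=1)
```

Estimating c from the same samples introduces a small bias of order \(O(\frac{1}{M})\), which is usually negligible in practice [14].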

3 Problem setting

Let \(f(S_{\tau _H},\tau _H)\) be the discounted payoff function of a barrier option under the risk-neutral measure Q, where \(\tau _{H}\) is the first hitting time of the predefined barrier H by the asset price \(S_t\), which follows the CEV process defined by Eq. (3). The problem we consider here is to evaluate the price of the CEV barrier option V(S, t), which is given as [8, 11, 33]

$$\begin{aligned} V(S,t)= E^Q[f(S_{\tau _H},\tau _H)] \end{aligned}$$
(22)

using an efficient numerical technique. The crude MC approximation for V(St) is thus given by

$$\begin{aligned} \tilde{V}=\frac{1}{M}\sum _{j=1}^M f_j(S_{\tilde{\tau }_H},\tilde{\tau }_H), \end{aligned}$$

where M is the number of simulations, \(f_j\) is the discounted payoff in the jth simulation, and \(\tilde{\tau }_H\) is the approximation of the first hitting time \(\tau _H\). We choose \(\tilde{\tau }_H=\frac{n}{\lambda }\), where n is the number of time steps taken until the exit event is detected during the exponential time step \([t_n,t_n+\delta t]\). The discounted payoff \(f_j\) is calculated using the exponential timestepping Euler algorithm with the boundary test and the CV technique discussed above. We consider two cases of CEV barrier options here: the up-and-out put and the down-and-out call. The discounted payoff function of the up-and-out put barrier option can be given as [7, 8, 11, 33]

$$\begin{aligned} f_\mathrm{{uop}}(S_{\tau _H},\tau _H)=e^{-rT}\max (K-S_T,0)1_{\{\tau _{H}>T\}}, \end{aligned}$$
(23)

and the discounted payoff function of the down-and-out call barrier option can be computed as

$$\begin{aligned} f_\mathrm{{doc}}(S_{\tau _H},\tau _H)=e^{-rT}\max (S_T-K,0)1_{\{\tau _{H}>T\}}, \end{aligned}$$
(24)

where K is the strike price and \(1_{\{.\}}\) denotes the indicator function.

To illustrate the use of the control variate technique combined with the original Exp algorithm for pricing barrier options under the CEV model, we consider the up-and-out put barrier option, whose discounted payoff is given by (23), and the down-and-out call barrier option, whose discounted payoff is given by (24). First, we choose their counterparts, the vanilla put option and the vanilla call option, as control variates; their discounted payoff functions can be computed as [7, 8, 11]:

$$\begin{aligned} f_\mathrm{{put}}(S_t,t)=e^{-rT}\max (K-S_T,0), \end{aligned}$$
(25)

and

$$\begin{aligned} f_\mathrm{{call}}(S_t,t)=e^{-rT}\max (S_T-K,0), \end{aligned}$$
(26)

respectively. Then their expected discounted payoffs under the risk-neutral probability measure Q are computed as [7, 8, 11]

$$\begin{aligned} V_\mathrm{{put}}(S,t)= E^Q[f_\mathrm{{put}}(S_t,t)], \end{aligned}$$
(27)

and

$$\begin{aligned} V_\mathrm{{call}}(S,t)= E^Q[f_\mathrm{{call}}(S_t,t)], \end{aligned}$$
(28)

respectively. The improved Exp algorithm then proceeds as follows:

  • Generate updates of the payoffs \(f_\mathrm{{uop}}\) in (23) and \(f_\mathrm{{doc}}\) in (24) and of the controls \(f_\mathrm{{put}}\) in (25) and \(f_\mathrm{{call}}\) in (26) at each time step using Eq. (14) (standard Exp algorithm).

  • Then apply an a posteriori test, using condition (16) for the up-and-out put option and condition (17) for the down-and-out call option, to detect the exit event during the time step (original Exp algorithm).

  • Estimate the expected payoffs V defined in Eq. (22), \(V_\mathrm{{put}}\) defined in Eq. (27) and \(V_\mathrm{{call}}\) defined in Eq. (28) using the MC samples.

  • Estimate the variances of the payoffs (\(\mathrm{{Var}}_\mathrm{{uop}}\) and \(\mathrm{{Var}}_\mathrm{{doc}}\)) and of their controls (\(\mathrm{{Var}}_\mathrm{{put}}\) and \(\mathrm{{Var}}_\mathrm{{call}}\)) using the MC samples.

  • Estimate the covariance between the payoff \(f_\mathrm{{uop}}\) and its control \(f_\mathrm{{put}}\), and between the payoff \(f_\mathrm{{doc}}\) and its control \(f_\mathrm{{call}}\), using the MC samples.

  • Calculate the optimal value \(c_\mathrm{{min}}\) of c according to Eq. (20).

  • Calculate the unbiased controlled estimator \(Z_c\) for the expected payoff (V) according to Eq. (18).

  • Calculate the variance of \(Z_c\): (\(\mathrm{{Var}}(Z_c)\)) according to Eq. (21).

  • Return the required quantities \(Z_c\) and its statistics \(E(Z_c)\) and \(\mathrm{{Var}}(Z_c)\).

We call the resulting algorithm the improved Exp algorithm and use it to simulate the CEV barrier options, as shown in the following section. See Algorithm 1 in Appendix A for the full algorithm used in this context.
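For concreteness, the steps above for the up-and-out put might be sketched as follows (an illustrative Python sketch, not the authors' Algorithm 1: the known vanilla put value is passed in as the control mean rather than computed, the elapsed time is approximated by its mean \(n/\lambda \), and the state is floored at a small positive value to keep the CEV volatility defined):

```python
import numpy as np

rng = np.random.default_rng(4)

def improved_exp_uop_put(s0, K, H, r, q, gamma, beta, T, lam, M, put_price):
    """Sketch of the improved Exp algorithm for an up-and-out put under CEV.

    put_price: known vanilla put value supplying the control mean (an input
    here, not computed).  Returns the controlled price estimate Z_c.
    """
    uop = np.empty(M)                      # barrier-option discounted payoffs
    put = np.empty(M)                      # control: vanilla put discounted payoffs
    disc = np.exp(-r * T)
    for j in range(M):
        s, t, knocked_out = s0, 0.0, False
        while t < T:                       # elapsed time approximated by n/lam
            mu, sig = (r - q) * s, gamma * s ** (beta + 1.0)  # CEV coefficients, Eq. (3)
            F = mu / sig ** 2
            N = np.sqrt(F ** 2 + 2.0 * lam / sig ** 2)
            Z = 1.0 if rng.uniform() <= (N + F) / (2.0 * N) else -1.0
            s_next = s + Z * (-np.log(rng.uniform())) / (N - Z * F)   # Eq. (14)
            if not knocked_out and (s_next >= H or
                    rng.uniform() < np.exp(-2.0 * N * (H - max(s, s_next)))):
                knocked_out = True         # boundary test, Eqs. (15)-(16)
            s = max(s_next, 1e-8)          # floor keeps the CEV volatility defined
            t += 1.0 / lam
        put[j] = disc * max(K - s, 0.0)                 # vanilla put payoff at T
        uop[j] = 0.0 if knocked_out else put[j]         # knocked-out paths pay 0
    c = np.cov(uop, put, ddof=1)[0, 1] / put.var(ddof=1)  # optimal coefficient
    return uop.mean() + c * (put_price - put.mean())      # controlled estimator
```

The down-and-out call case is analogous, with the inequalities in the boundary test reversed and the call payoff and vanilla call control substituted.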

Generally, simulating the mean of a function of the first hitting time, as in the case of barrier options, using crude MC combined with a timestepping method produces a global error \(\xi \) that can be divided into the first hitting time error \(\xi _H\) inherent in the timestepping method (here an exponential timestepping Euler method) and the MC statistical error \(\xi _s\) [4]. Thus,

$$\begin{aligned} \xi= & {} E^Q[f(S_{\tau _H},\tau _H)]-\frac{1}{M}\sum _{j=1}^M f_j(S_{\tilde{\tau }_H},\tilde{\tau }_H) \\= & {} (E^Q[f(S_{\tau _H},\tau _H)-f(S_{\tilde{\tau }_H},\tilde{\tau }_H)])+(E^Q[f(S_{\tilde{\tau }_H},\tilde{\tau }_H)]-\frac{1}{M}\sum _{j=1}^M f_j(S_{\tilde{\tau }_H},\tilde{\tau }_H)) \\= & {} \xi _H + \xi _s, \end{aligned}$$

where the expectation is with respect to the risk-neutral probability measure Q. In the following section, we discuss how to reduce both sources of error, using the boundary test for the hitting time error and the CV technique for the MC statistical error.

4 Computational results and analysis

In our numerical experiments, we employ the underlying simulation algorithms for the barrier options under the CEV model and examine their convergence properties and their efficiency in terms of the hitting time errors and MC statistical errors.

4.1 Hitting time error and convergence analysis

Figure 1 displays the hitting time error in estimating the payoff of the down-and-out call barrier option under the CEV model using the standard Exp method, the original Exp algorithm, and the improved Exp algorithm as functions of the mean duration of the time step \(\Delta t=\frac{1}{\lambda }\). The parameters are chosen as in [11]: the volatility parameter at initial time \(\sigma _{0}=0.25\), the dividend \(q=0\), the risk-free interest rate \(r=0.1\), the current value of the underlying asset \(S_0=100\), the strike price \(K=95\), the barrier \(H=90\), the elasticity of the local volatility \(\beta =-0.5\), and the expiration time \(T=0.5\). The theoretical value of 10.6013 is used to check the performance of the underlying algorithms [11]. Figure 2 shows the results of the hitting time error using the underlying simulation methods for the up-and-out put barrier option under the CEV model with parameters chosen as in [3]: the volatility parameter at initial time \(\sigma _{0}=0.40\), the dividend \(q=0.03\), the risk-free interest rate \(r=0.08\), the current value of the underlying asset \(S_0=100\), the strike price \(K=95\), the barrier \(H=130\), the elasticity of the local volatility \(\beta =0\), and the expiration time \(T=1\). The analytical value of the option price is 11.2713, which is used to check the efficiency of the underlying methods [19]. For both figures, we set the mean value of the exponential time step to \(\Delta t=\frac{1}{\lambda }=0.001, 0.0025, 0.005, 0.0075, 0.01, 0.025\). The averages are taken over \(M=10^7\) realizations; therefore, the error bars, which represent the statistical errors, are smaller than the plotted symbols and can be neglected. For reference, lines of slope one half and slope one are included. For both cases, we observe that the standard Exp method produces an error in the mean hitting time proportional to \(\sqrt{\frac{1}{\lambda }}\). 
This slow convergence of the standard Exp method is due to the approximation of the continuous samples of the underlying process using independent discrete random samples with exponential distributions, giving the values only at the beginning and the end of each time step. Thus, we have no information about the behaviour of the continuous process during the time step, leaving a finite probability that the barrier may have been hit between the computational nodes. However, when the original Exp method or the improved Exp algorithm is used, this probability can be taken into account using an efficient boundary test, reducing the error to being proportional to \(\frac{1}{\lambda }\) and achieving first-order convergence.
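As a concrete sketch of one such update-plus-test step, the following is a simplified, hypothetical version: the paper's conditions (16) and (17) are not reproduced here, and the classical Jansons–Lythe crossing probability for an exponential time step, with the diffusion coefficient frozen at the step's starting point, is used as a stand-in for the boundary test.

```python
import numpy as np

def exp_step_with_boundary_test(s0, mu, sigma, lam, H, rng):
    """One exponential-timestepping Euler update followed by an a posteriori
    boundary test for a lower barrier H < s0 (hypothetical sketch; the exact
    test used in the paper is given by conditions (16)/(17))."""
    dt = rng.exponential(1.0 / lam)               # exponentially distributed step
    dw = np.sqrt(dt) * rng.standard_normal()      # Brownian increment
    s1 = s0 + mu(s0) * dt + sigma(s0) * dw        # Euler update
    if s1 <= H:                                   # barrier crossed at a node
        return s1, dt, True
    # Probability that the path dipped below H between the two nodes; this is
    # the classical Jansons-Lythe form for an exponential time step, with the
    # diffusion coefficient frozen at s0 (an approximation).
    p_hit = np.exp(-np.sqrt(2.0 * lam) * ((s0 - H) + (s1 - H)) / sigma(s0))
    return s1, dt, bool(rng.random() < p_hit)
```

The extra Bernoulli draw accounts for undetected excursions below the barrier inside the step, which is precisely what restores first-order convergence.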

Fig. 1
figure 1

Plots of the hitting time error calculated using the standard exponential timestepping method, the original exponential timestepping method and the improved exponential timestepping algorithm against the discretization parameter \(\Delta t\) for the down-and-out call barrier option under the CEV model. The parameters are fixed as: the volatility parameter at initial time \(\sigma _{0}=0.25\), the dividend \(q=0\), the risk-free interest rate \(r=0.1\), the current value of the underlying asset \(S_0=100\), the strike price \(K=95\), the barrier \(H=90\), the elasticity of the local volatility \(\beta =-0.5\), and the expiration time \(T=0.5\). The theoretical value is 10.6013 and the averages are taken over \(M=10^7\) realizations

Fig. 2
figure 2

Plots of the hitting time error calculated using the standard exponential timestepping method, the original exponential timestepping method and the improved exponential timestepping algorithm against the discretization parameter \(\Delta t\) for the up-and-out put barrier option under the CEV model. The parameters are fixed as: the volatility parameter at initial time \(\sigma _{0}=0.40\), the dividend \(q=0.03\), the risk-free interest rate \(r=0.08\), the current value of the underlying asset \(S_0=100\), the strike price \(K=100\), the barrier \(H=130\), the elasticity of the local volatility \(\beta =0\), and the expiration time \(T=1\). The theoretical value is 11.2713 and the averages are taken over \(M=10^7\) realizations

To further examine this convergence behaviour, we assume that the hitting time error satisfies the relation \(\xi _{H}=C{\Delta t}^b\) for some constants C and b. Thus, we have

$$\begin{aligned} \log \xi _{H}=\log C +b\log \Delta t. \end{aligned}$$

For the results of the down-and-out call barrier option, a least squares power law fit performed on the standard Exp method slope produces \(b=0.4626\approx 0.5\) with a least squares residual of 0.1155, whereas the original Exp method and the improved Exp algorithm give \(b=1.0172\approx 1\) with a least squares residual of 0.1377 and \(b=0.9468\approx 1\) with a least squares residual of 0.1495, respectively. Similar observations are obtained for the up-and-out put barrier option: for the standard Exp method, a least squares fit for \(\log C\) and b gives \(b=0.4546\approx 0.5\) with a least squares residual of 0.0664, whereas a fit performed on the original Exp method slope produces \(b=0.9983\approx 1\) with a residual of 0.1475, and one performed on the improved Exp algorithm slope gives \(b=0.9937\approx 1\) with a residual of 0.0524. This agrees with the convergence behaviour in Figs. 1 and 2 for these simulation methods. To conclude this point, we see that when the boundary test is incorporated, the hitting time error scales linearly with the mean value of the random time step, giving a weak convergence rate of one.
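A fit of this kind can be reproduced with an ordinary least squares regression in log-log coordinates. The error values in the test below are synthetic placeholders with exact half-order and first-order scaling, purely to illustrate the fitting procedure; they are not the paper's data.

```python
import numpy as np

# Mean time-step sizes used in Figs. 1 and 2
dts = np.array([0.001, 0.0025, 0.005, 0.0075, 0.01, 0.025])

def fit_order(errors):
    """Least squares slope b of log(xi_H) = log(C) + b*log(dt)."""
    b, log_c = np.polyfit(np.log(dts), np.log(errors), 1)
    return b
```

Applying this to the measured hitting time errors of each algorithm yields the slopes b quoted above.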

Moreover, as observed from these figures, the improved algorithm is more accurate in terms of the hitting time errors than the original Exp algorithm, despite the similarity between their respective rates of convergence. Note that the overall error can be defined as the expected value of the hitting time error. On the other hand, the statistical error is given as \(\xi _{s}=\sqrt{\frac{\mathrm{{Var}}}{M}}\), where Var represents the computed variance and M is the number of samples. To reduce the statistical error and so improve the overall accuracy, a large value of M is required, which is computationally expensive. However, the statistical error is proportional to the computed variance, and instead of adding more samples, one can concentrate on reducing the variance using an appropriate variance reduction technique, such as the CV. This technique leads to a large reduction in the computed variance and in the statistical error, without a significant increase in computational time, if a suitable CV is chosen. To price the underlying barrier options, suitable CVs are their counterpart vanilla options, giving a significant reduction in the resulting variances. For example, when \(\Delta t=0.001\), the computed variance of the original Exp method is 199.1457 for the first case and 265.9108 for the second case. Under the CV technique, these quantities are reduced to just 32.3957 and 32.3783, respectively. The variance of the improved algorithm is thus about six times smaller than that of the original method for the down-and-out call barrier option and about eight times smaller for the up-and-out put barrier option. This selection of the CV is successful due to the high correlation \(\rho \) between the vanilla options and their counterpart barrier options, with \(\rho =0.9151\) and \(\rho =0.9371\), respectively. More precisely, Eq. (21) yields

$$\begin{aligned} \mathrm{{Var}}(Z_c)=\mathrm{{Var}}(X)[1-\rho ^2], \end{aligned}$$

and to minimize \(\mathrm{{Var}}(Z_c)\), \(\rho \) should be close to 1. This means that the variance reduction factor \(\frac{\mathrm{{Var}}(X)}{\mathrm{{Var}}(Z_c)}=\frac{1}{1-\rho ^2}\), which is a measure of the computational acceleration, increases sharply as \(\rho \) approaches 1. Therefore, in the first case, a correlation of 0.9151 produces a sixfold acceleration, whereas in the second case, a correlation of 0.9371 yields an eightfold acceleration. This suggests that a high degree of correlation is needed for a CV to yield substantial benefits.
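As a quick check of the reported accelerations, the reduction factor can be evaluated directly at the correlations quoted above:

```python
def reduction_factor(rho):
    """Variance reduction factor Var(X)/Var(Z_c) = 1/(1 - rho^2) from Eq. (21)."""
    return 1.0 / (1.0 - rho ** 2)

# rho = 0.9151 gives roughly a sixfold acceleration; rho = 0.9371 roughly
# an eightfold one, matching the variance ratios reported in the text.
```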

4.2 Statistical error and measurement of simulation efficiency

Generally, if we have a choice between two algorithms, the algorithm with the lower variance may not be preferable if it takes longer to execute. Therefore, we should balance the variance reduction against the computational effort using the relation

$$\begin{aligned} \mathrm{{Eff}} = \text {computed variance}\times \text {CPU time} \end{aligned}$$
(29)

as a measure of efficiency. Thus, the algorithm with the smaller value of this product is the more efficient. The central limit theorem suggests that the estimator with the smaller product will be more precise and have a narrower confidence interval. The 95% confidence interval of the barrier option price is defined as

$$\begin{aligned} \tilde{V}\pm 1.96\left( \sqrt{\frac{\mathrm{{Var}}}{M}}\right) , \end{aligned}$$

where \(\tilde{V}\) is the MC estimate of the option price, Var represents the computed variance and M is the number of samples. The quantity \(\sqrt{\frac{\mathrm{{Var}}}{M}}\) represents the statistical error and can be considered a measure of an algorithm’s accuracy. Clearly, the number of paths M and the volatility parameter \(\sigma _{0}\) play a crucial role in the efficiency and accuracy of the underlying simulation algorithms. In addition, the restriction on the choice of the elasticity \(\beta \) plays an important role in the performance of the CEV process. Therefore, we also analyse the effect of \(\beta \) in this context.
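A small helper, with hypothetical names, showing how the point estimate, the confidence interval, the statistical error, and the efficiency product of Eq. (29) would all be computed from the same set of MC samples:

```python
import numpy as np

def mc_summary(samples, cpu_time):
    """Point estimate, 95% confidence interval, statistical error, and the
    efficiency product of Eq. (29), all from the same raw MC samples."""
    samples = np.asarray(samples, float)
    m = samples.size
    v_hat = samples.mean()              # MC estimate of the option price
    var = samples.var(ddof=1)           # computed variance
    stat_err = np.sqrt(var / m)         # statistical error
    ci = (v_hat - 1.96 * stat_err, v_hat + 1.96 * stat_err)
    eff = var * cpu_time                # Eq. (29); smaller is more efficient
    return v_hat, ci, stat_err, eff
```

Comparing two algorithms then amounts to comparing their returned `eff` values, while the interval widths and `stat_err` quantify accuracy.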

4.2.1 Efficiency under different values of MC samples

To examine the strong influence of these parameters, we first compare the efficiency and the accuracy of the original Exp algorithm and of the improved Exp algorithm for the down-and-out call barrier option under the CEV model with different values of M. The results are recorded in Tables 1 and 2. Four sample sizes are used, \(M=10^4\), \(10^5\), \(10^6\) and \(10^7\), keeping the other parameters as in Fig. 1, except that \(\Delta t=E[\delta t]\) is fixed at 0.002. From the numerical results in Table 1, we see that the improved algorithm is more efficient than the original algorithm by a factor of around six, indicating that the gain in efficiency is significant. Table 2 shows the 95% confidence intervals for the original and improved Exp algorithms, as well as the ratio of their widths. In addition, the resulting statistical errors are given to measure the accuracy of the underlying algorithms. We observe that using the CV shrinks the confidence intervals and improves the accuracy by a factor of around two and a half. For example, the statistical errors are reduced from 0.1402 to 0.056 for \(M=10^4\) and from 0.0045 to 0.0018 for \(M=10^7\) when the CV is used, indicating substantial error reductions.

Table 1 The efficiency of simulation algorithms as (simulated variance\(\times \) CPU time) for the down-and-out call barrier option under CEV model with \(\beta =-0.5\), \(\sigma =0.25\) and \(\Delta =E[\delta t]=0.002\)
Table 2 \(95\%\) confidence interval for the down-and-out call barrier option under the CEV model with \(\beta =-0.5\), \(\sigma =0.25\) and \(\Delta t=E[\delta t]=0.002\)

4.2.2 Efficiency under different values of volatility parameter \(\sigma \)

Similar observations are obtained for the up-and-out put barrier option under the CEV model with different values of the volatility parameter \(\sigma _{0}\). The mean duration of the time step \(\Delta t\) is fixed at 0.002 and the averages are taken over \(M=10^6\) realizations. The other parameters are as in Fig. 2. The results are recorded in Tables 3 and 4. First, Table 3 shows the comparison in efficiency between the improved Exp algorithm and the original Exp algorithm for four levels of \(\sigma _{0}\), where \(\sigma _{0}=30\), 40, 50, and 60. The efficiency factor decreases as \(\sigma _{0}\) increases, ranging from 22.7409 for \(\sigma _{0}=30\) to 3.02905 for \(\sigma _{0}=60\). These computations confirm that the algorithm using the CV is the more efficient, particularly for moderate levels of volatility. In Table 4, we report the corresponding results of the 95% confidence intervals and their widths, as well as the resulting statistical errors. The analytical values of the option prices are also included to examine the validity of the estimated confidence intervals. First, we see that the method with CV has confidence intervals that are around five times narrower for \(\sigma _{0}=30\) and around two times narrower for \(\sigma _{0}=60\) than those obtained by the original Exp algorithm. Despite this significant shrinkage of the confidence intervals, the analytical values are still contained within the intervals for all levels of volatility. Moreover, the improved Exp algorithm results in prices with lower statistical errors than the original Exp algorithm for the same levels of volatility. We observe that the reduction in the statistical error increases as the volatility parameter \(\sigma _{0}\) decreases. For example, when \(\sigma _{0}=30\), the control variate technique reduces the statistical error from 0.0126 to just 0.0025, a result almost five times smaller than the error obtained using the original Exp algorithm. 
However, if we increase the volatility to \(\sigma _{0}=60\), the statistical error is reduced from 0.0222 to 0.0123, giving almost twice the accuracy of the original Exp method. This is because a smaller variance is associated with a stronger correlation between the option and its control, and so a greater reduction in the resulting error is obtained.

4.2.3 Efficiency under different chosen values of the elasticity \(\beta \)

The CEV process (3) with elasticity \(\beta <0\) is used to model the implied volatility smile observed in the stock index options market [8]. Empirical observations indicate that the typical values of \(\beta \) in S&P 500 stock index option prices are strongly negative, around \(-3\) [21]. Given this empirical importance of \(\beta \), we analyse its effect on the efficiency of the underlying algorithms. Five tests are performed for the down-and-out call CEV barrier option, covering \(\beta =0, -0.5, -1, -2, -3\). The averages are taken over \(M=10^6\) realizations, keeping the other parameters as in Table 1. The results are recorded in Tables 5 and 6. First, Table 5 shows the comparison in efficiency between the improved Exp algorithm and the original Exp algorithm. The correlations \(\rho \) between the vanilla options and their counterpart barrier options are also given to measure how successful the selection of the control variate is. Under the restricted CEV process with \(-1\le \beta \le 0\), we found that the improved Exp algorithm is more efficient than the original Exp algorithm by a factor of between five and seven, indicating that the gain in efficiency is significant. In these cases, we observed that the computed correlations are close to 1, so the underlying options are well correlated. However, for \(\beta =-2\) and \(\beta =-3\), the correlations between the vanilla options and their counterpart barrier options are 0.1926 and 0.001, respectively. Based on Eq. (21), the variance reduction factor is given by \(\frac{1}{1-\rho ^2}\), which means that the efficiency factor approaches 1 as \(\rho \) goes to zero. This is consistent with our numerical observations of the efficiency factor, with a value of 1.0148 for \(\beta =-2\) and 1.0691 for \(\beta =-3\).

In Table 6, we report the corresponding results of the \(95\%\) confidence intervals and their widths, as well as the resulting statistical errors. We observe that using the CV shrinks the confidence intervals and improves the accuracy by a factor of around two and a half for \(-1\le \beta \le 0\), with statistical errors reduced from around 0.01 to around 0.005, indicating substantial error reductions. In contrast, the underlying algorithms have approximately the same statistical errors for the other values of \(\beta \). Finally, our numerical observations illustrate that the efficiency factor decreases as the absolute value of \(\beta \) increases, indicating that the successful selection of a suitable CV is affected dramatically by the choice of \(\beta \).

Table 3 The efficiency of simulation algorithms as (simulated variance\(\times \) CPU time) for the up-and-out put barrier option under the CEV model with \(\beta =0\), \(M=10^6\) and \(\Delta t=E[\delta t]=0.002\)
Table 4 \(95\%\) confidence interval for the up-and-out put barrier option under the CEV model with \(\beta =0\), \(M=10^6\) and \(\Delta t=E[\delta t]=0.002\)
Table 5 The efficiency of simulation algorithms as (simulated variance\(\times \) CPU time) for the down-and-out call barrier option under CEV model with \(M=10^6\), \(\sigma =0.25\) and \(\Delta t=E[\delta t]=0.002\)
Table 6 \(95\%\) confidence interval for the down-and-out call barrier option under the CEV model with \(M=10^6\), \(\sigma =0.25\) and \(\Delta t=E[\delta t]=0.002\)

5 Conclusion and discussion

Finding analytical results for exit time problems is, in general, a hard task and success can be limited, although closed forms can be obtained in some cases. Therefore, numerical and particularly MC simulation approaches are often used in this context. A common application of exit time problems in financial mathematics is the pricing of a barrier option, where computing the expected value of the discounted payoff under the risk-neutral measure is of interest. In our work, we have implemented a stochastic Euler algorithm with random time steps introduced by Jansons and Lythe [22, 23] to simulate the exit time of one-dimensional diffusion models, and we have adapted it to deal with functions of the exit time, such as the payoff of the CEV barrier options, where the volatility smile observed in the empirical data can be captured. The MC method is then applied to give an approximation of the mean of the function of the exit time. This procedure suffers from two sources of error: the hitting time error inherited from the exponential timestepping Euler method and the MC statistical error.

Based on the numerical experiments, we found that the hitting time error is of order \(O(\frac{1}{\sqrt{N}})\), with N exponential time steps. This slow convergence is due to the possibility that the exit event may have occurred without being detected between the discrete computational nodes. To reduce such an error efficiently and to retrieve a first order of convergence, Jansons and Lythe [23] suggested a simple boundary test in their original algorithm after each time step to check whether the barrier was hit during the time step. Our numerical experiments have confirmed this claim for the CEV barrier options, coinciding with the previous results, such as those for the double-well potential physical problem [23], for the neural diffusion models [4], and for the financial derivatives of barrier features [3]. Jansons and Lythe [23] also expected that further analysis of the solution of the linear second-order ordinary differential equation (4) could lead to a simple general proof of this rate of convergence. This would be an interesting area to consider in our future work.

The overall accuracy depends not only on the hitting time error but also on the statistical error arising from the MC approximation. This kind of error has an order of convergence of \(O(\frac{1}{\sqrt{M}})\), where M is the number of simulations. Indeed, the original exponential timestepping Euler algorithm has a high statistical error due to its exponentially distributed random time steps and unbounded exit time samples. This gives a large variance, and therefore, the statistical error will dominate. To overcome this problem, we combined the original Exp algorithm with a variance reduction technique called the control variate (CV for short), and we call this method the improved Exp algorithm for brevity. The CV technique can improve the constant of the rate of convergence and can lead to very substantial error reductions if a suitable CV is chosen. Our numerical experiments, for the chosen parameters, have shown that the variance of the improved Exp algorithm is about six times smaller than that of the original Jansons and Lythe method for the down-and-out call barrier option and about eight times smaller for the up-and-out put barrier option with the same number of paths. In terms of efficiency, which is the product of the computed variance and the amount of CPU time, the results have also shown that the improved algorithm under the restricted CEV process is more efficient than the original algorithm, leading to a significant reduction in the statistical errors.

For this study, the improved Exp algorithm has been implemented only for scalar diffusion problems, giving an efficient estimation of their hitting times. The challenge is how to develop such an algorithm to deal efficiently with multi-dimensional diffusion processes, such as the CEV barrier options with multiple assets. Jansons and Lythe [24] developed their original exponential timestepping algorithm to generate updates only for the case of multidimensional Brownian motion with hitting times of curved surfaces. Based on this work, one interesting direction for future work is to develop this algorithm to deal with more general diffusion problems such as barrier options with multiple assets. Furthermore, the improved Exp algorithm needs to be examined on much more complex and realistic applications, such as the barrier options of general stochastic volatility models; this will be a topic for our future research. Finally, it would also be of interest to investigate the possibility of incorporating other variance reduction techniques, such as antithetic variates, stratified sampling, importance sampling, Latin hypercube sampling, and moment-matching methods, to obtain greater overall savings and to improve the computational efficiency of the exponential timestepping algorithm.