1 Introduction

R&D investments are considered an important driving force for the growth of the modern economy. For analysts, it is very important to value these investments taking their uncertainty into account. As is well known, R&D projects are characterized by the sequentiality of their investments and by the flexibility to realize the production investment at any time before the expiration of the R&D innovation. In this scenario, the real option approach can capture these aspects, unlike the Net Present Value (NPV) and the Internal Rate of Return (IRR), which underestimate R&D projects. In particular, R&D projects can be considered as compound American exchange options (CAEO) in which both the gross project value and the investment cost are uncertain. Papers that deal with exchange option valuation include Margrabe (1978), McDonald and Siegel (1985), Carr (1988), Carr (1995) and Armada et al. (2007). In particular, McDonald and Siegel (1985) value a simple European exchange option, Carr (1988) develops a model to price a compound European exchange option, while Armada et al. (2007) propose a Richardson extrapolation in order to value a simple American exchange option.

These models consider that assets distribute “dividends” which, in a real options context, are the opportunity costs incurred if an investment project is postponed (Myers 1977).

However, the analytical computation of the CAEO is difficult, so it is convenient to implement a numerical method. Numerical approximation is therefore an important task, as witnessed by the contributions of Tilley (1993), Barraquand and Martineau (1995) and Broadie and Glasserman (1997).

The first goal of our paper is to implement a Monte Carlo methodology in order to value a CAEO in the context of R&D investment. To this end, building on Cortelezzi and Villani (2009) and Villani (2014), we apply the Least Squares Monte Carlo (LSM) method proposed by Longstaff and Schwartz (2001) in order to value the CAEO. Although this approach is accurate, the time required to simulate this kind of option is very long. Consequently, the second aim of this paper is to build a neural network architecture based on a Back Propagation (BP) system that uses the simulation results as “targets” in the learning phase. As there is no market valuation of the CAEO, the advantages of this approach are, first of all, the speed and accuracy of the computations and, secondly, the possibility of extending the trained neural network to value any R&D investment project. To highlight our method, we compare the BP approach with the Radial Basis Function (RBF) and General Regression Neural Network (GRNN) approaches.

Computing power has allowed nonlinear methods to become applicable to modeling and forecasting a host of economic and financial relationships. Neural networks, in particular, have been applied to many of these empirical cases. For instance, Aminian et al. (2006) compare the predictive power of the linear regression model against the fully generalized nonlinear neural network, with the improvement exposing the degree of nonlinearity present in the relationship investigated. Their study uses neural networks as an efficient nonlinear regression technique to assess the validity of linear regression in modeling financial data. Andreou et al. (2006) show that artificial neural network models using the Huber function outperform those optimized with least squares. Eskiizmirliler et al. (2020) approximate the unknown option value function using a trial function, which depends on a neural network solution and satisfies the given boundary conditions of the Black–Scholes equation. Arin and Ozbayoglu (2020) develop hybrid deep learning based option pricing models to achieve better pricing compared to Black–Scholes; their results indicate that the proposed models can generate more accurate prices for all option classes. Golbabai et al. (2019) suggest the RBF method, as a meshless technique, to solve the time-fractional Black–Scholes model for the European option pricing problem.

The literature that studies real options in the neural network context is not very extensive. For instance, Ma (2016) applies the real options method to petroleum exploration and development projects, selecting an appropriate option pricing method and analyzing gas exploration data, and points out that the application of the real options method can effectively improve investment project evaluation. Moreover, Taudes et al. (1998) propose using neural networks to value options by approximating the value function of the dynamic program, taking the current state for each mode of operation as input and yielding the mode to be chosen as output.

The paper is organized as follows. Section 2 presents the structure of an R&D investment and its evaluation in terms of real options, while Sect. 3 illustrates the valuation of the CAEO using the LSM approach. The implementation of the neural network architecture is described in Sect. 4 and some numerical applications are proposed in Sect. 5. Finally, Sect. 6 concludes.

2 R&D Structure as Real Option

In this section, we present a two-stage R&D investment whose structure is as follows: R is the research investment spent at initial time \(t_0=0\); IT is the investment technology to develop the innovation, paid at time \(t_1\); D is the production investment required to obtain the R&D project's value; and V is the R&D project value. Let us assume that \(IT=qD\) is a proportion q of asset D, so that it follows the same stochastic process as D, and that the production investment D can be realized between \(t_1\) and T. In particular, by investing R at time \(t_0\), the firm obtains a first investment opportunity that can be valued as a CAEO, denoted by \(C(S_k,IT,t_1)\). This option allows the firm to realize the investment technology IT at time \(t_1\) and to obtain, as underlying asset, the option to realize the market launch. Let us denote by \(S_k(V,D,T-t_1)\) this option value at time \(t_1\), with maturity date \(T-t_1\) and exercisable k times. In detail, during the market launch, the firm has another investment opportunity: to invest D between \(t_1\) and T and to receive the R&D project value V. Specifically, using the LSM approach, the firm must decide whether to invest D or to wait at any discrete time \(\tau _k=t_1+k\Delta t\), for \(k=0,1,2,\ldots ,h\), with \(\Delta t= \frac{T-t_1}{h}\), where h is the number of discretizations. In this way we capture the managerial flexibility to invest D before the maturity T and so to realize the R&D cash flows. Figure 1 depicts the R&D investment structure.

Fig. 1 R&D structure evaluated as CAEO

We assume that V and D follow the geometric Brownian motions:

$$\begin{aligned} \frac{dV_t}{V_t} = (\mu _v - \delta _v)dt + \sigma _v dZ_t^v \end{aligned}$$
(1)
$$\begin{aligned} \frac{dD_t}{D_t} = (\mu _d - \delta _d)dt + \sigma _d dZ_t^d \end{aligned}$$
(2)
$$\begin{aligned} cov\left( \frac{dV_t}{V_t},\frac{dD_t}{D_t}\right) = \rho _{vd}\sigma _v \sigma _d\,dt \end{aligned}$$
(3)

where \(\mu _v\) and \(\mu _d\) are the expected rates of return, \(\delta _v\) and \(\delta _d\) are the corresponding dividend yields, \(\sigma _v^2\) and \(\sigma _d^2\) are the respective variance rates, \(\rho _{vd}\) is the correlation between changes in V and D, and \((Z^v_t)_{t\in [0,T]}\) and \((Z^d_t)_{t\in [0,T]}\) are two Brownian processes defined on a filtered probability space \((\varOmega ,{\mathcal {A}}, \{{\mathcal {F}}_t\}_ {t\ge 0}, {\mathbb {P}})\), where \(\varOmega \) is the space of all possible outcomes, \({\mathcal {A}}\) is a sigma-algebra, \({\mathbb {P}}\) is the probability measure and \(\{{\mathcal {F}}_t\}_ {t\ge 0}\) is a filtration on \(\varOmega \). Assuming that the firm holds a portfolio of activities that allows it to perform valuations in a risk-neutral way, the dynamics of the assets V and D under the risk-neutral martingale measure \({\mathbb {Q}}\) are given by:

$$\begin{aligned} \frac{dV_t}{V_t} = (r - \delta _v)dt + \sigma _v dZ^{*v}_t \end{aligned}$$
(4)
$$\begin{aligned} \frac{dD_t}{D_t} = (r - \delta _d)dt + \sigma _d dZ^{*d}_t \end{aligned}$$
(5)
$$\begin{aligned} Cov\left( dZ^{*v}_t,dZ^{*d}_t\right) = \rho _{vd}\,dt \end{aligned}$$
(6)

where r is the risk-free interest rate and \(Z^{*v}_t\) and \(Z^{*d}_t\) are two standard Brownian motions under the probability \({\mathbb {Q}}\) with correlation coefficient \(\rho _{vd}\). After some manipulation, we get the equations for the price ratio \(P=\frac{V}{D}\) and for \(D_T\) under the probability \({\mathbb {Q}}\):

$$\begin{aligned} \frac{dP_t}{P_t} = (-\delta +\sigma _d^2 -\sigma _v\sigma _d \rho _{vd})\,dt +\sigma _v dZ^{*v}_t - \sigma _d dZ^{*d}_t \end{aligned}$$
(7)
$$\begin{aligned} D_T = \displaystyle D_0 \exp {\left\{ (r-\delta _d)T \right\} }\cdot \exp {\left( -\frac{\sigma ^2_d}{2}\,T +\sigma _d Z^{*d}_T\right) } \end{aligned}$$
(8)

where \(D_0\) is the value of asset D at initial time.
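For the reader's convenience, the manipulation behind Eq. (7) is an application of Itô's quotient rule to \(P=V/D\) under the dynamics (4)–(6); a sketch of the step, writing \(\delta \equiv \delta _v-\delta _d\) for the dividend-yield differential used below:

$$\begin{aligned} \frac{dP_t}{P_t}&= \frac{dV_t}{V_t}-\frac{dD_t}{D_t}-\frac{dV_t}{V_t}\,\frac{dD_t}{D_t}+\left( \frac{dD_t}{D_t}\right) ^2\\&= \big [(r-\delta _v)-(r-\delta _d)-\rho _{vd}\sigma _v\sigma _d+\sigma _d^2\big ]\,dt+\sigma _v\, dZ^{*v}_t-\sigma _d\, dZ^{*d}_t\\&= (-\delta +\sigma _d^2-\sigma _v\sigma _d\rho _{vd})\,dt+\sigma _v\, dZ^{*v}_t-\sigma _d\, dZ^{*d}_t \end{aligned}$$

Note that the risk-free rate r cancels, so the drift of P depends only on the dividend yields and on the covariance structure of the two assets.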

We can observe that \(U\equiv -\frac{\sigma ^2_d}{2}\,T +\sigma _d Z^{*d}_T \sim N \left( -\frac{\sigma ^2_d}{2}T,\,\sigma _d \sqrt{T}\right) \) and therefore \(\exp (U)\) is log-normally distributed with expected value \(E_{{\mathbb {Q}}}[\exp (U)]=1\). By Girsanov's theorem, we define a new probability measure \(\tilde{{\mathbb {Q}}}\) equivalent to \({\mathbb {Q}}\) whose Radon–Nikodym derivative is:

$$\begin{aligned} \frac{d\tilde{{\mathbb {Q}}}}{d{\mathbb {Q}}}= \exp {\left( -\frac{\sigma ^2_d}{2}\,T +\sigma _d Z^{*d}_T\right) } \end{aligned}$$
(9)

Hence, substituting in (8), we can write:

$$\begin{aligned} \displaystyle D_{T}= D_0 \,e^{(r-\delta _d)T} \cdot \frac{d\tilde{{\mathbb {Q}}}}{d{\mathbb {Q}}} \end{aligned}$$
(10)

By Girsanov's theorem, the processes:

$$\begin{aligned} d {\hat{Z}}^d_t = d Z^{*d}_t - \sigma _d dt \end{aligned}$$
(11)
$$\begin{aligned} d {\hat{Z}}^v_t = \rho _{vd} d{\hat{Z}}^d_t + \sqrt{1-\rho _{vd}^2}\,dZ'_t \end{aligned}$$
(12)

are two Brownian motions under the risk-neutral probability space \((\varOmega ,{\mathcal {A}}, {\mathcal {F}},\tilde{{\mathbb {Q}}})\), where \(Z'\) is a Brownian motion under \(\tilde{{\mathbb {Q}}}\) independent of \({\hat{Z}}^d\). By using Eqs. (11) and (12), we can now obtain the risk-neutral price ratio P:

$$\begin{aligned} P_t= P_0 \exp \left\{ \left( \delta _d -\delta _v -\frac{\sigma ^2}{2}\right) t + \sigma Z^p_t \right\} \end{aligned}$$
(13)

where \(\sigma \, dZ^p_t \equiv \sigma _v d{\hat{Z}}^v_t-\sigma _d d{\hat{Z}}^d_t \sim {\mathcal {N}}(0,\sigma \sqrt{dt})\), with \(\sigma =\sqrt{\sigma _v^2+\sigma _d^2-2\sigma _v\sigma _d\rho _{vd}}\), and \(Z_t^p\) is a Brownian motion under \(\tilde{{\mathbb {Q}}}\).
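Equation (13) admits exact simulation, since \(P_{t_1}\) is log-normal under \(\tilde{{\mathbb {Q}}}\). As a minimal illustration (a numpy sketch whose function name and interface are ours, not the Appendix A code):

```python
import numpy as np

def simulate_price_ratio(P0, delta_v, delta_d, sigma_v, sigma_d, rho,
                         t1, n_paths, seed=0):
    """Draw n_paths samples of P_{t1} from the exact solution (13)."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(sigma_v**2 + sigma_d**2 - 2.0 * sigma_v * sigma_d * rho)
    drift = (delta_d - delta_v - 0.5 * sigma**2) * t1
    Z = rng.standard_normal(n_paths)          # Z^p_{t1} / sqrt(t1)
    return P0 * np.exp(drift + sigma * np.sqrt(t1) * Z)

# Example with the base-case parameters of Sect. 5 (P0 = V0/D0 = 100/80)
P_t1 = simulate_price_ratio(1.25, 0.15, 0.0, 0.90, 0.30, 0.06,
                            t1=1.0, n_paths=30_000)
```

A single call of this kind produces the n outer paths \({\hat{P}}^i_{t_1}\) needed in Sect. 3.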

3 Valuation of CAEO Using LSM Method

The value of the CAEO can be determined as the expected value of the discounted cash flows under the risk-neutral probability \({\mathbb {Q}}\):

$$\begin{aligned} C(S_k,IT,t_1)=e^{-rt_1}E_{{\mathbb {Q}}}[\max (S_k(V_{t_1},D_{t_1} ,T-t_1)-IT,0)] \end{aligned}$$
(14)

Taking the asset D as numeraire and using Eq. (10), we obtain:

$$\begin{aligned} C(S_k,IT,t_1)=D_0 e^{-\delta _d t_1}E_{\tilde{{\mathbb {Q}}}}[\max (S_k(P_{t_1},1 ,T-t_1)-q,0)] \end{aligned}$$
(15)

where \(IT=q\,D_{t_1}\).

The market launch phase \(S_k(P_{t_1},1 ,T-t_1)\) can be analyzed using the LSM method. As in any American option valuation, the optimal exercise decision at any point in time is obtained as the maximum between the immediate exercise value and the expected continuation value. The LSM method allows us to estimate the conditional expectation function at each exercise date and so to have a complete specification of the optimal exercise strategy along each path. The method starts by simulating n price paths of asset \(P_{t_1}\) defined by Eq. (13) with \(\delta =\delta _v-\delta _d\). Let \({\hat{P}}^i_{t_1}, i=1,\ldots , n \) be the simulated prices. Starting from each \(i\)th simulated path, we then simulate a discretization of Eq. (13) for \(k=1,\ldots , h\); this process is repeated m times over the time horizon T. Starting with the last \(j\)th price \(\hat{P}^{i,j}_{T}\), for \(\, j=1,\ldots, m\), the option value at T can be computed as \(S_0( \hat{P}^{i,j}_{T},1,0)=\max ( \hat{P}^{i,j}_{T}-1,0)\). Working backward, at time \(\tau _{h-1}\) the process is repeated for each \(j\)th path. In this case, the expected continuation value may be computed using the analytic expression for a European exchange option, \(S_1(\hat{P}^{i,j}_{\tau _{h-1}},1,\Delta t)\). At time \(\tau _{h-1}\) the management must decide whether or not to invest: the value of the option is maximized by exercising immediately if the immediate exercise value exceeds the continuation value, i.e.:

$$\begin{aligned} \hat{P}^{i,j}_{\tau _{h-1}}-1\ge S_1(\hat{P}^{i,j}_{\tau _{h-1}},1,\Delta t). \end{aligned}$$
(16)

We can find the critical ratio \(P^*_{\tau _{h-1}}\) that makes inequality (16) binding:

$$\begin{aligned} P^*_{\tau _{h-1}}-1= S_1(P^*_{\tau _{h-1}},1,\Delta t) \end{aligned}$$

and so condition (16) is satisfied if \(\hat{P}^{i,j}_{\tau _{h-1}}\ge P^*_{\tau _{h-1}}\). However, it is computationally burdensome to compute the expected continuation value at all earlier times and thus to determine the critical prices \(P^*_{\tau _{k}},\, k=1,\ldots , h-2\), as shown in Carr (1995).
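As an illustration of the threshold computation at \(\tau _{h-1}\): in the normalized (numeraire D) setting, the continuation value \(S_1\) is a European exchange option, for which the McDonald–Siegel closed form with dividend yields applies. The following Python sketch (the function names are ours) assumes \(\delta _v>0\), so that a finite critical ratio exists:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def S1(P, tau, delta_v, delta_d, sigma):
    """European exchange option normalized by D: receive P, deliver 1."""
    d1 = (np.log(P) + (delta_d - delta_v + 0.5 * sigma**2) * tau) \
         / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return P * np.exp(-delta_v * tau) * norm.cdf(d1) \
           - np.exp(-delta_d * tau) * norm.cdf(d2)

def critical_ratio(tau, delta_v, delta_d, sigma):
    """Solve P* - 1 = S1(P*, 1, tau), i.e. make inequality (16) binding."""
    f = lambda P: (P - 1.0) - S1(P, tau, delta_v, delta_d, sigma)
    return brentq(f, 1.0 + 1e-9, 1e6)  # f < 0 near 1, f > 0 for large P
```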

The main contribution of the LSM method is to determine the expected continuation values by regressing the subsequent discounted cash flows on a set of basis functions of the current state variables. As described in Abramowitz and Stegun (1970), common choices of basis functions are the weighted Power, Laguerre, Hermite, Legendre, Chebyshev, Gegenbauer and Jacobi polynomials. In our paper we consider as basis functions the first three power polynomials. Let \(L^w\), \(w=1,2,3\), be the basis of functional forms of the state variable \(\hat{P}^{i,j}_{\tau _k}\) that we use as regressors. At time \(\tau _{h-1}\), the least squares regression is equivalent to solving the following problem:

$$\begin{aligned} \min _{\mathbf{a }} \sum _{j=1}^m \left[ S_0(\hat{P}^{i,j}_{T},1,0)e^{-r\Delta t} -\sum _{w=1}^3 a_w L^w ( \hat{P}^{i,j}_{\tau _{h-1}})\right] ^2 \end{aligned}$$
(17)

The optimal \(\hat{\mathbf{a }}=({\hat{a}}_1,{\hat{a}}_2,{\hat{a}}_3)\) is then used to estimate the expected continuation value along each path \(\hat{P}^{i,j}_{\tau _{h-1}}, \, j=1,\ldots , m\):

$$\begin{aligned} {\hat{S}}^i_1(\hat{P}^{i,j}_{\tau _{h-1}},1,\Delta t)= \sum _{w=1}^3 \hat{a}_w L^w ( \hat{P}^{i,j}_{\tau _{h-1}}) \end{aligned}$$

After that, the optimal decision for each price path is to choose the maximum between the immediate exercise and the expected continuation value.
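One backward induction step of this procedure can be sketched in a few lines of Python (our own illustration of Eq. (17), not the Appendix A code); for brevity it regresses on all paths, whereas Longstaff and Schwartz (2001) recommend using only in-the-money paths:

```python
import numpy as np

def lsm_step(P_now, cf_next, r, dt):
    """Backward LSM step: regress the discounted future cash flows on the
    power basis {P, P^2, P^3} and apply the early-exercise rule."""
    Y = cf_next * np.exp(-r * dt)                  # discounted cash flows
    X = np.column_stack([P_now, P_now**2, P_now**3])
    a_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)  # optimal (a1, a2, a3)
    continuation = X @ a_hat                       # estimated continuation values
    exercise = np.maximum(P_now - 1.0, 0.0)        # immediate exercise payoff
    return np.where(exercise > continuation, exercise, Y)
```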

Proceeding recursively until time \(t_1\), we obtain a final vector of continuation values for each price path \(\hat{P}^{i,j}_{\tau _k}\) that allows us to build a stopping rule matrix in Matlab that maximizes the value of the American option. As a consequence, the \(i\)th option value approximation \(\hat{S}^i_k(\hat{P}^i_{t_1},1 ,T-t_1)\) can be determined by averaging the discounted cash flows generated by the option at each date over all paths \(j=1,\ldots , m\). Finally, it is possible to implement a Monte Carlo simulation to approximate the CAEO:

$$\begin{aligned} C(S_k,IT,t_1)\approx D_0e^{-\delta _d t_1} \left( \sum _{i=1}^n \frac{\max (\hat{S}^i_k(\hat{P}^i_{t_1},1 ,T-t_1)-q, 0)}{n}\right) \end{aligned}$$
(18)

“Appendix A” illustrates the complete Matlab algorithm to value the CAEO. We conclude that, applying the real options methodology, the R&D project will be undertaken at time \(t_0\) if \(C(S_k,IT,t_1)-R\) is positive; otherwise the investment will be rejected.
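A compact sketch of the outer estimator in Eq. (18), together with the Monte Carlo standard error reported in Sect. 5 (again an illustration, not the Appendix A algorithm):

```python
import numpy as np

def caeo_estimate(S_hat, q, D0, delta_d, t1):
    """Eq. (18): S_hat[i] is the LSM estimate of S_k(P^i_{t1}, 1, T - t1)
    along the i-th outer path; returns the CAEO price and its std. error."""
    payoff = np.maximum(S_hat - q, 0.0)
    scale = D0 * np.exp(-delta_d * t1)
    price = scale * payoff.mean()
    stderr = scale * payoff.std(ddof=1) / np.sqrt(len(payoff))
    return price, stderr
```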

4 Feed–Forward Neural Networks to Value CAEO

In this section we describe the neural network architecture used to value the CAEO: a BP system in which the input layer is composed of \(n=10\) nodes, one for each variable:

$$\begin{aligned} CAEO=f(V,D,IT, \delta _d,\delta _v, t_1,T, \rho ,\sigma _v,\sigma _d) \end{aligned}$$

and is composed of one hidden layer with \(p=6\) nodes, as shown in Fig. 2. Following Eskiizmirliler et al. (2020), we describe the BP neural network. The yellow circles are the ten input parameters described above, the blue ones are the six nodes in the hidden layer and the pink node denotes the CAEO output given by the BP structure. Moreover, the red and green lines denote a negative (inhibitory) and a positive (excitatory) connection, respectively, depending on the weights connecting the nodes; the thickness represents the intensity of the link.

Fig. 2 Structure of the general feed-forward single hidden layer perceptron

For the learning phase, the network is trained on a sufficiently large number of targets given by the previous Monte Carlo LSM estimates of the CAEO, summarized in Tables 4 and 5. The idea is to use the Monte Carlo values, whose simulation time is very long, as targets in the training phase in order to extend, with the BP neural network, the valuation to any input vector. This approach allows a drastic reduction in simulation time. For the Monte Carlo approach, we have used \(x=100\) discretizations, \(m=50{,}000\) simulations for the American option and \(n=30{,}000\) paths for the compound option. We recall that for each path \(j=1,\ldots ,n\) there are \(m=50{,}000\) trajectories to simulate the American exchange option. This increases the simulation time of the CAEO in exchange for better accuracy.

We propose a logistic activation function between the input and hidden layers; its approximation properties are well established (see White 1990). In our model, we have assumed one hidden layer with six nodes and one output layer, i.e. the neural value of the CAEO, with a purelin (linear) activation function. Each node performs computation and transformation operations. In particular, in the hidden layer, the aggregation function used is the sum function:

$$\begin{aligned} a_{j}=\sum _{i=1}^{10} w_{ij}x_i-b_j \end{aligned}$$
(19)

where \(j=1,\ldots ,6\) indexes the nodes in the hidden layer, \(x_i\) are the input values for \(i=1,\ldots ,10\) and \(b_j\) is a threshold value named bias (for more detail see Hecht-Nielsen 1990). As seen in Fig. 2, a feed-forward neural network model with a single hidden layer, which takes the inputs from the input layer and produces as outputs the weighted sums of the inputs net of the bias values, is preferred to solve the problem effectively. The output produced by each node of the hidden layer is obtained by the logistic activation function:

$$\begin{aligned} x'_{j}=f(a_{j})=\frac{1}{1+e^{-a_{j}}} \end{aligned}$$

and this output becomes the input for the output layer. In the same fashion:

$$\begin{aligned} y'=g\left( \sum _{j=1}^{6} w'_{j}x'_j-b'\right) \end{aligned}$$

is the output that the network produces at the end of the first learning cycle, in which g is the purelin activation function and \(w'_j\) and \(b'\) are the weights and the bias, respectively.
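Putting Eq. (19) and the two activations together, the complete forward pass of the 10-6-1 network can be sketched as follows (a minimal Python illustration; the variable names are ours):

```python
import numpy as np

def forward(x, W, b, w_out, b_out):
    """Forward pass: x has 10 entries, W is 6x10, b has 6 entries,
    w_out has 6 entries and b_out is a scalar."""
    a = W @ x - b                    # Eq. (19): aggregation in the hidden layer
    x_h = 1.0 / (1.0 + np.exp(-a))   # logistic activation
    y_hat = w_out @ x_h - b_out      # purelin (identity) output
    return y_hat, x_h
```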

Moreover, BP networks use a learning algorithm based on the conventional gradient descent method, in which the input-output pairs are introduced iteratively into the network and the weights are appropriately updated and modified in order to reach the minimum of the mean squared error (MSE) function:

$$\begin{aligned} E=\frac{1}{2K}\sum _{k=1}^{K}[y'_k-y_k]^2 \end{aligned}$$
(20)

where K is the number of input-output pairs from the LSM simulation, \(y_k\) is the real output (target) associated with input vector k and \(y'_k\) is the neural value.

For the numerical solution of the minimization problem defined above, the “Gradient Descent Method” is considered. Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function: to find a local minimum, one takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point. In particular, the weights are updated by back-propagating appropriately the value assumed by the function E from the output layer, through the hidden layer, back to the initial state, as:

$$\begin{aligned} w_{ij}(q+1) = w_{ij}(q)-\eta \frac{\partial E}{\partial w_{ij}}(q+1) \end{aligned}$$
(21)
$$\begin{aligned} w'_{j}(q+1) = w'_{j}(q)-\eta \frac{\partial E}{\partial w'_{j}}(q+1) \end{aligned}$$
(22)

where w(q) is the value of the weights at iteration q and \(\eta \) is the learning rate, which we set to \(\eta =0.60\).

The choice of the learning rate \(\eta \) is important, as it plays a vital role in the convergence of the algorithm. A lower \(\eta \) value causes a long running time, which makes the algorithm computationally expensive; in contrast, large \(\eta \) values generally imply divergence from the solution.
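A self-contained sketch of one full-batch learning cycle, implementing Eqs. (21)-(22) for all weights and biases, is given below (our own illustration, assuming the half-MSE of Eq. (20); X stacks the K input vectors row-wise):

```python
import numpy as np

def train_epoch(X, y, W, b, w_out, b_out, eta=0.60):
    """One gradient-descent update of the 10-6-1 network on E of Eq. (20)."""
    K = len(y)
    A = X @ W.T - b                      # Eq. (19) for all K samples
    H = 1.0 / (1.0 + np.exp(-A))         # hidden outputs x'_j
    y_hat = H @ w_out - b_out            # purelin output y'
    err = (y_hat - y) / K                # dE/dy'_k from Eq. (20)
    g_w_out = H.T @ err                  # dE/dw'_j
    g_b_out = -err.sum()                 # dE/db' (minus sign from y' = w'.x' - b')
    delta = np.outer(err, w_out) * H * (1.0 - H)  # back-propagated sensitivities
    g_W = delta.T @ X                    # dE/dw_ij
    g_b = -delta.sum(axis=0)             # dE/db_j (minus sign from Eq. (19))
    return (W - eta * g_W, b - eta * g_b,
            w_out - eta * g_w_out, b_out - eta * g_b_out)
```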

4.1 Other Methods: Radial Basis Function (RBF) and General Regression Neural Network (GRNN)

As we have seen, the BP network is one of the most widely used neural networks. It is a multi-layer network that includes at least one hidden layer. First, the input is propagated forward through the network to get the response of the output layer; then, the sensitivities are propagated backward to reduce the error. During this process, the weights in all hidden layers are modified. As the propagation continues, the weights are continuously adjusted and the precision of the output improves.

Radial Basis Function (RBF) networks are three-layer feed-forward networks with an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. During training, vectors are fed into the first layer and fanned out to the hidden layer, where a cluster of RBF functions transforms the inputs into outputs while the input-to-hidden weights are adjusted; then, under the supervision of the target vector, the weights applied to the output vector of the hidden layer are adjusted. When clustering, the Euclidean distance between the input vectors and the weight vectors adjusted by the training process is calculated, each input sample is assigned to a class, and the output layer collects the samples belonging to the same class and organizes an output vector, the final clustering. In contrast to the BP network, the hidden units of the RBF network are formed from the distance between a prototype vector and the input vector, transformed by a non-linear basis function. The basic structure of an RBF neural network includes an n-dimensional input layer, a hidden layer of fairly larger dimension (\(p>n\)) and the output layer. The typical radial basis function is the Gaussian:

$$\begin{aligned} x'_j =\exp \left( -{\frac{||a_j-c_j ||^2}{2\sigma ^2_j}}\right) \end{aligned}$$

where p is the number of neurons in the hidden layer, \(|| \cdot ||\) is the Euclidean norm, \(c_j\) and \(\sigma _j\) are the center and width of hidden neuron j, respectively, and \(a_j\) is given by formula (19). The output is given by the following linear transformation:

$$\begin{aligned} y'_{RD}=\sum _{j=1}^p w'_jx'_j \end{aligned}$$
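A sketch of this RBF forward computation in Python (our own illustration, with the squared norm in the Gaussian exponent):

```python
import numpy as np

def rbf_forward(a, centers, widths, w_out):
    """Gaussian hidden layer and linear output: `centers` is p x n,
    `widths` and `w_out` have p entries, `a` has n entries."""
    dist2 = np.sum((a - centers)**2, axis=1)    # ||a - c_j||^2
    x_h = np.exp(-dist2 / (2.0 * widths**2))    # Gaussian basis outputs
    return w_out @ x_h                          # y'_RD
```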

The Generalized Regression Neural Network (GRNN), suggested by Specht (1991), belongs to the family of RBF networks, with the assumption that the number of neurons in the hidden layer equals the sample size of the training data and that the center of the i-th neuron is the i-th sample \(x_i\). We remark that the GRNN directly produces a predicted value without a training process.
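Concretely, the GRNN prediction is a kernel-weighted average of the training targets, with one Gaussian unit centered on each training sample; a minimal sketch (the bandwidth sigma is a hypothetical smoothing parameter):

```python
import numpy as np

def grnn_predict(x, X_train, y_train, sigma=0.1):
    """Specht's GRNN: weight each training target y_i by a Gaussian
    kernel of the distance between x and the i-th training sample."""
    dist2 = np.sum((X_train - x)**2, axis=1)
    k = np.exp(-dist2 / (2.0 * sigma**2))
    return (k @ y_train) / k.sum()
```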

5 Numerical Results

Figure 3a shows the \(K=100\) LSM data used in training; in particular, their values have been normalized between 0 and 1. The lowest value corresponds to a CAEO value of 1.850, while the highest corresponds to 32.250. In the same picture, the red line depicts the average value of the LSM simulation. It is also interesting to analyze Fig. 3b, which illustrates the evolution of the standard error in the training phase over the learning cycles: the average training error decreases until it reaches the level 0.0034.

Fig. 3 Back Propagation training results

To illustrate our results, we simulate with the neural network the CAEO value starting from the following initial parameter values: \(V=100; \,D=80; \, IT=40; \, \delta _v=0.15;\, \delta _d=0; \, t_1=1; \, T=2;\, \rho =0.06; \,\sigma _v=0.90; \, \sigma _d=0.30\), and we change all variables one at a time. The results are summarized in Table 1. The neural network simulates the CAEO respecting the sensitivities of the several variables. In particular, the CAEO increases when the asset V and the volatilities \(\sigma _v\) and \(\sigma _d\) rise, while it decreases when the investment costs D and IT increase. The first advantage of having a neural network to simulate the CAEO is the time needed to obtain the simulated output with a low standard error: the LSM Monte Carlo is accurate, with an average standard error of 0.0094, but its simulation time is very long. Another advantage of the neural network is that it describes the influence that each variable exerts on the CAEO value. As reported in Table 2, the most important parameters are the volatility of the gross project \(\sigma _v\), the gross project value V and the volatility \(\sigma _d\). It is also possible to verify, as shown in Fig. 4, that most of the results are accurate; they are in fact almost all arranged on the fit line. Indeed, the correlation coefficient (R) of the linear fit (\(y = ax\)) is 0.999, giving an almost perfect fit, something of course expected since this data set was used for the training of the network. The very good fitting values indicate that the training was done very well. These results verify the success of BP neural networks in recognizing the implicit relationships between input and output variables.

Table 1 Neural Network values with the BP method
Table 2 Sensitivity of the parameters on the CAEO value

Finally, to appreciate the goodness of the BP method, we compare it with the RBF and GRNN networks. Some significant results are summarized in Table 3. To evaluate the goodness of the networks, the MSE (see Eq. 20) and the Mean Absolute Percentage Error (MAPE), defined as:

$$\begin{aligned} MAPE=\frac{1}{K}\sum _{k=1}^K\left| \frac{y'_k-y_k}{y_k} \right| \end{aligned}$$

are proposed. As we can see from the results in Table 3, the RBF and the GRNN seem to underestimate the CAEO value with respect to BP. Nevertheless, all three methods have good predictive power, even if the MSE and MAPE are slightly higher for RBF and GRNN than for BP. It is evident that the BP network provides much better predictions than the other two types of neural networks. As regards the latter, it is difficult to establish which one behaves best, since the accuracy of their predictions is fairly uniform; what we can say, however, is that the GRNN is the network that behaves worst.
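For completeness, both error measures are immediate to compute from the network outputs \(y'_k\) and the LSM targets \(y_k\) (a two-function numpy sketch):

```python
import numpy as np

def mse(y_hat, y):   # Eq. (20), with the 1/(2K) normalization
    return 0.5 * np.mean((y_hat - y)**2)

def mape(y_hat, y):  # mean absolute percentage error
    return np.mean(np.abs((y_hat - y) / y))
```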

Fig. 4 Regression based on BP Neural Network: training, validation, test and overall result

Table 3 Comparison among the BP, RBF and GRNN methodologies when the asset value V changes

6 Conclusions

In this paper, we have shown how the neural network methodology, combined with the LSM, can be used to evaluate R&D projects. In particular, an R&D opportunity is a sequential investment and can therefore be considered as a compound option. We have assumed the managerial flexibility to realize the production investment D before the maturity T in order to benefit from the R&D cash flows, so that an R&D project can be viewed as a Compound American Exchange Option (CAEO), which allows us to couple both the sequential frame and the managerial flexibility of an R&D investment. We have analyzed two main contributions. The first is that the LSM method makes it possible to determine the expected continuation value by regressing the discounted cash flows on the simple powers of the variable P, and so to avoid the effort of computing the critical prices \(P^*_{\tau _k},k=1,\ldots, h-2\); however, this approach requires a long time to value a CAEO. The second contribution is the construction of a neural network based on the BP architecture, using the LSM simulation results as “targets” in the learning phase. As there is no market valuation of the CAEO, we have seen that the advantages of this approach are the speed and accuracy of the computations and, moreover, the possibility of extending the trained neural network to value any R&D investment project. Finally, we have compared the BP results with those obtained from the RBF and GRNN neural approaches: based on the MSE and MAPE, the BP provides much better predictions.