
The learning premium

Published in Mathematics and Financial Economics.

Abstract

We find equilibrium stock prices and interest rates in a representative-agent model where dividend growth is uncertain, but gradually revealed by dividends themselves, while asset prices reflect current information and the potential impact of future knowledge. In addition to the usual premium for risk, stock returns include a learning premium, which reflects the expected change in prices from new information. In the long run, the learning premium vanishes, as prices and interest rates converge to their counterparts in the standard setting with known dividend growth. If both relative risk aversion and elasticity of intertemporal substitution are above one, the model reproduces the increase in price-dividend ratios observed in the past century, and implies that—in the long run—price-dividend ratios may increase a further forty percent above current levels.


Fig. 1 (Source: CRSP monthly data 1926–2015)

Fig. 2

Fig. 3


Notes

  1. For example, see Campbell and Shiller [1], Breen et al. [2], Fama and French [3], Glosten et al. [4], Lamont [5], Baker and Wurgler [6], Lettau and Ludvigson [7], Campbell and Vuolteenaho [8], Polk et al. [9], Ang et al. [10], Binsbergen et al. [11], Chen et al. [12], Kelly and Pruitt [13], Van Binsbergen et al. [14], Li et al. [15], Da et al. [16] and Martin [17].

  2. In detail, \(d\widehat{W^D}_t = \frac{\mu ^D - \widehat{\mu ^D}_t}{\sigma ^D} dt + dW_t.\)

  3. The trivial exception is \(\gamma = 1\), which leads to \(S_t = D_t /\beta \), whence learning has no effects on prices, both with anticipative utility and with rational expectations.

  4. A similar but more technical calculation with Epstein–Zin isoelastic preferences confirms that the prices still diverge, except in the case of unit EIS (elasticity of intertemporal substitution) that nests logarithmic utility and implies that \(S_t = D_t/\beta \).

  5. Formally, consider a probability space \((\Omega , \mathcal F, \mathbb {P})\) supporting a uniform random variable \(P\sim U[0,1]\) and a sequence \((X_t)_{t\ge 1}\) that is IID conditionally on P, with \(X_t\sim B(P)\), where B(P) denotes the Bernoulli distribution with parameter P. Bayesian updating uses the filtration generated by the observations, \(\mathcal F_t = \sigma (X_1, \ldots , X_t),~t\ge 0\).

  6. This assumption can be relaxed to \(P \sim Beta({\eta }_0, \beta _0)\), with \({\eta }_0,\beta _0>0\).

  7. Recall that time-additive power utility with risk aversion \(\gamma \) is recovered from the Epstein–Zin setting by taking \(\gamma =\rho \) and \(\theta =1\), and using the transformation \(V_t = \frac{U_t^{1-\gamma }}{(1-\gamma )(1-\delta )}, \delta = {\text {e}}^{-\beta }\), whence (5) becomes \(V_t = \frac{C_t^{1-\gamma }}{1-\gamma } + {\text {e}}^{-\beta } \mathbb {E}_t\left[ V_{t+1}\right] \).

References

  1. Campbell, J.Y., Shiller, R.J.: The dividend-price ratio and expectations of future dividends and discount factors. Rev. Financ. Stud. 1(3), 195–228 (1988)


  2. Breen, W., Glosten, L.R., Jagannathan, R.: Economic significance of predictable variations in stock index returns. J. Finance 44(5), 1177–1189 (1989)


  3. Fama, E.F., French, K.R.: Common risk factors in the returns on stocks and bonds. J. Financ. Econ. 33(1), 3–56 (1993)


  4. Glosten, L.R., Jagannathan, R., Runkle, D.E.: On the relation between the expected value and the volatility of the nominal excess return on stocks. J. Finance 48(5), 1779–1801 (1993)


  5. Lamont, O.: Earnings and expected returns. J. Finance 53(5), 1563–1587 (1998)


  6. Baker, M., Wurgler, J.: The equity share in new issues and aggregate stock returns. J. Finance 55(5), 2219–2257 (2000)


  7. Lettau, M., Ludvigson, S.: Consumption, aggregate wealth, and expected stock returns. J. Finance 56(3), 815–849 (2001)


  8. Campbell, J.Y., Vuolteenaho, T.: Inflation illusion and stock prices. Technical report, National Bureau of Economic Research (2004)

  9. Polk, C., Thompson, S., Vuolteenaho, T.: Cross-sectional forecasts of the equity premium. J. Financ. Econ. 81(1), 101–141 (2006)


  10. Ang, A., Bekaert, G., Wei, M.: Do macro variables, asset markets, or surveys forecast inflation better? J. Monet. Econ. 54(4), 1163–1212 (2007)


  11. Van Binsbergen, H.J., Koijen, R.S.J.: Predictive regressions: a present-value approach. J. Finance 65(4), 1439–1471 (2010)


  12. Chen, L., Da, Z., Zhao, X.: What drives stock price movements? Rev. Financ. Stud. 26(4), 841–876 (2013)


  13. Kelly, B., Pruitt, S.: Market expectations in the cross-section of present values. J. Finance 68(5), 1721–1756 (2013)


  14. Van Binsbergen, J., Hueskes, W., Koijen, R., Vrugt, E.: Equity yields. J. Financ. Econ. 110(3), 503–519 (2013)


  15. Li, Y., Ng, D.T., Swaminathan, B.: Predicting market returns using aggregate implied cost of capital. J. Financ. Econ. 110(2), 419–436 (2013)


  16. Da, Z., Jagannathan, R., Shen, J.: Growth expectations, dividend yields, and future stock returns. Technical report, National Bureau of Economic Research (2014)

  17. Martin, I.: The Lucas orchard. Econometrica 81(1), 55–111 (2013)


  18. Goyal, A., Welch, I.: Predicting the equity premium with dividend ratios. Manage. Sci. 49(5), 639–654 (2003)


  19. Lettau, M., Ludvigson, S.C.: Expected returns and expected dividend growth. J. Financ. Econ. 76(3), 583–626 (2005)


  20. Welch, I., Goyal, A.: A comprehensive look at the empirical performance of equity premium prediction. Rev. Financ. Stud. 21(4), 1455–1508 (2007)


  21. Cochrane, J.H.: Explaining the variance of price–dividend ratios. Rev. Financ. Stud. 5(2), 243–280 (1992)


  22. Cochrane, J.H.: The dog that did not bark: a defense of return predictability. Rev. Financ. Stud. 21(4), 1533–1575 (2007)


  23. Campbell, J.Y., Thompson, S.B.: Predicting excess stock returns out of sample: Can anything beat the historical average? Rev. Financ. Stud. 21(4), 1509–1531 (2007)


  24. Lettau, M., Van Nieuwerburgh, S.: Reconciling the return predictability evidence. Rev. Financ. Stud. 21(4), 1607–1652 (2007)


  25. Modigliani, F.: The monetarist controversy; or, should we forsake stabilization policies? Econ. Rev. (Spr suppl), 27–46 (1977)

  26. Lucas, R.E., Sargent, T.J.: Rational Expectations and Econometric Practice, vol. 2. University of Minnesota Press, Minneapolis (1981)


  27. Hansen, L.P.: Beliefs, doubts and learning: valuing economic risk. Technical report, National Bureau of Economic Research (2007)

  28. Johannes, M., Lochstoer, L.A., Mou, Y.: Learning about consumption dynamics. J. Finance 71(2), 551–600 (2016)


  29. Croce, M.M., Lettau, M., Ludvigson, S.C.: Investor information, long-run risk, and the term structure of equity. Rev. Financ. Stud. 28(3), 706–742 (2014)


  30. Jagannathan, R., Liu, B.: Dividend dynamics, learning, and expected stock index returns. Technical report, National Bureau of Economic Research (2015)

  31. Collin-Dufresne, P., Johannes, M., Lochstoer, L.A.: Parameter learning in general equilibrium: the asset pricing implications. Am. Econ. Rev. 106(3), 664–698 (2016)


  32. Kreps, D.M.: Anticipated utility and dynamic choice. Econom. Soc. Monogr. 29, 242–274 (1998)


  33. Piazzesi, M., Schneider, M.: Interest rate risk in credit markets. Am. Econ. Rev. 100(2), 579–584 (2010)


  34. Cogley, T., Sargent, T.J.: Diverse beliefs, survival and the market price of risk. Econ. J. 119(536), 354–376 (2009)


  35. Veronesi, P.: How does information quality affect stock returns? J. Finance 55(2), 807–837 (2000)


  36. Brevik, F., d’Addona, S.: Information quality and stock returns revisited. J. Financ. Quant. Anal. 45(6), 1419–1446 (2010)


  37. Epstein, L.G., Zin, S.E.: Substitution, risk aversion and the temporal behavior of consumption and asset returns: a theoretical framework. Econometrica 57(4), 937–969 (1989)


  38. Cox, J.C., Ross, S.A., Rubinstein, M.: Option pricing: a simplified approach. J. Financ. Econ. 7(3), 229–263 (1979)


  39. Lucas Jr., R.E.: Asset prices in an exchange economy. Econom. J. Econom. Soc. 46, 1429–1445 (1978)


  40. Liptser, R.S., Shiryaev, A.N.: Statistics of Random Processes: I. General Theory, vol. 5. Springer, Berlin (2013)


  41. Pástor, Ľ., Stambaugh, R.F.: Are stocks really less volatile in the long run? J. Finance 67(2), 431–478 (2012)


  42. Georgii, H.-O.: Stochastics: Introduction to Probability and Statistics. Walter de Gruyter, Berlin (2013)


  43. Robert, C.: The Bayesian Choice: from Decision-Theoretic Foundations to Computational Implementation. Springer, New York (2007)


  44. Beeler, J., Campbell, J.Y., et al.: The long-run risks model and aggregate asset prices: an empirical assessment. Crit. Finance Rev. 1(1), 141–182 (2012)


  45. Duffie, D., Epstein, L.G.: Asset pricing with stochastic differential utility. Rev. Financ. Stud. 5(3), 411–436 (1992)


  46. Pennesi, D.: Asset prices in an ambiguous economy. Math. Financ. Econ. 12(1), 55–73 (2018)



Author information


Corresponding author

Correspondence to Maxim Bichuch.


This work was partially supported by the NSF (DMS-1736414) and by the Acheson J. Duncan Fund for the Advancement of Research in Statistics. It was also partially supported by the ERC (279582), the NSF (DMS-1412529), and the SFI (16/SPP/3347 and 16/IA/4443).

Appendices

Appendix: Transitory versus permanent learning

This appendix demonstrates the difference between transitory and permanent learning by examining in detail the asset pricing implications of a model of transitory learning, and by contrasting them with those obtained in the main text for permanent learning.

Consider the case of an unobservable dividend drift that follows an Ornstein–Uhlenbeck process with known coefficients. As before, the dividends themselves are still observable and can be used to estimate the current drift. Let the dividends again grow geometrically, i.e.,

$$\begin{aligned} dD_t = \mu _t D_t dt + \sigma _D D_t dW_t . \end{aligned}$$

However, now let the growth rate \(\mu _t\) follow a hidden Ornstein–Uhlenbeck process

$$\begin{aligned} d\mu _t = \kappa ({\bar{\mu }}-\mu _t)dt +\sigma _\mu dB_t , \end{aligned}$$

which means that \(\mu _t\) fluctuates around its long-term mean \({\bar{\mu }}\). Denoting by

$$\begin{aligned} R_t&= \int _0^t \frac{dD_s}{D_s} - \int _0^t\int _0^u {\bar{\mu }}\kappa {\text {e}}^{-\kappa (u-s)} dsdu =\int _0^t \frac{dD_s}{D_s} + {\bar{\mu }} t + \frac{{\bar{\mu }}}{\kappa }\left( {\text {e}}^{-\kappa t} -1 \right) \\ \theta _t&= \mu _t - {\bar{\mu }}\kappa \int _0^t{\text {e}}^{-\kappa (t-s)} ds , \end{aligned}$$

it follows that

$$\begin{aligned} dR_t&= \theta _t dt + \sigma _D dW_t,\\ d\theta _t&= -\kappa \theta _t dt + \sigma _\mu dB_t. \end{aligned}$$

Moreover, \({\mathcal F}^D_t = \sigma \left( (D_u)_{0\le u\le t}\right) = \sigma \left( (R_u)_{0\le u\le t}\right) = {\mathcal F}^R_t.\) The Kalman–Bucy filter \({\hat{\theta }}_t = \mathbb {E}\left[ \theta _t\vert {\mathcal F}^R_t\right] \) and its variance \(\gamma (t) = \mathbb {E}[({\hat{\theta }}_t-\theta _t)^2]\) satisfy

$$\begin{aligned} d\gamma (t)&= \left( -2\kappa \gamma (t) -\frac{(\gamma (t))^2}{\sigma _D^2} + \sigma _\mu ^2\right) dt ,\nonumber \\ d{\hat{\theta }}_t&= -\kappa {\hat{\theta }}_t dt + \frac{\gamma (t) }{\sigma _D^2}\left( dR_t - {\hat{\theta }}_t dt\right) . \end{aligned}$$
(13)

Let \(\gamma _{\pm } = -\kappa \sigma _D^2 \pm \sigma _D\sqrt{\kappa ^2 \sigma _D^2 + \sigma _\mu ^2}\) be the two roots of the quadratic on the right-hand side of (13). Assuming again that \(\theta _0 \sim N(\mu _0, \sigma _0^2)\), and setting \({\hat{\theta }}_0 = \mu _0, \gamma (0) = \sigma _0^2,\) the solution to the Kalman–Bucy filter is

$$\begin{aligned} \gamma (t)&= \frac{\gamma _{-} -\gamma _{+}\frac{\sigma _0^2-\gamma _{-}}{\sigma _0^2-\gamma _{+}} {\text {e}}^{\frac{\gamma _{+} - \gamma _{-}}{\sigma _D^2}t } }{1 -\frac{\sigma _0^2-\gamma _{-}}{\sigma _0^2-\gamma _{+}} {\text {e}}^{\frac{\gamma _{+} - \gamma _{-}}{\sigma _D^2}t } } ,\\ {\hat{\theta }}_t&= {\text {e}}^{-\int _0^t (\kappa + \frac{1}{\sigma _D^2}\gamma (s)) ds } {\hat{\theta }}_0 + \frac{1}{\sigma _D^2}\int _0^t {\text {e}}^{-\int _s^t (\kappa + \frac{1}{\sigma _D^2}\gamma (u) )du } \gamma (s) dR_s. \end{aligned}$$

The Brownian motion under \({\mathcal F}_t^R\) is \(\widehat{W^D}\), defined as

$$\begin{aligned} d\widehat{W^D}_t = \frac{dR_t - {\hat{\theta }}_t dt}{\sigma _D} = \frac{\theta _t - {\hat{\theta }}_t }{\sigma _D}dt + dW_t, \end{aligned}$$

so that

$$\begin{aligned} d{\hat{\theta }}_t = -\kappa {\hat{\theta }}_t dt + \frac{\gamma (t) }{\sigma _D}d\widehat{W^D}_t. \end{aligned}$$

Thus

$$\begin{aligned} \frac{dD_t}{D_t}&= dR_t + \int _0^t {\bar{\mu }}\kappa {\text {e}}^{-\kappa (t-s)} ds dt= dR_t + {\bar{\mu }}\left( 1-{\text {e}}^{-\kappa t} \right) dt\nonumber \\&=\left( {\hat{\theta }}_t + {\bar{\mu }}\left( 1-{\text {e}}^{-\kappa t} \right) \right) dt +\sigma _Dd\widehat{W^D}_t. \end{aligned}$$
(14)
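As a numerical illustration of the filtering equations, the sketch below discretizes the hidden drift \(\theta \), the observed return increments \(dR\), and the filter (13) with an Euler scheme; the parameter values and the prior \(N(0, 0.04)\) are hypothetical, chosen only for illustration:

```python
import math
import random

# Hypothetical parameter values, chosen only for illustration
kappa, sig_d, sig_mu = 1.0, 0.2, 0.3
g_plus = -kappa * sig_d**2 + sig_d * math.sqrt(kappa**2 * sig_d**2 + sig_mu**2)

def run_filter(T=50.0, n=50_000, seed=1):
    """Euler scheme for the hidden OU drift theta, the observable return
    increments dR = theta dt + sigma_D dW, and the filter equations (13)."""
    random.seed(seed)
    dt = T / n
    theta = 0.0                   # true, unobserved drift deviation theta_t
    theta_hat, gam = 0.0, 0.04    # hypothetical prior: mu_0 = 0, sigma_0^2 = 0.04
    for _ in range(n):
        dR = theta * dt + sig_d * random.gauss(0.0, math.sqrt(dt))
        # filter mean and variance updates, following (13)
        theta_hat += -kappa * theta_hat * dt + gam / sig_d**2 * (dR - theta_hat * dt)
        gam += (-2.0 * kappa * gam - gam**2 / sig_d**2 + sig_mu**2) * dt
        # the true drift evolves with its own, independent Brownian motion B
        theta += -kappa * theta * dt + sig_mu * random.gauss(0.0, math.sqrt(dt))
    return theta, theta_hat, gam

theta, theta_hat, gam = run_filter()
```

The filter variance settles at the positive root \(\gamma _{+}\) of the Riccati quadratic, consistent with the long-run analysis below.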

Recall the price process in (3), where the state-price density \(M_t\) is proportional to the marginal utility of consumption \({\text {e}}^{-\beta t} D_t^{-\gamma } \). Thus

$$\begin{aligned} D_t M_t = D_s M_s {\text {e}}^{-\beta (t-s) +(1-\gamma )\left( \int _s^t {\hat{\theta }}_u du + {\bar{\mu }} (t-s) + \frac{{\bar{\mu }}}{\kappa }\left( {\text {e}}^{-\kappa (t-s)} -1 \right) - \frac{\sigma _D^2}{2}(t-s) + \sigma _D \left( \widehat{W^D}_t - \widehat{W^D}_s\right) \right) } . \end{aligned}$$

Note that, for \(s\le t\), it holds that

$$\begin{aligned} \theta _t - \theta _s = -\kappa \int _s^t {\hat{\theta }}_udu + \int _s^t\frac{\gamma (u)}{\sigma _D} d\widehat{W^D}_u , \end{aligned}$$

which, combined with \({\hat{\theta }}_t ={\hat{\theta }}_s {\text {e}}^{-\kappa (t-s)} + \int _s^t\frac{\gamma (u)}{\sigma _D} {\text {e}}^{-\kappa (t-u) } d\widehat{W^D}_u\), yields

$$\begin{aligned} \int _s^t {\hat{\theta }}_udu&= -\frac{\theta _t - \theta _s}{\kappa } + \frac{1}{\kappa \sigma _D} \int _s^t \gamma (u) d\widehat{W^D}_u\\&=\frac{1-{\text {e}}^{-\kappa (t-s) } }{\kappa } {\hat{\theta }}_s - \frac{1}{\kappa \sigma _D} \int _s^t\gamma (u) ({\text {e}}^{-\kappa (t-u) } -1)d\widehat{W^D}_u. \end{aligned}$$

Hence,

$$\begin{aligned} \int _s^t {\hat{\theta }}_udu + \sigma _D \left( \widehat{W^D}_t - \widehat{W^D}_s\right)&=\frac{1-{\text {e}}^{-\kappa (t-s) }}{\kappa } {\hat{\theta }}_s -\frac{1}{\kappa \sigma _D} \int _s^t\left( \gamma (u) ({\text {e}}^{-\kappa (t-u) } -1)- \kappa \sigma _D^2\right) d\widehat{W^D}_u.~~~~~ \end{aligned}$$
(15)

Therefore,

$$\begin{aligned} \int _s^t {\hat{\theta }}_udu + \sigma _D \left( \widehat{W^D}_t - \widehat{W^D}_s\right) \sim N\left( \frac{1-{\text {e}}^{-\kappa (t-s) } }{\kappa } {\hat{\theta }}_s, \frac{1}{\left( \kappa \sigma _D\right) ^2} \int _s^t\left( \gamma (u) (1-{\text {e}}^{-\kappa (t-u) } )+ \kappa \sigma _D^2\right) ^2du \right) . \end{aligned}$$

As \(\gamma (u)\) converges to \(\gamma _{+}\) in the long run, it follows that, for large \(s\) and \(t\),

$$\begin{aligned} \int _s^t {\hat{\theta }}_udu + \sigma _D (\widehat{W^D}_t - \widehat{W^D}_s)\sim N\Bigg (\frac{1-{\text {e}}^{-\kappa (t-s) }}{\kappa } {\hat{\theta }}_s, H(t-s) \Bigg ), \end{aligned}$$

where

$$\begin{aligned} H(\tau ) =\frac{1}{2\kappa \left( \kappa \sigma _D\right) ^2} \left( \gamma _{+}^2 \left( 4 {\text {e}}^{-\kappa \tau } -{\text {e}}^{-2 \kappa \tau }+2 \kappa \tau -3\right) +4 \gamma _{+} \kappa \sigma _D^2 \left( \kappa \tau -1+{\text {e}}^{-\kappa \tau } \right) +2 \kappa ^3 \sigma _D^4 \tau \right) . \end{aligned}$$

Hence,

$$\begin{aligned} \mathbb {E}\left[ M_t D_t \vert \mathcal F_s\right]&= D_sM_s {\text {exp}} \left\{ \, -\left( \beta - (1-\gamma )\left( {\bar{\mu }} - \frac{\sigma _D^2}{2}\right) \right) (t-s) \,\right\} \\&\quad \times {\text {exp}} \left\{ \, (1-\gamma ) \left( \frac{{\bar{\mu }}}{\kappa }\left( {\text {e}}^{-\kappa (t-s)} -1 \right) + \frac{1-{\text {e}}^{-\kappa (t-s) }}{\kappa } {\hat{\theta }}_s\right) + (1-\gamma )^2\frac{H(t-s)}{2} \,\right\} . \end{aligned}$$

This equality in turn implies that

$$\begin{aligned} S_t&= \frac{1}{M_t}\mathbb {E}\left[ \int _t^\infty M_s D_s ds \vert \mathcal F_t\right] = \frac{1}{M_t}\int _t^\infty \mathbb {E}\left[ M_s D_s \vert \mathcal F_t\right] ds\\&= D_t \int _t^\infty {\text {e}}^{ -\left( \beta - (1-\gamma )\left( {\bar{\mu }} - \frac{\sigma _D^2}{2}\right) \right) (s-t) +(1-\gamma ) \left( \frac{{\bar{\mu }}}{\kappa }\left( {\text {e}}^{-\kappa (s-t)} -1 \right) + \frac{1-{\text {e}}^{-\kappa (s-t) }}{\kappa } {\hat{\theta }}_t\right) +(1-\gamma )^2\frac{H(s-t)}{2} } ds<\infty . \end{aligned}$$

As \(\kappa >0\), \(H(\tau )\) grows at most linearly in \(\tau \); hence, for \(\beta >0\) large enough, the above expression is finite. (There is no closed-form solution, even in the stationary case \(\gamma (u)= \gamma _{+}\) for all \(u>0\).) Thus, in contrast to the setting of permanent learning described in the main text, this model of transitory learning gives rise to finite prices, at least for sufficiently large discount rates.
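Both claims can be checked numerically. The sketch below, under hypothetical parameters and the stationary approximation \(\gamma (u)\equiv \gamma _{+}\) with \({\hat{\theta }}_t=0\), implements a closed form for \(H(\tau )\), cross-checks it against direct integration of the variance, and truncates the price-dividend integral at two horizons to confirm convergence:

```python
import math

# Hypothetical parameters; beta is chosen large enough for convergence
kappa, sig_d, sig_mu = 1.0, 0.2, 0.3
beta, gamma_ra, mu_bar = 0.5, 3.0, 0.02
g_plus = -kappa * sig_d**2 + sig_d * math.sqrt(kappa**2 * sig_d**2 + sig_mu**2)

def H(tau):
    """Closed form of the variance term with gamma(u) held at gamma_+."""
    return (g_plus**2 * (4*math.exp(-kappa*tau) - math.exp(-2*kappa*tau) + 2*kappa*tau - 3)
            + 4*g_plus*kappa*sig_d**2 * (kappa*tau - 1 + math.exp(-kappa*tau))
            + 2*kappa**3 * sig_d**4 * tau) / (2*kappa*(kappa*sig_d)**2)

def H_numeric(tau, n=20_000):
    """Trapezoidal integration of the variance integrand, as a cross-check."""
    f = lambda v: (g_plus*(1 - math.exp(-kappa*v)) + kappa*sig_d**2)**2
    h = tau / n
    return (0.5*(f(0.0) + f(tau)) + sum(f(i*h) for i in range(1, n))) * h / (kappa*sig_d)**2

def pd_ratio(T, theta_hat=0.0):
    """Price-dividend integral S_t/D_t for power utility, truncated at horizon T."""
    a = 1.0 - gamma_ra
    def f(tau):
        expo = (-(beta - a*(mu_bar - sig_d**2/2))*tau
                + a*(mu_bar/kappa*(math.exp(-kappa*tau) - 1)
                     + (1 - math.exp(-kappa*tau))/kappa*theta_hat)
                + a**2 * H(tau) / 2)
        return math.exp(expo)
    n = int(T * 1000)   # fixed step h = 0.001, so truncations share the same grid
    h = T / n
    return (0.5*(f(0.0) + f(T)) + sum(f(i*h) for i in range(1, n))) * h
```

Extending the truncation horizon from 100 to 200 leaves the integral essentially unchanged, in line with the finiteness argument above.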

The same argument carries over to Epstein–Zin preferences. Denote the aggregator by

$$\begin{aligned} {\bar{f}}(c,v) = \frac{\beta }{\rho } \frac{c^\rho -({\eta }v)^\frac{\rho }{{\eta }}}{({\eta }v)^{\frac{\rho }{{\eta }}-1}}, \end{aligned}$$
(16)

so that the indirect utility \(V_t\) satisfies

$$\begin{aligned} dV_t = -{\bar{f}}(C_t,V_t) dt + {\bar{\sigma }}_v(t) d\widehat{W^D}_t. \end{aligned}$$

In order to find \({\bar{\sigma }}_v(t)\), recall that

$$\begin{aligned} dV_t = \left( - f(C_t,V_t) -\frac{1}{2}A(v) \sigma _v^2(t)\right) dt + \sigma _v(t) d\widehat{W^D}_t, \end{aligned}$$

where

$$\begin{aligned} f(c,v) = \frac{\beta }{\rho } \frac{c^\rho -v^{\rho } }{v^{ \rho -1}} ,\quad A(v) = \frac{{\eta }-1}{v}. \end{aligned}$$

As the only source of randomness in the utility comes from consumption, and both the consumption and utility processes are linear in consumption, \(\sigma _v(t) = \sigma ^D V_t.\) The transformation to an equivalent normalized utility process is \(\bar{U} = U \circ \phi \), where \(\phi (v) =\int {\text {e}}^{\int A(x)dx} dv,\) which in this case is \(\phi (v) = \frac{v^{\eta }}{{\eta }}.\) From Itô’s formula, it follows that

$$\begin{aligned} {\bar{f}}(c, \phi (v) )&= f(c,v) \phi '(v),\\ {\bar{\sigma }}_{\phi (v)} (t)&= \sigma _v(t) \phi '(v),\\ {\bar{A}}(\phi (v))&= A(v) \phi '(v) - \phi ''(v). \end{aligned}$$

It then follows that \({\bar{f}}\) indeed equals (16), \(\bar{A} =0,\) and

$$\begin{aligned} {\bar{\sigma }}_{v} (t) = {\eta }\sigma ^D V_t. \end{aligned}$$
(17)

As dividends coincide with consumption, i.e. \(C_t=D_t\), the state-price deflator \(M_t\) is [45]

$$\begin{aligned} M_t =\,&\exp { \left\{ \int _0^t {\bar{f}}_v(D_s, V_s)ds \right\} } {\bar{f}}_c(D_t, V_t)\\ =\,&\beta \exp { \left\{ \frac{\beta }{\rho {\eta }^{\frac{\rho }{{\eta }}-1} }\left( 1-\frac{\rho }{{\eta }}\right) \int _0^t \frac{D_s^{\rho }}{V_s^{\frac{\rho }{{\eta }}}}ds - \beta \frac{{\eta }}{\rho }t \right\} } \frac{D_t^{\rho -1}}{ ({\eta }V_t)^{\frac{\rho }{{\eta }}-1}}. \end{aligned}$$

Using the fact that

$$\begin{aligned} d\left( V_t^{1-\frac{\rho }{\gamma }} \right)&= - \left( 1-\frac{\rho }{\gamma }\right) V_t ^{1-\frac{\rho }{\gamma }} \left( \frac{ {\bar{f}}(D_t,V_t) dt - {\bar{\sigma }}_v(t) d\widehat{W^D}_t }{V_t} +\frac{\rho }{2\gamma }\frac{ {\bar{\sigma }}^2_v(t)}{V_t^2} dt \right) , \end{aligned}$$

it follows that

$$\begin{aligned} d\left( \frac{D_t^{\rho -1}}{ (\gamma V_t)^{\frac{\rho }{\gamma }-1}} \right)&= \frac{D_t^{\rho -1}}{ (\gamma V_t)^{\frac{\rho }{\gamma }-1}} (\rho -1) \left( \left( {\hat{\theta }}_t - {\bar{\mu }}\left( 1-{\text {e}}^{-\kappa t} \right) \right) dt + \sigma ^D d\widehat{W^D}_t + \frac{1}{2}(\rho -2) \left( \sigma ^D\right) ^2 dt \right) \\&\quad -\frac{D_t^{\rho -1}}{ (\gamma V_t)^{\frac{\rho }{\gamma }-1}} \left( 1-\frac{\rho }{\gamma }\right) \left( \frac{ {\bar{f}}(D_t,V_t) dt - {\bar{\sigma }}_v(t) d\widehat{W^D}_t }{V_t} +\frac{\rho }{2\gamma }\frac{ {\bar{\sigma }}^2_v(t)}{V_t^2} dt \right) \\&\quad +(\rho -1)\frac{D_t^{\rho -1}}{ (\gamma V_t)^{\frac{\rho }{\gamma }-1}} \left( 1-\frac{\rho }{\gamma }\right) \frac{\sigma ^D {\bar{\sigma }}_v(t)}{V_t} dt. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{dM_t}{M_t}&= (\rho -1) \left( \left( {\hat{\theta }}_t - {\bar{\mu }}\left( 1-{\text {e}}^{-\kappa t} \right) \right) dt + \sigma ^D d\widehat{W^D}_t + \frac{1}{2}(\rho -2) (\sigma ^D)^2 dt \right) \\&\quad -\left( 1-\frac{\rho }{\gamma }\right) \left( \frac{ {\bar{f}}(D_t,V_t) dt - {\bar{\sigma }}_v(t) d\widehat{W^D}_t }{V_t} +\frac{\rho }{2\gamma }\frac{ {\bar{\sigma }}^2_v(t)}{V_t^2} dt \right) \\&\quad +(\rho -1) \left( 1-\frac{\rho }{\gamma }\right) \frac{\sigma ^D {\bar{\sigma }}_v(t)}{V_t} dt + \left( \frac{\beta }{\rho \gamma ^{\frac{\rho }{\gamma }-1} }\left( 1-\frac{\rho }{\gamma }\right) \frac{D_t^{\rho }}{V_t^{\frac{\rho }{\gamma }}} - \beta \frac{\gamma }{\rho } \right) dt. \end{aligned}$$

Substituting (16) and (17) yields

$$\begin{aligned} \frac{dM_t}{M_t}&=-r_tdt + (\rho -1) \sigma ^D d\widehat{W^D}_t + \left( 1-\frac{\rho }{\gamma }\right) \frac{ {\bar{\sigma }}_v(t) }{V_t} d\widehat{W^D}_t\\&=-r_tdt + (\gamma -1) \sigma ^D d\widehat{W^D}_t, \end{aligned}$$

where

$$\begin{aligned} r_t&= \beta + (1-\rho ) \left( {\hat{\theta }}_t - {\bar{\mu }}\left( 1-{\text {e}}^{-\kappa t} \right) \right) + (2-\rho ) (\gamma -1)\frac{(\sigma ^D)^2}{2}. \end{aligned}$$

Therefore

$$\begin{aligned} M_t&= M_s \exp {\left\{ -\int _s^t r_u du - \frac{(\gamma -1)^2}{2}(\sigma ^D)^2(t-s) + (\gamma -1)\sigma ^D \left( \widehat{W^D}_t - \widehat{W^D}_s\right) \right\} } . \end{aligned}$$
(18)

From (14),

$$\begin{aligned} D_t&= D_s \exp {\left\{ \int _s^t {\hat{\theta }}_u du + {\bar{\mu }} (t-s) + \frac{{\bar{\mu }}}{\kappa }\left( {\text {e}}^{-\kappa (t-s)} -1 \right) - \frac{(\sigma _D)^2}{2}(t-s) + \sigma _D \left( \widehat{W^D}_t - \widehat{W^D}_s\right) \right\} }. \end{aligned}$$

Together with (18) it now follows that

$$\begin{aligned} D_t M_t&= D_s M_s {\text {exp}} \left\{ \, -\beta (t-s) + \rho \left( \int _s^t {\hat{\theta }}_u du + {\bar{\mu }} (t-s) + \frac{{\bar{\mu }}}{\kappa }\left( {\text {e}}^{-\kappa (t-s)} -1 \right) \right) \,\right\} \\&\quad \times {\text {exp}} \left\{ \, \left( \rho (\gamma -1) -\gamma ^2\right) \frac{(\sigma ^D)^2}{2}(t-s) + \gamma \sigma ^D \left( \widehat{W^D}_t - \widehat{W^D}_s\right) \,\right\} . \end{aligned}$$

Thus, similarly to (15),

$$\begin{aligned} \int _s^t {\hat{\theta }}_udu + \frac{\gamma }{\rho }\sigma _D \left( \widehat{W^D}_t - \widehat{W^D}_s\right) \sim N\left( \frac{1-{\text {e}}^{-\kappa (t-s) } }{\kappa } {\hat{\theta }}_s, H_1(s,t) \right) , \end{aligned}$$

where

$$\begin{aligned} H_1(s,t) = \frac{1}{\left( \kappa \sigma _D\right) ^2} \int _s^t\left( \gamma (u) (1-{\text {e}}^{-\kappa (t-u) } )+ \kappa \frac{\gamma }{\rho }\sigma _D^2\right) ^2du. \end{aligned}$$

Thus

$$\begin{aligned} \mathbb {E}\left[ M_t D_t \vert \mathcal F_s\right]&= D_sM_s {\text {exp}} \left\{ \, -\beta (t-s) + \rho \left( {\bar{\mu }} (t-s) + \frac{{\bar{\mu }}}{\kappa }\left( {\text {e}}^{-\kappa (t-s)} -1 \right) \right) \,\right\} \\&\quad \times {\text {exp}} \left\{ \, \rho \frac{1-{\text {e}}^{-\kappa (t-s) }}{\kappa } {\hat{\theta }}_s+\left( \rho (\gamma -1) -\gamma ^2\right) \frac{(\sigma ^D)^2}{2}(t-s)+ \rho ^2\frac{H_1(s,t)}{2} \,\right\} . \end{aligned}$$

We now calculate

$$\begin{aligned} S_t&= \frac{1}{M_t}\mathbb {E}\left[ \int _t^\infty M_s D_s ds \vert \mathcal F_t\right] = \frac{1}{M_t}\int _t^\infty \mathbb {E}\left[ M_s D_s \vert \mathcal F_t\right] ds\\&= D_t \int _t^\infty {\text {e}}^{-\beta (s-t) + \rho \left( {\bar{\mu }} (s-t) + \frac{{\bar{\mu }}}{\kappa }\left( {\text {e}}^{-\kappa (s-t)} -1 \right) \right) + \rho \frac{1-{\text {e}}^{-\kappa (s-t) }}{\kappa } {\hat{\theta }}_t+\left( \rho (\gamma -1) -\gamma ^2\right) \frac{(\sigma ^D)^2}{2}(s-t)+ \rho ^2\frac{H_1(t,s)}{2} } ds. \end{aligned}$$

Again, for the same reasons as in the additive utility case above, this expression is finite for a discount rate \(\beta \) large enough, which confirms the claim that prices remain finite even for Epstein–Zin preferences.

B Proofs

Proof of Lemma 3.1

For \(n=1\), it follows from the definition of \(X_1\) that its distribution is \(X_1 \sim BetaBin (1,1,1).\) For \(n>1\), we calculate the posterior distribution. Recall that

$$\begin{aligned} f_P(p \vert X_1,\ldots , X_{n-1}) \propto L(p) f_{p_0}(p)\propto p^{\sum _{i=1}^{n-1} X_i} (1-p)^{n-1-\sum _{i=1}^{n-1} X_i}, \end{aligned}$$
(19)

where \(f_P\) is the pdf of P and L is the likelihood. Hence, \(P\vert X_1,\ldots , X_{n-1} \sim Beta(\sum _{i=1}^{n-1} X_i+1, n-1-\sum _{i=1}^{n-1} X_i+1)\), and thus \(X_n\vert X_1, \ldots , X_{n-1} \sim BetaBin(1,\sum _{i=1}^{n-1} X_i+1, n-1-\sum _{i=1}^{n-1} X_i+1).\) Moreover, given the observations \(X_1, \ldots ,X_{n-1}\) and using (19), for \(n\ge 1\) it holds that

$$\begin{aligned} {\hat{p}}_{n-1}&= \mathbb {P}(X_n=1\vert X_1, \ldots , X_{n-1}) = \int _0^1 \mathbb {P}(X_{n}=1 \vert p) f_P(p \vert X_1,\ldots , X_{n-1})dp \\&= \int _0^1 p \frac{p^{\sum _{i=1}^{n-1} X_i} (1-p)^{n-1-\sum _{i=1}^{n-1} X_i}}{Beta(\sum _{i=1}^{n-1} X_i+1, (n-1)-\sum _{i=1}^{n-1} X_i+1) } dp = \frac{\sum _{i=1}^{n-1} X_i+1}{n+1}. \end{aligned}$$

\(\square \)
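The closed-form posterior probability \({\hat{p}}_{n-1} = (\sum _{i=1}^{n-1}X_i + 1)/(n+1)\) (Laplace's rule of succession) is easy to verify numerically; a minimal sketch, integrating p against the Beta posterior of (19) by the midpoint rule:

```python
import math

def p_hat(x):
    """P(X_n = 1 | X_1, ..., X_{n-1}) for observations x = (X_1, ..., X_{n-1}),
    computed by midpoint-rule integration of p against the Beta posterior."""
    m, k = len(x), sum(x)
    # normalizing constant Beta(k+1, m-k+1), via the Gamma function
    B = math.gamma(k + 1) * math.gamma(m - k + 1) / math.gamma(m + 2)
    N = 100_000
    total = 0.0
    for i in range(N):
        p = (i + 0.5) / N
        total += p * p**k * (1 - p)**(m - k)
    return total / (N * B)
```

For instance, after three successes in four trials the numerical value agrees with \((3+1)/(4+2) = 2/3\), and with no observations it returns the prior mean 1/2.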

We formulate this section for the general case of Epstein–Zin utility; the case of power utility corresponds to \(\theta =1\). First, we show that the Epstein–Zin utility is well defined, i.e., that the infinite-horizon limit in (6) exists. See also (Pennesi [46], Theorem 1) for a related result.

Lemma B.1

Fix an admissible consumption \(C\in \mathcal {L}_\delta \). Then

$$\begin{aligned} U_t^N(C) \le (1-\delta )^{\frac{1}{1-\rho }} \sum _{n=t}^{N-1} \delta ^{n-t} \mathbb {E}_t[C_n] . \end{aligned}$$
(20)

It follows that the limit in (6) is well defined (and hence so is \(U_t(C)\)). Moreover, \(U_t(C)\) is the unique solution to the recursive equation

$$\begin{aligned} U_t(C)= \left\{ (1-\delta ) C_t^{\frac{1-\gamma }{\theta }} + \delta \left( \mathbb {E}_t[ (U_{t+1})^{1-\gamma }]\right) ^{\frac{1}{\theta }} \right\} ^{\frac{\theta }{1-\gamma }} \end{aligned}$$
(21)

with the asymptotic condition

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }U_t\left( C^{0,n}\right) = U_t(C), \end{aligned}$$
(22)

where for any consumption streams \({\tilde{C}}, {\hat{C}}\), the modified process \({\hat{C}}^{{\tilde{C}},n}\) is defined as

$$\begin{aligned} {\hat{C}}^{{\tilde{C}},n}_s =\left\{ \begin{array} {ll} {\hat{C}}_s &{}:s\le n,\\ {\tilde{C}}_s &{} :s> n. \end{array}\right. \end{aligned}$$
(23)

Proof of Lemma B.1

Fix any \(N\ge t.\) Then \(U_N^N(C) =0.\) Similarly, if \(N\ge 1\), then \(U_{N-1}^N(C) = (1-\delta )^{\frac{1}{1-\rho }} \mathbb {E}_{N-1}[C_{N-1}].\)

By (backward) induction, assume that (20) is true for \(t=k+1\), and show it for \(t=k\). The induction assumption and Jensen’s inequality imply that

$$\begin{aligned} \mathbb {E}_{k} \left[ \left( U_{k+1}^N(C)\right) ^{1-\gamma } \right] ^\frac{1}{1-\gamma } \le (1-\delta )^{\frac{1}{1-\rho }} \sum _{n=k+1}^{N-1} \delta ^{ n-(k+1)} \mathbb {E}_k[C_n] . \end{aligned}$$

Then

$$\begin{aligned} U_k^N(C)&=\left\{ (1-\delta ) C_k^{1-\rho } + \delta \left( \mathbb {E}_k[ (U^N_{k+1})^{1-\gamma }]\right) ^{\frac{1-\rho }{1-\gamma }} \right\} ^{\frac{1}{1-\rho }} \\&\le \left( (1-\delta ) C_k^{1-\rho } +\delta (1-\delta ) \left( \sum _{n=k+1}^{N-1} \delta ^{ n-(k+1)} \mathbb {E}_k[C_n] \right) ^{1-\rho } \right) ^{\frac{1}{1-\rho }} \le (1-\delta )^{\frac{1}{1-\rho }}\sum _{n=k}^{N-1} \delta ^{n-k} \mathbb {E}_k[C_n], \end{aligned}$$

where the first inequality follows from the induction assumption, and the second from Jensen’s inequality, completing the induction. As \(\{U^N_t\}\) is, for fixed t, an increasing sequence in \(N\ge t\) that is bounded above by (20), the limit \(U_t(C)\) in (6) is well defined.

Additionally, (21) now follows by continuity, after taking the limit \(N\rightarrow \infty \) in \( U^N_t(C)= \left\{ (1-\delta ) C_t^{\frac{1-\gamma }{\theta }} + \delta \left( \mathbb {E}_t[ (U^N_{t+1})^{1-\gamma }]\right) ^{\frac{1}{\theta }} \right\} ^{\frac{\theta }{1-\gamma }}\). The uniqueness of the solution follows from the uniqueness of \(U_t^N\) and the fact that \(U_t(C^{0,N}) = U_t^N(C).\)\(\square \)
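The construction in Lemma B.1 can be illustrated with a short backward recursion. For a deterministic stream \(C_n = g^n\) the conditional expectation in (21) is trivial and, since \((1-\gamma )/\theta = 1-\rho \), the recursion closes in \(x_t = U_t^{1-\rho }\); a sketch with hypothetical parameters:

```python
# Backward recursion (21) for the deterministic stream C_n = g**n; with no
# risk, the Epstein-Zin recursion closes in x_t = U_t**(1 - rho):
#   x_t = (1 - delta) * C_t**(1 - rho) + delta * x_{t+1}.
delta, rho, g = 0.96, 0.5, 1.02   # hypothetical; requires delta * g**(1 - rho) < 1

def U0(N):
    """U_0(C^{0,N}): utility of the stream truncated at horizon N (U_N = 0)."""
    x = 0.0
    for n in range(N - 1, -1, -1):
        x = (1 - delta) * (g**n)**(1 - rho) + delta * x
    return x**(1 / (1 - rho))

# infinite-horizon limit, summing the geometric series in closed form
U0_limit = ((1 - delta) / (1 - delta * g**(1 - rho)))**(1 / (1 - rho))
```

The truncated utilities \(U_0(C^{0,N})\) increase in N and converge to the closed-form limit, matching the monotone-convergence argument in the proof.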

Next, set

$$\begin{aligned} m_{t+1,t} = \delta \left( \frac{D_{t+1}}{D_t}\right) ^{\frac{1-\gamma }{\theta } -1} \left( \frac{U_{t+1}(D)}{\left( \mathbb {E}_t\left[ U_{t+1}^{1-\gamma }(D)\right] \right) ^\frac{1}{1-\gamma }}\right) ^{-\frac{(1-\gamma )(1-\theta )}{\theta }}, ~m_{t,s} = \prod _{i=s+1}^t m_{i,i-1} \text{ for } t>s.~~~~~ \end{aligned}$$
(24)

Define

$$\begin{aligned} \pi _{t,t}&= \frac{\partial U_t}{\partial C_t} \big \vert _{C=D} = \frac{\theta }{1-\gamma }(1-\delta ) \frac{1-\gamma }{\theta } U_t(D)^{\frac{\theta }{1-\gamma }-1 } D_t^{\frac{1-\gamma }{\theta }-1} = (1-\delta )U_t(D)^\rho D_t^{-\rho }, \end{aligned}$$
(25)
$$\begin{aligned} \pi _{s,t}&= m_{s,t} \pi _{t,t}\quad \text{ for } s>t. \end{aligned}$$
(26)

In view of (26), optimizing over the consumption at time \(s\ge t\) without any constraint on initial wealth leads to the problem

$$\begin{aligned} \max _{C\ge 0, C_u=D_u, u\ne s } \{U_t(C) - \mathbb {E}_t\left[ \pi _{s,t} C_{s}\right] \} = U_t(D) - \mathbb {E}_t\left[ \pi _{s,t} D_{s}\right] . \end{aligned}$$
(27)

For convenience, denote by \(P_t\) the cum-dividend price, defined as

$$\begin{aligned} P_t&= S_t + D_t = \mathbb {E}_t\left[ \sum _{n=t}^{\infty } m_{n,t} D_n\right] . \end{aligned}$$
(28)

To complete the description of the market, define the price at time t of a bond maturing at \(t+1\) as

$$\begin{aligned} B(t, t+1) = \mathbb {E}_t\left[ m_{t+1,t}\right] , \end{aligned}$$
(29)

and the interest rate as

$$\begin{aligned} B(t,t+1) = \frac{1}{1+r_{t+1,t}}. \end{aligned}$$
(30)

Next, to define an equilibrium in this market, it remains to specify admissible consumption plans. Let \(X_t\) be the total wealth of the representative agent at time t (before any consumption takes place).

Definition B.2

The wealth process X starting from time \(t_0\) is admissible if \(X_t \ge 0\) for all times \(t\ge t_0\). For a given consumption stream \(C_t\ge 0,~t=t_0, t_0+1,\ldots \), define the value of the consumption stream starting from time \(t_0\) as

$$\begin{aligned} W_{t_0}(C) = \sum _{s=t_0}^{\infty } \mathbb {E}_{t_0}\left[ m_{s,t_0} C_s\right] . \end{aligned}$$

Lemma B.3

Let \(t_0\ge 0\). Then, for any admissible consumption \(C_s,~s\ge t_0\),

$$\begin{aligned} \mathbb {E}_{t_0}\left[ m_{T+1,{t_0}} X_{T+1} \right] =\,&X_{t_0} - \sum _{t=t_0}^T\mathbb {E}_{t_0}\left[ m_{t,{t_0}} C_{t} \right] , \end{aligned}$$
(31)
$$\begin{aligned} \text {and}\qquad X_{t_0} \ge&\sum _{t=t_0}^\infty \mathbb {E}_{t_0}\left[ m_{t,t_0} C_{t} \right] . \end{aligned}$$
(32)

Moreover, if \(X_{t_0} = P_{t_0}\) any admissible consumption C is dominated by D, in that \(W_{t_0}(C) \le W_{t_0}(D)\).

Proof

At any time, the agent can invest in two assets, the bond and the stock. Assume that at time t the portfolio is valued at \(X_t\). The dividend is paid out first. Then the portfolio can be rebalanced to include \({\phi }_t\) shares of stock and \(\psi _t\) cash. Thus \(X_t = {\phi }_t(P_t -D_t) + \psi _t\), since the stock price \(P_t\) is cum-dividend, whence \(\psi _t = X_t - {\phi }_t(P_t -D_t).\) Consumption \(C_t\) then takes place. At the next period \(t+1\), the portfolio is worth \(X_{t+1}\), which comprises \({\phi }_t P_{t+1}\) wealth invested in stock and \((\psi _t-C_t)(1+r_{t+1,t})\) cash, i.e.,

$$\begin{aligned} X_{t+1}&={\phi }_t P_{t+1} + (\psi _t-C_t)(1+r_{t+1,t}) ={\phi }_t P_{t+1} + (X_t - {\phi }_t(P_t -D_t) -C_t)(1+r_{t+1,t}) . \end{aligned}$$
(33)

Because \(P_t =D_t + \mathbb {E}_t\left[ m_{t+1,t} P_{t+1}\right] \) for any t, from (29) it follows that

$$\begin{aligned} \mathbb {E}_{t_0}\left[ m_{t_0+1,t_0} X_{t_0+1} \right]&={\phi }_{t_0} \mathbb {E}_{t_0}\left[ m_{t_0+1,t_0} P_{t_0+1}\right] + ( \psi _{t_0} -C_{t_0}) (1+r_{t_0+1,t_0})\mathbb {E}_{t_0}\left[ m_{t_0+1,t_0} \right] \\&={\phi }_{t_0} \left( P_{t_0} -D_{t_0}\right) + ( \psi _{t_0} -C_{t_0}) = X_{t_0} - C_{t_0}. \end{aligned}$$

Repeating this argument, (31) follows.
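The key step \(\mathbb {E}_{t_0}\left[ m_{t_0+1,t_0} X_{t_0+1}\right] = X_{t_0} - C_{t_0}\) can be checked numerically in a toy one-period, two-state market. The minimal Python sketch below uses arbitrary illustrative numbers; the only structure it relies on is that the SDF prices the stock cum-dividend (\(P = D + \mathbb {E}[m P']\)) and the bond (\(\mathbb {E}[m] = 1/(1+r)\)):

```python
# Toy two-state check of E_t[m_{t+1,t} X_{t+1}] = X_t - C_t for the wealth
# recursion (33). All numeric values are arbitrary illustrative choices.
probs = [0.6, 0.4]          # state probabilities
m = [0.93, 0.99]            # SDF realizations m_{t+1,t} in each state
P_next = [12.0, 9.0]        # next-period cum-dividend stock prices
D, C, phi = 1.0, 0.7, 1.3   # current dividend, consumption, shares after rebalancing

def E(values):
    """Expectation over the two states."""
    return sum(p * v for p, v in zip(probs, values))

P = D + E([mi * Pn for mi, Pn in zip(m, P_next)])  # current cum-dividend price
r = 1.0 / E(m) - 1.0                               # one-period interest rate (30)
X = P                                              # initial wealth X_t = P_t
psi = X - phi * (P - D)                            # cash position after rebalancing
X_next = [phi * Pn + (psi - C) * (1 + r) for Pn in P_next]  # wealth recursion (33)

lhs = E([mi * Xn for mi, Xn in zip(m, X_next)])    # E_t[m X_{t+1}], equals X - C
```

Since \(\mathbb {E}[m X'] = \phi \,\mathbb {E}[m P'] + (\psi - C)(1+r)\mathbb {E}[m]\), the identity holds exactly, independently of the numbers chosen.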

By the admissibility of X and the non-negativity of m, it follows that \(X_{t_0} \ge \sum _{t=t_0}^T\mathbb {E}_{t_0} \left[ m_{t,t_0} C_{t} \right] \) for all \(T\ge t_0.\) Thus, (32) follows by letting \(T\rightarrow \infty \). Moreover, if \(X_{t_0}=P_{t_0}\), then

$$\begin{aligned} X_{t_0}=P_{t_0} = \sum _{t=t_0}^\infty \mathbb {E}_{t_0}\left[ m_{t,t_0} D_{t} \right] \ge \sum _{t=t_0}^\infty \mathbb {E}_{t_0}\left[ m_{t,t_0} C_{t} \right] . \end{aligned}$$
(34)

\(\square \)

B.1 Additive power utility

Proof of Theorem 4.1

The closed-form formula for the cum-dividend stock price \(P_n\) at time n is derived as follows:

$$\begin{aligned} D_n^{-\gamma }P_n&= \sum _{j=n}^\infty \mathbb {E}\left[ {\text {e}}^{-\beta (j-n) } D_j^{1-\gamma }\right] =D_n^{1-\gamma } \sum _{j=0}^\infty {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) j } \mathbb {E}\left[ {\text {e}}^{(1-\gamma )s Y_{j+n}^{(n)}} \right] ,\\&=D_n^{1-\gamma } \sum _{j=0}^\infty {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) j } \, _{{2}}F_{{1}}\left( -j,(n+1) p_n ;n {+1} ;1-{\text {e}}^{(1-\gamma )s} \right) \\&= D_n ^{1-\gamma } \sum _{j=0}^\infty {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) j } \, \sum _{k=0}^j (-1)^k \left( {\begin{array}{c}j\\ k\end{array}}\right) \frac{ ((n+1)p_n)_k }{(n {+1})_k} \left( 1-{\text {e}}^{ (1-\gamma )s} \right) ^k, \end{aligned}$$

where

$$\begin{aligned} (q)_k = \left\{ \begin{array}{ll} 1 &{} :k=0,\\ q (q+1)\dots (q+k-1) &{} :k>0. \end{array} \right. \end{aligned}$$

Now, changing the order of the summation

$$\begin{aligned} P_n&= D_n \sum _{k=0}^\infty \frac{ ((n+1)p_n)_k }{(n+1)_k} (-1)^k \left( 1-{\text {e}}^{ (1-\gamma )s} \right) ^k \sum _{j=k}^\infty {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) j } \, \left( {\begin{array}{c}j\\ k\end{array}}\right) \\&= D_n \sum _{k=0}^\infty \frac{ ((n+1)p_n)_k }{(n+1)_k} (-1)^k \left( 1-{\text {e}}^{ (1-\gamma )s} \right) ^k \left( 1- {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) } \right) ^{-k-1} {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) k } \\&= \frac{D_n }{1- {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) } } \sum _{k=0}^\infty \frac{ ((n+1)p_n)_k }{(n+1)_k} (-1)^k\left( \frac{{\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) } \left( 1-{\text {e}}^{ (1-\gamma )s} \right) }{1- {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) } } \right) ^k\\&= \frac{D_n}{1- {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) } } \sum _{k=0}^\infty \frac{k! ((n+1)p_n)_k }{(n+1)_k } \frac{\left( - \frac{\left( 1-{\text {e}}^{ (1-\gamma )s} \right) }{ {\text {e}}^{- \left( (1-\gamma ){\eta }- \beta \right) } -1} \right) ^k}{k!}\\&= \frac{D_n}{1- {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) } } \, _{2}F_{1}\left( 1,(n+1) p_n ;n+1 ;\frac{ 1-{\text {e}}^{ (1-\gamma )s} }{1- {\text {e}}^{ - \left( (1-\gamma ){\eta }- \beta \right) } } \right) , \end{aligned}$$

where the second equality uses the identity \( \sum _{j=k}^\infty q^j \, \left( {\begin{array}{c}j\\ k\end{array}}\right) = \left( 1- q \right) ^{-k-1} q^k \) with \(q= {\text {e}}^{ \left( (1-\gamma ){\eta }- \beta \right) } \), showing (7).
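The interchange of summation and the resulting closed form can be sanity-checked numerically. The Python sketch below uses illustrative, uncalibrated parameter values (chosen so that the argument of the final \({}_2F_1\) lies in \((-1,1)\), where the plain power series converges), with both hypergeometric evaluations implemented directly from their series:

```python
import math

# Illustrative (uncalibrated) parameters; s and beta are chosen so that the
# argument of 2F1(1, .; .; .) below lies in (-1, 1).
gamma, eta, beta, s, n, p = 2.0, 0.02, 0.04, 0.05, 10, 0.5
q = math.exp((1 - gamma) * eta - beta)   # per-period factor e^{(1-gamma)eta - beta}
x = 1 - math.exp((1 - gamma) * s)

def hyp2f1_poly(j, b, c, x):
    """Terminating 2F1(-j, b; c; x) for a nonnegative integer j."""
    total, term = 1.0, 1.0
    for k in range(j):
        term *= -(j - k) * (b + k) / ((c + k) * (k + 1)) * x
        total += term
    return total

def hyp2f1_one(b, c, z, terms=400):
    """2F1(1, b; c; z) by its power series (converges for |z| < 1)."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (b + k) / (c + k) * z
    return total

# Left side: the j-series before interchanging the sums (price-dividend ratio P_n / D_n).
lhs = sum(q ** j * hyp2f1_poly(j, (n + 1) * p, n + 1, x) for j in range(400))

# Right side: the closed form (7) obtained after the interchange.
rhs = hyp2f1_one((n + 1) * p, n + 1, x / (1 - 1 / q)) / (1 - q)
```

Up to series truncation, `lhs` and `rhs` agree, which is exactly the content of the summation interchange above.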

To show (8), recall the definition of \(B(t, t+1)\)—the price at time t of a zero-coupon bond maturing at time \(t+1\)—in (29). Since for power utility (24) becomes \(m_{t+1,t} = {\text {e}}^{-\beta } \left( \frac{D_{t+1}}{D_t}\right) ^{-\gamma }\), (8) readily follows by recalling the definition of the interest rate \(r_{t+1,t}\) in (30).
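For instance, under the Bernoulli dividend dynamics (4), \(D_{t+1}/D_t = {\text {e}}^{{\eta }+sX_{t+1}}\) with \(X_{t+1}\) equal to 1 with probability \({\hat{p}}_t\), the bond price (29) and rate (30) under power utility reduce to a one-line computation. A minimal sketch with illustrative, uncalibrated parameter values:

```python
import math

# Illustrative, uncalibrated parameters: risk aversion gamma, discount rate beta,
# dividend growth eta + s*X with X ~ Bernoulli(p), p the current posterior probability.
gamma, beta, eta, s, p = 2.0, 0.04, 0.02, 0.05, 0.5

# Power-utility SDF: m_{t+1,t} = exp(-beta) * (D_{t+1}/D_t)^(-gamma).
bond = math.exp(-beta) * ((1 - p) * math.exp(-gamma * eta)
                          + p * math.exp(-gamma * (eta + s)))   # B(t, t+1) = E_t[m]
rate = 1.0 / bond - 1.0                                          # from (30)
```

Equivalently, \(1+r_{t+1,t} = {\text {e}}^{\beta +\gamma {\eta }}/\left( (1-{\hat{p}}_t) + {\hat{p}}_t {\text {e}}^{-\gamma s}\right) \), so for \(s,\gamma >0\) the rate rises as more posterior weight shifts to the high-growth state.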

Recall that with power utility the stochastic discount factor \(\pi \) in (25), (26) is \(\pi _{t,t} = D_t^{-\gamma }.\) Thus, by (24), (26) it follows that \(\mathbb {E}_s\left[ \pi _{t,s}C_t\right] = \mathbb {E}_{t_0}\left[ \frac{1}{1-\gamma }\frac{\partial V_{t_0}(D)}{\partial D_s} C_t\right] ,~s\ge t\ge t_0,\) for any admissible consumption \(C_t\ge 0\). Recall from Lemma B.3 that, for any admissible consumption C with initial portfolio wealth \(P_{t_0}\), \(\sum _{t=t_0}^\infty \mathbb {E}_{t_0}\left[ \pi _{t,t_0} D_{t} \right] \ge \sum _{t=t_0}^\infty \mathbb {E}_{t_0}\left[ \pi _{t,t_0} C_{t} \right] .\) For such consumption it follows that

$$\begin{aligned} \frac{1}{1-\gamma }V_{t_0}(C)&= \sum _{t=t_0}^\infty \mathbb {E}_{t_0}\left[ {\text {e}}^{-\beta (t-t_0)}\frac{ C_t^{1-\gamma }}{1-\gamma }\right] \\&\le \sum _{t=t_0}^\infty \mathbb {E}_{t_0}\left[ {\text {e}}^{-\beta (t-t_0)} \frac{C_t^{1-\gamma }}{1-\gamma } + \pi _{t,t_0} (D_t - C_t) \right] \\&\le \sum _{t=t_0}^\infty \mathbb {E}_{t_0}\left[ {\text {e}}^{-\beta (t-t_0)}\frac{ D_t^{1-\gamma }}{1-\gamma }\right] = \frac{V_{t_0}(D)}{1-\gamma }, \end{aligned}$$

where the first inequality follows from

$$\begin{aligned} \max _{C_t\ge 0}\mathbb {E}_{t_0}\left[ {\text {e}}^{-\beta (t-t_0)}\frac{ C_t^{1-\gamma }}{1-\gamma } -\pi _{t,t_0}C_t\right] = \mathbb {E}_{t_0}\left[ {\text {e}}^{-\beta (t-t_0)}\frac{D_t^{1-\gamma }}{1-\gamma } - \pi _{t,t_0}D_t\right] , \end{aligned}$$

which in turn follows from (27).

Assume for convenience that \(t_0=0.\) Note that, if \(X_0 = P_0\), \({\hat{C}}_t = D_t\) and \({\phi }_t = 1\), then (33) implies by induction that \(X_t = P_t\) for all \(t\ge 0\). Now, consider the alternative strategy in which at time t the number of shares changes from 1 to \(1+\varepsilon \) on some \(\mathcal F_t\)-measurable event \(A \subset \{|P_t|<M , D_t > 1/M\}\), with \(M>0\). Note that, after the dividend is paid, the share price is \(P_t-D_t\). Thus consumption correspondingly changes from \(D_t\) to \(D_t-\varepsilon (P_t-D_t)\) at time t, and to \(D_s (1+\varepsilon )\) for \(s\ge t+1\). That is, define \({\phi }_s^\varepsilon = {\phi }_s + \varepsilon 1_{\{s\ge t\}\cap A}\) and \(c^\varepsilon _s = D_s -\varepsilon (P_t - D_t) 1_{\{s = t\}\cap A} + \varepsilon D_s 1_{\{s\ge t+1\}\cap A} \), and note that this strategy continues to satisfy (33). (Note that \(\varepsilon \) may be either positive or negative.)

Setting \(u(t,C_t) = {\text {e}}^{-\beta t} \frac{ C_t^{1-\gamma }}{1-\gamma }\), the change in expected utility from (D, 1) to \((c^\varepsilon ,{\phi }^\varepsilon )\) is thus

$$\begin{aligned} \Delta ^\varepsilon = \mathbb {E}\left[ 1_A \left( u(t,D_t - \varepsilon (P_t-D_t)) -u(t,D_t) + \sum _{s=t+1}^\infty (u(s,D_s (1 + \varepsilon )) -u(s,D_s)) \right) \right] \le 0\nonumber \\ \end{aligned}$$
(35)

where the inequality reflects the assumed optimality of the consumption stream D together with the trading strategy \({\phi }\equiv 1\). By concavity, note that for any \(t, x, y>0\):

$$\begin{aligned} u_c(t,y)(y-x) \le u(t,y) - u(t,x) \le u_c(t,x)(y-x). \end{aligned}$$

Whence, on the event A, for \(s>t\)

$$\begin{aligned} u_c(s,D_s (1 + \varepsilon ))\varepsilon D_s \le u(s,D_s (1 + \varepsilon )) - u(s,D_s) \le u_c(s,D_s) \varepsilon D_s. \end{aligned}$$

Therefore, again on A,

$$\begin{aligned}&\left| u(s,D_s (1 + \varepsilon )) - u(s,D_s) \right| \le \left| \varepsilon \right| D_s \max ( u_c(s,D_s), u_c(s,D_s (1 + \varepsilon )))\nonumber \\&\quad \le \left| \varepsilon \right| D_s u_c(s,D_s/2) = 2^\gamma \left| \varepsilon \right| D_s u_c(s,D_s) , \end{aligned}$$
(36)

where the inequality in the second line uses that \(u_c(s,\cdot )\) is decreasing by concavity, so that for \(\left| \varepsilon \right| \le 1/2\) both \(u_c(s,D_s)\) and \(u_c(s,D_s (1 + \varepsilon ))\) are bounded by \(u_c(s,D_s/2)\), and the equality follows from \(u_c(s,c) = {\text {e}}^{-\beta s} c^{-\gamma }\). Likewise,

$$\begin{aligned} -\varepsilon (P_t-D_t) u_c(t,D_t - \varepsilon (P_t-D_t)) \le u(t,D_t - \varepsilon (P_t-D_t)) -u(t,D_t) \le -\varepsilon (P_t-D_t) u_c(t,D_t) \quad \text {on }A. \end{aligned}$$

Hence, on A, for \(\left| \varepsilon \right| \) small enough

$$\begin{aligned} | u(t,D_t - \varepsilon (P_t-D_t)) -u(t,D_t) | \le |\varepsilon | P_t u_c(t,1/M - \varepsilon (M-1/M)) \le |\varepsilon | P_t u_c(t,1/(2 M) ).\nonumber \\ \end{aligned}$$
(37)

In view of (36) and (37), it follows that the respective incremental ratios are dominated by an integrable random variable, uniformly in \(\varepsilon \). Thus, dividing \(\Delta ^\varepsilon \) in (35) by \(\varepsilon \) and passing to the limit as \(\varepsilon \downarrow 0\), Lebesgue’s dominated convergence theorem yields

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\frac{\Delta ^\varepsilon }{\varepsilon } = \mathbb {E}\left[ 1_A \left( - u_c(t,D_t)(P_t-D_t) + \sum _{s=t+1}^\infty u_c(s,D_s)D_s \right) \right] \le 0. \end{aligned}$$

Analogously, as \(\varepsilon \uparrow 0\) it follows that \(\lim _{\varepsilon \uparrow 0}\frac{\Delta ^\varepsilon }{\varepsilon } \ge 0\), whence the limit must be zero. By the tower property of conditional expectation,

$$\begin{aligned} \mathbb {E}\left[ 1_A \left( - u_c(t,D_t) P_t + \mathbb {E}_t\left[ \sum _{s= t}^\infty u_c(s,D_s)D_s \right] \right) \right] = 0 . \end{aligned}$$

As \(M\uparrow \infty \), events of the above form generate \(\mathcal F_t\), which implies that

$$\begin{aligned} P_t = \mathbb {E}_t\left[ \sum _{s= t}^\infty \frac{u_c(s,D_s)}{u_c(t,D_t)} D_s \right] \qquad \text { a.s.} \end{aligned}$$

This completes the proof by recalling the definition of the SDF m in (24). \(\square \)

We now adapt this proof to the Epstein–Zin recursive utility case.

B.2 Recursive Epstein–Zin utility

The proof for general recursive Epstein–Zin utility is more involved, but the argument that the market is in equilibrium uses the same ideas as the corresponding part of Theorem 4.1. The major difference is that the price process no longer admits a closed-form solution, so we proceed by finding a power expansion. First, it is more convenient to work with the following equilibrium price candidate P:

$$\begin{aligned} P_t&=D_t + \mathbb {E}_t\left[ m_{t+1,t} P_{t+1}\right] . \end{aligned}$$
(38)

To establish the connection between utility U and price P, substitute (24) into (38) to get

$$\begin{aligned} \left( \mathbb {E}_t\left[ U_{t+1}^{1-\gamma }(D)\right] \right) ^{\frac{\theta -1}{\theta }} D_t^{\frac{1-\gamma }{\theta }-1} P_t&= \left( \mathbb {E}_t\left[ U_{t+1}^{1-\gamma }(D)\right] \right) ^{\frac{\theta -1}{\theta }} D_t^{\frac{1-\gamma }{\theta }} \\&\quad + \delta \mathbb {E}_t\left[ D_{t+1}^{\frac{1-\gamma }{\theta }-1} U_{t+1}^{\frac{(1-\gamma )(\theta -1)}{\theta }} (D)P_{t+1}\right] . \end{aligned}$$

Comparing this with (21) it follows that

$$\begin{aligned} U_t^{\frac{1-\gamma }{\theta }} = (1-\delta ) D_t^{\frac{1-\gamma }{\theta }-1} P_t. \end{aligned}$$
(39)

The proof that condition (22) holds is deferred to Lemma B.12. Next, let \(c_t\) be defined by

$$\begin{aligned} P_t = c_t^{\frac{1-\gamma }{\theta }} D_t, \end{aligned}$$
(40)

and attempt to find \(c_t\). In other words, \(c_t^{\frac{1-\gamma }{\theta }}\) is the (cum-dividend) price-dividend ratio. Then (39) becomes

$$\begin{aligned} U_t(D) = (1-\delta )^\frac{\theta }{1-\gamma }c_t D_t. \end{aligned}$$
(41)

Substituting (41) into (21), it follows that

$$\begin{aligned} c_t^{\frac{1-\gamma }{\theta }}D_t ^{\frac{1-\gamma }{\theta }}= D_t ^{\frac{1-\gamma }{\theta }} + \delta \left( \mathbb {E}_t\left[ c_{t+1}^{1-\gamma }D_{t+1}^{1-\gamma }\right] \right) ^\frac{1}{\theta }, \end{aligned}$$

and, using (4),

$$\begin{aligned} c_t^{\frac{1-\gamma }{\theta }}=1+ \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \left( \mathbb {E}_t\left[ c_{t+1}^{1-\gamma } {\text {e}}^{ (1-\gamma ) s X_{t+1} } \right] \right) ^\frac{1}{\theta }. \end{aligned}$$
(42)

Note that (42) is a backward recursion: if \(c_{t+1}\) is known and \({\hat{p}}_t\) is also known, then \(c_t\) can be computed. Additionally, (42) can be solved in closed form if it is assumed that no more learning takes place, that is, if \(c_t = c_{t+1} = c.\) In this case,

$$\begin{aligned} c^{\frac{1-\gamma }{\theta }}= 1 + \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } }c^{\frac{1-\gamma }{\theta }} \left( \mathbb {E}_t\left[ {\text {e}}^{ (1-\gamma ) s X_{t+1} } \right] \right) ^\frac{1}{\theta }. \end{aligned}$$

It follows that

$$\begin{aligned} c^{\frac{1-\gamma }{\theta }}= \frac{ 1}{1- \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \left( \mathbb {E}_t\left[ {\text {e}}^{ (1-\gamma ) s X_{t+1} } \right] \right) ^\frac{1}{\theta }} = \frac{ 1}{1- \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \left( (1-{\hat{p}}_t) + {\hat{p}}_t {\text {e}}^{ (1-\gamma ) s } \right) ^\frac{1}{\theta }}. \end{aligned}$$

Thus, define

$$\begin{aligned} c_\infty ^{(n)}(p_n) = \left( \frac{ 1}{1- \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \left( (1-p_n) + p_n {\text {e}}^{ (1-\gamma ) s } \right) ^\frac{1}{\theta }} \right) ^\frac{\theta }{1-\gamma } . \end{aligned}$$
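
For concreteness, the backward recursion (42) and the frozen-learning value \(c_\infty ^{(n)}\) can be implemented in a few lines. The Python sketch below uses illustrative, uncalibrated parameters, assumes uniform-prior beta-binomial updating (so that after k high outcomes in n observations, \({\hat{p}}_n = (k+1)/(n+2)\); this updating rule is an assumption of the sketch, not taken from the text), and closes the recursion at a distant horizon N with the frozen-learning value:

```python
import math
from functools import lru_cache

# Illustrative parameters (not calibrated to the paper); gamma > 1 > rho.
gamma, rho, delta, eta, s = 2.0, 0.5, 0.9, 0.02, 0.1
theta = (1 - gamma) / (1 - rho)          # theta = -2 here
a = (1 - gamma) / theta                  # exponent (1-gamma)/theta = 1 - rho

def c_inf_pow(p):
    """(c_infty^(n)(p))^{(1-gamma)/theta}: solution of (42) with learning frozen."""
    m = (1 - p) + p * math.exp((1 - gamma) * s)
    return 1.0 / (1.0 - delta * math.exp(eta * a) * m ** (1.0 / theta))

N = 300  # horizon at which the recursion is closed with the frozen-learning value

@lru_cache(maxsize=None)
def c_pow(n, k):
    """(c_n)^{(1-gamma)/theta} by the backward recursion (42).

    State (n, k): n observations so far, k of them high; uniform-prior
    beta-binomial updating gives posterior mean p_hat = (k+1)/(n+2) (assumed)."""
    p = (k + 1) / (n + 2)
    if n >= N:
        return c_inf_pow(p)
    up, down = c_pow(n + 1, k + 1), c_pow(n + 1, k)
    # E_n[c_{n+1}^{1-gamma} e^{(1-gamma) s X_{n+1}}], X_{n+1} ~ Bernoulli(p_hat)
    expec = p * up ** theta * math.exp((1 - gamma) * s) + (1 - p) * down ** theta
    return 1.0 + delta * math.exp(eta * a) * expec ** (1.0 / theta)

# The gap to the frozen-learning value shrinks as n grows.
err = {n: abs(c_pow(n, n // 2) - c_inf_pow((n // 2 + 1) / (n + 2))) for n in (20, 80)}
```

Numerically, the gap between \(c_n\) and \(c_\infty ^{(n)}\) shrinks as n grows, in line with the \(O(1/n)\) behaviour established below.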

Next, postulate that

$$\begin{aligned} \left( c_n({\hat{p}}_n)\right) ^{\frac{1-\gamma }{\theta }} = \left( c_\infty ^{(n)}({\hat{p}}_n)\right) ^{\frac{1-\gamma }{\theta }} + \sum _{i=1}^\infty \frac{\alpha _i({\hat{p}}_n)}{n^i}, \end{aligned}$$
(43)

and seek the coefficients \(\alpha _i\) by substituting (43) into (42). Henceforth, the argument \({\hat{p}}_n\) of \(c_n,c_\infty ^{(n)}, \alpha _i\) is dropped for convenience. The coefficients in this expansion can be computed explicitly; for example, the first one equals

$$\begin{aligned}&\alpha _1(p)= \frac{(p ({\text {e}}^{ (1-\gamma ) s} -1)+1)^{\frac{1}{\theta }}{\text {e}}^{2{\eta }\frac{1-\gamma }{\theta } } \delta ^2 (p-1) p ({\text {e}}^{ (1-\gamma ) s} -1)^2 (p ({\text {e}}^{ (1-\gamma ) s} -1)+1)^{\frac{1}{\theta }-2}}{\theta \left( {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \delta (p ({\text {e}}^{ (1-\gamma ) s} -1)+1)^{\frac{1}{\theta }}-1\right) ^3}. \end{aligned}$$

Explicit formulas for higher-order coefficients follow similarly. The next auxiliary lemmas help verify the expansion (43).

Lemma B.4

There exists \(\nu _0>0\), such that

$$\begin{aligned} \nu _0^{-(n-m)} D_{m}\le D_n \le \nu _0^{n-m} D_{m},\quad \text{ for } \text{ any } n\ge m\ge 0. \end{aligned}$$
(44)

Moreover, fix the starting point \(n_0\ge 0\), and assume that

$$\begin{aligned} 0<\delta <\delta _1\triangleq 1\wedge {\text {e}}^{-\frac{1-\gamma }{\theta }\left( {\eta }+ (s)^{+}\right) }. \end{aligned}$$

Then for \(n\ge n_0\),

$$\begin{aligned}&c_{\min }\triangleq 1\le c_n \le \left( \frac{1}{1-\delta {\text {e}}^{(1-\rho )({\eta }+ (s)^{+}) } } \right) ^{\frac{1}{1-\rho }} \triangleq c_{\max }. \end{aligned}$$
(45)

so that

$$\begin{aligned} (1-\delta )^{\frac{1}{1-\rho }}c_{\min }D_{n_0}\le U_{n_0}(D) \le (1-\delta )^{\frac{1}{1-\rho }}c_{\max }D_{n_0}. \end{aligned}$$
(46)

Proof

Set \(D_{n+n_0}^{*}\triangleq D_{n_0} \max \{{\text {e}}^{n {\eta }}, {\text {e}}^{n({\eta }+s)} \} = D_{n_0} {\text {e}}^{n({\eta }+ (s)^{+})} .\) Then \(0< D_{n+n_0} \le D_{n+n_0}^{*}\). Similarly, \(D_{n_0} {\text {e}}^{n({\eta }- (s)^{-})} \le D_{n+n_0}.\) It then follows that (44) is satisfied with \(\nu _0 = {\text {e}}^{ \left| {\eta } \right| + \left| s \right| } \). To show (45), note that (42) immediately yields \(c_{n_0}\ge 1 = c_{\min }\), while the fact that U is increasing in consumption gives \(U_{n_0}^{*} = U_{n_0}(D^{*})\ge U_{n_0}(D).\) Thus, (21) becomes

$$\begin{aligned} U_{n_0}^{*}= \left\{ (1-\delta ) (D_{n_0}^{*})^{\frac{1-\gamma }{\theta }} + \delta (U_{n_0+1}^{*}) ^{\frac{1-\gamma }{\theta }} \right\} ^{\frac{\theta }{1-\gamma }}, \end{aligned}$$

where we used the identity \(\mathbb {E}_t[ (U_{n_0+1}^{*}) ^{1-\gamma }] =(U_{n_0+1}^{*}) ^{1-\gamma } \) because the consumption \(D_t^{*} \) is deterministic for \(t\ge n_0.\) Recalling that \(\theta = \frac{1-\gamma }{1-\rho }\), it follows that for \(V_{n_0}^{*} = \frac{(U_{n_0}^{*})^{1-\rho }}{1-\delta }\)

$$\begin{aligned} V_{n_0}^{*}= (D_{n_0}^{*})^{1-\rho } + \delta V_{n_0+1}^{*}, \end{aligned}$$

which is the power utility case, with risk aversion \(\rho \). Hence,

$$\begin{aligned} V_{n_0}^{*} = \sum _{n={n_0}}^{\infty }\delta ^{n-n_0} (D_n^{*})^{1-\rho } = \sum _{n=0}^{\infty } (D_{n_0}^{*})^{1-\rho } \delta ^{n}{\text {e}}^{n(1-\rho )({\eta }+ (s)^{+})} = \frac{(D_{n_0}^{*})^{1-\rho }}{1-\delta {\text {e}}^{(1-\rho )({\eta }+ (s)^{+}) } }, \end{aligned}$$

which implies (45) by recalling that \(c_{n_0} = \frac{U_{n_0}}{ (1-\delta )^\frac{\theta }{1-\gamma } D_{n_0}} \le c_{n_0}^{*} = \frac{U_{n_0}^{*}}{ (1-\delta )^\frac{\theta }{1-\gamma } D_{n_0} } = \frac{( (1-\delta )V_{n_0}^{*})^{\frac{1}{1-\rho }}}{ (1-\delta )^\frac{1}{1-\rho } D_{n_0}} =\left( \frac{1}{1-\delta {\text {e}}^{(1-\rho )({\eta }+ (s)^{+}) } } \right) ^{\frac{1}{1-\rho }}=c_{\max }.\) This also shows (46), as \((1-\delta )^{\frac{1}{1-\rho }}c_{\min }D_{n_0}\le U_{n_0}(D)\).

\(\square \)

Similarly, any admissible consumption stream admits the following bound.

Lemma B.5

Let \(n_0\ge 0\) be the initial time. Then there exists a constant \(K_0>0\), independent of \(n_0\), such that \( U_{n}(C) \le K_0 X_{n},\) for any \(n\ge n_0\) and for any admissible consumption process C.

Proof

Using \(\nu _0\) from Lemma B.4 and recalling (40), it follows that \(K_1^{-1} \nu _0^{-(n-n_0)} D_{n_0}\le \left( c_{\max }^{1-\rho }\wedge c_{\min }^{1-\rho }\right) D_n \le P_n \le \left( c_{\max }^{1-\rho }\vee c_{\min }^{1-\rho }\right) D_n \le K_1 \nu _0^{n-n_0} D_{n_0}, \) for some constant \(K_1>0\) and \(n\ge n_0\). Hence, it also follows that \(\frac{P_n}{P_{n-1}}\le K_1^2 \nu _0.\) Using the bounds on U from Lemma B.4, for another constant \(K_2>0\) it follows that \(m_{n+1,n}\ge \frac{1}{K_2 \nu _0^{-\rho }},\) which implies that the same bound holds for \(1+r_{n+1,n} \le K_2 \nu _0^{-\rho }.\) Thus for \(\nu _1 = K_2 \nu _0^{-\rho } \vee K_1^2 \nu _0\), it follows that \(C_n \le \nu _1^{n-n_0} X_{n_0},~n\ge n_0.\) A similar calculation as in Lemma B.4 yields the upper bound \(U_{n_0}(C) \le K_0 X_{n_0}\), for some constant \(K_0>0.\)

\(\square \)

Lemma B.6

Set

$$\begin{aligned} {\text{ Err }}&\triangleq \max _{p\in [0,1]} \frac{\delta ^2 (p-1) p e^{2 {\eta }(1-\gamma )} \left( e^{(1-\gamma ) s}-1\right) ^2 \left( p \left( e^{(1-\gamma ) s}-1\right) +1\right) ^{\frac{2}{\theta }-2}}{\theta \left( \delta e^{{\eta }(1-\gamma )} \left( p \left( e^{(1-\gamma ) s}-1\right) +1\right) ^{1/\theta }-1\right) ^2} + 1,\nonumber \\ B_1&\triangleq 1+c_{\max }^{\frac{1-\gamma }{\theta }} {\text {e}}^{\left( \frac{1-\gamma }{\theta }({\eta }+s)\right) ^{+} } +\, {\text{ Err }}, \end{aligned}$$
(47)
$$\begin{aligned} B_2&\triangleq 1+c_{\max }^{1-\gamma } {\text {e}}^{\left( \frac{1-\gamma }{\theta }s\right) ^{+} }, \end{aligned}$$
(48)

and assume that \(0<{\bar{\delta }}<1\), where

$$\begin{aligned} {\bar{\delta }} = \delta \max \left\{ \left| \theta \right| (1\vee \left| B_1 \right| ^{\theta -1}) {\text {e}}^{\left( {\eta }\frac{1-\gamma }{\theta }\right) ^{+} } , (1\vee \left| B_1 \right| ^{\theta -1})(1\vee \left| B_2 \right| ^{-1-\frac{1}{\theta }}) {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } (1-\gamma )(s)^{+}}\right\} . \end{aligned}$$
(49)

Moreover, let the assumptions of Lemma B.4 hold. Then \( \left| \left( c_\infty ^{(n)}(p_n)\right) ^{\frac{1-\gamma }{\theta }} - \left( c_n(p_n)\right) ^{\frac{1-\gamma }{\theta }} \right| = O\left( \frac{1}{n}\right) .\)

Proof

First, note that \(c_{\infty }^{(n)}\) almost satisfies (42); more specifically, for \(n>0\) big enough

$$\begin{aligned}&(c_{\infty }^{(n)}({\hat{p}}_n))^{\frac{1-\gamma }{\theta }}- 1- \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \left( \mathbb {E}_t\left[ (c_{\infty }^{(n+1)}({\hat{p}}_{n+1}))^{1-\gamma } {\text {e}}^{ (1-\gamma ) s X_{n+1} } \right] \right) ^\frac{1}{\theta }\\&\quad =\frac{1}{n} \frac{\delta ^2 ({\hat{p}}_n-1) {\hat{p}}_n e^{2 {\eta }(1-\gamma )} \left( e^{(1-\gamma ) s}-1\right) ^2 \left( {\hat{p}}_n \left( e^{(1-\gamma ) s}-1\right) +1\right) ^{\frac{2}{\theta }-2}}{\theta \left( \delta e^{{\eta }(1-\gamma )} \left( {\hat{p}}_n \left( e^{(1-\gamma ) s}-1\right) +1\right) ^{1/\theta }-1\right) ^2} + O\left( \frac{1}{n^2}\right) \le \frac{{\text{ Err }}}{n}. \end{aligned}$$

Fix n and \(N>n\). The idea is to express the difference between \((c_n({\hat{p}}_n))^{1-\gamma }\) and \((c_{\infty }^{(n)}({\hat{p}}_n))^{1-\gamma }\) using the difference at time \(n+1\), and then recursively repeat the process until time N. Observe that

$$\begin{aligned}&\left| (c_n({\hat{p}}_n))^{1-\gamma } - (c_{\infty }^{(n)}({\hat{p}}_n))^{1-\gamma } \right| \\&\quad \le \left| \left( 1+ \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \left( \mathbb {E}_n\left[ (c_{n+1}({\hat{p}}_{n+1}))^{1-\gamma } {\text {e}}^{ (1-\gamma ) s X_{n+1} } \right] \right) ^\frac{1}{\theta }\right) ^\theta \right. \\&\qquad \left. -\left( 1+ \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \left( \mathbb {E}_n\left[ (c_{\infty }^{(n+1)}({\hat{p}}_{n+1}))^{1-\gamma } {\text {e}}^{ (1-\gamma ) s X_{n+1} } \right] \right) ^\frac{1}{\theta }+ \frac{{\text{ Err }}}{n}\right) ^\theta \right| \\&\quad \le \left| \theta \right| \left| \zeta _n \right| ^{\theta -1} \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \\&\qquad \times \left| \left( \mathbb {E}_n\left[ (c_{n+1}({\hat{p}}_{n+1}))^{1-\gamma } {\text {e}}^{ (1-\gamma ) s X_{n+1} } \right] \right) ^\frac{1}{\theta }- \left( \mathbb {E}_n\left[ (c_{\infty }^{(n+1)}({\hat{p}}_{n+1}))^{1-\gamma } {\text {e}}^{ (1-\gamma ) s X_{n+1} } \right] \right) ^\frac{1}{\theta }+ \frac{{\text{ Err }}}{n} \right| \\&\quad \le \left| \theta \right| \left| \zeta _n \right| ^{\theta -1} \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \left| \frac{ \left| {\hat{\zeta }}_n \right| ^{-1-\frac{1}{\theta }} }{ \left| \theta \right| } \mathbb {E}_n\left[ \left( (c_{n+1}({\hat{p}}_{n+1}))^{1-\gamma } - (c_{\infty }^{(n+1)}({\hat{p}}_{n+1}))^{1-\gamma }\right) {\text {e}}^{ (1-\gamma ) s X_{n+1} } \right] + \frac{{\text{ Err }}}{n} \right| \\&\quad \le \left| \theta \right| \left| \zeta _n \right| ^{\theta -1} \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \frac{{\text{ Err }}}{n} \\&\qquad + \left| \zeta _n \right| ^{\theta -1} \left| {\hat{\zeta }}_n \right| ^{-1-\frac{1}{\theta }} \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } (1-\gamma )(s)^{+}} \mathbb {E}_n\left[ \left| (c_{n+1}({\hat{p}}_{n+1}))^{1-\gamma } - (c_{\infty }^{(n+1)}({\hat{p}}_{n+1}))^{1-\gamma } 
\right| \right] , \end{aligned}$$

where \(\zeta _n\) and \({\hat{\zeta }}_n\) are intermediate points arising from the Taylor remainders. Note that both \(\zeta _n\) and \({\hat{\zeta }}_n\) are uniformly bounded, independently of n. Indeed, the point \(\zeta _n\) is located somewhere between \( 1+ \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \left( \mathbb {E}_n\left[ (c_{n+1}({\hat{p}}_{n+1}))^{1-\gamma } {\text {e}}^{ (1-\gamma ) s X_{n+1} } \right] \right) ^\frac{1}{\theta }\) and \( \delta {\text {e}}^{{\eta }\frac{1-\gamma }{\theta } } \left( \mathbb {E}_n\left[ (c_{\infty }^{(n+1)}({\hat{p}}_{n+1}))^{1-\gamma }{\text {e}}^{ (1-\gamma ) s X_{n+1} } \right] \right) ^\frac{1}{\theta }+1+ \frac{{\text{ Err }}}{n}\). Both of these quantities are bounded between 1 and \(B_1\) from (47). Similarly, the point \({\hat{\zeta }}_n\) is located between \( \mathbb {E}_n\left[ (c_{n+1}({\hat{p}}_{n+1}))^{1-\gamma } {\text {e}}^{ (1-\gamma ) s X_{n+1} } \right] \) and \( \mathbb {E}_n\left[ (c_{\infty }^{(n+1)}({\hat{p}}_{n+1}))^{1-\gamma } {\text {e}}^{ (1-\gamma ) s X_{n+1} } \right] \), which are bounded by \({\text {e}}^{(1-\gamma )s^{-}}\) and \(B_2\) from (48). Recalling the definition of \({\bar{\delta }}\) in (49), the previous chain of inequalities continues as

$$\begin{aligned} \left| (c_n({\hat{p}}_n))^{1-\gamma } - (c_{\infty }^{(n)}({\hat{p}}_n))^{1-\gamma } \right|&\le {\bar{\delta }}\frac{{\text{ Err }}}{n} + {\bar{\delta }} \mathbb {E}_n\left[ \left| (c_{n+1}({\hat{p}}_{n+1}))^{1-\gamma } - (c_{\infty }^{(n+1)}({\hat{p}}_{n+1}))^{1-\gamma } \right| \right] \\&\le {\bar{\delta }}\frac{{\text{ Err }}}{n} + {\bar{\delta }}^2 \frac{{\text{ Err }}}{n+1}+ {\bar{\delta }}^2 \mathbb {E}_n\left[ \left| (c_{n+2}({\hat{p}}_{n+2}))^{1-\gamma } - (c_{\infty }^{(n+2)}({\hat{p}}_{n+2}))^{1-\gamma } \right| \right] \\&\le \left( {\bar{\delta }} + {\bar{\delta }}^2\right) \frac{{\text{ Err }}}{n}+ {\bar{\delta }}^2 \mathbb {E}_n\left[ \left| (c_{n+2}({\hat{p}}_{n+2}))^{1-\gamma } - (c_{\infty }^{(n+2)}({\hat{p}}_{n+2}))^{1-\gamma } \right| \right] . \end{aligned}$$

Iterating until time N, this implies that

$$\begin{aligned} \left| (c_n({\hat{p}}_n))^{1-\gamma } - (c_{\infty }^{(n)}({\hat{p}}_n))^{1-\gamma } \right|&\le \left( \sum _{k=1}^\infty {\bar{\delta }}^k \right) \frac{{\text{ Err }}}{n}+ {\bar{\delta }}^{N-n} \mathbb {E}_n\left[ \left| (c_{N}({\hat{p}}_{N}))^{1-\gamma } - (c_{\infty }^{(N)}({\hat{p}}_{N}))^{1-\gamma } \right| \right] \\&=\frac{1}{1-{\bar{\delta }}}\frac{{\text{ Err }}}{n}+ {\bar{\delta }}^{N-n} \mathbb {E}_n\left[ \left| (c_{N}({\hat{p}}_{N}))^{1-\gamma } - (c_{\infty }^{(N)}({\hat{p}}_{N}))^{1-\gamma } \right| \right] . \end{aligned}$$

Letting \(N\rightarrow \infty \), the claim of the lemma now follows, as both \(c_N\) and \(c_{\infty }^{(N)}\) are bounded. \(\square \)

This lemma can be generalized to higher orders. (The corresponding proof is omitted.)

Lemma B.7

For any \(k\ge 1\), if \(\delta >0\) is small enough, then

$$\begin{aligned} \left| \left( c_\infty ^{(n)}({\hat{p}}_n)\right) ^{\frac{1-\gamma }{\theta }}+\sum _{i=1}^k\frac{\alpha _i({\hat{p}}_n)}{n^i} - \left( c_n({\hat{p}}_n)\right) ^{\frac{1-\gamma }{\theta }} \right| = O\left( \frac{1}{n^{k+1}}\right) . \end{aligned}$$

Lemma B.8

The interest rate \(r_{t+1,t}\) with Epstein–Zin recursive utility is as in (10).

Proof

Using (41) and (4), the SDF from (24) becomes

$$\begin{aligned} m_{t+1,t}&=\delta \left( \frac{D_{t+1}}{D_t}\right) ^{\frac{1-\gamma }{\theta } -1} \left( \frac{c_{t+1}D_{t+1}}{\left( \mathbb {E}_t\left[ \left( c_{t+1}D_{t+1}\right) ^{1-\gamma }\right] \right) ^\frac{1}{1-\gamma }}\right) ^{\frac{(1-\gamma )(\theta -1)}{\theta }}\nonumber \\&=\delta \left( \frac{D_{t}{\text {e}}^{{\eta }+sX_{t+1}} }{D_t}\right) ^{\frac{1-\gamma }{\theta } -1} \left( \frac{c_{t+1}D_{t}{\text {e}}^{{\eta }+sX_{t+1}}}{\left( \mathbb {E}_t\left[ \left( c_{t+1}D_{t}{\text {e}}^{{\eta }+sX_{t+1}}\right) ^{1-\gamma }\right] \right) ^\frac{1}{1-\gamma }}\right) ^{\frac{(1-\gamma )(\theta -1)}{\theta }}\nonumber \\&=\delta {\text {e}}^{{\eta }\left( \frac{1-\gamma }{\theta } -1\right) } c_{t+1}^{\frac{(1-\gamma )(\theta -1)}{\theta }} \left( \mathbb {E}_t\left[ c_{t+1}^{1-\gamma } {\text {e}}^{(1-\gamma )s X_{t+1}} \right] \right) ^{\frac{1-\theta }{\theta }} {\text {e}}^{-\gamma s X_{t+1}} . \end{aligned}$$
(50)

Recall the definition of the bond price \(B(t, t+1)\) in (29). It follows from (50) that

$$\begin{aligned} B(t,t+1) =\delta {\text {e}}^{{\eta }\left( \frac{1-\gamma }{\theta } -1\right) } \mathbb {E}_t\left[ c_{t+1}^{\frac{(1-\gamma )(\theta -1)}{\theta }} {\text {e}}^{-\gamma s X_{t+1}} \right] \left( \mathbb {E}_t\left[ c_{t+1}^{1-\gamma } {\text {e}}^{(1-\gamma )s X_{t+1}} \right] \right) ^{\frac{1-\theta }{\theta }}. \end{aligned}$$

The desired result (10) follows readily now from the definition of the interest rate \(r_{t+1,t}\) in (30). \(\square \)

Corollary B.9

For \(\delta >0\) small enough, (50) implies that

$$\begin{aligned} \left| \frac{\left( \mathbb {E}_n\left[ \left( c_\infty ^{(n+1)}( {\hat{p}}_{n+1})\right) ^{1-\gamma } {\text {e}}^{(1-\gamma )s X_{n+1}} \right] \right) ^{\frac{\rho -\gamma }{1-\gamma }}}{\delta {\text {e}}^{-{\eta }\rho } \mathbb {E}_n\left[ \left( c_\infty ^{(n+1)}({\hat{p}}_{n+1})\right) ^{\rho -\gamma } {\text {e}}^{-\gamma s X_{n+1}} \right] } -1 - r_{n+1,n} \right| = O\left( \frac{1}{n}\right) \end{aligned}$$

An error of \(O\left( \frac{1}{n^{k+1}}\right) ,k\ge 1,\) can be achieved if the higher-order approximation \(\left( c_\infty ^{(n)}({\hat{p}}_n)\right) ^{\frac{1-\gamma }{\theta }}+\sum _{i=1}^k\frac{\alpha _i({\hat{p}}_n)}{n^i}\) is used to approximate \(\left( c_n({\hat{p}}_n)\right) ^{\frac{1-\gamma }{\theta }}\).

Proof

The proof follows from the combination of Lemmas B.4, B.6, B.7, B.8. \(\square \)
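As a quick consistency check of Lemma B.8 and Corollary B.9: when \(c_{t+1}\) is constant (for instance, frozen at the value \(c_\infty \)), the powers of c in the bond price implied by (50) cancel, and the exact rate coincides with the leading-order expression of Corollary B.9. A Python sketch with illustrative, uncalibrated parameters:

```python
import math

# Illustrative, uncalibrated parameters; X ~ Bernoulli(p) as in the dividend dynamics (4).
gamma, rho, delta, eta, s, p = 2.0, 0.5, 0.9, 0.02, 0.1, 0.5
theta = (1 - gamma) / (1 - rho)

def E(f):
    """Expectation of f(X) for X ~ Bernoulli(p)."""
    return (1 - p) * f(0) + p * f(1)

# Frozen-learning price-dividend coefficient c_infty (constant across states).
m_bar = E(lambda x: math.exp((1 - gamma) * s * x))
c = (1.0 / (1.0 - delta * math.exp(eta * (1 - gamma) / theta)
            * m_bar ** (1.0 / theta))) ** (theta / (1 - gamma))

# Exact bond price from the SDF (50) with c_{t+1} frozen at c, and the implied rate.
B = (delta * math.exp(eta * ((1 - gamma) / theta - 1))
     * E(lambda x: c ** ((1 - gamma) * (theta - 1) / theta) * math.exp(-gamma * s * x))
     * E(lambda x: c ** (1 - gamma) * math.exp((1 - gamma) * s * x)) ** ((1 - theta) / theta))
r_from_B = 1.0 / B - 1.0

# Leading-order rate from Corollary B.9, with c_infty in place of c_{n+1}.
num = E(lambda x: c ** (1 - gamma) * math.exp((1 - gamma) * s * x)) ** ((rho - gamma) / (1 - gamma))
den = delta * math.exp(-eta * rho) * E(lambda x: c ** (rho - gamma) * math.exp(-gamma * s * x))
r_corollary = num / den - 1.0
```

With a constant c the two rates agree exactly; with learning, they differ by the \(O(1/n)\) term of Corollary B.9.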

Corollary B.10

For \(\delta >0\) small enough, \(\lim \nolimits _{t\rightarrow \infty } \mathbb {E}_{t_0}\left[ m_{t,t_0} P_{t}\right] =0.\)

Proof

Recall that \(\theta = \frac{1-\gamma }{1-\rho }\). Then, for \(\delta >0\) small enough,

$$\begin{aligned} 0<m_{t+1,t}\frac{D_{t+1}}{D_t} \le \delta _2, \end{aligned}$$
(51)

for some \(\delta _2<1\). This can be seen by considering different cases. For example, when \(\gamma >1\) and \(\rho <\gamma \), so that \(1-\gamma ,\frac{1-\theta }{\theta } = \frac{\gamma -\rho }{1-\gamma }<0\), using the definition of m in (24) and Lemma B.4 it follows that

$$\begin{aligned} 0<m_{t+1,t} \frac{D_{t+1}}{D_t}&\le \delta {\text {e}}^{-{\eta }\rho } \mathbb {E}_t\left[ \left( \frac{c_{\max }}{c_{\min }}\right) ^{1-\gamma } {\text {e}}^{(1-\gamma )s X_{t+1} } \right] ^{\frac{1-\theta }{\theta }} {\text {e}}^{-\gamma s X_{t+1}}\\&\le \left( \frac{c_{\max }}{c_{\min }}\right) ^{\gamma -\rho } \delta {\text {e}}^{-{\eta }\rho } {\text {e}}^{-\gamma s^{-}} {\text {e}}^{(\gamma -\rho )s^{+}} =\delta \left( \frac{c_{\max }}{c_{\min }}\right) ^{\gamma -\rho } {\text {e}}^{-{\eta }\rho +\gamma s - \rho s^{+}}. \end{aligned}$$

Hence (51) holds for \(\delta >0\) small enough, and for any \(t_0\ge 0\),

$$\begin{aligned} \lim \limits _{t\rightarrow \infty } m_{t,t_0} D_{t} = D_{t_0}\lim \limits _{t\rightarrow \infty }\prod _{n=t_0+1}^{t} m_{n,n-1} \frac{D_{n}}{D_{n-1}}\le D_{t_0}\lim \limits _{t\rightarrow \infty }\delta _2^{t-t_0} =0 , \end{aligned}$$

whence \(\lim \nolimits _{t\rightarrow \infty } \mathbb {E}_{t_0}\left[ m_{t,t_0} P_{t}\right] =0.\)\(\square \)

The next corollary is presented for completeness only. It shows that the two price candidates (38) and (28) coincide, both for power utility and for Epstein–Zin utility.

Corollary B.11

The price P in (38) equals (28).

Proof

Recall that \(m_{t_0,t_0}=1\). The equality between (38) and (28) follows from Corollary B.10. \(\square \)

So far we have been using the recursion (21). We are now ready to show that the asymptotic condition (22) holds.

Lemma B.12

Let \(U_t(D)\) be as in (41). Then for \(\delta >0\) small enough,

$$\begin{aligned} \lim \limits _{N\rightarrow \infty } \left| U_t(D) -U_t\left( D^{0,N}\right) \right| =0. \end{aligned}$$

Proof

Observe that the analogue of (46) also holds for \(U_t\left( D^{0,N}\right) \), for \(N\ge t+1\). Namely,

$$\begin{aligned} (1-\delta )^{\frac{1}{1-\rho }}c_{\min }D_{t}\le U_{t}\left( D^{0,N}\right) \le (1-\delta )^{\frac{1}{1-\rho }}c_{\max }D_{t}. \end{aligned}$$

Thus, similarly to (51), and using the same \(\delta _2\), we can bound \(0<m_{t+1,t}^N\frac{D_{t+1}}{D_t} \le \delta _2\), where

$$\begin{aligned} m_{t+1,t}^N = \delta \left( \frac{D_{t+1}}{D_t}\right) ^{\frac{1-\gamma }{\theta } -1} \left( \frac{U_{t+1}\left( D^{0,N}\right) }{\left( \mathbb {E}_t\left[ \left( U_{t+1}\left( D^{0,N}\right) \right) ^{1-\gamma }\right] \right) ^\frac{1}{1-\gamma }}\right) ^{-\frac{(1-\gamma )(1-\theta )}{\theta }}. \end{aligned}$$

Then from Lemma B.4 it follows that

$$\begin{aligned}&\left| U_t(D) -U_t(D^{0,N}) \right| \le \mathbb {E}_{t}\left[ \pi _{t,t} \prod _{n=t}^{N} m_{n+1,n}^N \left| U_{N}(D) -(1-\delta )^{\frac{1}{1-\rho }}D_N \right| \right] \\&\quad \le \mathbb {E}_{t}\left[ \pi _{t,t} \prod _{n=t}^{N} m_{n+1,n}^NU_{N}(D) \right] \le \pi _{t,t} (1-\delta )^{\frac{1}{1-\rho }} c_{\max } \delta _2^{N-t+1} D_t, \end{aligned}$$

which in turn converges to zero as \(N\rightarrow \infty \). \(\square \)

We are now ready for the equilibrium proof for Epstein–Zin utility.

Proof of Theorem 4.2

First, note that parts of Theorem 4.2 are already proved: Lemmas B.6 and B.7 show the validity of (9) and (11), while (10) and (12) follow from Lemma B.8 and Corollary B.9, respectively. The next two steps, similar to those in the proof of Theorem 4.1, are to show that the consumption D maximizes the utility U subject to the budget constraint, and then to use this result to show that the market is in equilibrium.

Let \(\epsilon >0\) and let \(t\ge 0\) be the initial time. Assume the initial wealth is \(X_t = P_t\), so that the consumption stream D is admissible (otherwise, it suffices to scale it). Fix a consumption process C, also admissible for this initial wealth. The goal is to show that \(U_t(C) \le U_t(D)\). Without loss of generality, assume that \(\sum _{s=t}^{\infty } \mathbb {E}_{t}\left[ m_{s,t} C_s\right] = X_t\): indeed, \(\sum _{s=t}^{\infty } \mathbb {E}_{t}\left[ m_{s,t} C_s\right] \le X_t\) by Lemma B.3, and if this inequality is strict we may increase the consumption, thereby increasing the utility.

From (34), there exists \(n\ge t\) such that \( \sum _{s=n+1}^\infty \mathbb {E}_t\left[ m_{s,t} C_{s} \right] \le \epsilon \) and \(\sum _{s=n+1}^\infty \mathbb {E}_t\left[ m_{s,t} D_{s} \right] \le \epsilon \), and hence

$$\begin{aligned}&\sum _{s=n+1}^\infty \mathbb {E}_t\left[ \pi _{s,t}( C_{s} - D_{s}) \right] \le 2\pi _{t,t}\epsilon . \end{aligned}$$

Recall from (23) the modified consumption process \(D^{C,n}\), given by \(D^{C,n}_s= D_s\) for \(s\le n\) and \(D^{C,n}_s=C_s\) for \(s> n\). It then follows from Lemma B.3 that

$$\begin{aligned}&\mathbb {E}_t\left[ \pi _{n+1,t} X_{n+1}(D^{C,n})\right] = \mathbb {E}_t\left[ \pi _{n+1,t} X_{n+1}(C)\right] = X_t - \sum _{s=t}^n\mathbb {E}_t\left[ \pi _{s,t} C_{s} \right] = \sum _{s=n+1}^\infty \mathbb {E}_t\left[ \pi _{s,t} C_{s} \right] \le \pi _{t,t}\epsilon . \end{aligned}$$

We next show that \(U_t(D^{C,n}) \le U_t(D) +K_0 \epsilon \), where \(K_0>0\) is the constant from Lemma B.5. Clearly, we only need to consider the case when \( U_t(D^{C,n}) \ge U_t(D)\). Then, from the concavity of U,

$$\begin{aligned} U_t(D^{C,n}) - U_t(D)&\le \mathbb {E}_t\left[ \frac{\partial U_t(D)}{\partial U_{n+1}} \left| U_{n+1}(D^{C,n}) - U_{n+1}(D) \right| \right] \le \mathbb {E}_t\left[ \frac{\partial U_t(D)}{\partial U_{n+1}} U_{n+1}(D^{C,n}) \right] \\&= \mathbb {E}_t\left[ m_{n+1,t} U_{n+1}(D^{C,n})\right] \le \mathbb {E}_t\left[ m_{n+1,t} K_0 X_{n+1}\right] \le K_0 \epsilon , \end{aligned}$$

where the third inequality is from Lemma B.5. Then

$$\begin{aligned} U_t(C)&\le U_t(C) - \sum _{s=t}^{n} \mathbb {E}_t\left[ \pi _{s,t}\left( C_s - D_s\right) \right] - \sum _{s=n+1}^\infty \mathbb {E}_t\left[ \pi _{s,t}\left( C_s - D_s\right) \right] \\&\le U_t(C) - \sum _{s=t}^{n} \mathbb {E}_t\left[ \pi _{s,t}\left( C_s - D_s\right) \right] +2 \epsilon \pi _{t,t} \\&\le U_t(D^{C,n}) +2 \epsilon \pi _{t,t} \le U_t(D) + \left( 2 \pi _{t,t} + K_0 \right) \epsilon . \end{aligned}$$

where the first inequality holds by (34), and the third from (27). Letting \(\epsilon \rightarrow 0\) it follows that \(U_t(C) \le U_t(D)\) and thus D maximizes the utility of consumption from a given initial wealth.

We now proceed in a similar fashion to the proof of Theorem 4.1. Consider the alternative strategy in which at time t the number of shares changes from 1 to \(1+\varepsilon \) on some \(\mathcal F_t\)-measurable event \(A \subset \{ \left| P_t \right| \le M,\ D_t\ge 1/M\}\), with \(M>0\), while at the next time step \(t+1\) the extra shares, now worth \(\varepsilon P_{t+1}\), are consumed in addition to \(D_{t+1}\), and for times \(s\ge t+2\) the consumption remains \(D_s\), as before. That is, define \({\phi }_s^\varepsilon = {\phi }_s + \varepsilon 1_{\{s= t\}\cap A}\) and \(C^\varepsilon _s = D_s -\varepsilon P_s 1_{\{s = t\}\cap A} +\varepsilon P_s 1_{\{s = t+1\}\cap A} \), and note that this strategy continues to satisfy (33). (Note that \(\varepsilon \) may be either positive or negative.) The change in expected utility from (D, 1) to \((C^\varepsilon ,{\phi }^\varepsilon )\) is thus

$$\begin{aligned} \Delta ^\varepsilon&= \mathbb {E}\left[ 1_A \left\{ (1-\delta ) (D_t - \varepsilon (P_t-D_t))^{\frac{1-\gamma }{\theta }} \right. \right. \nonumber \\&\quad \left. \left. +\, \delta \left( \mathbb {E}\left[ \left\{ (1-\delta ) (D_{t+1} + \varepsilon P_{t+1})^{\frac{1-\gamma }{\theta }} + \delta \left( \mathbb {E}_{t+1}[ U_{t+2}(D) ^{1-\gamma }]\right) ^{\frac{1}{\theta }} \right\} ^{\theta } \right] \right) ^{\frac{1}{\theta }} \right\} ^{\frac{\theta }{1-\gamma }}\right] \nonumber \\&\quad - \mathbb {E}\left[ 1_A \left\{ (1-\delta )D_t^{\frac{1-\gamma }{\theta }} + \delta \left( \mathbb {E}_t\left[ \left\{ (1-\delta ) D_{t+1} ^{\frac{1-\gamma }{\theta }} + \delta \left( \mathbb {E}_{t+1}[ U_{t+2}(D) ^{1-\gamma }]\right) ^{\frac{1}{\theta }} \right\} ^{\theta } \right] \right) ^{\frac{1}{\theta }} \right\} ^{\frac{\theta }{1-\gamma }}\right] \le 0 \end{aligned}$$
(52)

where the last inequality reflects the assumed optimality of (D, 1). For any increasing, concave function \(u(x,y)\), it holds that

$$\begin{aligned}&u_x(x_2,y_2)(x_2-x_1) + u_y(x_2,y_2)(y_2-y_1) \\&\quad \le u(x_2,y_2) - u(x_1,y_1) \le u_x(x_1,y_1)(x_2-x_1) + u_y(x_1,y_1)(y_2-y_1) \end{aligned}$$

whence, on the event A,

$$\begin{aligned}&\frac{\partial U_t}{\partial C_t}( D) \varepsilon (P_t-D_t) + \varepsilon \mathbb {E}_t\left[ \frac{\partial U_t}{\partial C_{t+1} }(D) P_{t+1} \right] \le U_t(D) - U_t(C^\varepsilon ) \nonumber \\&\quad \le \frac{\partial U_t}{\partial C_t}( C^\varepsilon ) \varepsilon (P_t-D_t) + \varepsilon \mathbb {E}_t\left[ \frac{\partial U_t}{\partial C_{t+1} }(C^\varepsilon ) P_{t+1} \right] . \end{aligned}$$
(53)

Note that, from Lemma B.5, on A we also have \(P_{t+1} \le M_1\triangleq \frac{M}{\nu _0}\) and \(D_{t+1} \ge \frac{1}{M_1}\). Set

$$\begin{aligned} C_s^M=\left\{ \begin{array}{ll} 1/(2M) &{} :s=t,\\ 1/(2M_1) &{}:s=t+1,\\ 0 &{} :s\ge t+2.\end{array}\right. \end{aligned}$$

Assuming \(0<\varepsilon <1/(2M^2)\), we have that \(D_s,C_s^\varepsilon \ge C_s^M\) for all \(s\ge t\). Thus, from (53)

$$\begin{aligned} \left| U_t(D) - U_t(C^\varepsilon ) \right|&\le \left| \frac{\partial U_t}{\partial C_t}( C^M) \varepsilon (P_t-D_t) \right| + \left| \varepsilon \frac{\partial U_t}{\partial C_{t+1} }(C^M) \mathbb {E}_t\left[ P_{t+1} \right] \right| \nonumber \\&\le \left| \frac{\partial U_t}{\partial C_t}( C^M) \right| \varepsilon M + M_1 \varepsilon \left| \frac{\partial U_t}{\partial C_{t+1} }(C^M) \right| . \end{aligned}$$
(54)

In view of (54), it follows that the respective incremental ratios are dominated by an integrable random variable, uniformly in \(\varepsilon \). Thus, dividing \(\Delta ^\varepsilon \) in (52) by \(\varepsilon \) and passing to the limit as \(\varepsilon \downarrow 0\), Lebesgue’s dominated convergence theorem yields

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\frac{\Delta ^\varepsilon }{\varepsilon } = \mathbb {E}\left[ 1_A \left( - \frac{\partial U_t(D)}{\partial C_t} (P_t-D_t) +\frac{\partial U_t(D)}{\partial C_{t+1}} P_{t+1} \right) \right] \le 0 \end{aligned}$$

Analogously, as \(\varepsilon \uparrow 0\) it follows that \(\lim _{\varepsilon \uparrow 0}\frac{\Delta ^\varepsilon }{\varepsilon } \ge 0\), whence the limit must be zero. By the tower property of conditional expectation,

$$\begin{aligned} \mathbb {E}\left[ 1_A \left( - \frac{\partial U_t(D)}{\partial C_t} (P_t-D_t) +\frac{\partial U_t(D)}{\partial C_{t+1}} P_{t+1} \right) \right] = 0. \end{aligned}$$

As \(M\uparrow \infty \), the events A generate \(\mathcal F_t\), and recalling the definition of \(m_{t+1,t}\) in (24), it follows that

$$\begin{aligned} P_t = D_t + \mathbb {E}_t\left[ m_{t+1,t} P_{t+1} \right] \qquad \text { a.s.} \end{aligned}$$

\(\square \)
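As a sanity check, outside the proof, the recursion \(P_t = D_t + \mathbb{E}_t[m_{t+1,t} P_{t+1}]\) just established, combined with the transversality condition of Corollary B.10, pins down the price as a discounted sum of dividends. In a deterministic toy setting with a constant one-period stochastic discount factor m and constant dividend growth g (illustrative values satisfying \(mg<1\), not outputs of the model), the guess \(P_t = k D_t\) gives \(k = 1 + mgk\), i.e. \(P_t = D_t/(1-mg)\), which matches the truncated discounted-dividend sum:

```python
# Deterministic toy check of the pricing recursion P_t = D_t + m * P_{t+1}
# with D_{t+1} = g * D_t; the values of m and g are illustrative assumptions.
m, g = 0.95, 1.02      # need m * g < 1 for the sum to converge
D0 = 1.0

# closed form from P_t = D_t + m * g * P_t  =>  P_t = D_t / (1 - m * g)
closed_form = D0 / (1.0 - m * g)

# discounted-dividend sum sum_s (m*g)^s * D0, truncated far in the future;
# by the geometric decay, the neglected tail is negligible (transversality)
price = sum((m * g) ** s * D0 for s in range(10_000))

assert abs(price - closed_form) < 1e-9
print(closed_form)  # ≈ 32.26
```

The stochastic case proved above replaces \(m\) by the state-dependent \(m_{t+1,t}\) of (24), but the same two ingredients, the one-period recursion and transversality, yield the price representation (28).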


Cite this article

Bichuch, M., Guasoni, P. The learning premium. Math Finan Econ 14, 175–205 (2020). https://doi.org/10.1007/s11579-019-00251-z
