Consumption in incomplete markets

Published in Finance and Stochastics.

Abstract

We develop a method to find approximate solutions, and their accuracy, to consumption–investment problems with isoelastic preferences and infinite horizon, in incomplete markets where state variables follow a multivariate diffusion. We construct upper and lower contractions; these are fictitious complete markets in which state variables are fully hedgeable, but their dynamics is distorted. Such contractions yield pointwise upper and lower bounds for both the value function and the optimal consumption of the original incomplete market, and their optimal policies are explicit in typical models. Approximate consumption–investment policies coincide with the optimal one if the market is complete or utility is logarithmic.


Notes

  1. The continuous-time literature begins with the work of Merton on constant [32] and stochastic [33] investment opportunities. Much of the subsequent research, among which we mention Karatzas et al. [27], Cox and Huang [8], Karatzas et al. [28], He and Pearson [23], Duffie et al. [11], aims at characterising optimal policies with either martingale or control methods. Kim and Omberg [30] and Zariphopoulou [40] find explicit solutions in an incomplete market without consumption, and Wachter [39] in a complete market with consumption. Liu [31] extends these results to a wide class of quadratic models, in which optimal policies are given in terms of solutions of Riccati differential equations, again in models that are either complete with consumption or incomplete but without consumption. Existence and verification theorems for optimal consumption–investment problems in different settings of incomplete markets are discussed in Fleming and Hernández-Hernández [14], Fleming and Pang [15], Castañeda-Leyva and Hernández-Hernández [6], Hata and Sheu [20, 21], with constraints on model parameters and admissible strategies. Rogers [37] offers a recent survey of the portfolio choice literature.

  2. An exception is logarithmic utility, for which the optimal portfolio is myopic, intertemporal hedging is absent and the consumption–wealth ratio constantly equals the time-preference rate, hence is insensitive to market completeness and asset dynamics; see Goll and Kallsen [17] for a general statement.

  3. Campbell and Viceira [5, Sect. 5.1], [4] study the policies resulting from a log-linear approximation of the budget constraint and investigate the impact of stochastic investment opportunities on consumption and investment policies, while the accuracy of the approximation is not analysed in detail. Recently, Pohl et al. [34] show that such log-linear approximations lead to large numerical errors in asset pricing models. Haugh et al. [22] calculate numerical upper bounds for the maximum power utility from terminal wealth by adding artificial assets that complete the market, in the spirit of He and Pearson [23] and Cvitanić and Karatzas [10]. Bick et al. [3] employ a similar approach with intertemporal consumption, assuming deterministic risk premia for unhedgeable risks.

  4. Formally, \(\tilde{A}\) and \(\tilde{b}\) are functions defined on \(\mathbb{R}^{n}\times E\), though they only depend on the last \(k\) coordinates in the set \(E\). This is also the case in the definitions of martingale problems in the rest of the paper.

  5. Revuz and Yor [36, Theorem VII.2.7] requires an extension of the probability space when the coefficients \(A\) and \(\Sigma \) vanish. Such an extension is not required here as both coefficients are strictly positive definite.

  6. A set \(E\) is star-shaped with respect to a point \(y_{0}\in E\) if for each \(y\in E\), the line segment \(\{\alpha y+(1-\alpha )y_{0}: \alpha \in [0,1]\} \subseteq E\). This is always the case if \(E\) is convex.
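The definition lends itself to a direct numerical check; a minimal sketch (the sets, centre and sample points below are illustrative, not from the paper):

```python
import numpy as np

def is_star_shaped(member, y0, samples, n_alpha=50):
    """Check numerically that each segment from y0 to a sample point stays in the set."""
    y0 = np.asarray(y0)
    for y in samples:
        for a in np.linspace(0.0, 1.0, n_alpha):
            if not member(a * np.asarray(y) + (1 - a) * y0):
                return False
    return True

# The open unit disk is convex, hence star-shaped with respect to any of its points.
disk = lambda p: float(p @ p) < 1.0
rng = np.random.default_rng(0)
pts = [p for p in rng.uniform(-0.7, 0.7, size=(100, 2))]
print(is_star_shaped(disk, [0.2, -0.1], pts))                        # True

# An annulus is not star-shaped: a segment between opposite points crosses the hole.
annulus = lambda p: 0.25 < float(p @ p) < 1.0
print(is_star_shaped(annulus, [0.7, 0.0], [np.array([-0.7, 0.0])]))  # False
```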

  7. This definition of geometric mean for matrices, credited to Pusz and Woronowicz [35], implies that \(B\# C = C \# B\) is the unique positive definite solution \(X\) to the matrix equations \(X B^{-1} X = C\) and \(X C^{-1} X = B\). Extending the definition by continuity, \(B\# C\) is defined also if \(B\) or \(C\) is positive semidefinite (Bhatia [2, Chap. 4]). See also Horn and Johnson [25, Sect. 7.2].
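These properties can be verified numerically using the standard explicit formula \(B\# C = B^{1/2}(B^{-1/2}CB^{-1/2})^{1/2}B^{1/2}\); a sketch with arbitrary positive definite matrices (not from the paper):

```python
import numpy as np

def sqrtm_spd(M):
    """Symmetric square root of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(B, C):
    """Matrix geometric mean B # C = B^{1/2} (B^{-1/2} C B^{-1/2})^{1/2} B^{1/2}."""
    Bh = sqrtm_spd(B)
    Bih = np.linalg.inv(Bh)
    return Bh @ sqrtm_spd(Bih @ C @ Bih) @ Bh

B = np.array([[2.0, 0.5], [0.5, 1.0]])
C = np.array([[1.0, 0.2], [0.2, 3.0]])
X = geometric_mean(B, C)

print(np.allclose(X @ np.linalg.inv(B) @ X, C))   # X B^{-1} X = C: True
print(np.allclose(X @ np.linalg.inv(C) @ X, B))   # X C^{-1} X = B: True
print(np.allclose(geometric_mean(C, B), X))       # B # C = C # B: True
```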

  8. That is, the ratio between the sum of all dividends distributed in a calendar year by the companies included in the index, and the sum of their market capitalisations at the end of the same year. Note that the variable \(Y\) here is used as a state variable, as the real return reflects both price changes and dividend distributions.

  9. http://www.econ.yale.edu/~shiller/data/chapt26.xlsx.

  10. In the model (5.1) and (5.2), \(Y\) is a square-root process. Thus the stationary distribution of \(Y\) is a Gamma distribution with shape parameter \({2b\theta }/{a^{2}}\) and scale parameter \({a^{2}}/{2b}\).
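The Gamma form of the stationary density can be checked by verifying that the stationary Fokker–Planck probability flux vanishes; a sketch with illustrative parameter values (not the estimates used in the paper):

```python
import numpy as np
from scipy.stats import gamma

# Square-root (CIR) dynamics dY = b(theta - Y)dt + a sqrt(Y) dW,
# with illustrative parameter values.
b, theta, a = 2.0, 0.04, 0.2
shape, scale = 2 * b * theta / a**2, a**2 / (2 * b)     # Gamma(4, 0.01) here

y = np.linspace(1e-3, 0.3, 2000)
p = gamma.pdf(y, shape, scale=scale)

# Stationarity <=> zero probability flux in the Fokker-Planck equation:
# J(y) = b(theta - y) p(y) - (1/2) d/dy [a^2 y p(y)] = 0.
flux = b * (theta - y) * p - 0.5 * np.gradient(a**2 * y * p, y)

print(np.max(np.abs(flux)) < 1e-2 * np.max(p))   # flux vanishes (up to FD error): True
```

The stationary mean `shape * scale` equals the mean-reversion level \(\theta\), as expected.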

References

  1. Barles, G., Souganidis, P.E.: Convergence of approximation schemes for fully nonlinear second order equations. Asymptot. Anal. 4, 271–283 (1991)

  2. Bhatia, R.: Positive Definite Matrices. Princeton University Press, Princeton (2007)

  3. Bick, B., Kraft, H., Munk, C.: Solving constrained consumption–investment problems by simulation of artificial market strategies. Manag. Sci. 59, 485–503 (2013)

  4. Campbell, J.Y., Viceira, L.M.: Who should buy long-term bonds? Am. Econ. Rev. 91, 99–127 (2001)

  5. Campbell, J.Y., Viceira, L.M.: Strategic Asset Allocation: Portfolio Choice for Long-Term Investors, 1st edn. Oxford University Press, New York (2002)

  6. Castañeda-Leyva, N., Hernández-Hernández, D.: Optimal consumption–investment problems in incomplete markets with stochastic coefficients. SIAM J. Control Optim. 44, 1322–1344 (2005)

  7. Cheridito, P., Filipović, D., Yor, M.: Equivalent and absolutely continuous measure changes for jump-diffusion processes. Ann. Appl. Probab. 15, 1713–1732 (2005)

  8. Cox, J.C., Huang, C.-f.: Optimal consumption and portfolio policies when asset prices follow a diffusion process. J. Econ. Theory 49, 33–83 (1989)

  9. Cox, J.C., Ingersoll, J.E., Ross, S.A.: A theory of the term structure of interest rates. Econometrica 53, 385–407 (1985)

  10. Cvitanić, J., Karatzas, I.: Convex duality in constrained portfolio optimization. Ann. Appl. Probab. 2, 767–818 (1992)

  11. Duffie, D., Fleming, W.H., Soner, H.M., Zariphopoulou, T.: Hedging in incomplete markets with HARA utility. J. Econ. Dyn. Control 21, 753–782 (1997)

  12. Dybvig, P.H., Rogers, L.C.G., Back, K.: Portfolio turnpikes. Rev. Financ. Stud. 12, 165–195 (1999)

  13. Feller, W.: Two singular diffusion problems. Ann. Math. 54, 173–182 (1951)

  14. Fleming, W.H., Hernández-Hernández, D.: An optimal consumption model with stochastic volatility. Finance Stoch. 7, 245–262 (2003)

  15. Fleming, W.H., Pang, T.: An application of stochastic control theory to financial economics. SIAM J. Control Optim. 43, 502–531 (2004)

  16. Gilbarg, D., Trudinger, N.S.: Elliptic Partial Differential Equations of Second Order. Springer, Berlin (1998)

  17. Goll, T., Kallsen, J.: A complete explicit solution to the log-optimal portfolio problem. Ann. Appl. Probab. 13, 774–799 (2003)

  18. Guasoni, P., Robertson, S.: Portfolios and risk premia for the long run. Ann. Appl. Probab. 22, 239–284 (2012)

  19. Guasoni, P., Wang, G.: Consumption and investment with interest rate risk. J. Math. Anal. Appl. 476, 215–239 (2019)

  20. Hata, H., Sheu, S.-J.: On the Hamilton–Jacobi–Bellman equation for an optimal consumption problem: I. Existence of solution. SIAM J. Control Optim. 50, 2373–2400 (2012)

  21. Hata, H., Sheu, S.-J.: On the Hamilton–Jacobi–Bellman equation for an optimal consumption problem: II. Verification theorem. SIAM J. Control Optim. 50, 2401–2430 (2012)

  22. Haugh, M., Kogan, L., Wang, J.: Evaluating portfolio policies: a duality approach. Oper. Res. 54, 405–418 (2006)

  23. He, H., Pearson, N.D.: Consumption and portfolio policies with incomplete markets and short-sale constraints: the infinite dimensional case. J. Econ. Theory 54, 259–304 (1991)

  24. Heath, D., Schweizer, M.: Martingales versus PDEs in finance: an equivalence result with examples. J. Appl. Probab. 37, 947–957 (2000)

  25. Horn, R.A., Johnson, C.R.: Matrix Analysis, 2nd edn. Cambridge University Press, Cambridge (2013)

  26. Jin, X.: Consumption and portfolio turnpike theorems in a continuous-time finance model. J. Econ. Dyn. Control 22, 1001–1026 (1998)

  27. Karatzas, I., Lehoczky, J.P., Shreve, S.E.: Optimal portfolio and consumption decisions for a small investor on a finite horizon. SIAM J. Control Optim. 25, 1557–1586 (1987)

  28. Karatzas, I., Lehoczky, J.P., Shreve, S.E., Xu, G.-L.: Martingale and duality methods for utility maximization in an incomplete market. SIAM J. Control Optim. 29, 702–730 (1991)

  29. Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus, 2nd edn. Springer, New York (1991)

  30. Kim, T.S., Omberg, E.: Dynamic nonmyopic portfolio behavior. Rev. Financ. Stud. 9, 141–161 (1996)

  31. Liu, J.: Portfolio selection in stochastic environment. Rev. Financ. Stud. 20, 1–39 (2007)

  32. Merton, R.C.: Lifetime portfolio selection under uncertainty: the continuous-time case. Rev. Econ. Stat. 51, 247–257 (1969)

  33. Merton, R.C.: An intertemporal capital asset pricing model. Econometrica 41, 867–887 (1973)

  34. Pohl, W., Schmedders, K., Wilms, O.: Higher order effects in asset pricing models with long-run risks. J. Finance 73, 1061–1111 (2018)

  35. Pusz, W., Woronowicz, S.L.: Functional calculus for sesquilinear forms and the purification map. Rep. Math. Phys. 8, 159–170 (1975)

  36. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion, 3rd edn. Springer, New York (2001)

  37. Rogers, L.C.G.: Optimal Investment. Springer Briefs in Quantitative Finance. Springer, Heidelberg (2013)

  38. Stroock, D.W., Varadhan, S.R.S.: Multidimensional Diffusion Processes. Springer, Berlin (2006)

  39. Wachter, J.: Portfolio and consumption decision under mean-reverting returns: an exact solution for complete markets. J. Financ. Quant. Anal. 37, 63–91 (2002)

  40. Zariphopoulou, T.: A solution approach to valuation with unhedgeable risks. Finance Stoch. 5, 61–82 (2001)

Acknowledgements

We are indebted to Scott Robertson, Hao Xing and two anonymous referees for their suggestions, which significantly improved the paper. We thank seminar participants at the AMS meeting in Boston, the University of Michigan and the Rutgers Mathematical Finance and Partial Differential Equations Conference for helpful comments.

Author information

Correspondence to Paolo Guasoni.

P. Guasoni was partially supported by the ERC (279582), NSF (DMS-1412529), and SFI (16/IA/4443 and 16/SPP/3347).

Appendix: Proofs

We first recall a well-known duality property of wealth processes and stochastic discount factors. For any \(\eta \in \mathcal{R}\) and \((\pi ,\ell )\in \mathcal{A}\),

$$ \frac{d(M^{\eta }_{t}X^{\pi ,\ell }_{t})}{M^{\eta }_{t}X^{\pi ,\ell }_{t}} = (\pi _{t}'-\mu '\Sigma ^{-1}-\eta _{t}'\Upsilon '\Sigma ^{-1}) \sigma dZ_{t} + \eta _{t}'a dW_{t}-\ell _{t}dt. $$

Thus, as \(X^{\pi ,\ell } \ge 0\) and \(X^{\pi ,\ell }_{t} \ell _{t} = c_{t}\), it follows that \(M^{\eta } X^{\pi ,\ell } + \int _{0}^{\cdot }M^{\eta }_{s}c_{s}ds\) is a nonnegative local martingale and therefore a supermartingale. As \(M^{\eta }_{t}X^{\pi ,\ell }_{t}\geq 0\), this yields \(\mathbb{E}[\int _{0}^{t}M^{\eta }_{s}c_{s}ds]\leq x\), and by monotone convergence as \(t\uparrow \infty \), \(\mathbb{E}[\int _{0}^{\infty }M^{\eta }_{s}c_{s}ds]\leq x\). As this inequality holds for all \(\eta \in \mathcal{R}\), we get

$$ \sup _{\eta \in \mathcal{R}}\mathbb{E}\bigg[\int _{0}^{\infty }M^{ \eta }_{t}c_{t}dt\bigg]\leq x. $$
(A.1)
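This budget constraint can be illustrated by Monte Carlo in the Black–Scholes special case (one asset, complete market, so \(\eta \) is absent), a sketch with hypothetical constant parameters; in this case the supermartingale bound holds with equality:

```python
import numpy as np

# Budget-constraint sketch in the Black-Scholes special case, with
# hypothetical constant portfolio weight pi and consumption rate ell.
r, mu, sigma = 0.02, 0.05, 0.2        # interest rate, excess return, volatility
pi, ell, x = 0.5, 0.03, 1.0           # portfolio weight, consumption rate, wealth
lam = mu / sigma                      # market price of risk
T, n_paths = 1.0, 200_000

rng = np.random.default_rng(42)
Z = np.sqrt(T) * rng.standard_normal(n_paths)     # Brownian motion at time T

# Stochastic discount factor and wealth at time T (both lognormal):
M = np.exp(-r * T - lam * Z - 0.5 * lam**2 * T)
X = x * np.exp((r + pi * mu - ell - 0.5 * pi**2 * sigma**2) * T + pi * sigma * Z)

# Here d(MX)/(MX) = (pi*sigma - lam) dZ - ell dt, so E[M_T X_T] = x e^{-ell T}:
# the supermartingale inequality is an equality in this complete market.
print(abs(np.mean(M * X) - x * np.exp(-ell * T)) < 2e-3)   # True
```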

The next lemma establishes an upper bound, uniform for any policy \((\pi ,\ell ) \in \mathcal{A}\), for the expected utility from consumption up to a horizon \(T\) (cf. [18, Lemma 5] for expected utility from terminal wealth). We refer to the left-hand side of (A.2) below as the primal bound and to the right-hand side as the dual bound. The limits of the primal and dual bounds as \(T\uparrow \infty \) give the lower and upper bounds for the value function. If there exist \((\hat{\pi },\hat{\ell })\in \mathcal{A}\) and \(\hat{\eta }\in \mathcal{R}\) such that (with \(\hat{c} = \hat{\ell }X^{\hat{\pi },\hat{\ell }}\))

$$ \mathbb{E}\bigg[\int _{0}^{\infty }e^{-\beta t} \frac{\hat{c}^{1-\gamma }_{t}}{1-\gamma }dt\bigg] = \frac{x^{1-\gamma }}{1-\gamma }\mathbb{E}\bigg[\int _{0}^{\infty }e^{- \frac{\beta }{\gamma }t}(M_{t}^{\hat{\eta }})^{ \frac{\gamma -1}{\gamma }}dt\bigg]^{\gamma }, $$

then \(\hat{\pi }\), \(\hat{\ell }\) and \(\hat{\eta }\) are the optimal portfolio, consumption and market price of nontraded risk, respectively.

Lemma A.1

For any \((\pi ,\ell ) \in \mathcal{A}\), \(\eta \in \mathcal{R}\) and \(T>0\),

$$ \mathbb{E}\bigg[\int _{0}^{T}e^{-\beta t} \frac{c^{1-\gamma }_{t}}{1-\gamma }dt\bigg] \leq \frac{x^{1-\gamma }}{1-\gamma }\mathbb{E}\bigg[\int _{0}^{T}e^{- \frac{\beta }{\gamma }t} (M_{t}^{\eta } )^{\frac{\gamma -1}{\gamma }}dt \bigg]^{\gamma }. $$
(A.2)

Proof

Recall that for any differentiable, strictly increasing, and strictly concave \(f\) on \((0,\infty )\) and \(z> 0\), we have \(\sup _{x>0}(f(x) - xz) = f((f')^{-1}(z)) - (f')^{-1}(z)z\). Let \(f(x) = e^{-\beta t}\frac{x^{1-\gamma }}{1-\gamma }\); then \((f')^{-1}(z) = e^{-\frac{\beta t}{\gamma }}z^{-\frac{1}{\gamma }}\). Replacing \(x\) by \(c_{t}\) and \(z\) by \(yM^{\eta }_{t}\) and setting \(q = \frac{\gamma -1}{\gamma }\), it follows that

$$ e^{-\beta t}\frac{c^{1-\gamma }_{t}}{1-\gamma } \leq e^{- \frac{\beta }{\gamma }t}\frac{(yM^{\eta }_{t})^{q}}{1-\gamma } - e^{- \frac{\beta }{\gamma }t}(yM^{\eta }_{t})^{q} + yM^{\eta }_{t}c_{t} \qquad \text{for all }y>0 , $$

whence integrating over \([0,T]\), taking expectations and recalling from (A.1) that \(\mathbb{E}[\int _{0}^{T}M^{\eta }_{t}c_{t}dt]\leq x\) yields

$$ \mathbb{E}\bigg[\int _{0}^{T}e^{-\beta t} \frac{c^{1-\gamma }_{t}}{1-\gamma }dt\bigg] \leq y^{q} \frac{\gamma }{1-\gamma }\mathbb{E}\left [\int _{0}^{T}e^{- \frac{\beta }{\gamma }t}(M_{t}^{\eta })^{q}dt\right ] + yx. $$

The right-hand side reaches its minimum at \(\hat{y} = {x^{-\gamma }}/{\mathbb{E}[\int _{0}^{T}e^{- \frac{\beta }{\gamma }t}(M_{t}^{\eta })^{q}dt]^{-\gamma }}\), and the claim follows by substituting this value. □
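The Fenchel-type inequality at the start of the proof can be verified numerically; a sketch with hypothetical values of \(\beta \), \(\gamma \), \(t\) and \(z\):

```python
import numpy as np

# For f(x) = e^{-beta t} x^{1-gamma}/(1-gamma), the supremum of f(x) - x z over
# x > 0 is attained at x* = (f')^{-1}(z) = e^{-beta t/gamma} z^{-1/gamma}.
beta, gam, t, z = 0.05, 2.0, 1.0, 0.8

f = lambda x: np.exp(-beta * t) * x**(1 - gam) / (1 - gam)
x_star = np.exp(-beta * t / gam) * z**(-1 / gam)

# Compare the closed-form supremum with a brute-force grid search:
xs = np.linspace(0.01, 10.0, 100_000)
grid_sup = np.max(f(xs) - xs * z)

print(np.isclose(grid_sup, f(x_star) - x_star * z, atol=1e-6))  # True
```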

Proof of Lemma 3.1

Consider the differential equation

$$\begin{aligned} &\bigg(-\frac{\beta }{1-\gamma } + \pi '\mu - \frac{\gamma \pi '\Sigma \pi }{2}- \ell + r \bigg)f \\ &+ \frac{(\nabla f)' b}{1-\gamma } + \pi '\Upsilon \nabla f + \frac{1}{2(1-\gamma )}\operatorname{tr} (A D^{2} f) = - \frac{ \ell ^{1-\gamma }}{1-\gamma }. \end{aligned}$$
(A.3)

From Gilbarg and Trudinger [16, Theorem 6.6.13], assumptions (i) and (iii) imply that (A.3) with the boundary condition \(f_{n}(y) = u_{1}(y) = g_{1}(y)^{\gamma }\) on \(\partial E_{n}\) has a solution \(f_{n} \in C^{2,\alpha }(E_{n})\). By Itô’s lemma,

$$\begin{aligned} &d\bigg(e^{-\beta t}\frac{ X_{t}^{1-\gamma }}{1-\gamma }f_{n}(Y_{t}) \bigg) \\ &= e^{-\beta t} X^{1-\gamma }_{t}\bigg( \frac{(\nabla f_{n})' b}{1-\gamma } + \frac{1}{2(1-\gamma )}\operatorname{tr} (A D^{2} f_{n}) + \pi '\Upsilon \nabla f_{n} \\ &\qquad \qquad \qquad + \Big(-\frac{\beta }{1-\gamma } + \pi (Y_{t})'\mu - \frac{\gamma \pi (Y_{t})'\Sigma \pi (Y_{t})}{2} - \ell (Y_{t}) + r\Big) f_{n} \bigg)dt \\ &\phantom{=:}+e^{-\beta t} X^{1-\gamma }_{t}\frac{(\nabla f_{n})' a}{1-\gamma }dW_{t} + e^{-\beta t} X^{1-\gamma }_{t} f_{n}\pi '\sigma dZ_{t}. \end{aligned}$$

For any initial value \(x\in (0,\infty )\) and \(y\in E\), there exists \(n\) such that \(\frac{1}{n}< x< n\) and \(y\in E_{n}\). Let \(\tau _{n} = \inf \{t\geq 0: (X_{t},Y_{t}) \notin (\frac{1}{n},n) \times E_{n}\}\). Because \(f_{n}\) satisfies (A.3) in the bounded domain \(E_{n}\), we have

$$ \frac{x^{1-\gamma }}{1-\gamma }f_{n}(y) = \mathbb{E}\bigg[e^{-\beta \tau _{n}}\frac{ X_{\tau _{n}}^{1-\gamma }}{1-\gamma } u_{1}(Y_{\tau _{n}}) + \int _{0}^{\tau _{n}} \frac{e^{-\beta t} X_{t}^{1-\gamma } \ell (Y_{t})^{1-\gamma }}{1-\gamma }dt \bigg]. $$

The existence of a unique solution to the martingale problem from Assumption 2.1(ii) implies that \((X,Y)\) never explodes. Now the same argument as in the proof of Theorem 1 in Heath and Schweizer [24] together with the local Hölder-continuity of the model parameters implies that \((X,Y)\) is a strong Markov process. Thus for some finite \(T>0\),

$$\begin{aligned} &\mathbb{E}\bigg[\int _{0}^{\infty }e^{-\beta t} \frac{ ( \ell _{t} X_{t} )^{1-\gamma }}{1-\gamma }dt\bigg\rvert \mathcal{F}_{\tau _{n}\wedge T}\bigg] \\ &= \mathbb{E}\bigg[\int _{\tau _{n}\wedge T}^{\infty }e^{-\beta t} \frac{ ( \ell _{t} X_{t} )^{1-\gamma }}{1-\gamma }dt\bigg\rvert \mathcal{F}_{\tau _{n}\wedge T}\bigg] + \int _{0}^{\tau _{n}\wedge T}e^{- \beta t}\frac{\left ( \ell _{t} X_{t}\right )^{1-\gamma }}{1-\gamma }dt \\ &= e^{-\beta (\tau _{n}\wedge T)} \frac{X^{1-\gamma }_{\tau _{n}\wedge T}}{1-\gamma }u_{1}(Y_{\tau _{n} \wedge T}) + \int _{0}^{\tau _{n}\wedge T}e^{-\beta t} \frac{ ( \ell _{t} X_{t} )^{1-\gamma }}{1-\gamma }dt. \end{aligned}$$

The tower property of conditional expectations implies that

$$\begin{aligned} \frac{x^{1-\gamma }}{1-\gamma }u_{1}(y) &= \mathbb{E}\bigg[\int _{0}^{ \infty }e^{-\beta t} \frac{ ( \ell _{t} X_{t} )^{1-\gamma }}{1-\gamma }dt\bigg] \\ &=\mathbb{E}\bigg[e^{-\beta (\tau _{n}\wedge T)} \frac{X^{1-\gamma }_{\tau _{n}\wedge T}}{1-\gamma }u_{1}(Y_{\tau _{n} \wedge T}) + \int _{0}^{\tau _{n}\wedge T}e^{-\beta t} \frac{ ( \ell _{t} X_{t} )^{1-\gamma }}{1-\gamma }dt\bigg]. \end{aligned}$$

Letting \(T\rightarrow \infty \), by dominated convergence (for the first term in the expectation) and monotone convergence (for the second term), the right-hand side converges to

$$ \mathbb{E}\bigg[e^{-\beta \tau _{n}} \frac{ X_{\tau _{n}}^{1-\gamma }}{1-\gamma } u_{1}(Y_{\tau _{n}}) + \int _{0}^{\tau _{n}} \frac{e^{-\beta t} X_{t}^{1-\gamma } \ell (Y_{t})^{1-\gamma }}{1-\gamma }dt \bigg] = \frac{x^{1-\gamma }}{1-\gamma }f_{n}(y). $$

As this equality holds for every \(n\), it follows that \(u_{1}\) is in \(C^{2}(E)\) and solves (A.3) in \(E\), whence \(g_{1}= u_{1}^{\frac{1}{\gamma }}\) solves

$$\begin{aligned} &r +\pi \mu - \frac{\gamma \pi '\Sigma \pi }{2}+ \frac{\beta }{\gamma -1} + \frac{\gamma (\nabla g_{1})' b}{(1-\gamma )g_{1}} \\ &+\gamma \pi ' \Upsilon \frac{\nabla g_{1}}{g_{1}}+ \frac{\gamma \operatorname{tr} (AD^{2}g_{1})}{2(1-\gamma )g_{1}} - \frac{\gamma (\nabla g_{1})'A\nabla g_{1}}{2g_{1}^{2}} + \frac{g_{1}^{-\gamma }\ell ^{1-\gamma }}{1-\gamma } -\ell =0. \end{aligned}$$

Thus

$$\begin{aligned} &r + \frac{\beta }{\gamma -1} + \frac{\gamma (\nabla g_{1})' b}{(1-\gamma )g_{1}} + \frac{\gamma \operatorname{tr} (AD^{2}g_{1})}{2(1-\gamma )g_{1}} - \frac{\gamma (\nabla g_{1})'A\nabla g_{1}}{2g_{1}^{2}} \\ &+\sup _{\pi ,\ell }\bigg(\pi '\mu - \frac{\gamma }{2}\pi '\Sigma \pi + \gamma \pi ' \Upsilon \frac{\nabla g_{1}}{g_{1}}+ \frac{g_{1}^{-\gamma }\ell ^{1-\gamma }}{1-\gamma } -\ell \bigg) \geq 0. \end{aligned}$$

Then \(\mathcal{H}(y,g_{1},\nabla g_{1}, D^{2} g_{1}) \geq 0\) for \(0 < \gamma <1\) and \(\mathcal{H}(y,g_{1},\nabla g_{1}, D^{2}g_{1}) \leq 0\) for \(\gamma >1\). On the other hand, consider the differential equation

$$\begin{aligned} &\bigg(-\frac{\beta }{\gamma -1}- r - \frac{1}{2\gamma }\mu '\Sigma ^{-1} \mu - \frac{1}{2\gamma }\eta 'A\eta + \frac{\eta '\Upsilon '\Sigma ^{-1}\Upsilon \eta }{2\gamma }\bigg)f \\ &+(\nabla f)'\bigg( \frac{\gamma b}{\gamma -1} - \Upsilon '\Sigma ^{-1} (\mu + \Upsilon \eta ) + A\eta \bigg) + \frac{\gamma \operatorname{tr} (A D^{2} f )}{2(\gamma -1)} = \frac{\gamma }{1-\gamma }. \end{aligned}$$
(A.4)

By arguments similar to those above, assumptions (ii) and (iii) imply that there exists a unique solution \(h_{n}\) in \(E_{n}\) with the boundary condition \(h_{n} = g_{2}\) on \(\partial E_{n}\). By Itô’s lemma,

$$\begin{aligned} &d\bigg(e^{-\frac{\beta }{\gamma }t} (M^{\eta }_{t} )^{ \frac{\gamma -1}{\gamma }}h_{n}(Y_{t})\bigg) \\ &= \frac{\gamma -1}{\gamma }e^{-\frac{\beta }{\gamma } t} (M^{\eta }_{t} )^{\frac{\gamma -1}{\gamma }} \bigg(\Big(-\frac{\beta }{\gamma -1}- r - \frac{\mu '\Sigma ^{-1}\mu }{2\gamma } - \frac{\eta 'A\eta }{2\gamma } + \frac{\eta '\Upsilon '\Sigma ^{-1}\Upsilon \eta }{2\gamma }\Big)h_{n} \\ & \qquad \qquad \qquad \qquad \qquad \,\,\,+ \frac{\gamma \nabla h_{n}' b}{\gamma -1} + \frac{\gamma \operatorname{tr} (A D^{2} h_{n} )}{2(\gamma -1)}\bigg)dt \\ &\phantom{=:}+\frac{\gamma -1}{\gamma }e^{-\frac{\beta }{\gamma } t} (M^{\eta }_{t} )^{\frac{\gamma -1}{\gamma }}\big(- (\mu '\Sigma ^{-1}+\eta ' \Upsilon '\Sigma ^{-1} )\Upsilon + \eta 'A\big)\nabla h_{n} dt \\ &\phantom{=:}+ \frac{\gamma -1}{\gamma }e^{-\frac{\beta }{\gamma }t} (M^{\eta }_{t} )^{\frac{\gamma -1}{\gamma }}h_{n}\big(- (\mu '\Sigma ^{-1} + \eta ' \Upsilon '\Sigma ^{-1} )\sigma dZ_{t} + \eta 'adW_{t}\big) \\ &\phantom{=:}+e^{-\frac{\beta }{\gamma }t} (M^{\eta }_{t} )^{ \frac{\gamma -1}{\gamma }}(\nabla h_{n})' a dW_{t}. \end{aligned}$$

For any initial value \(y\in E\), there exists \(n\) such that \(y\in E_{n}\). We therefore define \(\tau _{n} = \inf \{t\geq 0: Y_{t} \notin E_{n}\}\). As \(h_{n}\) satisfies (A.4) in the bounded domain \(E_{n}\), we have

$$ h_{n}(y) = \mathbb{E}\bigg[e^{-\frac{\beta }{\gamma } \tau _{n}} (M^{ \eta }_{\tau _{n}} )^{\frac{\gamma -1}{\gamma }}g_{2}(Y_{\tau _{n}}) + \int _{0}^{\tau _{n}}e^{-\frac{\beta }{\gamma }t} (M^{\eta }_{t} )^{ \frac{\gamma -1}{\gamma }}dt\bigg]. $$

The fact that \(Y\) never explodes together with the local Hölder-continuity of the model parameters implies that \(Y\) is a strong Markov process. Thus similarly to the argument above for \(u_{1}\), we get \(h_{n} = g_{2}\) in \(E_{n}\). Because this holds for every \(n\), \(g_{2}\) solves (A.4) in \(E\) or, equivalently,

$$\begin{aligned} 0&=r + \frac{\beta }{\gamma -1} + \frac{\gamma (\nabla g_{2})' b}{(1-\gamma )g_{2}}+ \frac{\gamma \operatorname{tr} (A D^{2} g_{2} )}{2(1-\gamma )g_{2}} + \frac{\mu '\Sigma ^{-1}\mu }{2\gamma } + \frac{(\nabla g_{2})'\Upsilon '\Sigma ^{-1}\mu }{g_{2}} \\ &\phantom{=:}+ \frac{\gamma g_{2}^{-1}}{1-\gamma } + \frac{\eta 'A\eta }{2\gamma } - \frac{\eta '\Upsilon '\Sigma ^{-1}\Upsilon \eta }{2\gamma } + \frac{(\nabla g_{2})'}{g_{2}} ( \Upsilon '\Sigma ^{-1}\Upsilon - A ) \eta . \end{aligned}$$

Note also that

$$\begin{aligned} &\inf _{\eta }\bigg( \frac{\eta 'A\eta }{2\gamma } - \frac{\eta '\Upsilon '\Sigma ^{-1}\Upsilon \eta }{2\gamma } + \frac{(\nabla g_{2})'}{g_{2}} ( \Upsilon '\Sigma ^{-1}\Upsilon - A ) \eta \bigg) \\ & = - \frac{\gamma (\nabla g_{2})' (A - \Upsilon '\Sigma ^{-1}\Upsilon )\nabla g_{2}}{2g_{2}^{2}} , \end{aligned}$$

thus

$$\begin{aligned} &r + \frac{\beta }{\gamma -1} + \frac{\gamma (\nabla g_{2})' b}{(1-\gamma )g_{2}}+ \frac{\gamma \operatorname{tr} (A D^{2} g_{2} )}{2(1-\gamma )g_{2}}- \frac{\gamma (\nabla g_{2})'A\nabla g_{2}}{2g_{2}^{2}} \\ &+ \frac{\mu '\Sigma ^{-1}\mu }{2\gamma } + \frac{(\nabla g_{2})'\Upsilon '\Sigma ^{-1}\mu }{g_{2}}+ \frac{\gamma g_{2}^{-1}}{1-\gamma } + \frac{\gamma (\nabla g_{2})'\Upsilon '\Sigma ^{-1}\Upsilon \nabla g_{2}}{2g_{2}^{2}} \leq 0. \end{aligned}$$

Since \(\sup _{\ell } (\frac{g_{2}^{-\gamma }\ell ^{1-\gamma }}{1-\gamma } - \ell ) = \frac{\gamma g_{2}^{-1}}{1-\gamma }\) and

$$\begin{aligned} &\sup _{\pi }\!\bigg(\!\pi '\!\mu - \frac{\gamma }{2}\pi '\Sigma \pi + \gamma \pi ' \Upsilon \frac{\nabla g_{2}}{g_{2}}\bigg) \\ &= \frac{\mu '\Sigma ^{-1}\mu }{2\gamma } + \frac{(\nabla g_{2})'\Upsilon '\Sigma ^{-1}\mu }{g_{2}}+ \frac{\gamma (\nabla g_{2})'\Upsilon '\Sigma ^{-1}\Upsilon \nabla g_{2}}{2g_{2}^{2}}, \end{aligned}$$

it follows that

$$\begin{aligned} &r + \frac{\beta }{\gamma -1} + \frac{\gamma (\nabla g_{2})' b}{(1-\gamma )g_{2}} + \frac{\gamma \operatorname{tr} (AD^{2}g_{2})}{2(1-\gamma )g_{2}} - \frac{\gamma (\nabla g_{2})'A\nabla g_{2}}{2g_{2}^{2}} \\ &+\sup _{\pi ,\ell }\bigg(\pi '\mu - \frac{\gamma }{2}\pi '\Sigma \pi + \gamma \pi ' \Upsilon \frac{\nabla g_{2}}{g_{2}}+ \frac{g_{2}^{-\gamma }\ell ^{1-\gamma }}{1-\gamma } -\ell \bigg) \leq 0. \end{aligned}$$

Then \(\mathcal{H}(y,g_{2},\nabla g_{2}, D^{2} g_{2}) \leq 0\) if \(0 < \gamma <1\) and \(\mathcal{H}(y,g_{2},\nabla g_{2}, D^{2}g_{2}) \geq 0\) if \(\gamma >1\). Finally, Lemma A.1 implies that \(\frac{x^{1-\gamma }}{1-\gamma } g_{1}^{\gamma } \leq \frac{x^{1-\gamma }}{1-\gamma } g_{2}^{\gamma }\). Thus if \(0 < \gamma <1\), we obtain \(g_{1} \leq g_{2}\) and if \(\gamma >1\), we have \(g_{1} \geq g_{2}\). □
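The three pointwise optimisations over \(\ell \), \(\pi \) and \(\eta \) used in this proof can be checked numerically; a sketch with hypothetical low-dimensional coefficients (chosen so that \(A - \Upsilon '\Sigma ^{-1}\Upsilon \) is positive definite):

```python
import numpy as np
from scipy.optimize import minimize

# Toy coefficients (hypothetical, two assets and two state variables).
mu = np.array([0.04, 0.06])
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
Ups = np.array([[0.05, 0.02], [0.01, 0.03]])       # Upsilon
A = np.array([[0.09, 0.01], [0.01, 0.08]])
gam, g, grad_g = 2.0, 1.5, np.array([0.3, -0.2])
v = grad_g / g                                     # nabla g / g
Si = np.linalg.inv(Sigma)
B = Ups.T @ Si @ Ups                               # Upsilon' Sigma^{-1} Upsilon

# 1) sup_l ( g^{-gam} l^{1-gam}/(1-gam) - l ) = gam g^{-1}/(1-gam), at l = 1/g
ls = np.linspace(1e-3, 5.0, 400_000)
sup_l = np.max(g**(-gam) * ls**(1 - gam) / (1 - gam) - ls)
print(np.isclose(sup_l, gam / ((1 - gam) * g), atol=1e-6))           # True

# 2) sup_pi ( pi'mu - gam/2 pi'Sigma pi + gam pi'Ups v ), closed form as in the proof
val_pi = -minimize(lambda p: -(p @ mu - 0.5 * gam * p @ Sigma @ p
                               + gam * p @ Ups @ v), np.zeros(2)).fun
closed_pi = (mu @ Si @ mu / (2 * gam) + v @ (Ups.T @ Si @ mu)
             + 0.5 * gam * v @ B @ v)
print(np.isclose(val_pi, closed_pi, atol=1e-7))                      # True

# 3) inf_eta ( eta'(A - B)eta/(2 gam) + v'(B - A)eta ) = -(gam/2) v'(A - B)v
val_eta = minimize(lambda e: e @ (A - B) @ e / (2 * gam)
                   + v @ (B - A) @ e, np.zeros(2)).fun
print(np.isclose(val_eta, -0.5 * gam * v @ (A - B) @ v, atol=1e-7))  # True
```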

Lemma 3.1 is conceptually close to the result in Heath and Schweizer [24], which establishes the equivalence between a Feynman–Kac functional and the solution to a partial differential equation with a terminal condition. The difference is that (i) in the present setting the horizon is infinite, so the associated HJB equation has no such terminal condition, and (ii) the equivalence here is established for both the primal and dual bounds of the value function. In addition, the comparison between the primal and dual bounds is used for the existence result in Theorem 3.2, which relies on a method of sub- and supersolutions akin to Hata and Sheu [20] and Gilbarg and Trudinger [16, Chaps. 6 and 13], whereby solutions are established first locally and then globally.

Proof of Theorem 3.2

With \(u = \gamma \ln g\), we can first rewrite the HJB equation as \(\mathcal{G}(y,u,\nabla u, D^{2} u) = 0\), where

$$\begin{aligned} \mathcal{G}(y,u,\nabla u, D^{2} u) &= \gamma e^{-\frac{u}{\gamma }} + ( \nabla u)'\bigg(b + \frac{(1-\gamma )}{\gamma }\Upsilon ' \Sigma ^{-1} \mu \bigg) + \frac{1}{2}\text{tr} (AD^{2}u ) \\ &\phantom{=:}+ \frac{1}{2}(\nabla u)'\bigg(A + \frac{(1-\gamma )}{\gamma } \Upsilon '\Sigma ^{-1}\Upsilon \bigg)\nabla u - \beta \\ &\phantom{=:}+ \frac{(1-\gamma )\mu '\Sigma ^{-1}\mu }{2\gamma } + (1-\gamma ) r, \end{aligned}$$
(A.5)

and \(\underline{u} = \gamma \ln \underline{g}\) is a supersolution and \(\bar{u} = \gamma \ln \bar{g}\) a subsolution. It suffices to show that a classical solution to \(\mathcal{G}(y,u,\nabla u, D^{2} u) = 0\) exists.

For each \(n \in \mathbb{N}\), since \(A\) is positive definite and continuous, the eigenvalues of \(A\) are bounded above and bounded away from zero on \(E_{n}\). Thus there exist \(0<\underline{\lambda }_{n} < \bar{\lambda }_{n}\) such that for any \(x\in \mathbb{R}^{k}\) and \(y\in E_{n}\), we have \(\underline{\lambda }_{n}\sum _{i=1}^{k} x_{i}^{2} \leq \sum _{i,j=1}^{k} A_{ij}(y)x_{i}x_{j} \leq \bar{\lambda }_{n}\sum _{i=1}^{k} x_{i}^{2}\). Then Lemma A.2 below implies that there exists a solution \(u_{n}\) in \(\bar{E}_{n}\) to the boundary value problem

$$\begin{aligned} &\mathcal{G}(y,u,\nabla u, D^{2} u) = 0, \qquad y\in E_{n}, \\ & u|_{\partial E_{n}} = \underline{u}|_{\partial E_{n}}. \end{aligned}$$

Since \(\underline{u} \leq \bar{u}\), the comparison principle (cf. Gilbarg and Trudinger [16, Theorem 10.1]) yields \(\underline{u} \leq u_{n} \leq \bar{u}\) in \(E_{n}\). The same holds for every \(m \geq n\), and thus \((u_{m})_{m\geq n}\) are uniformly bounded in \(E_{n}\). Because \(\bar{E}_{n} \subsetneq E_{n+1}\), Gilbarg and Trudinger [16, Theorem 13.6] implies that for \(m\geq n+1\), there exists \(\alpha ' \in (0,1]\) such that \([\nabla u_{m}]_{\alpha ',E_{n}}\) is bounded above by a constant \(C\), where \([f]_{\alpha ,\Omega } = \sup _{x,y\in \Omega ,x\neq y} \frac{|f(x)-f(y)|}{|x-y|^{\alpha }}\); here \(C\) and \(\alpha '\) depend only on \(\max _{E_{n+1}}|u_{m}|\), \(\underline{\lambda }_{n+1}\), \(\bar{\lambda }_{n+1}\) and are independent of \(m\). Without loss of generality, replace \(\alpha \) by \(\min (\alpha ,\alpha ')\). Then consider \(u_{m}\) as solutions to the linear problem

$$ \mathcal{J}(y,u,\nabla u,D^{2} u) = f(y), $$

where

$$\begin{aligned} \mathcal{J}(y,u,\nabla u,D^{2} u) &= (\nabla u)'\bigg(b + \frac{(1-\gamma )}{\gamma }\Upsilon ' \Sigma ^{-1} \mu \bigg) + \frac{1}{2}\text{tr} (AD^{2}u ), \\ f(y) &= -\gamma e^{-\frac{u_{m}}{\gamma }} - \frac{1}{2}(\nabla u_{m})'\bigg(A + \frac{(1-\gamma )}{\gamma }\Upsilon '\Sigma ^{-1}\Upsilon \bigg) \nabla u_{m} \\ &\phantom{=:}+ \beta - \frac{(1-\gamma )\mu '\Sigma ^{-1}\mu }{2\gamma } - (1- \gamma ) r. \end{aligned}$$

Since \(\nabla u_{m}\) is \(\alpha \)-Hölder-continuous in \(E_{n}\), so is \(f\), for every \(m\geq n+1\). Then the Schauder interior estimates (see Gilbarg and Trudinger [16, Corollary 6.3]) imply that for \(m\geq n+1\), with \(d = \text{dist}(E_{n},\partial E_{n+1})\), we have

$$\begin{aligned} &d\max _{E_{n}} |\nabla u_{m}| + d^{2} \max _{E_{n}}|D^{2} u_{m}| + d^{2+ \alpha }[D^{2} u_{m}]_{\alpha ,E_{n}} \\ &\leq D\Big(\max _{E_{n+1}}|u_{m}| + \max _{E_{n+1}}|f| + [f]_{ \alpha ,E_{n+1}}\Big), \end{aligned}$$

where the constant \(D\) is independent of \(m\). Thus in any compact set \(E_{n}\), for \(m\geq n+1\), the \(u_{m}\) are uniformly bounded in the supremum norm and the \(\nabla u_{m}\) and \(D^{2} u_{m}\) are equicontinuous. From the Arzelà–Ascoli theorem, \((u_{m})\) (up to a subsequence) converges locally uniformly to a function \(u\) and on each \(E_{n}\), \(\nabla u_{m} \rightarrow \nabla u\) and \(D^{2} u_{m} \rightarrow D^{2} u\) uniformly as \(m \uparrow \infty \). Thus \(u\) is a classical solution to (A.5), and \(\underline{u}\leq u\leq \bar{u}\). □

Lemma A.2

There exists a solution to the boundary value problem

$$\begin{aligned} &\mathcal{G}(y,u,\nabla u, D^{2} u) = 0, \qquad y\in E_{n}, \\ & u|_{\partial E_{n}} = \underline{u}|_{\partial E_{n}}. \end{aligned}$$

Proof

This proof follows an idea similar to Hata and Sheu [20]; we discuss the case \(0<\gamma <1\), as the case \(\gamma >1\) follows similarly. By Hata and Sheu [20, Theorem 3.4], it suffices to prove the boundedness, uniformly in \(\tau \in [0,1]\), of the solutions to the two boundary value problems

$$ \begin{aligned} &\mathcal{G}^{\tau }(y,u,\nabla u, D^{2} u) = 0, \qquad y\in E_{n}, \\ & u|_{\partial E_{n}} = \tau \underline{u}|_{\partial E_{n}}, \end{aligned} $$
(A.6)

where \(\mathcal{G}^{\tau }\) is defined by replacing \(\gamma \) with \(1-\tau (1-\gamma )\) in \(\mathcal{G}\), and

$$ \begin{aligned} &\bar{\mathcal{G}}^{\tau }(y,u,\nabla u, D^{2} u) = 0, \qquad y\in E_{n}, \\ & u|_{\partial E_{n}} = 0, \end{aligned} $$
(A.7)

where \(\bar{\mathcal{G}}^{\tau } = \tau e^{-u} + \tau (\nabla u)'b + \frac{1}{2}\text{tr}(AD^{2}u) + \frac{\tau }{2}(\nabla u)' A \nabla u - \tau \beta \). For (A.6), first note that the constant function \(\underline{f}_{n} = -\sup _{y\in \partial E_{n}}|\underline{u}| - \ln \max (\frac{C_{n}}{\gamma },1)\), where

$$ C_{n} = \sup _{y\in \bar{E}_{n},\tau \in [0,1]}\left (\beta - \frac{\tau (1-\gamma )\mu '\Sigma ^{-1}\mu }{2(1-\tau (1-\gamma ))} - \tau (1-\gamma ) r\right ), $$

is a subsolution, and \(\tau \underline{u} (y)\geq \underline{f}_{n}(y)\) for \(y\in \partial E_{n}\). From the comparison principle, for any solution \(u_{n,\tau }\) to (A.6), we get \(u_{n,\tau } \geq \underline{f}_{n}\).

For an upper bound, consider the linear equation

$$\begin{aligned} \big(\nabla f(y)\big)'b + \frac{1}{2}\text{tr}\big(AD^{2}f(y)\big) - \beta f(y) & = 0 \qquad \text{for } y\in E_{n}, \\ f(y) & = 1 \qquad \text{for } y\in \partial E_{n}, \end{aligned}$$

which, by Gilbarg and Trudinger [16, Theorem 8.34], has a solution in \(C^{1,\alpha }\). By the Feynman–Kac formula, the solution is \(\mathbb{E}_{y}[e^{-\beta \theta _{n}}]\), where \(\theta _{n}\) is the hitting time of \(\partial E_{n}\) by \((Y_{t})\) and \(\mathbb{E}_{y}\) indicates the expectation for \(Y_{0} = y\). Then the solution to the equation

$$\begin{aligned} \big(\nabla f(y)\big)'b + \frac{1}{2}\text{tr}\big(AD^{2}f(y)\big) - \beta f(y) +1 & = 0 \qquad \text{for } y\in E_{n}, \\ f(y) & = 1 \qquad \text{for } y\in \partial E_{n} \end{aligned}$$

is \(\bar{f}_{n} = \frac{1}{\beta } + (1-\frac{1}{\beta })\mathbb{E}_{y}[e^{- \beta \theta _{n}}]\), and [20, Theorem 3.8] yields \(e^{u_{n,\tau }} \leq \tau e^{\underline{u}} + (1-\tau )\bar{f}_{n}\), which is bounded from above.
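The formula for \(\bar{f}_{n}\) can be verified directly (a routine check, recorded here for completeness): writing \(h_{n}(y) = \mathbb{E}_{y}[e^{-\beta \theta _{n}}]\) for the solution of the first linear equation, the candidate \(\bar{f}_{n} = \frac{1}{\beta } + (1-\frac{1}{\beta })h_{n}\) satisfies

$$ \big(\nabla \bar{f}_{n}(y)\big)'b + \frac{1}{2}\text{tr}\big(AD^{2}\bar{f}_{n}(y)\big) - \beta \bar{f}_{n}(y) + 1 = \Big(1-\frac{1}{\beta }\Big)\Big(\big(\nabla h_{n}(y)\big)'b + \frac{1}{2}\text{tr}\big(AD^{2}h_{n}(y)\big) - \beta h_{n}(y)\Big) = 0, $$

since the term \(-\beta \cdot \frac{1}{\beta }\) cancels the \(+1\), and \(\bar{f}_{n}|_{\partial E_{n}} = \frac{1}{\beta } + 1 - \frac{1}{\beta } = 1\).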

For (A.7), let \(u^{0}_{n}\) be a solution. Note that \(u_{1} = -\ln \beta \) is a solution with boundary condition \(-\ln \beta \) on \(\partial E_{n}\). When \(0<\beta \leq 1\), we have \(-\ln \beta \geq 0\) and hence \(u^{0}_{n} \leq u_{1}\) by the comparison principle. Similarly, \(u_{2} = 0\) is a subsolution, so that \(u_{2} \leq u^{0}_{n}\). When \(\beta >1\), we have \(-\ln \beta < 0\), and \(u_{1}\leq u^{0}_{n} \leq u_{2}\). □

Proof of Theorem 3.3

First we prove the equalities

$$\begin{aligned} \mathbb{E}\bigg[\int _{0}^{T}e^{-\beta t} \frac{\hat{c}^{1-\gamma }_{t}}{1-\gamma }dt\bigg] & = \frac{x^{1-\gamma }}{1-\gamma }g(y)^{\gamma }\big(1-\mathbb{E}_{ \hat{\mathbb{P}}}\big[e^{-\int _{0}^{T}g(Y_{s})^{-1}ds}\big]\big), \end{aligned}$$
(A.8)
$$\begin{aligned} \frac{x^{1-\gamma }}{1-\gamma }\mathbb{E}\bigg[\int _{0}^{T}e^{- \frac{\beta }{\gamma }t} (M^{\hat{\eta }}_{t} )^{ \frac{\gamma -1}{\gamma }}dt\bigg]^{\gamma } & = \frac{x^{1-\gamma }}{1-\gamma }g(y)^{\gamma }\big(1 - \mathbb{E}_{ \hat{\mathbb{P}}}\big[e^{-\int _{0}^{T}g(Y_{s})^{-1}ds}\big]\big)^{ \gamma }. \end{aligned}$$
(A.9)

Since \(X^{\pi ,\ell }_{t} = xe^{\int _{0}^{t}((r + \pi '_{s}\mu - \frac{\pi '_{s}\Sigma \pi _{s}}{2}-\ell _{s})ds + \pi '_{s}\sigma dZ_{s})}\), we get

$$ e^{-\beta t}\frac{c^{1-\gamma }_{t}}{1-\gamma } = \frac{x^{1-\gamma }}{1-\gamma }\ell ^{1-\gamma }_{t}e^{(1-\gamma ) \int _{0}^{t}((r + \frac{\beta }{\gamma -1} + \pi '_{s}\mu - \frac{\pi _{s}'\Sigma \pi _{s}}{2}-\ell _{s})ds + \pi '_{s}\sigma dZ_{s})}. $$
(A.10)

Then substituting \(\pi = \frac{\Sigma ^{-1}\mu }{\gamma } + \Sigma ^{-1}\Upsilon \frac{\nabla g}{g}\) and \(\ell = g^{-1}\), the integral in the last exponential function above becomes

$$\begin{aligned} &(1-\gamma )\int _{0}^{t}\bigg(\Big(r + \frac{\beta }{\gamma -1} + \pi '_{s}\mu - \frac{\pi '_{s}\Sigma \pi _{s}}{2}-\ell _{s}\Big)ds + \pi '_{s}\sigma dZ_{s}\bigg) \\ & = -\int _{0}^{t}g^{-1}ds - \gamma \int _{0}^{t}\left ( \frac{(\nabla g)'b}{g}+\frac{\operatorname{tr} (AD^{2}g)}{2g} - \frac{(\nabla g)'A\nabla g}{2g^{2}}\right )ds \\ &\phantom{=:}- \gamma \int _{0}^{t} \frac{(\nabla g)' a}{g}dW_{s}+ \gamma \int _{0}^{t} \mathcal{H}(Y_{s},g,\nabla g,D^{2} g)ds +\ln D_{t}, \end{aligned}$$

where

$$\begin{aligned} D_{t} & = \mathcal{E}\bigg(\int _{0}^{\cdot }\Big( \frac{1-\gamma }{\gamma }\Sigma ^{-1}\mu + \frac{(1-\gamma )\Sigma ^{-1}\Upsilon \nabla g}{g}\Big)'\sigma \bar{\rho }dB_{s} \bigg)_{t} \\ &\phantom{=:}\times \mathcal{E}\bigg(\int _{0}^{\cdot }\Big( \frac{(1-\gamma )}{\gamma }\Upsilon '\Sigma ^{-1}\mu + \big(\gamma A+(1- \gamma )\Upsilon '\Sigma ^{-1}\Upsilon \big)\frac{\nabla g}{g}\Big)'(a')^{-1}dW_{s} \bigg)_{t}. \end{aligned}$$

From Lemma A.3 below, \(D\) is an \((\mathbb{F},\mathbb{P})\)-martingale and \(\hat{\mathbb{P}}|_{\mathcal{F}_{t}} = D_{t} \mathbb{P}|_{ \mathcal{F}_{t}}\).

Since \(g\) solves the HJB equation, \(\mathcal{H}(y,g,\nabla g,D^{2} g) = 0\). Also, note that by Itô’s formula,

$$ \int _{0}^{t}\!\bigg(\frac{(\nabla g)'b}{g}+\frac{\operatorname{tr} (AD^{2}g)}{2g} - \frac{(\nabla g)'A\nabla g}{2g^{2}}\bigg)ds + \int _{0}^{t} \frac{(\nabla g)' a}{g}dW_{s} = \ln g(Y_{t}) - \ln g(y). $$
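Unwinding this: applying Itô's formula to \(\ln g(Y_{t})\), with \(dY_{t} = b\,dt + a\,dW_{t}\) and \(A = aa'\), gives

$$ d\ln g(Y_{t}) = \frac{(\nabla g)'}{g}\,dY_{t} + \frac{1}{2}\operatorname{tr} \bigg(A\Big(\frac{D^{2}g}{g} - \frac{\nabla g(\nabla g)'}{g^{2}}\Big)\bigg)dt = \bigg(\frac{(\nabla g)'b}{g}+\frac{\operatorname{tr} (AD^{2}g)}{2g} - \frac{(\nabla g)'A\nabla g}{2g^{2}}\bigg)dt + \frac{(\nabla g)' a}{g}\,dW_{t}, $$

and integrating between \(0\) and \(t\) yields the identity above.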

Hence (A.10) equals \(\frac{x^{1-\gamma }}{1-\gamma }g(y)^{\gamma }g(Y_{t})^{-1}e^{- \int _{0}^{t}g^{-1}(Y_{s})ds}D_{t}\). Then with the candidate portfolio and consumption in (3.6), we obtain

$$\begin{aligned} \mathbb{E}\bigg[\int _{0}^{T}e^{-\beta t} \frac{c^{1-\gamma }_{t}}{1-\gamma }dt\bigg] & = \frac{x^{1-\gamma }}{1-\gamma }g(y)^{\gamma }\mathbb{E}\bigg[\int _{0}^{T}g(Y_{t})^{-1}e^{- \int _{0}^{t}g(Y_{s})^{-1}ds}D_{t}dt\bigg] \\ & = \frac{x^{1-\gamma }}{1-\gamma }g(y)^{\gamma }\big(1 - \mathbb{E}_{ \hat{\mathbb{P}}}\big[e^{-\int _{0}^{T}g(Y_{s})^{-1}ds}\big]\big), \end{aligned}$$

where \(\mathbb{E}_{\hat{\mathbb{P}}}\) indicates the expectation under \(\hat{\mathbb{P}}\), and the equality (A.8) is proved.

On the other hand, plugging in the candidate \(\eta = \frac{\gamma \nabla g}{g}\) and following similar calculations with \(q = \frac{\gamma -1}{\gamma }\), we get

$$\begin{aligned} &e^{-\frac{\beta }{\gamma }t}(M_{t}^{\eta })^{q} = \frac{g(y)}{g(Y_{t})}e^{-\int _{0}^{t}g(Y_{s})^{-1}ds}D_{t}. \end{aligned}$$

Thus

$$\begin{aligned} \frac{x^{1-\gamma }}{1-\gamma }\mathbb{E}\bigg[\int _{0}^{T}e^{- \frac{\beta }{\gamma }t} (M_{t}^{\eta } )^{\frac{\gamma -1}{\gamma }}dt \bigg]^{\gamma } & = \frac{x^{1-\gamma }}{1-\gamma }\mathbb{E}\bigg[ \int _{0}^{T}\frac{g(y)}{g(Y_{t})}e^{-\int _{0}^{t}g(Y_{s})^{-1}ds}D_{t}dt \bigg]^{\gamma } \\ & = \frac{x^{1-\gamma }}{1-\gamma }g(y)^{\gamma }\big(1 - \mathbb{E}_{ \hat{\mathbb{P}}}\big[e^{-\int _{0}^{T}g(Y_{s})^{-1}ds}\big]\big)^{ \gamma }, \end{aligned}$$

which concludes the proof of the equality (A.9). Now by monotone convergence,

$$ \mathbb{E}\bigg[\int _{0}^{\infty }e^{-\beta t} \frac{c^{1-\gamma }_{t}}{1-\gamma }dt\bigg] = \lim _{T\rightarrow \infty }\mathbb{E}\bigg[\int _{0}^{T}e^{-\beta t} \frac{c^{1-\gamma }_{t}}{1-\gamma }dt\bigg] $$

for any \((\pi ,\ell )\in \mathcal{A}\). Thus with \((\hat{\pi },\hat{\ell },\hat{\eta })\) in (3.6), we have

$$\begin{aligned} &\mathbb{E}\bigg[\int _{0}^{\infty }e^{-\beta t} \frac{\hat{c}^{1-\gamma }_{t}}{1-\gamma }dt\bigg] = \frac{x^{1-\gamma }}{1-\gamma }g(y)^{\gamma }\lim _{T\rightarrow \infty }\big(1 - \mathbb{E}_{\hat{\mathbb{P}}}\big[e^{-\int _{0}^{T}g(Y_{s})^{-1}ds} \big]\big) \\ &\leq \frac{x^{1-\gamma }}{1-\gamma }g(y)^{\gamma }\lim _{T \rightarrow \infty }\big(1 - \mathbb{E}_{\hat{\mathbb{P}}}\big[e^{- \int _{0}^{T}g(Y_{s})^{-1}ds}\big]\big)^{\gamma } \\ &= \frac{x^{1-\gamma }}{1-\gamma }\mathbb{E}\bigg[\int _{0}^{\infty }e^{- \frac{\beta t}{\gamma }} (M^{\hat{\eta }}_{t} )^{q}dt\bigg]^{\gamma }, \end{aligned}$$
(A.11)

and equality holds if and only if \(\lim _{T\rightarrow \infty }(1 - \mathbb{E}_{\hat{\mathbb{P}}}[e^{- \int _{0}^{T}g(Y_{s})^{-1}ds}]) = 1\). But the term \(1 - e^{-\int _{0}^{T}g(Y_{s})^{-1}ds}\) is nonnegative and increasing in \(T\), and so by monotone convergence,

$$ \lim _{T\rightarrow \infty }\big(1 - \mathbb{E}_{\hat{\mathbb{P}}} \big[e^{-\int _{0}^{T}g(Y_{s})^{-1}ds}\big]\big) = 1 - \mathbb{E}_{ \hat{\mathbb{P}}}\big[e^{-\int _{0}^{\infty }g(Y_{s})^{-1}ds}\big] . $$

Thus the inequality (A.11) becomes an equality, i.e., \((\hat{\pi },\hat{\ell })\) in (3.6) is optimal, exactly when \(\mathbb{E}_{\hat{\mathbb{P}}}[e^{-\int _{0}^{\infty } g(Y_{s})^{-1}ds}] = 0\), which is equivalent to \(\int _{0}^{\infty }g(Y_{s})^{-1}ds = \infty \text{ } \hat{\mathbb{P}}\text{-a.s.}\), and in this case, both sides of (A.11) are equal to \(\frac{x^{1-\gamma }}{1-\gamma }g(y)^{\gamma }\). □

Lemma A.3

Assume that for some \(f\in C^{1}(E;\mathbb{R})\), there exists a unique solution \(\hat{\mathbb{P}}\) to the martingale problem on \(\mathbb{R}^{n}\times E \ni x = (z,y)\) for

$$\begin{aligned} \hat{L} &= \frac{1}{2}\sum _{i,j=1}^{n+k}\tilde{A}_{i,j}(y) \frac{\partial ^{2}}{\partial x_{i}\partial x_{j}} + \sum _{i=1}^{n+k} \hat{b}_{i}(y)\frac{\partial }{\partial x_{i}}, \\ \tilde{A}(y) &= \bigg( \textstyle\begin{array}{c@{\quad }c} \Sigma (y) & \Upsilon (y) \\ \Upsilon '(y) & A(y) \\ \end{array}\displaystyle \bigg), \\ \hat{b} &= \bigg( \textstyle\begin{array}{c} \frac{\mu }{\gamma } + \Upsilon \frac{\nabla f}{f} \\ b + \frac{(1-\gamma )\Upsilon '\Sigma ^{-1}\mu }{\gamma } + (\gamma A + (1-\gamma )\Upsilon '\Sigma ^{-1}\Upsilon )\frac{\nabla f}{f} \\ \end{array}\displaystyle \bigg). \end{aligned}$$

Then the process

$$\begin{aligned} D_{t} &= \mathcal{E}\bigg(\int _{0}^{\cdot }\Big( \frac{1-\gamma }{\gamma }\Sigma ^{-1}\mu + \frac{(1-\gamma )\Sigma ^{-1}\Upsilon \nabla f}{f}\Big)'\sigma \bar{\rho }dB_{s} \bigg)_{t} \\ &\phantom{=:}\times \mathcal{E}\bigg(\int _{0}^{\cdot }\Big( \frac{(1-\gamma )}{\gamma }\Upsilon '\Sigma ^{-1}\mu + \big(\gamma A+(1- \gamma )\Upsilon '\Sigma ^{-1}\Upsilon \big)\frac{\nabla f}{f}\Big)'(a')^{-1}dW_{s} \bigg)_{t}, \end{aligned}$$

\(t \geq 0\), is an \((\mathbb{F},\mathbb{P})\)-martingale. Furthermore, for any \(t<\infty \), \(\hat{\mathbb{P}}|_{\mathcal{F}_{t}} = D_{t} \mathbb{P}|_{ \mathcal{F}_{t}}\).

Proof

Since Assumption 2.1 holds and \(\frac{\nabla f}{f}\) is locally bounded, Cheridito et al. [7, Theorem 2.4 and Remark 2.5] imply that there exists a \((\mathbb{B}, \mathbb{P})\)-martingale \(\hat{D}\) such that \(\hat{\mathbb{P}}|_{\mathcal{B}_{\tau }} = \hat{D}_{\tau } \mathbb{P}|_{\mathcal{B}_{\tau }}\) for any finite stopping time \(\tau \) (with respect to \(\mathbb{B}\)). Note that from Revuz and Yor [36, Theorem II.2.8], \(\hat{D}\) is also an \((\mathbb{F},\mathbb{P})\)-martingale. Furthermore, from [36, Proposition VII.2.4 and Theorem VII.2.7], there exist Brownian motions \(Z\) and \(W\) adapted to \(\mathbb{F}\) such that (2.1) and (2.2) hold (cf. footnote 5).

By definition, \(\mathbb{F}\) is the right-continuous envelope of the filtration generated by \((R,Y)\). On the other hand, we have

$$\begin{aligned} dR_{t} & = \mu (Y_{t})dt + \sigma (Y_{t})dZ_{t}, \\ dY_{t} & = b(Y_{t})dt + a(Y_{t})dW_{t}. \end{aligned}$$

Thus \(\mathbb{F}\) coincides with the right-continuous envelope of the filtration generated by \((Z,W)\). (Recall that \(\Sigma (y)\) and \(A(y)\) are positive definite for all \(y \in E\) by Assumption 2.1). Because \(d\langle Z,W\rangle _{t} = \rho (Y_{t})dt\), where \(\rho (y) = \sigma ^{-1}(y)\Upsilon (y){(a')}^{-1}(y)\), it follows that the Brownian motion \(Z\) admits the decomposition

$$ dZ_{t} = \rho (Y_{t}) dW_{t} + \bar{\rho }(Y_{t}) dB_{t}, $$

where \(B\) is a Brownian motion independent of \(W\) and \(\bar{\rho }\) is the unique positive definite, symmetric matrix such that \(\rho \rho '+\bar{\rho }\bar{\rho }'= I_{n}\), with \(I_{n}\) denoting the identity matrix of dimension \(n\). Thus by the martingale representation theorem with respect to the filtration \(\mathbb{F}\) above,

$$ \hat{D}= \mathcal{E}\bigg(\int _{0}^{\cdot } (d'_{1s}dB_{s} + d'_{2s}dW_{s} )\bigg) $$

for some adapted processes \(d_{1}\) and \(d_{2}\). Then by Girsanov’s theorem,

$$ \hat{B}_{t} = B_{t} - \int _{0}^{t}d_{1s}ds \qquad \text{and}\qquad \hat{W}_{t} = W_{t} -\int _{0}^{t}d_{2s}ds $$

are Brownian motions under \(\hat{\mathbb{P}}\), and the dynamics of \((R,Y)\) under \(\hat{\mathbb{P}}\) are

$$\begin{aligned} dR_{t} & = \left (\mu + \sigma \bar{\rho }d_{1t} + \sigma \rho d_{2t} \right )dt + \sigma d\hat{Z}_{t}, \\ dY_{t} & = \left (b + ad_{2t}\right )dt + ad\hat{W}_{t}. \end{aligned}$$

On the other hand, the infinitesimal generator for \((R,Y)\) under \(\hat{\mathbb{P}}\) is \(\hat{L}\). Thus we have

$$ \left ( \textstyle\begin{array}{c} \mu + \sigma \bar{\rho }d_{1} + \sigma \rho d_{2} \\ b + ad_{2} \\ \end{array}\displaystyle \right ) = \hat{b} , $$

which implies that

$$\begin{aligned} d_{1} & = \bar{\rho }\sigma '\left (\frac{1-\gamma }{\gamma }\Sigma ^{-1} \mu + \frac{(1-\gamma )\Sigma ^{-1}\Upsilon \nabla f}{f}\right ), \\ d_{2} & = a^{-1}\left ( \frac{(1-\gamma )\Upsilon '\Sigma ^{-1}\mu }{\gamma } + \left ( \gamma A + (1-\gamma )\Upsilon '\Sigma ^{-1}\Upsilon \right ) \frac{\nabla f}{f}\right ). \end{aligned}$$

Thus \(D = \hat{D}\), and \(D\) is an \((\mathbb{F},\mathbb{P})\)-martingale. Finally, for \(t< T < \infty \) and every \(A\in \mathcal{F}_{t}\subseteq \mathcal{B}_{T}\), using \(\hat{\mathbb{P}}|_{\mathcal{B}_{T}} =D_{T} \mathbb{P}|_{ \mathcal{B}_{T}}\) implies \(\hat{\mathbb{P}}|_{\mathcal{F}_{t}} =D_{t} \mathbb{P}|_{ \mathcal{F}_{t}}\). □
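As a consistency check on the decomposition \(dZ_{t} = \rho (Y_{t})dW_{t} + \bar{\rho }(Y_{t})dB_{t}\) used above (with \(\Sigma = \sigma \sigma '\) and \(A = aa'\)): since \(\rho = \sigma ^{-1}\Upsilon (a')^{-1}\) and \(\rho \rho ' + \bar{\rho }\bar{\rho }' = I_{n}\), the right-hand side has

$$ d\langle \rho W + \bar{\rho }B, W\rangle _{t} = \rho (Y_{t})\,dt, \qquad d\langle \rho W + \bar{\rho }B\rangle _{t} = \big(\rho \rho ' + \bar{\rho }\bar{\rho }'\big)(Y_{t})\,dt = I_{n}\,dt, $$

matching the covariations \(d\langle Z,W\rangle _{t} = \rho (Y_{t})dt\) and \(d\langle Z\rangle _{t} = I_{n}\,dt\) of the Brownian motion \(Z\).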

Proof of Theorem 4.1

Consider the stochastic discount factor \(M^{\eta }\) with \(\eta = \frac{\gamma \nabla g^{d}}{g^{d}}\). Calculations similar to those for the dual bound in Theorem 3.3 (with \(q = \frac{\gamma -1 }{\gamma }\)) yield

$$\begin{aligned} -\frac{\beta t}{\gamma } + \ln \big((M^{\eta }_{t})^{q}\big) &= -\int _{0}^{t}d \ln g^{d}(Y_{s}) + \ln \bar{D}_{t} - \int _{0}^{t}g^{d}(Y_{s})^{-1}ds \\ &\phantom{=:}+\int _{0}^{t}\bigg(\mathcal{H}^{d}(Y_{s},g^{d},\nabla g^{d}, D^{2} g^{d}) \\ &\phantom{=:}\qquad \qquad + \frac{(\gamma -1) (\nabla g^{d} )' (A-\Upsilon '\Sigma ^{-1}\Upsilon )\nabla g^{d}}{2 (g^{d} )^{2}} \bigg)ds, \end{aligned}$$

where

$$\begin{aligned} \bar{D}& = \mathcal{E}\bigg(\int _{0}^{\cdot }\Big( \frac{(1-\gamma )\Sigma ^{-1}\mu }{\gamma } + \frac{(1-\gamma )\Sigma ^{-1}\Upsilon \nabla g^{d}}{g^{d}}\Big)' \sigma \bar{\rho }dB\bigg) \\ &\phantom{=:}\times \mathcal{E}\bigg(\int _{0}^{\cdot }\Big( \frac{(1-\gamma )\Upsilon '\Sigma ^{-1}\mu }{\gamma } + \frac{ (\gamma A + (1-\gamma )\Upsilon '\Sigma ^{-1}\Upsilon )\nabla g^{d}}{g^{d}} \Big)' (a' )^{-1}dW \bigg). \end{aligned}$$
(A.12)

Now \(\mathcal{H}^{d}(Y_{s},g^{d},\nabla g^{d}, D^{2} g^{d}) = 0\) implies that for any \(T < \infty \),

$$\begin{aligned} &\mathbb{E}\bigg[\int _{0}^{T}e^{-\frac{\beta t}{\gamma }} (M^{\eta }_{t} )^{q}dt\bigg]^{\gamma } \\ &=g^{d}(y)^{\gamma }\mathbb{E}\bigg[\int _{0}^{T}e^{\int _{0}^{t} (-g^{d}(Y_{s})^{-1} + \frac{(\gamma -1) (\nabla g^{d} )' (A-\Upsilon '\Sigma ^{-1}\Upsilon )\nabla g^{d}}{2 (g^{d} )^{2}})ds}g^{d}(Y_{t})^{-1} \bar{D}_{t}dt\bigg]^{\gamma }. \end{aligned}$$
(A.13)

Since the martingale problem for \(\bar{L}^{d}\) has a unique solution, \(\bar{D}\) is an \((\mathbb{F},\mathbb{P})\)-martingale by Lemma A.3, and \(\bar{\mathbb{P}}^{d}|_{\mathcal{F}_{T}}=\bar{D}_{T}\mathbb{P}|_{ \mathcal{F}_{T}}\). Thus (A.13) is equal to

$$ g^{d}(y)^{\gamma }\mathbb{E}_{\bar{\mathbb{P}}^{d}}\bigg[\int _{0}^{T}e^{ \int _{0}^{t}( -g^{d}(Y_{s})^{-1}+ \frac{(\gamma -1) (\nabla g^{d} )' (A-\Upsilon '\Sigma ^{-1}\Upsilon )\nabla g^{d}}{2 (g^{d} )^{2}})ds}g^{d}(Y_{t})^{-1}dt \bigg]^{\gamma }. $$
(A.14)

Since \(A-\Upsilon '\Sigma ^{-1}\Upsilon \) is nonnegative definite, (A.14) is, for \(\gamma > 1\) (resp. \(\gamma < 1\)), greater (resp. less) than or equal to

$$\begin{aligned} &g^{d}(y)^{\gamma }\mathbb{E}_{\bar{\mathbb{P}}^{d}}\bigg[\int _{0}^{T}g^{d}(Y_{s})^{-1}e^{- \int _{0}^{t} g^{d}(Y_{s})^{-1}ds }dt\bigg]^{\gamma } \\ &=g^{d}(y)^{\gamma }\big(1 - \mathbb{E}_{\bar{\mathbb{P}}^{d}}\big[e^{- \int _{0}^{T} g^{d}(Y_{s})^{-1}ds}\big]\big)^{\gamma }. \end{aligned}$$
(A.15)

Therefore, since \(\int _{0}^{\infty }g^{d}(Y_{t})^{-1}dt = \infty \) \(\bar{\mathbb{P}}^{d}\)-a.s., we get

$$\begin{aligned} \frac{x^{1-\gamma }}{1-\gamma }\mathbb{E}\bigg[\int _{0}^{\infty }e^{-\frac{\beta t}{\gamma }} (M^{\eta }_{t} )^{q}dt\bigg]^{\gamma } &= \lim _{T\rightarrow \infty }\frac{x^{1-\gamma }}{1-\gamma }\mathbb{E} \bigg[\int _{0}^{T}e^{-\frac{\beta t}{\gamma }} (M^{\eta }_{t} )^{q}dt \bigg]^{\gamma } \\ &\leq \lim _{T\rightarrow \infty }\frac{x^{1-\gamma }}{1-\gamma }g^{d}(y)^{ \gamma }\big(1 - \mathbb{E}_{\bar{\mathbb{P}}^{d}}\big[e^{-\int _{0}^{T} g^{d}(Y_{s})^{-1}ds }\big]\big)^{\gamma } \\ &=\frac{x^{1-\gamma }}{1-\gamma }g^{d}(y)^{\gamma }. \end{aligned}$$

Finally, since \(A -\Upsilon '\Sigma ^{-1}\Upsilon \) is nonnegative definite and \(g^{d}\) solves

$$ \mathcal{H}\big(y,g(y),\nabla g, D^{2} g\big) - \frac{(\gamma -1)(\nabla g)'\left (A -\Upsilon '\Sigma ^{-1}\Upsilon \right )\nabla g}{2g^{2}} = 0, $$

we obtain \(\mathcal{H}(y,g^{d}(y),\nabla g^{d}, D^{2} g^{d}) \leq (\geq ) \, 0\) when \(0<\gamma <1\) (resp. \(\gamma >1\)), and so \(g^{d}\) is a sub- (resp. super-)solution. □

Proof of Theorem 4.2

With \(\ell = (g^{p})^{-1}\) and \(\pi = \frac{\Sigma ^{-1}\mu }{\gamma } + \frac{\Sigma ^{-1}\Upsilon \nabla g^{p}}{g^{p}}\), calculations similar to those for the primal bound in Theorem 3.3 yield for every \(T<\infty \) that

$$\begin{aligned} &\mathbb{E}\bigg[\int _{0}^{T}e^{-\beta t} \frac{c_{t}^{1-\gamma }}{1-\gamma }dt\bigg] \\ &=\frac{x^{1-\gamma }}{1-\gamma }g^{p}(y)^{\gamma } \\ &\phantom{=:}\times \mathbb{E}\bigg[\int _{0}^{T}e^{\int _{0}^{t}(1-\gamma ) ( \phi (Y_{s}) - \frac{\gamma (\nabla g^{p} )' (A-\Upsilon '\Sigma ^{-1}\Upsilon )\nabla g^{p}}{2 (g^{p} )^{2}} )ds}g^{p}(Y_{t})^{-1}e^{-\int _{0}^{t}g^{p}(Y_{s})^{-1}ds}\bar{D}_{t}dt \bigg], \end{aligned}$$

where \(\bar{D}\) is defined in (A.12) but with \(g^{d}\) replaced by \(g^{p}\). Since the martingale problem for \(\bar{L}^{p}\) has a unique solution, Lemma A.3 implies that \(\bar{D}\) is an \((\mathbb{F},\mathbb{P})\)-martingale, and \(\bar{\mathbb{P}}^{p}|_{\mathcal{F}_{T}} = \bar{D}_{T} \mathbb{P}|_{ \mathcal{F}_{T}}\). Thus we obtain

$$\begin{aligned} &\mathbb{E}\bigg[\int _{0}^{T}e^{-\beta t} \frac{c_{t}^{1-\gamma }}{1-\gamma }dt\bigg] \\ &=\frac{x^{1-\gamma }}{1-\gamma }g^{p}(y)^{\gamma } \\ &\phantom{=:}\times \mathbb{E}_{\bar{\mathbb{P}}^{p}}\bigg[\int _{0}^{T}e^{\int _{0}^{t}(1- \gamma ) (\phi (Y_{s}) - \frac{\gamma (\nabla g^{p} )' (A-\Upsilon '\Sigma ^{-1}\Upsilon )\nabla g^{p}}{2 (g^{p} )^{2}} )ds}g^{p}(Y_{t})^{-1}e^{-\int _{0}^{t} g^{p}(Y_{s})^{-1}ds}dt\bigg]. \end{aligned}$$
(A.16)

Since \(\phi \geq \frac{\gamma (\nabla g^{p})'(A-\Upsilon '\Sigma ^{-1}\Upsilon )\nabla g^{p}}{2(g^{p})^{2}}\), the above is greater than or equal to

$$\begin{aligned} &\frac{x^{1-\gamma }}{1-\gamma }g^{p}(y)^{\gamma }\mathbb{E}_{ \bar{\mathbb{P}}^{p}}\bigg[\int _{0}^{T}g^{p}(Y_{t})^{-1}e^{-\int _{0}^{t} g^{p}(Y_{s})^{-1}ds}dt\bigg] \\ &=\frac{x^{1-\gamma }g^{p}(y)^{\gamma }}{1-\gamma }\big(1-\mathbb{E}_{ \bar{\mathbb{P}}^{p}}\big[e^{- \int _{0}^{T} g^{p}(Y_{s})^{-1}ds} \big]\big). \end{aligned}$$
(A.17)

Since \(\int _{0}^{\infty }g^{p}(Y_{t})^{-1}dt = \infty \) \(\bar{\mathbb{P}}^{p}\)-a.s., we get

$$\begin{aligned} \mathbb{E}\bigg[\int _{0}^{\infty }e^{-\beta t} \frac{c_{t}^{1-\gamma }}{1-\gamma }dt\bigg] &= \lim _{T\rightarrow \infty }\mathbb{E}\bigg[\int _{0}^{T}e^{-\beta t} \frac{c_{t}^{1-\gamma }}{1-\gamma }dt\bigg] \\ & \geq \frac{x^{1-\gamma }}{1-\gamma }g^{p}(y)^{\gamma }\lim _{T \rightarrow \infty }\big(1-\mathbb{E}_{\bar{\mathbb{P}}^{p}}\big[e^{- \int _{0}^{T}g^{p}(Y_{s})^{-1}ds}\big]\big) \\ &= \frac{x^{1-\gamma }g^{p}(y)^{\gamma }}{1-\gamma }. \end{aligned}$$

If \(\frac{\gamma (\nabla g^{p})' (A-\Upsilon '\Sigma ^{-1}\Upsilon )\nabla g^{p}}{2(g^{p})^{2}} = \phi \), then by (A.16), the above inequality becomes an equality. Note that \(g^{p}\) solves

$$ \mathcal{H}(y,g,\nabla g, D^{2} g) -\frac{(1-\gamma )}{\gamma } \bigg( \phi - \frac{\gamma (\nabla g)' (A -\Upsilon '\Sigma ^{-1}\Upsilon )\nabla g}{2g^{2}} \bigg) = 0. $$

Thus \(\mathcal{H}(y,g^{p}(y),\nabla g^{p}, D^{2} g^{p}) \geq (\leq ) \, 0\) if \(0<\gamma <1\) (resp. \(\gamma >1\)), and so \(g^{p}\) is a super- (resp. sub-)solution. Finally, we have \(\frac{x^{1-\gamma }}{1-\gamma }g^{p}(y)^{\gamma } \leq \frac{x^{1-\gamma }}{1-\gamma }g^{d}(y)^{\gamma }\) by Lemma A.1 and Theorem 4.1. Thus we get \(g^{p} \leq g^{d}\) (resp. \(g^{p} \geq g^{d}\)) when \(0<\gamma <1\) (resp. \(\gamma >1\)). □

Proof of Proposition 4.4

By Lemma A.1 and Theorem 4.1,

$$ V(x,y) \leq \inf _{\eta \in \mathcal{R}} \frac{x^{1-\gamma }}{1-\gamma }\mathbb{E}\bigg[\int _{0}^{\infty }e^{-\frac{\beta t}{\gamma }} (M^{\eta }_{t} )^{q}dt\bigg]^{\gamma } \leq \frac{x^{1-\gamma }}{1-\gamma }g^{d}(y)^{\gamma }. $$

On the other hand, since \(V(x,y)\) is homogeneous in \(x\), the definition of the \(\operatorname{CEL} \) and Theorem 4.2 give

$$\begin{aligned} \big(1-\operatorname{CEL} (\pi ,\ell )\big)^{1-\gamma }V(x,y) &= V\Big(x\big(1- \operatorname{CEL} (\pi ,\ell )\big),y\Big) \\ &=\mathbb{E}\bigg[\int _{0}^{\infty }e^{-\beta t} \frac{ (\ell _{t}X^{\pi ,\ell }_{t} )^{1-\gamma }}{1-\gamma }dt\bigg] \geq \frac{x^{1-\gamma }}{1-\gamma }g^{p}(y)^{\gamma }. \end{aligned}$$

Thus if \(\gamma >1\), then \(1\leq (1-\operatorname{CEL} (\pi ,\ell ))^{1-\gamma } \leq ( \frac{g^{p}(y)}{g^{d}(y)})^{\gamma }\), and if \(0<\gamma <1\), the inequalities are reversed. Therefore \(0\leq \operatorname{CEL} (\pi ,\ell )\leq 1- (\frac{g^{p}(y)}{g^{d}(y)})^{ \frac{\gamma }{1-\gamma }}\). □

Proof of Proposition 4.6

First consider the well-posedness of the martingale problem for \(L^{Y}\), where \(L^{Y}\) is the operator associated to \(Y\) (with \(u = \gamma \ln g\)) given by

$$\begin{aligned} L^{Y} &= \frac{1}{2}\sum _{i,j=1}^{k}A_{i,j}(x) \frac{\partial ^{2}}{\partial x_{i}\partial x_{j}} \\ &\phantom{=:}+ \sum _{i=1}^{k}\bigg(b + \frac{(1-\gamma )\Upsilon '\Sigma ^{-1} \mu }{\gamma } + \Big(A + \frac{(1-\gamma )}{\gamma }\Upsilon '\Sigma ^{-1}\Upsilon \Big) \nabla u\bigg)_{i}\frac{\partial }{\partial x_{i}}. \end{aligned}$$

Let \(\psi = \gamma \ln \bar{g}\) and \(\tilde{u} = \psi - u\). Then with \(\mathcal{G}\) defined in (A.5), we have

$$\begin{aligned} L^{Y} \tilde{u} &= L^{Y} \psi - L^{Y} u \\ &= L^{Y} \psi -\mathcal{G}(y,u,\nabla u,D^{2} u) \\ & \phantom{=:}- \frac{1}{2}(\nabla u)'\bigg(A + \frac{(1-\gamma )}{\gamma } \Upsilon '\Sigma ^{-1}\Upsilon \bigg)\nabla u + \gamma e^{- \frac{u}{\gamma }} - \beta \\ & \phantom{=:} + \frac{(1-\gamma )\mu '\Sigma ^{-1}\mu }{2\gamma } + (1-\gamma ) r \\ &= \mathcal{G}(y,\psi , \nabla \psi ,D^{2}\psi ) + (\nabla u)'\left (A + \frac{(1-\gamma )}{\gamma }\Upsilon '\Sigma ^{-1}\Upsilon \right ) \nabla \psi \\ &\phantom{=:}-\frac{1}{2}(\nabla \psi )'\bigg(A + \frac{(1-\gamma )}{\gamma } \Upsilon '\Sigma ^{-1}\Upsilon \bigg)\nabla \psi \\ & \phantom{=:}-\frac{1}{2}(\nabla u)'\bigg(A + \frac{(1-\gamma )}{\gamma }\Upsilon ' \Sigma ^{-1}\Upsilon \bigg)\nabla u + \gamma e^{-\frac{u}{\gamma }}- \gamma e^{-\frac{\psi }{\gamma }} \\ &= \mathcal{G}(y,\psi , \nabla \psi ,D^{2}\psi ) \\ &\phantom{=:}- \frac{1}{2}(\nabla u-\nabla \psi )'\bigg(A + \frac{(1-\gamma )}{\gamma }\Upsilon '\Sigma ^{-1}\Upsilon \bigg)( \nabla u-\nabla \psi ) + \gamma e^{-\frac{u}{\gamma }}-\gamma e^{- \frac{\psi }{\gamma }} \\ &\leq \mathcal{G}(y,\psi , \nabla \psi ,D^{2}\psi ) + \gamma e^{- \frac{u}{\gamma }}- \gamma e^{-\frac{\psi }{\gamma }} \\ &= \gamma \big(\mathcal{H}(y,\bar{g}, \nabla \bar{g},D^{2}\bar{g}) + g^{-1} - \bar{g}^{-1} \big), \end{aligned}$$

where the last inequality holds because \(A + \frac{(1-\gamma )}{\gamma }\Upsilon '\Sigma ^{-1}\Upsilon \) is nonnegative definite. Because \(\bar{g}\) is a subsolution to \(\mathcal{H}(y, g, \nabla g,D^{2} g) = 0\) and

$$ \sup _{y\in E}\big( g(y)^{-1} - \bar{g}(y)^{-1}\big) < \sup _{y\in E} \big(\underline{g}(y)^{-1} - \bar{g}(y)^{-1}\big)< \infty $$

as assumed in (i), we get \(L^{Y} \tilde{u} \leq C\) in \(E\) for some constant \(C\). Furthermore, Theorem 4.1 implies that \(\tilde{u} = \psi -u = \gamma (\ln \bar{g} - \ln g) \geq 0\). Thus for a positive constant \(\lambda \geq C\), we have \(L^{Y} (\tilde{u} + 1) = L^{Y} \tilde{u} \leq C \leq \lambda (\tilde{u} +1)\) and

$$ \lim _{n\uparrow \infty }\inf _{y\in E\setminus E_{n}}\tilde{u} (y) \geq \lim _{n\uparrow \infty }\inf _{y\in E\setminus E_{n}} \big(\ln \bar{g}(y)- \ln \underline{g}(y)\big) = \infty . $$

Then Stroock and Varadhan [38, Theorem 10.2.1] implies that the martingale problem for \(L^{Y}\) is well posed, and there exists a Brownian motion \(W\) such that [36, Proposition VII.2.4 and Theorem VII.2.7]

$$ dY_{t} = \bigg(b + \frac{(1-\gamma )\Upsilon '\Sigma ^{-1} \mu }{\gamma } + \Big(A + \frac{(1-\gamma )}{\gamma }\Upsilon '\Sigma ^{-1}\Upsilon \Big) \nabla u\bigg)dt + adW_{t}. $$

For the martingale problem associated to \(\hat{L}\), expand the probability space to make it support a Brownian motion \(B\) independent of \(W\). Write \(Z = \rho W + \bar{\rho }B\), where \(\rho \rho ' + \bar{\rho }\bar{\rho }' = I_{n}\), and define \(R\) by \(dR_{t} = (\frac{\mu }{\gamma } + \Upsilon \frac{\nabla g}{g})dt + \sigma dZ_{t}\). Then \((R,Y)\) is the unique weak solution to the stochastic differential equation corresponding to \(\hat{L}\) (i.e., the SDE with drift \(\hat{b}\) and diffusion matrix \(\tilde{A}\)) and the martingale problem for \(\hat{L}\) has a unique solution. □

Proof of Lemma 5.1

Let \(g(y) = \int _{0}^{\infty }h(y,t)dt\) and suppose that

$$ g_{y} = \int _{0}^{\infty }h_{y}dt\qquad \text{and} \qquad g_{yy} = \int _{0}^{\infty }h_{yy}dt. $$
(A.18)

In order to prove that \(g\) solves (5.3), it suffices to prove that \(h(y,t)\) solves

$$ 1 + \left (-ky + \lambda \right )\int _{0}^{\infty }h_{y}dt + \frac{Ay\int _{0}^{\infty }h_{yy}dt}{2} - \left (cy + K\right )\int _{0}^{ \infty }hdt=0. $$

The above equation holds true if \(h(y,t)\) satisfies

$$ -h_{t} + \left (-ky + \lambda \right )h_{y} + \frac{Ay}{2}h_{yy} - \left (cy+K\right )h = 0 $$
(A.19)

with the boundary condition \(h(y,\infty ) - h(y,0) = -1\) for every \(y \in E\), because such an \(h\) guarantees that

$$\begin{aligned} &1 + (-ky + \lambda )\int _{0}^{\infty }h_{y}dt+ \frac{Ay\int _{0}^{\infty }h_{yy}dt}{2} - (cy + K )\int _{0}^{\infty }hdt \\ &=1+\int _{0}^{\infty }h_{t}dt = 1 + h(y,\infty ) - h(y,0)= 0. \end{aligned}$$
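To make the last step explicit: substituting \(h(y,t) = e^{C(t)-B(t)y}\) into (A.19), with \(h_{t} = (C'-B'y)h\), \(h_{y} = -Bh\) and \(h_{yy} = B^{2}h\), and matching separately the coefficients of \(y\) and the constant terms, shows that (A.19) is equivalent to the system

$$\begin{aligned} B'(t) &= c - kB(t) - \frac{A}{2}B(t)^{2}, \\ C'(t) &= -\lambda B(t) - K. \end{aligned}$$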

Taking derivatives of \(h(y,t) = e^{C(t) - B(t)y}\), with \(C(t)\) and \(B(t)\) defined in (5.4) and (5.5), shows that \(h(y,t)\) satisfies (A.19). Moreover, \(C(0) = B(0) = 0\), \(C(\infty ) = -\infty \) and \(B(\infty ) = \frac{2c}{k+\alpha }\) imply \(h(y,0) = 1\) and \(h(y,\infty ) = 0\), and so the boundary condition for \(h\) holds for any \(y > 0\). Finally, since \(B'(t) = \frac{4c\alpha ^{2}e^{\alpha t}}{(e^{\alpha t}(k + \alpha ) - k + \alpha )^{2}} >0\), we have

$$ 0=B(0)\leq B(t)\leq B(\infty ) = \frac{2c}{k+\alpha } \qquad \text{for all } t\geq 0, $$

and so (A.18) holds by Lemma A.4 below. □
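As a numerical sanity check of this structure (a sketch with hypothetical parameter values; the closed form for \(B(t)\) below is an assumption, chosen to be consistent with \(B(0)=0\), \(B(\infty )=\frac{2c}{k+\alpha }\) and the expression for \(B'(t)\) above, with \(\alpha = \sqrt{k^{2}+2Ac}\)), one can confirm that \(B\) satisfies the Riccati equation \(B' = c - kB - \frac{A}{2}B^{2}\) obtained by substituting \(h = e^{C-By}\) into (A.19) and matching the coefficients of \(y\):

```python
import math

# Hypothetical parameters for illustration only (not values from the paper).
k, A, c = 1.0, 0.2, 0.8
alpha = math.sqrt(k**2 + 2*A*c)  # assumed form, consistent with B(infty) = 2c/(k+alpha)

def B(t):
    # Candidate closed form with B(0) = 0 and B(infty) = 2c/(k+alpha).
    e = math.exp(alpha*t)
    return 2*c*(e - 1)/((k + alpha)*e - k + alpha)

def B_prime(t):
    # The derivative displayed in the proof above.
    e = math.exp(alpha*t)
    return 4*c*alpha**2*e/((k + alpha)*e - k + alpha)**2

# Residual of the Riccati equation B' = c - k B - (A/2) B^2; it should
# vanish identically if the closed form is correct.
residual = max(abs(B_prime(t) - (c - k*B(t) - A/2*B(t)**2))
               for t in (0.0, 0.1, 0.5, 1.0, 3.0, 10.0))
print(residual)  # at machine-precision level
```

A vanishing residual confirms that this \(B\) (and hence \(h = e^{C-By}\) with \(C' = -\lambda B - K\)) is compatible with (A.19).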

Lemma A.4

For the functions \(g(y) = \int _{0}^{\infty }h(y,t)dt\) and \(h(y,t) = e^{C(t)-B(t)y}\), if \(B(t) \geq 0\) and \(B\) is bounded, then \(g_{y} = \int _{0}^{\infty }h_{y}dt\) and \(g_{yy} = \int _{0}^{\infty }h_{yy}dt\).

Proof

By definition,

$$\begin{aligned} g_{y} &= \lim _{\epsilon \rightarrow 0}\int _{0}^{\infty } \frac{h(y+\epsilon ,t)-h(y,t)}{\epsilon }dt \\ & =\lim _{\epsilon \rightarrow 0}\int _{0}^{\infty } \frac{e^{C(t)-B(t)y-B(t)\epsilon } - e^{C(t)-B(t)y}}{\epsilon }dt \\ &= \lim _{\epsilon \rightarrow 0}\int _{0}^{\infty } \frac{e^{C(t)-B(t)y} (e^{-B(t)\epsilon } - 1 )}{\epsilon }dt. \end{aligned}$$

Now for \(\epsilon >0\), we have \(e^{-B(t)\epsilon } \geq 1 - B(t)\epsilon \), hence

$$ \bigg|\frac{e^{-B(t)\epsilon }-1}{\epsilon }\bigg| = \frac{1 - e^{-B(t)\epsilon }}{\epsilon } \leq \frac{1 - (1-B(t)\epsilon )}{\epsilon } = B(t), $$

and when \(\epsilon <0\), we have \(1 \geq e^{-B(t)\epsilon } + B(t)\epsilon e^{-B(t)\epsilon }\) so that

$$ \bigg|\frac{e^{-B(t)\epsilon }-1}{\epsilon }\bigg| = \frac{1 - e^{-B(t)\epsilon }}{\epsilon } \leq \frac{e^{-B(t)\epsilon }B(t)\epsilon }{\epsilon } = B(t) e^{-B(t) \epsilon }. $$

Since \(B(t)\) is bounded, \(|\frac{e^{-B(t)\epsilon } - 1}{\epsilon }|\) is bounded for \(\epsilon \) close to 0. Thus dominated convergence gives

$$\begin{aligned} g_{y} &= \lim _{\epsilon \rightarrow 0}\int _{0}^{\infty } \frac{h(y+\epsilon ,t)-h(y,t)}{\epsilon }dt \\ &=\int _{0}^{\infty }\lim _{\epsilon \rightarrow 0} \frac{h(y+\epsilon ,t)-h(y,t)}{\epsilon }dt = \int _{0}^{\infty }h_{y}dt, \end{aligned}$$

and \(g_{yy} = \int _{0}^{\infty }h_{yy}dt\) follows by a similar argument. □
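The dominated-convergence argument can be illustrated numerically (a toy sketch: \(B(t) = 1-e^{-t}\) and \(C(t) = -t\) are hypothetical choices satisfying the lemma's hypotheses, \(B\geq 0\) bounded, rather than the functions from (5.4) and (5.5)): a central difference quotient of \(g\) agrees with the integral of \(h_{y}\).

```python
import math

# Toy choices satisfying the hypotheses of Lemma A.4: B >= 0 and bounded.
def B(t): return 1.0 - math.exp(-t)
def C(t): return -t

def h(y, t):
    return math.exp(C(t) - B(t)*y)

def integrate(f, T=40.0, n=100000):
    # Composite trapezoid on [0, T]; the integrand decays like e^{-t},
    # so the tail beyond T is negligible.
    dt = T/n
    return dt*(0.5*(f(0.0) + f(T)) + sum(f(i*dt) for i in range(1, n)))

y0, eps = 1.0, 1e-4
finite_diff = (integrate(lambda t: h(y0 + eps, t))
               - integrate(lambda t: h(y0 - eps, t)))/(2*eps)
inside = integrate(lambda t: -B(t)*h(y0, t))  # integral of h_y at y0
print(abs(finite_diff - inside))  # small: g_y equals the integral of h_y
```

The two quantities agree up to the central-difference error, consistent with differentiation under the integral sign.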

Proof of Proposition 5.2

Assumptions 2.1 (i), 2.2 and 2.3 follow directly by inspection of the model (5.1) and (5.2), and we omit the details.

To check Assumption 2.1 (ii), similarly to the argument in Proposition 4.6, it suffices to check the well-posedness of the martingale problem for the operator \(L^{Y} = b(\theta - y) \frac{\partial }{\partial y} + \frac{a^{2}y}{2} \frac{\partial ^{2}}{\partial y^{2}}\), the generator of the state variable \(Y\). By Karatzas and Shreve [29, Corollary 5.4.9], this is equivalent to the uniqueness of the weak solution to \(dY_{t} = b(\theta - Y_{t})dt + a\sqrt{Y_{t}}dW_{t}\). Since \(b(\theta - y)\) and \(a\sqrt{y}\) are Lipschitz-continuous on \((\epsilon ,\infty )\) for any \(\epsilon >0\), there exists a unique weak solution \(Y\) on \((\epsilon ,\infty )\). Then, since \(Y\) is a CIR process satisfying the parameter restriction \(b\theta \geq \frac{A}{2}\) under ℙ, it never reaches 0, and so there exists a unique solution on \((0,\infty )\).

For the additional assumptions in Theorem 4.1, in the model (5.1) and (5.2), the ODE \(\mathcal{H}^{d}(y,g^{d},\nabla g^{d},D^{2} g^{d}) = 0\) becomes

$$\begin{aligned} 0 &= (g^{d} )^{-1}+ \bigg(-by + b\theta + \frac{(1-\gamma )\rho a \mu y}{\gamma \sigma }\bigg) \frac{g^{d}_{y}}{g^{d}} + \frac{Ayg^{d}_{yy}}{2g^{d}} \\ &\phantom{=:}+ \frac{(1-\gamma )\mu ^{2} y}{2\gamma ^{2}\Sigma } - \frac{\beta }{\gamma } + \frac{(1-\gamma )r}{\gamma }. \end{aligned}$$

Since \(\gamma >1\), we have \(\frac{\beta }{\gamma } + \frac{(\gamma -1)r}{\gamma }>0\) and \(\frac{(\gamma -1)\mu ^{2} }{2\gamma ^{2}\Sigma } > 0\), so Lemma 5.1 implies that \(g^{d}(y)\) defined in Proposition 5.2 is the solution to the above ODE.

For the martingale problem for \(\bar{L}^{d}\) in assumption (i), the corresponding SDE for \(Y\) is

$$ dY_{t} = (b\theta -\phi ^{d} Y_{t} )dt + a\sqrt{Y_{t}}d\bar{W}_{t}, $$
(A.20)

where \(\phi ^{d} = b - \frac{(1-\gamma )\rho a \mu }{\gamma \sigma } - ( \gamma + (1-\gamma )\rho ^{2}) A \frac{g^{d}_{y}}{g^{d}}>0\) because \(0\leq B(t) \leq \frac{2c}{k+\alpha }\) and \(0\geq \frac{g^{d}_{y}}{g^{d}} = \frac{\int _{0}^{\infty }-B(t)h(y,t)dt}{\int _{0}^{\infty }h(y,t)dt} \geq - \frac{2c}{k+\alpha }\). Thus \(b\theta - \phi ^{d} y\) and \(a\sqrt{y}\) are Lipschitz-continuous on \((\epsilon ,\infty )\) for any \(\epsilon >0\), and (A.20) has a unique weak solution there. Then, similarly to the argument for \(L^{Y}\), Lemma A.5 below shows that \(Y\) in (A.20) never hits 0 or \(\infty \), and so the martingale problem for \(\bar{L}^{d}\) has a unique solution.

For assumption (ii), let \(G(t) = \ln ((k+\alpha )e^{\alpha t} - k+\alpha ) - \ln 2\alpha - \frac{1}{2}(k+\alpha )t\). Since \(G(0) = 0\) and

$$\begin{aligned} G'(t) &= \frac{(k+\alpha )\alpha e^{\alpha t}}{(k+\alpha )e^{\alpha t}-k+\alpha } - \frac{1}{2}(k+\alpha ) \\ &= (\alpha -k)\left (\frac{1}{2} - \frac{\alpha }{(k+\alpha )e^{\alpha t}-k+\alpha }\right )\geq 0, \end{aligned}$$

we obtain

$$ C(t) = -\frac{2b\theta G(t)}{A} - Kt\leq -Kt = - \frac{\beta + (\gamma -1)r}{\gamma }t. $$

Therefore, as \(B(t) \geq 0\) and \(y > 0\),

$$ g^{d}(y)< \int _{0}^{\infty }e^{C(t)}dt < \int _{0}^{\infty }e^{- \frac{\beta + (\gamma -1)r}{\gamma }t}dt = \bigg( \frac{\beta + (\gamma -1)r}{\gamma }\bigg)^{-1} . $$

Thus \((g^{d})^{-1}\) is bounded from below by a positive constant, and \(\int _{0}^{\infty }g^{d}(Y_{t})^{-1}dt = \infty \) \(\bar{\mathbb{P}}^{d}\)-a.s.
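The bound can also be illustrated numerically by evaluating \(g^{d}(y)=\int _{0}^{\infty }e^{C(t)-B(t)y}\,dt\). The sketch below assumes the Riccati-type forms suggested by the bounds above, namely \(\alpha =\sqrt{k^{2}+2Ac}\) and \(B(t)=2c(e^{\alpha t}-1)/((k+\alpha )e^{\alpha t}-k+\alpha )\); these forms and all parameter values are assumptions for illustration, not taken from the paper.

```python
import math

# Hedged numerical illustration of the bound
#   g^d(y) < ((beta + (gamma-1)*r)/gamma)^{-1} = 1/K.
# The Riccati-type forms of alpha and B(t) and all parameter values below
# are assumptions for illustration, not taken from the paper.
b, theta, A, c, K = 1.0, 0.2, 0.25, 0.3, 0.15  # K = (beta+(gamma-1)r)/gamma
k = 0.8
alpha = math.sqrt(k * k + 2 * A * c)  # assumed: alpha = sqrt(k^2 + 2Ac) > k

def G(t):
    # G(t) exactly as defined in the text; G(0) = 0 and G' >= 0
    return (math.log((k + alpha) * math.exp(alpha * t) - k + alpha)
            - math.log(2 * alpha) - 0.5 * (k + alpha) * t)

def B(t):
    # assumed Riccati solution, satisfying 0 <= B(t) <= 2c/(k+alpha)
    e = math.exp(alpha * t)
    return 2 * c * (e - 1) / ((k + alpha) * e - k + alpha)

def C(t):
    # C(t) = -2*b*theta*G(t)/A - K*t <= -K*t, since G(t) >= 0
    return -(2 * b * theta / A) * G(t) - K * t

def g_d(y, T=150.0, n=60000):
    # trapezoid rule on [0, T]; the integrand decays at least like e^{-K t}
    dt = T / n
    total = 0.5 * (math.exp(C(0.0) - B(0.0) * y) + math.exp(C(T) - B(T) * y))
    for i in range(1, n):
        t = i * dt
        total += math.exp(C(t) - B(t) * y)
    return total * dt

bound = 1.0 / K
vals = [g_d(y) for y in (0.1, 1.0, 5.0)]
print(vals, bound)  # each value lies below 1/K, and g^d decreases in y
```

Since \(B(t)\geq 0\), the computed values decrease in \(y\) and all lie strictly below \(1/K\), matching the display above.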

For the additional assumptions in Theorem 4.2, first, from Lemma A.6 below, there exists a constant \(Q\) such that \(\bar{c} = \frac{(\gamma -1)\mu ^{2} }{2\gamma ^{2}\Sigma } - ( \gamma -1)(1-\rho ^{2})AQ>0\) and \(\frac{2\bar{c}^{2}}{(k+\bar{\alpha })^{2}} = Q\). For \(\phi (y) = \gamma (1-\rho ^{2})AQy\), the ODE \(\mathcal{H}^{d}(y,g^{p},\nabla g^{p},D^{2} g^{p}) - \frac{(1-\gamma )\phi }{\gamma }=0\) becomes

$$ 0= g^{p}(y)^{-1}+ \left (-by + b\theta + \frac{(1-\gamma )\rho a\mu y}{\gamma \sigma }\right ) \frac{g^{p}_{y}}{g^{p}} + \frac{Ayg^{p}_{yy}}{2g^{p}} -\bar{c}y - \frac{\beta }{\gamma } + \frac{(1-\gamma )r}{\gamma } . $$

Then since \(A>0\), \(\frac{\beta + (\gamma -1)r}{\gamma } > 0\) and \(\bar{c}>0\), Lemma 5.1 implies that \(g^{p}(y)\) defined in Proposition 5.2 is the solution of the above ODE.

Similarly to \(B(t)\), we get \(0\leq \bar{B}(t) \leq \frac{2\bar{c}}{k + \bar{\alpha }}\) and \(0\geq \frac{g^{p}_{y}}{g^{p}} \geq - \frac{2\bar{c}}{k + \bar{\alpha }}\). Then similarly to the argument for \(\bar{L}^{d}\) in assumption (i) in Theorem 4.1, Lemma A.5 below implies that the diffusion \(Y\) which follows \(dY_{t} = (b\theta -\phi ^{p} Y_{t}) dt + a\sqrt{Y_{t}}d\bar{W}_{t}\) with \(\phi ^{p} = b - \frac{(1-\gamma )\rho a \mu }{\gamma \sigma } - ( \gamma + (1-\gamma )\rho ^{2})A \frac{g^{p}_{y}}{g^{p}}\) never reaches 0 or \(\infty \). Thus \(Y\) has a unique weak solution and assumption (ii) holds.

Note that

$$ \phi (y) - \gamma \frac{ (\nabla g^{p} )' (A- \Upsilon '\Sigma ^{-1}\Upsilon )\nabla g^{p}}{2 (g^{p} )^{2} }y =\gamma (1 - \rho ^{2} )A\bigg(Q - \frac{1}{2}\Big( \frac{g^{p}_{y}}{g^{p}}\Big)^{2}\bigg)y\geq 0 , $$

and the right-hand side is nonnegative because \(0\geq \frac{g^{p}_{y}}{g^{p}} \geq -\frac{2\bar{c}}{k+\bar{\alpha }}\) and \(\frac{2\bar{c}^{2}}{(k+\bar{\alpha })^{2}} = Q\) give \(\frac{1}{2}\big(\frac{g^{p}_{y}}{g^{p}}\big)^{2}\leq Q\); moreover, we already know that assumption (i) holds. Finally, similarly to \(C(t)\), we obtain \(\bar{C}(t)\leq -\frac{\beta +(\gamma -1)r}{\gamma }t\), which implies \(\int _{0}^{\infty }g^{p}(Y_{t})^{-1}dt = \infty \) \(\bar{\mathbb{P}}^{p}\)-a.s., and so assumption (iii) holds. □

Lemma A.5

If \(b_{1}\leq b_{t} \leq b_{2}\) for all \(t\geq 0\) for two constants \(b_{1}\) and \(b_{2}>0\), \(\theta \geq \frac{a^{2}}{2}\), and the stochastic process \(Y\) satisfies \(dY_{t} = (\theta -b_{t} Y_{t})dt + a\sqrt{Y_{t}}dW_{t}\) with \(Y_{0}>0\), then \(Y\) never reaches 0 or explodes to \(\infty \).

Proof

To see that \(Y\) never reaches 0, note that by the comparison principle, \(Y\) is bounded below by the process \(Y_{2}\) which follows \(dY_{2t} = (\theta -b_{2}Y_{2t})dt + a\sqrt{Y_{2t}}dW_{t}\). Since \(\theta \geq \frac{a^{2}}{2}\), the Feller condition holds and \(Y_{2}\) never reaches 0, hence neither does \(Y\). For nonexplosion to \(\infty \), consider \(n\) Ornstein–Uhlenbeck processes \(X^{i}\), \(i=1,\dots ,n\), given by

$$ dX^{i}_{t} = -\frac{b_{1}}{2} X^{i}_{t}dt + \frac{a}{2}dW^{i}_{t}, $$

where \(W^{i}\), \(i=1,\dots ,n\), are \(n\) independent Brownian motions. Let \(\tilde{Y} = \sum _{i=1}^{n}(X^{i})^{2}\); then

$$ d\tilde{Y}_{t} = \left (\frac{nA}{4} - b_{1} \tilde{Y}_{t}\right )dt + a\sqrt{\tilde{Y}_{t}}\sum _{i=1}^{n} \frac{X^{i}_{t}}{\sqrt{\tilde{Y}_{t}}}dW^{i}_{t}. $$

Note that \(\int _{0}^{\cdot }\sum _{i=1}^{n}({X^{i}_{s}}/{\sqrt{\tilde{Y}_{s}}})dW^{i}_{s}\) is a continuous local martingale starting from 0, and since \(\sum _{i=1}^{n}{(X^{i}_{t})^{2}}/{\tilde{Y}_{t}} = 1\), its quadratic variation is \(t\). Thus by Lévy’s theorem, it is a Brownian motion. Now let \(n\) be large enough such that \(\frac{nA}{4}\geq \theta \). Then by the comparison principle, \(\tilde{Y}\) with dynamics

$$ d\tilde{Y}_{t} = \left (\frac{nA}{4} - b_{1} \tilde{Y}_{t}\right )dt + a\sqrt{\tilde{Y}_{t}}dW_{t} $$

dominates \(Y_{1}\) satisfying

$$ dY_{1t} = \left (\theta -b_{1}Y_{1t}\right )dt + a\sqrt{Y_{1t}}dW_{t}, $$

which in turn dominates \(Y\). Since \(\tilde{Y}\) is the sum of squares of \(n\) independent Ornstein–Uhlenbeck processes, which are Gaussian, \(\tilde{Y}\) never explodes to \(\infty \), and so neither does \(Y\). □
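The comparison construction in this proof can be checked by simulation: summing the squares of \(n\) independent Ornstein–Uhlenbeck processes (simulated with their exact Gaussian transition) yields a CIR-type process that stays finite. The sketch below is an illustration with hypothetical parameters, not part of the paper.

```python
import math
import random

# Hedged simulation sketch of the construction in Lemma A.5: the sum of
# squares of n independent OU processes dX^i = -(b1/2) X^i dt + (a/2) dW^i
# is a CIR-type process with drift n*a^2/4 - b1*Y. Each X^i is Gaussian
# (hence finite a.s.), so the sum of squares never explodes. Parameters are
# hypothetical; exact OU transitions avoid discretization bias.
def squared_ou_sum(n=6, b1=1.0, a=0.8, T=50.0, steps=5000, seed=2):
    random.seed(seed)
    dt = T / steps
    decay = math.exp(-b1 * dt / 2.0)  # exact OU mean decay over one step
    sd = math.sqrt((a * a / 4.0) * (1.0 - math.exp(-b1 * dt)) / b1)  # exact step sd
    x = [0.0] * n
    avg, y_max = 0.0, 0.0
    for _ in range(steps):
        x = [xi * decay + sd * random.gauss(0.0, 1.0) for xi in x]
        y = sum(xi * xi for xi in x)  # the CIR-type process Y~ = sum (X^i)^2
        avg += y / steps
        y_max = max(y_max, y)
    return avg, y_max

avg, y_max = squared_ou_sum()
# stationary mean of the sum of squares is n*a^2/(4*b1) = 0.96 here
print(avg, y_max)
```

The path maximum stays bounded and the time average hovers near the stationary mean \(na^{2}/(4b_{1})\), consistent with the nonexplosion claim.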

Lemma A.6

For the model (5.1) and (5.2), there exists a constant\(\hat{Q} > 0\)such that\(\bar{c} >0\)and\(\frac{2\bar{c}^{2}}{(k+\bar{\alpha })^{2}} = \hat{Q}\), where\(\bar{c}\), \(k\)and\(\bar{\alpha }\)are defined in Proposition 5.2.

Proof

Consider \(U(Q) = \frac{2\bar{c}^{2}}{(k+\bar{\alpha })^{2}} - Q\) and note from Proposition 5.2 that the term \(\bar{c}= \frac{(\gamma -1)\mu ^{2}}{2\gamma ^{2}\Sigma }- (\gamma -1)A(1-\rho ^{2})Q\) depends on \(Q\). For \(Q = 0\), \(\bar{c} = \frac{(\gamma -1)\mu ^{2}}{2\gamma ^{2}\Sigma } > 0\) implies that \(U(0) = \frac{2\bar{c}^{2}}{(k+\bar{\alpha })^{2}}>0\), and when \(Q = \frac{\mu ^{2}}{2(1-\rho ^{2})\gamma ^{2}\Sigma A}> 0\), then \(\bar{c}=0\) and \(U(Q) = -Q < 0\). Since \(Q \mapsto U(Q)\) is continuous, there exists a constant \(\hat{Q}\) between 0 and \(\frac{\mu ^{2}}{2(1-\rho ^{2})\gamma ^{2}\Sigma A}\) such that \(U(\hat{Q}) = 0\), and since \(\bar{c}\) is strictly decreasing in \(Q\), we have \(\bar{c}(\hat{Q})>0\). □
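The fixed-point argument is constructive: \(\hat{Q}\) can be located by bisection on \([0, \frac{\mu ^{2}}{2(1-\rho ^{2})\gamma ^{2}\Sigma A}]\), where \(U\) changes sign. The sketch below assumes the Riccati form \(\bar{\alpha } = \sqrt{k^{2}+2A\bar{c}}\) (consistent with the bound \(\bar{B}(t)\leq \frac{2\bar{c}}{k+\bar{\alpha }}\)) and uses hypothetical parameter values.

```python
import math

# Hedged sketch of Lemma A.6's fixed point: bisection for Q_hat with
# U(Q_hat) = 0. We assume alpha_bar = sqrt(k^2 + 2*A*c_bar), consistent with
# the bound B_bar(t) <= 2*c_bar/(k+alpha_bar); parameter values are hypothetical.
gamma, mu, Sigma, A, rho, k = 2.0, 0.3, 0.04, 0.25, 0.5, 0.8

def c_bar(Q):
    # c_bar = (gamma-1)*mu^2/(2*gamma^2*Sigma) - (gamma-1)*A*(1-rho^2)*Q
    return ((gamma - 1) * mu ** 2 / (2 * gamma ** 2 * Sigma)
            - (gamma - 1) * A * (1 - rho ** 2) * Q)

def U(Q):
    c = c_bar(Q)
    alpha_bar = math.sqrt(k * k + 2 * A * c)  # assumed Riccati form
    return 2 * c * c / (k + alpha_bar) ** 2 - Q

# c_bar vanishes at Q_zero = mu^2/(2*(1-rho^2)*gamma^2*Sigma*A), so
# U(0) > 0 > U(Q_zero), exactly the sign change used in the proof
q_lo, q_hi = 0.0, mu ** 2 / (2 * (1 - rho ** 2) * gamma ** 2 * Sigma * A)
assert U(q_lo) > 0 > U(q_hi)
for _ in range(60):  # bisection: U is continuous and decreasing in Q
    q_mid = 0.5 * (q_lo + q_hi)
    if U(q_mid) > 0:
        q_lo = q_mid
    else:
        q_hi = q_mid
Q_hat = 0.5 * (q_lo + q_hi)
print(Q_hat, c_bar(Q_hat))  # root with c_bar(Q_hat) > 0
```

Since \(\bar{c}\) is strictly decreasing in \(Q\) and vanishes only at the right endpoint, the computed root satisfies \(\bar{c}(\hat{Q})>0\), as the proof asserts.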

Cite this article

Guasoni, P., Wang, G. Consumption in incomplete markets. Finance Stoch 24, 383–422 (2020). https://doi.org/10.1007/s00780-020-00420-9
