Abstract
We study the well-posedness of Hamilton–Jacobi–Bellman equations on subsets of \({\mathbb {R}}^d\) in a context without boundary conditions. The Hamiltonian is given as the supremum over two parts: an internal Hamiltonian depending on an external control variable and a cost functional penalizing the control. The key feature in this paper is that the control function can be unbounded and discontinuous. This way we can treat functionals that appear e.g. in the Donsker–Varadhan theory of large deviations for occupation-time measures. To allow for this flexibility, we assume that the internal Hamiltonian and cost functional have controlled growth, and that they satisfy an equi-continuity estimate uniformly over compact sets in the space of controls. In addition to establishing the comparison principle for the Hamilton–Jacobi–Bellman equation, we also prove existence, the viscosity solution being the value function with exponentially discounted running costs. As an application, we verify the conditions on the internal Hamiltonian and cost functional in two examples.
1 Introduction and aim of this note
The main purpose of this note is to establish well-posedness for first-order nonlinear partial differential equations of Hamilton–Jacobi–Bellman type on subsets E of \({\mathbb {R}}^d\),
in the context without boundary conditions and where the Hamiltonian flow generated by \({\mathcal {H}}\) remains inside E. In (HJB), \(\lambda > 0\) is a scalar and h is a continuous and bounded function. The Hamiltonian \({\mathcal {H}}:E\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}\) is given by
where \(\theta \in \Theta \) plays the role of a control variable. For fixed \(\theta \), the function \(\Lambda \) can be interpreted as a Hamiltonian in its own right; we call it the internal Hamiltonian. The function \({\mathcal {I}}\) can be interpreted as the cost of applying the control \(\theta \).
The main result of this paper is the comparison principle for (HJB) in order to establish uniqueness of viscosity solutions. The standard assumption in the literature that allows one to obtain the comparison principle in the context of optimal control problems (e.g. [2] for the first order case and [10] for the second order case) is that either there is a modulus of continuity \(\omega \) such that
or that \({\mathcal {H}}\) is uniformly coercive:
More generally, the two estimates (1.2) and (1.3) can be combined into a single estimate, called pseudo-coercivity, see [4, (H4), Page 34]. It exploits the fact that the sub- and supersolution properties roughly imply that the estimate (1.2) only needs to hold for appropriately chosen x, y and p over which \({\mathcal {H}}\) is uniformly finite.
In the Hamilton–Jacobi–Bellman context, the comparison principle is typically obtained by translating (1.2) into conditions for \(\Lambda \) and \({\mathcal {I}}\) of (1.1), which include (e.g. [2, Chapter III])
-
(I)
\(|\Lambda (x,p,\theta )-\Lambda (y,p,\theta )|\le \omega _\Lambda (|x-y|(1+|p|))\), uniformly in \(\theta \), and
-
(II)
\({\mathcal {I}}\) is bounded, continuous and \(|{\mathcal {I}}(x,\theta ) - {\mathcal {I}}(y,\theta )| \le \omega _{\mathcal {I}}(|x-y|)\) for all \(\theta \).
The pseudo-coercivity property is harder to translate, as control on \({\mathcal {H}}\) does not necessarily imply the same control on \(\Lambda \), in particular when \({\mathcal {I}}\) is unbounded. We return to this issue below.
The estimates (I) and (II) are not satisfied for Hamiltonians arising from natural examples in the theory of large deviations [12, 13] for Markov processes with two scales (see e.g. [6, 18, 27, 29] for PDEs arising from large deviations with two scales, and [3, 16, 17, 20, 21] for other works connecting PDEs with large deviations). Indeed, in [6] the authors mention that well-posedness of the Hamilton–Jacobi–Bellman equation for examples arising from large deviation theory is an open problem. Recent generalizations of the coercivity condition, see e.g. [9], also do not cover these examples.
In the large deviation context, however, we typically know that we have the comparison principle for the Hamilton–Jacobi equation in terms of \(\Lambda \). In addition, even though \({\mathcal {I}}\) might be discontinuous, we do have other types of regularity for the functional \({\mathcal {I}}\), see e.g. [32]. Thus, we aim to prove a comparison principle for (HJB) on the basis of the assumption that we have the following natural relaxations of (or the pseudo-coercive version of) (I) and (II).
-
(i)
For \(\theta \in \Theta \), define the Hamiltonian \({\mathcal {H}}_\theta (x,p):= \Lambda (x,p,\theta )\). We have an estimate on \({\mathcal {H}}_\theta \) that is uniform over \(\theta \) in compact sets \(K \subseteq \Theta \). This estimate, for one fixed \(\theta \), is in spirit similar to the pseudo-coercivity estimate of [4] and is morally equivalent to the comparison principle for \({\mathcal {H}}_\theta \). The uniformity is made rigorous as the continuity estimate in Assumption 2.14 (\(\Lambda 5\)) below.
-
(ii)
The cost functional \({\mathcal {I}}(x,\theta )\) satisfies an equi-continuity estimate of the type \(|{\mathcal {I}}(x,\theta ) - {\mathcal {I}}(y,\theta )| \le \omega _{{\mathcal {I}},C}(|x-y|)\) on sublevel sets \(\{{\mathcal {I}} \le C\}\) which we assume to be compact. This estimate is made rigorous in Assumption 2.15 (\({\mathcal {I}}5\)) below.
To work with these relaxations, we introduce a procedure that allows us to restrict our analysis to compact sets in the space of controls. In the proof of the comparison principle, the sub- and supersolution properties give boundedness of \({\mathcal {H}}\) when evaluated in optimizing points. We then translate this boundedness to boundedness of \({\mathcal {I}}\), which implies that the controls lie in a compact set.
The transfer of control builds upon (i) for \(\Lambda (x,p,\theta _{x}^0)\) when we use a control \(\theta _{x}^0\) that satisfies \({\mathcal {I}}(x,\theta _{x}^0) = 0\). This we call the bootstrap procedure: we use the comparison principle for the Hamilton–Jacobi equation in terms of \(\Lambda (x,p,\theta _{x}^0)\) to shift the control on \({\mathcal {H}}\) to control on \(\Lambda \) and \({\mathcal {I}}\) for general \(\theta \). That way the comparison principle for the internal Hamiltonian \(\Lambda \) bootstraps to the comparison principle for the full Hamiltonian \({\mathcal {H}}\).
Clearly, this bootstrap argument does not come for free. We pose four additional assumptions:
-
(iii)
The function \(\Lambda \) grows roughly equally fast in p: For all compact sets \({\widehat{K}} \subseteq E\), there are constants \(M,C_1,C_2\) such that
$$\begin{aligned} \Lambda (x,p,\theta _1) \le \max \left\{ M,C_1 \Lambda (x,p,\theta _2) + C_2\right\} , \end{aligned}$$for all \(x\in {\widehat{K}}\), \(p \in {\mathbb {R}}^d, \, \theta _1,\theta _2 \in \Theta \).
-
(iv)
The function \({\mathcal {I}}\) grows roughly equally fast in x: For all \(x \in E\) and \(M \ge 0\) there exists an open neighbourhood U of x and constants \(M',C_1',C_2'\) such that
$$\begin{aligned} {\mathcal {I}}(y_1,\theta ) \le \max \{M',C_1' {\mathcal {I}}(y_2,\theta ) + C_2'\} \end{aligned}$$for all \(y_1,y_2 \in U\) and for all \(\theta \) such that \({\mathcal {I}}(x,\theta ) \le M\).
-
(v)
\({\mathcal {I}}\ge 0\) and for each \(x \in E\), there exists \(\theta _{x}^0\) such that \({\mathcal {I}}(x,\theta _x^0) = 0\).
-
(vi)
The functional \({\mathcal {I}}\) is equi-coercive in x: for any compact set \({\hat{K}} \subseteq E\) the set \(\bigcup _{x \in {\hat{K}}} \{\theta \, | \, {\mathcal {I}}(x,\theta ) \le C\}\) is compact.
These four assumptions are stated below as Assumptions 2.14 (\(\Lambda 4\)), 2.15 (\({\mathcal {I}}4\)), 2.15 (\({\mathcal {I}}2\)), and 2.15 (\({\mathcal {I}}3\)). To explain in more detail our argument, we give a sketch of the bootstrap procedure, which can be skipped on first reading. In this sketch, we refrain from performing localization arguments that are needed for non-compact E.
Sketch of the bootstrap argument
Let u and v be a sub- and supersolution to \(f - \lambda Hf = h\) respectively. We estimate \(\sup _x u(x) - v(x)\) by the classical doubling of variables, penalizing the distance between x and y by \(\alpha \Psi (x-y)\), and aim to send \(\alpha \rightarrow \infty \). Let \(x_\alpha ,y_\alpha \) denote the optimizers, and denote by \(p_\alpha \) the corresponding momentum \(p_\alpha = \alpha \partial _x \Psi (x_\alpha -y_\alpha )\). Let \(\theta _\alpha \) be a control such that \({\mathcal {H}}(x_\alpha ,p_\alpha ) = \Lambda (x_\alpha ,p_\alpha ,\theta _\alpha )-{\mathcal {I}}(x_\alpha ,\theta _\alpha )\) and let \(\theta _{\alpha }^0\) be a control such that \({\mathcal {I}}(y_\alpha ,\theta _{\alpha }^0) = 0\), which exists due to (v).
The supersolution property for v yields the following estimate that is uniform in \(\alpha > 0\)
Using (iii), we obtain a uniform estimate in \(\alpha \):
which will allow us to use (i) if we can show that the controls \(\theta _\alpha \) take their value in a compact set \(K \subseteq \Theta \). For this, it suffices by (vi) to establish
This, in fact, implies by (iv) that
so that we can also apply (ii). This, in combination with the application of (i) establishes the comparison principle for \(f - \lambda Hf = h\).
We are thus left to prove (1.6), which is where our bootstrap comes into play. The subsolution property for u yields the following estimate that is uniform in \(\alpha > 0\)
Thus, (1.6) follows if we can establish
which in turn (by (iii)) follows from
To establish this final estimate, note that
and that we have control on the first term by means of (1.4) and on the second term by the pseudo-coercivity estimate of (i) on \(\Lambda \) for the controls \(\theta _{\alpha }^0\) which lie in a compact set due to (vi). \(\square \)
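The mechanics of the doubling-of-variables step above can be illustrated numerically. In the sketch below, u, v and the quadratic penalization \(\Psi (z) = z^2/2\) are toy choices on \(E = [0,1]\), not the sub- and supersolutions of the paper; the point is only to watch the optimizers \(x_\alpha ,y_\alpha \) merge as the penalization parameter \(\alpha \) grows.

```python
import numpy as np

# Toy doubling of variables: maximize u(x) - v(y) - alpha * Psi(x - y)
# over a grid, with Psi(z) = z^2 / 2.  As alpha grows, the optimizers
# x_alpha and y_alpha approach each other, as used in the sketch above.
u = lambda x: 1.0 - abs(x - 0.5)       # concave "subsolution-like" profile
v = lambda y: y ** 2                   # convex "supersolution-like" profile
grid = np.linspace(0.0, 1.0, 401)

def optimizers(alpha):
    """Return the grid points (x_alpha, y_alpha) maximizing the doubled objective."""
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    obj = u(X) - v(Y) - alpha * 0.5 * (X - Y) ** 2
    i, j = np.unravel_index(np.argmax(obj), obj.shape)
    return grid[i], grid[j]

for alpha in [1.0, 10.0, 100.0]:
    x_a, y_a = optimizers(alpha)
    print(alpha, x_a, y_a, abs(x_a - y_a))
```

The printed distances \(|x_\alpha - y_\alpha |\) shrink as \(\alpha \) increases, mirroring condition (C2) of the continuity estimate below.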
Thus, to summarize, we use the growth conditions posed on \(\Lambda \) and \({\mathcal {I}}\) and the pseudo-coercivity estimate for \(\Lambda \) to transfer the control on the full Hamiltonian \({\mathcal {H}}\) to the functions \(\Lambda \) and the cost function \({\mathcal {I}}\). Then the control on \(\Lambda \) and \({\mathcal {I}}\) allows us to apply the estimates (i) and (ii) to obtain the comparison principle.
Next to our main result, we also state for completeness an existence result in Theorem 2.8. The viscosity solution will be given in terms of a discounted control problem, as is typical in the literature, see e.g. [2, Chapter III]. Minor difficulties arise from working with a Hamiltonian \({\mathcal {H}}\) built from an irregular \({\mathcal {I}}\).
Finally, we show that the conditions (i) to (vi) are satisfied in two examples that arise from large deviation theory for two-scale processes. In our companion paper [26], we will use existence and uniqueness for (HJB) for these examples to obtain large deviation principles.
Illustration in the context of an example
As an illustrative example, we consider a Hamilton–Jacobi–Bellman equation that arises from the large deviations of the empirical measure-flux pair of weakly coupled Markov jump processes that are coupled to fast Brownian motion on the torus. We skip the probabilistic background of this problem (see [26]) and come to the set-up relevant for this paper.
Let \(G := \{1,\dots ,q\}\) be some finite set, and let \(\Gamma = \{(a,b) \in G^2 \, | \, a \ne b\}\) be the set of directed bonds. Let \(E := {\mathcal {P}}(G) \times [0,\infty )^\Gamma \), where \({\mathcal {P}}(G)\) is the set of probability measures on G. Let \(F = {\mathcal {P}}(S^1)\) be the set of probability measures on the one-dimensional torus. We introduce \(\Lambda \) and \({\mathcal {I}}\).
-
Let \(r : G \times G \times {\mathcal {P}}(G) \times {\mathcal {P}}(S^1) \rightarrow [0,\infty )\) be a function that encodes the \({\mathcal {P}}(G) \times {\mathcal {P}}(S^1)\)-dependent jump rate of the Markov jump process over each bond \((a,b) \in \Gamma \). The internal Hamiltonian \(\Lambda \) is given by
$$\begin{aligned} \Lambda (\mu ,p,\theta ) = \sum _{(a,b) \in \Gamma } \mu _a r(a,b,\mu ,\theta ) \left[ e^{p_b - p_a + p_{a,b}} - 1 \right] . \end{aligned}$$ -
Let \(\sigma ^2 : S^1 \times {\mathcal {P}}(G) \rightarrow (0,\infty )\) be a bounded and strictly positive function. The cost function \({\mathcal {I}}:E \times \Theta \rightarrow [0,\infty ]\) is given by
$$\begin{aligned} {\mathcal {I}}(\mu ,w,\theta ) = {\mathcal {I}}(\mu ,\theta ) = \sup _{\begin{array}{c} u\in C^\infty (S^1)\\ u > 0 \end{array}} \int _{S^1} \sigma ^2(y,\mu ) \left( -\frac{u''(y)}{u(y)}\right) \,\theta ({\mathrm {d}}y). \end{aligned}$$
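To see the formula for \(\Lambda \) in action, here is a minimal numerical sketch. The rate matrix `r` is a hypothetical stand-in for \(r(a,b,\mu ,\theta )\) evaluated at a fixed pair \((\mu ,\theta )\); note that the sketch recovers the normalization \(\Lambda (\mu ,0,\theta ) = 0\) required by Assumption 2.14 (\(\Lambda 2\)).

```python
import math

def internal_hamiltonian(mu, p_node, p_bond, r):
    """Lambda(mu, p, theta): sum over directed bonds (a, b), a != b, of
    mu_a * r[a][b] * (exp(p_b - p_a + p_{a,b}) - 1).

    mu     : occupation probabilities over G = {0, ..., q-1}
    p_node : momentum components p_a for the measure coordinate
    p_bond : momentum components p_{a,b} for the flux coordinates
    r      : jump rates, a stand-in for r(a, b, mu, theta) at fixed (mu, theta)
    """
    q = len(mu)
    total = 0.0
    for a in range(q):
        for b in range(q):
            if a != b:
                total += mu[a] * r[a][b] * (
                    math.exp(p_node[b] - p_node[a] + p_bond[a][b]) - 1.0
                )
    return total

# With p = 0 every exponent vanishes, recovering Lambda(mu, 0, theta) = 0.
mu = [0.3, 0.7]
r = [[0.0, 2.0], [1.0, 0.0]]  # hypothetical rates for q = 2 states
print(internal_hamiltonian(mu, [0.0, 0.0], [[0.0, 0.0], [0.0, 0.0]], r))  # 0.0
```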
Aiming for the comparison principle, we note that classical methods do not apply. The functionals \(\Lambda \) are not coercive and do not satisfy (I). We show in “Appendix E” that they are also not pseudo-coercive as defined in [4]. The functional \({\mathcal {I}}\) is neither continuous nor bounded: one can check, for example, that if \(\theta \) is a finite combination of Dirac measures, then \({\mathcal {I}}(\mu ,\theta ) = \infty \).
We show in Sect. 5, however, that (i) to (vi) hold, implying the comparison principle for the Hamilton–Jacobi–Bellman equations. The verification of these properties is based in part on results from [23, 32].
Summary and overview of the paper
To summarize, our novel bootstrap procedure allows us to treat Hamilton–Jacobi–Bellman equations where:
-
We assume that the cost function \({\mathcal {I}}\) satisfies some regularity conditions on its sublevel sets, but allow \({\mathcal {I}}\) to be unbounded and discontinuous.
-
We assume that \(\Lambda \) satisfies the continuity estimate uniformly for controls in compact sets, which in spirit extends the pseudo-coercivity estimate of [4]. In particular, \(\Lambda \) may be non-coercive, non-pseudo-coercive and non-Lipschitz, as exhibited in our example above.
In particular, allowing discontinuity in \({\mathcal {I}}\) lets us treat the comparison principle for examples like the one considered above, which so far has been out of reach. We believe that the bootstrap procedure introduced in this note has the potential to apply to second-order equations or equations in infinite dimensions as well. Of interest would be, for example, an extension of the results of [10], who work with continuous \({\mathcal {I}}\). For clarity of exposition, and given the already numerous applications in this setting, we stick to the finite-dimensional first-order case. We believe that the key arguments used in the proof in Sect. 3 do not depend in a crucial way on this restriction.
The paper is organized as follows. The main results are formulated in Sect. 2. In Sect. 3 we establish the comparison principle. In Sect. 4 we establish that a resolvent operator \(R(\lambda )\) in terms of an exponentially discounted control problem gives rise to viscosity solutions of the Hamilton–Jacobi–Bellman equation (HJB). Finally, in Sect. 5 we treat two examples including the one mentioned in the introduction.
2 Main results
In this section, we start with preliminaries in Sect. 2.1, which includes the definition of viscosity solutions and that of the comparison principle.
We proceed in Sect. 2.2 with the main results: a comparison principle for the Hamilton–Jacobi–Bellman equation (HJB) based on variational Hamiltonians of the form (1.1), and the existence of viscosity solutions. In Sect. 2.3 we collect all assumptions that are needed for the main results.
2.1 Preliminaries
For a Polish space \({\mathcal {X}}\) we denote by \(C({\mathcal {X}})\) and \(C_b({\mathcal {X}})\) the spaces of continuous and bounded continuous functions respectively. If \({\mathcal {X}}\subseteq {\mathbb {R}}^d\) then we denote by \(C_c^\infty ({\mathcal {X}})\) the space of smooth functions that vanish outside a compact set. We denote by \(C_{cc}^\infty ({\mathcal {X}})\) the set of smooth functions that are constant outside of a compact set in \({\mathcal {X}}\), and by \({\mathcal {P}}({\mathcal {X}})\) the space of probability measures on \({\mathcal {X}}\). We equip \({\mathcal {P}}({\mathcal {X}})\) with the weak topology induced by convergence of integrals against bounded continuous functions.
Throughout the paper, E will be the set on which we base our Hamilton–Jacobi equations. We assume that E is a subset of \({\mathbb {R}}^d\) that is a Polish space which is contained in the \({\mathbb {R}}^d\) closure of its \({\mathbb {R}}^d\) interior. This ensures that gradients of functions are determined by their values on E. Note that we do not necessarily assume that E is open. We assume that the space of controls \(\Theta \) is Polish.
We next introduce viscosity solutions for the Hamilton–Jacobi equation with Hamiltonians like \({\mathcal {H}}(x,p)\) of our introduction.
Definition 2.1
(Viscosity solutions and comparison principle) Let \(A : {\mathcal {D}}(A) \subseteq C_b(E) \rightarrow C_b(E)\) be an operator with domain \({\mathcal {D}}(A)\), \(\lambda > 0\) and \(h \in C_b(E)\). Consider the Hamilton–Jacobi equation
We say that u is a (viscosity) subsolution of equation (2.1) if u is bounded from above, upper semi-continuous and if, for every \(f \in {\mathcal {D}}(A)\) there exists a sequence \(x_n \in E\) such that
We say that v is a (viscosity) supersolution of Eq. (2.1) if v is bounded from below, lower semi-continuous and if, for every \(f \in {\mathcal {D}}(A)\) there exists a sequence \(x_n \in E\) such that
We say that u is a (viscosity) solution of Eq. (2.1) if it is both a subsolution and a supersolution to (2.1). We say that (2.1) satisfies the comparison principle if for every subsolution u and supersolution v to (2.1), we have \(u \le v\).
Remark 2.2
(Uniqueness) If u and v are two viscosity solutions of (2.1), then we have \(u\le v\) and \(v\le u\) by the comparison principle, giving uniqueness.
Remark 2.3
Consider the definition of subsolutions. If the test function \(f \in {\mathcal {D}}(A)\) has compact sublevel sets, then instead of working with a sequence \(x_n\), there exists \(x_0 \in E\) such that
A similar simplification holds in the case of supersolutions.
Remark 2.4
For an explanatory text on the notion of viscosity solutions and fields of applications, we refer to [8].
Remark 2.5
At present, we refrain from working with unbounded viscosity solutions as we use the upper bound on subsolutions and the lower bound on supersolutions in the proof of Theorem 2.6. We can, however, imagine that the methods presented in this paper can be generalized if u and v grow slower than the containment function \(\Upsilon \) that will be defined below in Definition 2.13.
2.2 Main results: comparison and existence
In this section, we state our main results: the comparison principle in Theorem 2.6, and existence of solutions in Theorem 2.8.
Consider the variational Hamiltonian \({\mathcal {H}}: E \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) given by
The precise assumptions on the maps \(\Lambda \) and \({\mathcal {I}}\) are formulated in Sect. 2.3.
Theorem 2.6
(Comparison principle) Consider the map \({\mathcal {H}}: E \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) as in (2.2). Suppose that Assumptions 2.14 and 2.15 are satisfied for \(\Lambda \) and \({\mathcal {I}}\). Define the operator \({\mathbf {H}}f(x) := {\mathcal {H}}(x,\nabla f(x))\) with domain \({\mathcal {D}}({\mathbf {H}}) = C_{cc}^\infty (E)\). Then:
-
(a)
For any \(f \in {\mathcal {D}}({\mathbf {H}})\) the map \(x\mapsto {\mathbf {H}}f(x)\) is continuous.
-
(b)
For any \(h \in C_b(E)\) and \(\lambda > 0\), the comparison principle holds for
$$\begin{aligned} f - \lambda \, {\mathbf {H}}f = h. \end{aligned}$$(2.3)
Remark 2.7
(Domain) The comparison principle holds with any domain that satisfies \(C_{cc}^\infty (E)\subseteq {\mathcal {D}}({\mathbf {H}})\subseteq C^1_b(E)\). We state it with \(C^\infty _{cc}(E)\) to connect it with the existence result of Theorem 2.8, where we need to work with test functions whose gradients have compact support.
Consider the Legendre dual \({\mathcal {L}}: E \times {\mathbb {R}}^d \rightarrow [0,\infty ]\) of the Hamiltonian,
and denote the collection of absolutely continuous paths in E by \({\mathcal {A}}{\mathcal {C}}\).
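The Legendre dual \({\mathcal {L}}(x,v) = \sup _p \left\{ \langle p,v\rangle - {\mathcal {H}}(x,p)\right\} \) can be approximated numerically by a grid search over momenta. The quadratic Hamiltonian below is a toy stand-in, not one of the variational Hamiltonians of this paper; its dual is the quadratic Lagrangian \({\mathcal {L}}(v) = v^2/2\).

```python
# Grid-search approximation of the one-dimensional Legendre dual
# L(v) = sup_p { p * v - H(p) }.
def legendre_dual(hamiltonian, v, p_min=-10.0, p_max=10.0, n=20001):
    step = (p_max - p_min) / (n - 1)
    return max(
        (p_min + k * step) * v - hamiltonian(p_min + k * step) for k in range(n)
    )

# Toy quadratic Hamiltonian H(p) = p^2 / 2, whose exact dual is L(v) = v^2 / 2.
H = lambda p: 0.5 * p * p
print(round(legendre_dual(H, 3.0), 6))
```

For v = 3.0 the grid search returns a value close to the exact dual \(3^2/2 = 4.5\), with an error governed by the grid spacing.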
Theorem 2.8
(Existence of viscosity solution) Consider \({\mathcal {H}}: E \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) as in (2.2). Suppose that Assumptions 2.14 and 2.15 are satisfied for \(\Lambda \) and \({\mathcal {I}}\), and that \({\mathcal {H}}\) satisfies Assumption 2.17. For each \(\lambda > 0\), let \(R(\lambda )\) be the operator
Then \(R(\lambda )h\) is the unique viscosity solution to \(f - \lambda {\mathbf {H}}f = h\).
Remark 2.9
The form of the solution is typical, see for example Section III.2 in [2]. It is the value function obtained by an optimization problem with exponentially discounted cost. The difficulty of the proof of Theorem 2.8 lies in treating the irregular form of \({\mathcal {H}}\).
2.3 Assumptions
In this section, we formulate and comment on the assumptions imposed on the Hamiltonians defined in the previous sections. The key assumptions were already mentioned in the sketch of the bootstrap method in the introduction. To these, we add minor additional assumptions on the regularity of \(\Lambda \) and \({\mathcal {I}}\) in Assumptions 2.14 and 2.15. Finally, Assumption 2.17 will imply that even if E has a boundary, no boundary conditions are necessary for the construction of the viscosity solution.
We start with the continuity estimate for \(\Lambda \), which was briefly discussed in (i) in the introduction. To that end, we first introduce a function that is used in the typical argument that doubles the number of variables.
Definition 2.10
(Penalization function) We say that \(\Psi : E^2 \rightarrow [0,\infty )\) is a penalization function if \(\Psi \in C^1(E^2)\) and \(\Psi (x,y) = 0\) if and only if \(x = y\).
We will apply the definition below for \({\mathcal {G}}= \Lambda \).
Definition 2.11
(Continuity estimate) Let \(\Psi \) be a penalization function and let \({\mathcal {G}}: E \times {\mathbb {R}}^d\times \Theta \rightarrow {\mathbb {R}}\), \((x,p,\theta )\mapsto {\mathcal {G}}(x,p,\theta )\) be a function. Suppose that for each \(\varepsilon > 0\), there is a sequence of positive real numbers \(\alpha \rightarrow \infty \). For sake of readability, we suppress the dependence on \(\varepsilon \) in our notation.
Suppose that for each \(\varepsilon \) and \(\alpha \) we have variables \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\) in \(E^2\) and variables \(\theta _{\varepsilon ,\alpha }\) in \(\Theta \). We say that this collection is fundamental for \({\mathcal {G}}\) with respect to \(\Psi \) if:
-
(C1)
For each \(\varepsilon \), there are compact sets \(K_\varepsilon \subseteq E\) and \({\widehat{K}}_\varepsilon \subseteq \Theta \) such that for all \(\alpha \) we have \(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha } \in K_\varepsilon \) and \(\theta _{\varepsilon ,\alpha }\in {\widehat{K}}_\varepsilon \).
-
(C2)
For each \(\varepsilon > 0\), we have \(\lim _{\alpha \rightarrow \infty } \alpha \Psi (x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha }) = 0\). For any limit point \((x_\varepsilon ,y_\varepsilon )\) of \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\), we have \(\Psi (x_\varepsilon ,y_\varepsilon ) = 0\).
-
(C3)
We have for all \(\varepsilon > 0\)
$$\begin{aligned}&\sup _{\alpha } {\mathcal {G}}\left( y_{\varepsilon ,\alpha }, - \alpha (\nabla \Psi (x_{\varepsilon ,\alpha },\cdot ))(y_{\varepsilon ,\alpha }),\theta _{\varepsilon ,\alpha }\right) < \infty , \end{aligned}$$(2.4)$$\begin{aligned}&\inf _\alpha {\mathcal {G}}\left( x_{\varepsilon ,\alpha }, \alpha (\nabla \Psi (\cdot ,y_{\varepsilon ,\alpha }))(x_{\varepsilon ,\alpha }),\theta _{\varepsilon ,\alpha }\right) > - \infty . \end{aligned}$$(2.5)
We say that \({\mathcal {G}}\) satisfies the continuity estimate if for every fundamental collection of variables we have for each \(\varepsilon > 0\) that
Remark 2.12
In “Appendix C”, we state a slightly more general continuity estimate on the basis of two penalization functions. A proof of a comparison principle on the basis of two penalization functions was given in [23].
The continuity estimate is indeed exactly the estimate that one would perform when proving the comparison principle for the Hamilton–Jacobi equation in terms of the internal Hamiltonian (disregarding the control \(\theta \)). Typically, the control on \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\) that is assumed in (C1) and (C2) is obtained from choosing \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\) as optimizers in the doubling of variables procedure (see Lemma 3.5), and the control that is assumed in (C3) is obtained by using the viscosity sub- and supersolution properties in the proof of the comparison principle. The required restriction to compact sets in Lemma 3.5 is obtained by including in the test functions a containment function.
Definition 2.13
(Containment function) We say that a function \(\Upsilon : E \rightarrow [0,\infty ]\) is a containment function for \(\Lambda \) if \(\Upsilon \in C^1(E)\) and there is a constant \(c_\Upsilon \) such that
-
For every \(c \ge 0\), the set \(\{x \, | \, \Upsilon (x) \le c\}\) is compact;
-
We have \(\sup _\theta \sup _x \Lambda \left( x,\nabla \Upsilon (x),\theta \right) \le c_\Upsilon \).
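For orientation, a common choice of containment function in the literature, when \(E = {\mathbb {R}}^d\) and \(\Lambda \) has suitably controlled growth in p, is the following; this is an illustration, not an assumption of the paper:

```latex
% Compact sublevel sets, and a gradient decaying like |x|^{-1}:
\Upsilon(x) = \tfrac{1}{2}\log\bigl(1 + |x|^2\bigr),
\qquad
\nabla \Upsilon(x) = \frac{x}{1 + |x|^2}.
```

Because \(\nabla \Upsilon \) is bounded and vanishes at infinity, \(\sup _\theta \sup _x \Lambda (x,\nabla \Upsilon (x),\theta )\) can be finite even when \(\Lambda (x,p,\theta )\) grows in x for fixed p.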
To conclude, our assumption on \(\Lambda \) contains the continuity estimate, the controlled growth, the existence of a containment function and two regularity properties.
Assumption 2.14
The function \(\Lambda :E\times {\mathbb {R}}^d\times \Theta \rightarrow {\mathbb {R}}\) in the Hamiltonian (2.2) satisfies the following.
- (\(\Lambda 1\)):
-
The map \(\Lambda : E\times {\mathbb {R}}^d\times \Theta \rightarrow {\mathbb {R}}\) is continuous.
- (\(\Lambda 2\)):
-
For any \(x\in E\) and \(\theta \in \Theta \), the map \(p\mapsto \Lambda (x,p,\theta )\) is convex. We have \(\Lambda (x,0,\theta ) = 0\) for all \(x\in E\) and all \(\theta \in \Theta \).
- (\(\Lambda 3\)):
-
There exists a containment function \(\Upsilon : E \rightarrow [0,\infty )\) for \(\Lambda \) in the sense of Definition 2.13.
- (\(\Lambda 4\)):
-
For every compact set \(K \subseteq E\), there exist constants \(M, C_1, C_2 \ge 0\) such that for all \(x \in K\), \(p \in {\mathbb {R}}^d\) and all \(\theta _1,\theta _2\in \Theta \), we have
$$\begin{aligned} \Lambda (x,p,\theta _1) \le \max \left\{ M,C_1 \Lambda (x,p,\theta _2) + C_2\right\} . \end{aligned}$$ - (\(\Lambda 5\)):
-
The function \(\Lambda \) satisfies the continuity estimate in the sense of Definition 2.11, or in the extended sense of Definition C.2.
Our second main assumption is on the properties of \({\mathcal {I}}\). For a compact set \(K\subseteq E\) and a constant \(M\ge 0\), write
and
Assumption 2.15
The functional \({\mathcal {I}}:E\times \Theta \rightarrow [0,\infty ]\) in (2.2) satisfies the following.
- (\({\mathcal {I}}1\)):
-
The map \((x,\theta ) \mapsto {\mathcal {I}}(x,\theta )\) is lower semi-continuous on \(E \times \Theta \).
- (\({\mathcal {I}}2\)):
-
For any \(x\in E\), there exists a control \(\theta _{x}^0 \in \Theta \) such that \({\mathcal {I}}(x,\theta _{x}^0) = 0\).
- (\({\mathcal {I}}3\)):
-
For any compact set \(K \subseteq E\) and constant \(M \ge 0\) the set \(\Theta _{K,M}\) is compact.
- (\({\mathcal {I}}4\)):
-
For each \(x \in E\) and constant \(M \ge 0\), there exists an open neighbourhood \(U \subseteq E\) of x and constants \(M',C_1',C_2' \ge 0\) such that for all \(y_1,y_2 \in U\) and \(\theta \in \Theta _{\{x\},M}\) we have
$$\begin{aligned} {\mathcal {I}}(y_1,\theta ) \le \max \left\{ M', C_1'{\mathcal {I}}(y_2,\theta ) + C_2' \right\} . \end{aligned}$$ - (\({\mathcal {I}}5\)):
-
For every compact set \(K \subseteq E\) and each \(M \ge 0\) the collection of functions \(\{{\mathcal {I}}(\cdot ,\theta )\}_{\theta \in \Omega _{K,M}}\) is equicontinuous. That is: for all \(\varepsilon > 0\), there is a \(\delta > 0\) such that for all \(\theta \in \Omega _{K,M}\) and \(x,y \in K\) such that \(d(x,y) \le \delta \) we have \(|{\mathcal {I}}(x,\theta ) - {\mathcal {I}}(y,\theta )| \le \varepsilon \).
To establish the existence of viscosity solutions, we will impose one additional assumption. For a general convex functional \(p \mapsto \Phi (p)\) we denote
Definition 2.16
The tangent cone (sometimes also called the Bouligand contingent cone) to E in \({\mathbb {R}}^d\) at x is
Assumption 2.17
The set E is closed and convex. The map \(\Lambda \) is such that \(\partial _p \Lambda (x,p,\theta ) \subseteq T_E(x)\) for all \(x \in E\), \(p \in {\mathbb {R}}^d\) and \(\theta \in \Theta \).
In Lemma 4.1 we will show that the assumption implies that \(\partial _p {\mathcal {H}}(x,p) \subseteq T_E(x)\), which in turn implies that the solutions of the differential inclusion in terms of \(\partial _p {\mathcal {H}}(x,p)\) remain inside E. Motivated by our examples, we work with closed convex domains E. While in this context we can apply results from e.g. Deimling [11], we believe that similar results can be obtained in different contexts.
Remark 2.18
The statement that \(\partial _p {\mathcal {H}}(x,p) \subseteq T_E(x)\) is intuitively implied by the comparison principle for \({\mathbf {H}}\) and therefore, we expect it to hold in any setting for which Theorem 2.6 holds. Here, we argue in a simple case why this is to be expected. First of all, note that the comparison principle for \({\mathbf {H}}\) builds upon the maximum principle. Suppose that \(E = [0,1]\), \(f,g \in C^1_b(E)\) and suppose that \(f(0) - g(0) = \sup _x f(x) - g(x)\). As \(x=0\) is a boundary point, we conclude that \(f'(0) \le g'(0)\). If indeed the maximum principle holds, we must have
implying that \(p \mapsto {\mathcal {H}}(0,p)\) is increasing, in other words
3 The comparison principle
In this section, we establish Theorem 2.6. To establish the comparison principle for \(f - \lambda {\mathbf {H}}f = h\) we use the bootstrap method explained in the introduction. We start by a classical localization argument.
We carry out the localization argument by absorbing the containment function \(\Upsilon \) from Assumption 2.14 (\(\Lambda 3\)) into the test functions. This leads to two new operators, \(H_\dagger \) and \(H_\ddagger \), that serve as an upper and a lower bound for the true \({\mathbf {H}}\). We then show the comparison principle for the Hamilton–Jacobi equations in terms of these two new operators. To do so, we have to extend our notion of Hamilton–Jacobi equations and of the comparison principle. This extension is standard, but we include it for completeness in the appendix as Definition A.1.
This procedure allows us to clearly separate the reduction to compact sets on one hand, and the proof of the comparison principle on the basis of the bootstrap procedure on the other. Schematically, we will establish the following diagram:
In this diagram, an arrow connecting an operator A with an operator B with subscript 'sub' means that viscosity subsolutions of \(f - \lambda A f = h\) are also viscosity subsolutions of \(f - \lambda B f = h\), and similarly for arrows with subscript 'super'.
We introduce the operators \(H_\dagger \) and \(H_\ddagger \) in Sect. 3.1. The arrows are established in Sect. 3.2. Finally, we establish the comparison principle for \(H_\dagger \) and \(H_\ddagger \) in Sect. 3.3. Combined, these two results imply the comparison principle for \({\mathbf {H}}\).
Proof of Theorem 2.6
We start with the proof of (a). Let \(f \in {\mathcal {D}}({\mathbf {H}})\). Then \({\mathbf {H}}f\) is continuous since by Proposition B.3 in “Appendix B”, the Hamiltonian \({\mathcal {H}}\) is continuous.
We proceed with the proof of (b). Fix \(h_1,h_2 \in C_b(E)\) and \(\lambda > 0\).
Let \(u_1,u_2\) be a viscosity sub- and supersolution to \(f - \lambda {\mathbf {H}}f = h_1\) and \(f - \lambda {\mathbf {H}}f = h_2\) respectively. By Lemma 3.3 proven in Sect. 3.2, \(u_1\) and \(u_2\) are a sub- and supersolution to \(f - \lambda H_\dagger f = h_1\) and \(f - \lambda H_\ddagger f = h_2\) respectively. Thus \(\sup _E u_1 - u_2 \le \sup _E h_1 - h_2\) by Proposition 3.4 of Sect. 3.3. Specialising to \(h_1=h_2\) gives Theorem 2.6. \(\square \)
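In particular, uniqueness follows from this estimate: if \(u_1\) and \(u_2\) are two viscosity solutions to \(f - \lambda {\mathbf {H}}f = h\), then each is both a sub- and a supersolution, and applying the estimate in both directions gives
$$\begin{aligned} \sup _E \left( u_1 - u_2 \right) \le \sup _E \left( h - h \right) = 0 \quad \text {and} \quad \sup _E \left( u_2 - u_1 \right) \le 0, \end{aligned}$$
so that \(u_1 = u_2\).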
3.1 Definition of auxiliary operators
In this section, we repeat the definition of \({\mathbf {H}}\), and introduce the operators \(H_\dagger \) and \(H_\ddagger \).
Definition 3.1
The operator \({\mathbf {H}}\subseteq C_b^1(E) \times C_b(E)\) has domain \({\mathcal {D}}({\mathbf {H}}) = C_{cc}^\infty (E)\) and satisfies \({\mathbf {H}}f(x) = {\mathcal {H}}(x, \nabla f(x))\), where \({\mathcal {H}}\) is the map
We proceed by introducing \(H_\dagger \) and \(H_\ddagger \). Recall Assumption (\(\Lambda 3\)) and the constant \(C_\Upsilon := \sup _{\theta }\sup _x \Lambda (x,\nabla \Upsilon (x),\theta )\) therein. Denote by \(C_\ell ^\infty (E)\) the set of smooth functions on E that have a lower bound and by \(C_u^\infty (E)\) the set of smooth functions on E that have an upper bound.
Definition 3.2
(The operators \(H_\dagger \) and \(H_\ddagger \)) For \(f \in C_\ell ^\infty (E)\) and \(\varepsilon \in (0,1)\) set
and set
For \(f \in C_u^\infty (E)\) and \(\varepsilon \in (0,1)\) set
and set
3.2 Preliminary results
The operator \({\mathbf {H}}\) is related to \(H_\dagger , H_\ddagger \) by the following lemma.
Lemma 3.3
Fix \(\lambda > 0\) and \(h \in C_b(E)\).
-
(a)
Every subsolution to \(f - \lambda {\mathbf {H}}f = h\) is also a subsolution to \(f - \lambda H_\dagger f = h\).
-
(b)
Every supersolution to \(f - \lambda {\mathbf {H}}f = h\) is also a supersolution to \(f-\lambda H_\ddagger f=~h\).
We only prove (a) of Lemma 3.3, as the proof of (b) is analogous.
Proof
Fix \(\lambda > 0\) and \(h \in C_b(E)\). Let u be a subsolution to \(f - \lambda {\mathbf {H}}f = h\). We prove it is also a subsolution to \(f - \lambda H_\dagger f = h\).
Fix \(\varepsilon > 0\) and \(f\in C_\ell ^\infty (E)\), and let \((f^\varepsilon _\dagger ,H^\varepsilon _{\dagger ,f}) \in H_\dagger \) be as in Definition 3.2. We will prove that there are \(x_n\in E\) such that
As the function \(\left[ u -(1-\varepsilon )f\right] \) is bounded from above and \(\varepsilon \Upsilon \) has compact sublevel sets, the sequence \(x_n\) along which the first limit is attained can be assumed to lie in the compact set
Set \(M = \varepsilon ^{-1} \sup _x \left( u(x) - (1-\varepsilon )f(x) \right) \). Let \(\gamma : {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a smooth increasing function such that
Denote by \(f_\varepsilon \) the function on E defined by
By construction \(f_\varepsilon \) is smooth and constant outside of a compact set and thus lies in \({\mathcal {D}}({\mathbf {H}}) = C_{cc}^\infty (E)\). As u is a viscosity subsolution for \(f - \lambda {\mathbf {H}}f = h\), there exists a sequence \(x_n \in K \subseteq E\) (by our choice of K) with
As \(f_\varepsilon \) equals \(f_\dagger ^\varepsilon \) on K, we have from (3.3) that also
establishing (3.1). Convexity of \(p \mapsto {\mathcal {H}}(x,p)\) yields for arbitrary points \(x\in K\) the estimate
Combining this inequality with (3.4) yields
establishing (3.2). This concludes the proof. \(\square \)
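For later reference, we spell out the generic form of the convexity estimate used above: since \({\mathcal {I}} \ge 0\), the definition of \(C_\Upsilon \) gives \({\mathcal {H}}(x,\nabla \Upsilon (x)) \le \sup _\theta \Lambda (x,\nabla \Upsilon (x),\theta ) \le C_\Upsilon \), so that convexity of \(p \mapsto {\mathcal {H}}(x,p)\) yields, for any \(\varepsilon \in (0,1)\), \(x \in E\) and \(p \in {\mathbb {R}}^d\),
$$\begin{aligned} {\mathcal {H}}\left( x, (1-\varepsilon ) p + \varepsilon \nabla \Upsilon (x) \right) \le (1-\varepsilon ) {\mathcal {H}}(x,p) + \varepsilon {\mathcal {H}}(x,\nabla \Upsilon (x)) \le (1-\varepsilon ) {\mathcal {H}}(x,p) + \varepsilon C_\Upsilon . \end{aligned}$$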
3.3 The comparison principle
In this section, we prove the comparison principle for the operators \(H_\dagger \) and \(H_\ddagger \).
Proposition 3.4
Fix \(\lambda > 0\) and \(h_1,h_2 \in C_b(E)\). Let \(u_1\) be a viscosity subsolution to \(f - \lambda H_\dagger f = h_1\) and let \(u_2\) be a viscosity supersolution to \(f - \lambda H_\ddagger f = h_2\). Then we have \(\sup _x u_1(x) - u_2(x) \le \sup _x h_1(x) - h_2(x)\).
The proof uses a variant of a classical estimate that was proven e.g. in [8, Proposition 3.7] or in the present form in Proposition A.11 of [7].
Lemma 3.5
Let u be bounded and upper semi-continuous, let v be bounded and lower semi-continuous, let \(\Psi : E^2 \rightarrow {\mathbb {R}}^+\) be a penalization function and let \(\Upsilon \) be a containment function.
Fix \(\varepsilon > 0\). For every \(\alpha >0\) there exist \(x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon } \in E\) such that
Additionally, for every \(\varepsilon > 0\) we have that
-
(a)
The set \(\{x_{\alpha ,\varepsilon }, y_{\alpha ,\varepsilon } \, | \, \alpha > 0\}\) is relatively compact in E.
-
(b)
All limit points of \(\{(x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon })\}_{\alpha > 0}\) as \(\alpha \rightarrow \infty \) are of the form (z, z) and for these limit points we have \(u(z) - v(z) = \sup _{x \in E} \left\{ u(x) - v(x) \right\} \).
-
(c)
We have
$$\begin{aligned} \lim _{\alpha \rightarrow \infty } \alpha \Psi (x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon }) = 0. \end{aligned}$$
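For orientation: the points \((x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon })\) arise as optimizers of the classical doubling-of-variables functional; in sketch form (the precise weights in front of the containment terms may differ), they optimize
$$\begin{aligned} (x,y) \mapsto u(x) - v(y) - \alpha \Psi (x,y) - \varepsilon \left( \Upsilon (x) + \Upsilon (y) \right) , \end{aligned}$$
where the penalization \(\alpha \Psi \) forces the optimizers together as \(\alpha \rightarrow \infty \), while the containment terms \(\varepsilon \Upsilon \) keep them in a compact set.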
Proof of Proposition 3.4
Fix \(\lambda >0\) and \(h_1,h_2 \in C_b(E)\). Let \(u_1\) be a viscosity subsolution and \(u_2\) be a viscosity supersolution of \(f - \lambda H_\dagger f = h_1\) and \(f - \lambda H_\ddagger f = h_2\) respectively. We prove Proposition 3.4 in five steps, of which the first two are classical.
We sketch the steps, before giving full proofs.
\(\underline{Step 1 }\): We prove that for \(\varepsilon > 0 \) and \(\alpha > 0\), there exist points \(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha } \in E\) satisfying the properties listed in Lemma 3.5 and momenta \(p_{\varepsilon ,\alpha }^1,p_{\varepsilon ,\alpha }^2 \in {\mathbb {R}}^d\) such that
and
This step is solely based on the sub- and supersolution properties of \(u_1,u_2\), the continuous differentiability of the penalization function \(\Psi (x,y)\), the containment function \(\Upsilon \), and convexity of \(p \mapsto {\mathcal {H}}(x,p)\). We conclude it suffices to establish for each \(\varepsilon > 0\) that
\(\underline{Step 2 }:\) We will show that there are controls \(\theta _{\varepsilon ,\alpha }\) such that
As a consequence we have
To establish (3.7), it is sufficient to bound the differences in (3.9) by using Assumptions 2.14 (\(\Lambda 5\)) and 2.15 (\({\mathcal {I}}5\)).
\(\underline{Step 3 }\): We verify the conditions to apply the continuity estimate, Assumption 2.14 (\(\Lambda 5\)).
The bootstrap argument allows us to find for each \(\varepsilon \) a subsequence \(\alpha = \alpha (\varepsilon ) \rightarrow \infty \) such that the variables \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\) are fundamental for \(\Lambda \) with respect to \(\Psi \) (see Definition 2.11).
\(\underline{Step 4 }:\) We verify the conditions to apply the estimate on \({\mathcal {I}}\), Assumption 2.15 (\({\mathcal {I}}5\)).
\(\underline{Step 5 }:\) Using the outcomes of Steps 3 and 4, we can apply the continuity estimate of Assumption 2.14 (\(\Lambda 5\)) and the equi-continuity of Assumption 2.15 (\({\mathcal {I}}5\)) to estimate (3.9) for any \(\varepsilon \):
which establishes (3.7) and thus also the comparison principle.
We proceed with the proofs of the first four steps, as the fifth step is immediate.
\(\underline{Proof of Step 1 }\): The proof of this first step is classical. We include it for completeness. For any \(\varepsilon > 0\) and any \(\alpha > 0\), define the map \(\Phi _{\varepsilon ,\alpha }: E \times E \rightarrow {\mathbb {R}}\) by
Let \(\varepsilon > 0\). By Lemma 3.5, there is a compact set \(K_\varepsilon \subseteq E\) and there exist points \(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha } \in K_\varepsilon \) such that
and
As in the proof of Proposition A.11 of [23], it follows that
At this point, we want to use the sub- and supersolution properties of \(u_1\) and \(u_2\). Define the test functions \(\varphi ^{\varepsilon ,\alpha }_1 \in {\mathcal {D}}(H_\dagger ), \varphi ^{\varepsilon ,\alpha }_2 \in {\mathcal {D}}(H_\ddagger )\) by
Using (3.11), we find that \(u_1 - \varphi ^{\varepsilon ,\alpha }_1\) attains its supremum at \(x = x_{\varepsilon ,\alpha }\), and thus
Denote \(p_{\varepsilon ,\alpha }^1 := \alpha \nabla _x \Psi (x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\). By our addition of the penalization \((x-x_{\varepsilon ,\alpha })^2\) to the test function, the point \(x_{\varepsilon ,\alpha }\) is in fact the unique optimizer, and we obtain from the subsolution inequality that
With a similar argument for \(u_2\) and \(\varphi ^{\varepsilon ,\alpha }_2\), we obtain by the supersolution inequality that
where \(p_{\varepsilon ,\alpha }^2 := -\alpha \nabla _y \Psi (x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\). With that, estimating further in (3.13) leads to
Thus, (3.6) in Step 1 follows.
\(\underline{Proof of Step 2 }\): Recall that \({\mathcal {H}}(x,p)\) is given by
Since \(\Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\cdot ) : \Theta \rightarrow {\mathbb {R}}\) is bounded and continuous by (\(\Lambda 1\)) and (\(\Lambda 4\)), and \({\mathcal {I}}(x_{\varepsilon ,\alpha },\cdot ) : \Theta \rightarrow [0,\infty ]\) has compact sub-level sets in \(\Theta \) by (\({\mathcal {I}}3\)), there exists an optimizer \(\theta _{\varepsilon ,\alpha } \in \Theta \) such that
Choosing the same point in the supremum of the second term \({\mathcal {H}}(y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha })\), we obtain for all \(\varepsilon > 0\) and \(\alpha > 0\) the estimate
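Explicitly, writing \(\theta ^* = \theta _{\varepsilon ,\alpha }\) and bounding the supremum defining \({\mathcal {H}}(y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha })\) from below by its value at \(\theta ^*\),
$$\begin{aligned} {\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha }) - {\mathcal {H}}(y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha })&\le \left[ \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta ^*) - {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta ^*) \right] - \left[ \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta ^*) - {\mathcal {I}}(y_{\varepsilon ,\alpha },\theta ^*) \right] \\&= \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta ^*) - \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta ^*) + {\mathcal {I}}(y_{\varepsilon ,\alpha },\theta ^*) - {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta ^*). \end{aligned}$$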
\(\underline{Proof of Step 3 }\): We will construct for each \(\varepsilon > 0\) a sequence \(\alpha = \alpha (\varepsilon ) \rightarrow \infty \) such that the collection \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\) is fundamental for \(\Lambda \) with respect to \(\Psi \) in the sense of Definition 2.11. We thus need to verify for each \(\varepsilon > 0\)
-
(i)
$$\begin{aligned} \inf _\alpha \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) > - \infty , \end{aligned}$$(3.18)
-
(ii)
$$\begin{aligned} \sup _{\alpha }\Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) < \infty \end{aligned}$$(3.19)
-
(iii)
The set of controls \(\theta _{\varepsilon ,\alpha }\) is relatively compact.
To prove (i), (ii) and (iii), we introduce auxiliary controls \(\theta _{\varepsilon ,\alpha }^0\), obtained by (\({\mathcal {I}}2\)), satisfying
We will first establish (i) and (ii) for all \(\alpha \). Then, for the proof of (iii), we will construct for each \(\varepsilon > 0\) a suitable subsequence \(\alpha \rightarrow \infty \).
\(\underline{Proof of Step 3, (i) and (ii) }:\)
We first establish (i). By the subsolution inequality (3.14),
and the lower bound (3.18) follows.
We next establish (ii). By the supersolution inequality (3.15), we can estimate
and the upper bound (3.19) follows by Assumption 2.14 (\(\Lambda 4\)).
\(\underline{Proof of Step 3, (iii) }\): To prove (iii), it suffices by Assumption 2.15 (\({\mathcal {I}}3\)) to find for each \(\varepsilon > 0\) a subsequence \(\alpha \) such that
By (3.21), we have
We conclude that \(\sup _\alpha {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) < \infty \) is implied by
which by (\(\Lambda 4\)) is equivalent to
To perform this estimate, we first write
To estimate the second term, we aim to apply the continuity estimate for the controls \(\theta _{\varepsilon ,\alpha }^0\). To do so, we must establish that \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0)\) is fundamental for \(\Lambda \) with respect to \(\Psi \). By Assumption 2.15 (\({\mathcal {I}}3\)), for each \(\varepsilon \) the set of controls \(\theta _{\varepsilon ,\alpha }^0\) is relatively compact. Thus it suffices to establish
These two estimates follow by Assumption 2.14 (\(\Lambda 4\)) and (3.18) and (3.19).
The continuity estimate of Assumption 2.14 (\(\Lambda 5\)) yields that
This means that there exists a subsequence, again denoted by \(\alpha \), such that
Thus, we can estimate (3.24) by (3.27) and (3.26). This implies that (3.22) holds for the chosen subsequences \(\alpha \), and that for these the collection \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\) is fundamental for \(\Lambda \) with respect to \(\Psi \), establishing Step 3.
\(\underline{Proof of Step 4 }\):
For the subsequences constructed in Step 3, we have by (3.22) that
As established in Step 1, following Lemma 3.5, for each \(\varepsilon > 0\) the set \(\{(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\}\) is relatively compact, where \(\alpha \) varies over the subsequences selected in Step 3. In addition, for each \(\varepsilon > 0\) there exists, along a further subsequence, a point \(z \in E\) such that \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha }) \rightarrow (z,z)\). It follows by (3.28) and Assumption 2.15 (\({\mathcal {I}}4\)) that also
With the bounds (3.28) and (3.29), the estimate (\({\mathcal {I}}5\)) is satisfied for the subsequences \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\). \(\square \)
4 Existence of viscosity solutions
In this section, we will prove Theorem 2.8. In other words, we show that for \(h\in C_b(E)\) and \(\lambda >0\), the function \(R(\lambda )h\) given by
is indeed a viscosity solution to \(f - \lambda {\mathbf {H}}f = h\). To do so, we will use the methods of Chapter 8 of [19]. For this strategy, one needs to check three properties of \(R(\lambda )\):
-
(a)
For all \((f,g) \in {\mathbf {H}}\), we have \(f = R(\lambda )(f - \lambda g)\).
-
(b)
The operator \(R(\lambda )\) is a pseudo-resolvent: for all \(h \in C_b(E)\) and \(0< \alpha < \beta \) we have
$$\begin{aligned} R(\beta )h = R(\alpha ) \left( R(\beta )h - \alpha \frac{R(\beta )h - h}{\beta } \right) . \end{aligned}$$ -
(c)
The operator \(R(\lambda )\) is contractive.
Thus, if \(R(\lambda )\) serves as a classical left-inverse to \({\mathbb {1}}- \lambda {\mathbf {H}}\) and is also a pseudo-resolvent, then it is a viscosity right-inverse of \(({\mathbb {1}}- \lambda {\mathbf {H}})\). For a second proof of this statement, outside of the control theory context, see Proposition 3.4 of [24].
Establishing (c) is straightforward. The proofs of (a) and (b) stem from two main properties of exponential random variables. Let \(\tau _\lambda \) be the measure on \({\mathbb {R}}^+\) corresponding to the exponential random variable with mean \(\lambda \).
-
(a) is related to integration by parts: for bounded measurable functions z on \({\mathbb {R}}^+\), we have
$$\begin{aligned} \lambda \int _0^\infty z(t) \, \tau _\lambda ( {\mathrm {d}}t) = \int _0^\infty \int _0^t z(s) \, {\mathrm {d}}s \, \tau _\lambda ({\mathrm {d}}t). \end{aligned}$$ -
(b) is related to a more involved integral property of exponential random variables. For \(0< \alpha < \beta \), we have
$$\begin{aligned} \int _0^\infty z(s) \, \tau _\beta ({\mathrm {d}}s)&= \frac{\alpha }{\beta } \int _0^\infty z(s) \, \tau _\alpha ({\mathrm {d}}s) \\&\quad + \left( 1 - \frac{\alpha }{\beta }\right) \int _0^\infty \int _0^\infty z(s+u) \, \tau _\beta ({\mathrm {d}}u) \, \tau _\alpha ({\mathrm {d}}s). \end{aligned}$$
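Both identities can be verified numerically. The following is a minimal sketch for (a) in pure Python, taking \(\tau _\lambda \) to be the exponential law with density \(t \mapsto \lambda ^{-1} e^{-t/\lambda }\) (mean \(\lambda \), the convention under which (a) holds as stated); the bounded test function z below is an arbitrary illustrative choice.

```python
import math

# Check of the integration-by-parts identity in (a):
#   lam * int z dtau_lam  ==  int (int_0^t z(s) ds) tau_lam(dt),
# with tau_lam the exponential law of density t -> exp(-t/lam)/lam.

def check_identity(lam, z, T=60.0, n=200_000):
    """Approximate both sides of the identity by the trapezoid rule on [0, T]."""
    dt = T / n
    lhs = rhs = 0.0
    Z = 0.0                               # running integral Z(t) = int_0^t z(s) ds
    z_prev = z(0.0)
    dens_prev = 1.0 / lam                 # density of tau_lam at t = 0
    f_prev = lam * z_prev * dens_prev     # integrand of the left-hand side
    g_prev = Z * dens_prev                # integrand of the right-hand side
    for i in range(1, n + 1):
        t = i * dt
        dens = math.exp(-t / lam) / lam
        z_cur = z(t)
        Z += 0.5 * (z_prev + z_cur) * dt  # trapezoid update of Z(t)
        f_cur = lam * z_cur * dens
        g_cur = Z * dens
        lhs += 0.5 * (f_prev + f_cur) * dt
        rhs += 0.5 * (g_prev + g_cur) * dt
        z_prev, f_prev, g_prev = z_cur, f_cur, g_cur
    return lhs, rhs

lhs, rhs = check_identity(2.0, lambda t: math.exp(-t))
# for z(t) = e^{-t}, both sides equal lam / (lam + 1) = 2/3 analytically
print(lhs, rhs)
```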
Establishing (a) and (b) then reduces to a careful analysis of optimizers in the definition of \(R(\lambda )\), and of concatenations and splittings thereof. This was carried out in Chapter 8 of [19] on the basis of three assumptions, namely [19, Assumptions 8.9, 8.10 and 8.11]. We verify these below.
Verification of Conditions 8.9, 8.10 and 8.11
In the notation of [19], we use \(U = {\mathbb {R}}^d\), \(\Gamma = E \times U\), one operator \({\mathbf {H}}= {\mathbf {H}}_\dagger = {\mathbf {H}}_\ddagger \) and \(Af(x,u) = \langle \nabla f(x),u\rangle \) for \(f \in {\mathcal {D}}({\mathbf {H}}) = C_{cc}^\infty (E)\).
Regarding Condition 8.9, by continuity and convexity of \({\mathcal {H}}\) obtained in Propositions B.1 and B.3, parts 8.9.1, 8.9.2, 8.9.3 and 8.9.5 can be proven e.g. as in the proof of [19, Lemma 10.21] for \(\psi = 1\). Part 8.9.4 is a consequence of the existence of a containment function, and follows as shown in the proof of Theorem A.17 of [7]. Since we use the argument further below, we briefly recall it here. We need to show that for any compact set \(K \subseteq E\), any finite time \(T > 0\) and finite bound \(M \ge 0\), there exists a compact set \(K' = K'(K,T,M) \subseteq E\) such that for any absolutely continuous path \(\gamma :[0,T] \rightarrow E\) with \(\gamma (0) \in K\), if
then \(\gamma (t) \in K'\) for any \(0\le t \le T\).
For \(K\subseteq E\), \(T>0\), \(M\ge 0\) and \(\gamma \) as above, this follows by noting that
for any \(0 \le \tau \le T\), so that the compact set \(K' := \{z \in E \,:\, \Upsilon (z) \le C\}\) satisfies the claim.
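In sketch form the estimate reads as follows, assuming (as in the framework of [19]) that the Lagrangian \({\mathcal {L}}\) is the Legendre dual of \({\mathcal {H}}\), so that \(\langle p,v \rangle \le {\mathcal {L}}(x,v) + {\mathcal {H}}(x,p)\), that (4.1) bounds \(\int _0^T {\mathcal {L}}(\gamma (s),{\dot{\gamma }}(s)) \, {\mathrm {d}}s\) by M, and using \({\mathcal {H}}(x,\nabla \Upsilon (x)) \le C_\Upsilon \):
$$\begin{aligned} \Upsilon (\gamma (\tau ))&= \Upsilon (\gamma (0)) + \int _0^\tau \langle \nabla \Upsilon (\gamma (s)), {\dot{\gamma }}(s)\rangle \, {\mathrm {d}}s \\&\le \sup _K \Upsilon + \int _0^\tau \left[ {\mathcal {L}}(\gamma (s),{\dot{\gamma }}(s)) + {\mathcal {H}}(\gamma (s),\nabla \Upsilon (\gamma (s))) \right] {\mathrm {d}}s \le \sup _K \Upsilon + M + T C_\Upsilon , \end{aligned}$$
consistent with the choice of the constant C in the definition of \(K'\) above.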
We proceed with the verification of Conditions 8.10 and 8.11 of [19]. By Proposition B.1, we have \({\mathcal {H}}(x,0) = 0\), and hence applying \({\mathbf {H}}\) to the constant function \({\mathbb {1}}\) gives \({\mathbf {H}}{\mathbb {1}}= 0\). Thus, Condition 8.10 is implied by Condition 8.11 (see Remark 8.12 (e) in [19]).
We establish that Condition 8.11 is satisfied: for any function \(f\in {\mathcal {D}}({\mathbf {H}}) = C_{cc}^\infty (E)\) and \(x_0 \in E\), there exists an absolutely continuous path \(x:[0,\infty ) \rightarrow E\) such that \(x(0) = x_0\) and for any \(t \ge 0\),
To do so, we solve the differential inclusion
where the subdifferential of \({\mathcal {H}}\) was defined in (2.9) on page 10.
Since the addition of a constant to f does not change the gradient, we may assume without loss of generality that f has compact support. A general method to establish existence of solutions to differential inclusions \({\dot{x}} \in F(x)\) is given by Lemma 5.1 of Deimling [11]. We have included this result as Lemma D.5, together with the corresponding preliminary definitions, in “Appendix D”. We use this result for \(F(x) := \partial _p {\mathcal {H}}(x,\nabla f(x))\). To apply Lemma D.5, we need to verify that:
-
(F1)
F is upper hemi-continuous and F(x) is non-empty, closed, and convex for all \(x \in E\).
-
(F2)
\(\Vert F(x)\Vert \le c(1 + |x|)\) on E, for some \(c > 0\).
-
(F3)
\(F(x) \cap T_E(x) \ne \emptyset \) for all \(x \in E\). (For the definition of \(T_E\), see Definition 2.16 on page 10).
Part (F1) follows from the properties of subdifferential sets of convex and continuous functionals: \({\mathcal {H}}\) is continuous in (x, p) and convex in p by Proposition B.1. Part (F3) is a consequence of Lemma 4.1, which yields that \(F(x)\subseteq T_E(x)\). Part (F2) is in general not satisfied. To circumvent this problem, we use properties of \({\mathcal {H}}\) to establish a priori bounds on the range of solutions.
Step 1: Let \(T > 0\), and assume that x(t) solves (4.4). We establish that there is some M such that (4.1) is satisfied. By (4.4) we obtain for all \(p \in {\mathbb {R}}^d\),
and as a consequence
Since f has compact support and \({\mathcal {H}}(y,0) = 0\) for any \(y \in E\), we estimate
By continuity of \({\mathcal {H}}\) the field F is bounded on compact sets, so the first term can be bounded by
Therefore, for any \(T>0\), we obtain that the integral over the Lagrangian is bounded from above by \(M = M(T)\), with
From the first part of the proof (see the argument concluding after (4.2)), we find that the solution x(t) remains in the compact set
for all \(t \in [0,T]\).
Step 2: We prove that there exists a solution x(t) of (4.4) on [0, T].
Using F, we define a new multi-valued vector field \(F'(z)\) that equals \(F(z) = \partial _p {\mathcal {H}}(z,\nabla f(z))\) inside \(K'\), but equals \(\{0\}\) outside a neighborhood of \(K'\). This can e.g. be achieved by multiplying F with a smooth cut-off function \(g_{K'} : E \rightarrow [0,1]\) that is equal to one on \(K'\) and zero outside of a neighborhood of \(K'\).
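One standard construction of such a cut-off is the smooth transition function (a sketch; in general one replaces the distance function \(d(z) := {\text {dist}}(z,K')\) by a smooth approximation):
$$\begin{aligned} \psi (t) := {\left\{ \begin{array}{ll} e^{-1/t}, &{} t> 0, \\ 0, &{} t \le 0, \end{array}\right. } \qquad g_{K'}(z) := \frac{\psi (1 - d(z))}{\psi (1 - d(z)) + \psi (d(z))}, \end{aligned}$$
which equals one on \(K'\) (where \(d = 0\)), vanishes as soon as \(d(z) \ge 1\), and has a denominator that never vanishes.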
The field \(F'\) satisfies (F1), (F2) and (F3) from above, and hence there exists an absolutely continuous path \(y : [0,\infty ) \rightarrow E\) such that \(y(0) = x_0\) and for almost every \(t \ge 0\),
By the estimate established in Step 1, we have \(\Upsilon (y(t)) \le C\) for any \(0 \le t \le T\), so by the argument shown above in (4.2) the solution y stays in \(K'\) up to time T. Since \(F' = F\) on \(K'\), setting \(x = y|_{[0,T]}\) yields a solution x(t) of (4.4) on the time interval [0, T]. \(\square \)
Lemma 4.1
Let Assumption 2.17 be satisfied. Then the map \({\mathcal {H}}: E \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) defined in (2.2) is such that \(\partial _p {\mathcal {H}}(x,p) \subseteq T_E(x)\) for all p and \(x \in E\).
Proof
Fix \(x \in E\) and \(p_0 \in {\mathbb {R}}^d\). We aim to prove that \(\partial _p {\mathcal {H}}(x,p_0) \subseteq T_E(x)\). Recall the definition of \({\mathcal {H}}\):
Let \(\Omega (p) \subseteq \Theta \) be the set of controls that optimize \({\mathcal {H}}\): thus if \(\theta \in \Omega (p)\) then \({\mathcal {H}}(x,p) = \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta )\).
The result will follow from the following claim,
where ch denotes the convex hull. Having established this claim, the result follows from Assumption 2.17 and the fact that \(T_E(x)\) is a convex set by Lemma D.4.
We start with the proof of (4.7). For this we will use [22, Theorem D.4.4.2]. To study the subdifferential \(\partial _{p} {\mathcal {H}}(x,p_0)\), it suffices to restrict the domain of the map \(p \mapsto {\mathcal {H}}(x,p)\) to the closed ball \(B_1(p_0)\) around \(p_0\) with radius 1.
To apply [22, Theorem D.4.4.2] for this restricted map, first recall that \(\Lambda \) is continuous by Assumption 2.14 (\(\Lambda 1\)) and that \({\mathcal {I}}\) is lower semi-continuous by Assumption 2.15 (\({\mathcal {I}}1\)). Secondly, we need to find a compact set \(\Omega \subseteq \Theta \) such that we can restrict the supremum (for any \(p \in B_1(p_0))\) in (4.6) to \(\Omega \):
In particular, we show that we can take for \(\Omega \) a sublevel set of \({\mathcal {I}}(x,\cdot )\), which is compact by Assumption 2.15 (\({\mathcal {I}}3\)).
Let \(\theta _{x}^0\) be the control such that \({\mathcal {I}}(x,\theta _{x}^0) = 0\), which exists due to Assumption 2.15 (\({\mathcal {I}}2\)). Let \(M^*\) be such that (with the constants \(M,C_1,C_2\) as in Assumption 2.14 (\(\Lambda 4\)))
Note that \(M^*\) is finite as \(p \mapsto \Lambda (x,p,\theta _{x}^0)\) is continuous on the closed unit ball \(B_1(p_0)\). Then we find, due to Assumption 2.14 (\(\Lambda 4\)), that if \(\theta \) satisfies \({\mathcal {I}}(x,\theta ) > M^*\), then for any \(p\in B_1(p_0)\) we have
We obtain that if \(p \in B_1(p_0)\), then we can restrict our supremum in (4.6) to the compact set \(\Omega := \Theta _{\{x\},M^*}\), see Assumption 2.15 (\({\mathcal {I}}3\)).
Thus, it follows by [22, Theorem D.4.4.2] that
where ch denotes the convex hull. Now (4.7) follows by noting that \({\mathcal {I}}(x,\theta )\) does not depend on p. \(\square \)
5 Examples of Hamiltonians
In this section we specify our general results to two concrete examples of Hamiltonians of the type
The purpose of this section is to showcase that the method introduced in this paper is versatile enough to capture interesting examples that could not be treated before.
First, we consider in Proposition 5.1 Hamiltonians that one encounters in the large deviation analysis of two-scale systems as studied in [6] and [27] when considering a diffusion process coupled to a fast jump process. Second, we consider in Proposition 5.7 the example treated in our introduction that arises from models of mean-field interacting particles that are coupled to fast external variables. This example will be further analyzed in [26].
Proposition 5.1
(Diffusion coupled to jumps) Let \(E={\mathbb {R}}^d\) and \(F=\{1,\dots ,J\}\) be a finite set. Suppose the following:
-
(i)
The set of control variables is \(\Theta :={\mathcal {P}}(\{1,\dots ,J\})\), that is, the probability measures on the finite set F.
-
(ii)
The function \(\Lambda \) is given by
$$\begin{aligned} \Lambda (x,p,\theta ) := \sum _{i\in F}\left[ \langle a(x,i)p,p\rangle +\langle b(x,i),p\rangle \right] \theta _i, \end{aligned}$$where \(a:E\times F\rightarrow {\mathbb {R}}^{d\times d}\) and \(b:E\times F\rightarrow {\mathbb {R}}^d\), and \(\theta _i:=\theta (\{i\})\).
-
(iii)
The cost function \({\mathcal {I}}:E\times \Theta \rightarrow [0,\infty )\) is given by
$$\begin{aligned} {\mathcal {I}}(x,\theta ) := \sup _{w\in {\mathbb {R}}^J}\sum _{ij}r(i,j,x)\theta _i \left[ 1-e^{w_j-w_i}\right] , \end{aligned}$$with non-negative rates \(r:F^2\times E\rightarrow [0,\infty )\).
Suppose that the cost function \({\mathcal {I}}\) satisfies the assumptions of Proposition 5.9 below and the function \(\Lambda \) satisfies the assumptions of Proposition 5.11 below. Then Theorems 2.6 and 2.8 apply to the Hamiltonian (5.1).
Proof
To apply Theorems 2.6 and 2.8, we need to verify Assumptions 2.14, 2.15 and 2.17. Assumption 2.14 follows from Proposition 5.11, Assumption 2.15 follows from Proposition 5.9 and Assumption 2.17 is satisfied as \(E = {\mathbb {R}}^d\).
\(\square \)
Remark 5.2
We assume uniform ellipticity of a, which we use to establish (\(\Lambda 4\)). As a consequence, our comparison principle falls slightly short of what is needed to prove a large deviation principle as general as that of [5]. In contrast, we do not need a Lipschitz condition on r in terms of x.
While we believe that the conditions on a can be relaxed by performing a finer analysis of the estimates in terms of a, we do not pursue this relaxation here.
Remark 5.3
The cost function is the large deviation rate function for the occupation time measures of a jump process taking values in a finite set \(\{1,\dots ,J\}\), see e.g. [13, 14].
Remark 5.4
In the context with \(a = 0\) and \({\mathcal {I}}\) as general as in Assumption 2.15, we improve upon the results of Chapter III of [2] by allowing a more general class of functionals \({\mathcal {I}}\), which may be discontinuous, as for example in Proposition 5.7 below.
In [10] the authors consider a second order Hamilton–Jacobi–Bellman equation, with the quadratic part replaced by a second order part. They work, however, with a continuous cost functional \({\mathcal {I}}\). An extension of [10] that allows for a similar flexibility in the choice of \({\mathcal {I}}\) would therefore be of interest.
Remark 5.5
Under irreducibility conditions on the rates, as we shall assume below in Proposition 5.9, by [15] the Hamiltonian \({\mathcal {H}}(x,p)\) is the principal eigenvalue of the matrix \(A_{x,p} \in \mathrm {Mat}_{J \times J}({\mathbb {R}})\) given by
where \(x,p \in {\mathbb {R}}^d\) and \(R_x\) is the matrix
that is \((R_x)_{ii} = -\sum _{j \ne i} r(i,j,x)\) on the diagonal and \((R_x)_{ij} = r(i,j,x)\) for \(i \ne j\).
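In a two-state example, this eigenvalue characterization can be checked numerically. The sketch below assumes \(A_{x,p} = R_x + \mathrm {diag}(V)\) with \(V_i := \langle a(x,i)p,p\rangle +\langle b(x,i),p\rangle \) for some fixed (x, p); the rates and the values \(V_i\) are illustrative, and for \(J = 2\) the cost functional of Proposition 5.1 reduces, by optimizing over w, to the closed form \({\mathcal {I}}(\theta ) = \big (\sqrt{\theta _1 r_{12}} - \sqrt{\theta _2 r_{21}}\big )^2\).

```python
import math

# Two-state check (J = 2, illustrative values): the variational Hamiltonian
#   H = sup_theta [ V_1 theta_1 + V_2 theta_2 - I(theta) ]
# with I(theta) = (sqrt(theta_1 r12) - sqrt(theta_2 r21))^2 should equal the
# principal eigenvalue of R + diag(V), with R the rate matrix.

r12, r21 = 1.0, 2.0      # irreducible jump rates
V1, V2 = 0.5, -0.3       # V_i = <a(x,i)p, p> + <b(x,i), p> for some fixed (x, p)

# principal (largest) eigenvalue of the 2x2 matrix [[V1 - r12, r12], [r21, V2 - r21]]
a, d = V1 - r12, V2 - r21
ev = 0.5 * ((a + d) + math.sqrt((a - d) ** 2 + 4.0 * r12 * r21))

def value(t1):
    """Objective theta |-> <V, theta> - I(theta) for theta = (t1, 1 - t1)."""
    t2 = 1.0 - t1
    cost = (math.sqrt(t1 * r12) - math.sqrt(t2 * r21)) ** 2
    return V1 * t1 + V2 * t2 - cost

# maximize over theta_1 on a fine grid
best = max(value(i / 100_000.0) for i in range(100_001))
print(ev, best)  # the two numbers agree up to grid resolution
```

The agreement of the two printed numbers is an instance of the Donsker–Varadhan variational characterization of the principal eigenvalue.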
Next we consider Hamiltonians arising in the context of weakly interacting jump processes on a collection of states \(\{1,\dots ,q\}\) as described in our introduction. We analyze and motivate this example in more detail in our companion paper [26]. We give the terminology as needed for the results in this paper.
The empirical measure of the interacting processes takes its values in the set of measures \({\mathcal {P}}(\{1,\dots ,q\})\). The dynamics arises from mass moving over the bonds \((a,b) \in \Gamma = \left\{ (i,j) \in \{1,\dots ,q\}^2 \, | \, i \ne j\right\} \). As the number of processes is sent to infinity, a limiting result holds for the total mass moving over the bonds.
We will denote by \(v(a,b,\mu ,\theta )\) the total mass that moves from a to b if the empirical measure equals \(\mu \) and the control is given by \(\theta \). We will make the following assumption on the kernel v.
Definition 5.6
(Proper kernel) Let \(v : \Gamma \times {\mathcal {P}}(\{1,\dots ,q\}) \times \Theta \rightarrow {\mathbb {R}}^+\). We say that v is a proper kernel if v is continuous and if for each \((a,b) \in \Gamma \), the map \((\mu ,\theta ) \mapsto v(a,b,\mu ,\theta )\) is either identically equal to zero or satisfies the following two properties:
-
(a)
\(v(a,b,\mu ,\theta ) = 0\) if \(\mu (a) = 0\) and \(v(a,b,\mu ,\theta ) > 0\) for all \(\mu \) such that \(\mu (a) > 0\).
-
(b)
There exists a decomposition \(v(a,b,\mu ,\theta ) = v_{\dagger }(a,b,\mu (a)) v_{\ddagger }(a,b,\mu ,\theta )\) such that \(v_{\dagger }\) is increasing in the third coordinate and such that \(v_{\ddagger }(a,b,\cdot ,\cdot )\) is continuous and satisfies \(v_{\ddagger }(a,b,\mu ,\theta ) > 0\).
A typical example of a proper kernel is given by
with \(r > 0\) continuous and \(V \in C^1_b({\mathcal {P}}(\{1,\dots ,q\}))\).
Proposition 5.7
(Mean-field coupled to diffusion) Let the space E be given by the embedding \(E:={\mathcal {P}}(\{1,\dots ,q\})\times [0,\infty )^\Gamma \subseteq {\mathbb {R}}^d\) and let F be a smooth compact Riemannian manifold without boundary. Suppose the following.
-
(i)
The set of control variables \(\Theta \) equals \({\mathcal {P}}(F)\).
-
(ii)
The function \(\Lambda \) is given by
$$\begin{aligned} \Lambda ((\mu ,w),p,\theta ) = \sum _{(a,b) \in \Gamma } v(a,b,\mu ,\theta )\left[ \exp \left\{ p_b - p _a + p_{(a,b)} \right\} - 1 \right] \end{aligned}$$with a proper kernel v in the sense of Definition 5.6.
-
(iii)
The cost function \({\mathcal {I}}:E\times \Theta \rightarrow [0,\infty ]\) is given by
$$\begin{aligned} {\mathcal {I}}(x,\theta ) := \sup _{\begin{array}{c} u\in {\mathcal {D}}(L_x)\\ \inf u > 0 \end{array}}\left[ -\int _F \frac{L_x u}{u}\,d\theta \right] , \end{aligned}$$where \(L_x\) is a second-order elliptic operator locally of the form
$$\begin{aligned} L_x = \frac{1}{2}\nabla \cdot \left( a_x \nabla \right) + b_x\cdot \nabla , \end{aligned}$$on the domain \({\mathcal {D}}(L_x):=C^2(F)\), with positive-definite matrix \(a_x\) and co-vectors \(b_x\).
Suppose that the cost function \({\mathcal {I}}\) satisfies the assumptions of Proposition 5.10 and the function \(\Lambda \) satisfies the assumptions of Proposition 5.13. Then Theorems 2.6 and 2.8 apply to the Hamiltonian (5.1).
Proof
To apply Theorems 2.6 and 2.8, we need to verify Assumptions 2.14, 2.15 and 2.17. Assumption 2.14 follows from Proposition 5.13 and Assumption 2.15 follows from Proposition 5.10. We verify Assumption 2.17 in Proposition 5.19. \(\square \)
Remark 5.8
The cost function stems from occupation-time large deviations of a drift-diffusion process on a compact manifold, see e.g. [15, 32]. We expect Proposition 5.7 to extend also to non-compact spaces F, but we feel this technical extension is better suited for a separate paper.
5.1 Verifying assumptions for cost functions \({\mathcal {I}}\)
Here we verify Assumption 2.15 for two types of cost functions \({\mathcal {I}}(x,\theta )\) appearing in the examples of Propositions 5.1 and 5.7.
Proposition 5.9
(Donsker–Varadhan functional for jump processes) Consider a finite set \(F = \{1,\dots ,J\}\) and let \(\Theta := {\mathcal {P}}(\{1,\dots ,J\})\) be the set of probability measures on F. For \(x\in E\), let \(L_x : C_b(F) \rightarrow C_b(F)\) be the operator given by
Suppose that the rates \(r:\{1,\dots ,J\}^2\times E \rightarrow {\mathbb {R}}^+\) are continuous as a function on E and moreover satisfy the following:
-
(i)
For any \(x\in E\), the matrix R(x) with entries \(R(x)_{ij} := r(i,j,x)\) for \(i\ne j\) and \(R(x)_{ii} = -\sum _{j\ne i}r(i,j,x)\) is irreducible.
-
(ii)
For each pair (i, j), we either have \(r(i,j,\cdot )\equiv 0\) or for each compact set \(K\subseteq E\), it holds that
$$\begin{aligned} r_{K}(i,j) := \inf _{x\in K}r(i,j,x) > 0. \end{aligned}$$
Then the Donsker–Varadhan functional \({\mathcal {I}}:E\times \Theta \rightarrow {\mathbb {R}}^+\) defined by
satisfies Assumption 2.15.
Proof
- \(\underline{({\mathcal {I}}1)}\):
-
For a fixed vector \(w\in {\mathbb {R}}^J\), the map
$$\begin{aligned} (x,\theta )\mapsto \sum _{ij}r(i,j,x)\theta _i \left[ 1-e^{w_j-w_i}\right] \end{aligned}$$is continuous on \(E\times \Theta \). Hence \({\mathcal {I}}(x,\theta )\) is lower semicontinuous as the supremum over continuous functions.
\(\underline{({\mathcal {I}}2)}\):
Let \(x\in E\). First note that for all \(\theta \), the choice \(w = 0\) implies that \({\mathcal {I}}(x,\theta ) \ge 0\). By the irreducibility assumption on the rates r(i, j, x), there exists a unique measure \(\theta _{x}^0\in \Theta \) such that for any \(f:\{1,\dots ,J\}\rightarrow {\mathbb {R}}\),
$$\begin{aligned} \sum _i L_x f(i) \theta _{x}^0(i)=0. \end{aligned}$$(5.2)We establish that \({\mathcal {I}}(x,\theta _{x}^0) = 0\). Let \(w \in {\mathbb {R}}^J\). By the elementary estimate
$$\begin{aligned} \left( 1-e^{b - a}\right) \le -(b-a) \quad \text { for all } \; a,b \in {\mathbb {R}}, \end{aligned}$$we obtain that
$$\begin{aligned} \sum _{ij}r(i,j,x) \theta _{x}^0(i) \left( 1-e^{w_j - w_i}\right)&\le \sum _{ij}r(i,j,x) \theta _{x}^0(i) \left( w_j - w_i \right) \\&= \sum _i (L_x w)(i) \theta _{x}^0(i) = 0 \end{aligned}$$by (5.2). Since \({\mathcal {I}} \ge 0\), this implies \({\mathcal {I}}(x,\theta _{x}^0) = 0\).
\(\underline{({\mathcal {I}}3)}\):
Any closed subset of \(\Theta \) is compact.
\(\underline{({\mathcal {I}}4)}\):
Let \(x_n\rightarrow x\) in E. Then there is a compact set \(K \subseteq E\) that contains the \(x_n\) and x in its interior. For any \(y\in K\),
$$\begin{aligned} {\mathcal {I}}(y,\theta ) \le \sum _{ij, i \ne j} r(i,j,y) \theta _i \le \sum _{ij, i\ne j} r(i,j,y) \le \sum _{ij, i \ne j} {\bar{r}}_{ij}, \quad {\bar{r}}_{ij} := \sup _{y \in K} r(i,j,y). \end{aligned}$$Hence \({\mathcal {I}}\) is uniformly bounded on \(K\times \Theta \), and (\({\mathcal {I}}4\)) follows with \(U_x\) the interior of K.
\(\underline{({\mathcal {I}}5)}\):
Let d be some metric that metrizes the topology of E. We will prove that for any compact set \(K\subseteq E\) and \(\varepsilon > 0\) there is some \(\delta > 0\) such that for all \(x,y \in K\) with \(d(x,y) \le \delta \) and for all \(\theta \in {\mathcal {P}}(F)\), we have
$$\begin{aligned} |{\mathcal {I}}(x,\theta ) - {\mathcal {I}}(y,\theta )| \le \varepsilon . \end{aligned}$$(5.3)Let \(x,y \in K\). By continuity of the rates the \({\mathcal {I}}(x,\cdot )\) are uniformly bounded for \(x \in K\):
$$\begin{aligned} 0 \le {\mathcal {I}}(x,\theta ) \le \sum _{ij, i \ne j} r(i,j,x) \theta _i \le \sum _{ij, i\ne j} r(i,j,x) \le \sum _{ij, i \ne j} {\bar{r}}_{ij}, \quad {\bar{r}}_{ij} := \sup _{x \in K} r(i,j,x). \end{aligned}$$For any \(n \in {\mathbb {N}}\), there exists \(w^n \in {\mathbb {R}}^J\) such that
$$\begin{aligned} 0 \le {\mathcal {I}}(x,\theta ) \le \sum _{ij, i \ne j} r(i,j,x) \theta _i (1 - e^{w^n_j - w^n_i}) + \frac{1}{n}. \end{aligned}$$
By reorganizing, we find for all bonds (a, b) the bound
where \(r_{K,a,b}:=\inf _{x\in K}r(a,b,x)\). Then, evaluating the supremum defining \({\mathcal {I}}(y,\theta )\) at the same vector \(w^n\),
We take \(n \rightarrow \infty \) and use that the rates \(x \mapsto r(a,b,x)\) are continuous, and hence uniformly continuous on compact sets, to obtain (5.3). \(\square \)
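The structure of this proof is easy to illustrate numerically. The following sketch (our own toy instance, not from the paper) evaluates the Donsker–Varadhan functional of a two-state chain with hypothetical rates \(r_{12}, r_{21}\) by a grid search over \(w = (0,t)\), and checks that it vanishes at the stationary measure \(\theta ^0_x\) of (\({\mathcal {I}}2\)) and is positive away from it.

```python
import math

r12, r21 = 2.0, 3.0  # hypothetical jump rates of a 2-state chain

def dv_functional(theta):
    """I(theta) = sup_w sum_{i != j} r_ij * theta_i * (1 - exp(w_j - w_i)),
    approximated by a grid search over w = (0, t) with t in [-5, 5]."""
    def objective(t):
        return (r12 * theta[0] * (1 - math.exp(t))
                + r21 * theta[1] * (1 - math.exp(-t)))
    return max(objective(k * 0.001) for k in range(-5000, 5001))

# Stationary measure of the 2-state chain: theta0[0]*r12 = theta0[1]*r21.
theta0 = (r21 / (r12 + r21), r12 / (r12 + r21))

assert dv_functional(theta0) < 1e-9   # (I2): the functional vanishes at theta0
assert dv_functional((0.9, 0.1)) > 0  # and is positive elsewhere
```

The grid search stands in for the exact supremum; for two states the objective is concave in t, so the approximation is harmless.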
Proposition 5.10
(Donsker–Varadhan functional for drift-diffusions) Let F be a smooth compact Riemannian manifold without boundary and set \(\Theta :={\mathcal {P}}(F)\), the set of probability measures on F. For \(x\in E\), let \(L_x : C^2(F) \subseteq C_b(F) \rightarrow C_b(F)\) be the second-order elliptic operator that in local coordinates is given by
where \(a_x\) is a positive definite matrix and \(b_x\) is a vector field having smooth entries \(a_x^{ij}\) and \(b_x^i\) on F. Suppose that for all i, j the maps
are continuous as functions from E to \(C_b(F)\), where we equip \(C_b(F)\) with the supremum norm. Then the functional \({\mathcal {I}}:E\times \Theta \rightarrow [0,\infty ]\) defined by
satisfies Assumption 2.15.
Proof
\(\underline{({\mathcal {I}}1)}\):
For any fixed function \(u\in {\mathcal {D}}(L_x)\) such that \(u > 0\), the function \((-L_xu/u)\) is continuous on F. Note that by definition of \({\mathcal {I}}\) it suffices to only consider \(u > 0\). Thus, for any such fixed \(u > 0\) it follows by (5.4) and compactness of F that
$$\begin{aligned} (x,\theta )\mapsto -\int _F \frac{L_xu}{u}\,d\theta \end{aligned}$$is continuous on \(E\times \Theta \). As a consequence \({\mathcal {I}}(x,\theta )\) is lower semicontinuous as the supremum over continuous functions.
\(\underline{({\mathcal {I}}2)}\):
Let \(x\in E\). The stationary measure \(\theta _{x}^0 \in \Theta \) satisfying
$$\begin{aligned} \int _F L_xg(z)\,\theta _{x}^0({\mathrm {d}}z) = 0\quad \text {for all}\;g\in {\mathcal {D}}(L_x) \end{aligned}$$(5.5)is the minimizer of \({\mathcal {I}}(x,\cdot )\), that is \({\mathcal {I}}(x,\theta _{x}^0) = 0\). This follows by considering the Hille-Yosida approximation \(L_x^\varepsilon \) of \(L_x\) and using the same argument (using \(w = \log u\)) as in Proposition 5.9 for these approximations. For any \(u>0\) and \(\varepsilon >0\),
$$\begin{aligned} -\int _F \frac{L_xu}{u}\,d\theta&= -\int _F \frac{L^\varepsilon _xu}{u}\,d\theta + \int _F \frac{(L^\varepsilon _x-L_x)u}{u}\,d\theta \\&\le -\int _F \frac{L^\varepsilon _xu}{u}\,d\theta + \frac{1}{\inf _F u} \Vert (L_x^\varepsilon -L_x)u\Vert _F\\&\le -\int _F L^\varepsilon _x \log (u)\,d\theta + o(1). \end{aligned}$$Sending \(\varepsilon \rightarrow 0\) and then using (5.5) gives (\({\mathcal {I}}2\)).
\(\underline{({\mathcal {I}}3)}\):
Since \(\Theta = {\mathcal {P}}(F)\) is compact, any closed subset of \(\Theta \) is compact. Hence any union of sub-level sets of \({\mathcal {I}}(x,\cdot )\) is relatively compact in \(\Theta \).
\(\underline{({\mathcal {I}}4)}\):
Fix \(x \in E\) and \(M \ge 0\). Let \(\theta \in \Theta _{\{x\},M}\). As \({\mathcal {I}}(x,\theta ) \le M\), we find by [31] that the density \(\frac{{\mathrm {d}}\theta }{{\mathrm {d}}z}\) exists, where \({\mathrm {d}}z\) denotes the Riemannian volume measure.
In addition, it follows from [31, Theorem 1.4] that there exist constants \(c_1(y),c_2(y),c_3(y),c_4(y)\), with \(c_1(y)\) and \(c_3(y)\) positive, depending continuously on \(a_y, a_y^{-1},b_y\) (see the derivation of [30, Eq. (2.18), (2.19)]), but not on \(\theta \), such that
$$\begin{aligned} c_1(y)\int _F|\nabla g_\theta |^2\,dz - c_2(y) \le {\mathcal {I}}(y,\theta ) \le c_3(y) \int _F|\nabla g_\theta |^2\,dz + c_4(y), \end{aligned}$$(5.6)where \(g_\theta = ({\mathrm {d}}\theta /{\mathrm {d}}z)^{1/2}\).
As the dependence on y is continuous, we can find an open neighborhood \(U \subseteq E\) of x and constants \(c_1,c_2,c_3,c_4\), with \(c_1,c_3\) positive, that do not depend on \(\theta \), such that for any \(y \in U\):
$$\begin{aligned} c_1\int _F|\nabla g_\theta |^2\,dz - c_2 \le {\mathcal {I}}(y,\theta ) \le c_3 \int _F|\nabla g_\theta |^2\,dz + c_4. \end{aligned}$$(5.7)From (5.7), (\({\mathcal {I}}4\)) immediately follows.
\(\underline{({\mathcal {I}}5)}\):
Since the coefficients \(a_x\) and \(b_x\) of the operator \(L_x\) depend continuously on x, assumption (\({\mathcal {I}}5\)) follows from Theorem 2 of [32].\(\square \)
5.2 Verifying assumptions for functions \(\Lambda \)
In this section we verify Assumption 2.14 for two types of functions \(\Lambda (x,p,\theta )\) appearing in the examples of Propositions 5.1 and 5.7.
Proposition 5.11
(Quadratic function \(\Lambda \)) Let \(E={\mathbb {R}}^d\) and \(\Theta ={\mathcal {P}}(F)\) for some compact Polish space F. Suppose that the function \(\Lambda :E\times {\mathbb {R}}^d\times \Theta \rightarrow {\mathbb {R}}\) is given by
where \(a:E\times F\rightarrow {\mathbb {R}}^{d\times d}\) and \(b:E\times F\rightarrow {\mathbb {R}}^d\) are continuous. Suppose that for every compact set \(K \subseteq {\mathbb {R}}^d\),
Furthermore, there exists a constant \(L>0\) such that for all \(x,y\in E\) and \(z\in F\),
and suppose that the functions b are one-sided Lipschitz. Then Assumption 2.14 holds.
Remark 5.12
The above proposition is slightly more general than what we consider in Proposition 5.1, as there we assume that \(F = \{1,\dots ,J\}\) is a finite set.
Proof
\(\underline{(\Lambda 1)}\):
Let \((x,p)\in E\times {\mathbb {R}}^d\). Continuity of \(\Lambda \) is a consequence of the fact that
$$\begin{aligned} \Lambda (x,p,\theta ) = \int _F V(x,p,z)\,d\theta (z) \end{aligned}$$is the pairing of a continuous and bounded function \(V(x,p,\cdot )\) with the measure \(\theta \in {\mathcal {P}}(F)\).
\(\underline{(\Lambda 2)}\):
Let \(x\in E\) and \(\theta \in {\mathcal {P}}(F)\). Convexity of \(p\mapsto \Lambda (x,p,\theta )\) follows since a(x, z) is positive definite by assumption, and evidently \(\Lambda (x,0,\theta ) = 0\).
\(\underline{(\Lambda 3)}\):
We show that the map \(\Upsilon : E\rightarrow {\mathbb {R}}\) defined by
$$\begin{aligned} \Upsilon (x) := \frac{1}{2}\log \left( 1 + |x|^2\right) \end{aligned}$$is a containment function for \(\Lambda \). For any \(x\in E\) and \(\theta \in {\mathcal {P}}(F)\), we have
$$\begin{aligned} \Lambda (x,\nabla \Upsilon (x),\theta )&= \int _F \langle a(x,z)\nabla \Upsilon (x),\nabla \Upsilon (x)\rangle \,d\theta (z) + \int _F\langle b(x,z),\nabla \Upsilon (x)\rangle \,d\theta (z)\\&\le a_{\{x\},\text {max}} |\nabla \Upsilon (x)|^2 + b_{\{x\},\text {max}}|\nabla \Upsilon (x)|\\&\le C (1+|x|) \frac{|x|^2}{(1+|x|^2)^2} + C(1+|x|) \frac{|x|}{1+|x|^2}, \end{aligned}$$and the boundedness condition follows with the constant
$$\begin{aligned} C_\Upsilon := C \,\sup _x (1+|x|) \left[ \frac{|x|^2}{(1+|x|^2)^2} + \frac{|x|}{1+|x|^2} \right] <\infty . \end{aligned}$$
\(\underline{(\Lambda 4)}\):
Let \(K\subseteq E\) be compact. We have to show that there exist constants \(M, C_1, C_2 \ge 0\) such that for all \(x \in K\), \(p \in {\mathbb {R}}^d\) and all \(\theta _1,\theta _2 \in {\mathcal {P}}(F)\), we have
$$\begin{aligned} \Lambda (x,p,\theta _1) \le \max \left\{ M, C_1 \Lambda (x,p,\theta _2) + C_2 \right\} . \end{aligned}$$(5.8)Fix \(\theta _1,\theta _2 \in {\mathcal {P}}(F)\). We have for \(x \in K\)
$$\begin{aligned} \int \langle a(x,z)p,p\rangle d\theta _1(z) \le \frac{a_{K,max}}{a_{K,min}} \int \langle a(x,z)p,p\rangle d\theta _2(z) \end{aligned}$$In addition, as \(a_{K,min} > 0\) and \(b_{K,max} < \infty \) we have for any \(C > 0\) and sufficiently large |p| that
$$\begin{aligned} \int \langle b(x,z),p\rangle \,d\theta _1(z) - (C+1)\int \langle b(x,z),p\rangle \,d\theta _2(z) \le C \int \langle a(x,z)p,p\rangle \,d\theta _2(z) \end{aligned}$$Thus, for sufficiently large |p| (depending on C) we have
$$\begin{aligned} \Lambda (x,p,\theta _1) \le (1+C) \Lambda (x,p,\theta _2). \end{aligned}$$Fix such a \(C =: C_1\) and denote the set of ‘large’ p by S. The map \((x,p,\theta ) \mapsto \Lambda (x,p,\theta )\) is bounded on \(K \times S^c\times \Theta \). Thus, we can find a constant \(C_2\) such that (5.8) holds.
\(\underline{(\Lambda 5)}\):
By the assumption on a(x, z), the function \(\Lambda \) is uniformly coercive in the sense that for any compact set \(K\subseteq E\),
$$\begin{aligned} \inf _{x\in K, \theta \in \Theta }\Lambda (x,p,\theta ) \rightarrow \infty \quad \text { as }\; |p|\rightarrow \infty , \end{aligned}$$and the continuity estimate follows by Proposition 5.15.
\(\square \)
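The finiteness of \(C_\Upsilon \) in the verification of (\(\Lambda 3\)) can be checked with a quick numerical sketch (our own, for the one-dimensional case \(d = 1\) and with the generic constant \(C = 1\)): the bracket decays like \(1/|x|\), so the product with \(1+|x|\) stays bounded.

```python
def bound(x):
    # (1 + |x|) * [ |x|^2/(1+|x|^2)^2 + |x|/(1+|x|^2) ]: the quantity
    # whose supremum defines C_Upsilon (1d case, generic constant C = 1).
    return (1 + abs(x)) * (x * x / (1 + x * x) ** 2 + abs(x) / (1 + x * x))

# Grid search on [0, 1000); by symmetry in |x| this covers negative x too.
c_upsilon = max(bound(k * 0.01) for k in range(100000))

assert c_upsilon < 2.0                  # the supremum is finite
assert abs(bound(1000.0) - 1.0) < 0.01  # the tail levels off near 1
```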
We proceed with the example in which \(\Lambda \) depends on p through exponential functions (Proposition 5.7). Let \(q \in {\mathbb {N}}\) and let
be the set of oriented edges in \(\{1,\dots ,q\}\).
Proposition 5.13
(Exponential function \(\Lambda \)) Let \(E = {\mathcal {P}}(\{1,\dots ,q\}) \times ({\mathbb {R}}^+)^{|\Gamma |}\), embedded in \({\mathbb {R}}^d\), and let \(\Theta \) be a topological space. Suppose that \(\Lambda \) is given by
where v is a proper kernel in the sense of Definition 5.6. Suppose in addition that there is a constant \(C > 0\) such that for all \((a,b) \in \Gamma \) such that \(v(a,b, \cdot ,\cdot ) \ne 0\) we have
Then \(\Lambda \) satisfies Assumption 2.14.
Remark 5.14
Similar to the previous proposition, the assumptions on \(\Lambda \) are satisfied when \(\Theta = {\mathcal {P}}(F)\) for some Polish space F, we have \(v(a,b,\mu ,\theta ) = \mu (a) \int r(a,b,\mu ,z) \theta ({\mathrm {d}}z)\), and there are constants \(0< r_{min} \le r_{max} < \infty \) such that for all \((a,b) \in \Gamma \) such that \(\sup _{\mu ,z} r(a,b,\mu ,z) > 0\), we have
Regarding (5.9), for \((a,b) \in \Gamma \) for which \(v(a,b,\cdot ,\cdot )\) is non-trivial, we have
Proof of Proposition 5.13
\(\underline{(\Lambda 1)}\):
The function \(\Lambda \) is continuous as the sum of continuous functions.
\(\underline{(\Lambda 2)}\):
Convexity of \(\Lambda \) as a function of p follows from the fact that \(\Lambda \) is a finite sum of convex functions, and \(\Lambda (x,0,\theta )=0\) is evident.
\(\underline{(\Lambda 3)}\):
The function \(\Upsilon : E\rightarrow {\mathbb {R}}\) defined by
$$\begin{aligned} \Upsilon (\mu ,w) := \sum _{(a,b)\in \Gamma }\log \left[ 1 + w_{(a,b)}\right] \end{aligned}$$is a containment function for \(\Lambda \). For a verification, see [23].
\(\underline{(\Lambda 4)}\):
Note that
$$\begin{aligned} \Lambda ((\mu ,w),p,\theta _1)&\le \sum _{(a,b)\in \Gamma } v(a,b,\mu ,\theta _1) e^{p_{a,b} + p_b - p_a} \\&\le C \sum _{(a,b)\in \Gamma } v(a,b,\mu ,\theta _2) e^{p_{a,b} + p_b - p_a} \\&\le C \sum _{(a,b)\in \Gamma } v(a,b,\mu ,\theta _2) \left[ e^{p_{a,b} + p_b - p_a} - 1 \right] + C_2 . \end{aligned}$$Thus the estimate holds with \(M = 0\), \(C_1 = C\) and \(C_2 = C \sup _{\mu ,\theta } \sum _{(a,b)\in \Gamma } v(a,b,\mu ,\theta )\).
\(\underline{(\Lambda 5)}\):
The continuity estimate is the content of Proposition 5.18 below.
\(\square \)
5.3 Verifying the continuity estimate
With the exception of the continuity estimate, the verification of Assumption 2.14 in Sect. 5.2 is straightforward. The continuity estimate, on the other hand, is an extension of the comparison principle, and is therefore more complex. We verify it in three contexts, illustrating that it follows from essentially the same arguments as the standard comparison principle. We will do this for:
- Coercive Hamiltonians,
- One-sided Lipschitz Hamiltonians,
- Hamiltonians arising from large deviations of empirical measures.
This list is not meant to be exhaustive, but illustrates that the continuity estimate is a sensible extension of the comparison principle, satisfied in a wide range of contexts. In what follows, \(E\subseteq {\mathbb {R}}^d\) is a Polish subset and \(\Theta \) a topological space.
Proposition 5.15
(Coercive \(\Lambda \)) Let \(\Lambda : E \times {\mathbb {R}}^d \times \Theta \rightarrow {\mathbb {R}}\) be continuous and uniformly coercive: that is, for any compact \(K \subseteq E\) we have
Then the continuity estimate holds for \(\Lambda \) with respect to any penalization function \(\Psi \).
Proof
Let \(\Psi (x,y) = \tfrac{1}{2}|x-y|^2\). Let \((x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon },\theta _{\alpha ,\varepsilon })\) be fundamental for \(\Lambda \) with respect to \(\Psi \). Set \(p_{\alpha ,\varepsilon } = \alpha (x_{\alpha ,\varepsilon } - y_{\alpha ,\varepsilon })\). By the upper bound (2.4), we find that for sufficiently small \(\varepsilon > 0\) there is some \(\alpha (\varepsilon )\) such that
As the variables \(y_{\alpha ,\varepsilon }\) are contained in a compact set by property (C1) of fundamental collections of variables, the uniform coercivity implies that the momenta \(p_{\alpha ,\varepsilon }\) for \(\alpha \ge \alpha (\varepsilon )\) remain in a bounded set. Thus, we can extract a subsequence \(\alpha '\) such that \((x_{\alpha ',\varepsilon },y_{\alpha ',\varepsilon },p_{\alpha ',\varepsilon },\theta _{\alpha ',\varepsilon })\) converges to \((x,y,p,\theta )\) with \(x = y\) due to property (C2) of fundamental collections of variables. By continuity of \(\Lambda \) we find
establishing the continuity estimate. \(\square \)
Proposition 5.16
(One-sided Lipschitz \(\Lambda \)) Let \(\Lambda : E \times {\mathbb {R}}^d \times \Theta \rightarrow {\mathbb {R}}\) satisfy
for some collection of constants \(c(\theta )\) satisfying \(\sup _\theta c(\theta ) < \infty \) and a function \(\omega : {\mathbb {R}}^+ \rightarrow {\mathbb {R}}^+\) satisfying \(\lim _{\delta \downarrow 0} \omega (\delta ) = 0\).
Then the continuity estimate holds for \(\Lambda \) with respect to \(\Psi (x,y) = \tfrac{1}{2}|x-y|^2\).
Proof
Let \(\Psi (x,y) = \tfrac{1}{2}|x-y|^2\). Let \((x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon },\theta _{\alpha ,\varepsilon })\) be fundamental for \(\Lambda \) with respect to \(\Psi \). Set \(p_{\alpha ,\varepsilon } = \alpha (x_{\alpha ,\varepsilon } - y_{\alpha ,\varepsilon })\). We find
which equals 0 since \(\sup _\theta c(\theta ) < \infty \), \(\lim _{\delta \downarrow 0} \omega (\delta ) = 0\), and by property (C2) of a fundamental collection of variables. \(\square \)
For the empirical measure of a collection of independent processes one obtains maps \(\Lambda \) that are neither uniformly coercive nor Lipschitz. In this context one can still establish the continuity estimate. We treat a simple one-dimensional case and then state a more general version, for which we refer to [23].
Proposition 5.17
Suppose that \(E = [-1,1]\) and that \(\Lambda (x,p,\theta )\) is given by
with \(c_-,c_+\) non-negative functions of \(\theta \). Then the continuity estimate holds for \(\Lambda \) with respect to \(\Psi (x,y) = \tfrac{1}{2}(x-y)^2\).
Proof
Let \(\Psi (x,y) = \tfrac{1}{2}(x-y)^2\). Let \((x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon },\theta _{\alpha ,\varepsilon })\) be fundamental for \(\Lambda \) with respect to \(\Psi \). Set \(p_{\alpha ,\varepsilon } = \alpha (x_{\alpha ,\varepsilon } - y_{\alpha ,\varepsilon })\).
We have
Now note that \(y_{\alpha ,\varepsilon }-x_{\alpha ,\varepsilon }\) is positive if and only if \(e^{p_{\alpha ,\varepsilon }} -1\) is negative, so that the first term is bounded above by 0. By a similar argument, the second term is bounded above by 0. Thus the continuity estimate is satisfied. \(\square \)
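The sign argument used twice in this proof is elementary and can be checked mechanically. The sketch below uses toy values of our own choosing (the exact form of \(\Lambda \) is in the omitted display): with \(p = \alpha (x-y)\), the factors \(y - x\) and \(e^{p} - 1\) always have opposite signs, so their product is at most 0.

```python
import math

# p = alpha*(x - y): if y > x then p < 0 and exp(p) - 1 < 0; if y < x
# then p > 0 and exp(p) - 1 > 0; if x == y both factors vanish.
for alpha in (1.0, 10.0, 100.0):
    for x, y in ((-0.5, 0.3), (0.8, -0.2), (0.1, 0.1)):
        p = alpha * (x - y)
        assert (y - x) * (math.exp(p) - 1) <= 0
```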
Proposition 5.18
Suppose \(E = {\mathcal {P}}(\{1,\dots ,q\}) \times ({\mathbb {R}}^+)^\Gamma \) and suppose that \(\Lambda \) is given by
where v is a proper kernel. Then the continuity estimate holds for \(\Lambda \) with respect to penalization functions (see Sect. C)
Here we denote \(r^+ = r \vee 0\) for \(r \in {\mathbb {R}}\).
In this context, one can use coercivity as in Proposition 5.15 in combination with the directional properties used in the proof of Proposition 5.17 above.
To be more specific: the proof of this proposition can be carried out exactly as the proof of Theorem 3.8 of [23]. Namely, at any point where a converging subsequence is constructed, the variables \(\alpha \) need to be chosen such that we also obtain convergence of the measures \(\theta _{\alpha ,\varepsilon }\) in \({\mathcal {P}}(F)\).
5.4 Verifying Assumption 2.17 for the exponential internal Hamiltonian
Proposition 5.19
Let \(\Lambda \) be as in Proposition 5.7:
Then we have \(\partial _p \Lambda ((\mu ,w),p) \subseteq T_E(\mu ,w)\).
A sketch of the verification of Assumption 2.17
We sketch the proof in a simplified case, the general case being similar. Consider \(E={\mathcal {P}}(\{a,b\})\) (ignoring the flux for the moment), and identify E with the simplex in \({\mathbb {R}}^2\). Fix the control \(\theta \in \Theta \). We have to show \(\partial _p \Lambda (\mu ,p,\theta ) \subseteq T_E(\mu )\). Recall that \(T_E(\mu )\) is the tangent cone at \(\mu \), that is, the set of vectors at \(\mu \) pointing into E. We compute the vector \(\nabla _p \Lambda (\mu ,p,\theta ) \in {\mathbb {R}}^2\),
For \(\mu =(\mu _a,\mu _b)\in E\) with \(\mu _a,\mu _b > 0\), the tangent cone \(T_E(\mu )\) is spanned by \((1,-1)^T\). Since \(\nabla _p \Lambda (\mu ,p,\theta )\) is orthogonal to \((1,1)^T\), we indeed find that \(\partial _p \Lambda (\mu ,p,\theta ) \subseteq T_E(\mu )\) in that case. For \(\mu =(1,0)\), the tangent cone is \(T_E(1,0)=\{\lambda (-1,1)^T\,:\,\lambda \ge 0\}\). We have
which is parallel to \((-1,1)^T\), and therefore \(\partial _p \Lambda (\mu ,p,\theta ) \subseteq T_E(\mu )\). The argument is similar for \(\mu =(0,1)\). The general case (including the fluxes) follows from a more tedious, but straightforward, computation. \(\square \)
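The two-state computation sketched above is easy to verify numerically. The sketch below is our own toy instance, with hypothetical rates and the flux coordinates ignored: it checks that the components of \(\nabla _p \Lambda (\mu ,p,\theta )\) sum to zero (orthogonality to \((1,1)^T\)) and that at the boundary point \(\mu = (1,0)\) the gradient points into the simplex.

```python
import math

r_ab, r_ba = 1.5, 0.5  # hypothetical jump rates between states a and b

def grad_p_lambda(mu, p):
    """Gradient in p of Lambda(mu, p) = mu_a r_ab (e^{p_b - p_a} - 1)
                                      + mu_b r_ba (e^{p_a - p_b} - 1)."""
    flow_ab = mu[0] * r_ab * math.exp(p[1] - p[0])
    flow_ba = mu[1] * r_ba * math.exp(p[0] - p[1])
    return (flow_ba - flow_ab, flow_ab - flow_ba)

# Interior point: the gradient is orthogonal to (1,1), i.e. lies in the
# span of (1,-1), the tangent space of the simplex.
g = grad_p_lambda((0.7, 0.3), (0.2, -0.4))
assert abs(g[0] + g[1]) < 1e-12

# Boundary point mu = (1,0): only the edge a -> b is active, so the
# gradient is a non-negative multiple of (-1,1), pointing into E.
g = grad_p_lambda((1.0, 0.0), (0.2, -0.4))
assert g[0] <= 0 and g[1] >= 0
```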
References
Aliprantis, C.D., Border, K.C.: Infinite Dimensional Analysis. A Hitchhiker’s Guide, 3rd edn. Springer, Berlin (2006)
Bardi, M., Capuzzo-Dolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton–Jacobi–Bellman equations. Birkhäuser, Boston (1997)
Bardi, M., Cesaroni, A., Ghilli, D.: Large deviations for some fast stochastic volatility models by viscosity methods. Discrete Contin. Dyn. Syst. 35(9), 3965–3988 (2015)
Barles, G.: Solutions de viscosité des équations de Hamilton–Jacobi. Mathématiques & Applications (Berlin) [Mathematics & Applications], Vol. 17. Springer, Paris (1994)
Budhiraja, A., Dupuis, P.: Analysis and Approximation of Rare Events: Representations and Weak Convergence Methods, volume 94 of Probability Theory and Stochastic Modelling. Springer, Berlin (2019)
Budhiraja, A., Dupuis, P., Ganguly, A.: Large deviations for small noise diffusions in a fast Markovian environment. Electron. J. Probab. 23, 33 (2018)
Collet, F., Kraaij, R.C.: Dynamical moderate deviations for the Curie–Weiss model. Stoch. Process. Appl. 127(9), 2900–2925 (2017)
Crandall, M.G., Ishii, H., Lions, P.-L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. New Ser. 27(1), 1–67 (1992)
Cutrì, A., Da Lio, F.: Comparison and existence results for evolutive non-coercive first-order Hamilton–Jacobi equations. ESAIM Control Optim. Calc. Var. 13(3), 484–502 (2007)
Da Lio, F., Ley, O.: Convex Hamilton–Jacobi equations under superlinear growth conditions on data. Appl. Math. Optim. 63(3), 309–339 (2011)
Deimling, K.: Multivalued Differential Equations. De Gruyter Series in Nonlinear Analysis and Applications, vol. 1. Walter de Gruyter & Co., Berlin (1992)
Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications, 2nd edn. Springer, Berlin (1998)
Den Hollander, F.: Large Deviations, vol. 14. American Mathematical Society, New York (2008)
Donsker, M.D., Varadhan, S.R.S.: Asymptotic evaluation of certain Markov process expectations for large time, I. Commun. Pure Appl. Math. 28(1), 1–47 (1975)
Donsker, M.D., Varadhan, S.R.S.: On a variational formula for the principal eigenvalue for operators with maximum principle. Proc. Natl. Acad. Sci. 72(3), 780–783 (1975)
Dupuis, P., Ishii, H., Soner, H.M.: A viscosity solution approach to the asymptotic analysis of queueing systems. Ann. Probab. 18(1), 226–255 (1990)
Evans, L.C., Souganidis, P.E.: A PDE approach to certain large deviation problems for systems of parabolic equations. Ann. Inst. H. Poincaré Anal. Non Linéaire, 6(suppl.):229–258 (1989). Analyse non linéaire (Perpignan, 1987)
Feng, J., Fouque, J.-P., Kumar, R.: Small-time asymptotics for fast mean-reverting stochastic volatility models. Ann. Appl. Probab. 22(4), 1541–1575 (2012)
Feng, J., Kurtz, T.G.: Large Deviations for Stochastic Processes. American Mathematical Society, Philadelphia (2006)
Fleming, W.H., Souganidis, P.E.: PDE-viscosity solution approach to some problems of large deviations. Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Ser. 4 13(2), 171–192 (1986)
Ghilli, D.: Viscosity methods for large deviations estimates of multiscale stochastic processes. ESAIM COCV 24(2), 605–637 (2018)
Hiriart-Urruty, J.-B., Lemaréchal, C.: Fundamentals of Convex Analysis. Grundlehren Text Editions. Springer, Berlin (2001). Abridged version of Convex Analysis and Minimization Algorithms I and II (Springer, Berlin, 1993)
Kraaij, R.C.: Flux large deviations of weakly interacting jump processes via well-posedness of an associated Hamilton–Jacobi equation. Bernoulli (to appear) (2017)
Kraaij, R.C.: A general convergence result for viscosity solutions of Hamilton–Jacobi equations and non-linear semigroups. J. Funct. Anal. (to appear) (2019)
Kraaij, R.C., Mahé, L.: Well-posedness of Hamilton–Jacobi equations in population dynamics and applications to large deviations. Stoch. Process. Appl. (2020)
Kraaij, R.C., Schlottke, M.C.: A large deviation principle for Markovian slow-fast systems. preprint; arXiv:2011.05686
Kumar, R., Popovic, L.: Large deviations for multi-scale jump-diffusion processes. Stoch. Process. Appl. 127(4), 1297–1320 (2017)
Kunze, M.: Non-Smooth Dynamical Systems. Number nr. 1744 in Lecture Notes in Mathematics. Springer, Berlin (2000)
Peletier, M.A., Schlottke, M.C.: Large-deviation principles of switching Markov processes via Hamilton–Jacobi equations. preprint; arXiv:1901.08478 (2019)
Pinsky, R.G.: The \(I\)-function for diffusion processes with boundaries. Ann. Probab. 13(3), 676–692 (1985)
Pinsky, R.G.: On evaluating the Donsker–Varadhan \(I\)-function. Ann. Probab. 13, 342–362 (1985)
Pinsky, R.G.: Regularity properties of the Donsker–Varadhan rate functional for non-reversible diffusions and random evolutions. Stoch. Dyn. 7(2), 123–140 (2007)
Shwartz, A., Weiss, A.: Large deviations with diminishing rates. Math. Oper. Res. 30(2), 281–310 (2005)
Acknowledgements
MS acknowledges financial support through NWO Grant 613.001.552.
Appendices
Viscosity solutions
In Sect. 3 we work with a pair of Hamilton–Jacobi equations instead of a single Hamilton–Jacobi equation. To this end, we need to extend the notion of a viscosity solution and that of the comparison principle of Sect. 2.1.
Definition A.1
Let \(A_1 \subseteq C(E) \times C(E)\) and \(A_2 \subseteq C(E) \times C(E)\). Fix \(\lambda > 0\) and \(h_1,h_2 \in C_b(E)\). Consider the equations
We say that u is a (viscosity) subsolution of Eq. (A.1) if u is bounded, upper semi-continuous and if for all \((f,g) \in A_1\) there exists a sequence \(x_n \in E\) such that
We say that v is a (viscosity) supersolution of Eq. (A.2) if v is bounded, lower semi-continuous and if for all \((f,g) \in A_2\) there exists a sequence \(x_n \in E\) such that
If \(h_1 = h_2\), we say that u is a (viscosity) solution of Eqs. (A.1) and (A.2) if it is both a subsolution to (A.1) and a supersolution to (A.2).
We say that (A.1) and (A.2) satisfy the comparison principle if for every subsolution u to (A.1) and supersolution v to (A.2), we have \(\sup _E (u-v) \le \sup _E (h_1 - h_2)\).
As before, if the test functions have compact level sets, the existence of a sequence can be replaced by the existence of a point.
Regularity of the Hamiltonian
In this section, we establish continuity, convexity and the existence of a containment function for the Hamiltonian \({\mathcal {H}}\) of (2.2). We repeat its definition for convenience:
Proposition B.1
(Regularity of the Hamiltonian) Let \({\mathcal {H}} : E \times {\mathbb {R}}^d\rightarrow {\mathbb {R}}\) be the Hamiltonian as in (B.1), and suppose that Assumptions 2.14 and 2.15 are satisfied. Then:
(i) For any \(x \in E\), the map \(p \mapsto {\mathcal {H}}(x,p)\) is convex and \({\mathcal {H}}(x,0) = 0\).
(ii) With the containment function \(\Upsilon : E \rightarrow {\mathbb {R}}\) of (\(\Lambda 3\)), we have
$$\begin{aligned} \sup _{x \in E}{\mathcal {H}}(x,\nabla \Upsilon (x)) \le C_\Upsilon < \infty . \end{aligned}$$
Proof
The map \(p \mapsto {\mathcal {H}}(x,p)\) is convex as it is the supremum over convex (in p) functions.
For proving \({\mathcal {H}}(x,0) = 0\), let \(x \in E\). Then by (\(\Lambda 2\)) of Assumption 2.14, we have \(\Lambda (x,0,\theta ) = 0\), and therefore
since \({\mathcal {I}}\ge 0\) by Assumption 2.15 and \({\mathcal {I}}(x,\theta _x^0)=0\) for some \(\theta _x^0\) by (\({\mathcal {I}}2\)) of Assumption 2.15. Regarding (ii), we note that by (\(\Lambda 3\)),
\(\square \)
To prove that \({\mathcal {H}}\) is continuous, we use Assumption 2.15. What we truly need, however, is that \({\mathcal {I}}\) \(\Gamma \)-converges as a function of x. We establish this result first.
Proposition B.2
(Gamma convergence of the cost functions) Let a cost function \({\mathcal {I}}:E\times \Theta \rightarrow [0,\infty ]\) satisfy Assumption 2.15. Then if \(x_n\rightarrow x\) in E, the functionals \({\mathcal {I}}_n\) defined by
converge in the \(\Gamma \)-sense to \({\mathcal {I}}_\infty (\theta ) := {\mathcal {I}}(x,\theta )\). That is:
1. If \(x_n \rightarrow x\) and \(\theta _n \rightarrow \theta \), then \(\liminf _{n\rightarrow \infty } {\mathcal {I}}(x_n,\theta _n) \ge {\mathcal {I}}(x,\theta )\),
2. For \(x_n \rightarrow x\) and all \(\theta \in \Theta \) there are controls \(\theta _n \in \Theta \) such that \(\theta _n \rightarrow \theta \) and \(\limsup _{n\rightarrow \infty } {\mathcal {I}}(x_n,\theta _n) \le {\mathcal {I}}(x,\theta )\).
Proof
Let \(x_n\rightarrow x\). If \(\theta _n\rightarrow \theta \), then by lower semicontinuity (\({\mathcal {I}}1\)),
For the \(\text {lim-sup}\) bound, let \(\theta \in \Theta \). If \({\mathcal {I}}(x,\theta )=\infty \), there is nothing to prove. Thus suppose that \({\mathcal {I}}(x,\theta )\) is finite. Then by (\({\mathcal {I}}4\)), there is a neighborhood \(U_x\) of x and a constant \(M < \infty \) such that for any \(y\in U_x\),
Since \(x_n\rightarrow x\), the \(x_n\) are eventually contained in \(U_x\). Taking the constant sequence \(\theta _n:=\theta \), we thus get that \({\mathcal {I}}(x_n,\theta _n) \le M\) for all n large enough. By (\({\mathcal {I}}5\)),
and the \(\text {lim-sup}\) bound follows. \(\square \)
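The role of the constant recovery sequence \(\theta _n := \theta \) can be illustrated numerically. The sketch below is our own toy instance: for the two-state Donsker–Varadhan functional with hypothetical x-dependent rates \(r_{12}(x) = 2 + x\) and \(r_{21}(x) = 3\), continuity of the rates already gives \({\mathcal {I}}(x_n,\theta ) \rightarrow {\mathcal {I}}(x,\theta )\) along \(x_n = 1/n \rightarrow 0\).

```python
import math

def dv(x, theta):
    """I(x, theta) for a 2-state chain with rates r12(x) = 2 + x and
    r21(x) = 3 (hypothetical), via grid search over w = (0, t)."""
    r12, r21 = 2.0 + x, 3.0
    return max(r12 * theta[0] * (1 - math.exp(k * 0.001))
               + r21 * theta[1] * (1 - math.exp(-k * 0.001))
               for k in range(-5000, 5001))

theta = (0.5, 0.5)                       # the constant recovery sequence
limit = dv(0.0, theta)
gaps = [abs(dv(1.0 / n, theta) - limit) for n in (10, 100, 1000)]
assert gaps[0] > gaps[1] > gaps[2]       # I(x_n, theta) -> I(x, theta)
```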
Proposition B.3
(Continuity of the Hamiltonian) Let \({\mathcal {H}} : E \times {\mathbb {R}}^d\rightarrow {\mathbb {R}}\) be the Hamiltonian defined in (2.2), and suppose that Assumptions 2.14 and 2.15 are satisfied. Then the map \((x,p) \mapsto {\mathcal {H}}(x,p)\) is continuous and the Lagrangian \((x,v) \mapsto {\mathcal {L}}(x,v) := \sup _{p} \langle p,v\rangle - {\mathcal {H}}(x,p)\) is lower semi-continuous.
Before we start with the proof, we give two remarks: one on the generality of its statement and one on the assumption that \(\Theta \) is Polish.
Remark B.4
The proof of upper semi-continuity of \({\mathcal {H}}\) works in general, using continuity properties of \(\Lambda \), lower semi-continuity of \((x,\theta ) \mapsto {\mathcal {I}}(x,\theta )\) and the compact sublevel sets of \({\mathcal {I}}(x,\cdot )\). To establish lower semi-continuity, we need that the functionals \({\mathcal {I}}\) \(\Gamma \)-converge as a function of x. This was established in Proposition B.2.
Remark B.5
In the lemma we use a sequential characterization of upper hemi-continuity, which holds if \(\Theta \) is Polish. This is inspired by the natural formulation of \(\Gamma \)-convergence in terms of sequences. An extension of our results beyond the Polish context should be possible for Hausdorff spaces \(\Theta \) that are k-spaces in which all compact sets are metrizable.
We will use the following technical result to establish upper semi-continuity of \({\mathcal {H}}\).
Lemma B.6
(Lemma 17.30 in [1]) Let \({\mathcal {X}}\) and \({\mathcal {Y}}\) be two Polish spaces. Let \(\phi : {\mathcal {X}}\rightarrow {\mathcal {K}}({\mathcal {Y}})\), where \({\mathcal {K}}({\mathcal {Y}})\) is the space of non-empty compact subsets of \({\mathcal {Y}}\). Suppose that \(\phi \) is upper hemi-continuous, that is if \(x_n \rightarrow x\) and \(y_n \rightarrow y\) and \(y_n \in \phi (x_n)\), then \(y \in \phi (x)\).
Let \(f : \text {Graph} (\phi ) \rightarrow {\mathbb {R}}\) be upper semi-continuous. Then the map \(m(x) = \sup _{y \in \phi (x)} f(x,y)\) is upper semi-continuous.
Proof of Proposition B.3
We start by establishing upper semi-continuity of \({\mathcal {H}}\). We argue on the basis of Lemma B.6. Recall the representation of \({\mathcal {H}}\) of (B.1). Set \({\mathcal {X}} = E\times {\mathbb {R}}^d\) for the (x, p) variables, \({\mathcal {Y}} = \Theta \), and \(f(x,p,\theta ) = \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta )\). This function is upper semi-continuous by Assumption 2.15 (\({\mathcal {I}}1\)) and by Assumption 2.14 (\(\Lambda 1\)).
By Assumption 2.15 (\({\mathcal {I}}2\)), we have \({\mathcal {H}}(x,p) \ge \Lambda (x,p,\theta _{x}^0)\), where \(\theta _{x}^0\) is a control such that \({\mathcal {I}}(x,\theta _{x}^0) = 0\). Thus, it suffices to restrict the supremum over \(\theta \in \Theta \) to \(\theta \in \phi (x,p)\) where
where \(\left| \! \left| \cdot \right| \! \right| _\Theta \) denotes the supremum norm on \(\Theta \). Note that \(\left| \! \left| \Lambda (x,p,\cdot )\right| \! \right| _{\Theta } < \infty \) by Assumption 2.14 (\(\Lambda 4\)). It follows that
\(\phi (x,p)\) is non-empty as \(\theta _{x}^0 \in \phi (x,p)\) and it is compact due to Assumption 2.15 (\({\mathcal {I}}3\)). We are left to show that \(\phi \) is upper hemi-continuous.
Thus, let \((x_n,p_n,\theta _n) \rightarrow (x,p,\theta )\) with \(\theta _n \in \phi (x_n,p_n)\). We establish that \(\theta \in \phi (x,p)\). By (\({\mathcal {I}}1\)) and the definition of \(\phi \) we find
which implies indeed that \(\theta \in \phi (x,p)\). Thus, upper semi-continuity follows by an application of Lemma B.6.
We proceed with proving lower semi-continuity of \({\mathcal {H}}\). Suppose that \((x_n,p_n) \rightarrow (x,p)\); we prove that \(\liminf _n {\mathcal {H}}(x_n,p_n) \ge {\mathcal {H}}(x,p)\).
Let \(\theta \) be an optimizer such that \({\mathcal {H}}(x,p) = \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta )\); such a \(\theta \) exists since the supremum is attained on the compact set \(\phi (x,p)\). We have
-
By Proposition B.2 there are \(\theta _n\) such that \(\theta _n \rightarrow \theta \) and \(\limsup _n {\mathcal {I}}(x_n,\theta _n) \le {\mathcal {I}}(x,\theta )\).
-
\(\Lambda (x_n,p_n,\theta _n)\) converges to \(\Lambda (x,p,\theta )\) by Assumption (\(\Lambda 1\)).
Therefore,
establishing that \({\mathcal {H}}\) is lower semi-continuous.
The Lagrangian \({\mathcal {L}}\) is obtained as a supremum of continuous functions, and a supremum of continuous functions is lower semi-continuous. \(\square \)
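The final step can be spelled out. Assuming, as in the main text, that \({\mathcal {L}}\) is the Legendre dual of \({\mathcal {H}}\), the argument is the standard one for suprema of continuous functions:

```latex
% Sketch, assuming \mathcal{L} is the Legendre dual of \mathcal{H}:
\mathcal{L}(x,v) = \sup_{p \in \mathbb{R}^d} \big[ \langle p, v \rangle - \mathcal{H}(x,p) \big].
% For each fixed p the map (x,v) \mapsto \langle p, v \rangle - \mathcal{H}(x,p)
% is continuous, since \mathcal{H} was just shown to be continuous.
% Hence, for (x_n, v_n) \to (x,v) and every fixed p,
\liminf_{n} \mathcal{L}(x_n, v_n)
  \;\ge\; \liminf_{n} \big[ \langle p, v_n \rangle - \mathcal{H}(x_n, p) \big]
  \;=\; \langle p, v \rangle - \mathcal{H}(x, p),
% and taking the supremum over p yields
% \liminf_n \mathcal{L}(x_n, v_n) \ge \mathcal{L}(x, v).
```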
A more general continuity estimate
In the classical literature, the comparison principle for the Hamilton–Jacobi equation \(f - \lambda Hf = h\) is often proven using a squared distance as a penalization function. This works well due to the quadratic structure of the Hamiltonian. In other contexts, e.g. for Hamiltonians arising from the large deviations of jump processes, this choice is not natural; see the issues arising in the proofs in [16, 23]. In the absence of a general method to solve these issues, ad-hoc procedures can be introduced. One such procedure, introduced in [23], is to work with multiple penalization functions (in that context \(\{\Psi _1,\Psi _2\}\)) that explore different parts of the state space.
Any argument carried out in the main text can also be carried out with the generalized continuity estimate below.
Definition C.1
We say that \(\{\Psi _1,\Psi _2\}\), \(\Psi _i : E^2 \rightarrow {\mathbb {R}}^+\), is a pair of penalization functions if \(\Psi _i \in C^1(E^2)\) and if \(x = y\) holds if and only if \(\Psi _i(x,y) = 0\) for all i.
Definition C.2
(Continuity estimate) Let \({\mathcal {G}}: E \times {\mathbb {R}}^d \times \Theta \rightarrow {\mathbb {R}}\), \( (x,p,\theta )\mapsto {\mathcal {G}}(x,p,\theta )\), be a function and \(\{\Psi _1,\Psi _2\}\) a pair of penalization functions. Suppose that for each \(\varepsilon > 0\) there is a sequence \(\alpha _2 \rightarrow \infty \); as before, we suppress the dependence on \(\varepsilon \). Suppose that for each \(\varepsilon \) and \(\alpha _2\) there is a sequence \(\alpha _1 \rightarrow \infty \); we suppress writing the dependence of the sequence \(\alpha _1\) on \(\varepsilon \) and \(\alpha _2\). We write \(\alpha = (\alpha _1,\alpha _2)\).
Suppose that for each triplet \((\varepsilon ,\alpha _1,\alpha _2)\) as above we have variables \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\) in \(E^2\) and variables \(\theta _{\varepsilon ,\alpha }\) in \(\Theta \). We say that this collection is fundamental for \({\mathcal {G}}\) with respect to \(\{\Psi _1,\Psi _2\}\) if:
-
(C1)
For each \(\varepsilon \), there are compact sets \(K_\varepsilon \subseteq E\) and \({\widehat{K}}_\varepsilon \subseteq \Theta \) such that for all \(\alpha \) we have \(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha } \in K_\varepsilon \) and \(\theta _{\varepsilon ,\alpha }\in {\widehat{K}}_\varepsilon \).
-
(C2)
For each \(\varepsilon > 0\) and \(\alpha _2\) there are limit points \(x_{\varepsilon ,\alpha _2}, y_{\varepsilon ,\alpha _2} \in K_\varepsilon \) of \(x_{\varepsilon ,\alpha }\) and \(y_{\varepsilon ,\alpha }\) as \(\alpha _1 = \alpha _1(\varepsilon ,\alpha _2) \rightarrow \infty \). For each \(\varepsilon \) there are limit points \(x_\varepsilon ,y_\varepsilon \) in \(K_\varepsilon \) of \(x_{\varepsilon ,\alpha _2}\) and \(y_{\varepsilon ,\alpha _2}\) as \(\alpha _2 \rightarrow \infty \). We furthermore have
$$\begin{aligned}&\Psi _1(x_{\varepsilon ,\alpha _2},y_{\varepsilon ,\alpha _2}) = 0&\forall \, \varepsilon>0, \, \forall \, \alpha _2, \\&\Psi _1(x_\varepsilon ,y_\varepsilon ) + \Psi _2(x_\varepsilon ,y_\varepsilon ) = 0,&\forall \, \varepsilon> 0, \\&\lim _{\alpha _1 \rightarrow \infty } \alpha _1 \Psi _1(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha }) = 0,&\forall \, \varepsilon>0, \, \forall \, \alpha _2, \\&\lim _{\alpha _2 \rightarrow \infty } \alpha _2 \Psi _2(x_{\varepsilon ,\alpha _2},y_{\varepsilon ,\alpha _2}) = 0,&\forall \, \varepsilon > 0. \end{aligned}$$ -
(C3)
We have
$$\begin{aligned}&\sup _{\alpha _2} \sup _{\alpha _1} {\mathcal {G}}\left( y_{\varepsilon ,\alpha }, - \sum _{i=1}^2\alpha _i (\nabla \Psi _i(x_{\varepsilon ,\alpha },\cdot ))(y_{\varepsilon ,\alpha }),\theta _{\varepsilon ,\alpha }\right) < \infty , \end{aligned}$$(C.1)$$\begin{aligned}&\inf _{\alpha _2} \inf _{\alpha _1} {\mathcal {G}}\left( x_{\varepsilon ,\alpha }, \sum _{i=1}^2\alpha _i (\nabla \Psi _i(\cdot ,y_{\varepsilon ,\alpha }))(x_{\varepsilon ,\alpha }),\theta _{\varepsilon ,\alpha }\right) > - \infty . \end{aligned}$$(C.2)In other words, the operator \({\mathcal {G}}\), evaluated in the proper momenta, is eventually bounded from above and from below.
We say that \({\mathcal {G}}\) satisfies the continuity estimate if for every fundamental collection of variables we have for each \(\varepsilon > 0\) that
Differential inclusions
To establish that Condition 8.11 of [19] is satisfied in the proof of Theorem 2.8, we need to solve a differential inclusion. The following appendix is based on [11, 28] and reproduces the corresponding appendix of [25]; we state it for completeness.
Let \(D \subseteq {\mathbb {R}}^d\) be a non-empty set. A multi-valued mapping \(F : D \rightarrow 2^{{\mathbb {R}}^d} \setminus \{\emptyset \}\) is a map that assigns to every \(x \in D\) a set \(F(x) \subseteq {\mathbb {R}}^d\), \(F(x) \ne \emptyset \).
Definition D.1
Let \(I \subseteq {\mathbb {R}}\) be an interval with \(0 \in I\), \(D\subseteq {\mathbb {R}}^d\), \(x \in D\) and \(F : D \rightarrow 2^{{\mathbb {R}}^d} \setminus \{\emptyset \}\) a multi-valued mapping. A function \(\gamma \) such that
-
(a)
\(\gamma : I \rightarrow D\) is absolutely continuous,
-
(b)
\(\gamma (0) = x\),
-
(c)
\({\dot{\gamma }}(t) \in F(\gamma (t))\) for almost every \(t \in I\)
is called a solution of the differential inclusion \({\dot{\gamma }} \in F(\gamma )\) a.e., \(\gamma (0) = x\).
If we assume sufficient regularity on the multi-valued mapping F, we can ensure the existence of a solution to differential inclusions that remain inside D.
Definition D.2
Let \(D \subseteq {\mathbb {R}}^d\) be a non-empty set and let \(F : D \rightarrow 2^{{\mathbb {R}}^d} \setminus \{\emptyset \}\) be a multi-valued mapping.
-
(i)
We say that F is closed, compact or convex valued if each set F(x), \(x \in D\) is closed, compact or convex, respectively.
-
(ii)
We say that F is upper hemi-continuous at \(x \in D\) if for each neighbourhood \({\mathcal {U}}\) of F(x), there is a neighbourhood \({\mathcal {V}}\) of x in D such that \(F({\mathcal {V}}) \subseteq {\mathcal {U}}\). We say that F is upper hemi-continuous if it is upper hemi-continuous at every point. F is upper hemi-continuous if and only if for each sequence \(x_n \rightarrow x\) in D and \(\xi _n \in F(x_n)\) such that \(\xi _n \rightarrow \xi \) we have \(\xi \in F(x)\).
Definition D.3
Let \(D \subseteq {\mathbb {R}}^d\) be a closed non-empty set. The tangent cone to D at \(x \in D\) is
$$\begin{aligned} T_D(x) := \left\{ z \in {\mathbb {R}}^d \, \bigg | \, \liminf _{\lambda \downarrow 0} \frac{d(x + \lambda z, D)}{\lambda } = 0\right\} , \end{aligned}$$
where \(d(y,D) := \inf _{w \in D} |y - w|\).
The set \(T_D(x)\) is sometimes called the Bouligand contingent cone.
Lemma D.4
(Proposition 4.1 in [11]) Let \(D \subseteq {\mathbb {R}}^d\) be a closed, convex, non-empty set. Then the set \(T_D(x)\) is convex and contains 0.
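As a concrete example, relevant for the half-line state space of Appendix E, a direct computation from the definition of the tangent cone gives, for \(D = [0,\infty ) \subseteq {\mathbb {R}}\):

```latex
T_{[0,\infty)}(x) =
  \begin{cases}
    [0, \infty) & \text{if } x = 0, \\
    \mathbb{R}  & \text{if } x > 0,
  \end{cases}
```

so at the boundary only inward-pointing (non-negative) velocities are tangent; both cones are convex and contain 0, in line with Lemma D.4.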
Lemma D.5
(Theorem 2.2.1 in [28], Lemma 5.1 in [11]) Let \(D \subseteq {\mathbb {R}}^d\) be closed and let \(F : D \rightarrow 2^{{\mathbb {R}}^d} \setminus \{\emptyset \}\) satisfy
-
(a)
F has closed convex values and is upper hemi-continuous;
-
(b)
for every x, we have \(F(x) \cap T_D(x) \ne \emptyset \);
-
(c)
F has bounded growth: there is some \(c > 0\) such that \(\left| \! \left| F(x)\right| \! \right| := \sup \left\{ |z| \, \big | \, z \in F(x) \right\} \le c(1 + |x|)\) for all \(x \in D\).
Then the differential inclusion \({\dot{\gamma }} \in F(\gamma )\) has a solution on \({\mathbb {R}}^+\) for every starting point \(x \in D\).
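The conditions of Lemma D.5 can be checked on a toy example. The following Python sketch simulates a solution by forward Euler along a continuous selection and confirms that the trajectory never leaves D; the inclusion \(F(x) = [-x, 1-x]\) on \(D = [0,\infty )\) and the selection are illustrative choices, not taken from the paper.

```python
# Forward-Euler sketch of a solution to a differential inclusion
# dot(gamma) in F(gamma), gamma(0) = x0, on D = [0, infinity).

def F(x):
    """Set-valued map with closed convex values: F(x) = [-x, 1 - x].
    Bounded growth holds with c = 1: ||F(x)|| <= 1 + |x|.  At the boundary,
    F(0) = [0, 1] meets the tangent cone T_D(0) = [0, inf), as required."""
    return (-x, 1.0 - x)

def selection(x):
    """A continuous single-valued selection s(x) in F(x)."""
    return 0.5 - x  # lies in [-x, 1 - x] for every x

def euler(x0, t_end=10.0, dt=1e-3):
    x, t = x0, 0.0
    while t < t_end:
        lo, hi = F(x)
        v = selection(x)
        assert lo - 1e-12 <= v <= hi + 1e-12  # velocity stays in F(x)
        x += dt * v
        t += dt
    return x

x_final = euler(x0=0.0)
assert x_final >= 0.0             # trajectory never left D = [0, inf)
assert abs(x_final - 0.5) < 1e-2  # gamma' = 1/2 - gamma drives gamma to 1/2
```

Continuous selections are a special case; Lemma D.5 only requires the set-valued conditions (a)-(c) above.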
Pseudo-coercivity for exponential Hamiltonians
In this section we consider the notions of pseudo-coercivity and the continuity estimate for the Hamilton–Jacobi equation \(f - \lambda \Lambda f = h\) on \(E = [0,\infty )\), with \(h \in C_b(E)\) and \(\lambda > 0\), for the Hamiltonian \(\Lambda f(x) = \Lambda (x,f'(x))\) with
$$\begin{aligned} \Lambda (x,p) = x \left[ e^{-p} - 1\right] , \qquad x \in [0,\infty ), \, p \in {\mathbb {R}}, \end{aligned}$$
(E.1)
which is a simplified version of the Hamiltonian given in the introduction and Proposition 5.7.
The pseudo-coercivity estimate of [4, Pages 34 and 35] translates into the present context as
where \(Q(x,y,p) = \max \left( \Psi (H(x,p)),\Psi (H(y,p))\right) \), with \(m : [0,\infty ) \rightarrow [0,\infty )\) such that \(\lim _{t \downarrow 0} m(t) = 0\) and \(\Psi : {\mathbb {R}}\rightarrow [0,\infty )\) continuous.
We first make some general remarks on the relation between pseudo-coercivity and the continuity estimate. We then show that for (E.1), pseudo-coercivity fails, whereas the continuity estimate holds (in the case of a single \(\theta \)).
The continuity estimate (for a single \(\theta \)) is more general than pseudo-coercivity in the sense that:
-
It does not rely on using a multiple of \(|x-y|^2\) as a penalization in the comparison principle. This is of importance for Hamiltonians \(\Lambda \) on, e.g., the set of probability measures \({\mathcal {P}}(\{1,\dots ,q\})\) with \(q \in \{3,4,\dots \}\) as in Proposition 5.18.
-
It removes the necessity of taking absolute values in the estimate. This last fact is important, as can be seen from the Hamilton–Jacobi equation \(f - \lambda \Lambda f = h\), \(h \in C_b\), \(\lambda > 0\), for the Hamiltonian \(\Lambda f(x) = \Lambda (x,f'(x))\) with
$$\begin{aligned} \Lambda (x,p) = x \left[ e^{-p} - 1\right] , \qquad x \in [0,\infty ), p \in {\mathbb {R}}, \end{aligned}$$for which the comparison principle holds, whereas for \({\widetilde{\Lambda }}(x,p) := \Lambda (x,-p)\) it fails. Our continuity estimate holds for \(\Lambda \), but not for \({\widetilde{\Lambda }}\). Pseudo-coercivity fails for both as we explain below.
We first show that pseudo-coercivity fails for the Hamilton–Jacobi equation in terms of \(\Lambda \) of (E.1).
Lemma E.1
\(\Lambda \) is not pseudo-coercive.
Proof
A counterexample suffices.
Let W be the Lambert W function; that is, W is the inverse of \(\phi \) where \(\phi (x) = xe^x\). Next, let \(x_\alpha = 0\), \(y_\alpha = \alpha ^{-1} W(\alpha )\). Then we have \(y_\alpha \rightarrow 0\), \(\alpha y_\alpha \rightarrow \infty \), \(\alpha y_\alpha ^2 = \frac{W(\alpha )^2}{\alpha } \rightarrow 0\) and
Thus, with \(p_\alpha =\alpha (x_\alpha -y_\alpha )=-\alpha y_\alpha \),
contradicting (E.2). \(\square \)
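The asymptotics in the proof can be verified numerically. The following Python sketch, a check assuming the Hamiltonian \(\Lambda (x,p) = x[e^{-p}-1]\) of (E.1), implements the Lambert W function by Newton iteration and confirms that \(\alpha y_\alpha ^2 \rightarrow 0\) while \(|\Lambda (x_\alpha ,p_\alpha ) - \Lambda (y_\alpha ,p_\alpha )|\) stays of order one:

```python
# Numerical check of the counterexample in Lemma E.1,
# assuming Lambda(x, p) = x * (exp(-p) - 1) as in (E.1).
from math import exp, log

def lambert_w(a, tol=1e-12):
    """Newton iteration for the Lambert W function: solve w * e^w = a, a >= e."""
    w = log(a)  # good initial guess for large a
    for _ in range(100):
        step = (w * exp(w) - a) / (exp(w) * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def Lam(x, p):
    return x * (exp(-p) - 1.0)

alpha = 1e6
y = lambert_w(alpha) / alpha   # y_alpha = W(alpha) / alpha
p = -alpha * y                 # p_alpha = alpha * (x_alpha - y_alpha), x_alpha = 0

assert y < 1e-4                # y_alpha -> 0
assert alpha * y > 10.0        # alpha * y_alpha = W(alpha) -> infinity
assert alpha * y ** 2 < 1e-3   # alpha * y_alpha^2 = W(alpha)^2 / alpha -> 0
# |Lambda(x_a, p_a) - Lambda(y_a, p_a)| = y_a * (e^{alpha y_a} - 1) = 1 - y_a -> 1,
# although the penalization term alpha * |x_a - y_a|^2 vanishes.
assert abs(abs(Lam(0.0, p) - Lam(y, p)) - 1.0) < 1e-3
```

So the difference of Hamiltonians does not vanish along points where \(\alpha |x_\alpha - y_\alpha |^2 \rightarrow 0\), which is exactly the failure of pseudo-coercivity claimed in the lemma.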
Note that the comparison principle for \(f - \lambda \Lambda f = h\) does in fact hold. This is due to the fact that one only needs to establish that \(\liminf _{\alpha \rightarrow \infty } \Lambda (x_\alpha ,p_\alpha ) - \Lambda (y_\alpha ,p_\alpha ) \le 0\) for appropriately chosen \(x_\alpha ,y_\alpha ,p_\alpha \), without absolute value signs, see Proposition 5.17. The removal of absolute value signs is essential: the comparison principle for \(f - \lambda {\widetilde{\Lambda }} f = h\) for
$$\begin{aligned} {\widetilde{\Lambda }}(x,p) = \Lambda (x,-p) = x \left[ e^{p} - 1\right] , \qquad x \in [0,\infty ), \, p \in {\mathbb {R}}, \end{aligned}$$
fails. This is related to the statement that an associated large deviation principle fails, see Example E of [33].
Kraaij, R.C., Schlottke, M.C. Comparison Principle for Hamilton-Jacobi-Bellman Equations via a Bootstrapping Procedure. Nonlinear Differ. Equ. Appl. 28, 22 (2021). https://doi.org/10.1007/s00030-021-00680-0