1 Introduction and aim of this note

The main purpose of this note is to establish well-posedness for first-order nonlinear partial differential equations of Hamilton–Jacobi–Bellman type on subsets E of \({\mathbb {R}}^d\),

$$\begin{aligned} u(x)-\lambda \,{\mathcal {H}}\left( x,\nabla u(x)\right) = h(x),\quad x \in E\subseteq {\mathbb {R}}^d, \end{aligned}$$
(HJB)

in a context without boundary conditions, where the Hamiltonian flow generated by \({\mathcal {H}}\) remains inside E. In (HJB), \(\lambda > 0\) is a scalar and h is a continuous and bounded function. The Hamiltonian \({\mathcal {H}}:E\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}\) is given by

$$\begin{aligned} {\mathcal {H}}(x,p) = \sup _{\theta \in \Theta }\left[ \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta )\right] , \end{aligned}$$
(1.1)

where \(\theta \in \Theta \) plays the role of a control variable. For fixed \(\theta \), the function \(\Lambda \) can be interpreted as a Hamiltonian itself; we call it the internal Hamiltonian. The function \({\mathcal {I}}\) can be interpreted as the cost of applying the control \(\theta \).
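To fix ideas, here is a minimal numerical sketch of the variational structure (1.1) in dimension \(d = 1\); the choices of \(\Lambda \), \({\mathcal {I}}\) and the discretization grid are hypothetical toy inputs, not taken from this note.

```python
import numpy as np

# Toy sketch of the variational structure (1.1) in dimension d = 1:
# H(x, p) = sup_theta [ Lambda(x, p, theta) - I(x, theta) ],
# with hypothetical choices of Lambda and I and a discretized control set.

def Lambda(x, p, theta):
    # Internal Hamiltonian: convex in p with Lambda(x, 0, theta) = 0.
    return theta * (np.exp(p) - 1.0)

def cost_I(x, theta):
    # Cost of applying the control: nonnegative, vanishing at theta = 1 + x**2.
    return (theta - (1.0 + x ** 2)) ** 2

def H(x, p, thetas=np.linspace(0.0, 5.0, 501)):
    # Supremum over the (discretized) control space Theta.
    return np.max(Lambda(x, p, thetas) - cost_I(x, thetas))

print(H(0.5, 1.0))  # one evaluation of the variational Hamiltonian
```

All assumptions below are organized around exactly this "internal Hamiltonian minus cost" pattern.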

The main result of this paper is the comparison principle for (HJB), which establishes uniqueness of viscosity solutions. The standard assumption in the literature that allows one to obtain the comparison principle in the context of optimal control problems (e.g. [2] for the first order case and [10] for the second order case) is that either there is a modulus of continuity \(\omega \) such that

$$\begin{aligned} |{\mathcal {H}}(x,p)-{\mathcal {H}}(y,p)| \le \omega \left( |x-y|(1+|p|)\right) , \end{aligned}$$
(1.2)

or that \({\mathcal {H}}\) is uniformly coercive:

$$\begin{aligned} \lim _{|p| \rightarrow \infty } \inf _x {\mathcal {H}}(x,p) = \infty . \end{aligned}$$
(1.3)

More generally, the two estimates (1.2) and (1.3) can be combined in a single estimate, called pseudo-coercivity, see [4, (H4), Page 34], which uses the fact that the sub- and supersolution properties roughly imply that the estimate (1.2) only needs to hold for appropriately chosen x, y and p such that \({\mathcal {H}}\) is finite uniformly over these chosen x, y, p.

In the Hamilton–Jacobi–Bellman context, the comparison principle is typically obtained by translating (1.2) into conditions for \(\Lambda \) and \({\mathcal {I}}\) of (1.1), which include (e.g. [2, Chapter III])

  1. (I)

    \(|\Lambda (x,p,\theta )-\Lambda (y,p,\theta )|\le \omega _\Lambda (|x-y|(1+|p|))\), uniformly in \(\theta \), and

  2. (II)

    \({\mathcal {I}}\) is bounded, continuous and \(|{\mathcal {I}}(x,\theta ) - {\mathcal {I}}(y,\theta )| \le \omega _{\mathcal {I}}(|x-y|)\) for all \(\theta \).

The pseudo-coercivity property is harder to translate, as control on \({\mathcal {H}}\) does not necessarily imply the same control on \(\Lambda \), in particular when \({\mathcal {I}}\) is unbounded. We return to this issue below.

The estimates (I) and (II) are not satisfied for Hamiltonians arising from natural examples in the theory of large deviations [12, 13] for Markov processes with two scales (see e.g. [6, 18, 27, 29] for PDEs arising from large deviations with two scales, and [3, 16, 17, 20, 21] for other works connecting PDEs with large deviations). Indeed, in [6] the authors mention that well-posedness of the Hamilton–Jacobi–Bellman equation for examples arising from large deviation theory is an open problem. Recent generalizations of the coercivity condition, see e.g. [9], also do not cover these examples.

In the large deviation context, however, we typically know that we have the comparison principle for the Hamilton–Jacobi equation in terms of \(\Lambda \). In addition, even though \({\mathcal {I}}\) might be discontinuous, we do have other types of regularity for the functional \({\mathcal {I}}\), see e.g. [32]. Thus, we aim to prove a comparison principle for (HJB) on the basis of the assumption that we have the following natural relaxations of (or the pseudo-coercive version of) (I) and (II).

  1. (i)

    For \(\theta \in \Theta \), define the Hamiltonian \({\mathcal {H}}_\theta (x,p):= \Lambda (x,p,\theta )\). We have an estimate on \({\mathcal {H}}_\theta \) that is uniform over \(\theta \) in compact sets \(K \subseteq \Theta \). This estimate, for one fixed \(\theta \), is in spirit similar to the pseudo-coercivity estimate of [4] and is morally equivalent to the comparison principle for \({\mathcal {H}}_\theta \). The uniformity is made rigorous as the continuity estimate in Assumption 2.14 (\(\Lambda 5\)) below.

  2. (ii)

    The cost functional \({\mathcal {I}}(x,\theta )\) satisfies an equi-continuity estimate of the type \(|{\mathcal {I}}(x,\theta ) - {\mathcal {I}}(y,\theta )| \le \omega _{{\mathcal {I}},C}(|x-y|)\) on sublevel sets \(\{{\mathcal {I}} \le C\}\) which we assume to be compact. This estimate is made rigorous in Assumption 2.15 (\({\mathcal {I}}5\)) below.

To work with these relaxations, we introduce a procedure that allows us to restrict our analysis to compact sets in the space of controls. In the proof of the comparison principle, the sub- and supersolution properties give boundedness of \({\mathcal {H}}\) when evaluated in optimizing points. We then translate this boundedness to boundedness of \({\mathcal {I}}\), which implies that the controls lie in a compact set.

The transfer of control builds upon (i) for \(\Lambda (x,p,\theta _{x}^0)\) when we use a control \(\theta _{x}^0\) that satisfies \({\mathcal {I}}(x,\theta _{x}^0) = 0\). This we call the bootstrap procedure: we use the comparison principle for the Hamilton–Jacobi equation in terms of \(\Lambda (x,p,\theta _{x}^0)\) to shift the control on \({\mathcal {H}}\) to control on \(\Lambda \) and \({\mathcal {I}}\) for general \(\theta \). That way the comparison principle for the internal Hamiltonian \(\Lambda \) bootstraps to the comparison principle for the full Hamiltonian \({\mathcal {H}}\).

Clearly, this bootstrap argument does not come for free. We pose four additional assumptions:

  1. (iii)

    The function \(\Lambda \) grows roughly equally fast in p: For all compact sets \({\widehat{K}} \subseteq E\), there are constants \(M,C_1,C_2\) such that

    $$\begin{aligned} \Lambda (x,p,\theta _1) \le \max \left\{ M,C_1 \Lambda (x,p,\theta _2) + C_2\right\} , \end{aligned}$$

    for all \(x\in {\widehat{K}}\), \(p \in {\mathbb {R}}^d, \, \theta _1,\theta _2 \in \Theta \).

  2. (iv)

    The function \({\mathcal {I}}\) grows roughly equally fast in x: For all \(x \in E\) and \(M \ge 0\) there exists an open neighbourhood U of x and constants \(M',C_1',C_2'\) such that

    $$\begin{aligned} {\mathcal {I}}(y_1,\theta ) \le \max \{M',C_1' {\mathcal {I}}(y_2,\theta ) + C_2'\} \end{aligned}$$

    for all \(y_1,y_2 \in U\) and for all \(\theta \) such that \({\mathcal {I}}(x,\theta ) \le M\).

  3. (v)

    \({\mathcal {I}}\ge 0\) and for each \(x \in E\), there exists \(\theta _{x}^0\) such that \({\mathcal {I}}(x,\theta _x^0) = 0\).

  4. (vi)

    The functional \({\mathcal {I}}\) is equi-coercive in x: for any compact set \({\hat{K}} \subseteq E\) and any constant \(C \ge 0\), the set \(\bigcup _{x \in {\hat{K}}} \{\theta \, | \, {\mathcal {I}}(x,\theta ) \le C\}\) is compact.

These four assumptions are stated below as Assumptions 2.14 (\(\Lambda 4\)), 2.15 (\({\mathcal {I}}4\)), 2.15 (\({\mathcal {I}}2\)), and 2.15 (\({\mathcal {I}}3\)). To explain our argument in more detail, we give a sketch of the bootstrap procedure, which can be skipped on first reading. In this sketch, we refrain from performing the localization arguments that are needed for non-compact E.
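As a hedged illustration of (iii) and (v) (with toy choices not taken from this note), suppose \(\Lambda (x,p,\theta ) = \theta (e^p - 1)\) in dimension one with compact control set \(\Theta = [\theta _{\min },\theta _{\max }] \subset (0,\infty )\). Since \(e^p - 1\) has a sign independent of \(\theta \), for all \(\theta _1,\theta _2 \in \Theta \)

$$\begin{aligned} \Lambda (x,p,\theta _1) = \frac{\theta _1}{\theta _2}\, \Lambda (x,p,\theta _2) \le \max \left\{ 0, \frac{\theta _{\max }}{\theta _{\min }}\, \Lambda (x,p,\theta _2)\right\} , \end{aligned}$$

so (iii) holds with \(M = 0\), \(C_1 = \theta _{\max }/\theta _{\min }\) and \(C_2 = 0\); if moreover \({\mathcal {I}}(x,\theta ) = (\theta - \theta ^0_x)^2\) for some \(\theta ^0_x \in \Theta \), then (v) holds as well.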

Sketch of the bootstrap argument

Let u and v be a sub- and supersolution to \(f - \lambda Hf = h\) respectively. We estimate \(\sup _x u(x) - v(x)\) by the classical doubling of variables, penalizing the distance between x and y by some penalization \(\alpha \Psi (x-y)\), and aim to send \(\alpha \rightarrow \infty \). Let \(x_\alpha ,y_\alpha \) denote the optimizers, and denote by \(p_\alpha \) the corresponding momentum \(p_\alpha = \alpha \partial _x \Psi (x_\alpha -y_\alpha )\). Let \(\theta _\alpha \) be a control such that \({\mathcal {H}}(x_\alpha ,p_\alpha ) = \Lambda (x_\alpha ,p_\alpha ,\theta _\alpha )-{\mathcal {I}}(x_\alpha ,\theta _\alpha )\) and let \(\theta _{\alpha }^0\) be a control such that \({\mathcal {I}}(y_\alpha ,\theta _{\alpha }^0) = 0\), which exists due to (v).

The supersolution property for v yields the following estimate that is uniform in \(\alpha > 0\)

$$\begin{aligned} \infty > \frac{\left| \! \left| v - h\right| \! \right| }{\lambda } \ge {\mathcal {H}}(y_\alpha ,p_\alpha ) \ge \Lambda (y_\alpha ,p_\alpha ,\theta _{\alpha }^0) - {\mathcal {I}}(y_\alpha , \theta _{\alpha }^0) = \Lambda (y_\alpha ,p_\alpha ,\theta _{\alpha }^0). \end{aligned}$$
(1.4)

Using (iii), we obtain a uniform estimate in \(\alpha \):

$$\begin{aligned} \sup _\alpha \Lambda (y_\alpha ,p_\alpha ,\theta _\alpha ) < \infty , \end{aligned}$$
(1.5)

which will allow us to use (i) if we can show that the controls \(\theta _\alpha \) take their values in a compact set \(K \subseteq \Theta \). For this, it suffices by (vi) to establish

$$\begin{aligned} \sup _\alpha {\mathcal {I}}(x_\alpha ,\theta _\alpha ) < \infty . \end{aligned}$$
(1.6)

This, in fact, implies by (iv) that

$$\begin{aligned} \sup _\alpha \left[ {\mathcal {I}}(y_\alpha ,\theta _\alpha ) \vee {\mathcal {I}}(x_\alpha ,\theta _\alpha ) \right] < \infty \end{aligned}$$

so that we can also apply (ii). This, in combination with the application of (i), establishes the comparison principle for \(f - \lambda Hf = h\).

We are thus left to prove (1.6), which is where our bootstrap comes into play. The subsolution property for u yields the following estimate that is uniform in \(\alpha > 0\)

$$\begin{aligned} - \infty < -\frac{\left| \! \left| u - h\right| \! \right| }{\lambda } \le {\mathcal {H}}(x_\alpha ,p_\alpha ) = \Lambda (x_\alpha ,p_\alpha ,\theta _\alpha ) - {\mathcal {I}}(x_\alpha ,\theta _\alpha ). \end{aligned}$$

Thus, (1.6) follows if we can establish

$$\begin{aligned} \sup _\alpha \Lambda (x_\alpha ,p_\alpha ,\theta _\alpha ) < \infty , \end{aligned}$$

which in turn (by (iii)) follows from

$$\begin{aligned} \sup _\alpha \Lambda (x_\alpha ,p_\alpha ,\theta _{\alpha }^0) < \infty . \end{aligned}$$

To establish this final estimate, note that

$$\begin{aligned} \Lambda (x_\alpha ,p_\alpha ,\theta _{\alpha }^0) = \Lambda (y_\alpha ,p_\alpha ,\theta _{\alpha }^0) + \left[ \Lambda (x_\alpha ,p_\alpha ,\theta _{\alpha }^0) - \Lambda (y_\alpha ,p_\alpha ,\theta _{\alpha }^0) \right] \end{aligned}$$

and that we have control on the first term by means of (1.4) and on the second term by the pseudo-coercivity estimate of (i) on \(\Lambda \) for the controls \(\theta _{\alpha }^0\), which lie in a compact set due to (vi). \(\square \)

Thus, to summarize, we use the growth conditions posed on \(\Lambda \) and \({\mathcal {I}}\) and the pseudo-coercivity estimate for \(\Lambda \) to transfer the control on the full Hamiltonian \({\mathcal {H}}\) to the functions \(\Lambda \) and the cost function \({\mathcal {I}}\). Then the control on \(\Lambda \) and \({\mathcal {I}}\) allows us to apply the estimates (i) and (ii) to obtain the comparison principle.

Next to our main result, we also state for completeness an existence result in Theorem 2.8. The viscosity solution will be given in terms of a discounted control problem, as is typical in the literature, see e.g. [2, Chapter 3]. Minor difficulties arise from working with Hamiltonians \({\mathcal {H}}\) that stem from irregular \({\mathcal {I}}\).

Finally, we show that the conditions (i) to (vi) are satisfied in two examples that arise from large deviation theory for two-scale processes. In our companion paper [26], we will use existence and uniqueness for (HJB) for these examples to obtain large deviation principles.

Illustration in the context of an example

As an illustrating example, we consider a Hamilton–Jacobi–Bellman equation that arises from the large deviations of the empirical measure-flux pair of weakly coupled Markov jump processes that are coupled to fast Brownian motion on the torus. We skip the probabilistic background of this problem (see [26]), and come to the set-up relevant for this paper.

Let \(G := \{1,\dots ,q\}\) be some finite set, and let \(\Gamma = \{(a,b) \in G^2 \, | \, a \ne b\}\) be the set of directed bonds. Let \(E := {\mathcal {P}}(G) \times [0,\infty )^\Gamma \), where \({\mathcal {P}}(G)\) is the set of probability measures on G. Let \(\Theta := {\mathcal {P}}(S^1)\) be the set of probability measures on the one-dimensional torus. We introduce \(\Lambda \) and \({\mathcal {I}}\).

  • Let \(r : G \times G \times {\mathcal {P}}(G) \times {\mathcal {P}}(S^1) \rightarrow [0,\infty )\) be some function that encodes the \({\mathcal {P}}(G) \times {\mathcal {P}}(S^1)\)-dependent jump rate of the Markov jump process over each bond \((a,b) \in \Gamma \); a small numerical sketch of \(\Lambda \) follows the list below. The internal Hamiltonian \(\Lambda \) is given by

    $$\begin{aligned} \Lambda (\mu ,p,\theta ) = \sum _{(a,b) \in \Gamma } \mu _a r(a,b,\mu ,\theta ) \left[ e^{p_b - p_a + p_{a,b}} - 1 \right] . \end{aligned}$$
  • Let \(\sigma ^2 : S^1 \times {\mathcal {P}}(G) \rightarrow (0,\infty )\) be a bounded and strictly positive function. The cost function \({\mathcal {I}}:E \times \Theta \rightarrow [0,\infty ]\) is given by

    $$\begin{aligned} {\mathcal {I}}(\mu ,w,\theta ) = {\mathcal {I}}(\mu ,\theta ) = \sup _{\begin{array}{c} u\in C^\infty (S^1)\\ u > 0 \end{array}} \int _{S^1} \sigma ^2(y,\mu ) \left( -\frac{u''(y)}{u(y)}\right) \,\theta ({\mathrm {d}}y). \end{aligned}$$
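As announced above, here is a minimal numerical sketch of this internal Hamiltonian on a two-state graph. The jump rate r and all parameter values are hypothetical stand-ins, and the dependence on \(\theta \in {\mathcal {P}}(S^1)\) is reduced to a scalar summary.

```python
import numpy as np

# Sketch (not from the paper) of the internal Hamiltonian above on G = {0, 1}:
# Lambda(mu, p, theta) = sum_{(a,b)} mu_a r(a,b,mu,theta) [exp(p_b - p_a + p_ab) - 1].

G = [0, 1]
BONDS = [(0, 1), (1, 0)]  # directed bonds (a, b) with a != b

def r(a, b, mu, theta_mean):
    # Hypothetical jump rate, depending on mu and a scalar summary of theta.
    return 1.0 + 0.5 * mu[b] + 0.1 * theta_mean

def Lambda(mu, p_states, p_bonds, theta_mean):
    return sum(
        mu[a] * r(a, b, mu, theta_mean)
        * (np.exp(p_states[b] - p_states[a] + p_bonds[(a, b)]) - 1.0)
        for (a, b) in BONDS
    )

mu = np.array([0.3, 0.7])                 # a point in P(G)
p_states = np.array([0.2, -0.1])          # momenta p_a, a in G
p_bonds = {(0, 1): 0.05, (1, 0): 0.0}     # momenta p_{a,b}, (a,b) in Gamma
print(Lambda(mu, p_states, p_bonds, theta_mean=0.5))
```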

Aiming for the comparison principle, we note that classical methods do not apply. The functions \(\Lambda \) are not coercive and do not satisfy (I). We show in “Appendix E” that they are also not pseudo-coercive as defined in [4]. The functional \({\mathcal {I}}\) is neither continuous nor bounded. One can check, e.g., that if \(\theta \) is a finite combination of Dirac measures, then \({\mathcal {I}}(\mu ,\theta ) = \infty \).

We show in Sect. 5, however, that (i) to (vi) hold, implying the comparison principle for the Hamilton–Jacobi–Bellman equations. The verification of these properties is based in part on results from [23, 32].

Summary and overview of the paper

To summarize, our novel bootstrap procedure allows us to treat Hamilton–Jacobi–Bellman equations where:

  • We assume that the cost function \({\mathcal {I}}\) satisfies some regularity conditions on its sublevel sets, but allow \({\mathcal {I}}\) to be possibly unbounded and discontinuous.

  • We assume that \(\Lambda \) satisfies the continuity estimate uniformly for controls in compact sets, which in spirit extends the pseudo-coercivity estimate of [4]. In particular, \(\Lambda \) may be non-coercive, non-pseudo-coercive and non-Lipschitz, as exhibited in our example above.

In particular, allowing discontinuity in \({\mathcal {I}}\) allows us to treat the comparison principle for examples like the one we considered above, which so far have been out of reach. We believe that the bootstrap procedure we introduce in this note has the potential to also apply to second order equations or equations in infinite dimensions. Of interest would be, for example, an extension of the results of [10], who work with continuous \({\mathcal {I}}\). For clarity of exposition, and given the already numerous applications in this setting, we stick to the finite-dimensional first-order case. We think that the key arguments that are used in the proof in Sect. 3 do not depend in a crucial way on this restriction.

The paper is organized as follows. The main results are formulated in Sect. 2. In Sect. 3 we establish the comparison principle. In Sect. 4 we establish that a resolvent operator \(R(\lambda )\) in terms of an exponentially discounted control problem gives rise to viscosity solutions of the Hamilton–Jacobi–Bellman equation (HJB). Finally, in Sect. 5 we treat two examples including the one mentioned in the introduction.

2 Main results

In this section, we start with preliminaries in Sect. 2.1, which includes the definition of viscosity solutions and that of the comparison principle.

We proceed in Sect. 2.2 with the main results: a comparison principle for the Hamilton–Jacobi–Bellman equation (HJB) based on variational Hamiltonians of the form (1.1), and the existence of viscosity solutions. In Sect. 2.3 we collect all assumptions that are needed for the main results.

2.1 Preliminaries

For a Polish space \({\mathcal {X}}\) we denote by \(C({\mathcal {X}})\) and \(C_b({\mathcal {X}})\) the spaces of continuous and bounded continuous functions respectively. If \({\mathcal {X}}\subseteq {\mathbb {R}}^d\) then we denote by \(C_c^\infty ({\mathcal {X}})\) the space of smooth functions that vanish outside a compact set. We denote by \(C_{cc}^\infty ({\mathcal {X}})\) the set of smooth functions that are constant outside of a compact set in \({\mathcal {X}}\), and by \({\mathcal {P}}({\mathcal {X}})\) the space of probability measures on \({\mathcal {X}}\). We equip \({\mathcal {P}}({\mathcal {X}})\) with the weak topology induced by convergence of integrals against bounded continuous functions.

Throughout the paper, E will be the set on which we base our Hamilton–Jacobi equations. We assume that E is a subset of \({\mathbb {R}}^d\) that is a Polish space and that is contained in the \({\mathbb {R}}^d\)-closure of its \({\mathbb {R}}^d\)-interior. This ensures that gradients of functions are determined by their values on E. Note that we do not necessarily assume that E is open. We assume that the space of controls \(\Theta \) is Polish.

We next introduce viscosity solutions for the Hamilton–Jacobi equation with Hamiltonians like \({\mathcal {H}}(x,p)\) of our introduction.

Definition 2.1

(Viscosity solutions and comparison principle) Let \(A : {\mathcal {D}}(A) \subseteq C_b(E) \rightarrow C_b(E)\) be an operator with domain \({\mathcal {D}}(A)\), \(\lambda > 0\) and \(h \in C_b(E)\). Consider the Hamilton–Jacobi equation

$$\begin{aligned} f - \lambda A f = h. \end{aligned}$$
(2.1)

We say that u is a (viscosity) subsolution of equation (2.1) if u is bounded from above, upper semi-continuous, and if, for every \(f \in {\mathcal {D}}(A)\), there exists a sequence \(x_n \in E\) such that

$$\begin{aligned}&\lim _{n \uparrow \infty } u(x_n) - f(x_n) = \sup _x u(x) - f(x), \\&\limsup _{n \uparrow \infty } u(x_n) - \lambda A f(x_n) - h(x_n) \le 0. \end{aligned}$$

We say that v is a (viscosity) supersolution of Eq. (2.1) if v is bounded from below, lower semi-continuous, and if, for every \(f \in {\mathcal {D}}(A)\), there exists a sequence \(x_n \in E\) such that

$$\begin{aligned}&\lim _{n \uparrow \infty } v(x_n) - f(x_n) = \inf _x v(x) - f(x), \\&\liminf _{n \uparrow \infty } v(x_n) - \lambda Af(x_n) - h(x_n) \ge 0. \end{aligned}$$

We say that u is a (viscosity) solution of Eq. (2.1) if it is both a subsolution and a supersolution to (2.1). We say that (2.1) satisfies the comparison principle if for every subsolution u and supersolution v to (2.1), we have \(u \le v\).

Remark 2.2

(Uniqueness) If the comparison principle holds for (2.1) and u and v are two viscosity solutions of (2.1), then \(u\le v\) and \(v\le u\), giving uniqueness of viscosity solutions.

Remark 2.3

Consider the definition of subsolutions. Suppose that the test function \(f \in {\mathcal {D}}(A)\) has compact sublevel sets. Then, instead of working with a sequence \(x_n\), there exists \(x_0 \in E\) such that

$$\begin{aligned}&u(x_0) - f(x_0) = \sup _x u(x) - f(x), \\&u(x_0) - \lambda A f(x_0) - h(x_0) \le 0. \end{aligned}$$

A similar simplification holds in the case of supersolutions.

Remark 2.4

For an explanatory text on the notion of viscosity solutions and fields of applications, we refer to [8].

Remark 2.5

At present, we refrain from working with unbounded viscosity solutions as we use the upper bound on subsolutions and the lower bound on supersolutions in the proof of Theorem 2.6. We can, however, imagine that the methods presented in this paper can be generalized if u and v grow slower than the containment function \(\Upsilon \) that will be defined below in Definition 2.13.

2.2 Main results: comparison and existence

In this section, we state our main results: the comparison principle in Theorem 2.6, and existence of solutions in Theorem 2.8.

Consider the variational Hamiltonian \({\mathcal {H}}: E \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) given by

$$\begin{aligned} {\mathcal {H}}(x,p) = \sup _{\theta \in \Theta }\left[ \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta )\right] . \end{aligned}$$
(2.2)

The precise assumptions on the maps \(\Lambda \) and \({\mathcal {I}}\) are formulated in Sect. 2.3.

Theorem 2.6

(Comparison principle) Consider the map \({\mathcal {H}}: E \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) as in (2.2). Suppose that Assumptions 2.14 and 2.15 are satisfied for \(\Lambda \) and \({\mathcal {I}}\). Define the operator \({\mathbf {H}}f(x) := {\mathcal {H}}\left( x,\nabla f(x)\right) \) with domain \({\mathcal {D}}({\mathbf {H}}) = C_{cc}^\infty (E)\). Then:

  1. (a)

    For any \(f \in {\mathcal {D}}({\mathbf {H}})\) the map \(x\mapsto {\mathbf {H}}f(x)\) is continuous.

  2. (b)

    For any \(h \in C_b(E)\) and \(\lambda > 0\), the comparison principle holds for

    $$\begin{aligned} f - \lambda \, {\mathbf {H}}f = h. \end{aligned}$$
    (2.3)

Remark 2.7

(Domain) The comparison principle holds with any domain that satisfies \(C_{cc}^\infty (E)\subseteq {\mathcal {D}}({\mathbf {H}})\subseteq C^1_b(E)\). We state it with \(C^\infty _{cc}(E)\) to connect it with the existence result of Theorem 2.8, where we need to work with test functions whose gradients have compact support.

Consider the Legendre dual \({\mathcal {L}}: E \times {\mathbb {R}}^d \rightarrow [0,\infty ]\) of the Hamiltonian,

$$\begin{aligned} {\mathcal {L}}(x,v) := \sup _{p\in {\mathbb {R}}^d} \left[ \langle p,v\rangle - {\mathcal {H}}(x,p)\right] , \end{aligned}$$

and denote the collection of absolutely continuous paths in E by \({\mathcal {A}}{\mathcal {C}}\).
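For orientation, a standard worked instance (an illustration, not an assumption of the paper): for a quadratic Hamiltonian \({\mathcal {H}}(x,p) = \tfrac{1}{2}|p|^2 + \langle b(x),p\rangle \), the supremum is attained at \(p = v - b(x)\) and

$$\begin{aligned} {\mathcal {L}}(x,v) = \sup _{p\in {\mathbb {R}}^d} \left[ \langle p,v\rangle - \tfrac{1}{2}|p|^2 - \langle b(x),p\rangle \right] = \tfrac{1}{2}\left| v-b(x)\right| ^2. \end{aligned}$$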

Theorem 2.8

(Existence of viscosity solution) Consider \({\mathcal {H}}: E \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) as in (2.2). Suppose that Assumptions 2.14 and 2.15 are satisfied for \(\Lambda \) and \({\mathcal {I}}\), and that \({\mathcal {H}}\) satisfies Assumption 2.17. For each \(\lambda > 0\), let \(R(\lambda )\) be the operator

$$\begin{aligned} R(\lambda ) h(x) = \sup _{\begin{array}{c} \gamma \in {\mathcal {A}}{\mathcal {C}}\\ \gamma (0) = x \end{array}} \int _0^\infty \lambda ^{-1} e^{-\lambda ^{-1}t} \left[ h(\gamma (t)) - \int _0^t {\mathcal {L}}(\gamma (s),{\dot{\gamma }}(s))\, {\mathrm {d}}s\right] \, {\mathrm {d}}t. \end{aligned}$$

Then \(R(\lambda )h\) is the unique viscosity solution to \(f - \lambda {\mathbf {H}}f = h\).

Remark 2.9

The form of the solution is typical; see for example Section III.2 in [2]. It is the value function obtained by an optimization problem with exponentially discounted cost. The difficulty of the proof of Theorem 2.8 lies in treating the irregular form of \({\mathcal {H}}\).
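To make the object concrete, the following sketch (illustrative only, with hypothetical one-dimensional data) evaluates the exponentially discounted functional inside \(R(\lambda )h\) along one fixed path; the resolvent itself takes the supremum over all absolutely continuous paths started at x.

```python
import numpy as np

# Evaluate, for ONE fixed path gamma, the discounted functional
# int_0^infty (1/lam) e^{-t/lam} [ h(gamma(t)) - int_0^t L(gamma(s), gamma'(s)) ds ] dt.

def discounted_value(h, L, gamma, dgamma, lam, T=50.0, n=5000):
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    # Riemann-sum approximation of the running cost int_0^t L ds
    running_cost = np.cumsum(L(gamma(t), dgamma(t))) * dt
    integrand = (1.0 / lam) * np.exp(-t / lam) * (h(gamma(t)) - running_cost)
    return np.sum(integrand) * dt

h = lambda x: np.tanh(x)          # a bounded continuous h
L = lambda x, v: 0.5 * v ** 2     # a hypothetical quadratic Lagrangian
gamma = lambda t: np.exp(-t)      # a path started at x = 1
dgamma = lambda t: -np.exp(-t)    # its velocity

print(discounted_value(h, L, gamma, dgamma, lam=1.0))
```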

2.3 Assumptions

In this section, we formulate and comment on the assumptions imposed on the Hamiltonians defined in the previous sections. The key assumptions were already mentioned in the sketch of the bootstrap method in the introduction. To these, we add minor additional assumptions on the regularity of \(\Lambda \) and \({\mathcal {I}}\) in Assumptions 2.14 and 2.15. Finally, Assumption 2.17 will imply that even if E has a boundary, no boundary conditions are necessary for the construction of the viscosity solution.

We start with the continuity estimate for \(\Lambda \), which was briefly discussed in (i) in the introduction. To that end, we first introduce a function that is used in the typical argument that doubles the number of variables.

Definition 2.10

(Penalization function) We say that \(\Psi : E^2 \rightarrow [0,\infty )\) is a penalization function if \(\Psi \in C^1(E^2)\) and if \(\Psi (x,y) = 0\) if and only if \(x = y\).

We will apply the definition below for \({\mathcal {G}}= \Lambda \).

Definition 2.11

(Continuity estimate) Let \(\Psi \) be a penalization function and let \({\mathcal {G}}: E \times {\mathbb {R}}^d\times \Theta \rightarrow {\mathbb {R}}\), \((x,p,\theta )\mapsto {\mathcal {G}}(x,p,\theta )\) be a function. Suppose that for each \(\varepsilon > 0\), there is a sequence of positive real numbers \(\alpha \rightarrow \infty \). For the sake of readability, we suppress the dependence on \(\varepsilon \) in our notation.

Suppose that for each \(\varepsilon \) and \(\alpha \) we have variables \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\) in \(E^2\) and variables \(\theta _{\varepsilon ,\alpha }\) in \(\Theta \). We say that this collection is fundamental for \({\mathcal {G}}\) with respect to \(\Psi \) if:

  1. (C1)

    For each \(\varepsilon \), there are compact sets \(K_\varepsilon \subseteq E\) and \({\widehat{K}}_\varepsilon \subseteq \Theta \) such that for all \(\alpha \) we have \(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha } \in K_\varepsilon \) and \(\theta _{\varepsilon ,\alpha }\in {\widehat{K}}_\varepsilon \).

  2. (C2)

    For each \(\varepsilon > 0\), we have \(\lim _{\alpha \rightarrow \infty } \alpha \Psi (x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha }) = 0\). For any limit point \((x_\varepsilon ,y_\varepsilon )\) of \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\), we have \(\Psi (x_\varepsilon ,y_\varepsilon ) = 0\).

  3. (C3)

    We have for all \(\varepsilon > 0\)

    $$\begin{aligned}&\sup _{\alpha } {\mathcal {G}}\left( y_{\varepsilon ,\alpha }, - \alpha (\nabla \Psi (x_{\varepsilon ,\alpha },\cdot ))(y_{\varepsilon ,\alpha }),\theta _{\varepsilon ,\alpha }\right) < \infty , \end{aligned}$$
    (2.4)
    $$\begin{aligned}&\inf _\alpha {\mathcal {G}}\left( x_{\varepsilon ,\alpha }, \alpha (\nabla \Psi (\cdot ,y_{\varepsilon ,\alpha }))(x_{\varepsilon ,\alpha }),\theta _{\varepsilon ,\alpha }\right) > - \infty . \end{aligned}$$
    (2.5)

We say that \({\mathcal {G}}\) satisfies the continuity estimate if for every fundamental collection of variables we have for each \(\varepsilon > 0\) that

$$\begin{aligned}&\liminf _{\alpha \rightarrow \infty } {\mathcal {G}}\left( x_{\varepsilon ,\alpha }, \alpha (\nabla \Psi (\cdot ,y_{\varepsilon ,\alpha }))(x_{\varepsilon ,\alpha }),\theta _{\varepsilon ,\alpha }\right) \nonumber \\&\quad - {\mathcal {G}}\left( y_{\varepsilon ,\alpha }, - \alpha (\nabla \Psi (x_{\varepsilon ,\alpha },\cdot ))(y_{\varepsilon ,\alpha }),\theta _{\varepsilon ,\alpha }\right) \le 0. \end{aligned}$$
(2.6)

Remark 2.12

In “Appendix C”, we state a slightly more general continuity estimate on the basis of two penalization functions. A proof of a comparison principle on the basis of two penalization functions was given in [23].

The continuity estimate is indeed exactly the estimate that one would perform when proving the comparison principle for the Hamilton–Jacobi equation in terms of the internal Hamiltonian (disregarding the control \(\theta \)). Typically, the control on \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\) that is assumed in (C1) and (C2) is obtained from choosing \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\) as optimizers in the doubling of variables procedure (see Lemma 3.5), and the control that is assumed in (C3) is obtained by using the viscosity sub- and supersolution properties in the proof of the comparison principle. The required restriction to compact sets in Lemma 3.5 is obtained by including in the test functions a containment function.

Definition 2.13

(Containment function) We say that a function \(\Upsilon : E \rightarrow [0,\infty )\) is a containment function for \(\Lambda \) if \(\Upsilon \in C^1(E)\) and there is a constant \(C_\Upsilon \) such that

  • For every \(c \ge 0\), the set \(\{x \, | \, \Upsilon (x) \le c\}\) is compact;

  • We have \(\sup _\theta \sup _x \Lambda \left( x,\nabla \Upsilon (x),\theta \right) \le C_\Upsilon \).
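For orientation, an illustration under hypothetical assumptions (not those of the paper): if \(E = {\mathbb {R}}^d\) and \(\Lambda (x,p,\theta ) = \langle b(x,\theta ),p \rangle + |p|^2\) with \(b_\infty := \sup _{x,\theta } |b(x,\theta )| < \infty \), then \(\Upsilon (x) = \tfrac{1}{2} \log (1+|x|^2)\) is a containment function: its sublevel sets are compact and, since \(|\nabla \Upsilon (x)| = |x|/(1+|x|^2) \le \tfrac{1}{2}\),

$$\begin{aligned} \sup _\theta \sup _x \Lambda \left( x,\nabla \Upsilon (x),\theta \right) \le \frac{b_\infty }{2} + \frac{1}{4} =: C_\Upsilon . \end{aligned}$$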

To conclude, our assumption on \(\Lambda \) contains the continuity estimate, the controlled growth, the existence of a containment function and two regularity properties.

Assumption 2.14

The function \(\Lambda :E\times {\mathbb {R}}^d\times \Theta \rightarrow {\mathbb {R}}\) in the Hamiltonian (2.2) satisfies the following.

(\(\Lambda 1\)):

The map \(\Lambda : E\times {\mathbb {R}}^d\times \Theta \rightarrow {\mathbb {R}}\) is continuous.

(\(\Lambda 2\)):

For any \(x\in E\) and \(\theta \in \Theta \), the map \(p\mapsto \Lambda (x,p,\theta )\) is convex. We have \(\Lambda (x,0,\theta ) = 0\) for all \(x\in E\) and all \(\theta \in \Theta \).

(\(\Lambda 3\)):

There exists a containment function \(\Upsilon : E \rightarrow [0,\infty )\) for \(\Lambda \) in the sense of Definition 2.13.

(\(\Lambda 4\)):

For every compact set \(K \subseteq E\), there exist constants \(M, C_1, C_2 \ge 0\) such that for all \(x \in K\), \(p \in {\mathbb {R}}^d\) and all \(\theta _1,\theta _2\in \Theta \), we have

$$\begin{aligned} \Lambda (x,p,\theta _1) \le \max \left\{ M,C_1 \Lambda (x,p,\theta _2) + C_2\right\} . \end{aligned}$$
(\(\Lambda 5\)):

The function \(\Lambda \) satisfies the continuity estimate in the sense of Definition 2.11, or in the extended sense of Definition C.2.

Our second main assumption is on the properties of \({\mathcal {I}}\). For a compact set \(K\subseteq E\) and a constant \(M\ge 0\), write

$$\begin{aligned} \Theta _{K,M}:= \bigcup _{x \in K} \left\{ \theta \in \Theta \, \big | \, {\mathcal {I}}(x,\theta ) \le M \right\} , \end{aligned}$$
(2.7)

and

$$\begin{aligned} \Omega _{K,M}:= \bigcap _{x \in K} \left\{ \theta \in \Theta \, \big | \, {\mathcal {I}}(x,\theta ) \le M \right\} . \end{aligned}$$
(2.8)

Assumption 2.15

The functional \({\mathcal {I}}:E\times \Theta \rightarrow [0,\infty ]\) in (2.2) satisfies the following.

(\({\mathcal {I}}1\)):

The map \((x,\theta ) \mapsto {\mathcal {I}}(x,\theta )\) is lower semi-continuous on \(E \times \Theta \).

(\({\mathcal {I}}2\)):

For any \(x\in E\), there exists a control \(\theta _{x}^0 \in \Theta \) such that \({\mathcal {I}}(x,\theta _{x}^0) = 0\).

(\({\mathcal {I}}3\)):

For any compact set \(K \subseteq E\) and constant \(M \ge 0\) the set \(\Theta _{K,M}\) is compact.

(\({\mathcal {I}}4\)):

For each \(x \in E\) and constant \(M \ge 0\), there exists an open neighbourhood \(U \subseteq E\) of x and constants \(M',C_1',C_2' \ge 0\) such that for all \(y_1,y_2 \in U\) and \(\theta \in \Theta _{\{x\},M}\) we have

$$\begin{aligned} {\mathcal {I}}(y_1,\theta ) \le \max \left\{ M', C_1'{\mathcal {I}}(y_2,\theta ) + C_2' \right\} . \end{aligned}$$
(\({\mathcal {I}}5\)):

For every compact set \(K \subseteq E\) and each \(M \ge 0\) the collection of functions \(\{{\mathcal {I}}(\cdot ,\theta )\}_{\theta \in \Omega _{K,M}}\) is equicontinuous. That is: for all \(\varepsilon > 0\), there is a \(\delta > 0\) such that for all \(\theta \in \Omega _{K,M}\) and \(x,y \in K\) such that \(d(x,y) \le \delta \) we have \(|{\mathcal {I}}(x,\theta ) - {\mathcal {I}}(y,\theta )| \le \varepsilon \).
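As a hedged toy illustration of (\({\mathcal {I}}2\))–(\({\mathcal {I}}4\)) (not an example from the paper): for \(\Theta = {\mathbb {R}}\) and \({\mathcal {I}}(x,\theta ) = (\theta - \theta ^0(x))^2\) with \(\theta ^0\) continuous, (\({\mathcal {I}}2\)) holds with \(\theta ^0_x = \theta ^0(x)\), the sets \(\Theta _{K,M}\) are closed and bounded so (\({\mathcal {I}}3\)) holds, and by \((a+b)^2 \le 2a^2 + 2b^2\),

$$\begin{aligned} {\mathcal {I}}(y_1,\theta ) \le 2\,{\mathcal {I}}(y_2,\theta ) + 2 \sup _{z_1,z_2 \in U} \left( \theta ^0(z_1) - \theta ^0(z_2)\right) ^2, \end{aligned}$$

so (\({\mathcal {I}}4\)) holds for any bounded neighbourhood U with \(C_1' = 2\) and \(C_2'\) the second term above.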

To establish the existence of viscosity solutions, we will impose one additional assumption. For a general convex functional \(p \mapsto \Phi (p)\) we denote

$$\begin{aligned} \partial _p \Phi (p_0) := \left\{ \xi \in {\mathbb {R}}^d \,:\, \Phi (p) \ge \Phi (p_0) + \xi \cdot (p-p_0) \quad (\forall p \in {\mathbb {R}}^d) \right\} . \end{aligned}$$
(2.9)

Definition 2.16

The tangent cone (sometimes also called the Bouligand contingent cone) to E in \({\mathbb {R}}^d\) at x is

$$\begin{aligned} T_E(x) := \left\{ z \in {\mathbb {R}}^d \, \big | \, \liminf _{\lambda \downarrow 0} \frac{d(x + \lambda z, E)}{\lambda } = 0\right\} . \end{aligned}$$

Assumption 2.17

The set E is closed and convex. The map \(\Lambda \) is such that \(\partial _p \Lambda (x,p,\theta ) \subseteq T_E(x)\) for all \(x \in E\), \(p \in {\mathbb {R}}^d\) and \(\theta \in \Theta \).

In Lemma 4.1 we will show that the assumption implies that \(\partial _p {\mathcal {H}}(x,p) \subseteq T_E(x)\), which in turn implies that the solutions of the differential inclusion in terms of \(\partial _p {\mathcal {H}}(x,p)\) remain inside E. Motivated by our examples, we work with closed convex domains E. While in this context we can apply results from e.g. Deimling [11], we believe that similar results can be obtained in different contexts.

Remark 2.18

The statement that \(\partial _p {\mathcal {H}}(x,p) \subseteq T_E(x)\) is intuitively implied by the comparison principle for \({\mathbf {H}}\) and therefore, we expect it to hold in any setting for which Theorem 2.6 holds. Here, we argue in a simple case why this is to be expected. First of all, note that the comparison principle for \({\mathbf {H}}\) builds upon the maximum principle. Suppose that \(E = [0,1]\), \(f,g \in C^1_b(E)\) and suppose that \(f(0) - g(0) = \sup _x f(x) - g(x)\). As \(x=0\) is a boundary point, we conclude that \(f'(0) \le g'(0)\). If indeed the maximum principle holds, we must have

$$\begin{aligned} {\mathcal {H}}(0,f'(0)) = Hf(0) \le Hg(0) = {\mathcal {H}}(0,g'(0)) \end{aligned}$$

implying that \(p \mapsto {\mathcal {H}}(0,p)\) is increasing, in other words

$$\begin{aligned} \partial _p {\mathcal {H}}(0,p) \subseteq [0,\infty ) = T_{[0,1]}(0). \end{aligned}$$

3 The comparison principle

In this section, we establish Theorem 2.6. To establish the comparison principle for \(f - \lambda {\mathbf {H}}f = h\) we use the bootstrap method explained in the introduction. We start with a classical localization argument.

We carry out the localization argument by absorbing the containment function \(\Upsilon \) from Assumption 2.14 (\(\Lambda 3\)) into the test functions. This leads to two new operators, \(H_\dagger \) and \(H_\ddagger \), that serve as an upper bound and a lower bound for the true \({\mathbf {H}}\). We will then show the comparison principle for the Hamilton–Jacobi equation in terms of these two new operators. We therefore have to extend our notion of Hamilton–Jacobi equations and the comparison principle. This extension of the definition is standard, but we include it for completeness in the appendix as Definition A.1.

This procedure allows us to clearly separate the reduction to compact sets on one hand, and the proof of the comparison principle on the basis of the bootstrap procedure on the other. Schematically, we will establish the following diagram:

$$\begin{aligned} {\mathbf {H}} \xrightarrow [\text {sub}]{} H_\dagger , \qquad {\mathbf {H}} \xrightarrow [\text {super}]{} H_\ddagger . \end{aligned}$$

In this diagram, an arrow connecting an operator A with operator B with subscript 'sub' means that viscosity subsolutions of \(f - \lambda A f = h\) are also viscosity subsolutions of \(f - \lambda B f = h\). Similarly for arrows with subscript 'super'.

We introduce the operators \(H_\dagger \) and \(H_\ddagger \) in Sect. 3.1. The arrows will be established in Sect. 3.2. Finally, we will establish the comparison principle for \(H_\dagger \) and \(H_\ddagger \) in Sect. 3.3. Combined, these two results imply the comparison principle for \({\mathbf {H}}\).

Proof of Theorem 2.6

We start with the proof of (a). Let \(f \in {\mathcal {D}}({\mathbf {H}})\). Then \({\mathbf {H}}f\) is continuous since by Proposition B.3 in “Appendix B”, the Hamiltonian \({\mathcal {H}}\) is continuous.

We proceed with the proof of (b). Fix \(h_1,h_2 \in C_b(E)\) and \(\lambda > 0\).

Let \(u_1,u_2\) be a viscosity sub- and supersolution to \(f - \lambda {\mathbf {H}}f = h_1\) and \(f - \lambda {\mathbf {H}}f = h_2\) respectively. By Lemma 3.3 proven in Sect. 3.2, \(u_1\) and \(u_2\) are a sub- and supersolution to \(f - \lambda H_\dagger f = h_1\) and \(f - \lambda H_\ddagger f = h_2\) respectively. Thus \(\sup _E u_1 - u_2 \le \sup _E h_1 - h_2\) by Proposition 3.4 of Sect. 3.3. Specialising to \(h_1=h_2\) gives Theorem 2.6. \(\square \)

3.1 Definition of auxiliary operators

In this section, we repeat the definition of \({\mathbf {H}}\), and introduce the operators \(H_\dagger \) and \(H_\ddagger \).

Definition 3.1

The operator \({\mathbf {H}}\subseteq C_b^1(E) \times C_b(E)\) has domain \({\mathcal {D}}({\mathbf {H}}) = C_{cc}^\infty (E)\) and satisfies \({\mathbf {H}}f(x) = {\mathcal {H}}(x, \nabla f(x))\), where \({\mathcal {H}}\) is the map

$$\begin{aligned} {\mathcal {H}}(x,p) = \sup _{\theta \in \Theta }\left[ \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta )\right] . \end{aligned}$$

We proceed by introducing \(H_\dagger \) and \(H_\ddagger \). Recall Assumption (\(\Lambda 3\)) and the constant \(C_\Upsilon := \sup _{\theta }\sup _x \Lambda (x,\nabla \Upsilon (x),\theta )\) therein. Denote by \(C_\ell ^\infty (E)\) the set of smooth functions on E that have a lower bound and by \(C_u^\infty (E)\) the set of smooth functions on E that have an upper bound.

Definition 3.2

(The operators \(H_\dagger \) and \(H_\ddagger \)) For \(f \in C_\ell ^\infty (E)\) and \(\varepsilon \in (0,1)\) set

$$\begin{aligned} f^\varepsilon _\dagger&:= (1-\varepsilon ) f + \varepsilon \Upsilon , \\ H_{\dagger ,f}^\varepsilon (x)&:= (1-\varepsilon ) {\mathcal {H}}(x,\nabla f(x)) + \varepsilon C_\Upsilon , \end{aligned}$$

and set

$$\begin{aligned} H_\dagger := \left\{ (f^\varepsilon _\dagger ,H_{\dagger ,f}^\varepsilon ) \, \big | \, f \in C_\ell ^\infty (E), \varepsilon \in (0,1) \right\} . \end{aligned}$$

For \(f \in C_u^\infty (E)\) and \(\varepsilon \in (0,1)\) set

$$\begin{aligned} f^\varepsilon _\ddagger&:= (1+\varepsilon ) f - \varepsilon \Upsilon , \\ H_{\ddagger ,f}^\varepsilon (x)&:= (1+\varepsilon ) {\mathcal {H}}(x,\nabla f(x)) - \varepsilon C_\Upsilon , \end{aligned}$$

and set

$$\begin{aligned} H_\ddagger := \left\{ (f^\varepsilon _\ddagger ,H_{\ddagger ,f}^\varepsilon ) \, \big | \, f \in C_u^\infty (E), \varepsilon \in (0,1) \right\} . \end{aligned}$$

3.2 Preliminary results

The operator \({\mathbf {H}}\) is related to \(H_\dagger , H_\ddagger \) by the following lemma.

Lemma 3.3

Fix \(\lambda > 0\) and \(h \in C_b(E)\).

  1. (a)

    Every subsolution to \(f - \lambda {\mathbf {H}}f = h\) is also a subsolution to \(f - \lambda H_\dagger f = h\).

  2. (b)

    Every supersolution to \(f - \lambda {\mathbf {H}}f = h\) is also a supersolution to \(f-\lambda H_\ddagger f = h\).

We only prove (a) of Lemma 3.3, as the proof of (b) is analogous.

Proof

Fix \(\lambda > 0\) and \(h \in C_b(E)\). Let u be a subsolution to \(f - \lambda {\mathbf {H}}f = h\). We prove it is also a subsolution to \(f - \lambda H_\dagger f = h\).

Fix \(\varepsilon > 0 \) and \(f\in C_\ell ^\infty (E)\) and let \((f^\varepsilon _\dagger ,H^\varepsilon _{\dagger ,f}) \in H_\dagger \) as in Definition 3.2. We will prove that there are \(x_n\in E\) such that

$$\begin{aligned}&\lim _{n\rightarrow \infty }\left( u-f_\dagger ^\varepsilon \right) (x_n) = \sup _{x\in E}\left( u(x)-f_\dagger ^\varepsilon (x) \right) , \end{aligned}$$
(3.1)
$$\begin{aligned}&\limsup _{n\rightarrow \infty } \left[ u(x_n)-\lambda H_{\dagger ,f}^\varepsilon (x_n) - h(x_n)\right] \le 0. \end{aligned}$$
(3.2)

As the function \(\left[ u -(1-\varepsilon )f\right] \) is bounded from above and \(\varepsilon \Upsilon \) has compact sublevel sets, the sequence \(x_n\) along which the first limit is attained can be assumed to lie in the compact set

$$\begin{aligned} K := \left\{ x \, | \, \Upsilon (x) \le \varepsilon ^{-1} \sup _x \left( u(x) - (1-\varepsilon )f(x) \right) \right\} . \end{aligned}$$

Set \(M = \varepsilon ^{-1} \sup _x \left( u(x) - (1-\varepsilon )f(x) \right) \). Let \(\gamma : {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a smooth non-decreasing function such that

$$\begin{aligned} \gamma (r) = {\left\{ \begin{array}{ll} r &{} \text {if } r \le M, \\ M + 1 &{} \text {if } r \ge M+2. \end{array}\right. } \end{aligned}$$

Denote by \(f_\varepsilon \) the function on E defined by

$$\begin{aligned} f_\varepsilon (x) := \gamma \left( (1-\varepsilon )f(x) + \varepsilon \Upsilon (x) \right) . \end{aligned}$$

By construction \(f_\varepsilon \) is smooth and constant outside of a compact set and thus lies in \({\mathcal {D}}({\mathbf {H}}) = C_{cc}^\infty (E)\). As u is a viscosity subsolution for \(f - \lambda {\mathbf {H}}f = h\) there exists a sequence \(x_n \in K \subseteq E\) (by our choice of K) with

$$\begin{aligned}&\lim _n \left( u-f_\varepsilon \right) (x_n) = \sup _x \left( u(x)-f_\varepsilon (x)\right) , \end{aligned}$$
(3.3)
$$\begin{aligned}&\limsup _n \left[ u(x_n) - \lambda {\mathbf {H}} f_\varepsilon (x_n) - h(x_n)\right] \le 0. \end{aligned}$$
(3.4)

As \(f_\varepsilon \) equals \(f_\dagger ^\varepsilon \) on K, we have from (3.3) that also

$$\begin{aligned} \lim _n \left( u-f_\dagger ^\varepsilon \right) (x_n) = \sup _{x\in E}\left( u(x)-f_\dagger ^\varepsilon (x)\right) , \end{aligned}$$

establishing (3.1). Convexity of \(p \mapsto {\mathcal {H}}(x,p)\) yields for arbitrary points \(x\in K\) the estimate

$$\begin{aligned} {\mathbf {H}} f_\varepsilon (x)&= {\mathcal {H}}(x,\nabla f_\varepsilon (x)) \\&\le (1-\varepsilon ) {\mathcal {H}}(x,\nabla f(x)) + \varepsilon {\mathcal {H}}(x,\nabla \Upsilon (x)) \\&\le (1-\varepsilon ) {\mathcal {H}}(x,\nabla f(x)) + \varepsilon C_\Upsilon = H^\varepsilon _{\dagger ,f}(x). \end{aligned}$$

Combining this inequality with (3.4) yields

$$\begin{aligned}&\limsup _n \left[ u(x_n) - \lambda H^\varepsilon _{\dagger ,f}(x_n) - h(x_n)\right] \\&\quad \le \limsup _n \left[ u(x_n) - \lambda {\mathbf {H}} f_\varepsilon (x_n) - h(x_n)\right] \le 0, \end{aligned}$$

establishing (3.2). This concludes the proof. \(\square \)

3.3 The comparison principle

In this section, we prove the comparison principle for the operators \(H_\dagger \) and \(H_\ddagger \).

Proposition 3.4

Fix \(\lambda > 0\) and \(h_1,h_2 \in C_b(E)\). Let \(u_1\) be a viscosity subsolution to \(f - \lambda H_\dagger f = h_1\) and let \(u_2\) be a viscosity supersolution to \(f - \lambda H_\ddagger f = h_2\). Then we have \(\sup _x u_1(x) - u_2(x) \le \sup _x h_1(x) - h_2(x)\).

The proof uses a variant of a classical estimate that was proven e.g. in [8, Proposition 3.7] or in the present form in Proposition A.11 of [7].

Lemma 3.5

Let u be bounded and upper semi-continuous, let v be bounded and lower semi-continuous, let \(\Psi : E^2 \rightarrow [0,\infty )\) be a penalization function and let \(\Upsilon \) be a containment function.

Fix \(\varepsilon > 0\). For every \(\alpha >0\) there exist \(x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon } \in E\) such that

$$\begin{aligned}&\frac{u(x_{\alpha ,\varepsilon })}{1-\varepsilon } - \frac{v(y_{\alpha ,\varepsilon })}{1+\varepsilon } - \alpha \Psi (x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon }) - \frac{\varepsilon }{1-\varepsilon }\Upsilon (x_{\alpha ,\varepsilon }) -\frac{\varepsilon }{1+\varepsilon }\Upsilon (y_{\alpha ,\varepsilon }) \nonumber \\&\quad = \sup _{x,y \in E} \left\{ \frac{u(x)}{1-\varepsilon } - \frac{v(y)}{1+\varepsilon } - \alpha \Psi (x,y) - \frac{\varepsilon }{1-\varepsilon }\Upsilon (x) - \frac{\varepsilon }{1+\varepsilon }\Upsilon (y)\right\} .\nonumber \\ \end{aligned}$$
(3.5)

Additionally, for every \(\varepsilon > 0\) we have that

  1. (a)

    The set \(\{x_{\alpha ,\varepsilon }, y_{\alpha ,\varepsilon } \, | \, \alpha > 0\}\) is relatively compact in E.

  2. (b)

    All limit points of \(\{(x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon })\}_{\alpha > 0}\) as \(\alpha \rightarrow \infty \) are of the form (zz) and for these limit points we have \(u(z) - v(z) = \sup _{x \in E} \left\{ u(x) - v(x) \right\} \).

  3. (c)

    We have

    $$\begin{aligned} \lim _{\alpha \rightarrow \infty } \alpha \Psi (x_{\alpha ,\varepsilon },y_{\alpha ,\varepsilon }) = 0. \end{aligned}$$

Proof of Proposition 3.4

Fix \(\lambda >0\) and \(h_1,h_2 \in C_b(E)\). Let \(u_1\) be a viscosity subsolution and \(u_2\) be a viscosity supersolution of \(f - \lambda H_\dagger f = h_1\) and \(f - \lambda H_\ddagger f = h_2\) respectively. We prove Proposition 3.4 in five steps, of which the first two are classical.

We sketch the steps, before giving full proofs.

\(\underline{Step 1 }\): We prove that for \(\varepsilon > 0 \) and \(\alpha > 0\), there exist points \(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha } \in E\) satisfying the properties listed in Lemma 3.5 and momenta \(p_{\varepsilon ,\alpha }^1,p_{\varepsilon ,\alpha }^2 \in {\mathbb {R}}^d\) such that

$$\begin{aligned} p_{\varepsilon ,\alpha }^1 = \alpha \nabla \Psi (\cdot ,y_{\varepsilon ,\alpha })(x_{\varepsilon ,\alpha }), \qquad p_{\varepsilon ,\alpha }^2 = - \alpha \nabla \Psi (x_{\varepsilon ,\alpha },\cdot )(y_{\varepsilon ,\alpha }), \end{aligned}$$

and

$$\begin{aligned}&\sup _E(u_1-u_2) \le \lambda \liminf _{\varepsilon \rightarrow 0}\liminf _{\alpha \rightarrow \infty } \left[ {\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha }) - {\mathcal {H}}(y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha })\right] \nonumber \\&\quad + \sup _{E}(h_1 - h_2). \end{aligned}$$
(3.6)

This step is solely based on the sub- and supersolution properties of \(u_1,u_2\), the continuous differentiability of the penalization function \(\Psi (x,y)\), the containment function \(\Upsilon \), and convexity of \(p \mapsto {\mathcal {H}}(x,p)\). We conclude that it suffices to establish for each \(\varepsilon > 0\) that

$$\begin{aligned} \liminf _{\alpha \rightarrow \infty } {\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha }) - {\mathcal {H}}(y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha }) \le 0. \end{aligned}$$
(3.7)

\(\underline{Step 2 }:\) We will show that there are controls \(\theta _{\varepsilon ,\alpha }\) such that

$$\begin{aligned} {\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha }) = \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) - {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }). \end{aligned}$$
(3.8)

As a consequence we have

$$\begin{aligned}&{\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha })- {\mathcal {H}}(y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha }) \le \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })- \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\nonumber \\&\quad +{\mathcal {I}}(y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })- {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }). \end{aligned}$$
(3.9)

To establish (3.7), it is thus sufficient to bound the differences in (3.9) by using Assumptions 2.14 (\(\Lambda 5\)) and 2.15 (\({\mathcal {I}}5\)).

\(\underline{Step 3 }\): We verify the conditions to apply the continuity estimate, Assumption 2.14 (\(\Lambda 5\)).

The bootstrap argument allows us to find for each \(\varepsilon \) a subsequence \(\alpha = \alpha (\varepsilon ) \rightarrow \infty \) such that the variables \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\) are fundamental for \(\Lambda \) with respect to \(\Psi \) (see Definition 2.11).

\(\underline{Step 4 }:\) We verify the conditions to apply the estimate on \({\mathcal {I}}\), Assumption 2.15 (\({\mathcal {I}}5\)).

\(\underline{Step 5 }:\) Using the outcomes of Steps 3 and 4, we can apply the continuity estimate of Assumption 2.14 (\(\Lambda 5\)) and the equi-continuity of Assumption 2.15 (\({\mathcal {I}}5\)) to estimate (3.9) for any \(\varepsilon \):

$$\begin{aligned}&\liminf _{\alpha \rightarrow \infty } {\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha })- {\mathcal {H}}(y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha }) \nonumber \\&\quad \le \liminf _{\alpha \rightarrow \infty } \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })- \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) \nonumber \\&\qquad +{\mathcal {I}}(y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })- {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) \le 0, \end{aligned}$$
(3.10)

which establishes (3.7) and thus also the comparison principle.

We proceed with the proofs of the first four steps, as the fifth step is immediate.

\(\underline{Proof of Step 1 }\): The proof of this first step is classical. We include it for completeness. For any \(\varepsilon > 0\) and any \(\alpha > 0\), define the map \(\Phi _{\varepsilon ,\alpha }: E \times E \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} \Phi _{\varepsilon ,\alpha }(x,y) := \frac{u_1(x)}{1-\varepsilon } - \frac{u_2(y)}{1+\varepsilon } - \alpha \Psi (x,y) - \frac{\varepsilon }{1-\varepsilon } \Upsilon (x) - \frac{\varepsilon }{1+\varepsilon }\Upsilon (y). \end{aligned}$$

Let \(\varepsilon > 0\). By Lemma 3.5, there is a compact set \(K_\varepsilon \subseteq E\) and there exist points \(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha } \in K_\varepsilon \) such that

$$\begin{aligned} \Phi _{\varepsilon ,\alpha }(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha }) = \sup _{x,y \in E} \Phi _{\varepsilon ,\alpha }(x,y), \end{aligned}$$
(3.11)

and

$$\begin{aligned} \lim _{\alpha \rightarrow \infty } \alpha \Psi (x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha }) = 0. \end{aligned}$$
(3.12)

As in the proof of Proposition A.11 of [23], it follows that

$$\begin{aligned} \sup _E (u_1 - u_2) \le \liminf _{\varepsilon \rightarrow 0} \liminf _{\alpha \rightarrow \infty } \left[ \frac{u_1(x_{\varepsilon ,\alpha })}{1-\varepsilon } - \frac{u_2(y_{\varepsilon ,\alpha })}{1+\varepsilon }\right] . \end{aligned}$$
(3.13)

At this point, we want to use the sub- and supersolution properties of \(u_1\) and \(u_2\). Define the test functions \(\varphi ^{\varepsilon ,\alpha }_1 \in {\mathcal {D}}(H_\dagger ), \varphi ^{\varepsilon ,\alpha }_2 \in {\mathcal {D}}(H_\ddagger )\) by

$$\begin{aligned} \varphi ^{\varepsilon ,\alpha }_1(x)&:= (1-\varepsilon ) \left[ \frac{u_2(y_{\varepsilon ,\alpha })}{1+\varepsilon } + \alpha \Psi (x,y_{\varepsilon ,\alpha }) + \frac{\varepsilon }{1-\varepsilon }\Upsilon (x) + \frac{\varepsilon }{1+\varepsilon }\Upsilon (y_{\varepsilon ,\alpha })\right] \\&\quad + (1-\varepsilon )\left| x-x_{\varepsilon ,\alpha }\right| ^2, \\ \varphi ^{\varepsilon ,\alpha }_2(y)&:= (1+\varepsilon )\left[ \frac{u_1(x_{\varepsilon ,\alpha })}{1-\varepsilon } - \alpha \Psi (x_{\varepsilon ,\alpha },y) - \frac{\varepsilon }{1-\varepsilon }\Upsilon (x_{\varepsilon ,\alpha }) - \frac{\varepsilon }{1+\varepsilon }\Upsilon (y)\right] \\&\quad - (1+\varepsilon ) \left| y-y_{\varepsilon ,\alpha }\right| ^2. \end{aligned}$$

Using (3.11), we find that \(u_1 - \varphi ^{\varepsilon ,\alpha }_1\) attains its supremum at \(x = x_{\varepsilon ,\alpha }\), and thus

$$\begin{aligned} \sup _E (u_1-\varphi ^{\varepsilon ,\alpha }_1) = (u_1-\varphi ^{\varepsilon ,\alpha }_1)(x_{\varepsilon ,\alpha }). \end{aligned}$$

Denote \(p_{\varepsilon ,\alpha }^1 := \alpha \nabla _x \Psi (x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\). By our addition of the penalization \(\left| x-x_{\varepsilon ,\alpha }\right| ^2\) to the test function, the point \(x_{\varepsilon ,\alpha }\) is in fact the unique optimizer, and we obtain from the subsolution inequality that

$$\begin{aligned} u_1(x_{\varepsilon ,\alpha }) - \lambda \left[ (1-\varepsilon ) {\mathcal {H}}\left( x_{\varepsilon ,\alpha }, p_{\varepsilon ,\alpha }^1 \right) + \varepsilon C_\Upsilon \right] \le h_1(x_{\varepsilon ,\alpha }). \end{aligned}$$
(3.14)

With a similar argument for \(u_2\) and \(\varphi ^{\varepsilon ,\alpha }_2\), we obtain by the supersolution inequality that

$$\begin{aligned} u_2(y_{\varepsilon ,\alpha }) - \lambda \left[ (1+\varepsilon ){\mathcal {H}}\left( y_{\varepsilon ,\alpha }, p_{\varepsilon ,\alpha }^2 \right) - \varepsilon C_\Upsilon \right] \ge h_2(y_{\varepsilon ,\alpha }), \end{aligned}$$
(3.15)

where \(p_{\varepsilon ,\alpha }^2 := -\alpha \nabla _y \Psi (x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\). With that, estimating further in (3.13) leads to

$$\begin{aligned}&\sup _E(u_1-u_2) \le \liminf _{\varepsilon \rightarrow 0}\liminf _{\alpha \rightarrow \infty } \bigg [\frac{h_1(x_{\varepsilon ,\alpha })}{1-\varepsilon } - \frac{h_2(y_{\varepsilon ,\alpha })}{1+\varepsilon } + \frac{\varepsilon }{1-\varepsilon } C_\Upsilon \\&\quad + \frac{\varepsilon }{1+\varepsilon } C_\Upsilon + \lambda \left[ {\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha }) - {\mathcal {H}}(y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha })\right] \bigg ]. \end{aligned}$$

Thus, (3.6) in Step 1 follows.

\(\underline{Proof of Step 2 }\): Recall that \({\mathcal {H}}(x,p)\) is given by

$$\begin{aligned} {\mathcal {H}}(x,p) = \sup _{\theta \in \Theta }\left[ \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta )\right] . \end{aligned}$$

Since \(\Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\cdot ) : \Theta \rightarrow {\mathbb {R}}\) is continuous and bounded from above by (\(\Lambda 1\)) and (\(\Lambda 4\)), and \({\mathcal {I}}(x_{\varepsilon ,\alpha },\cdot ) : \Theta \rightarrow [0,\infty ]\) has compact sub-level sets in \(\Theta \) by (\({\mathcal {I}}3\)), there exists an optimizer \(\theta _{\varepsilon ,\alpha } \in \Theta \) such that

$$\begin{aligned} {\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha }) = \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) - {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }). \end{aligned}$$
(3.16)

Choosing the same control \(\theta _{\varepsilon ,\alpha }\) in the supremum defining the second term \({\mathcal {H}}(y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha })\), we obtain for all \(\varepsilon > 0\) and \(\alpha > 0\) the estimate

$$\begin{aligned}&{\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha })- {\mathcal {H}}(y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha }) \le \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })- \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\nonumber \\&\quad +{\mathcal {I}}(y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })- {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }). \end{aligned}$$
(3.17)

\(\underline{Proof of Step 3 }\): We will construct for each \(\varepsilon > 0\) a sequence \(\alpha = \alpha (\varepsilon ) \rightarrow \infty \) such that the collection \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\) is fundamental for \(\Lambda \) with respect to \(\Psi \) in the sense of Definition 2.11. We thus need to verify for each \(\varepsilon > 0\)

  1. (i)
    $$\begin{aligned} \inf _\alpha \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) > - \infty , \end{aligned}$$
    (3.18)
  2. (ii)
    $$\begin{aligned} \sup _{\alpha }\Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) < \infty \end{aligned}$$
    (3.19)
  3. (iii)

    The set of controls \(\theta _{\varepsilon ,\alpha }\) is relatively compact.

To prove (i), (ii) and (iii), we introduce auxiliary controls \(\theta _{\varepsilon ,\alpha }^0\), obtained by (\({\mathcal {I}}2\)), satisfying

$$\begin{aligned} {\mathcal {I}}(y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) = 0. \end{aligned}$$
(3.20)

We will first establish (i) and (ii) for all \(\alpha \). Then, for the proof of (iii), we will construct for each \(\varepsilon > 0\) a suitable subsequence \(\alpha \rightarrow \infty \).

\(\underline{Proof of Step 3, (i) and (ii) }:\)

We first establish (i). By the subsolution inequality (3.14),

$$\begin{aligned} \frac{1}{\lambda } \inf _E\left( u_1 - h_1\right)&\le (1-\varepsilon ) {\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha }) + \varepsilon C_{\Upsilon } \nonumber \\&\le (1-\varepsilon ) \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) + \varepsilon C_\Upsilon , \end{aligned}$$
(3.21)

and the lower bound (3.18) follows.

We next establish (ii). By the supersolution inequality (3.15), we can estimate

$$\begin{aligned} (1+\varepsilon ) \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0)&= (1+\varepsilon ) \left[ \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) - {\mathcal {I}}(y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0)\right] \\&\le \left( (1+\varepsilon ) {\mathcal {H}}\left( y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha }\right) - \varepsilon C_\Upsilon \right) + \varepsilon C_\Upsilon \\&\le \frac{1}{\lambda } \sup _E (u_2-h_2) + \varepsilon C_{\Upsilon } < \infty , \end{aligned}$$

and the upper bound (3.19) follows by Assumption 2.14 (\(\Lambda 4\)).

\(\underline{Proof of Step 3, (iii) }\): To prove (iii), it suffices by Assumption 2.15 (\({\mathcal {I}}3\)) to find for each \(\varepsilon > 0\) a subsequence \(\alpha \) such that

$$\begin{aligned} \sup _{\alpha } {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) < \infty . \end{aligned}$$
(3.22)

By (3.21), we have

$$\begin{aligned} \frac{1}{\lambda } \inf _E\left( u_1 - h\right)&\le (1-\varepsilon ) {\mathcal {H}}(x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha }) + \varepsilon C_{\Upsilon } \\&= (1-\varepsilon ) \left[ \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) - {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) \right] + \varepsilon C_{\Upsilon }. \end{aligned}$$

We conclude that \(\sup _\alpha {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) < \infty \) is implied by

$$\begin{aligned} \sup _{\alpha } \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) < \infty \end{aligned}$$

which by (\(\Lambda 4\)) is equivalent to

$$\begin{aligned} \sup _\alpha \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) < \infty . \end{aligned}$$
(3.23)

To perform this estimate, we first write

$$\begin{aligned}&\Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) \nonumber \\&\quad = \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) + \left[ \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) - \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0)\right] .\nonumber \\ \end{aligned}$$
(3.24)

To estimate the second term, we aim to apply the continuity estimate for the controls \(\theta _{\varepsilon ,\alpha }^0\). To do so, we must establish that \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0)\) is fundamental for \(\Lambda \) with respect to \(\Psi \). By Assumption 2.15 (\({\mathcal {I}}3\)), for each \(\varepsilon \) the set of controls \(\theta _{\varepsilon ,\alpha }^0\) is relatively compact. Thus it suffices to establish

$$\begin{aligned}&\inf _\alpha \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) > - \infty , \end{aligned}$$
(3.25)
$$\begin{aligned}&\sup _{\alpha }\Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) < \infty . \end{aligned}$$
(3.26)

These two estimates follow from Assumption 2.14 (\(\Lambda 4\)) together with (3.18) and (3.19).

The continuity estimate of Assumption 2.14 (\(\Lambda 5\)) yields that

$$\begin{aligned} \liminf _{\alpha \rightarrow \infty } \left[ \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) - \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) \right] \le 0. \end{aligned}$$

This means that there exists a subsequence, also denoted by \(\alpha \), such that

$$\begin{aligned} \sup _{\alpha } \left[ \Lambda (x_{\varepsilon ,\alpha },p^1_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) - \Lambda (y_{\varepsilon ,\alpha },p^2_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }^0) \right] < \infty . \end{aligned}$$
(3.27)

Thus, we can bound (3.24) by combining (3.27) and (3.26). This implies that (3.22) holds for the chosen subsequences \(\alpha \), and that along these the collection \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\) is fundamental for \(\Lambda \) with respect to \(\Psi \), establishing Step 3.

\(\underline{Proof of Step 4 }\):

For the subsequences constructed in Step 3, we have by (3.22) that

$$\begin{aligned} \sup _{\alpha } {\mathcal {I}}(x_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) < \infty . \end{aligned}$$
(3.28)

As established in Step 1, following Lemma 3.5, for each \(\varepsilon > 0\) the set \(\{(x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha })\}\) is relatively compact where \(\alpha \) varies over the subsequences selected in Step 3. In addition, for each \(\varepsilon > 0\) there exists \(z \in E\) such that \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha }) \rightarrow (z,z)\). It follows by (3.28) and Assumption 2.15 (\({\mathcal {I}}4\)) that also

$$\begin{aligned} \sup _{\alpha } {\mathcal {I}}(y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }) < \infty . \end{aligned}$$
(3.29)

With the bounds (3.28) and (3.29), the estimate (\({\mathcal {I}}5\)) is satisfied for the subsequences \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\). \(\square \)

4 Existence of viscosity solutions

In this section, we will prove Theorem 2.8. In other words, we show that for \(h\in C_b(E)\) and \(\lambda >0\), the function \(R(\lambda )h\) given by

$$\begin{aligned} R(\lambda ) h(x) = \sup _{\begin{array}{c} \gamma \in {\mathcal {A}}{\mathcal {C}} \\ \gamma (0) = x \end{array}} \int _0^\infty \lambda ^{-1} e^{-\lambda ^{-1}t} \left[ h(\gamma (t)) - \int _0^t {\mathcal {L}}(\gamma (s),{\dot{\gamma }}(s)) \, {\mathrm {d}}s\right] \, {\mathrm {d}}t \end{aligned}$$

is indeed a viscosity solution to \(f - \lambda {\mathbf {H}}f = h\). To do so, we will use the methods of Chapter 8 of [19]. For this strategy, one needs to check three properties of \(R(\lambda )\):

  1. (a)

    For all \((f,g) \in {\mathbf {H}}\), we have \(f = R(\lambda )(f - \lambda g)\).

  2. (b)

    The operator \(R(\lambda )\) is a pseudo-resolvent: for all \(h \in C_b(E)\) and \(0< \alpha < \beta \) we have

    $$\begin{aligned} R(\beta )h = R(\alpha ) \left( R(\beta )h - \alpha \frac{R(\beta )h - h}{\beta } \right) . \end{aligned}$$
  3. (c)

    The operator \(R(\lambda )\) is contractive.

Thus, if \(R(\lambda )\) serves as a classical left-inverse to \({\mathbb {1}}- \lambda {\mathbf {H}}\) and is also a pseudo-resolvent, then it is a viscosity right-inverse of \(({\mathbb {1}}- \lambda {\mathbf {H}})\). For a second proof of this statement, outside of the control theory context, see Proposition 3.4 of [24].

Establishing (c) is straightforward. The proof of (a) and (b) stems from two main properties of the exponential distribution. Let \(\tau _\lambda \) be the measure on \({\mathbb {R}}^+\) corresponding to the exponential random variable with mean \(\lambda \).

  • (a) is related to integration by parts: for bounded measurable functions z on \({\mathbb {R}}^+\), we have

    $$\begin{aligned} \lambda \int _0^\infty z(t) \, \tau _\lambda ( {\mathrm {d}}t) = \int _0^\infty \int _0^t z(s) \, {\mathrm {d}}s \, \tau _\lambda ({\mathrm {d}}t). \end{aligned}$$
  • (b) is related to a more involved integral property of exponential random variables (verified numerically in the sketch after this list). For \(0< \alpha < \beta \), we have

    $$\begin{aligned} \int _0^\infty z(s) \, \tau _\beta ({\mathrm {d}}s) = \frac{\alpha }{\beta } \int _0^\infty z(s) \, \tau _\alpha ({\mathrm {d}}s) + \left( 1 - \frac{\alpha }{\beta }\right) \int _0^\infty \int _0^\infty z(s+u) \, \tau _\beta ({\mathrm {d}}u) \, \tau _\alpha ({\mathrm {d}}s). \end{aligned}$$
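Both identities are elementary to verify. As a quick numerical sanity check, the following sketch (all numerical values and the test function z are hypothetical; \(\tau _\lambda \) is sampled as an exponential law with mean \(\lambda \)) confirms them by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, alpha, beta = 2.0, 0.5, 1.5        # hypothetical values with 0 < alpha < beta
z = lambda t: np.cos(t) + 2.0           # bounded test function
Z = lambda t: np.sin(t) + 2.0 * t       # antiderivative: Z(t) = int_0^t z(s) ds
N = 10**6

# Identity for (a): lambda * E[z(T)] = E[Z(T)] for T ~ tau_lambda
T = rng.exponential(lam, N)
print(lam * z(T).mean(), Z(T).mean())   # the two values agree up to Monte Carlo error

# Identity for (b): splitting tau_beta at an independent tau_alpha time
S, U = rng.exponential(alpha, N), rng.exponential(beta, N)
lhs = z(rng.exponential(beta, N)).mean()
rhs = (alpha / beta) * z(S).mean() + (1 - alpha / beta) * z(S + U).mean()
print(lhs, rhs)                         # agree up to Monte Carlo error
```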

Establishing (a) and (b) then reduces to a careful analysis of optimizers in the definition of \(R(\lambda )\), and of concatenations and splittings thereof. This was carried out in Chapter 8 of [19] on the basis of three assumptions, namely [19, Assumptions 8.9, 8.10 and 8.11]. We verify these below.

Verification of Conditions 8.9, 8.10 and 8.11

In the notation of [19], we use \(U = {\mathbb {R}}^d\), \(\Gamma = E \times U\), one operator \({\mathbf {H}}= {\mathbf {H}}_\dagger = {\mathbf {H}}_\ddagger \) and \(Af(x,u) = \langle \nabla f(x),u\rangle \) for \(f \in {\mathcal {D}}({\mathbf {H}}) = C_{cc}^\infty (E)\).

Regarding Condition 8.9, by continuity and convexity of \({\mathcal {H}}\) obtained in Propositions B.1 and B.3, parts 8.9.1, 8.9.2, 8.9.3 and 8.9.5 can be proven e.g. as in the proof of [19, Lemma 10.21] for \(\psi = 1\). Part 8.9.4 is a consequence of the existence of a containment function, and follows as shown in the proof of Theorem A.17 of [7]. Since we use the argument further below, we briefly recall it here. We need to show that for any compact set \(K \subseteq E\), any finite time \(T > 0\) and finite bound \(M \ge 0\), there exists a compact set \(K' = K'(K,T,M) \subseteq E\) such that for any absolutely continuous path \(\gamma :[0,T] \rightarrow E\) with \(\gamma (0) \in K\), if

$$\begin{aligned} \int _0^T {\mathcal {L}}(\gamma (t),{\dot{\gamma }}(t)) \, dt \le M, \end{aligned}$$
(4.1)

then \(\gamma (t) \in K'\) for any \(0\le t \le T\).

For \(K\subseteq E\), \(T>0\), \(M\ge 0\) and \(\gamma \) as above, this follows by noting that

$$\begin{aligned} \Upsilon (\gamma (\tau ))&= \Upsilon (\gamma (0)) + \int _0^\tau \nabla \Upsilon (\gamma (t)) \cdot {\dot{\gamma }}(t) \, dt \nonumber \\&\le \Upsilon (\gamma (0)) + \int _0^\tau \left[ {\mathcal {L}}(\gamma (t),{\dot{\gamma }}(t)) + {\mathcal {H}}(\gamma (t),\nabla \Upsilon (\gamma (t))) \right] \, dt \nonumber \\&\le \sup _K \Upsilon + M + T \sup _{x \in E} {\mathcal {H}}(x,\nabla \Upsilon (x)) =: C < \infty , \end{aligned}$$
(4.2)

for any \(0 \le \tau \le T\), so that the compact set \(K' := \{z \in E \,:\, \Upsilon (z) \le C\}\) satisfies the claim.

We proceed with the verification of Conditions 8.10 and 8.11 of [19]. By Proposition B.1, we have \({\mathcal {H}}(x,0) = 0\), and hence applying \({\mathbf {H}}\) to the constant function \({\mathbb {1}}\) gives \({\mathbf {H}}{\mathbb {1}}= 0\). Thus, Condition 8.10 is implied by Condition 8.11 (see Remark 8.12 (e) in [19]).

We establish that Condition 8.11 is satisfied: for any function \(f\in {\mathcal {D}}({\mathbf {H}}) = C_{cc}^\infty (E)\) and \(x_0 \in E\), there exists an absolutely continuous path \(x:[0,\infty ) \rightarrow E\) such that \(x(0) = x_0\) and for any \(t \ge 0\),

$$\begin{aligned} \int _0^t {\mathcal {H}}(x(s),\nabla f(x(s))) \, ds = \int _0^t \left[ {\dot{x}}(s) \cdot \nabla f(x(s)) - {\mathcal {L}}(x(s),{\dot{x}}(s)) \right] \, ds. \end{aligned}$$
(4.3)

To do so, we solve the differential inclusion

$$\begin{aligned} {\dot{x}}(t) \in \partial _p {\mathcal {H}}(x(t),\nabla f(x(t))), \qquad x(0) = x_0, \end{aligned}$$
(4.4)

where the subdifferential of \({\mathcal {H}}\) was defined in (2.9) on page 10.

Since the addition of a constant to f does not change the gradient, we may assume without loss of generality that f has compact support. A general method to establish existence of solutions to differential inclusions \({\dot{x}} \in F(x)\) is given by Lemma 5.1 of Deimling [11]. We have included this result as Lemma D.5, together with the corresponding preliminary definitions, in “Appendix D”. We use this result for \(F(x) := \partial _p {\mathcal {H}}(x,\nabla f(x))\). To apply Lemma D.5, we need to verify that:

  1. (F1)

    F is upper hemi-continuous and F(x) is non-empty, closed, and convex for all \(x \in E\).

  2. (F2)

    \(\Vert F(x)\Vert \le c(1 + |x|)\) on E, for some \(c > 0\).

  3. (F3)

    \(F(x) \cap T_E(x) \ne \emptyset \) for all \(x \in E\). (For the definition of \(T_E\), see Definition 2.16 on page 10).

Part (F1) follows from the properties of subdifferential sets of convex and continuous functionals: \({\mathcal {H}}\) is continuous in (x,p) and convex in p by Proposition B.1. Part (F3) is a consequence of Lemma 4.1, which yields that \(F(x)\subseteq T_E(x)\). Part (F2) is in general not satisfied. To circumvent this problem, we use properties of \({\mathcal {H}}\) to establish a priori bounds on the range of solutions.

Step 1: Let \(T > 0\), and assume that x(t) solves (4.4). We establish that there is some M such that (4.1) is satisfied. By (4.4) and the definition of the subdifferential, we obtain for all \(p \in {\mathbb {R}}^d\),

$$\begin{aligned} {\mathcal {H}}(x(t),p) \ge {\mathcal {H}}(x(t),\nabla f(x(t))) + {\dot{x}}(t) \cdot (p - \nabla f(x(t))), \end{aligned}$$

and as a consequence

$$\begin{aligned} {\dot{x}}(t) \cdot \nabla f(x(t)) - {\mathcal {H}}(x(t),\nabla f(x(t))) \ge {\mathcal {L}}(x(t),{\dot{x}}(t)). \end{aligned}$$

Since f has compact support and \({\mathcal {H}}(y,0) = 0\) for any \(y \in E\), we estimate

$$\begin{aligned} \int _0^T {\mathcal {L}}(x(t),{\dot{x}}(t)) \, dt&\le \int _0^T {\dot{x}}(t) \cdot \nabla f(x(t)) \, dt - T\inf _{y \in \mathrm {supp}(f)} {\mathcal {H}}(y,\nabla f(y)). \end{aligned}$$

By continuity of \({\mathcal {H}}\) the field F is bounded on compact sets, so the first term can be bounded by

$$\begin{aligned} \int _0^T {\dot{x}}(t) \cdot \nabla f(x(t)) \, dt \le T \sup _{y \in \mathrm {supp}(f)}\Vert F(y)\Vert \sup _{z \in \mathrm {supp}(f)}|\nabla f(z)|. \end{aligned}$$

Therefore, for any \(T>0\), the integral of the Lagrangian is bounded from above by \(M = M(T)\), with

$$\begin{aligned} M := T \left[ \sup _{y \in \mathrm {supp}(f)}\Vert F(y)\Vert \sup _{z \in \mathrm {supp}(f)}|\nabla f(z)| - \inf _{y \in \mathrm {supp}(f)} {\mathcal {H}}(y,\nabla f(y)) \right] . \end{aligned}$$

From the first part of this section (see the argument concluding with (4.2)), we find that the solution x(t) remains in the compact set

$$\begin{aligned} K' := \left\{ z \in E \, \big | \, \Upsilon (z) \le C \right\} , \quad C := \Upsilon (x_0) + M + T \sup _x {\mathcal {H}}(x,\nabla \Upsilon (x)),\nonumber \\ \end{aligned}$$
(4.5)

for all \(t \in [0,T]\).

Step 2: We prove that there exists a solution x(t) of (4.4) on [0, T].

Using F, we define a new multi-valued vector field \(F'(z)\) that equals \(F(z) = \partial _p {\mathcal {H}}(z,\nabla f(z))\) inside \(K'\), but equals \(\{0\}\) outside a neighborhood of \(K'\). This can e.g. be achieved by multiplying F with a smooth cut-off function \(g_{K'} : E \rightarrow [0,1]\) that is equal to one on \(K'\) and zero outside of a neighborhood of \(K'\).

The field \(F'\) satisfies (F1), (F2) and (F3) from above, and hence there exists an absolutely continuous path \(y : [0,\infty ) \rightarrow E\) such that \(y(0) = x_0\) and for almost every \(t \ge 0\),

$$\begin{aligned} {\dot{y}}(t) \in F'(y(t)). \end{aligned}$$

By the estimate established in Step 1 and the fact that \(\Upsilon (y(t)) \le C\) for any \(0 \le t \le T\), it follows from the argument shown in (4.2) that the solution y stays in \(K'\) up to time T. Since \(F' = F\) on \(K'\), setting \(x = y|_{[0,T]}\) yields a solution x(t) of (4.4) on the time interval [0, T]. \(\square \)
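For intuition on how solutions to (4.4) can be computed, consider the smooth case in which \(\partial _p {\mathcal {H}}(x,p)\) reduces to the singleton \(\{\nabla _p {\mathcal {H}}(x,p)\}\); then (4.4) is an ODE and a forward Euler scheme applies. The sketch below uses a hypothetical smooth Hamiltonian and a smooth stand-in for f; it illustrates the construction only, not the general subdifferential case:

```python
import numpy as np

def grad_p_H(x, p):
    # Hypothetical smooth Hamiltonian H(x, p) = |p|^2 / 2 - <x, p>,
    # so that H(x, 0) = 0 and grad_p H(x, p) = p - x.
    return p - x

def grad_f(x):
    # Smooth stand-in for the gradient of a test function: f(x) = exp(-|x|^2).
    return -2.0 * x * np.exp(-np.dot(x, x))

def solve_inclusion(x0, T=5.0, dt=1e-3):
    # Forward Euler for x'(t) = grad_p H(x(t), grad f(x(t))), x(0) = x0.
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(int(T / dt)):
        x = x + dt * grad_p_H(x, grad_f(x))
        path.append(x.copy())
    return np.array(path)

path = solve_inclusion([1.0, 0.5])
print(path[-1])  # the trajectory stays bounded, mirroring the a priori bound (4.5)
```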

Lemma 4.1

Let Assumption 2.17 be satisfied. Then the map \({\mathcal {H}}: E \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) defined in (2.2) is such that \(\partial _p {\mathcal {H}}(x,p) \subseteq T_E(x)\) for all p and \(x \in E\).

Proof

Fix \(x \in E\) and \(p_0 \in {\mathbb {R}}^d\). We aim to prove that \(\partial _p {\mathcal {H}}(x,p_0) \subseteq T_E(x)\). Recall the definition of \({\mathcal {H}}\):

$$\begin{aligned} {\mathcal {H}}(x,p) = \sup _{\theta \in \Theta } \left\{ \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta ) \right\} . \end{aligned}$$
(4.6)

Let \(\Omega (p) \subseteq \Theta \) be the set of controls that attain the supremum defining \({\mathcal {H}}\): thus, if \(\theta \in \Omega (p)\), then \({\mathcal {H}}(x,p) = \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta )\).

The result will follow from the following claim:

$$\begin{aligned} \partial _p {\mathcal {H}}(x,p_0) = ch \bigcup _{\theta \in \Omega (p_0)} \partial _p \Lambda (x,p_0,\theta ), \end{aligned}$$
(4.7)

where ch denotes the convex hull. Once this claim is established, the result follows from Assumption 2.17 and the fact that \(T_E(x)\) is a convex set by Lemma D.4.

We start with the proof of (4.7). For this we will use [22, Theorem D.4.4.2]. To study the subdifferential \(\partial _{p} {\mathcal {H}}(x,p_0)\), it suffices to restrict the domain of the map \(p \mapsto {\mathcal {H}}(x,p)\) to the closed ball \(B_1(p_0)\) of radius 1 around \(p_0\).

To apply [22, Theorem D.4.4.2] to this restricted map, first recall that \(\Lambda \) is continuous by Assumption 2.14 (\(\Lambda 1\)) and that \({\mathcal {I}}\) is lower semi-continuous by Assumption 2.15 (\({\mathcal {I}}1\)). Secondly, we need to find a compact set \(\Omega \subseteq \Theta \) such that, for any \(p \in B_1(p_0)\), we can restrict the supremum in (4.6) to \(\Omega \):

$$\begin{aligned} {\mathcal {H}}(x,p) = \sup _{\theta \in \Omega } \left\{ \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta ) \right\} . \end{aligned}$$

In particular, we show that we can take for \(\Omega \) a sublevel set of \({\mathcal {I}}(x,\cdot )\), which is compact by Assumption 2.15 (\({\mathcal {I}}3\)).

Let \(\theta _{x}^0\) be the control such that \({\mathcal {I}}(x,\theta _{x}^0) = 0\), which exists due to Assumption 2.15 (\({\mathcal {I}}2\)). Let \(M^*\) be such that (with the constants \(M,C_1,C_2\) as in Assumption 2.14 (\(\Lambda 4\)))

$$\begin{aligned} M^* = \sup _{p \in B_1(p_0)} \left\{ \max \left\{ M,C_1 \Lambda (x,p,\theta _{x}^0) + C_2\right\} - \Lambda (x,p,\theta _{x}^0) \right\} < \infty . \end{aligned}$$

Note that \(M^*\) is finite, as \(p \mapsto \Lambda (x,p,\theta _{x}^0)\) is continuous on the closed ball \(B_1(p_0)\). Then we find, due to Assumption 2.14 (\(\Lambda 4\)), that if \(\theta \) satisfies \({\mathcal {I}}(x,\theta ) > M^*\), then for any \(p\in B_1(p_0)\) we have

$$\begin{aligned} \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta ) < \Lambda (x,p,\theta _{x}^0) \le {\mathcal {H}}(x,p). \end{aligned}$$

We obtain that if \(p \in B_1(p_0)\), then we can restrict our supremum in (4.6) to the compact set \(\Omega := \Theta _{\{x\},M^*}\), see Assumption 2.15 (\({\mathcal {I}}3\)).

Thus, it follows by [22, Theorem D.4.4.2] that

$$\begin{aligned} \partial _p {\mathcal {H}}(x,p_0) = ch \left( \bigcup _{\theta \in \Theta _{\{x\},M^*}} \partial _p \left( \Lambda (x,p_0,\theta ) - {\mathcal {I}}(x,\theta ) \right) \right) , \end{aligned}$$

where ch denotes the convex hull. Now (4.7) follows by noting that \({\mathcal {I}}(x,\theta )\) does not depend on p. \(\square \)
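The structure of (4.7) can be illustrated in a toy example: for a pointwise maximum of finitely many affine functions of p, the subdifferential at \(p_0\) is the convex hull of the slopes of the pieces that attain the maximum there. The sketch below (one-dimensional p; slopes and costs are hypothetical stand-ins for \(\partial _p \Lambda \) and \({\mathcal {I}}\)) identifies the active pieces:

```python
import numpy as np

# Toy version of (4.7): H(p) = max_theta [ s_theta * p - I_theta ] with
# hypothetical slopes s_theta and costs I_theta.
slopes = np.array([-1.0, 0.5, 1.0])
costs = np.array([0.0, 0.5, 0.0])

def active_slopes(p, tol=1e-12):
    vals = slopes * p - costs
    return slopes[vals >= vals.max() - tol]

print(active_slopes(0.0))  # [-1., 1.]: dH(0) = ch{-1, 1} = [-1, 1], a kink
print(active_slopes(2.0))  # [1.]: a unique optimizer, H differentiable at p = 2
```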

5 Examples of Hamiltonians

In this section we specify our general results to two concrete examples of Hamiltonians of the type

$$\begin{aligned} {\mathcal {H}}(x,p) = \sup _{\theta \in \Theta }\left[ \Lambda (x,p,\theta ) - {\mathcal {I}}(x,\theta )\right] . \end{aligned}$$
(5.1)

The purpose of this section is to showcase that the method introduced in this paper is versatile enough to capture interesting examples that could not be treated before.

First, we consider in Proposition 5.1 Hamiltonians that one encounters in the large deviation analysis of two-scale systems as studied in [6] and [27] when considering a diffusion process coupled to a fast jump process. Second, we consider in Proposition 5.7 the example treated in our introduction that arises from models of mean-field interacting particles that are coupled to fast external variables. This example will be further analyzed in [26].

Proposition 5.1

(Diffusion coupled to jumps) Let \(E={\mathbb {R}}^d\) and \(F=\{1,\dots ,J\}\) be a finite set. Suppose the following:

  1. (i)

    The set of control variables is \(\Theta :={\mathcal {P}}(\{1,\dots ,J\})\), that is, the set of probability measures on the finite set F.

  2. (ii)

    The function \(\Lambda \) is given by

    $$\begin{aligned} \Lambda (x,p,\theta ) := \sum _{i\in F}\left[ \langle a(x,i)p,p\rangle +\langle b(x,i),p\rangle \right] \theta _i, \end{aligned}$$

    where \(a:E\times F\rightarrow {\mathbb {R}}^{d\times d}\) and \(b:E\times F\rightarrow {\mathbb {R}}^d\), and \(\theta _i:=\theta (\{i\})\).

  3. (iii)

    The cost function \({\mathcal {I}}:E\times \Theta \rightarrow [0,\infty )\) is given by

    $$\begin{aligned} {\mathcal {I}}(x,\theta ) := \sup _{w\in {\mathbb {R}}^J}\sum _{ij}r(i,j,x)\theta _i \left[ 1-e^{w_j-w_i}\right] , \end{aligned}$$

    with non-negative rates \(r:F^2\times E\rightarrow [0,\infty )\).

Suppose that the cost function \({\mathcal {I}}\) satisfies the assumptions of Proposition 5.9 below and the function \(\Lambda \) satisfies the assumptions of Proposition 5.11 below. Then Theorems 2.6 and 2.8 apply to the Hamiltonian (5.1).

Proof

To apply Theorems 2.6 and 2.8, we need to verify Assumptions 2.14, 2.15 and 2.17. Assumption 2.14 follows from Proposition 5.11, Assumption 2.15 follows from Proposition 5.9 and Assumption 2.17 is satisfied as \(E = {\mathbb {R}}^d\).

\(\square \)

Remark 5.2

We assume uniform ellipticity of a, which we use to establish (\(\Lambda 4\)). As a consequence, our comparison principle falls slightly short of yielding a large deviation principle in the generality of [5]. In contrast, we do not need a Lipschitz condition on r in terms of x.

While we believe that the conditions on a can be relaxed by performing a finer analysis of the estimates in terms of a, we do not pursue this relaxation here.

Remark 5.3

The cost function is the large deviation rate function for the occupation time measures of a jump process taking values in a finite set \(\{1,\dots ,J\}\), see e.g. [13, 14].

Remark 5.4

In the context with \(a = 0\) and \({\mathcal {I}}\) as general as in Assumption 2.15, we improve upon the results of Chapter III of [2] by allowing a more general class of functionals \({\mathcal {I}}\), which may e.g. be discontinuous, as in Proposition 5.7 below.

In [10] the authors consider a second order Hamilton–Jacobi–Bellman equation, with the quadratic part replaced by a second order part. They work, however, with a continuous cost functional \({\mathcal {I}}\). An extension of [10] that allows for a similar flexibility in the choice of \({\mathcal {I}}\) would therefore be of interest.

Remark 5.5

Under irreducibility conditions on the rates, as we shall assume below in Proposition 5.9, by [15] the Hamiltonian \({\mathcal {H}}(x,p)\) is the principal eigenvalue of the matrix \(A_{x,p} \in \mathrm {Mat}_{J \times J}({\mathbb {R}})\) given by

$$\begin{aligned} A_{x,p} = \mathrm {diag}\left[ \langle a(x,1)p,p\rangle +\langle b(x,1),p\rangle , \dots , \langle a(x,J)p,p\rangle +\langle b(x,J),p\rangle \right] + R_x, \end{aligned}$$

where \(x,p \in {\mathbb {R}}^d\) and \(R_x\) is the matrix

$$\begin{aligned} \begin{pmatrix} -\sum _{j \ne 1} r(1,j,x) &{} r(1,2,x) &{} \dots &{} r(1,J,x)\\ r(2,1,x) &{} -\sum _{j \ne 2} r(2,j,x) &{} \dots &{} \vdots \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ r(J,1,x) &{} \dots &{} r(J,J-1,x) &{} -\sum _{j \ne J}r(J,j,x) \end{pmatrix}, \end{aligned}$$

that is \((R_x)_{ii} = -\sum _{j \ne i} r(i,j,x)\) on the diagonal and \((R_x)_{ij} = r(i,j,x)\) for \(i \ne j\).
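To make this remark concrete, the principal eigenvalue can be evaluated numerically. The following sketch uses a hypothetical two-state example (\(J = 2\), \(d = 1\)) with illustrative coefficients a, b and irreducible rates r:

```python
import numpy as np

def A(x, p):
    # A_{x,p} = diag(<a(x,i)p,p> + <b(x,i),p>) + R_x for J = 2, d = 1.
    a = np.array([1.0, 2.0])         # hypothetical a(x, i) > 0
    b = np.array([0.5, -0.5])        # hypothetical b(x, i)
    r12, r21 = 1.0 + x**2, 2.0       # hypothetical rates r(i, j, x) > 0
    R = np.array([[-r12, r12], [r21, -r21]])
    return np.diag(a * p**2 + b * p) + R

def H(x, p):
    # Principal eigenvalue of A_{x,p}; it is real since the off-diagonal
    # entries are nonnegative (Perron-Frobenius).
    return np.max(np.real(np.linalg.eigvals(A(x, p))))

print(H(0.3, 1.0))  # H(x, p) at a sample point
print(H(0.3, 0.0))  # = 0: the Perron root of the generator R_x vanishes
```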

Next we consider Hamiltonians arising in the context of weakly interacting jump processes on a collection of states \(\{1,\dots ,q\}\) as described in our introduction. We analyze and motivate this example in more detail in our companion paper [26]. We give the terminology as needed for the results in this paper.

The empirical measure of the interacting processes takes its values in the set of measures \({\mathcal {P}}(\{1,\dots ,q\})\). The dynamics arises from mass moving over the bonds \((a,b) \in \Gamma = \left\{ (i,j) \in \{1,\dots ,q\}^2 \, | \, i \ne j\right\} \). As the number of processes is sent to infinity, one obtains a limiting description of the total mass moving over the bonds.

We will denote by \(v(a,b,\mu ,\theta )\) the total mass that moves from a to b if the empirical measure equals \(\mu \) and the control is given by \(\theta \). We will make the following assumption on the kernel v.

Definition 5.6

(Proper kernel) Let \(v : \Gamma \times {\mathcal {P}}(\{1,\dots ,q\}) \times \Theta \rightarrow {\mathbb {R}}^+\). We say that v is a proper kernel if v is continuous and if for each \((a,b) \in \Gamma \), the map \((\mu ,\theta ) \mapsto v(a,b,\mu ,\theta )\) is either identically equal to zero or satisfies the following two properties:

  1. (a)

    \(v(a,b,\mu ,\theta ) = 0\) if \(\mu (a) = 0\) and \(v(a,b,\mu ,\theta ) > 0\) for all \(\mu \) such that \(\mu (a) > 0\).

  2. (b)

    There exists a decomposition \(v(a,b,\mu ,\theta ) = v_{\dagger }(a,b,\mu (a)) v_{\ddagger }(a,b,\mu ,\theta )\) such that \(v_{\dagger }\) is increasing in the third coordinate and such that \(v_{\ddagger }(a,b,\cdot ,\cdot )\) is continuous and satisfies \(v_{\ddagger }(a,b,\mu ,\theta ) > 0\).

A typical example of a proper kernel is given by

$$\begin{aligned} v(a,b,\mu ,\theta ) = \mu (a) r(a,b,\theta ) e^{ \partial _a V(\mu ) - \partial _b V(\mu )}, \end{aligned}$$

with \(r > 0\) continuous and \(V \in C^1_b({\mathcal {P}}(\{1,\dots ,q\}))\).
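As a small sketch of this example (q = 2 states, with the hypothetical simplifications \(r \equiv 1\), so that the \(\theta \)-dependence drops out, and \(V(\mu ) = \mu (a)^2\) for the first state), the defining properties of Definition 5.6 can be checked directly:

```python
import numpy as np

def v(a, b, mu):
    # v(a, b, mu) = mu(a) * r * exp(d_a V(mu) - d_b V(mu)) with the hypothetical
    # choices r = 1 and V(mu) = mu(0)^2, so that dV = (2 mu(0), 0).
    dV = np.array([2.0 * mu[0], 0.0])
    return mu[a] * 1.0 * np.exp(dV[a] - dV[b])

mu = np.array([0.7, 0.3])
print(v(0, 1, mu), v(1, 0, mu))       # both positive when mu has full support
print(v(0, 1, np.array([0.0, 1.0])))  # = 0 when mu(a) = 0, as required in (a)
```

Here \(v_{\dagger }(a,b,\mu (a)) = \mu (a)\) is increasing in its third coordinate and \(v_{\ddagger } = r\, e^{\partial _a V(\mu ) - \partial _b V(\mu )}\) is continuous and positive, as required in (b).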

Proposition 5.7

(Mean-field coupled to diffusion) Let the space E be given by the embedding \(E:={\mathcal {P}}(\{1,\dots ,q\})\times [0,\infty )^\Gamma \subseteq {\mathbb {R}}^d\) and let F be a smooth compact Riemannian manifold without boundary. Suppose the following.

  1. (i)

    The set of control variables \(\Theta \) equals \({\mathcal {P}}(F)\).

  2. (ii)

    The function \(\Lambda \) is given by

    $$\begin{aligned} \Lambda ((\mu ,w),p,\theta ) = \sum _{(a,b) \in \Gamma } v(a,b,\mu ,\theta )\left[ \exp \left\{ p_b - p _a + p_{(a,b)} \right\} - 1 \right] \end{aligned}$$

    with a proper kernel v in the sense of Definition 5.6.

  3. (iii)

    The cost function \({\mathcal {I}}:E\times \Theta \rightarrow [0,\infty ]\) is given by

    $$\begin{aligned} {\mathcal {I}}(x,\theta ) := \sup _{\begin{array}{c} u\in {\mathcal {D}}(L_x)\\ \inf u > 0 \end{array}}\left[ -\int _F \frac{L_x u}{u}\,d\theta \right] , \end{aligned}$$

    where \(L_x\) is a second-order elliptic operator locally of the form

    $$\begin{aligned} L_x = \frac{1}{2}\nabla \cdot \left( a_x \nabla \right) + b_x\cdot \nabla , \end{aligned}$$

    on the domain \({\mathcal {D}}(L_x):=C^2(F)\), with positive-definite matrix \(a_x\) and co-vectors \(b_x\).

Suppose that the cost function \({\mathcal {I}}\) satisfies the assumptions of Proposition 5.10 and the function \(\Lambda \) satisfies the assumptions of Proposition 5.13. Then Theorems 2.6 and 2.8 apply to the Hamiltonian (5.1).

Proof

To apply Theorems 2.6 and 2.8, we need to verify Assumptions 2.14, 2.15 and 2.17. Assumption 2.14 follows from Proposition 5.13 and Assumption 2.15 follows from Proposition 5.10. We verify Assumption 2.17 in Proposition 5.19. \(\square \)

Remark 5.8

The cost function stems from occupation-time large deviations of a drift-diffusion process on a compact manifold, see e.g. [15, 32]. We expect Proposition 5.7 to extend also to non-compact spaces F, but we feel this technical extension is better suited for a separate paper.

5.1 Verifying assumptions for cost functions \({\mathcal {I}}\)

Here we verify Assumption 2.15 for two types of cost functions \({\mathcal {I}}(x,\theta )\) appearing in the examples of Propositions 5.1 and 5.7.

Proposition 5.9

(Donsker–Varadhan functional for jump processes) Consider a finite set \(F = \{1,\dots ,J\}\) and let \(\Theta := {\mathcal {P}}(\{1,\dots ,J\})\) be the set of probability measures on F. For \(x\in E\), let \(L_x : C_b(F) \rightarrow C_b(F)\) be the operator given by

$$\begin{aligned} L_x f(i) := \sum _{j=1}^Jr(i,j,x)\left[ f(j)-f(i)\right] ,\quad f :\{1,\dots ,J\}\rightarrow {\mathbb {R}}. \end{aligned}$$

Suppose that the rates \(r:\{1,\dots ,J\}^2\times E \rightarrow {\mathbb {R}}^+\) are continuous as a function on E and moreover satisfy the following:

  1. (i)

    For any \(x\in E\), the matrix R(x) with entries \(R(x)_{ij} := r(i,j,x)\) for \(i\ne j\) and \(R(x)_{ii} = -\sum _{j\ne i}r(i,j,x)\) is irreducible.

  2. (ii)

    For each pair (ij), we either have \(r(i,j,\cdot )\equiv 0\) or for each compact set \(K\subseteq E\), it holds that

    $$\begin{aligned} r_{K}(i,j) := \inf _{x\in K}r(i,j,x) > 0. \end{aligned}$$

Then the Donsker–Varadhan functional \({\mathcal {I}}:E\times \Theta \rightarrow {\mathbb {R}}^+\) defined by

$$\begin{aligned} {\mathcal {I}}(x,\theta )&:= -\inf _{\begin{array}{c} \phi \in C_b(F) \\ \inf \phi > 0 \end{array}} \int \frac{L_x \phi (z)}{\phi (z)} \, {\mathrm {d}}\theta \\&= \sup _{w\in {\mathbb {R}}^J}\sum _{ij}r(i,j,x)\theta _i \left[ 1-e^{w_j-w_i}\right] \end{aligned}$$

satisfies Assumption 2.15.

Proof

\(\underline{({\mathcal {I}}1)}\):

For a fixed vector \(w\in {\mathbb {R}}^J\), the map

$$\begin{aligned} (x,\theta )\mapsto \sum _{ij}r(i,j,x)\theta _i \left[ 1-e^{w_j-w_i}\right] \end{aligned}$$

is continuous on \(E\times \Theta \). Hence \({\mathcal {I}}(x,\theta )\) is lower semicontinuous as the supremum over continuous functions.

\(\underline{({\mathcal {I}}2)}\):

Let \(x\in E\). First note that for all \(\theta \), the choice \(w = 0\) implies that \({\mathcal {I}}(x,\theta ) \ge 0\). By the irreducibility assumption on the rates r(ijx), there exists a unique measure \(\theta _{x}^0\in \Theta \) such that for any \(f:\{1,\dots ,J\}\rightarrow {\mathbb {R}}\),

$$\begin{aligned} \sum _i L_x f(i) \theta _{x}^0(i)=0. \end{aligned}$$
(5.2)

We establish that \({\mathcal {I}}(x,\theta _{x}^0) = 0\). Let \(w \in {\mathbb {R}}^J\). By the elementary estimate

$$\begin{aligned} \left( 1-e^{b - a}\right) \le -(b-a) \quad \text { for all } \; a,b \in {\mathbb {R}}, \end{aligned}$$

we obtain that

$$\begin{aligned} \sum _{ij}r(i,j,x) \theta _{x}^0(i) \left( 1-e^{w_j - w_i}\right)&\le \sum _{ij}r(i,j,x) \theta _{x}^0(i) \left( w_j - w_i \right) \\&= \sum _i (L_x w)(i) \theta _{x}^0(i) = 0 \end{aligned}$$

by (5.2). Since \({\mathcal {I}} \ge 0\), this implies \({\mathcal {I}}(x,\theta _{x}^0) = 0\).

\(\underline{({\mathcal {I}}3)}\):

Any closed subset of \(\Theta \) is compact.

\(\underline{({\mathcal {I}}4)}\):

Let \(x_n\rightarrow x\) in E. Then the sequence is contained in a compact set \(K \subseteq E\) that contains the \(x_n\) and x in its interior. For any \(y\in K\),

$$\begin{aligned} {\mathcal {I}}(y,\theta ) \le \sum _{ij, i \ne j} r(i,j,y) \theta _i \le \sum _{ij, i\ne j} r(i,j,y) \le \sum _{ij, i \ne j} {\bar{r}}_{ij}, \quad {\bar{r}}_{ij} := \sup _{y \in K} r(i,j,y). \end{aligned}$$

Hence \({\mathcal {I}}\) is uniformly bounded on \(K\times \Theta \), and (\({\mathcal {I}}4\)) follows with \(U_x\) the interior of K.

\(\underline{({\mathcal {I}}5)}\):

Let d be some metric that metrizes the topology of E. We will prove that for any compact set \(K\subseteq E\) and \(\varepsilon > 0\) there is some \(\delta > 0\) such that for all \(x,y \in K\) with \(d(x,y) \le \delta \) and for all \(\theta \in {\mathcal {P}}(F)\), we have

$$\begin{aligned} |{\mathcal {I}}(x,\theta ) - {\mathcal {I}}(y,\theta )| \le \varepsilon . \end{aligned}$$
(5.3)

Let \(x,y \in K\). By continuity of the rates, the functionals \({\mathcal {I}}(x,\cdot )\) are uniformly bounded for \(x \in K\):

$$\begin{aligned} 0 \le {\mathcal {I}}(x,\theta ) \le \sum _{ij, i \ne j} r(i,j,x) \theta _i \le \sum _{ij, i\ne j} r(i,j,x) \le \sum _{ij, i \ne j} {\bar{r}}_{ij}, \quad {\bar{r}}_{ij} := \sup _{x \in K} r(i,j,x). \end{aligned}$$

For any \(n \in {\mathbb {N}}\), there exists \(w^n \in {\mathbb {R}}^J\) such that

$$\begin{aligned} 0 \le {\mathcal {I}}(x,\theta ) \le \sum _{ij, i \ne j} r(i,j,x) \theta _i (1 - e^{w^n_j - w^n_i}) + \frac{1}{n}. \end{aligned}$$

By reorganizing, we find for all bonds (a, b) the bound

$$\begin{aligned} \theta _a e^{w^n_b - w^n_a} \le \frac{1}{r_{K,a,b}} \left[ \sum _{ij, i \ne j} r(i,j,x)\theta _i + \frac{1}{n} \right] \le \frac{1}{r_{K,a,b}} \left[ \sum _{ij, i \ne j} {\bar{r}}_{ij} + \frac{1}{n} \right] , \end{aligned}$$

where \(r_{K,a,b}:=\inf _{x\in K}r(a,b,x)\). Thereby, using the same vector \(w^n\) to estimate the supremum in \({\mathcal {I}}(y,\theta )\),

$$\begin{aligned}&{\mathcal {I}}(x,\theta ) - {\mathcal {I}}(y,\theta ) \\&\quad \le \frac{1}{n} + \sum _{ab, a\ne b} r(a,b,x) \theta _a (1 - e^{w^n_b - w^n_a}) - \sum _{ab, a\ne b} r(a,b,y) \theta _a (1 - e^{w^n_b - w^n_a}) \\&\quad \le \frac{1}{n} + \sum _{ab, a\ne b} |r(a,b,x) - r(a,b,y)| \theta _a + \sum _{ab, a\ne b} |r(a,b,y) - r(a,b,x)| \theta _a e^{w^n_b - w^n_a} \\&\quad \le \frac{1}{n} + \sum _{ab, a\ne b}|r(a,b,x) - r(a,b,y)| \left( 1 + \frac{1}{r_{K,a,b}} \left[ \sum _{ij, i \ne j} {\bar{r}}_{ij} + 1 \right] \right) . \end{aligned}$$

We take \(n \rightarrow \infty \) and use that the rates \(x \mapsto r(a,b,x)\) are continuous, and hence uniformly continuous on compact sets, to obtain (5.3). \(\square \)
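Since the supremum over w above is a concave maximization problem, \({\mathcal {I}}\) is straightforward to evaluate numerically. The following sketch (with a hypothetical irreducible three-state rate matrix; the x-dependence is suppressed) illustrates (\({\mathcal {I}}2\)): the functional vanishes at the stationary law and is positive away from it:

```python
import numpy as np
from scipy.optimize import minimize

r = np.array([[0.0, 1.0, 2.0],
              [0.5, 0.0, 1.0],
              [1.0, 3.0, 0.0]])   # hypothetical irreducible rates r(i, j)

def I(theta):
    # I(theta) = sup_w sum_{ij} r_ij theta_i (1 - exp(w_j - w_i)); the
    # objective is concave in w, so a local maximum found by BFGS is global.
    def neg(w):
        return -np.sum(r * theta[:, None] * (1.0 - np.exp(w[None, :] - w[:, None])))
    return -minimize(neg, np.zeros(3)).fun

R = r - np.diag(r.sum(axis=1))    # generator matrix R(x)
evals, evecs = np.linalg.eig(R.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals))])
pi /= pi.sum()                    # stationary law theta_x^0

print(I(pi))                            # ~ 0, in line with (I2)
print(I(np.array([0.8, 0.1, 0.1])))     # > 0 away from the stationary law
```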

Proposition 5.10

(Donsker–Varadhan functional for drift-diffusions) Let F be a smooth compact Riemannian manifold without boundary and set \(\Theta :={\mathcal {P}}(F)\), the set of probability measures on F. For \(x\in E\), let \(L_x : C^2(F) \subseteq C_b(F) \rightarrow C_b(F)\) be the second-order elliptic operator that in local coordinates is given by

$$\begin{aligned} L_x = \frac{1}{2}\nabla \cdot \left( a_x \nabla \right) + b_x\cdot \nabla , \end{aligned}$$

where \(a_x\) is a positive definite matrix and \(b_x\) is a vector field having smooth entries \(a_x^{ij}\) and \(b_x^i\) on F. Suppose that for all ij the maps

$$\begin{aligned} x \mapsto a_x^{ij}(\cdot ), \qquad x \mapsto b_x^i(\cdot ) \end{aligned}$$
(5.4)

are continuous as functions from E to \(C_b(F)\), where we equip \(C_b(F)\) with the supremum norm. Then the functional \({\mathcal {I}}:E\times \Theta \rightarrow [0,\infty ]\) defined by

$$\begin{aligned} {\mathcal {I}}(x,\theta ) := \sup _{\begin{array}{c} u\in {\mathcal {D}}(L_x)\\ u>0 \end{array}}\left[ -\int _F \frac{L_xu}{u}\,d\theta \right] \end{aligned}$$

satisfies Assumption 2.15.

Proof

\(\underline{({\mathcal {I}}1)}\):

For any fixed function \(u\in {\mathcal {D}}(L_x)\) with \(u > 0\), the function \(-L_xu/u\) is continuous on F. Note that by the definition of \({\mathcal {I}}\) it suffices to consider only \(u > 0\). Thus, for any such fixed u it follows by (5.4) and compactness of F that

$$\begin{aligned} (x,\theta )\mapsto -\int _F \frac{L_xu}{u}\,d\theta \end{aligned}$$

is continuous on \(E\times \Theta \). As a consequence \({\mathcal {I}}(x,\theta )\) is lower semicontinuous as the supremum over continuous functions.

\(\underline{({\mathcal {I}}2)}\):

Let \(x\in E\). The stationary measure \(\theta _{x}^0 \in \Theta \) satisfying

$$\begin{aligned} \int _F L_xg(z)\,\theta _{x}^0({\mathrm {d}}z) = 0\quad \text {for all}\;g\in {\mathcal {D}}(L_x) \end{aligned}$$
(5.5)

is the minimizer of \({\mathcal {I}}(x,\cdot )\), that is, \({\mathcal {I}}(x,\theta _{x}^0) = 0\). This follows by considering the Hille–Yosida approximation \(L_x^\varepsilon \) of \(L_x\) and using the same argument (with \(w = \log u\)) as in Proposition 5.9 for these approximations. For any \(u>0\) and \(\varepsilon >0\),

$$\begin{aligned} -\int _F \frac{L_xu}{u}\,d\theta&= -\int _F \frac{L^\varepsilon _xu}{u}\,d\theta + \int _F \frac{(L^\varepsilon _x-L_x)u}{u}\,d\theta \\&\le -\int _F \frac{L^\varepsilon _xu}{u}\,d\theta + \frac{1}{\inf _F u} \Vert (L_x^\varepsilon -L_x)u\Vert _F\\&\le -\int _F L^\varepsilon _x \log (u)\,d\theta + o(1). \end{aligned}$$

Sending \(\varepsilon \rightarrow 0\) and then using (5.5) gives (\({\mathcal {I}}2\)).

\(\underline{({\mathcal {I}}3)}\):

Since \(\Theta = {\mathcal {P}}(F)\) is compact, any closed subset of \(\Theta \) is compact. Hence any union of sub-level sets of \({\mathcal {I}}(x,\cdot )\) is relatively compact in \(\Theta \).

\(\underline{({\mathcal {I}}4)}\):

Fix \(x \in E\) and \(M \ge 0\). Let \(\theta \in \Theta _{\{x\},M}\). As \({\mathcal {I}}(x,\theta ) \le M\), we find by [31] that the density \(\frac{{\mathrm {d}}\theta }{{\mathrm {d}}z}\) exists, where \({\mathrm {d}}z\) denotes the Riemannian volume measure.

In addition, it follows from [31, Theorem 1.4] that there exist constants \(c_1(y),c_2(y),c_3(y),c_4(y)\), with \(c_1(y),c_3(y)\) positive, depending continuously on \(a_y, a_y^{-1},b_y\) (see the derivation of [30, Eq. (2.18), (2.19)]), but not on \(\theta \), such that

$$\begin{aligned} c_1(y)\int _F|\nabla g_\theta |^2\,dz - c_2(y) \le {\mathcal {I}}(y,\theta ) \le c_3(y) \int _F|\nabla g_\theta |^2\,dz + c_4(y), \end{aligned}$$
(5.6)

where \(g_\theta = ({\mathrm {d}}\theta /{\mathrm {d}}z)^{1/2}\).

As the dependence on y is continuous, we can find an open neighborhood \(U \subseteq E\) of x and constants \(c_1,c_2,c_3,c_4\), with \(c_1,c_3\) positive, that do not depend on \(\theta \), such that for any \(y \in U\):

$$\begin{aligned} c_1\int _F|\nabla g_\theta |^2\,dz - c_2 \le {\mathcal {I}}(y,\theta ) \le c_3 \int _F|\nabla g_\theta |^2\,dz + c_4. \end{aligned}$$
(5.7)

From (5.7), (\({\mathcal {I}}4\)) immediately follows.

\(\underline{({\mathcal {I}}5)}\):

Since the coefficients \(a_x\) and \(b_x\) of the operator \(L_x\) depend continuously on x, assumption (\({\mathcal {I}}5\)) follows from Theorem 2 of [32].\(\square \)

5.2 Verifying assumptions for functions \(\Lambda \)

In this section we verify Assumption 2.14 for two types of functions \(\Lambda (x,p,\theta )\) appearing in the examples of Propositions 5.1 and 5.7.

Proposition 5.11

(Quadratic function \(\Lambda \)) Let \(E={\mathbb {R}}^d\) and \(\Theta ={\mathcal {P}}(F)\) for some compact Polish space F. Suppose that the function \(\Lambda :E\times {\mathbb {R}}^d\times \Theta \rightarrow {\mathbb {R}}\) is given by

$$\begin{aligned} \Lambda (x,p,\theta ) = \int _F\langle a(x,z)p,p\rangle \,d\theta (z) + \int _F\langle b(x,z),p\rangle \,d\theta (z), \end{aligned}$$

where \(a:E\times F\rightarrow {\mathbb {R}}^{d\times d}\) and \(b:E\times F\rightarrow {\mathbb {R}}^d\) are continuous. Suppose that for every compact set \(K \subseteq {\mathbb {R}}^d\),

$$\begin{aligned} a_{K,min}&:= \inf _{x \in K, z \in F, |p|=1} \langle a(x,z)p,p\rangle > 0, \\ a_{K,max}&:= \sup _{x \in K, z \in F, |p| = 1} \langle a(x,z)p,p\rangle< \infty , \\ b_{K,max}&:= \sup _{x \in K, z \in F, |p|=1} |\langle b(x,z),p\rangle | < \infty . \end{aligned}$$

Suppose furthermore that there exists a constant \(L>0\) such that for all \(x,y\in E\) and \(z\in F\),

$$\begin{aligned} \Vert a(x,z)-a(y,z)\Vert \le L|x-y|, \end{aligned}$$

and that the functions \(b(\cdot ,z)\) are one-sided Lipschitz. Then Assumption 2.14 holds.

Remark 5.12

The above proposition is slightly more general than what we consider in Proposition 5.1, as there we assume that \(F = \{1,\dots ,J\}\) is a finite set.

Proof

   

\(\underline{(\Lambda 1)}\):

Let \((x,p)\in E\times {\mathbb {R}}^d\). Continuity of \(\Lambda \) is a consequence of the fact that

$$\begin{aligned} \Lambda (x,p,\theta ) = \int _F V(x,p,z)\,d\theta (z) \end{aligned}$$

is the pairing of a continuous and bounded function \(V(x,p,\cdot )\) with the measure \(\theta \in {\mathcal {P}}(F)\).

\(\underline{(\Lambda 2)}\):

Let \(x\in E\) and \(\theta \in {\mathcal {P}}(F)\). Convexity of \(p\mapsto \Lambda (x,p,\theta )\) follows since a(x,z) is positive definite by assumption, and evidently \(\Lambda (x,0,\theta ) = 0\).

\(\underline{(\Lambda 3)}\):

We show that the map \(\Upsilon : E\rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned} \Upsilon (x) := \frac{1}{2}\log \left( 1 + |x|^2\right) \end{aligned}$$

is a containment function for \(\Lambda \). Note that \(\nabla \Upsilon (x) = x/(1+|x|^2)\). For any \(x\in E\) and \(\theta \in {\mathcal {P}}(F)\), we have

$$\begin{aligned} \Lambda (x,\nabla \Upsilon (x),\theta )&= \int _F \langle a(x,z)\nabla \Upsilon (x),\nabla \Upsilon (x)\rangle \,d\theta (z) + \int _F\langle b(x,z),\nabla \Upsilon (x)\rangle \,d\theta (z)\\&\le a_{\{x\},\text {max}} |\nabla \Upsilon (x)|^2 + b_{\{x\},\text {max}}|\nabla \Upsilon (x)|\\&\le C (1+|x|) \frac{|x|^2}{(1+|x|^2)^2} + C(1+|x|) \frac{|x|}{1+|x|^2}, \end{aligned}$$

and the boundedness condition follows with the constant

$$\begin{aligned} C_\Upsilon := C \,\sup _x (1+|x|) \left[ \frac{|x|^2}{(1+|x|^2)^2} + \frac{|x|}{1+|x|^2} \right] <\infty . \end{aligned}$$
\(\underline{(\Lambda 4)}\):

Let \(K\subseteq E\) be compact. We have to show that there exist constants \(M, C_1, C_2 \ge 0\) such that for all \(x \in K\), \(p \in {\mathbb {R}}^d\) and all \(\theta _1,\theta _2 \in {\mathcal {P}}(F)\), we have

$$\begin{aligned} \Lambda (x,p,\theta _1) \le \max \left\{ M, C_1 \Lambda (x,p,\theta _2) + C_2 \right\} . \end{aligned}$$
(5.8)

Fix \(\theta _1,\theta _2 \in {\mathcal {P}}(F)\). We have for \(x \in K\)

$$\begin{aligned} \int \langle a(x,z)p,p\rangle d\theta _1(z) \le \frac{a_{K,max}}{a_{K,min}} \int \langle a(x,z)p,p\rangle d\theta _2(z) \end{aligned}$$

In addition, as \(a_{K,min} > 0\) and \(b_{K,max} < \infty \) we have for any \(C > 0\) and sufficiently large |p| that

$$\begin{aligned} \int \langle b(x,z),p\rangle \,d\theta _1(z) - (C+1)\int \langle b(x,z),p\rangle \,d\theta _2(z) \le C \int \langle a(x,z)p,p\rangle \,d\theta _2(z) \end{aligned}$$

Thus, for sufficiently large |p| (depending on C) we have

$$\begin{aligned} \Lambda (x,p,\theta _1) \le (1+C) \Lambda (x,p,\theta _2). \end{aligned}$$

Fix a \(C =: C_1\) and denote the corresponding set of ‘large’ p by S. The map \((x,p,\theta ) \mapsto \Lambda (x,p,\theta )\) is bounded on \(K \times S^c\times \Theta \). Thus, we can find a constant \(C_2\) such that (5.8) holds.

\(\underline{(\Lambda 5)}\):

By the assumption on a(xz), the function \(\Lambda \) is uniformly coercive in the sense that for any compact set \(K\subseteq E\),

$$\begin{aligned} \inf _{x\in K, \theta \in \Theta }\Lambda (x,p,\theta ) \rightarrow \infty \quad \text { as }\; |p|\rightarrow \infty , \end{aligned}$$

and the continuity estimate follows by Proposition 5.15.

\(\square \)

We proceed with the example in which \(\Lambda \) depends on p through exponential functions (Proposition 5.7). Let \(q \in {\mathbb {N}}\) and let

$$\begin{aligned} \Gamma := \left\{ (a,b)\, \big | \,a,b\in \{1,\dots ,q\}, \,a\ne b\right\} \end{aligned}$$

be the set of oriented edges in \(\{1,\dots ,q\}\).

Proposition 5.13

(Exponential function \(\Lambda \)) Let \(E\subseteq {\mathbb {R}}^d\) be the embedding of \(E = {\mathcal {P}}(\{1,\dots ,q\}) \times ({\mathbb {R}}^+)^{|\Gamma |}\) and \(\Theta \) be a topological space. Suppose that \(\Lambda \) is given by

$$\begin{aligned} \Lambda ((\mu ,w),p,\theta ) = \sum _{(a,b) \in \Gamma } v(a,b,\mu ,\theta )\left[ \exp \left\{ p_b - p _a + p_{(a,b)} \right\} - 1 \right] \end{aligned}$$

where v is a proper kernel in the sense of Definition 5.6. Suppose in addition that there is a constant \(C > 0\) such that for all \((a,b) \in \Gamma \) with \(v(a,b, \cdot ,\cdot ) \ne 0\) we have

$$\begin{aligned} \sup _{\mu } \sup _{\theta _1,\theta _2} \frac{v(a,b,\mu ,\theta _1)}{v(a,b,\mu ,\theta _2)} \le C. \end{aligned}$$
(5.9)

Then \(\Lambda \) satisfies Assumption 2.14.

Remark 5.14

Similar to the previous proposition, the assumptions on \(\Lambda \) are satisfied when \(\Theta = {\mathcal {P}}(F)\) for some Polish space F, we have \(v(a,b,\mu ,\theta ) = \mu (a) \int r(a,b,\mu ,z) \theta ({\mathrm {d}}z)\), and there are constants \(0< r_{min} \le r_{max} < \infty \) such that for all \((a,b) \in \Gamma \) such that \(\sup _{\mu ,z} r(a,b,\mu ,z) > 0\), we have

$$\begin{aligned} r_{min} \le \inf _{z} \inf _{\mu } r(a,b,\mu ,z) \le \sup _{z} \sup _{\mu } r(a,b,\mu ,z) \le r_{max}. \end{aligned}$$

Regarding (5.9), for \((a,b) \in \Gamma \) for which \(v(a,b,\cdot ,\cdot )\) is non-trivial, we have

$$\begin{aligned} \frac{v(a,b,\mu ,\theta _1)}{v(a,b,\mu ,\theta _2)} = \frac{\int r(a,b,\mu ,z) \theta _1({\mathrm {d}}z)}{\int r(a,b,\mu ,z) \theta _2({\mathrm {d}}z)} \le \frac{r_{max}}{r_{min}}. \end{aligned}$$

Proof of Proposition 5.13

\(\underline{(\Lambda 1)}\):

The function \(\Lambda \) is continuous as the sum of continuous functions.

\(\underline{(\Lambda 2)}\):

Convexity of \(\Lambda \) as a function of p follows from the fact that \(\Lambda \) is a finite sum of convex functions, and \(\Lambda (x,0,\theta )=0\) is evident.

\(\underline{(\Lambda 3)}\):

The function \(\Upsilon : E\rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned} \Upsilon (\mu ,w) := \sum _{(a,b)\in \Gamma }\log \left[ 1 + w_{(a,b)}\right] \end{aligned}$$

is a containment function for \(\Lambda \). For a verification, see [23].

\(\underline{(\Lambda 4)}\):

Note that

$$\begin{aligned} \Lambda ((\mu ,w),p,\theta _1)&\le \sum _{(a,b)\in \Gamma } v(a,b,\mu ,\theta _1) e^{p_{(a,b)} + p_b - p_a} \\&\le C \sum _{(a,b)\in \Gamma } v(a,b,\mu ,\theta _2) e^{p_{(a,b)} + p_b - p_a} \\&\le C \sum _{(a,b)\in \Gamma } v(a,b,\mu ,\theta _2) \left[ e^{p_{(a,b)} + p_b - p_a} - 1 \right] + C_2 . \end{aligned}$$

Thus the estimate holds with \(M = 0\), \(C_1 = C\) and \(C_2 = C \sup _{\mu ,\theta } \sum _{(a,b)\in \Gamma } v(a,b,\mu ,\theta )\).

\(\underline{(\Lambda 5)}\):

The continuity estimate is the content of Proposition 5.18 below.

\(\square \)

5.3 Verifying the continuity estimate

With the exception of the continuity estimate, the verification of Assumption 2.14 in Sect. 5.2 is straightforward. The continuity estimate, on the other hand, is an extension of the comparison principle, and is therefore more complex. We verify the continuity estimate in three contexts, illustrating that it follows from essentially the same arguments as the standard comparison principle. We will do this for:

  • Coercive Hamiltonians

  • One-sided Lipschitz Hamiltonians

  • Hamiltonians arising from large deviations of empirical measures.

This list is not meant to be exhaustive, but illustrates that the continuity estimate is a sensible extension of the comparison principle, satisfied in a wide range of contexts. In what follows, \(E\subseteq {\mathbb {R}}^d\) is a Polish subset and \(\Theta \) a topological space.

Proposition 5.15

(Coercive \(\Lambda \)) Let \(\Lambda : E \times {\mathbb {R}}^d \times \Theta \rightarrow {\mathbb {R}}\) be continuous and uniformly coercive: that is, for any compact \(K \subseteq E\) we have

$$\begin{aligned} \inf _{x \in K, \theta \in \Theta } \Lambda (x,p,\theta ) \rightarrow \infty \quad \mathrm {as} \; |p| \rightarrow \infty . \end{aligned}$$

Then the continuity estimate holds for \(\Lambda \) with respect to any penalization function \(\Psi \).

Proof

Let \(\Psi (x,y) = \tfrac{1}{2}(x-y)^2\). Let \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\) be fundamental for \(\Lambda \) with respect to \(\Psi \). Set \(p_{\varepsilon ,\alpha } = \alpha (x_{\varepsilon ,\alpha } - y_{\varepsilon ,\alpha })\). By the upper bound (2.4), we find that for sufficiently small \(\varepsilon > 0\) there is some \(\alpha (\varepsilon )\) such that

$$\begin{aligned} \sup _{\alpha \ge \alpha (\varepsilon )} \Lambda \left( y_{\varepsilon ,\alpha }, p_{\varepsilon ,\alpha }, \theta _{\varepsilon ,\alpha }\right) < \infty . \end{aligned}$$

As the variables \(y_{\varepsilon ,\alpha }\) are contained in a compact set by property (C1) of fundamental collections of variables, the uniform coercivity implies that the momenta \(p_{\varepsilon ,\alpha }\) for \(\alpha \ge \alpha (\varepsilon )\) remain in a bounded set. Thus, we can extract a subsequence \(\alpha '\) such that \((x_{\varepsilon ,\alpha '},y_{\varepsilon ,\alpha '},p_{\varepsilon ,\alpha '},\theta _{\varepsilon ,\alpha '})\) converges to \((x,y,p,\theta )\) with \(x = y\) due to property (C2) of fundamental collections of variables. By continuity of \(\Lambda \) we find

$$\begin{aligned}&\liminf _{\alpha \rightarrow \infty } \Lambda \left( x_{\varepsilon ,\alpha }, p_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }\right) - \Lambda \left( y_{\varepsilon ,\alpha },p_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }\right) \\&\quad \le \lim _{\alpha '\rightarrow \infty } \Lambda \left( x_{\varepsilon ,\alpha '}, p_{\varepsilon ,\alpha '},\theta _{\varepsilon ,\alpha '}\right) - \Lambda \left( y_{\varepsilon ,\alpha '},p_{\varepsilon ,\alpha '},\theta _{\varepsilon ,\alpha '}\right) = 0 \end{aligned}$$

establishing the continuity estimate. \(\square \)

Proposition 5.16

(One-sided Lipschitz \(\Lambda \)) Let \(\Lambda : E \times {\mathbb {R}}^d \times \Theta \rightarrow {\mathbb {R}}\) satisfy

$$\begin{aligned} \Lambda (x,\alpha (x-y),\theta ) - \Lambda (y,\alpha (x-y),\theta ) \le c(\theta ) \omega (|x-y| + \alpha (x-y)^2) \end{aligned}$$
(5.10)

for some collection of constants \(c(\theta )\) satisfying \(\sup _\theta c(\theta ) < \infty \) and a function \(\omega : {\mathbb {R}}^+ \rightarrow {\mathbb {R}}^+\) satisfying \(\lim _{\delta \downarrow 0} \omega (\delta ) = 0\).

Then the continuity estimate holds for \(\Lambda \) with respect to \(\Psi (x,y) = \tfrac{1}{2}(x-y)^2\).

Proof

Let \(\Psi (x,y) = \tfrac{1}{2}(x-y)^2\). Let \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\) be fundamental for \(\Lambda \) with respect to \(\Psi \). Set \(p_{\varepsilon ,\alpha } = \alpha (x_{\varepsilon ,\alpha } - y_{\varepsilon ,\alpha })\). We find

$$\begin{aligned}&\liminf _{\alpha \rightarrow \infty } \Lambda \left( x_{\varepsilon ,\alpha }, p_{\varepsilon ,\alpha }, \theta _{\varepsilon ,\alpha }\right) - \Lambda \left( y_{\varepsilon ,\alpha },p_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }\right) \\&\quad \le \liminf _{\alpha \rightarrow \infty } c(\theta _{\varepsilon ,\alpha }) \omega \left( |x_{\varepsilon ,\alpha }-y_{\varepsilon ,\alpha }| + \alpha (x_{\varepsilon ,\alpha }-y_{\varepsilon ,\alpha })^2\right) \end{aligned}$$

which equals 0 since \(\sup _\theta c(\theta ) < \infty \), \(\lim _{\delta \downarrow 0} \omega (\delta ) = 0\), and \(|x_{\varepsilon ,\alpha }-y_{\varepsilon ,\alpha }| + \alpha (x_{\varepsilon ,\alpha }-y_{\varepsilon ,\alpha })^2 \rightarrow 0\) by property (C2) of a fundamental collection of variables. \(\square \)

For the empirical measure of a collection of independent processes, one obtains maps \(\Lambda \) that are neither uniformly coercive nor Lipschitz. Also in this context one can establish the continuity estimate. We treat a simple one-dimensional case and then state a more general version, for which we refer to [23].

Proposition 5.17

Suppose that \(E = [-1,1]\) and that \(\Lambda (x,p,\theta )\) is given by

$$\begin{aligned} \Lambda (x,p,\theta ) = (1-x) c_+(\theta ) \left[ e^{p} -1\right] + (1+x) c_-(\theta ) \left[ e^{-p} -1\right] \end{aligned}$$

with \(c_-,c_+\) non-negative functions of \(\theta \). Then the continuity estimate holds for \(\Lambda \) with respect to \(\Psi (x,y) = \tfrac{1}{2}(x-y)^2\).

Proof

Let \(\Psi (x,y) = \tfrac{1}{2}(x-y)^2\). Let \((x_{\varepsilon ,\alpha },y_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha })\) be fundamental for \(\Lambda \) with respect to \(\Psi \). Set \(p_{\varepsilon ,\alpha } = \alpha (x_{\varepsilon ,\alpha } - y_{\varepsilon ,\alpha })\).

We have

$$\begin{aligned}&\Lambda \left( x_{\varepsilon ,\alpha }, p_{\varepsilon ,\alpha }, \theta _{\varepsilon ,\alpha }\right) - \Lambda \left( y_{\varepsilon ,\alpha },p_{\varepsilon ,\alpha },\theta _{\varepsilon ,\alpha }\right) \\&\quad = \left( y_{\varepsilon ,\alpha }-x_{\varepsilon ,\alpha }\right) c_+(\theta _{\varepsilon ,\alpha }) \left[ e^{p_{\varepsilon ,\alpha }} -1\right] + \left( x_{\varepsilon ,\alpha }-y_{\varepsilon ,\alpha }\right) c_-(\theta _{\varepsilon ,\alpha }) \left[ e^{-p_{\varepsilon ,\alpha }} -1\right] . \end{aligned}$$

Now note that \(y_{\varepsilon ,\alpha }-x_{\varepsilon ,\alpha }\) is positive if and only if \(e^{p_{\varepsilon ,\alpha }} -1\) is negative so that the first term is bounded above by 0. With a similar argument the second term is bounded above by 0. Thus the continuity estimate is satisfied. \(\square \)
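The sign argument in this proof is easy to test numerically. In the following minimal sketch (hypothetical positive constants \(c_\pm \) and a fixed \(\alpha \)), the difference of the two \(\Lambda \)-values at \(p = \alpha (x-y)\) is never positive:

```python
import numpy as np

rng = np.random.default_rng(1)
c_plus, c_minus, alpha = 1.3, 0.7, 5.0   # hypothetical constants

def gap(x, y):
    # Lambda(x, p) - Lambda(y, p) at p = alpha * (x - y), term by term.
    p = alpha * (x - y)
    term1 = (y - x) * c_plus * (np.exp(p) - 1.0)
    term2 = (x - y) * c_minus * (np.exp(-p) - 1.0)
    return term1 + term2

xs, ys = rng.uniform(-1, 1, 1000), rng.uniform(-1, 1, 1000)
print(np.max(gap(xs, ys)))               # <= 0: each term is nonpositive
```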

Proposition 5.18

Suppose \(E = {\mathcal {P}}(\{1,\dots ,q\}) \times ({\mathbb {R}}^+)^\Gamma \) and suppose that \(\Lambda \) is given by

$$\begin{aligned} \Lambda ((\mu ,w),p,\theta ) = \sum _{(a,b) \in \Gamma } v(a,b,\mu ,\theta )\left[ \exp \left\{ p_b - p _a + p_{(a,b)} \right\} - 1 \right] \end{aligned}$$

where v is a proper kernel. Then the continuity estimate holds for \(\Lambda \) with respect to penalization functions (see Sect. C)

$$\begin{aligned} \Psi _1(\mu ,{\hat{\mu }})&:= \frac{1}{2} \sum _{a} (({\hat{\mu }}(a) - \mu (a))^+)^2, \\ \Psi _2(w,{\hat{w}})&:= \frac{1}{2} \sum _{(a,b) \in \Gamma } (w_{(a,b)} - {\hat{w}}_{(a,b)})^2. \end{aligned}$$

Here we denote \(r^+ = r \vee 0\) for \(r \in {\mathbb {R}}\).

In this context, one can use coercivity as in Proposition 5.15 in combination with the directional properties used in the proof of Proposition 5.17 above.

To be more specific: the proof of this proposition can be carried out exactly as the proof of Theorem 3.8 of [23]; namely, at any point where a converging subsequence is constructed, the variables \(\alpha \) need to be chosen such that we also obtain convergence of the measures \(\theta _{\varepsilon ,\alpha }\) in \({\mathcal {P}}(F)\).

5.4 Verifying Assumption 2.17 for the exponential internal Hamiltonian

Proposition 5.19

Let \(\Lambda \) be as in Proposition 5.7:

$$\begin{aligned} \Lambda ((\mu ,w),p,\theta ) = \sum _{(a,b) \in \Gamma } v(a,b,\mu ,\theta )\left[ \exp \left\{ p_b - p _a + p_{(a,b)} \right\} - 1 \right] \end{aligned}$$

Then we have \(\partial _p \Lambda ((\mu ,w),p,\theta ) \subseteq T_E(\mu ,w)\) for all p and \(\theta \).

A sketch of the verification of Assumption 2.17

We sketch the proof in a simplified case, the general case being similar. Consider \(E={\mathcal {P}}(\{a,b\})\) (ignoring the flux for the moment), and identify E with the simplex in \({\mathbb {R}}^2\). Fix the control \(\theta \in \Theta \). We have to show \(\partial _p \Lambda (\mu ,p,\theta ) \subseteq T_E(\mu )\). Recall that \(T_E(\mu )\) is the tangent cone at \(\mu \), that is, the set of vectors at \(\mu \) pointing into E. We compute the vector \(\nabla _p \Lambda (\mu ,p,\theta ) \in {\mathbb {R}}^2\):

$$\begin{aligned} \nabla _p \Lambda (\mu ,p,\theta ) = \begin{pmatrix} -v(a,b,\mu ,\theta ) e^{p_b-p_a} + v(b,a,\mu ,\theta ) e^{p_a-p_b}\\ v(a,b,\mu ,\theta ) e^{p_b-p_a} - v(b,a,\mu ,\theta ) e^{p_a-p_b} \end{pmatrix}. \end{aligned}$$

For \(\mu =(\mu _a,\mu _b)\in E\) with \(\mu _a,\mu _b > 0\), the tangent cone \(T_E(\mu )\) is spanned by \((1,-1)^T\). Since \(\nabla _p \Lambda (\mu ,p,\theta )\) is orthogonal to \((1,1)^T\), we indeed find that \(\partial _p \Lambda (\mu ,p,\theta ) \subseteq T_E(\mu )\) in that case. For \(\mu =(1,0)\), the tangent cone is \(T_E(1,0)=\{\lambda (-1,1)^T\,:\,\lambda \ge 0\}\). We have

$$\begin{aligned} \nabla _p \Lambda (\mu ,p,\theta ) = \begin{pmatrix} - v(a,b,\mu ,\theta ) e^{p_b-p_a}\\ v(a,b,\mu ,\theta ) e^{p_b-p_a} \end{pmatrix}, \end{aligned}$$

which is parallel to \((-1,1)^T\), and therefore \(\partial _p \Lambda (\mu ,p,\theta ) \subseteq T_E(\mu )\). The argument is similar for \(\mu =(0,1)\). The general case (including the fluxes) follows from a more tedious, but straightforward, computation. \(\square \)
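The orthogonality used in this sketch can be confirmed by a direct computation; in the snippet below the fluxes \(v(a,b,\mu ,\theta )\) and \(v(b,a,\mu ,\theta )\) are replaced by hypothetical positive numbers:

```python
import numpy as np

def grad_p_Lambda(p, v_ab=0.8, v_ba=1.2):
    # Gradient of Lambda(mu, p, theta) in the two-state sketch above; v_ab
    # and v_ba stand in for v(a, b, mu, theta) and v(b, a, mu, theta).
    e_ab, e_ba = np.exp(p[1] - p[0]), np.exp(p[0] - p[1])
    return np.array([-v_ab * e_ab + v_ba * e_ba,
                      v_ab * e_ab - v_ba * e_ba])

g = grad_p_Lambda(np.array([0.3, -0.7]))
print(g, g.sum())  # the components sum to zero: g lies in span{(1, -1)^T}
```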