1 Introduction

In this paper, we consider an infinite horizon control problem for the dynamics of an agent constrained to remain on a multidimensional junction in \({\mathbb {R}}^3\), i.e. a union of \(N\ge 2\) half-planes \({\mathcal {P}}_i\) which share a straight line \(\Gamma \), see Fig. 2. The controlled dynamics are given by a system of ordinary differential equations: in each \( {\mathcal {P}}_i \), the dynamics are driven by a drift \(f_i(\cdot ,\cdot )\), to which is associated a running cost \(\ell _i(\cdot ,\cdot )\). Moreover, the agent pays a cost \(c_i(\cdot )\) each time it enters the half-plane \({\mathcal {P}}_i\) from \(\Gamma \). The goal of this work is to study the properties of the value function of this control problem and to derive the associated Hamilton–Jacobi (HJ) equation under some regularity conditions on the dynamics, running costs and entry costs. Although we will not discuss it in this paper, the optimal control problem with exit costs, in which the agent pays a cost each time it exits a half-plane rather than each time it enters one, can be solved similarly. Oudet [25] considers a similar optimal control problem but without entry or exit costs from the interface to the half-planes.

When the interface \(\Gamma \) is reduced to a point, the junction becomes a simple network with one vertex, i.e. a 1-dimensional junction. Optimal control problems (without entry costs) in which the set of admissible states is a network have attracted a lot of interest in recent years. Among the first papers discussing this topic, Achdou et al. [2] derived an HJ equation associated with an infinite horizon optimal control problem on networks and proposed a suitable notion of viscosity solution, in which the admissible test-functions are those whose restriction to each edge is \( C^1 \). Independently and at the same time, Imbert et al. [18] proposed an equivalent notion of viscosity solution for studying an HJ approach to junction problems and traffic flows. Both [2] and [18] contain first results on the comparison principle. In the particular case of eikonal equations on networks, Schieborn and Camilli [26] considered a less general notion of viscosity solution. For the latter case, Camilli and Marchi [12] showed the equivalence between the notions of viscosity solution given in [2, 18] and [26]. Optimal control on networks with entry costs (and exit costs) has recently been considered by the first author [14].

An important feature of the effect of the entry costs is a possible discontinuity of the value function. Discontinuous solutions of HJ equations have been studied by various authors, see for example Barles [5] for general open domains in \( {\mathbb {R}}^d \), Frankowska and Mazzola [15] for state constraint problems, and in particular Graber et al. [16] for a class of HJ equations on networks.

In the case considered in the present work, the effect of entry costs induces a discontinuity of the value function \( {\mathcal {V}}\) at the interface \(\Gamma \), while it is still continuous on each \( {\mathcal {P}}_i \backslash \Gamma \). This allows us to adopt the techniques which apply to the continuous case in the works of Barles et al. [7] and Oudet [25]: we split the value function \({\mathcal {V}}\) into the collection \(\{v_1,\ldots ,v_N\}\), where each \( v_i \) is a continuous function defined on \( {\mathcal {P}}_i \) and satisfies

$$\begin{aligned} v_i (x) = {\left\{ \begin{array}{ll} {\mathcal {V}}(x) &{} \text{ if } x\in {\mathcal {P}}_i \backslash \Gamma ,\\ \displaystyle \lim _{({\mathcal {P}}_i \backslash \Gamma ) \ni z \rightarrow x} {\mathcal {V}}(z) &{} \text {if } x\in \Gamma . \end{array}\right. } \end{aligned}$$

We note that the existence of the limit in the above formula comes from the fact that the value function is Lipschitz continuous in a neighborhood of \( \Gamma \) (see Lemma 3.3), thanks to the ’strong controllability’ assumption, which is introduced below. The first main result of the present work is to show that \((v_1,\ldots ,v_N,{\mathcal {V}}|_\Gamma )\) is a viscosity solution of the following system

$$\begin{aligned} \lambda v_i (x) + H_i \left( x, \partial v_i (x) \right) = 0,&\quad \text {if }x\in {\mathcal {P}}_i \backslash \Gamma , \end{aligned}$$
(1.1)
$$\begin{aligned} \lambda v_i (x) + \max \left\{ H_i^+ \left( x, \partial v_i (x) \right) , -\lambda {\mathcal {V}}|_\Gamma (x) \right\} = 0,&\quad \text {if }x\in \Gamma , \end{aligned}$$
(1.2)
$$\begin{aligned} \lambda {\mathcal {V}}|_{\Gamma }(x) + \max \left\{ -\lambda \min _{i=1,\ldots ,N} \{v_i (x) + c_i(x) \} , H_\Gamma \left( x,\dfrac{\partial {\mathcal {V}}|_{\Gamma }}{\partial e_0} (x) \right) \right\} =0,&\quad \text {if } x\in \Gamma , \end{aligned}$$
(1.3)

where \( H_i \) is the Hamiltonian corresponding to the half-plane \( {\mathcal {P}}_i \), \( {\mathcal {V}}|_\Gamma \) is the restriction of the value function to the interface and \( H_{\Gamma } \) is the Hamiltonian defined on \(\Gamma \). At \(x \in \Gamma \), the definition of the Hamiltonian has to be specific, in order to take into account all the possible behaviors of trajectories in a neighborhood of \( \Gamma \). More specifically,

  • the term \( H_i^+ \left( x, \partial v_i (x) \right) \) accounts for the situation in which the trajectory does not leave \( {\mathcal {P}}_i \),

  • the term \( \min _{i=1,\ldots ,N} \{v_i (x) + c_i(x) \} \) accounts for situations in which the trajectory enters \({\mathcal {P}}_k\) where \( v_k(x) + c_k(x) = \min _{i=1,\ldots ,N} \{v_i (x) + c_i(x) \} \),

  • the term \( H_\Gamma (x,\frac{\partial {\mathcal {V}}|_\Gamma }{\partial e_0} (x)) \) accounts for situations in which the trajectory remains on \( \Gamma \).

This feature is quite different from the one induced by the effect of entry costs in a network (i.e. when \(\Gamma \) is reduced to a point) considered in [14], where the value function at the junction point is a constant, namely the minimum between the cost of staying at the junction point forever and the cost of immediately entering the edge with the lowest possible cost.

The paper is organized as follows. In Sect. 2, we formulate the optimal control problem on a multidimensional junction in \({\mathbb {R}}^3\) with entry costs. In Sect. 3, we study the control problem under the strong controllability condition: we derive the system of HJ equations associated with the optimal control problem, prove a comparison principle, which leads to the well-posedness of (1.1)–(1.3), and prove that the value function of the optimal control problem is the unique discontinuous solution of the HJ system. We give two different proofs of the comparison principle. The first one is inspired by the work of Lions and Souganidis [21] and uses arguments from the theory of PDEs; the second one uses a blend of arguments from optimal control theory and PDE techniques suggested in [3, 7, 8] and [25]. Finally, in Sect. 4, the same program is carried out when the strong controllability condition is replaced by a weaker one, which we call ’moderate controllability near the interface’. The proof of the comparison principle under the moderate controllability condition relies only on the PDE techniques of Lions and Souganidis [21].

The results obtained in the present work extend easily to multidimensional junctions in \({\mathbb {R}}^d\), i.e. unions of \(N\ge 2\) half-hyperplanes \({\mathcal {P}}_i\) which share an affine space \(\Gamma \) of dimension \( d-2 \), and to the more general class of ramified sets, i.e. closed and connected subsets of \({\mathbb {R}}^d\) obtained as the union of embedded manifolds with dimension strictly less than d, for which the interfaces are non-intersecting manifolds of dimension \(d-2\), see Fig. 1a for an example. We do not know whether these results apply to ramified sets whose interfaces of dimension \( d-2 \) cross each other (see Fig. 1b). Recent results on optimal control and HJ equations on ramified sets include Bressan and Hong [11], Camilli et al. [13], Nakayasu [24] and Hermosilla and Zidani [17], and the book of Barles and Chasseigne [9].

Fig. 1 Example of ramified sets

2 Formulation of the control problem on a junction

2.1 The geometry of the state of the system

Let \(\left\{ e_{i}\right\} _{0\le i\le N}\) be distinct unit vectors in \(\mathbb {R}^{3}\) such that \(\ e_i \cdot e_0 =0\) for all \(i\in \left\{ 1,\ldots ,N\right\} \). The state space of the system is the junction \(\mathcal {S}\), defined as the union of the N closed half-planes \(\mathcal {P}_{i}=\mathbb {R}e_{0}\times \mathbb {R}^{+}e_{i}\). The half-planes \(\mathcal {P}_{i}\) are glued along the straight line \(\Gamma :=\mathbb {R}e_{0}\) (see Fig. 2).

Fig. 2 The junction \({\mathcal {S}}\) in \({\mathbb {R}}^3\)

If \(x\in {\mathcal {S}}\backslash \Gamma \), there exist unique \(i\in \left\{ 1,\ldots ,N\right\} \), \(x^i>0\) and \(x^0\in {\mathbb {R}}\) such that

$$\begin{aligned} x=x^0 e_0 + x^i e_i =: (x^i,x^0). \end{aligned}$$

For \(x=(x^i,x^0)\in {\mathcal {P}}_i\) and \(y=(y^j,y^0)\in {\mathcal {P}}_j\), the geodesic distance \(d(x,y)\) between the two points \(x,y\in {\mathcal {S}}\) is

$$\begin{aligned} d(x,y)={\left\{ \begin{array}{ll} \left| x-y\right| =\left( \left| x^0-y^0\right| ^2+\left| x^i-y^i\right| ^2\right) ^{\frac{1}{2}}, &{} \text {if }x,y\in \mathcal {P}_{i},\\ \displaystyle \inf _{z\in \Gamma }\left\{ \left| x-z\right| +\left| z-y\right| \right\} = \left( \left| x^0-y^0\right| ^2+\left| x^i+y^j\right| ^2\right) ^{\frac{1}{2}}, &{} \text {if }x\in \mathcal {P}_{i},~ y\in \mathcal {P}_{j},~ i\ne j. \end{array}\right. } \end{aligned}$$
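For concreteness, the following minimal sketch (our own illustration; encoding a point of \({\mathcal {P}}_i\) by the triple \((i,x^i,x^0)\) and all function names are assumptions, not the paper's notation) computes this distance:

```python
import math

def geodesic_distance(x, y):
    """d(x, y) for x = (i, xi, x0) in P_i and y = (j, yj, y0) in P_j."""
    i, xi, x0 = x
    j, yj, y0 = y
    if i == j:
        # Same half-plane: the usual Euclidean distance.
        return math.hypot(x0 - y0, xi - yj)
    # Different half-planes: the shortest path crosses Gamma; unfolding the
    # two half-planes into a single plane gives the second formula above.
    return math.hypot(x0 - y0, xi + yj)

# Two points at distance 1 from Gamma on different half-planes:
print(geodesic_distance((1, 1.0, 0.0), (2, 1.0, 0.0)))  # prints 2.0
```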

2.2 The optimal control problem

We consider an infinite horizon optimal control problem which has different dynamics and running costs for each half-plane. For \(i=1,\ldots ,N\),

  • the set of controls (action set) on \({\mathcal {P}}_i\) is denoted by \(A_i\),

  • on \({\mathcal {P}}_i\) the dynamics of the system are deterministic, driven by \(f_i\),

  • the agent has to pay the running cost \(\ell _i\) while (s)he is on \({\mathcal {P}}_i\).

The following conditions, referred to as [A] hereafter, are our standing assumptions throughout the paper.

[A0]:

Control sets. Let A be a metric space (for example \(A=\mathbb {R}^d\)). For \(i=1,\ldots , N\), \( A_i \) is a nonempty compact subset of A and the sets \(A_{i}\) are disjoint.

[A1]:

Dynamics and running costs. For \(i=1,\ldots ,N\), the functions \(\ell _i:{\mathcal {P}}_i\times A_{i}\rightarrow \mathbb {R}\) and \( f_{i}: {\mathcal {P}}_i\times A_{i} \rightarrow {\mathbb {R}}^3 \) are continuous and bounded by M. Moreover, there exists \(L>0\) such that

$$\begin{aligned} \left| f_{i}\left( x,a\right) -f_{i}\left( y,a\right) \right| ,~ \left| \ell _{i}\left( x,a\right) -\ell _{i}\left( y,a\right) \right| \le L\left| x-y\right| ,\quad \hbox { for all } x,y\in {\mathcal {P}}_i, a\in A_{i}. \end{aligned}$$

Hereafter, we will use the notation

$$\begin{aligned} F_{i}\left( x\right) :=\left\{ f_{i}\left( x,a\right) :a\in A_{i}\right\} . \end{aligned}$$

      Entry costs. \(\left\{ c_1,\ldots ,c_N\right\} \) is a set of entry cost functions, where \(c_i: \Gamma \rightarrow {\mathbb {R}}^+\) is Lipschitz continuous and bounded from below by some positive constant C.

[A2]:

Convexity of dynamics and costs. For \(x\in {\mathcal {P}}_i\), the following set

$$\begin{aligned} \textsc {FL}_{i}\left( x\right) :=\left\{ \left( f_{i}\left( x,a\right) ,\ell _{i}\left( x,a\right) \right) :a\in A_{i}\right\} \end{aligned}$$

is non empty, closed and convex.

Remark 2.1

In [A0], the assumption that the sets \(A_i\) are disjoint is not restrictive since we can always replace \(A_i\) by \({\tilde{A}}_i=A_i\times \left\{ i\right\} \). Assumption [A2] is made to avoid the use of relaxed controls (see [4] for the definition of relaxed controls). Some of these conditions could be weakened, but we keep them in order to keep the presentation of the results easy to follow.

2.2.1 Controlled dynamics

Let \({\mathcal {M}}\) be the closed set given by

$$\begin{aligned} {\mathcal {M}}:= \left\{ (x,a): x\in {\mathcal {P}}_i,~a\in A_i \text { if } x\in {\mathcal {P}}_i \backslash \Gamma , \text { and } a\in \cup _{i=1}^N A_i \text { if } x\in \Gamma \right\} \end{aligned}$$

and define the function f on \({\mathcal {M}}\) by

$$\begin{aligned} f\left( x,a\right) ={\left\{ \begin{array}{ll} f_{i}\left( x,a\right) , &{} \quad \hbox {if }x\in {\mathcal {P}}_i\backslash \Gamma \hbox { and }a\in A_{i},\\ f_{i}\left( x,a\right) , &{} \quad \hbox {if }x\in \Gamma \hbox { and }a\in A_{i}. \end{array}\right. } \end{aligned}$$

The function f is continuous on \({\mathcal {M}}\) since the sets \(A_{i}\) are disjoint. Consider the set \({\tilde{F}}\left( x\right) \) which contains all the ’possible speeds’ at x defined by

$$\begin{aligned} {\tilde{F}}\left( x\right) ={\left\{ \begin{array}{ll} F_{i}\left( x\right) &{} \quad \hbox {if }x\in {\mathcal {P}}_i\backslash \Gamma ,\\ \bigcup _{i=1}^{N}F_{i}\left( x\right) &{} \quad \text{ if } x\in \Gamma . \end{array}\right. } \end{aligned}$$
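The following short sketch (our own illustration, with assumed names and a state encoded as \((i,x^i,x^0)\), using \(x^i=0\) on \(\Gamma \)) mirrors this construction: off \(\Gamma \) only \(A_i\) is admissible, while on \(\Gamma \) the union of all control sets is.

```python
def admissible_controls(state, control_sets):
    """Admissible pairs (j, a) at state = (i, xi, x0): controls from A_i off
    Gamma (xi > 0), and from the union of all A_j on Gamma (xi == 0)."""
    i, xi, _ = state
    if xi > 0:
        return [(i, a) for a in control_sets[i]]
    return [(j, a) for j, Aj in control_sets.items() for a in Aj]

def possible_speeds(state, control_sets, dynamics):
    """The set F~(x): the speeds f_j(x, a) over all admissible pairs (j, a)."""
    return [dynamics[j](state, a)
            for j, a in admissible_controls(state, control_sets)]
```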

For \(x\in {\mathcal {S}}\), the set of admissible trajectories starting from x is

$$\begin{aligned} Y_{x}=\left\{ y_{x}\in Lip\left( \mathbb {R}^{+};{\mathcal {S}}\right) \left| \begin{array}[c]{l} {\dot{y}}_{x}\left( t\right) \in {\tilde{F}}\left( y_{x}\left( t\right) \right) \quad \hbox {for a.e. } t>0\\ y_{x}\left( 0\right) =x. \end{array} \right. \right\} . \end{aligned}$$

Thanks to the Filippov implicit function lemma (see [23]), it is shown in [25, Theorems 3.2.2 and 3.3.1] that under the respective assumptions [A3] and \([{\tilde{A}}3]\) below, the set \(Y_x\) is not empty. We introduce the set of admissible controlled trajectories starting from x

$$\begin{aligned} {\mathcal {T}}_x= & {} \left\{ \left( y_{x},\alpha \right) \in L_{loc}^{\infty }\left( \mathbb {R}^{+};\mathcal {M}\right) :y_{x}\in Lip\left( \mathbb {R}^{+};{\mathcal {S}}\right) \hbox { and }y_{x}\left( t\right) \right. \\&\left. = x+\int _{0}^{t}f\left( y_{x}\left( s\right) ,\alpha \left( s\right) \right) ds\right\} , \end{aligned}$$

where \( \left( y_{x},\alpha \right) \in L_{loc}^{\infty }\left( \mathbb {R}^{+};\mathcal {M}\right) \) means \( t\mapsto \left( y_{x} (t),\alpha (t)\right) \in L_{loc}^{\infty }\left( \mathbb {R}^{+};\mathcal {M}\right) \). We note that if \((y_x,\alpha )\in {\mathcal {T}}_x\) then \(y_x\in Y_x\). Thus, from now on, we will denote \(y_x\) by \(y_{x,\alpha }\) if \((y_x,\alpha )\in {\mathcal {T}}_x\). By continuity of the trajectory \(y_{x,\alpha }\), the set \(T^{\Gamma }_{x,\alpha }:=\left\{ t\in {\mathbb {R}}^+: y_{x,\alpha }(t)\in \Gamma \right\} \) containing all the times at which the trajectory stays on \(\Gamma \) is closed and therefore, the set \(T^{i}_{x,\alpha }:=\left\{ t\in {\mathbb {R}}^+: y_{x,\alpha }(t)\in {\mathcal {P}}_i \backslash \Gamma \right\} \) is open. Consequently, \(T^{i}_{x,\alpha }\) is a countable union of disjoint open intervals

$$\begin{aligned} T^i_{x,\alpha } = {\left\{ \begin{array}{ll} \left[ 0,\eta _{i0}\right) \cup \bigcup _{k\in K_{i}\subset \mathbb {N^{\star }}}\left( t_{ik},\eta _{ik}\right) , &{} \quad \hbox {if }x\in {\mathcal {P}}_i \backslash \Gamma ,\\ \bigcup _{k\in K_{i}\subset \mathbb {N^{\star }}}\left( t_{ik},\eta _{ik}\right) , &{} \quad \hbox {if }x\notin {\mathcal {P}}_i \backslash \Gamma , \end{array}\right. } \end{aligned}$$

where \(K_{i}=\left\{ 1,\ldots ,n\right\} \) if the trajectory \(y_{x,\alpha }\) enters \({\mathcal {P}}_i\) exactly n times, \(K_{i}={\mathbb {N}}\) if the trajectory \(y_{x,\alpha }\) enters \({\mathcal {P}}_i\) infinitely many times and \( K_i = \emptyset \) if the trajectory never enters \( {\mathcal {P}}_i \).

Remark 2.2

From the previous definition, we see that \(t_{ik}\) is an entry time in \({\mathcal {P}}_i \backslash \Gamma \) and \(\eta _{ik}\) is an exit time from \({\mathcal {P}}_i \backslash \Gamma \). Hence

$$\begin{aligned} x_{ik}:=y_{x,\alpha }(t_{ik})\in \Gamma ,~z_{ik}:=y_{x,\alpha }(\eta _{ik})\in \Gamma . \end{aligned}$$

We now define the cost functional and the value function of the optimal control problem.

2.2.2 Cost functional and value function

Definition 2.3

The cost functional associated to the trajectory \((y_x,\alpha )\in {\mathcal {T}}_x\) is defined by

$$\begin{aligned} J(x,\alpha )=\int _0^\infty \ell (y_{x} (t),\alpha (t)) e^{-\lambda t} dt + \sum ^N_{i=1} \sum _{k\in K_i} c_i (x_{ik}) e^{-\lambda t_{ik}}, \end{aligned}$$

where \( \lambda >0 \) and the running cost \(\ell : {\mathcal {M}}\rightarrow {\mathbb {R}}\) is

$$\begin{aligned} \ell (x,a)= {\left\{ \begin{array}{ll} \ell _{i}\left( x,a\right) &{} \quad \hbox {if }x\in {\mathcal {P}}_i\backslash \Gamma \hbox { and }a\in A_{i},\\ \ell _{i}\left( x,a\right) &{} \quad \hbox {if }x\in \Gamma \hbox { and }a\in A_{i}. \end{array}\right. } \end{aligned}$$
(2.1)

The value function of the infinite horizon optimal control problem is defined by

$$\begin{aligned} {\mathcal {V}}(x) = \inf _{(y_{x},\alpha ) \in {\mathcal {T}}_x} J(x,\alpha ). \end{aligned}$$
(2.2)
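As an illustration, \(J\) can be approximated along a time-discretized trajectory; the sketch below is our own (assumed names, piecewise-constant sampling, and the entry point \(x_{ik}\in \Gamma \) approximated by the current state), not part of the paper's development.

```python
import math

def cost_functional(states, controls, running_cost, entry_cost, lam, dt):
    """Riemann-sum approximation of J(x, alpha): states[k] = (i_k, x_k), where
    i_k = 0 when the state lies on Gamma and i_k = i when it lies in P_i."""
    J = 0.0
    for k, ((i, x), a) in enumerate(zip(states, controls)):
        t = k * dt
        # Discounted running cost ell along the trajectory.
        J += running_cost(i, x, a) * math.exp(-lam * t) * dt
        # Discounted entry cost c_i, charged each time the state enters P_i.
        if i != 0 and k > 0 and states[k - 1][0] != i:
            J += entry_cost(i, x) * math.exp(-lam * t)
    return J
```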

Remark 2.4

By the definition of the value function, we are mainly interested in admissible control laws \(\alpha \) for which \(J(x,\alpha )<+\infty \). In such a case, even if the set \(K_i\) may be infinite, it is possible to reorder \(\left\{ t_{ik},\eta _{ik}:k\in {\mathbb {N}}\right\} \) such that

$$\begin{aligned} t_{i1}<\eta _{i1}<t_{i2}<\eta _{i2}<\ldots<t_{ik}<\eta _{ik}<\ldots , \end{aligned}$$

and

$$\begin{aligned} \lim _{k\rightarrow \infty } t_{ik} = \lim _{k\rightarrow \infty } \eta _{ik}=+\infty . \end{aligned}$$

Indeed, since the entry cost functions are bounded from below by a positive constant, if the switching times had a finite cluster point, then \(J(x,\alpha )\) would be infinite, which contradicts the assumption \(J(x,\alpha )<+\infty \). In other words, the state cannot switch half-planes infinitely many times in finite time, since otherwise the cost functional would be infinite.

The following example shows that the value function with entry costs can possibly be discontinuous at the interface \(\Gamma \).

Example 2.5

Consider a simple junction \( {\mathcal {S}}\) with two half-planes \( {\mathcal {P}}_1\) and \( {\mathcal {P}}_2 \). To simplify, we may identify \( {\mathcal {S}}\equiv {\mathbb {R}}^2 \) and \({\mathcal {P}}_1= {\mathbb {R}}^+ e_1 \times {\mathbb {R}}e_0 \equiv (-\infty , 0] \times {\mathbb {R}}\), \({\mathcal {P}}_2 = {\mathbb {R}}^+ e_2 \times {\mathbb {R}}e_0 \equiv [0, +\infty ) \times {\mathbb {R}}\) and \(\Gamma = {\mathbb {R}}e_0 \equiv \{0\} \times {\mathbb {R}}\). The control sets are \(A_i=\left\{ (a_i,a_0)\in {\mathbb {R}}^2 : a_0^2+a_i^2\le 1\right\} \) with \(i\in \left\{ 1,2\right\} \). Set

$$\begin{aligned} (f(x,a),\ell (x,a))= {\left\{ \begin{array}{ll} ((a_1,a_0) , 1)&{} \text {if } x\in {\mathcal {P}}_1 \text { and } a=(a_1,a_0) \in A_1, \\ ((a_2,a_0),1-a_2) &{} \text {if } x\in {\mathcal {P}}_2 \text { and } a=(a_2,a_0) \in A_2, \end{array}\right. } \end{aligned}$$

and entry cost functions \(c_1\equiv C_1\), where \(C_1\) is a positive constant, and

$$\begin{aligned} c_2 (\zeta ) = {\left\{ \begin{array}{ll} 3 - |\zeta |, &{} \text { if } |\zeta | \le 1, \\ 2, &{} \text { if } |\zeta | \ge 1. \end{array}\right. } \end{aligned}$$

For \(x\in {\mathcal {P}}_2\backslash \Gamma \), we have \({\mathcal {V}}(x)=v_2(x)=0\), with the optimal strategy consisting of choosing \(\alpha \equiv (a_2 = 1, a_0 = 0)\). For \(x\in {\mathcal {P}}_1\), we can check the following:

  • If \( 2\ge 1/\lambda \), then \( {\mathcal {V}}(x) = 1/\lambda \) with optimal control law \( \alpha \equiv (a_1 = 0, a_0 = 1) \).

  • If \( 2<1/\lambda \), we consider \( x= (x_1,x_0) \in {\mathcal {P}}_1 \) in two cases:

    Case 1:

    If \( |x_1| \ge 1 \), then the optimal control law is \( \alpha (t) = (a_1= -1, a_0 =0)\) for \( t\le |x_1| \) and \( \alpha (t) = (a_2 = 1, a_0 = 0) \) for \( t> |x_1| \), and

    $$\begin{aligned} {\mathcal {V}}(x) = \int ^{|x_1|}_0 e^{-\lambda t} dt + c_2 ((0,x_1)) e^{-\lambda |x_1|}+0 = \dfrac{1-e^{-\lambda |x_1|} }{\lambda } +2e^{-\lambda |x_1|}. \end{aligned}$$

    The optimal trajectory starting from \( x = (x_1,x_0) \in {\mathcal {P}}_1 \) in case \( |x_1|\ge 1 \) is plotted in red in Fig. 3.

    Case 2:

    If \( |x_1| \le 1 \), let \( \tau (x) = \left( |x_1|^2 +|x_0 - 1|^2\right) ^{1/2} \) and \( s: [-1,1] \rightarrow {\mathbb {R}}\) with \( s(\zeta ) =1 \) if \( \zeta \ge 0 \) and \( s(\zeta ) =-1 \) if \( \zeta < 0 \). We have the optimal control law

    $$\begin{aligned} \alpha (t) = {\left\{ \begin{array}{ll} \left( a_1 = -\dfrac{x_1}{\tau (x)}, a_0 = \dfrac{1 - x_0}{\tau (x)}\right) , &{} t\le \tau (x),\\ (a_2 = 1, a_0 = 0), &{} t\ge \tau (x). \end{array}\right. } \end{aligned}$$

    and the value function

    $$\begin{aligned} {\mathcal {V}}(x)= \int ^{\tau (x)}_0 e^{-\lambda t} dt + c_2 ( (0,s(x_1)) ) e^{-\lambda \tau (x)}+0= \frac{1-e^{-\lambda \tau (x)}}{\lambda }+ 2 e^{-\lambda \tau (x)}. \end{aligned}$$

    The optimal trajectory starting from \( x = (x_1,x_0) \in {\mathcal {P}}_1 \) in case \( |x_1|< 1 \) is plotted in blue in Fig. 3.

Fig. 3 The trajectories in case \(2<1/ \lambda \)

To sum up, there are two cases:

  1.

    If \(\displaystyle \inf _{x\in \Gamma }c_2(x) = 2 \ge 1/\lambda \), then

    $$\begin{aligned} {\mathcal {V}}(x)= {\left\{ \begin{array}{ll} 0 &{} \text {if } x\in {\mathcal {P}}_2 \backslash \Gamma ,\\ \dfrac{1}{\lambda } &{} \text {if } x\in {\mathcal {P}}_1. \end{array}\right. } \end{aligned}$$

    The graph of the value function with entry costs satisfying \( \inf _{x\in \Gamma } c_2(x) \ge 1/\lambda \) is plotted in Fig. 4a.

  2.

    If \(\displaystyle \inf _{x\in \Gamma }c_2(x) = 2 <1/\lambda \), then

    $$\begin{aligned} {\mathcal {V}}(x)= {\left\{ \begin{array}{ll} 0 &{} \text {if } x\in {\mathcal {P}}_2 \backslash \Gamma ,\\ \dfrac{1-e^{-\lambda |x_1| }}{\lambda }+ 2 e^{-\lambda |x_1| } &{} \text {if } x\in {\mathcal {P}}_1\text { and } |x_1| \ge 1,\\ \dfrac{1-e^{-\lambda \left( |x_1|^2 +|x_0 - 1|^2\right) ^{1/2} }}{\lambda }+ 2 e^{-\lambda \left( |x_1|^2 +|x_0 - 1|^2\right) ^{1/2} } &{} \text {if } x\in {\mathcal {P}}_1 \text { and } |x_1| \le 1. \end{array}\right. } \end{aligned}$$

    The graph of the value function in the case \(\inf c_2 <1/\lambda \) is plotted in Fig. 4b.

Fig. 4 The value function \({\mathcal {V}}\) in two cases
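The discontinuity can also be checked numerically. The short sketch below (our own illustration; we pick \(\lambda =0.1\) so that \(2<1/\lambda \)) evaluates the explicit formula for \({\mathcal {V}}\) on \({\mathcal {P}}_1\) and exhibits the jump across \(\Gamma \), since \({\mathcal {V}}\equiv 0\) on \({\mathcal {P}}_2\backslash \Gamma \).

```python
import math

lam = 0.1                      # discount rate, chosen so that 2 < 1/lam = 10

def V_P1(x1, x0):
    """Explicit value on P_1 from Example 2.5 in the case 2 < 1/lambda."""
    tau = abs(x1) if abs(x1) >= 1 else math.hypot(x1, x0 - 1.0)
    return (1 - math.exp(-lam * tau)) / lam + 2 * math.exp(-lam * tau)

# Approaching Gamma along x0 = 1: V tends to 2, while V = 0 on P_2 \ Gamma.
for x1 in (1.0, 0.1, 0.01, 0.001):
    print(x1, V_P1(-x1, 1.0))
```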

3 Hamilton–Jacobi system under strong controllability condition near the interface

In this section we derive the Hamilton–Jacobi system (HJ) associated with the above optimal control problem and prove that the value function given by (2.2) is the unique viscosity solution of that (HJ) system, under the following condition:

\(\left[ A3\right] \):

(Strong controllability) There exists a real number \(\delta >0\) such that for any \(i=1,\ldots ,N\) and for all \(x\in \Gamma \),

$$\begin{aligned} B(0,\delta )\cap ({\mathbb {R}}e_0 \times {\mathbb {R}}e_i)\subset F_{i}\left( x\right) . \end{aligned}$$

Remark 3.1

If x is close to \(\Gamma \), we can use [A3] to obtain the coercivity of the Hamiltonian, which is needed in Lemma 3.17 below to prove the Lipschitz continuity of viscosity subsolutions of the HJ system.

Hereafter we will denote by \(B(\Gamma ,\rho ),\,\rho >0,\) the set

$$\begin{aligned} B(\Gamma ,\rho ):=\left\{ x: \inf _{z\in \Gamma } |x-z|< \rho \right\} . \end{aligned}$$

Lemma 3.2

Under Assumptions [A1] and [A3], there exist two positive numbers \(r_0\) and C such that for all \(x_1,x_2\in B(\Gamma ,r_0)\), there exist \((y_{x_1}, \alpha _{x_1,x_2})\in {\mathcal {T}}_{x_1}\) and \(\tau _{x_1,x_2}<C d(x_1,x_2)\) such that \(y_{x_1}(\tau _{x_1,x_2})=x_2\).

Proof

The proof is classical and similar to the one in [14], so we skip it. \(\square \)

3.1 Value function on the interface

Lemma 3.3

Under Assumptions [A] and [A3], for all \(i\in \{1,\ldots ,N\}\), \({\mathcal {V}}|_{{\mathcal {P}}_i \backslash \Gamma }\) is continuous. Moreover, there exists \(\varepsilon >0\) such that \({\mathcal {V}}|_{{\mathcal {P}}_i \backslash \Gamma }\) is Lipschitz continuous in \(B(\Gamma ,\varepsilon )\cap {\mathcal {P}}_i \backslash \Gamma \). Therefore, it is possible to extend \({\mathcal {V}}|_{{\mathcal {P}}_i \backslash \Gamma }\) to the interface \(\Gamma \) and from now on, we use the following notation

$$\begin{aligned} v_i(x)= {\left\{ \begin{array}{ll} {\mathcal {V}}|_{{\mathcal {P}}_i}(x), &{}\text {if } x\in {\mathcal {P}}_i \backslash \Gamma ,\\ \displaystyle \lim _{ ( {\mathcal {P}}_i \backslash \Gamma ) \ni z \rightarrow x} {\mathcal {V}}|_{{\mathcal {P}}_i}(z), &{}\text {if } x\in \Gamma . \end{array}\right. } \end{aligned}$$
(3.1)

Proof

This lemma is a consequence of Lemma 3.2, see [1] and [14] for more details. \(\square \)

For \(x\in \Gamma \), we set

$$\begin{aligned} \textsc {FL}_\Gamma (x) := \bigcup _{i=1}^N \left( \textsc {FL}_i(x) \cap ({\mathbb {R}}e_0 \times {\mathbb {R}}) \right) \end{aligned}$$
(3.2)

and

$$\begin{aligned} A^{\Gamma }_{i}(x)=\left\{ a\in A_i : f_i(x,a)\cdot e_i =0 \right\} ,\quad i=1,\ldots N, \end{aligned}$$
(3.3)

where \( \textsc {FL}_i(x) \) is defined in Assumption [A2]. Let us now define the notion of viscosity solution for the following Hamilton–Jacobi equation on the interface \(\Gamma \):

$$\begin{aligned} \lambda u_{\Gamma }(x) + \max \left\{ -\lambda \min _{i=1,\ldots ,N} \{v_i (x) + c_i(x) \} , H_\Gamma \left( x,\dfrac{\partial u_\Gamma }{\partial e_0} (x) \right) \right\} =0,\quad x\in \Gamma ,\nonumber \\ \end{aligned}$$
(3.4)

where \(H_\Gamma \) is the Hamiltonian on \(\Gamma \) defined by

$$\begin{aligned} H_\Gamma (x,p)= & {} \max _{i=1,\ldots , N} \max _{a \in A^{\Gamma }_{i}(x)} \left\{ -(f_i(x,a) \cdot e_0) p - \ell _i(x,a) \right\} \nonumber \\= & {} \max _{(\zeta ,\xi )\in \textsc {FL}_\Gamma (x)} \left\{ - (\zeta \cdot e_0 ) p - \xi \right\} . \end{aligned}$$
(3.5)
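In practice, \(H_\Gamma \) can be approximated by maximizing over finite samples of the control sets. The sketch below is our own illustration (assumed names; the constraint \(f_i(x,a)\cdot e_i=0\) defining \(A^\Gamma _i(x)\) is enforced up to a small tolerance), not the paper's method.

```python
def H_Gamma(x, p, dynamics, costs, control_samples, tol=1e-8):
    """Sampled approximation of H_Gamma(x, p) in (3.5): dynamics[i](x, a)
    returns the pair (f.e0, f.ei) and costs[i](x, a) returns ell_i(x, a)."""
    best = -float("inf")
    for i, samples in control_samples.items():
        for a in samples:
            s0, si = dynamics[i](x, a)
            if abs(si) <= tol:   # keep only a in A_i^Gamma(x): no normal speed
                best = max(best, -s0 * p - costs[i](x, a))
    return best
```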

Definition 3.4

An upper (resp. lower) semi-continuous function \(u_\Gamma : \Gamma \rightarrow {\mathbb {R}}\) is a viscosity subsolution (resp. supersolution) of (3.4) if for any \(x\in \Gamma \) and any \(\varphi \in C^1 ( \Gamma )\) such that \(u_\Gamma - \varphi \) has a local maximum (resp. minimum) point at x, one has

$$\begin{aligned} \lambda u_{\Gamma }(x) + \max \left\{ -\lambda \min _{i=1,\ldots ,N} \{v_i (x) + c_i(x) \} , H_\Gamma \left( x,\dfrac{\partial \varphi }{\partial e_0} (x)\right) \right\} \le 0 \quad (\text {resp.}\ge 0 ). \end{aligned}$$

A continuous function \(u_\Gamma : \Gamma \rightarrow {\mathbb {R}}\) is called a viscosity solution of (3.4) if it is both a viscosity subsolution and a viscosity supersolution of (3.4).

We have the following characterization of the value function \({\mathcal {V}}\) on the interface.

Theorem 3.5

Under Assumptions [A] and [A3], the restriction of the value function \({\mathcal {V}}\) to the interface \(\Gamma \), \({\mathcal {V}}|_{\Gamma }\), is a viscosity solution of (3.4).

The proof of Theorem 3.5 proceeds in several steps. The first step is to prove that \({\mathcal {V}}|_\Gamma \) is a viscosity solution of an HJ equation with an extended definition of the Hamiltonian on \(\Gamma \). To that end, we consider the following larger relaxed vector field: for \(x\in \Gamma \),

$$\begin{aligned} f\ell _\Gamma (x)=\left\{ (\eta ,\mu )\in {\mathbb {R}}e_0 \times {\mathbb {R}}: \left. \begin{array}[c]{l} \exists \{y_{x,n}, \alpha _n \}_{n\in {\mathbb {N}}}\\ (y_{x,n}, \alpha _n)\in {\mathcal {T}}_x\\ \exists \{t_n \}_{n\in {\mathbb {N}}} \end{array} \right. \left| \begin{array}[c]{l} t_n\rightarrow 0^+,~y_{x,n}(t)\in \Gamma \text { for all } t\in [0,t_n] \\ \displaystyle \eta =\lim _{n\rightarrow +\infty } t_n^{-1}\int _0^{t_n} f(y_{x,n} (t), \alpha _n(t)) dt\\ \displaystyle \mu =\lim _{n\rightarrow +\infty } t_n^{-1}\int _0^{t_n} \ell (y_{x,n} (t), \alpha _n(t)) dt \end{array} \right. \right\} . \end{aligned}$$

We have

Lemma 3.6

For any function \(\varphi \in C^1 (\Gamma )\) and \(x\in \Gamma \),

$$\begin{aligned} \sup _{(\zeta ,\xi )\in f\ell _\Gamma (x)} \left\{ -(\zeta \cdot e_0 ) \dfrac{\partial \varphi }{\partial e_0} (x) - \xi \right\} = \sup _{(\zeta ,\xi )\in \textsc {FL}_\Gamma (x)} \left\{ (-\zeta \cdot e_0) \dfrac{\partial \varphi }{\partial e_0} (x) - \xi \right\} . \end{aligned}$$

Proof

See Appendix. \(\square \)

The second step consists of proving the following lemma.

Lemma 3.7

The restriction of the value function \({\mathcal {V}}\) to the interface \(\Gamma \), \({\mathcal {V}}|_{\Gamma }\), satisfies

$$\begin{aligned} \lambda {\mathcal {V}}|_\Gamma (x) + H_\Gamma \left( x, \dfrac{\partial {\mathcal {V}}|_\Gamma }{\partial e_0} (x) \right) \le 0, \quad x\in \Gamma , \end{aligned}$$
(3.6)

in the viscosity sense.

Proof

Let \(x \in \Gamma \) and \(\varphi \in C^1 (\Gamma )\) such that \({\mathcal {V}}|_\Gamma -\varphi \) has a maximum at x, i.e.

$$\begin{aligned} \varphi (x) - \varphi (z) \le {\mathcal {V}}|_\Gamma (x) - {\mathcal {V}}|_\Gamma (z), \quad \text {for all }z\in \Gamma . \end{aligned}$$
(3.7)

From Lemma 3.6, it suffices to prove that

$$\begin{aligned} \lambda {\mathcal {V}}|_\Gamma (x) + \sup _{(\zeta ,\xi )\in f\ell _\Gamma (x)} \left\{ (-\zeta \cdot e_0) \dfrac{\partial \varphi }{\partial e_0} (x) -\xi \right\} \le 0. \end{aligned}$$
(3.8)

Let \((\zeta ,\xi ) \in f\ell _\Gamma (x) \); then there exist \((y_{x,n},\alpha _n)\in {\mathcal {T}}_x\) and \(t_n \rightarrow 0^+\) such that \(y_{x,n} (t) \in \Gamma \) for all \(t\le t_n\) and

$$\begin{aligned} \zeta&=\lim _{n\rightarrow +\infty }\dfrac{1}{t_{n}}\int _{0}^{t_{n}}f\left( y_{x,n}\left( t\right) ,\alpha _{n}\left( t\right) \right) dt=\lim _{n\rightarrow \infty }\dfrac{y_{x,n}\left( t_{n}\right) -x}{t_{n}},\\ \xi&=\lim _{n\rightarrow +\infty }\dfrac{1}{t_{n}}\int _{0}^{t_{n}}\ell \left( y_{x,n}\left( t\right) ,\alpha _{n}\left( t\right) \right) dt. \end{aligned}$$

According to (3.7) and the dynamic programming principle, for all \(n\in {\mathbb {N}}\),

$$\begin{aligned} \varphi \left( x\right) -\varphi \left( y_{x,n}(t_{n})\right)&\le \mathcal {V}|_{\Gamma }\left( x\right) -\mathcal {V}|_{\Gamma }\left( y_{x,n}(t_{n})\right) \\&\le \int _{0}^{t_{n}}\ell \left( y_{x,n}\left( t\right) , \alpha _n(t) \right) e^{-\lambda t}dt+\mathcal {V}|_{\Gamma }\left( y_{x,n}\left( t_{n}\right) \right) (e^{-\lambda t_{n}}-1 ). \end{aligned}$$

Dividing both sides by \(t_n\), we then take the limit as n tends to \(\infty \). On the one hand, we have

$$\begin{aligned}&\lim _{n\rightarrow +\infty } \left( \dfrac{1}{t_n} \int _0^{t_n} \ell \left( y_{x,n}(t), \alpha _n(t) \right) e^{-\lambda t }dt + \dfrac{{\mathcal {V}}|_\Gamma (y_{x,n}(t_n)) (e^{-\lambda t_n }-1)}{t_n} \right) \nonumber \\&\quad = \xi - \lambda {\mathcal {V}}|_{\Gamma } (x). \end{aligned}$$
(3.9)

On the other hand, since \(y_{x,n} (t_n) = x + t_n (\zeta + o(1)e_0 )\), we obtain

$$\begin{aligned} \lim _{n\rightarrow +\infty } \dfrac{\varphi (x) - \varphi (y_{x,n} (t_n))}{t_n}=-(\zeta \cdot e_0) \dfrac{\partial \varphi }{\partial e_0} (x) . \end{aligned}$$
(3.10)

Hence, in view of (3.9) and (3.10), we have

$$\begin{aligned} \lambda {\mathcal {V}}|_\Gamma (x) - (\zeta \cdot e_0) \dfrac{\partial \varphi }{\partial e_0} (x) - \xi \le 0. \end{aligned}$$
(3.11)

Thus (3.11) holds for any \((\zeta ,\xi ) \in f\ell _\Gamma (x) \) and therefore (3.8) holds. \(\square \)

Lemma 3.8

Under Assumptions [A] and [A3], for all \(x\in \Gamma \),

$$\begin{aligned} \max _{i=1,\ldots ,N} \{ v_i (x) \} \le {\mathcal {V}}|_\Gamma (x) \le \min _{i=1,\ldots ,N} \{v_i (x) +c_i (x) \}. \end{aligned}$$

Proof

Let \(i\in \{1,\ldots ,N\}\), \(x\in \Gamma \) and \(z\in {\mathcal {P}}_i \backslash \Gamma \) such that \(|x-z|\) is small. It suffices to prove (a) \( v_i (x) \le {\mathcal {V}}|_\Gamma (x) \) and (b) \( {\mathcal {V}}|_\Gamma (x)\le v_i(x)+c_i (x) \).

  1. (a)

    Consider any control law \(\alpha \) such that \((y_{x}, \alpha ) \in {\mathcal {T}}_x\). Let \(\alpha _{z,x}\) be a control law which connects z to x (which exists thanks to Lemma 3.2) and consider the control law

    $$\begin{aligned} {\hat{\alpha }}(s) = {\left\{ \begin{array}{ll} \alpha _{z,x} (s) &{} \text {if } s\le \tau _{z,x},\\ \alpha (s-\tau _{z,x}) &{} \text {if } s>\tau _{z,x}. \end{array}\right. } \end{aligned}$$

    This means that the trajectory goes from z to x with the control law \(\alpha _{z,x}\) and then proceeds with the control law \(\alpha \). Therefore,

    $$\begin{aligned} {\mathcal {V}}(z) = v_i (z) \le J(z,{\hat{\alpha }}) = \int _0^{\tau _{z,x}} \ell _i \left( y_{z,{\hat{\alpha }}} (s), {\hat{\alpha }}(s) \right) e^{-\lambda s} ds + e^{-\lambda \tau _{z,x}} J(x,\alpha ). \end{aligned}$$

    Since \(\alpha \) is chosen arbitrarily and \(\ell _i\) is bounded by M, we obtain

    $$\begin{aligned} v_i(z) \le M \tau _{z,x} + e^{-\lambda \tau _{z,x}} {\mathcal {V}}(x). \end{aligned}$$

    Letting z tend to x (so that \(\tau _{z,x}\) tends to 0 by Lemma 3.2), we conclude that \(v_i(x)\le {\mathcal {V}}(x)\).

  2. (b)

    Consider any control law \(\alpha _z\) such that \((y_{z}, \alpha _z)\in {\mathcal {T}}_z\) and use Lemma 3.2 to pick a control law \(\alpha _{x,z}\) connecting x to z. Consider the control law

    $$\begin{aligned} {\hat{\alpha }}(s)= {\left\{ \begin{array}{ll} \alpha _{x,z}(s) &{} \text {if } s\le \tau _{x,z},\\ \alpha _z (s-\tau _{x,z}) &{} \text {if } s > \tau _{x,z}, \end{array}\right. } \end{aligned}$$

    for which the trajectory \(y_{x,{\hat{\alpha }}}\) goes from x to z using the control law \(\alpha _{x,z}\) and then proceeds with the control law \(\alpha _z\). Therefore,

    $$\begin{aligned} {\mathcal {V}}(x) \le J(x,{\hat{\alpha }}) = c_i (x) + \int ^{\tau _{x,z}}_0 \ell _i \left( y_{x,{\hat{\alpha }}} (s), {\hat{\alpha }}(s) \right) e^{-\lambda s }ds + e^{-\lambda \tau _{x,z}} J(z,\alpha _z). \end{aligned}$$

    Since \(\alpha _z\) is chosen arbitrarily and \(\ell _i\) is bounded by M, we obtain

    $$\begin{aligned} {\mathcal {V}}(x) \le c_i(x) + M\tau _{x,z}+ e^{-\lambda \tau _{x,z}} v_i(z). \end{aligned}$$

    Letting z tend to x (so that \(\tau _{x,z}\) tends to 0 by Lemma 3.2), we conclude that \({\mathcal {V}}(x)\le c_i(x) + v_i (x)\). \(\square \)

From Lemmas 3.7 and 3.8, we conclude that \({\mathcal {V}}|_{\Gamma }\) is a viscosity subsolution of (3.4). The last step of the proof of Theorem 3.5 is to prove that \({\mathcal {V}}|_{\Gamma }\) is a viscosity supersolution of (3.4).

Lemma 3.9

The restriction of the value function \({\mathcal {V}}\) to the interface \(\Gamma \), \({\mathcal {V}}|_{\Gamma }\), satisfies

$$\begin{aligned} \lambda {\mathcal {V}}|_\Gamma (x) + \max \left\{ -\lambda \min _{i=1,\ldots ,N} \{v_i(x) +c_i(x) \}, H_\Gamma \left( x, \dfrac{\partial {\mathcal {V}}|_\Gamma }{\partial e_0} (x) \right) \right\} \ge 0, \quad x\in \Gamma , \end{aligned}$$

in the viscosity sense.

Proof

Let \( x\in \Gamma \) and assume that

$$\begin{aligned} {\mathcal {V}}(x) < \min _{i=1,\ldots ,N} \{ v_i(x) + c_i(x) \}. \end{aligned}$$
(3.12)

Then it suffices to prove that \({\mathcal {V}}|_\Gamma \) satisfies

$$\begin{aligned} \lambda {\mathcal {V}}|_\Gamma (x) + H_\Gamma \left( x, \dfrac{\partial {\mathcal {V}}|_\Gamma }{\partial e_0} (x) \right) \ge 0, \end{aligned}$$

in the viscosity sense. Let \( \{\varepsilon _n\} \) be a sequence of positive numbers tending to 0. For any n, let \(\alpha _n\) be an \(\varepsilon _n\)-optimal control, i.e. \({\mathcal {V}}(x) + \varepsilon _n > J(x,\alpha _n)\), and let \( \tau _n \) be the first time the trajectory \(y_{x,\alpha _n}\) leaves \(\Gamma \), i.e.

$$\begin{aligned} \tau _n:= \inf \bigcup _{i=1}^{N} T_{x,\alpha _n}^i, \text { where } T_{x,\alpha _n}^i := \{t\in {\mathbb {R}}^+ : y_{x,\alpha _n} (t) \in {\mathcal {P}}_i \backslash \Gamma \}. \end{aligned}$$

We note that \(\tau _n\) is possibly \(+\infty \), in which case the trajectory \(y_{x,\alpha _n}\) stays on \(\Gamma \) for all \(s\in [0,+\infty )\). We consider the two following cases:

Case 1::

There exists a subsequence of \(\{\tau _n\}\) (which is still denoted \(\{\tau _n\}\)) such that \(\tau _n \rightarrow 0\) as \(n\rightarrow +\infty \) and at time \(\tau _n\) the trajectory enters \({\mathcal {P}}_{i_0}\), for some \(i_0 \in \{1,\ldots ,N \}\). This implies

$$\begin{aligned} {\mathcal {V}}(x) + \varepsilon _n&> J(x,\alpha _n) \\&\ge \int _0^{\tau _n} \ell \left( y_{x,\alpha _n} (s), \alpha _n (s) \right) e^{-\lambda s} ds + c_{i_0}(y_{x,\alpha _n} (\tau _n)) e^{-\lambda \tau _n} \\&\quad + v_{i_0} (y_{x,\alpha _n} (\tau _n)) e^{-\lambda \tau _n}. \end{aligned}$$

Since \( \ell \) is bounded by M, sending n to \(+\infty \) yields

$$\begin{aligned} {\mathcal {V}}(x) \ge c_{i_0}(x) +v_{i_0} (x), \end{aligned}$$

which contradicts (3.12).

Case 2::

There exist a subsequence of \(\{\tau _n\}\) (still denoted \(\{\tau _n\}\)) and a positive constant C such that \(\tau _n >C\). This means that on [0, C], the trajectory \(y_{x,\alpha _n}\) remains on \(\Gamma \). Thus, for all \(\tau \in [0,C],\)

$$\begin{aligned} {\mathcal {V}}|_\Gamma (x) + \varepsilon _n&\ge \int _{0}^{\tau }\ell \left( y_{x,n}\left( t\right) , \alpha _n(t) \right) e^{-\lambda t}dt+\mathcal {V}|_{\Gamma }\left( y_{x,n}\left( \tau \right) \right) e^{-\lambda \tau }\nonumber \\&\ge \int _{0}^{\tau }\ell \left( y_{x,n}\left( t\right) , \alpha _n(t) \right) dt+\mathcal {V}|_{\Gamma }\left( y_{x,n}\left( \tau \right) \right) e^{-\lambda \tau }+ o(\tau ), \end{aligned}$$
(3.13)

where \(o(\tau )/\tau \rightarrow 0\) as \(\tau \rightarrow 0\) and the last inequality is obtained by using the boundedness of \(\ell \). Let \(\varphi \in C^1 (\Gamma )\) such that \({\mathcal {V}}|_\Gamma -\varphi \) has a minimum on \( \Gamma \) at x, i.e.

$$\begin{aligned} \varphi (x) - \varphi (z) \ge {\mathcal {V}}|_\Gamma (x) - {\mathcal {V}}|_\Gamma (z), \quad \text {for all }z\in \Gamma . \end{aligned}$$
(3.14)

From Lemma 3.6, it suffices to prove that

$$\begin{aligned} \lambda {\mathcal {V}}|_\Gamma (x) + \max _{(\zeta ,\xi )\in f\ell _\Gamma (x) } \left\{ - (\zeta \cdot e_0) \dfrac{\partial \varphi }{\partial e_0} (x) -\xi \right\} \ge 0. \end{aligned}$$
(3.15)

Since \( \lim _{n\rightarrow \infty } \varepsilon _n =0 \), it is possible to choose a sequence \( \{t_n\} \) such that \(0<t_n <C \), \( t_n \rightarrow 0^+ \) and \( \varepsilon _n / t_n \rightarrow 0 \) as \( n \rightarrow \infty \). Thus from (3.13) and (3.14), we obtain

$$\begin{aligned}&\dfrac{\varphi (x) - \varphi (y_{x,n}(t_n))}{t_n} -\dfrac{1}{t_n} \int _0^{t_n} \ell (y_{x,n} (t), \alpha _n (t) ) dt + \dfrac{1-e^{-\lambda t_n} }{t_n}{\mathcal {V}}|_\Gamma (y_{x,n} (t_n) )\nonumber \\&\quad \ge -\dfrac{\varepsilon _n}{t_n} + o(1). \end{aligned}$$
(3.16)

Since f and \(\ell \) are bounded, the sequence \(\big \{ \frac{y_{x,n} (t_n) - x }{t_n}, \frac{1}{t_n} \int _0^{t_n} \ell (y_{x,n} (t), \alpha _n (t) ) dt \big \}\) is bounded in \(\Gamma \times {\mathbb {R}}\). Therefore, we can extract a subsequence converging to \(({\bar{\zeta }},{\bar{\xi }})\) as \(n\rightarrow +\infty \), and clearly \(\left( {\bar{\zeta }},{\bar{\xi }} \right) \in f\ell _\Gamma (x)\). Hence, sending n to \(\infty \) in (3.16), we obtain

$$\begin{aligned} \lambda {\mathcal {V}}|_\Gamma (x) - ({\bar{\zeta }}\cdot e_0) \dfrac{\partial \varphi }{\partial e_0} (x) -{\bar{\xi }} \ge 0, \end{aligned}$$

and thus (3.15) holds. \(\square \)

3.2 The Hamilton–Jacobi system and viscosity solutions

3.2.1 Admissible test-functions

Definition 3.10

A function \(\varphi :{\mathcal {P}}_1 \times \ldots \times {\mathcal {P}}_N \times \Gamma \rightarrow {\mathbb {R}}^{N+1}\) is an admissible test-function if there exist \((\varphi _1,\ldots ,\varphi _N, \varphi _\Gamma )\), with \(\varphi _i\in C^1({\mathcal {P}}_i)\) and \( \varphi _\Gamma \in C^1(\Gamma ) \), such that \(\varphi (x_1,\ldots ,x_N,x_\Gamma )=(\varphi _1 (x_1), \ldots , \varphi _N (x_N), \varphi _\Gamma (x_\Gamma ) )\). The set of admissible test-functions is denoted by \(\mathcal {R}({\mathcal {S}})\).

3.2.2 Hamilton–Jacobi system

Define the Hamiltonian \(H_i : {\mathcal {P}}_i \times ({\mathbb {R}}e_0 \times {\mathbb {R}}e_i) \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} H_i(x,p)=\max _{a\in A_i} \left\{ -f_i(x,a)\cdot p - \ell _i (x,a) \right\} \end{aligned}$$

and the Hamiltonian \(H_i^+ : \Gamma \times ({\mathbb {R}}e_0 \times {\mathbb {R}}e_i) \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} H_i^+(x,p)=\max _{a\in A_i^+(x) } \left\{ -f_i(x,a)\cdot p - \ell _i (x,a) \right\} , \end{aligned}$$

where \(A_i^+ (x) = \{a\in A_i : f_i(x,a)\cdot e_i \ge 0 \}\). Consider the following Hamilton–Jacobi system

$$\begin{aligned} \left\{ \begin{array}{rr} \lambda u_{i}\left( x\right) +H_{i}\left( x, \partial u_i (x) \right) =0, &{} \text {if } x\in {\mathcal {P}}_{i}\backslash \Gamma ,\\ \lambda u_{i} (x)+\max \left\{ H_{i}^{+}\left( x,\partial u_i (x) \right) , -\lambda u_\Gamma (x) \right\} =0, &{} \text {if } x\in \Gamma , \\ \displaystyle \lambda u_{\Gamma }(x) + \max \left\{ -\lambda \min _{i=1,\ldots ,N} \{u_i (x) + c_i(x) \} , H_\Gamma \left( x,\dfrac{\partial u_\Gamma }{\partial e_0} (x) \right) \right\} =0,\\ i\in \{1,\ldots , N \}, \end{array}\right. \end{aligned}$$
(3.17)

whose viscosity solutions \(U:=\left( u_{1},\ldots ,u_{N}, u_\Gamma \right) \) are defined in Definition 3.11 below.
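As with \(H_\Gamma \), the Hamiltonians \(H_i\) and \(H_i^+\) admit a straightforward sampled approximation; the sketch below is our own illustration (assumed names; \(p=(p_{e_0},p_{e_i})\) denotes the coordinates of the gradient in the basis \((e_0,e_i)\)), not part of the paper's development.

```python
def hamiltonians(x, p, f_i, ell_i, samples_Ai):
    """Sampled approximation of (H_i, H_i^+): f_i(x, a) returns the pair
    (f.e0, f.ei) and p = (p_e0, p_ei) is the gradient in the basis (e0, ei)."""
    H = H_plus = -float("inf")
    for a in samples_Ai:
        s0, si = f_i(x, a)
        val = -(s0 * p[0] + si * p[1]) - ell_i(x, a)
        H = max(H, val)
        if si >= 0.0:                  # restrict to A_i^+(x) for H_i^+
            H_plus = max(H_plus, val)
    return H, H_plus
```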

Definition 3.11

(Viscosity solution with entry costs)

  • A function \(U:=\left( u_{1},\ldots ,u_{N},u_\Gamma \right) \), where \(u_{i}\in USC\left( {\mathcal {P}}_{i};\mathbb {R}\right) \) for all \(i\in \{1,\ldots , N \}\) and \( u_\Gamma \in C(\Gamma ;{\mathbb {R}}) \), is called a viscosity subsolution of (3.17) if for any \(\left( \varphi _{1},\ldots ,\varphi _{N}, \varphi _\Gamma \right) \in \mathcal {R}({\mathcal {S}})\), any \(i\in \{1,\ldots , N \}\) and any \(x_{i}\in {\mathcal {P}}_{i}\), \( x\in \Gamma \) such that \(u_{i}-\varphi _{i}\) has a local maximum point on \({\mathcal {P}}_{i}\) at \(x_{i}\) and \( u_\Gamma - \varphi _\Gamma \) has a local maximum point on \(\Gamma \) at x, one has

    $$\begin{aligned} \lambda u_{i}\left( x_i\right) +H_{i}\left( x_i, \partial \varphi _i (x_i) \right) \le 0,&\quad \text {if } x_i\in {\mathcal {P}}_{i}\backslash \Gamma ,\\ \lambda u_{i} (x_i)+\max \left\{ H_{i}^{+}\left( x_i,\partial \varphi _i (x_i) \right) , -\lambda u_\Gamma (x_i) \right\} \le 0,&\quad \text {if } x_i\in \Gamma ,\\ \displaystyle \lambda u_{\Gamma }(x) + \max \left\{ -\lambda \min _{i=1,\ldots ,N} \{u_i (x) + c_i(x) \} , H_\Gamma \left( x,\dfrac{\partial \varphi _\Gamma }{\partial e_0} (x) \right) \right\} \le 0, \end{aligned}$$
  • A function \( U:=\left( u_{1},\ldots ,u_{N}, u_\Gamma \right) \), where \(u_{i}\in LSC\left( {\mathcal {P}}_{i};\mathbb {R}\right) \) for all \(i\in \{1,\ldots , N \}\) and \( u_\Gamma \in C(\Gamma ;{\mathbb {R}}) \), is called a viscosity supersolution of (3.17) if for any \(\left( \varphi _{1},\ldots ,\varphi _{N}, \varphi _\Gamma \right) \in \mathcal {R}({\mathcal {S}})\), any \(i\in \{1,\ldots , N \}\) and any \(x_{i}\in {\mathcal {P}}_{i}\), \( x\in \Gamma \) such that \(u_{i}-\varphi _{i}\) has a local minimum point on \({\mathcal {P}}_{i}\) at \(x_{i}\) and \( u_\Gamma - \varphi _\Gamma \) has a local minimum point on \(\Gamma \) at x, one has

    $$\begin{aligned} \lambda u_{i}\left( x_i\right) +H_{i}\left( x_i, \partial \varphi _i (x_i) \right) \ge 0&\quad \text {if } x_i\in {\mathcal {P}}_{i}\backslash \Gamma ,\\ \lambda u_{i} (x_i)+\max \left\{ H_{i}^{+}\left( x_i,\partial \varphi _i (x_i) \right) , -\lambda u_\Gamma (x_i) \right\} \ge 0&\quad \text {if } x_i\in \Gamma ,\\ \displaystyle \lambda u_{\Gamma }(x) + \max \left\{ -\lambda \min _{i=1,\ldots ,N} \{u_i (x) + c_i(x) \} , H_\Gamma \left( x,\dfrac{\partial \varphi _\Gamma }{\partial e_0} (x) \right) \right\} \ge 0, \end{aligned}$$
  • A function \( U:=\left( u_{1},\ldots ,u_{N}, u_\Gamma \right) \), where \(u_{i}\in C\left( {\mathcal {P}}_{i};\mathbb {R}\right) \) for all \(i\in \{1,\ldots ,N\}\) and \( u_\Gamma \in C(\Gamma ;{\mathbb {R}}) \), is called a viscosity solution of (3.17) if it is both a viscosity subsolution and a viscosity supersolution of (3.17).

3.3 Relations between the value function and the HJ system

In this section, we prove the following theorem.

Theorem 3.12

Under Assumptions [A] and [A3], \( V:=(v_1,\ldots ,v_N, {\mathcal {V}}|_\Gamma )\) is a viscosity solution of (3.17), where the functions \(v_i\) are defined in (3.1).

Proof

By Theorem 3.5, \( {\mathcal {V}}|_\Gamma \) is a viscosity solution of (3.4). Furthermore, if \( x \in {\mathcal {P}}_i \backslash \Gamma \) for some \( i \in \{1,\ldots ,N \} \), then for any \( (y_x,\alpha ) \in {\mathcal {T}}_{x} \) there exists a time \( \tau \) small enough so that \( y_{x,\alpha } (t) \in {\mathcal {P}}_i \backslash \Gamma \) for \( 0\le t \le \tau \). Thus, the proof in this case is classical, using the dynamic programming principle (see [4, 6]), and we do not detail it. Now assume \( x\in \Gamma \); we shall prove that for all \(i\in \{1,\ldots ,N\}\), the function \(v_i\) satisfies

$$\begin{aligned} \lambda v_{i} (x)+\max \left\{ H_{i}^{+}\left( x,\partial v_i (x) \right) ,-\lambda {\mathcal {V}}|_\Gamma (x) \right\} =0,\quad \text {on } \Gamma , \end{aligned}$$
(3.18)

in the viscosity sense. This is a consequence of Lemmas 3.13 and 3.15 below. \(\square \)

Lemma 3.13

For \(i\in \{1,\ldots ,N\}\), the function \(v_i\) satisfies

$$\begin{aligned} \lambda v_{i} (x)+\max \left\{ H_{i}^{+}\left( x,\partial v_i (x) \right) , -\lambda {\mathcal {V}}|_\Gamma (x) \right\} \le 0,\quad \text {on } \Gamma , \end{aligned}$$

in the viscosity sense.

Proof

Let \(x\in \Gamma \). From Lemma 3.8 we have \(v_i(x)\le {\mathcal {V}}|_\Gamma (x)\). Hence, it suffices to prove that

$$\begin{aligned} \lambda v_i (x) + H^+_i \left( x, \partial v_i (x) \right) \le 0, \end{aligned}$$

in the viscosity sense. Let \(a_{i} \in A_i\) be such that \(f_{i}\left( x,a_i\right) \cdot e_i >0\). By the Lipschitz continuity of \( f_i(\cdot ,a_i) \), there exists \( r >0 \) such that \(f_{i}\left( z,a_i\right) \cdot e_i >0\) for all \( z\in B(x,r) \cap ({\mathcal {P}}_i \backslash \Gamma ) \). Thus, there exists \( \tau >0\) such that for all \( z\in B(x,r) \cap ({\mathcal {P}}_i \backslash \Gamma ) \), there exists \( (y_z, \alpha _z) \in {\mathcal {T}}_z\) for which

$$\begin{aligned} \alpha _z (t) = {\left\{ \begin{array}{ll} a_i &{} 0\le t \le \tau ,\\ {\hat{\alpha }} (t-\tau ) &{} t \ge \tau , \end{array}\right. } \end{aligned}$$

where \( {\hat{\alpha }} \) is chosen arbitrarily. It follows that \(y_{z}\left( t\right) \in {\mathcal {P}}_{i}\backslash \Gamma \) for all \( t\le \tau \); in other words, as long as \(y_{z} (t)\in {\mathcal {P}}_{i}\cap B\left( \Gamma ,r\right) \), the trajectory \( y_z \) cannot approach \(\Gamma \), since its speed pushes it away from \(\Gamma \). Note that it would not be sufficient to choose \(a_{i}\in A_{i}\) such that \(f_i\left( x,a_{i}\right) \cdot e_i=0\), since this may lead to \(f_i\left( z,a_{i}\right) \cdot e_i <0\) for some \(z\in {\mathcal {P}}_{i}\backslash \Gamma \). Next, since \(y_{z}\left( t\right) \in {\mathcal {P}}_{i}\backslash \Gamma \) for all \( t\le \tau \), we have

$$\begin{aligned} v_{i}\left( z\right) \le J\left( z,\alpha _{z}\right) =\int _{0}^{\tau }\ell _{i}\left( y_{z}\left( s\right) ,a_{i}\right) e^{-\lambda s}ds+e^{-\lambda \tau }J\left( y_{z}\left( \tau \right) ,{\hat{\alpha }}\right) . \end{aligned}$$

This inequality holds for any \({\hat{\alpha }}\), thus

$$\begin{aligned} v_{i}\left( z\right) \le \int _{0}^{\tau }\ell _{i}\left( y_{z}\left( s\right) ,a_{i}\right) e^{-\lambda s}ds+e^{-\lambda \tau }v_{i}\left( y_{z}\left( \tau \right) \right) . \end{aligned}$$
(3.19)

Furthermore, since \(f_{i}\left( \cdot ,a\right) \) is Lipschitz continuous by \(\left[ A1\right] \), for all \(t\in \left[ 0,\tau \right] \),

$$\begin{aligned} \left| y_{z}\left( t\right) -y_{x}\left( t\right) \right|= & {} \left| z+\int _{0}^{t}f_{i}\left( y_{z}\left( s\right) ,a_{i}\right) ds-x-\int _{0}^{t}f_{i}\left( y_{x}\left( s\right) ,a_{i}\right) ds\right| \\\le & {} \left| z-x\right| +L\int _{0}^{t}\left| y_{z}\left( s\right) -y_{x}\left( s\right) \right| ds, \end{aligned}$$

and by Grönwall’s inequality,

$$\begin{aligned} \left| y_{z}\left( t\right) -y_{x}\left( t\right) \right| \le \left| z-x\right| e^{Lt}, \quad \text {for all } t\in [0,\tau ], \end{aligned}$$

yielding that \(y_{z}(s)\) tends to \(y_{x}(s)\) and \(\int _0^\tau \ell _i(y_{z}(s),\alpha _z (s) ) ds\) tends to \(\int _0^\tau \ell _i(y_{x}(s),\alpha _x (s) ) ds\) when z tends to x. Hence, from (3.19), by letting \(z \rightarrow x\), we obtain

$$\begin{aligned} v_{i}\left( x\right) \le \int _{0}^{\tau }\ell _{i}\left( y_{x}\left( s\right) ,a_{i}\right) e^{-\lambda s}ds+e^{-\lambda \tau }v_{i}\left( y_{x}\left( \tau \right) \right) . \end{aligned}$$

Let \(\varphi \) be a function in \(C^{1}\left( {\mathcal {P}}_{i}\right) \) such that \(0=v_{i}\left( x\right) -\varphi \left( x\right) =\max _{{\mathcal {P}}_{i}}\left( v_{i}-\varphi \right) \). This yields

$$\begin{aligned} \dfrac{\varphi \left( x\right) -\varphi \left( y_{x}\left( \tau \right) \right) }{\tau }\le \dfrac{1}{\tau }\int _{0}^{\tau }\ell _{i}\left( y_{x}\left( s\right) ,a_{i}\right) e^{-\lambda s}ds+\dfrac{(e^{-\lambda \tau }-1) v_{i}\left( y_{x}\left( \tau \right) \right) }{\tau }. \end{aligned}$$

By letting \(\tau \) tend to 0, we obtain that \( - f_{i}\left( x,a_{i}\right) \cdot \partial \varphi (x) \le \ell _{i}\left( x,a_{i}\right) - \lambda v_{i}\left( x\right) \). Hence,

$$\begin{aligned} \lambda v_{i}\left( x\right) +\sup _{a\in A_{i}:f_{i}\left( x,a\right) \cdot e_i >0}\left\{ - f_{i}\left( x,a\right) \cdot \partial \varphi (x) - \ell _{i}\left( x,a\right) \right\} \le 0. \end{aligned}$$

Finally, by [A], it is easy to check that

$$\begin{aligned}&\sup _{a\in A_{i}:f_{i}\left( x,a\right) \cdot e_i >0}\left\{ - f_{i}\left( x,a\right) \cdot \partial \varphi (x) - \ell _{i}\left( x,a\right) \right\} \\&\quad =\max _{a\in A_{i}:f_{i}\left( x,a\right) \cdot e_i \ge 0}\left\{ - f_{i}\left( x,a\right) \cdot \partial \varphi (x) - \ell _{i}\left( x,a\right) \right\} . \end{aligned}$$

The proof is complete. \(\square \)

Before we give a proof of the fact that \(v_i\) is a viscosity supersolution of (3.18), we prove the following useful lemma.

Lemma 3.14

Let \(x\in \Gamma \) and assume that

$$\begin{aligned} v_i(x) < {\mathcal {V}}|_\Gamma (x). \end{aligned}$$
(3.20)

Then, there exist \({\bar{\tau }}>0\) and \(r>0\) such that for any \(z\in ({\mathcal {P}}_i \backslash \Gamma ) \cap B(x,r) \), any \(\varepsilon \) sufficiently small and any \(\varepsilon \)-optimal control law \(\alpha ^{\varepsilon }_z\) for z,

$$\begin{aligned} y_{z,\alpha ^\varepsilon _z} (s) \in {\mathcal {P}}_i \backslash \Gamma , \quad \text {for all } s\in [0,{\bar{\tau }}]. \end{aligned}$$

This lemma means that if (3.20) holds, then any \(\varepsilon \)-optimal trajectory starting from \(z\in ({\mathcal {P}}_i \backslash \Gamma ) \cap B(x,r)\) remains in \({\mathcal {P}}_i \backslash \Gamma \) for a fixed amount of time. Hence, this lemma accounts for the situation in which the trajectory does not leave \({\mathcal {P}}_i \backslash \Gamma \).

Proof of Lemma 3.14

We proceed by contradiction. Suppose that there exist sequences of positive numbers \(\{\varepsilon _n\}, \{\tau _n\}\) and \(\{x_n\}\subset {\mathcal {P}}_i \backslash \Gamma \) such that \(\varepsilon _n \rightarrow 0^+\), \(x_n\rightarrow x\), \(\tau _n \rightarrow 0^+ \) and \((y_{x_n}, \alpha _n) \in {\mathcal {T}}_{x_n} \), where \(\alpha _n\) is an \(\varepsilon _n\)-optimal control law satisfying \(y_{x_n} (\tau _n) \in \Gamma \). This implies that

$$\begin{aligned} v_i(x_n) + \varepsilon _n > J(x_n,\alpha _n)= & {} \int _0^{\tau _n} \ell (y_{x_n}(s), \alpha _n (s)) e^{-\lambda s}ds\\&+ e^{-\lambda \tau _n} J(y_{x_n}(\tau _n),\alpha _n (\cdot +\tau _n)). \end{aligned}$$

Since \(\ell \) is bounded by M by [A1], we have \(v_i(x_n)+\varepsilon _n \ge -\tau _n M + e^{- \lambda \tau _n}{\mathcal {V}}|_\Gamma (y_{x_n}(\tau _n))\). Taking the limit as \(n\rightarrow \infty \), we get \(v_i(x) \ge {\mathcal {V}}|_\Gamma (x)\), which contradicts (3.20). \(\square \)

Lemma 3.15

The function \(v_i\) is a viscosity supersolution of (3.18).

Proof

Let \(x\in \Gamma \). From Lemma 3.8, we have \( v_j (x) \le {\mathcal {V}}|_\Gamma (x) \) for all j. This yields that if inequality (3.20) does not hold, then \( v_i (x) = {\mathcal {V}}|_\Gamma (x) \) and therefore \( v_i \) satisfies

$$\begin{aligned} \lambda v_{i} (x)+\max \left\{ H_{i}^{+}\left( x,\partial v_i (x) \right) ,-\lambda {\mathcal {V}}|_\Gamma (x) \right\} \ge \lambda v_i (x) - \lambda {\mathcal {V}}|_\Gamma (x) = 0. \end{aligned}$$

Hence, in the rest of the proof, we assume that the inequality (3.20) holds and we aim to prove that

$$\begin{aligned} \lambda v_i (x) + H^+_i \left( x, \partial v_i (x) \right) \ge 0, \end{aligned}$$

in the viscosity sense. Let \(\varphi \in C^{1}\left( {\mathcal {P}}_{i}\right) \) be such that

$$\begin{aligned} 0=v_{i}\left( x\right) -\varphi \left( x\right) \le v_{i}\left( z\right) -\varphi \left( z\right) ,\quad \hbox {for all } z\in {\mathcal {P}}_{i}, \end{aligned}$$
(3.21)

and let \(\left\{ x_{\varepsilon }\right\} \subset {\mathcal {P}}_{i}\backslash \Gamma \) be any sequence such that \(x_{\varepsilon }\) tends to x when \(\varepsilon \) tends to 0. From the dynamic programming principle and Lemma 3.14, there exists \({\bar{\tau }}\) such that for any \(\varepsilon >0\), there exists an \(\varepsilon \)-optimal pair \(\left( y_{\varepsilon },\alpha _{\varepsilon }\right) :=\left( y_{x_{\varepsilon }},\alpha _{\varepsilon }\right) \in \mathcal {T}_{x_{\varepsilon }}\) such that \(y_{\varepsilon }\left( \tau \right) \in {\mathcal {P}}_{i}\backslash \Gamma \) for all \(\tau \in \left[ 0,{\bar{\tau }}\right] \) and

$$\begin{aligned} v_{i}\left( x_{\varepsilon }\right) +\varepsilon \ge \int _{0}^{\tau }\ell _{i}\left( y_{\varepsilon }\left( s\right) ,\alpha _{\varepsilon }\left( s\right) \right) e^{-\lambda s}ds+e^{-\lambda \tau }v_{i}\left( y_{\varepsilon }\left( \tau \right) \right) . \end{aligned}$$

Then, according to (3.21),

$$\begin{aligned}&v_{i}\left( x_{\varepsilon }\right) -v_{i}\left( x\right) +\varepsilon \ge \int _{0}^{\tau }\ell _{i}\left( y_{\varepsilon }\left( s\right) ,\alpha _{\varepsilon }\left( s\right) \right) e^{-\lambda s}ds\nonumber \\&\quad +e^{-\lambda \tau }\left[ \varphi \left( y_{\varepsilon }\left( \tau \right) \right) -\varphi \left( x\right) \right] -v_{i}\left( x\right) (1-e^{-\lambda \tau } ).\nonumber \\ \end{aligned}$$
(3.22)

Next, one has

$$\begin{aligned} \left\{ \begin{array}{lll} \displaystyle \int _{0}^{\tau }\ell _{i}\left( y_{\varepsilon }\left( s\right) ,\alpha _{\varepsilon }\left( s\right) \right) e^{-\lambda s}ds = \displaystyle \int _{0}^{\tau }\ell _{i}\left( y_{\varepsilon }\left( s\right) ,\alpha _{\varepsilon }\left( s\right) \right) ds+o\left( \tau \right) ,\\ \left[ \varphi \left( y_{\varepsilon }\left( \tau \right) \right) -\varphi \left( x\right) \right] e^{-\lambda \tau } =\varphi \left( y_{\varepsilon }\left( \tau \right) \right) -\varphi \left( x\right) +\tau o_{\varepsilon }\left( 1\right) +o\left( \tau \right) , \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} \left\{ \begin{array}{lll} v_{i}\left( x_{\varepsilon }\right) -v_{i}\left( x\right) =o_{\varepsilon }\left( 1\right) ,\\ v_{i}\left( x\right) (1-e^{-\lambda \tau })=o\left( \tau \right) +\tau \lambda v_{i}\left( x\right) , \end{array} \right. \end{aligned}$$

where the notation \(o_{\varepsilon }\left( 1\right) \) is used for a quantity which is independent of \(\tau \) and tends to 0 as \(\varepsilon \) tends to 0. For a positive integer k, the notation \(o(\tau ^{k})\) is used for a quantity that is independent of \(\varepsilon \) and such that \(o(\tau ^{k})/\tau ^{k}\rightarrow 0\) as \(\tau \rightarrow 0\). Finally, \(\mathcal {O}(\tau ^k)\) stands for a quantity independent of \(\varepsilon \) such that \(\mathcal {O}(\tau ^{k})/\tau ^{k}\) remains bounded as \(\tau \rightarrow 0\). From (3.22), we obtain that

$$\begin{aligned} \tau \lambda v_{i}\left( x\right)\ge & {} \int _{0}^{\tau }\ell _{i}\left( y_{\varepsilon }\left( s\right) ,\alpha _{\varepsilon }\left( s\right) \right) ds+\varphi \left( y_{\varepsilon }\left( \tau \right) \right) -\varphi \left( x\right) +\tau o_{\varepsilon }\left( 1\right) \nonumber \\&+o\left( \tau \right) +o_{\varepsilon }\left( 1\right) . \end{aligned}$$
(3.23)

Since \(y_{\varepsilon }\left( \tau \right) \in {\mathcal {P}}_{i}\) for all \(\varepsilon \), we have

$$\begin{aligned} \varphi \left( y_{\varepsilon }\left( \tau \right) \right) -\varphi \left( x_{\varepsilon }\right)= & {} \int _{0}^{\tau } \partial \varphi \left( y_{\varepsilon }\left( s\right) \right) \cdot {\dot{y}}_{\varepsilon }\left( s\right) ds\\= & {} \int _{0}^{\tau } \partial \varphi \left( y_{\varepsilon }\left( s\right) \right) \cdot f_{i}\left( y_{\varepsilon }\left( s\right) ,\alpha _{\varepsilon }\left( s\right) \right) ds. \end{aligned}$$

Hence, from (3.23)

$$\begin{aligned}&\tau \lambda v_{i}\left( x\right) -{\displaystyle \int _{0}^{\tau }}\left[ \ell _{i}\left( y_{\varepsilon }\left( s\right) ,\alpha _{\varepsilon }\left( s\right) \right) + \partial \varphi \left( y_{\varepsilon }\left( s\right) \right) \cdot f_{i}\left( y_{\varepsilon }\left( s\right) ,\alpha _{\varepsilon }\left( s\right) \right) \right] ds\nonumber \\&\quad \ge \tau o_{\varepsilon }\left( 1\right) +o\left( \tau \right) +o_{\varepsilon }\left( 1\right) . \end{aligned}$$
(3.24)

Moreover, \(\varphi \left( x_{\varepsilon }\right) -\varphi \left( x\right) =o_{\varepsilon }\left( 1\right) \) and \(\partial \varphi \left( y_{\varepsilon }\left( s\right) \right) =\partial \varphi \left( x\right) +o_{\varepsilon }\left( 1\right) +\mathcal {O}\left( s\right) \). Thus

$$\begin{aligned}&\displaystyle \lambda v_{i}\left( x\right) -\dfrac{1}{\tau }\int _{0}^{\tau }\left[ \ell _{i}\left( y_{\varepsilon }\left( s\right) ,\alpha _{\varepsilon }\left( s\right) \right) + \partial \varphi \left( x\right) \cdot f_{i}\left( y_{\varepsilon }\left( s\right) ,\alpha _{\varepsilon }\left( s\right) \right) \right] ds\nonumber \\&\quad \ge o_{\varepsilon }\left( 1\right) +\dfrac{o\left( \tau \right) }{\tau }+\dfrac{o_{\varepsilon }\left( 1\right) }{\tau }. \end{aligned}$$
(3.25)

Let \(\varepsilon _{n}\rightarrow 0\) as \(n\rightarrow \infty \) and \(\tau _{m}\rightarrow 0\) as \(m\rightarrow \infty \) such that

$$\begin{aligned} {\displaystyle \left( a_{mn},b_{mn}\right) :=\left( \dfrac{1}{\tau _{m}}\int _{0}^{\tau _{m}}f_{i}\left( y_{\varepsilon _{n}}\left( s\right) ,\alpha _{\varepsilon _{n}}\left( s\right) \right) ds,\dfrac{1}{\tau _{m}}\int _{0}^{\tau _{m}}\ell _{i}\left( y_{\varepsilon _{n}}\left( s\right) ,\alpha _{\varepsilon _{n}}\left( s\right) \right) ds\right) }\longrightarrow \left( a,b\right) \end{aligned}$$

as \(n,m\rightarrow \infty \); such sequences exist because \(f_{i}\) and \(\ell _{i}\) are bounded by M, so the above averages lie in a compact set. In view of \(\left[ A1\right] \) and \(\left[ A2\right] \), we have

$$\begin{aligned}\begin{array}{lll} \max \{ |f_i(y_{\varepsilon _n} (s) , \alpha _{\varepsilon _n} (s)) - f_i(x , \alpha _{\varepsilon _n} (s))|, |\ell _i(y_{\varepsilon _n} (s) , \alpha _{\varepsilon _n} (s)) - \ell _i(x , \alpha _{\varepsilon _n} (s))| \} \\ \qquad \qquad \qquad \qquad \qquad \qquad \le L | y_{\varepsilon _n} (s) - x| \le L | y_{\varepsilon _n} (s) - x_{\varepsilon _n} | + L | x_{\varepsilon _n} - x| \\ \qquad \qquad \qquad \qquad \qquad \qquad \le LM \tau _m + L |x_{\varepsilon _n} - x|. \end{array} \end{aligned}$$

Therefore,

$$\begin{aligned} {\left\{ \begin{array}{ll} f_{i}\left( y_{\varepsilon _{n}}\left( s\right) ,\alpha _{\varepsilon _{n}}\left( s\right) \right) = f_{i}\left( x,\alpha _{\varepsilon _{n}}\left( s\right) \right) +o_{n}\left( 1\right) +o_{m}\left( 1\right) ,\\ \ell _{i}\left( y_{\varepsilon _{n}}\left( s\right) ,\alpha _{\varepsilon _{n}}\left( s\right) \right) = \ell _{i}\left( x,\alpha _{\varepsilon _{n}}\left( s\right) \right) + o_{n}\left( 1\right) +o_{m}\left( 1\right) . \end{array}\right. } \end{aligned}$$

Hence,

$$\begin{aligned} \left( a_{mn},b_{mn}\right)&=\displaystyle \left( \dfrac{1}{\tau _{m}}\int _{0}^{\tau _{m}} f_{i}\left( x,\alpha _{\varepsilon _{n}}\left( s\right) \right) ds ,\dfrac{1}{\tau _{m}}\int _{0}^{\tau _{m}}\ell _{i}\left( x,\alpha _{\varepsilon _{n}}\left( s\right) \right) ds\right) +o_{n}\left( 1\right) \\&\quad +o_{m}\left( 1\right) \in \text {FL}_{i}\left( x\right) +o_{n}\left( 1\right) +o_{m}\left( 1\right) , \end{aligned}$$

since \(\text {FL}_{i}\left( x\right) \) is closed and convex. Let \(n,m\rightarrow \infty \), then \(\left( a,b\right) \in \text {FL}_{i}\left( x\right) \) and therefore there exists \({\overline{a}}\in A_{i}\) such that

$$\begin{aligned}&\lim _{m,n\rightarrow \infty }\left( \dfrac{1}{\tau _{m}}\int _{0}^{\tau _{m}}f_{i}\left( y_{\varepsilon _{n}}\left( s\right) ,\alpha _{\varepsilon _{n}}\left( s\right) \right) ds,\dfrac{1}{\tau _{m}}\int _{0}^{\tau _{m}}\ell _{i}\left( y_{\varepsilon _{n}}\left( s\right) ,\alpha _{\varepsilon _{n}}\left( s\right) \right) ds\right) \nonumber \\&\quad =\left( f_{i}\left( x,{\overline{a}}\right) ,\ell _{i}\left( x,{\overline{a}}\right) \right) . \end{aligned}$$
(3.26)

On the other hand, by Lemma 3.14, \(y_{\varepsilon _{n}}\left( s\right) \in {\mathcal {P}}_{i}\backslash \Gamma \) for all \(s\in \left[ 0,\tau _{m}\right] \). This yields

$$\begin{aligned} y_{\varepsilon _{n}}\left( \tau _{m}\right) =x_{\varepsilon _{n}}+\int _{0}^{\tau _{m}}f_{i}\left( y_{\varepsilon _{n}}\left( s\right) ,\alpha _{\varepsilon _{n}}\left( s\right) \right) ds. \end{aligned}$$

Since \(y_{\varepsilon _{n}}\left( \tau _{m}\right) \cdot e_i >0\), it follows that

$$\begin{aligned} \dfrac{1}{\tau _{m}}\int _{0}^{\tau _{m}} f_{i}\left( y_{\varepsilon _{n}}\left( s\right) ,\alpha _{\varepsilon _{n}}\left( s\right) \right) ds \cdot e_i\ge -\dfrac{x_{\varepsilon _{n}} \cdot e_i}{\tau _{m}} \ge -\dfrac{\left| x_{\varepsilon _{n}} \cdot e_i\right| }{\tau _{m}}. \end{aligned}$$

Letting first \(\varepsilon _{n} \rightarrow 0\) (so that \( x_{\varepsilon _{n}} \cdot e_i \rightarrow x \cdot e_i = 0 \), since \( x\in \Gamma \)) and then \(\tau _{m} \rightarrow 0\), we obtain \(f_{i}\left( x,{\overline{a}}\right) \cdot e_i \ge 0\), thus \({\overline{a}}\in A_{i}^{+}(x)\). Hence, from (3.25) and (3.26), replacing \(\varepsilon \) by \(\varepsilon _{n}\) and \(\tau \) by \(\tau _{m}\), letting first \(\varepsilon _{n} \rightarrow 0 \) and then \(\tau _{m} \rightarrow 0\), we finally obtain

$$\begin{aligned}&\lambda v_{i}\left( x\right) +\max _{a\in A_{i}^{+}(x)}\left\{ -f_{i}\left( x,a\right) \cdot \partial \varphi \left( x\right) -\ell _{i}\left( x,a\right) \right\} \\&\quad \ge \lambda v_{i}\left( x\right) +\left[ -f_{i}\left( x,{\overline{a}}\right) \cdot \partial \varphi \left( x\right) -\ell _{i}\left( x,{\overline{a}}\right) \right] \ge 0. \end{aligned}$$

\(\square \)

3.4 A comparison principle and uniqueness

In this section we establish a comparison principle for the Hamilton–Jacobi system (3.17). From the comparison principle, it easily follows that \(V:=(v_1,\ldots ,v_N,{\mathcal {V}}|_\Gamma )\) is the unique viscosity solution of (3.17).

Theorem 3.16

(Comparison Principle) Under Assumptions [A] and [A3], let U and W be respectively a bounded continuous viscosity subsolution and a bounded continuous viscosity supersolution of (3.17). Then \(U\le W\) componentwise.

We are going to give two proofs of Theorem 3.16. The first one, given below, is inspired by Lions and Souganidis [21, 22] and uses arguments from the theory of PDE. The second one (displayed in the appendix) is inspired by the works of Achdou et al. [3] and Barles et al. [7, 8] and combines arguments from optimal control theory with PDE techniques. Both proofs make use of an important property of viscosity subsolutions, stated in the next lemma.

Lemma 3.17

Under Assumptions [A] and [A3], let \( U = (u_1,\ldots ,u_N,u_\Gamma )\) be a bounded continuous viscosity subsolution of (3.17). Then, for any \(i\in \{1,\ldots ,N\}\), there exists \( r>0 \) such that \(u_i\) is Lipschitz continuous in \(B(\Gamma ,r) \cap {\mathcal {P}}_i\). Therefore, for any \(x\in \Gamma \), there exists a test-function \(\varphi _i \in C^1 ({\mathcal {P}}_i)\) touching \(u_i\) from above at x.

Proof

The proof of Lemma 3.17 is based on the fact that if \( U = (u_1,\ldots ,u_N,u_\Gamma ) \) is a viscosity subsolution of (3.17), then for any \( i\in \{1,\ldots ,N\} \), \( u_i \) is a viscosity subsolution of

$$\begin{aligned} {\left\{ \begin{array}{ll} \lambda u_i (x) + H_i \left( x,\partial u_i (x) \right) &{}=0, \quad \text {if } x\in {\mathcal {P}}_i \backslash \Gamma ,\\ \lambda u_i (x) + H^+_i \left( x,\partial u_i (x) \right) &{}=0, \quad \text {if } x\in \Gamma . \end{array}\right. } \end{aligned}$$
(3.27)

Therefore, the proof is complete by applying the result in [25, Section 3.2.3] (which is based on the proof of Ishii [19]). \(\square \)

A first proof of Theorem 3.16

First of all, we claim that there exists a positive constant \( {\bar{M}} \) such that \( (\phi _1,\ldots ,\phi _N, \phi _\Gamma ) \), where \(\phi _j: {\mathcal {P}}_j \rightarrow {\mathbb {R}}\), \(\phi _j(x)= -|x|^2 - {\bar{M}}\) and \(\phi _\Gamma : \Gamma \rightarrow {\mathbb {R}}\), \(\phi _\Gamma (z)= -|z|^2 - {\bar{M}}\), is a viscosity subsolution of (3.17). Indeed, for any \( j \in \{1,\ldots ,N\} \), since \( f_j \) and \( \ell _j \) are bounded by M, one has

$$\begin{aligned} H_j(x,p) = \max _{a\in A_j} \{-f_j(x,a) \cdot p - \ell _j (x,a)\} \le M (|p| + 1 ). \end{aligned}$$
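Indeed, for each \( a\in A_j \), the Cauchy–Schwarz inequality yields (a one-line check)

$$\begin{aligned} -f_j(x,a) \cdot p - \ell _j (x,a) \le |f_j(x,a)|\, |p| + |\ell _j(x,a)| \le M \left( |p| + 1 \right) , \end{aligned}$$

and the bound follows by taking the maximum over \( a\in A_j \).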

Thus, there exists \( {\bar{M}} >0 \) such that

$$\begin{aligned} \lambda (-|x|^2 - {\bar{M}}) + M (2|x| + 1 ) < 0, \quad \text {for all } x\in {\mathcal {P}}_j. \end{aligned}$$
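To make the choice of \( {\bar{M}} \) explicit, here is one admissible computation, using the elementary inequality \( 2M|x| \le \lambda |x|^2 + M^2/\lambda \):

$$\begin{aligned} \lambda (-|x|^2 - {\bar{M}}) + M (2|x| + 1 ) \le -\lambda {\bar{M}} + \dfrac{M^2}{\lambda } + M < 0 \quad \text {as soon as } {\bar{M}} > \dfrac{M^2}{\lambda ^2} + \dfrac{M}{\lambda }. \end{aligned}$$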

The claim for \( \phi _j \) is proved, and the one for \( \phi _\Gamma \) can be proved similarly. Next, for \( 0<\mu <1 \) with \( \mu \) close to 1, set \( u_j^\mu = \mu u_j + (1-\mu ) \phi _j \) and \( u_\Gamma ^\mu = \mu u_\Gamma + (1-\mu ) \phi _\Gamma \); since the Hamiltonians are convex in the gradient variable, \( (u_1^\mu , \ldots , u_N^\mu , u_\Gamma ^\mu ) \) is again a viscosity subsolution of (3.17). Moreover, since \( u^\mu _j \) and \( u^\mu _\Gamma \) tend to \( -\infty \) as |x| and |z| tend to \( +\infty \) respectively, the functions \( u_j^\mu - w_j \) and \( u_\Gamma ^\mu - w_\Gamma \) attain maximum values \( M_j^\mu \) and \( M_\Gamma ^\mu \) at some points \( {\bar{x}}_j \) and \( {\bar{x}}_\Gamma \) respectively. We argue by contradiction, considering the following two cases:

Case A:

Assume that \( M_i^\mu > M^\mu _\Gamma \) and \( M_i^\mu >0 \) for some \( i\in \{1,\ldots ,N\} \). Since \( (u_1^\mu , \ldots , u_N^\mu , u_\Gamma ^\mu ) \) is a viscosity subsolution of (3.17), by Lemma 3.17, there exists a positive number L such that \(u^\mu _i\) is Lipschitz continuous with Lipschitz constant L in \({\mathcal {P}}_{i}\cap B({\bar{x}}_i,r)\). We consider the function \( \Psi _{i,\varepsilon }:{\mathcal {P}}_{i}\times {\mathcal {P}}_{i} \rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned}&\Psi _{i,\varepsilon }(x,y) := u_i^\mu \left( x\right) -w_{i}\left( y\right) -\dfrac{1}{2\varepsilon } \left( ( -x^0+y^0+\delta \left( \varepsilon \right) )^{2}\right. \nonumber \\&\quad \left. + ( -x^i + y^i + \delta \left( \varepsilon \right) )^{2} \right) -|x-{\bar{x}}_i|^2, \end{aligned}$$
(3.28)

where \( \varepsilon >0 \), \(\delta (\varepsilon ) :=\left( L+1\right) \varepsilon \) and \(x=(x^i,x^0),y=(y^i,y^0)\in {\mathbb {R}}e_i \times {\mathbb {R}}e_0\). It is clear that \(\Psi _{i,\varepsilon }\) attains its maximum \(M_{\varepsilon }\) at \(\left( x_{\varepsilon },y_{\varepsilon }\right) \in {\mathcal {P}}_{i}\times {\mathcal {P}}_{i}\). We claim that

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle M_{\varepsilon } \rightarrow \max _{{\mathcal {P}}_i} \{u^\mu _i - w_i\} = u^\mu _i ({\bar{x}}_i) - w_i ({\bar{x}}_i) ,\\ x_\varepsilon , y_\varepsilon \rightarrow {\bar{x}}_i \text { and } \dfrac{\left( x_{\varepsilon }-y_{\varepsilon }\right) ^2}{\varepsilon } \rightarrow 0, \end{array}\right. } \text {as } \varepsilon \rightarrow 0. \end{aligned}$$
(3.29)

Indeed, we have

$$\begin{aligned} M_{\varepsilon }= & {} u_{i}^\mu \left( x_{\varepsilon }\right) -w_{i}\left( y_{\varepsilon }\right) -\dfrac{\left( -x_{\varepsilon }^0 + y_{\varepsilon }^0 + \delta \left( \varepsilon \right) \right) ^2 + \left( - x_{\varepsilon }^i + y_{\varepsilon }^i + \delta \left( \varepsilon \right) \right) ^2}{2\varepsilon } \nonumber \\&- |x_\varepsilon - {\bar{x}}_i|^2\ge u_{i}^\mu \left( {\bar{x}}_i\right) -w_{i}\left( {\bar{x}}_i\right) -\left( L+1\right) ^{2}\varepsilon . \end{aligned}$$
(3.30)

Since \(M_i^\mu = u_{i}^\mu ({\bar{x}}_i)-w_{i} ({\bar{x}}_i )>0\), \( M_\varepsilon \) is positive when \(\varepsilon \) is small enough. Furthermore, since \( w_i \) is bounded and \( u_i^\mu \) is bounded from above, the quantity \( |x_\varepsilon - {\bar{x}}_i|^2\) is bounded and \( x_\varepsilon - y_\varepsilon \rightarrow 0 \) as \( \varepsilon \rightarrow 0 \). Hence, after extraction of a subsequence, \(x_{\varepsilon },y_{\varepsilon }\rightarrow {\bar{x}}\) as \(\varepsilon \rightarrow 0\), for some \({\bar{x}} \in {\mathcal {P}}_{i}\).

Thus, from (3.30) we obtain

$$\begin{aligned} M_i^\mu \ge u_i^\mu ({\bar{x}}) - w_i ({\bar{x}}) - |{\bar{x}} - {\bar{x}}_i|^2 \ge \limsup _{\varepsilon \rightarrow 0} M_{\varepsilon } \ge \liminf _{\varepsilon \rightarrow 0} M_{\varepsilon } \ge M_i^\mu . \end{aligned}$$

This implies that \( M_\varepsilon \rightarrow M_i^\mu \), \( (x_\varepsilon - y_\varepsilon )^2 / \varepsilon \rightarrow 0 \) as \( \varepsilon \rightarrow 0 \) and \( {\bar{x}} = {\bar{x}}_i \). The claim is proved.

From now on in this proof, we only consider the case \({\bar{x}}_i \in \Gamma \); otherwise, the proof follows from the classical theory (see [4, 6]). We claim that \(x_{\varepsilon } \notin \Gamma \) for \( \varepsilon \) small enough. Indeed, assume by contradiction that \(x_{\varepsilon } \in \Gamma \), i.e. \( x_{\varepsilon }^i = 0 \), and consider the two cases below.

(a):

If \(y_{\varepsilon } \notin \Gamma \), then set \( z_{\varepsilon } = (y^i_{\varepsilon },x^0_{\varepsilon }) \); we have

$$\begin{aligned} M_{\varepsilon }&= u_{i}^\mu (x_{\varepsilon } )-w_{i} (y_{\varepsilon } )-\dfrac{\left( -|x_{\varepsilon }^0|+|y_{\varepsilon }^0|+\delta \left( \varepsilon \right) \right) ^2 + \left( |y_{\varepsilon }^i|+\delta \left( \varepsilon \right) \right) ^2 }{2\varepsilon } -|x_{\varepsilon } - {\bar{x}}_i|^2\\&\ge u_{i}^\mu (z_{\varepsilon } )-w_{i} (y_{\varepsilon })-\dfrac{\left( -|x_{\varepsilon }^0|+|y_{\varepsilon }^0|+\delta \left( \varepsilon \right) \right) ^2 + \left( -|y_{\varepsilon }^i|+|y_{\varepsilon }^i|+\delta \left( \varepsilon \right) \right) ^2 }{2\varepsilon }\\&\quad - |z_\varepsilon - {\bar{x}}_i|^2. \end{aligned}$$

Since \(u_i^\mu \) is Lipschitz continuous in \(B\left( {\bar{x}}_i,r\right) \cap {\mathcal {P}}_{i}\), we see that for \(\varepsilon \) small enough

$$\begin{aligned} L|y^i_{\varepsilon }|\ge u^\mu _{i}\left( x_{\varepsilon }\right) -u^\mu _{i}\left( z_{\varepsilon }\right)&\ge \dfrac{|y^i_{\varepsilon }|^{2}}{2\varepsilon }+\dfrac{|y^i_{\varepsilon }|\delta \left( \varepsilon \right) }{\varepsilon } + |x_{\varepsilon } - {\bar{x}}_i|^2 - |z_\varepsilon - {\bar{x}}_i|^2\\&\ge \dfrac{|y_{\varepsilon }^i|\delta \left( \varepsilon \right) }{\varepsilon } - |y^i_{\varepsilon }| | x_\varepsilon + z_\varepsilon - 2{\bar{x}}_i|. \end{aligned}$$

Therefore, if \(y^i_{\varepsilon }\not =0\), then \(L\ge L + 1 - | x_\varepsilon + z_\varepsilon - 2{\bar{x}}_i| \), which leads to a contradiction, since \( x_\varepsilon \) and \( z_\varepsilon \) tend to \( {\bar{x}}_i \) as \( \varepsilon \rightarrow 0 \).

(b):

If \(y_{\varepsilon }\in \Gamma \), i.e. \(y_{\varepsilon }^i = 0\), then set \( z_{\varepsilon } = (\varepsilon ,x_{\varepsilon }^0) \); we have

$$\begin{aligned} M_{\varepsilon }&= u^\mu _{i}\left( x_{\varepsilon }\right) -w_{i}\left( y_{\varepsilon }\right) -\dfrac{\left( -|x_{\varepsilon }^0|+|y_{\varepsilon }^0|+\delta \left( \varepsilon \right) \right) ^2 + \delta \left( \varepsilon \right) ^2 }{2\varepsilon } - |x_\varepsilon - {\bar{x}}_i|^2 \\&\ge u^\mu _{i}\left( z_{\varepsilon }\right) - w_{i}\left( y_{\varepsilon }\right) - \dfrac{\left( -|x_{\varepsilon }^0|+|y_{\varepsilon }^0|+\delta \left( \varepsilon \right) \right) ^2+\left( -\varepsilon +\delta \left( \varepsilon \right) \right) ^2 }{2\varepsilon } - |z_\varepsilon - {\bar{x}}_i|^2 . \end{aligned}$$

Since \(u^\mu _i\) is Lipschitz continuous in \(B\left( {\bar{x}}_i,r\right) \cap {\mathcal {P}}_{i}\), we obtain

$$\begin{aligned} L\varepsilon \ge u_i^\mu (x_{\varepsilon })- u^\mu _{i} (z_{\varepsilon }) \ge - \dfrac{\varepsilon }{2} + \delta (\varepsilon )- \varepsilon | x_\varepsilon + z_\varepsilon - 2{\bar{x}}_i|. \end{aligned}$$

This implies that \(L\ge L + 1/2 - | x_\varepsilon + z_\varepsilon - 2{\bar{x}}_i|\), which yields a contradiction since \( x_\varepsilon \), \( z_\varepsilon \) tend to \( {\bar{x}}_i \) as \( \varepsilon \rightarrow 0 \).

The second claim is proved. We now consider the following three possible cases:

Case A.1:

There exists a subsequence of \( \{y_{\varepsilon } \} \) (still denoted by \( \{y_{\varepsilon } \} \)) such that \( y_{\varepsilon } \in \Gamma \) and \( w_i (y_{\varepsilon }) \ge w_\Gamma (y_{\varepsilon })\). Since \( x_\varepsilon ,y_\varepsilon \rightarrow {\bar{x}}_i \) as \( \varepsilon \rightarrow 0 \) and \( u_i^\mu \) is continuous, for \( \varepsilon \) small enough, we have

$$\begin{aligned} w_\Gamma (y_\varepsilon ) \le w_i (y_\varepsilon ) < u_i^\mu (y_\varepsilon ) = u_i^\mu (x_\varepsilon ) +o_\varepsilon (1) \le u_\Gamma ^\mu (y_\varepsilon ). \end{aligned}$$
(3.31)

Recall that the second inequality of (3.31) holds since \( M_i^\mu >0 \) and the last inequality of (3.31) holds since \( u^\mu _i \) and \( u^\mu _\Gamma \) satisfy

$$\begin{aligned} \lambda u^\mu _{i} (x)+\max \left\{ H_{i}^{+}\left( x,\partial u^\mu _i (x) \right) , -\lambda u^\mu _\Gamma (x) \right\} \le 0, \end{aligned}$$
(3.32)

in the viscosity sense. From (3.31),

$$\begin{aligned} u_i^\mu (x_\varepsilon ) - w_i (y_\varepsilon ) + o_\varepsilon (1) \le u_\Gamma ^\mu (y_\varepsilon ) - w_\Gamma (y_\varepsilon ) \le M_\Gamma ^\mu . \end{aligned}$$

Letting \( \varepsilon \rightarrow 0 \) and using (3.29), we get \( \sup \{u_i^\mu -w_i\} = M_i^\mu \le M_\Gamma ^\mu \), which contradicts the assumption \( M_i^\mu > M_\Gamma ^\mu \) of Case A.

Case A.2:

There exists a subsequence of \( \{y_{\varepsilon } \} \) such that \( y_{\varepsilon } \in \Gamma \) and

$$\begin{aligned} \lambda w_{i}\left( y_{\varepsilon }\right) +H^+_{i}\left( y_{\varepsilon },\dfrac{-x_{\varepsilon }+y_{\varepsilon }+\delta (\varepsilon )}{\varepsilon } \right) \ge 0. \end{aligned}$$

On the one hand, since \( H_i^+(x,p)\le H_i (x,p) \) for all \( x\in \Gamma \) and \( p\in {\mathbb {R}}e_i \times {\mathbb {R}}e_0 \), we have

$$\begin{aligned} \lambda w_{i}\left( y_{\varepsilon }\right) +H_{i}\left( y_{\varepsilon },\dfrac{-x_{\varepsilon }+y_{\varepsilon }+\delta \left( \varepsilon \right) }{\varepsilon } \right) \ge 0. \end{aligned}$$
(3.33)

On the other hand, we have a viscosity inequality for \( u^\mu _i \) at \( x_{\varepsilon } \in {\mathcal {P}}_i \backslash \Gamma \):

$$\begin{aligned} \lambda u_{i}^\mu \left( x_{\varepsilon }\right) +H_{i}\left( x_{\varepsilon },\dfrac{-x_{\varepsilon }+y_{\varepsilon } + \delta \left( \varepsilon \right) }{\varepsilon } + 2 (x_{\varepsilon } - {\bar{x}}_i)\right) \le 0. \end{aligned}$$
(3.34)

Subtracting (3.33) from (3.34), we obtain

$$\begin{aligned}&\lambda (u^\mu _{i}\left( x_{\varepsilon }\right) -w_{i}\left( y_{\varepsilon }\right) ) \\&\quad \le H_{i}\left( y_{\varepsilon },\dfrac{-x_{\varepsilon }+y_{\varepsilon }+\delta \left( \varepsilon \right) }{\varepsilon } \right) -H_{i}\left( x_{\varepsilon },\dfrac{-x_{\varepsilon }+y_{\varepsilon }+\delta \left( \varepsilon \right) }{\varepsilon } + 2 (x_\varepsilon - {\bar{x}}_i) \right) . \end{aligned}$$

In view of [A1] and [A2], there exists \(C_{i}>0\) such that for any \(x,y\in {\mathcal {P}}_{i}\) and \(p,q\in {\mathbb {R}}e_i \times {\mathbb {R}}e_0 \)

$$\begin{aligned} \left| H_{i}\left( x,p\right) -H_{i}\left( y,q\right) \right|&\le \left| H_{i}\left( x,p\right) -H_{i}\left( y,p\right) \right| +\left| H_{i}\left( y,p\right) -H_{i}\left( y,q\right) \right| \\&\le C_i \left| x-y\right| \left( 1+\left| p\right| \right) + C_i \left| p-q\right| . \end{aligned}$$

This, in turn, yields

$$\begin{aligned}&\lambda (u^\mu _{i}\left( x_{\varepsilon }\right) -w_{i}\left( y_{\varepsilon }\right) ) \\&\quad \le C_i \left[ \left| x_{\varepsilon }-y_{\varepsilon }\right| (1 + \dfrac{\left| -x_{\varepsilon } + y_{\varepsilon }+\delta (\varepsilon ) \right| }{\varepsilon } + 2 |x_\varepsilon - {\bar{x}}_i| ) + 2 |x_\varepsilon - {\bar{x}}_i |\right] . \end{aligned}$$

Letting \(\varepsilon \) tend to 0 and applying (3.29), we obtain that \(\max _{{\mathcal {P}}_i} \{u^\mu _{i}-w_{i}\} = M_i^\mu \le 0\), which leads to a contradiction.

Case A.3:

There exists a subsequence of \( \{y_\varepsilon \} \) such that \( y_{\varepsilon } \in {\mathcal {P}}_i \backslash \Gamma \). Since the inequalities (3.33) and (3.34) still hold, a contradiction is obtained by arguments similar to those of the previous case.

Case B:

Assume that

$$\begin{aligned} M_\Gamma ^\mu \ge \max _j \{ M_j^\mu \} \text { and } M_\Gamma ^\mu >0 . \end{aligned}$$
(3.35)

We consider the function

$$\begin{aligned} \Phi _{\varepsilon }:\Gamma \times \Gamma&\longrightarrow {\mathbb {R}}\\ \left( \zeta ,\xi \right)&\longmapsto u_\Gamma ^\mu (\zeta ) - w_\Gamma (\xi ) - \dfrac{|\zeta - \xi |^2}{2\varepsilon } - |\zeta - {\bar{x}}_\Gamma |^2. \end{aligned}$$

By classical arguments, \(\Phi _{\varepsilon }\) attains its maximum \(K_{\varepsilon }\) at \(\left( \zeta _{\varepsilon }, \xi _{\varepsilon }\right) \in \Gamma \times \Gamma \) and

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle K_{\varepsilon } \rightarrow M_\Gamma ^\mu = \max _\Gamma \{u^\mu _\Gamma - w_\Gamma \} = u^\mu _\Gamma ({\bar{x}}_\Gamma ) - w_\Gamma ({\bar{x}}_\Gamma ) ,\\ \zeta _\varepsilon , \xi _\varepsilon \rightarrow {\bar{x}}_\Gamma \text { and } \dfrac{\left( \zeta _{\varepsilon } - \xi _{\varepsilon }\right) ^2}{\varepsilon } \rightarrow 0, \end{array}\right. } \text {as } \varepsilon \rightarrow 0. \end{aligned}$$
(3.36)

We consider the following two cases:

Case B.1:

There exists a subsequence \( \{\xi _\varepsilon \} \) (still denoted by \( \{\xi _\varepsilon \} \)) such that

$$\begin{aligned} \lambda w_\Gamma (\xi _\varepsilon ) + H_\Gamma \left( \xi _\varepsilon , \dfrac{\zeta _\varepsilon - \xi _\varepsilon }{\varepsilon } \right) \ge 0. \end{aligned}$$

We also have a viscosity inequality for \( u_\Gamma ^\mu \) at \( \zeta _\varepsilon \)

$$\begin{aligned} \lambda u_\Gamma ^\mu (\zeta _\varepsilon ) + H_\Gamma \left( \zeta _\varepsilon , \dfrac{\zeta _\varepsilon - \xi _\varepsilon }{\varepsilon } + 2(\zeta _\varepsilon - {\bar{x}}_\Gamma ) \right) \le 0. \end{aligned}$$

By (3.36), \( (\zeta _\varepsilon - \xi _\varepsilon )^2 / \varepsilon \rightarrow 0 \) and \( \zeta _\varepsilon , \xi _\varepsilon \rightarrow {\bar{x}}_\Gamma \) as \( \varepsilon \rightarrow 0 \). Subtracting the two above inequalities and sending \( \varepsilon \) to 0, we obtain \( u_\Gamma ^\mu ({\bar{x}}_\Gamma ) - w_\Gamma ({\bar{x}}_\Gamma ) = M_\Gamma ^\mu \le 0\), which contradicts (3.35).

Case B.2:

There exist a subsequence \( \{\xi _\varepsilon \} \) and \( k\in \{1,\ldots ,N\} \) such that

$$\begin{aligned} w_\Gamma (\xi _\varepsilon ) \ge \min _j \{ w_j(\xi _\varepsilon ) + c_j (\xi _\varepsilon ) \} = w_k(\xi _\varepsilon ) + c_k (\xi _\varepsilon ). \end{aligned}$$
(3.37)

Since \( \zeta _\varepsilon ,\xi _\varepsilon \rightarrow {\bar{x}}_\Gamma \) as \( \varepsilon \rightarrow 0 \) (by (3.36)) and \( u_\Gamma ^\mu \) is continuous, for \( \varepsilon \) small enough, we have

$$\begin{aligned} w_k (\xi _\varepsilon ) + c_k (\xi _\varepsilon ) \le w_\Gamma (\xi _\varepsilon ) < u_\Gamma ^\mu (\xi _\varepsilon ) = u_\Gamma ^\mu (\zeta _\varepsilon ) + o_\varepsilon (1) \le u_k^\mu (\xi _\varepsilon ) + c_k (\xi _\varepsilon ). \end{aligned}$$

Recall that the second inequality holds since \( M_\Gamma ^\mu >0 \) and the last one comes from the fact that \( u_\Gamma ^\mu \) satisfies

$$\begin{aligned} \lambda u_{\Gamma }^\mu (x) + \max \left\{ -\lambda \min _{i=1,\ldots ,N} \{u^\mu _i (x) + c_i(x) \} , H_\Gamma \left( x,\dfrac{\partial u_\Gamma ^\mu }{\partial e_0} (x) \right) \right\} \le 0, \end{aligned}$$

in the viscosity sense. This implies that

$$\begin{aligned} u_\Gamma ^\mu (\zeta _\varepsilon ) - w_\Gamma (\xi _\varepsilon ) + o_\varepsilon (1) \le u_k^\mu (\xi _\varepsilon ) - w_k (\xi _\varepsilon ) \le M_k^\mu . \end{aligned}$$
(3.38)

Letting \( \varepsilon \rightarrow 0 \) and applying (3.36), we get \( M_\Gamma ^\mu \le u_k^\mu ({\bar{x}}_\Gamma ) - w_k ({\bar{x}}_\Gamma ) \le M_k^\mu \). Together with (3.35), this implies

$$\begin{aligned} M_\Gamma ^\mu = M_k^\mu =u_k^\mu ({\bar{x}}_\Gamma ) -w_k ({\bar{x}}_\Gamma )>0. \end{aligned}$$
(3.39)

Now we apply arguments similar to those of Case A, considering the function \( \Psi _{k,\varepsilon }:{\mathcal {P}}_{k}\times {\mathcal {P}}_{k} \rightarrow {\mathbb {R}}\) defined in (3.28) with the index k replacing the index i. Remark that we may take \( {\bar{x}}_k = {\bar{x}}_\Gamma \), since \( {\bar{x}}_\Gamma \) is a maximum point of \( u_k^\mu - w_k \) by (3.39). This leads to three cases B.2.1, B.2.2 and B.2.3, similar to A.1, A.2 and A.3, respectively. If Case B.2.1 occurs, i.e. there exists a subsequence of \( \{y_\varepsilon \} \) (still denoted by \( \{y_\varepsilon \} \)) such that \( w_k(y_\varepsilon ) \ge w_\Gamma (y_\varepsilon )\), then letting \( \varepsilon \rightarrow 0 \), one gets \( w_k ({\bar{x}}_\Gamma ) \ge w_\Gamma ({\bar{x}}_\Gamma )\). On the other hand, letting \( \varepsilon \rightarrow 0 \) in (3.37), one also gets \( w_\Gamma ({\bar{x}}_\Gamma ) \ge w_k({\bar{x}}_\Gamma ) + c_k ({\bar{x}}_\Gamma ) > w_k({\bar{x}}_\Gamma )\), which leads to a contradiction. Finally, the two last cases are handled by the same arguments as in the proofs of Case A.2 and Case A.3.

Finally, we get \( M_\Gamma ^\mu \le 0 \) and \( M_i^\mu \le 0\) for all \( i\in \{1,\ldots ,N\} \) and \( \mu \in (0,1) \), \( \mu \) close to 1. We conclude by letting \( \mu \) tend to 1. \(\square \)

Corollary 3.18

The vector \(V=(v_1,\ldots ,v_N, {\mathcal {V}}|_\Gamma ) \) associated with the value function \({\mathcal {V}}\) is the unique viscosity solution of (3.17).

4 Hamilton–Jacobi system under a moderate controllability condition near the interface

In this section we derive the Hamilton–Jacobi system (HJ) associated with the above optimal control problem and prove that the value function given by (2.2) is the unique viscosity solution of that (HJ) system, under the condition \([{\tilde{A}}3]\) below, which is weaker than the strong controllability condition [A3] used above.

  • \( [{\tilde{A}}3]\) (Moderate controllability) There exist positive numbers \(\delta \) and R such that

    • for any \( i=1,\ldots ,N \) and for \( x\in \Gamma \)

      $$\begin{aligned}{}[-\delta , \delta ] \subset \{ f_i(x,a) \cdot e_i : a\in A_i \}. \end{aligned}$$
      (4.1)
    • for any \( x\in \Gamma \), there exists \( j\in \{1,\ldots ,N\} \) such that

      $$\begin{aligned} {[}-\delta , \delta ] \subset \{ f_j(x,a) \cdot e_0 : a\in A_j^{\Gamma } \}. \end{aligned}$$
      (4.2)
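As a simple illustration of \( [{\tilde{A}}3] \) (a toy example of ours, not part of the original assumptions): if \( A_j \) is the closed unit ball of \( {\mathbb {R}}^{2} \), \( a=(a^{1},a^{2}) \), and \( f_j(x,a)=\delta (a^{1} e_j + a^{2} e_0) \) near \( \Gamma \), then \( \{ f_j(x,a) \cdot e_j : a\in A_j \} = [-\delta ,\delta ] \), so (4.1) holds; moreover, the controls with \( a^{1}=0 \) keep the dynamics tangent to \( \Gamma \) and give \( \{ f_j(x,a) \cdot e_0 : a\in A_j^{\Gamma } \} \supset [-\delta ,\delta ] \), so (4.2) holds as well.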

Remark 4.1

  1.

    \([{\tilde{A}}3] \) allows us to construct an admissible control law and a corresponding trajectory that goes from one point on \( B(\Gamma ,R)\cap {\mathcal {P}}_i \) to another one on \( \Gamma \), see Lemma 4.4 below.

  2.

    Assumption \( [{\tilde{A}}3] \) is somewhat stronger than assumption (4.1) alone, called normal controllability (and denoted \( [{\tilde{H}}3] \)), which is used in [25] for the case without entry costs. With only \( [{\tilde{H}}3] \), owing to the effect of the entry costs, we could not prove that our value function is continuous on \( \Gamma \), nor establish a corresponding Hamilton–Jacobi system. The moderate controllability assumption \([{\tilde{A}}3]\) allows us to overcome the difficulties induced by the entry costs.

Remark 4.2

Since for any \( i \in \{1,\ldots ,N\} \), \(f_i\) is Lipschitz continuous with respect to the state variable (from Assumption [A1] ), it is easy to check that there exists a positive number R such that for all \( i\in \{1,\ldots ,N\} \) and \( x\in B(\Gamma ,R) \cap {\mathcal {P}}_i \)

$$\begin{aligned} \left[ -\dfrac{\delta }{2},\dfrac{\delta }{2}\right] \subset \{ f_i(x,a) \cdot e_i : a\in A_i \}. \end{aligned}$$

Under Assumptions  [A] and \( [{\tilde{A}}3] \), following the arguments used in [25, Theorem 3.3.1], it holds that for any \( x\in {\mathcal {S}}\), the set \( Y_x \) is not empty. Hence, we can define the set of admissible controlled trajectories starting from the initial datum x to be

$$\begin{aligned} {\mathcal {T}}_x=\left\{ \left( y_{x},\alpha \right) \in L_{loc}^{\infty }\left( \mathbb {R}^{+};\mathcal {M}\right) :y_{x}\in Lip\left( \mathbb {R}^{+};{\mathcal {S}}\right) \hbox { and }y_{x}\left( t\right) = x+\int _{0}^{t}f\left( y_{x}\left( s\right) ,\alpha \left( s\right) \right) ds\right\} . \end{aligned}$$

The cost functional \( {\mathcal {J}}\) associated to the trajectory \( (y_{x} , \alpha ) \in {\mathcal {T}}_x\) is defined by

$$\begin{aligned} {\mathcal {J}}(x,\alpha )=\int _0^\infty \ell (y_{x} (t),\alpha (t)) e^{-\lambda t} dt + \sum ^N_{i=1} \sum _{k\in K_i} c_i (x_{ik}) e^{-\lambda t_{ik}}, \end{aligned}$$

and

$$\begin{aligned} {\mathscr {V}}(x) = \inf _{(y_{x},\alpha ) \in {\mathcal {T}}_x} {\mathcal {J}}(x,\alpha ). \end{aligned}$$

In contrast with Lemma 3.3, we cannot use classical control theory arguments to prove that the value function \( {\mathscr {V}}\) is continuous on \( {\mathcal {P}}_i \backslash \Gamma \) for all \( i\in \{1,\ldots ,N\} \). The main problem is that, unlike under Assumption [A3], under the new assumption \( [{\tilde{A}}3]\), for x, z close to \( \Gamma \), there is possibly no admissible trajectory \( y_{x} \in {\mathcal {T}}_x \) steering x to z. We will later prove that \( {\mathscr {V}}\) is continuous on \({\mathcal {P}}_i \backslash \Gamma \) for any i by using the comparison principle, but for the moment, \( {\mathscr {V}}|_{{\mathcal {P}}_i \backslash \Gamma } \) is a priori a discontinuous function. In order to deal with such a discontinuity, we use the following notions.

Definition 4.3

Let \( i\in \{1,\ldots ,N\} \) and \( u_i: {\mathcal {P}}_i \backslash \Gamma \rightarrow {\mathbb {R}}\) be a bounded function.

  • The upper semi-continuous envelope of \( u_i \) is defined by

    $$\begin{aligned} u^{\star }_i (x) = \limsup _{z\rightarrow x} u_i(z). \end{aligned}$$
  • The lower semi-continuous envelope of \( u_i \) is defined by

    $$\begin{aligned} u_{i\star } (x) = \liminf _{z\rightarrow x} u_i(z). \end{aligned}$$

Notice that both functions \( u_i^\star \) and \( u_{i \star } \) are defined on \( {\mathcal {P}}_i \) instead of \( {\mathcal {P}}_i \backslash \Gamma \).

Lemma 4.4

Under Assumptions [A] and \([{\tilde{A}}3]\),

(a):

there exists a positive constant C such that for all \( z\in \Gamma \) and \( x \in {\mathcal {S}}\cap B(\Gamma ,R)\), there exist \( (y_x,\alpha _{x,z}) \in {\mathcal {T}}_{x}\) and \( \tau _{x,z} \) such that \( z= y_x (\tau _{x,z}) \) and \( \tau _{x,z} \le C |x-z| \),

(b):

\( {\mathscr {V}}|_\Gamma \) is Lipschitz continuous on \( \Gamma \).

Proof

We first note that, by classical arguments, (b) is a direct consequence of (a) (see [4]). To prove (a), we consider the following two cases.

Case 1:

\( x, z\in \Gamma \). From (4.2), we can find \( (y_x,\alpha _{x,z}) \in {\mathcal {T}}_{x}\) for which \(\alpha _{x,z}\) satisfies

$$\begin{aligned} f\left( x+\dfrac{z-x}{|z-x|} \delta t, \alpha _{x,z} (t) \right) = \delta \dfrac{z-x}{|z-x|},\quad \text {for all } t\le \dfrac{|z-x|}{\delta }. \end{aligned}$$

It is easy to check that \( y_x (|z-x|/\delta ) =z \), i.e. \( \tau _{x,z} = |z-x|/\delta \).
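Indeed, a direct integration of the constant dynamics gives (a one-line check)

$$\begin{aligned} y_x(t) = x + \int _0^t \delta \dfrac{z-x}{|z-x|}\, ds = x + \delta t \, \dfrac{z-x}{|z-x|}, \qquad \text {so that } y_x\left( \dfrac{|z-x|}{\delta }\right) = x + (z-x) = z. \end{aligned}$$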

Case 2:

\( x = (x^i,x^0) \in {\mathcal {P}}_i \backslash \Gamma \) and \( z = (0,z^0)\in \Gamma \). On the one hand, from (4.1) and (4.2), we can pick \( (y_x,\alpha _{x,z}) \in {\mathcal {T}}_{x}\) for which \(\alpha _{x,z}\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} f\left( y_x(t) , \alpha _{x,z} (t) \right) \cdot e_i = - \dfrac{\delta }{2}, &{} t\le \dfrac{2 x^i}{\delta },\\ f\left( y_x(t) , \alpha _{x,z} (t) \right) = \delta \dfrac{z- y_x(2 x^i / \delta ) }{|z- y_x(2 x^i / \delta )|}, &{} \dfrac{2 x^i}{\delta } \le t\le \dfrac{2 x^i}{\delta } + \dfrac{|z- y_x(2 x^i / \delta ) |}{ \delta }. \end{array}\right. } \end{aligned}$$

This simply means that \( 2x^i / \delta \) is the exit time of \( y_x \) from \( {\mathcal {P}}_i \backslash \Gamma \) and \( \tau _{x,z} = \frac{2 x^i}{\delta } + \frac{|z- y_x(2 x^i / \delta ) |}{ \delta } \). On the other hand, let \( {\bar{x}}= (0,x^0) \in \Gamma \), since f is bounded by M, it holds that

$$\begin{aligned} | z - y_x ( 2 x^i /\delta ) | \le |z- {\bar{x}}| + |{\bar{x}} - y_x ( 2 x^i /\delta ) | \le |z-x|+ \dfrac{ x^i(4M^2 - \delta ^2)^{1/2}}{\delta } . \end{aligned}$$
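A short justification of the last term: on \( [0, 2x^i/\delta ] \), the velocity satisfies \( {\dot{y}}_x \cdot e_i = -\delta /2 \) and \( |{\dot{y}}_x| \le M \), hence \( |{\dot{y}}_x \cdot e_0| \le (M^2 - \delta ^2/4)^{1/2} \) and

$$\begin{aligned} |{\bar{x}} - y_x ( 2 x^i /\delta ) | \le \dfrac{2x^i}{\delta } \left( M^2 - \dfrac{\delta ^2}{4}\right) ^{1/2} = \dfrac{x^i (4M^2 - \delta ^2)^{1/2}}{\delta }, \end{aligned}$$

while \( |z - {\bar{x}}| \le |z-x| \) because \( {\bar{x}} \) is the orthogonal projection of x onto \( \Gamma \).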

Since \( x^i \le |z-x| \), we finally obtain

$$\begin{aligned} \tau _{x,z} = \dfrac{2 x^i}{\delta } + \dfrac{|z- y_x(2 x^i / \delta ) |}{ \delta } \le \dfrac{|z-x|}{\delta } \left( 3+ \dfrac{(4M^2 - \delta ^2)^{1/2}}{\delta } \right) . \end{aligned}$$

\(\square \)

4.1 Value function on the interface

Theorem 4.5

Under Assumptions [A] and \([{\tilde{A}}3]\), the restriction of the value function \( {\mathscr {V}}\) to \(\Gamma \), \({\mathscr {V}}|_\Gamma \), is the unique viscosity solution of the equation

$$\begin{aligned} \lambda u_\Gamma (x)+ \max \left\{ -\lambda \min _{i=1,\ldots ,N}\{v_{i} (x) +c_i (x) \}, H_\Gamma \left( x,\dfrac{\partial u_\Gamma }{\partial e_0} (x)\right) \right\} = 0, \quad x\in \Gamma .\nonumber \\ \end{aligned}$$
(4.3)

Here a function \( u_\Gamma : \Gamma \rightarrow {\mathbb {R}}\) is called a viscosity solution of (4.3) if it satisfies

$$\begin{aligned}&\lambda u_\Gamma (x)+ \max \left\{ -\lambda \min _{i=1,\ldots ,N} \{v_i^\star (x) +c_i (x) \}, H_\Gamma \left( x,\dfrac{\partial u_\Gamma }{\partial e_0} (x)\right) \right\} \nonumber \\&\quad \le 0, \quad \text {in the viscosity sense},\end{aligned}$$
(4.4)
$$\begin{aligned}&\lambda u_\Gamma (x)+ \max \left\{ -\lambda \min _{i=1,\ldots ,N} \{v_{i \star } (x) +c_i (x) \}, H_\Gamma \left( x,\dfrac{\partial u_\Gamma }{\partial e_0} (x)\right) \right\} \nonumber \\&\quad \ge 0, \quad \text {in the viscosity sense}. \end{aligned}$$
(4.5)

The proof of Theorem 4.5 is a consequence of the next three lemmas which are similar to Lemma 3.7, Lemma 3.8 and Lemma 3.9, respectively.

Lemma 4.6

The restriction of the value function \({\mathscr {V}}\) to the interface \(\Gamma \), \({\mathscr {V}}|_{\Gamma }\), satisfies

$$\begin{aligned} \lambda {\mathscr {V}}|_\Gamma (x)+ H_\Gamma \left( x,\dfrac{\partial {\mathscr {V}}|_\Gamma }{\partial e_0} (x)\right) \le 0, \quad x\in \Gamma , \end{aligned}$$

in the viscosity sense.

Proof

The proof of this lemma is similar to the proof of Lemma 3.7. \(\square \)

Lemma 4.7

For all \(x\in \Gamma \)

$$\begin{aligned} \max _{i=1,\ldots ,N} \{ v_i^\star (x) \} \le {\mathscr {V}}|_\Gamma (x) \le \min _{i=1,\ldots ,N} \{v_{i}^{\star } (x) +c_i (x) \}. \end{aligned}$$

Proof

Let \( i\in \{1,\ldots ,N\} \). It suffices to prove that (a) \(v_i^\star (x) \le {\mathscr {V}}|_\Gamma (x)\) and (b) \({\mathscr {V}}|_\Gamma (x) \le v_{i}^{\star } (x) + c_i (x) \), which we do below.

  (a)

    There exists a sequence \( \{x_n\}_{n\in {\mathbb {N}}} \subset ({\mathcal {P}}_i \backslash \Gamma ) \cap B(\Gamma ,R) \) such that \( x_n \rightarrow x \) and \( v_i (x_n) \rightarrow v_i^\star (x) \) as \( n \rightarrow \infty \). From Lemma 4.4, there exists a positive constant C such that, for each n, there exist \( (y_{x_n}, \alpha _n) \in {\mathcal {T}}_{x_n}\) and \( \tau _n \) with \( y_{x_n} (\tau _n) = x \) and \( \tau _n \le C |x_n - x| \). This implies

    $$\begin{aligned} v_i (x_n)= & {} {\mathscr {V}}(x_n) \le \int _0^{\tau _n} \ell (y_{x_n} (s), \alpha _n (s) ) e^{-\lambda s} ds + {\mathscr {V}}(x) e^{-\lambda \tau _n}\\\le & {} MC |x_n - x| + {\mathscr {V}}(x) e^{-\lambda \tau _n}. \end{aligned}$$

    Letting \( n\rightarrow \infty \), one gets \( v_i^\star (x) \le {\mathscr {V}}(x) \).

  (b)

    Assumption \([{\tilde{A}}3]\) allows us to pick \( (y_{n},\alpha _n) \in {\mathcal {T}}_{x}\) and \( \tau _n \rightarrow 0 \), where \(\alpha _n\) satisfies

    $$\begin{aligned} f(y_n(t), \alpha _n (t)) \cdot e_i = \dfrac{\delta }{2} >0, \quad \text {for all } t\le \tau _n. \end{aligned}$$

    Let \( x_n := y_n(\tau _n) \); then \( x_n\in {\mathcal {P}}_i \backslash \Gamma \) and \( x_n \rightarrow x \) as \( n \rightarrow \infty \). This implies

    $$\begin{aligned} {\mathscr {V}}(x)\le & {} \int _0^{\tau _n} \ell (y_{n} (s), \alpha _n (s) ) e^{-\lambda s} ds + c_i(x) + {\mathscr {V}}(x_n) e^{-\lambda \tau _n} \nonumber \\\le & {} M \tau _n + c_i (x) + v_i (x_n) e^{-\lambda \tau _n}. \end{aligned}$$

    Taking limsup of both sides as \( n\rightarrow \infty \), one gets \( {\mathscr {V}}(x) \le c_i (x) + v_i^\star (x) \).

\(\square \)

Lemma 4.8

Let \( x\in \Gamma \) and assume that \({\mathscr {V}}|_\Gamma (x) < \min _{i=1,\ldots ,N}\{v_{i \star } (x) +c_i(x) \}\). Then \( {\mathscr {V}}|_\Gamma \) satisfies

$$\begin{aligned} \lambda {\mathscr {V}}|_\Gamma (x) + H_\Gamma \left( x, \dfrac{\partial {\mathscr {V}}|_\Gamma }{\partial e_0} (x) \right) \ge 0, \end{aligned}$$

in the viscosity sense.

Proof

The proof of this lemma is similar to the proof of Lemma 3.9. \(\square \)

4.2 Hamilton–Jacobi system and viscosity solution

Definition 4.9

A function \( {\mathcal {U}}:= ( u_1,\ldots ,u_N, u_\Gamma ) \), where each \( u_i: {\mathcal {P}}_i \rightarrow {\mathbb {R}}\) is a bounded function and \( u_\Gamma : \Gamma \rightarrow {\mathbb {R}}\) is a bounded continuous function, is called a discontinuous viscosity solution of (3.17) if \( {\mathcal {U}}^\star = (u_1^\star , \ldots , u_N^\star , u_\Gamma ) \) is a viscosity subsolution of (3.17) and \( {\mathcal {U}}_\star = (u_{1\star }, \ldots , u_{N\star }, u_\Gamma ) \) is a viscosity supersolution of (3.17).

Theorem 4.10

Under Assumptions [A] and \([{\tilde{A}}3]\), \( V:=(v_1,\ldots ,v_N, {\mathscr {V}}|_\Gamma ) \) is a viscosity solution of (3.17).

The proof of the subsolution property follows that of Lemma 3.13, except that instead of using an arbitrary sequence tending to \( x \in \Gamma \) as \( n\rightarrow \infty \), we work with a sequence \( \{x_n\}_{n\in {\mathbb {N}}} \subset {\mathcal {P}}_i \backslash \Gamma \) which satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} x_n \rightarrow x,\\ v_i(x_n) \rightarrow v_i^\star (x), \end{array}\right. } \text {as } n\rightarrow \infty . \end{aligned}$$

The first difference between the proof of the supersolution property and that of Lemma 3.15 is the following lemma, whose proof is similar to that of Lemma 3.14.

Lemma 4.11

Let \(x\in \Gamma \) and assume that

$$\begin{aligned} v_{i \star }(x) < {\mathscr {V}}|_\Gamma (x). \end{aligned}$$
(4.6)

Then, for any sequence \( \{z_n\}_{n\in {\mathbb {N}}}\subset {\mathcal {P}}_i \backslash \Gamma \) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} z_n \rightarrow x,\\ v_i(z_n) \rightarrow v_{i \star } (x), \end{array}\right. } \text {as } n\rightarrow \infty , \end{aligned}$$
(4.7)

there exist \({\bar{\tau }}>0\) and \(r>0\) such that, for any \(\varepsilon \) sufficiently small and any \(\varepsilon \)-optimal control law \(\alpha ^{\varepsilon }_{z_n}\) for \(z_n\),

$$\begin{aligned} y_{z_n,\alpha ^\varepsilon _{z_n}} (s) \in {\mathcal {P}}_i \backslash \Gamma , \quad \text {for all } s\in [0,{\bar{\tau }}]. \end{aligned}$$

The last difference between the proof of the supersolution property and that of Lemma 3.15 is that, instead of using an arbitrary sequence tending to \( x \in \Gamma \) as \( n\rightarrow \infty \), we work with a sequence \( \{z_n\}_{n\in {\mathbb {N}}} \subset {\mathcal {P}}_i \backslash \Gamma \) satisfying (4.7).

4.3 Comparison principle and uniqueness

In the proof of the Comparison Principle under the strong controllability assumption [A3], the key argument is that the restrictions of the viscosity subsolutions of (3.17) to \({\mathcal {P}}_i \backslash \Gamma \) are Lipschitz continuous in a neighborhood of \( \Gamma \). This property is not obtained directly in the current framework, under the moderate controllability assumption \([{\tilde{A}}3]\). We rather proceed as follows. First, we regularize the restriction of a viscosity subsolution to \( {\mathcal {P}}_i \backslash \Gamma \) by sup-convolution to obtain a family of Lipschitz continuous functions. Then we use this family of regularized functions to prove a local comparison principle, which we finally extend to a global comparison principle by applying arguments similar to those in the proof of [25, Theorem 3.3.4].

We begin with the definition of the sup-convolution.

Definition 4.12

Let \( u_i : {\mathcal {P}}_i \rightarrow {\mathbb {R}}\) be a bounded, USC function and \( \alpha ,p \) be positive numbers. The sup-convolution of \( u_i \) with respect to the \( x^0 \)-variable is defined by

$$\begin{aligned} u_i^\alpha (x):= \sup _{z^0\in {\mathbb {R}}} \left\{ u_i( z^0 e_0 + x^i e_i ) - \left( \dfrac{|z^0 - x^0|^2}{\alpha ^2} + \alpha \right) ^\frac{p}{2} \right\} , \quad \text {if } x= x^0 e_0 + x^i e_i . \end{aligned}$$

We borrow the following useful lemmas from [25, Lemmas 3.3.7 to 3.3.10].

Lemma 4.13

Let \( u_i : {\mathcal {P}}_i \rightarrow {\mathbb {R}}\) be a bounded function and \( \alpha ,p \) be positive numbers. Then, for any \( x\in {\mathcal {P}}_i \), the supremum which defines \( u_i^\alpha \) is achieved at a point \( z^0\in {\mathbb {R}}\) such that

$$\begin{aligned} \left( \dfrac{|z^0 - x^0|^2}{\alpha ^2} +\alpha \right) ^\frac{p}{2} \le 2 || u_i ||_{L^\infty ({\mathcal {P}}_i)} + \alpha ^\frac{p}{2}. \end{aligned}$$
(4.8)

Lemma 4.14

Let \( u_i : {\mathcal {P}}_i \rightarrow {\mathbb {R}}\) be a bounded function. Then for any \( \alpha ,p>0 \), the sup-convolution \( u_i^\alpha \) is locally Lipschitz continuous with respect to \( x^0 \), i.e. for any compact subset K of \( {\mathbb {R}}^3 \), there exists \( C_K>0 \) such that for all \( x= x^0 e_0 + x^i e_i \), \( y= y^0 e_0 + x^i e_i \in K \cap {\mathcal {P}}_i\)

$$\begin{aligned} |u^\alpha _i (x) - u^\alpha _i (y)| \le C_K |x^0 - y^0|. \end{aligned}$$

Lemma 4.15

Under Assumptions [A] and \([{\tilde{A}}3]\), let \( R>0 \) be a positive number as in Remark 4.2. Let \( u_i : {\mathcal {P}}_i \rightarrow {\mathbb {R}}\) be a bounded, USC subsolution of (3.17) and \( \alpha ,p \) be some positive numbers. Then for all \( M>0 \), \( u_i^\alpha \) is Lipschitz continuous in \( B_M (\Gamma , R) \cap {\mathcal {P}}_i\), where \( B_M (\Gamma ,R):=\{x\in B(\Gamma ,R) : |x^0|\le M \} \).

Lemma 4.16

Under Assumptions [A] and \([{\tilde{A}}3]\), let \( {\bar{y}} \in \Gamma \) and \( R>0 \) be as in Remark 4.2. We denote by \( Q_i \) the set \( {\mathcal {P}}_i \cap B({\bar{y}}, R) \). Let \( u_i : {\mathcal {P}}_i \rightarrow {\mathbb {R}}\) be a bounded, USC viscosity subsolution of (3.17) in \( Q_i \). Then for all \( \alpha , p>0 \) sufficiently small, if we set

$$\begin{aligned} Q_i^\alpha = \left\{ x\in Q_i : \text {dist} (x,\partial Q_i) > \alpha \left( 4^{1/p} ||u_i||^{2/p}_{L^\infty ({\mathcal {P}}_i)} - \alpha \right) ^{1/2} \right\} , \end{aligned}$$
(4.9)

the function \( u_i^\alpha \) defined in Definition 4.12 is Lipschitz continuous in \( Q_i^\alpha \). Moreover, there exists \( m: (0,\infty ) \rightarrow (0,\infty ) \) such that \( \lim _{\alpha \rightarrow 0} m(\alpha ) =0 \) and \( (u_1^\alpha - m(\alpha ), \ldots , u_N^\alpha - m(\alpha ) , u_\Gamma )\) is a viscosity subsolution of (3.17) in \( Q^\alpha := \cup _i Q_i^\alpha \).

Lemmas 4.13 and 4.14 on the sup-convolution are well-known results, but we give a short proof for completeness. First of all, it is easy to see that the supremum in Definition 4.12 is achieved at some point \( z^0 \in {\mathbb {R}}\). We have

$$\begin{aligned} u_i^\alpha (x) = u_i( z^0 e_0 + x^i e_i ) - \left( \dfrac{|z^0 - x^0|^2}{\alpha ^2} + \alpha \right) ^\frac{p}{2} \ge u_i( x^0 e_0 + x^i e_i ) - \alpha ^\frac{p}{2}, \end{aligned}$$

and because of the boundedness of \( u_i \), we get (4.8). Next, let K be a compact subset of \( {\mathcal {P}}_i \). For \( x,y\in K \), from the definition of the sup-convolution, we have

$$\begin{aligned} u_i^\alpha (x) - u_i^\alpha (y) \le \sup _{z^0\in {\mathbb {R}}} \left\{ - \left( \dfrac{|z^0 - x^0|^2}{\alpha ^2} + \alpha \right) ^\frac{p}{2} + \left( \dfrac{|z^0 - y^0|^2}{\alpha ^2} + \alpha \right) ^\frac{p}{2} \right\} . \end{aligned}$$

By the mean-value theorem and the fact that \( |z^0 - x^0|/\alpha ^2 \), \( |z^0 - y^0|/\alpha ^2\) are bounded, there exists a constant \( C_K>0 \) such that \( |u^\alpha _i (x) - u^\alpha _i (y)| \le C_K |x^0 - y^0| \). The proof of Lemma 4.14 is complete.

Finally, if \( u_i \) is a viscosity subsolution of (3.17), then it is also a viscosity subsolution of the following equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \lambda u_{i} (x) +H_{i}\left( x, \partial u_i (x) \right) \le 0 &{} \text {if } x\in {\mathcal {P}}_{i}\backslash \Gamma ,\\ \lambda u_{i} (x)+ H_{i}^{+}\left( x,\partial u_i (x) \right) \le 0 &{} \text {if } x\in \Gamma . \end{array}\right. } \end{aligned}$$
(4.10)

We may now apply the proofs of [25, Lemmas 3.3.9 and 3.3.10] to conclude Lemmas 4.15 and 4.16.

We are ready to prove a local comparison principle.

Theorem 4.17

Under Assumptions [A] and \([{\tilde{A}}3]\), let \( {\mathcal {U}}=(u_1,\ldots ,u_N, u_\Gamma )\) be a bounded, USC viscosity subsolution of (3.17) and \( {\mathcal {W}}= (w_1,\ldots ,w_N,w_\Gamma )\) be a bounded, LSC viscosity supersolution of (3.17). Let \( R>0 \) be as in Remark 4.2, \( i\in \{1, \ldots ,N \} \) and \( {\bar{y}} \in \Gamma \) be fixed. Then, if we set \( Q_i = B({\bar{y}}, R) \cap {\mathcal {P}}_i\), we have

$$\begin{aligned} || (u_i- w_i)_+||_{L^\infty (Q_i)}&\le || (u_i- w_i)_+||_{L^\infty (\partial Q_i)}, \end{aligned}$$
(4.11)
$$\begin{aligned} || (u_\Gamma - w_\Gamma )_+||_{L^\infty (Q_i\cap \Gamma )}&\le || (u_\Gamma - w_\Gamma )_+||_{L^\infty (\partial (Q_i\cap \Gamma ))}. \end{aligned}$$
(4.12)

Proof

Take \( \alpha ,p>0 \) sufficiently small so that Lemma 4.16 can be applied. From Lemma 4.16, we know that \( u_i^\alpha \) is Lipschitz continuous in \( Q^\alpha \) with a Lipschitz constant \( L_\alpha \) and that there exists \( m : (0,\infty ) \rightarrow (0,\infty ) \) such that \( \lim _{\alpha \rightarrow 0} m (\alpha )=0 \) and \((u_1^\alpha - m,\ldots ,u_N^\alpha - m, u_\Gamma ) \) is a viscosity subsolution of (3.17). Set \( {\mathfrak {u}}_j:= u_j^\alpha - m \) for all \( j\in \{1,\ldots ,N\} \). Let us prove that

$$\begin{aligned} || ({\mathfrak {u}}_i - w_i)_+||_{L^\infty (Q_i^\alpha )}&\le || ({\mathfrak {u}}_i - w_i)_+||_{L^\infty (\partial Q_i^\alpha )},\quad \text {for all } i\in \{1,\ldots ,N\}, \end{aligned}$$
(4.13)
$$\begin{aligned} || (u_\Gamma - w_\Gamma )_+||_{L^\infty (Q^\alpha \cap \Gamma )}&\le || (u_\Gamma - w_\Gamma )_+||_{L^\infty (\partial (Q^\alpha \cap \Gamma ))}. \end{aligned}$$
(4.14)

We argue by contradiction, considering the following two cases.

Case A:

Suppose there exist \( i\in \{1,\ldots ,N\} \) and \( {\bar{x}}_i \in Q_i^\alpha \backslash \partial Q_i^\alpha \) such that

$$\begin{aligned} M_i:= \max _{Q_i^\alpha } \{{\mathfrak {u}}_i - w_i\} = {\mathfrak {u}}_i ({\bar{x}}_i)- w_i({\bar{x}}_i) > \max \{M_\Gamma , 0\}, \end{aligned}$$

where \( M_\Gamma := \max _{Q_i^\alpha \cap \Gamma } \{u_\Gamma - w_\Gamma \} \). We can then apply arguments similar to those of Case A in the proof of Theorem 3.16 to obtain that either \( M_i \le 0 \) or \( M_i \le M_\Gamma \); in both cases, this contradicts the assumption.

Case B:

Suppose there exists \( {\bar{x}}_\Gamma \in (Q_i^\alpha \backslash \partial Q_i^\alpha ) \cap \Gamma \) such that

$$\begin{aligned} M_\Gamma = \max _{Q_i^\alpha \cap \Gamma } \{u_\Gamma - w_\Gamma \} = u_\Gamma ({\bar{x}}_\Gamma )- w_\Gamma ({\bar{x}}_\Gamma ) \ge \max _{j\in \{1,\ldots ,N\}} M_j \text { and } M_\Gamma > 0. \end{aligned}$$

Applying arguments similar to those of Case B in the proof of Theorem 3.16, we obtain that either \( M_\Gamma \le 0 \) (which leads to a contradiction) or there exists \( k\in \{1,\ldots ,N\} \) such that

$$\begin{aligned} w_\Gamma ({\bar{x}}_\Gamma ) \ge \min _{j} \{w_j({\bar{x}}_\Gamma ) +c_j ({\bar{x}}_\Gamma )\} = w_k({\bar{x}}_\Gamma ) +c_k ({\bar{x}}_\Gamma ), \end{aligned}$$
(4.15)

which leads to \( M_k = {\mathfrak {u}}_k ({\bar{x}}_\Gamma ) - w_k ({\bar{x}}_\Gamma ) = M_\Gamma >0 \). We now repeat Case A; thanks to (4.15), the case \( w_k ({\bar{x}}_\Gamma ) \ge {\mathfrak {u}}_k ({\bar{x}}_\Gamma ) \) does not occur, and we finally obtain \( M_k \le 0 \), which contradicts \( M_k = M_\Gamma > 0 \).

This finishes the proof of (4.13) and (4.14).

In order to prove (4.11) and (4.12), we pass to the limit as \( \alpha \) tends to 0 in (4.13) and (4.14), respectively. Let \( \alpha _0>0 \) and \( y \in Q_i^{\alpha _0} \) be fixed. For all \( 0<\alpha \le \alpha _0 \), one has

$$\begin{aligned} (u_i^\alpha (y) -m(\alpha ) - w_i (y))_+ \le || (u_i^\alpha - m(\alpha ) - w_i)_+||_{L^\infty (\partial Q_i^\alpha )}. \end{aligned}$$
(4.16)

We claim that \( \limsup _{\alpha \rightarrow 0} || (u_i^\alpha - m(\alpha ) - w_i)_+||_{L^\infty (\partial Q_i^\alpha )} \le || (u_i - w_i)_+||_{L^\infty (\partial Q_i)}\). Indeed, there exists \( x_\alpha \in \partial Q_i^\alpha \) such that

$$\begin{aligned} (u_i^\alpha (x_\alpha ) -m(\alpha ) - w_i (x_\alpha ))_+ = || (u_i^\alpha - m(\alpha ) - w_i)_+||_{L^\infty (\partial Q_i^\alpha )}. \end{aligned}$$

Thus, for any subsequence such that \( || (u_i^\alpha - m(\alpha ) - w_i)_+||_{L^\infty (\partial Q_i^\alpha )} \) converges to a limit \( \ell \) as \( \alpha \rightarrow 0 \), we can assume that \( x_\alpha \rightarrow {\tilde{x}} \in \partial Q_i \) as \( \alpha \rightarrow 0 \). Hence, from the properties of the sup-convolution and the facts that \( u_i \) is USC and \( w_i \) is LSC, we can check that \( \ell \le ( u_i ({\tilde{x}}) - w_i ({\tilde{x}}) )_+ \le || (u_i- w_i)_+||_{L^\infty (\partial Q_i)}. \) The claim is proved; therefore, by the pointwise convergence of \( u_i^\alpha \) to \( u_i \), passing to the \(\limsup \) as \( \alpha \rightarrow 0 \) in (4.16), we have

$$\begin{aligned} ( u_i (y) - w_i (y) )_+ \le || (u_i- w_i)_+||_{L^\infty (\partial Q_i)}, \quad \text {for all } y\in Q_i^{\alpha _0}. \end{aligned}$$

The above inequality holds for any \( y\in Q_i^{\alpha _0} \) with \( \alpha _0 \) arbitrarily chosen; hence we obtain (4.11). Finally, (4.12) is obtained by similar arguments, and the proof is complete. \(\square \)

Thanks to the local comparison principle and applying arguments similar to those of [25, Theorem 3.3.4], we obtain the global comparison principle, which yields the uniqueness of the solution of (3.17).

Theorem 4.18

Under Assumptions [A] and \( [{\tilde{A}}3] \), let \( {\mathcal {U}}=(u_1,\ldots ,u_N,u_\Gamma )\) be a bounded, USC viscosity subsolution of (3.17) and \({\mathcal {W}}= (w_1,\ldots ,w_N, w_\Gamma ) \) be a bounded, LSC viscosity supersolution of (3.17). Then \( u_i \le w_i \) on \( {\mathcal {P}}_i \) for all \( i\in \{1,\ldots ,N\} \) and \( u_\Gamma \le w_\Gamma \) on \( \Gamma \).

Corollary 4.19

The vector \( (v_1,\ldots ,v_N,{\mathscr {V}}|_\Gamma ) \) associated with the value function \( {\mathscr {V}}\) is the unique viscosity solution of (3.17), and \( v_i \) is continuous on \( {\mathcal {P}}_i \) for all \( i\in \{1,\ldots ,N\} \).

5 A more general optimal control problem

In this section, we generalize the control problem studied in Sect. 3 by allowing some of the entry cost functions to be zero. The situation is intermediate between the one studied in [25], where all the entry costs are zero, and the one studied previously in this work under the strong controllability condition [A3], where all the entry cost functions are positive. This generalization also holds under the moderate controllability condition \([{\tilde{A}}3]\), but we omit the details. All the results presented below are obtained by combining the arguments proposed above with those used in [25]; hence, we present them without proofs.

Specifically, we consider the optimal control problem with non-negative entry costs \( c_i \), where \( c_i \equiv 0 \) if \( i\le m \) and \( c_i(x)>0 \) for all \( x\in \Gamma \) if \( i>m \), keeping all the assumptions and definitions of Sect. 2 unchanged. The value function associated with this problem is defined as in Sect. 2. Similarly to Lemma 3.3, its restriction to each \( {\mathcal {P}}_i \backslash \Gamma \) is continuous and Lipschitz continuous in the neighborhood of \( \Gamma \); hence, we can define the functions \( v_1,\ldots ,v_N \) on the half-planes as before.

Additionally, it is clear that \( v_i(x) = v_j(x) \) for all \( i,j\le m \) and all \( x\in \Gamma \); the resulting common function, which is continuous on \( \cup _{j=1}^{m} {\mathcal {P}}_{j} \), will be denoted by \( v_c \) from now on.

Combining the arguments in [25] and in Sect. 3.1 leads us to the following lemma.

Lemma 5.1

The value function satisfies

We now define a set of admissible test-functions and the Hamilton–Jacobi system that will characterize the value function.

Definition 5.2

A function \(\varphi :\left( \cup _{i=1}^{m}{\mathcal {P}}_{i}\right) \times {\mathcal {P}}_{m+1}\times \ldots \times {\mathcal {P}}_{N}\rightarrow \mathbb {R}^{N-m+1}\) of the form

$$\begin{aligned} \varphi \left( x_{c},x_{m+1},\ldots ,x_{N}\right) =\left( \varphi _{c}(x_{c}),\varphi _{m+1} (x_{m+1}),\ldots ,\varphi _{N} (x_{N})\right) \end{aligned}$$

is an admissible test-function if

  • \(\varphi _{c}\) is continuous and for \(i\le m\), \(\varphi _{c}|_{{\mathcal {P}}_{i}}\) belongs to \(C^{1}\left( {\mathcal {P}}_{i}\right) \),

  • for \(i>m\), \(\varphi _{i}\) belongs to \(C^{1}\left( {\mathcal {P}}_{i}\right) \). The space of admissible test-functions is denoted by \(R\left( {\mathcal {S}}\right) \).

We are now ready to define viscosity sub- and supersolutions of the Hamilton–Jacobi system related to this problem.

Definition 5.3

A function \( {\mathcal {U}} = (u_c, u_{m+1},\ldots ,u_N) \), where \( u_c : \cup _{j=1}^{m}{\mathcal {P}}_{j}\rightarrow {\mathbb {R}}\) and \( u_i : {\mathcal {P}}_{i} \rightarrow {\mathbb {R}}\) (\( i>m \)) are bounded and continuous, is called a viscosity subsolution of the Hamilton–Jacobi system if for any test-function \(\varphi \in R\left( {\mathcal {S}}\right) \):

  1.

    if \( u_c - \varphi _c \) has a local maximum at \(x_{c}\in \cup _{j=1}^{m}{\mathcal {P}}_{j}\) and if

    • \(x_{c}\in {\mathcal {P}}_{j}\backslash \Gamma \) for some \(j\le m\), then

    • \(x_{c} \in \Gamma \), then

  2.

    if \( u_i - \varphi _i \) has a local maximum point at \(x_{i}\in {\mathcal {P}}_{i}\) for \(i>m\), and if

    • \(x_{i}\in {\mathcal {P}}_{i}\backslash \Gamma \), then

    • \(x_{i} \in \Gamma \), then

A function \( {\mathcal {W}} = (w_c, w_{m+1},\ldots ,w_N) \), where \( w_c : \cup _{j=1}^{m}{\mathcal {P}}_{j}\rightarrow {\mathbb {R}}\) and \( w_i : {\mathcal {P}}_{i} \rightarrow {\mathbb {R}}\) (\( i>m \)) are bounded and continuous, is called a viscosity supersolution of the Hamilton–Jacobi system if for any test-function \(\varphi \in R\left( {\mathcal {S}}\right) \):

  1.

    if \( w_c - \varphi _c \) has a local minimum at \(x_{c}\in \cup _{j=1}^{m}{\mathcal {P}}_{j}\) and if

    • \(x_{c}\in {\mathcal {P}}_{j}\backslash \Gamma \) for some \(j\le m\), then

    • \(x_{c} \in \Gamma \), then

  2.

    if \( w_i - \varphi _i \) has a local minimum point at \(x_{i}\in {\mathcal {P}}_{i}\) for \(i>m\), and if

    • \(x_{i}\in {\mathcal {P}}_{i}\backslash \Gamma \), then

    • \(x_{i} \in \Gamma \), then

A function \( (u_c,u_{m+1},\ldots ,u_N) \), where \( u_c \) is continuous on \( \cup _{j=1}^{m}{\mathcal {P}}_{j} \) and \( u_i \) is continuous on \( {\mathcal {P}}_{i} \) for all \(i>m\), is called a viscosity solution of the Hamilton–Jacobi system if it is both a viscosity subsolution and a viscosity supersolution of the Hamilton–Jacobi system.

Similarly to the previous sections, we have the following relation between the value function and the Hamilton–Jacobi system.

Theorem 5.4

Let \( (v_c, v_{m+1},\ldots ,v_N) \) be the vector associated with the value function as above; then it is a viscosity solution of the Hamilton–Jacobi system.

We now state and prove the comparison principle for the Hamilton–Jacobi system, which is the main result of this section.

Theorem 5.5

Let \( {\mathcal {U}} = (u_c,u_{m+1},\ldots ,u_N) \) and \( {\mathcal {W}} = (w_c,w_{m+1},\ldots ,w_N) \) be respectively a bounded viscosity subsolution and a bounded viscosity supersolution of the Hamilton–Jacobi system. Then \( {\mathcal {U}}\le {\mathcal {W}} \), i.e. \( u_c \le w_c \) on \( \cup _{j\le m} {\mathcal {P}}_j \) and \( u_i \le w_i \) on \( {\mathcal {P}}_i \) for all \( i>m \).

Proof

First of all, similarly to the first step of the proof of Theorem 3.16 (Sect. 3.4), we can check that there exists \( (\phi _c, \phi _{m+1}, \ldots , \phi _N) \) such that for all \( 0<\mu <1 \) with \( \mu \) close to 1, setting \( u_c^\mu = \mu u_c + (1-\mu )\phi _c \) and \( u_i^\mu = \mu u_i + (1-\mu )\phi _i \) for \( i\in \{m+1, \ldots ,N\} \), the function \( (u_c^\mu , u_{m+1}^\mu ,\ldots ,u_N^\mu ) \) is a viscosity subsolution of the Hamilton–Jacobi system. The functions \( u_c^\mu - w_c \) and \( u_j^\mu - w_j \) attain maximum values \( M_c^\mu \) and \( M^\mu _j \), reached at some points \( {\bar{x}}_c \) and \( {\bar{x}}_j \) respectively. We argue by contradiction, considering the following two cases:

Case A:

Assume that \( M_i^\mu > M_c^\mu \) and \( M_i^\mu >0 \) for some \( i>m \). Repeating Case A of the proof of Theorem 3.16, we obtain that either \( M_i^\mu \le 0 \) or \( M_i^\mu \le M_c^\mu \), the desired contradiction.

Case B:

Assume that \( M_c^\mu \ge \max _{j>m} \{M_j^\mu \} \) and \( M_c^\mu >0 \). If the Hamiltonian inequality holds at \( {\bar{x}}_c \), we can check that \( w_c \) satisfies the super-optimality principle of [25, Theorem 3.2.4] at \( {\bar{x}}_c \). Furthermore, \( u_c^\mu \) is a viscosity subsolution of the corresponding Hamilton–Jacobi equation without entry costs on \( \cup _{j\le m}{\mathcal {P}}_j \).

Thanks to the comparison principle for the HJ equation when all the entry costs are zero, see [25, Theorem 3.2.5], we obtain \( M_c^\mu \le 0 \), which leads to a contradiction. Otherwise, there exists \( k>m \) such that

$$\begin{aligned} w_c ({\bar{x}}_c) \ge \min _{j>m} \{ w_j({\bar{x}}_c) + c_j ({\bar{x}}_c) \} = w_k({\bar{x}}_c) + c_k ({\bar{x}}_c); \end{aligned}$$
(5.1)

then, repeating Case B.2 of the proof of Theorem 3.16, we obtain \( M_k^\mu = M_c^\mu = u_k^\mu ({\bar{x}}_c) - w_k ({\bar{x}}_c) > 0 \).

Repeating Case A and noting that, thanks to (5.1), the case \( w_k ({\bar{x}}_c) \ge w_c ({\bar{x}}_c) \) does not occur, we finally obtain \( M_k^\mu \le 0 \), the desired contradiction.

Finally, we get \( M_c^\mu \le 0 \) and \( M_i^\mu \le 0 \) for all \( i\in \{m+1,\ldots ,N\} \) and \( 0< \mu <1 \), \( \mu \) close to 1. We conclude by letting \( \mu \) tend to 1. \(\square \)