
A stochastic differential game approach toward animal migration

  • Original Article
  • Published in Theory in Biosciences

Abstract

A stochastic differential game model, a new population dynamics model, is formulated for animal migration between two habitats under an uncertain environment. Its novelty is the use of an impulse control formalism to naturally describe migrations with different timings and magnitudes that conventional models could not handle. The uncertainty of the environment that the population faces is formulated in the context of multiplier robust control. The optimal migration strategy, which gives the maximized minimal profit, is found through a Hamilton–Jacobi–Bellman quasi-variational inequality (HJBQVI). A key message from the HJBQVI is that its free boundary determines the optimal migration strategy. The HJBQVI is solved with a specialized stable and convergent finite difference scheme. This paper theoretically suggests that the sub-additivity of the performance index, the quantity to be optimized through the migration, critically affects the resulting strategy. The computational results obtained with the established scheme are consistent with the theoretical predictions and support the importance of the sub-additivity property. Social interaction that reduces the net mortality rate is also quantified, suggesting a linkage between the present and existing population dynamics models.


Abbreviations

A i, j :

Diagonal coefficients of the discretized HJBQVI at the (i, j)-th vertex v i, j

n :

Independent variable that represents the total population

B :

The bivariate map that sums the cost and benefit of migration

B t :

1-D standard Brownian motion

c :

A nonnegative, bounded, and smooth univariate function

C 1, C 2 :

Positive constants

C ψ :

Parameter shaping the regularity of the awareness ψ

E:

Expectation

E :

Error norm in the policy iteration algorithm

f :

Generic sufficiently smooth function

G :

A continuous function

h :

Free boundary of the solution to the HJBQVI

J :

Performance index

k 0, k 1, k 2 :

Weighting constants appearing in the migration cost B

K :

Maximum body weight

m n :

Intervals in the n-direction

m x :

Intervals in the x-direction

N t :

The total population in the habitat at time t

N max :

The maximum value of the total population Nt

p i, j :

Integer to define the optimal control η* at each vertex

s :

Time

t :

Time

\(u = \left({\tau_{1};\eta_{1},\tau_{2};\eta_{2}, \ldots} \right)\) :

Migration strategy

u * :

Optimized u

v i, j :

(i, j)-th vertex

w t :

The uncertainty of the mortality rate at time t

w * :

Optimized w

w max :

Maximum value of wt

\(W = \left\{{\left. w \right|0 \le w \le w_{\max}} \right\}\) :

Admissible range of wt

X t :

The representative body weight at time t

x :

Independent variable that represents the body weight

α, β :

Model parameters shaping the functional form of the migration cost B

γ :

Parameter shaping the regularity of the awareness ψ

δ :

Discount rate

ɛ :

Error tolerance in the policy iteration algorithm

Δn :

Cell length in the n-direction

Δx :

Cell length in the x-direction

φ :

Test functions

\(\Phi\) :

Value function

\(\hat{\Phi}\) :

Piece-wise linear interpolation of discretized \(\Phi\)

η i :

Total number of migrants of the ith migration

η * :

Optimized η

κ :

Parameter shaping the accumulated profit in the current habitat

μ :

Deterministic body growth rate

ψ :

Coefficient that represents the state-dependent awareness

ψ 0 :

Parameter shaping ψ

Ψ:

Dummy functions for defining viscosity solutions

λ :

Base mortality rate

ϑ :

Weighting constant appearing in the accumulated profit in the current habitat

ρ :

Penalty parameter

σ :

Amplitude of the stochastic growth rate

τ :

The smallest time s such that Ns = 0

τ i :

Timing of the ith migration

χ i, j :

Characteristic function to penalize the non-local term of the HJBQVI in numerical computation

\(\Omega\) :

The domain to define the HJBQVI

\(\bar{\Omega}\) :

Closure of \(\Omega\)

\(\Omega_{\text{m}}\) :

The migration sub-domain. A sub-domain of \(\Omega\)

\(\Omega_{\text{r}}\) :

The residency sub-domain. A sub-domain of \(\Omega\)

\({\mathcal{F}}_{t}\) :

Natural filtration generated by \(B_{t}\). Its subscript is sometimes dropped when there is no confusion

\({\mathcal{L}}\) :

Partial differential operator defining the HJBQVI

\({\mathcal{L}}_{w}\) :

Partial differential operator defining a linear counterpart of the HJBQVI

\({\mathcal{M}}\) :

Non-local operator defining the HJBQVI

\({\mathcal{U}}\) :

The admissible set of u

\({\mathcal{W}}\) :

The admissible set of w

\({\mathbf{\mathbb{R}}}\) :

1-D real space


Acknowledgements

This research was supported by JSPS Research Grant No. 17K15345 and a grant for an ecological survey of the life history of the landlocked ayu Plecoglossus altivelis from MLIT of Japan. The author thanks the two reviewers for their critical comments and suggestions on the biological and mathematical descriptions in this paper. The author is also grateful to a reviewer for helpful comments, suggestions, and discussions on the performance index from the viewpoint of evolutionary biology.

Author information


Corresponding author

Correspondence to Hidekazu Yoshioka.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

Proofs of several propositions in the main text are presented in this appendix. As described in the main text, we set K = 1 without loss of generality.

Proof of Proposition 3-1

Set \(x_{1},x_{2},n_{1},n_{2} \in \left[{0,1} \right]\) and fix \(u \in {\mathcal{U}}\left({n_{1}} \right) \cap {\mathcal{U}}\left({n_{2}} \right)\) and \(w \in {\mathcal{W}}\). The process \(X\) with the initial condition \(x_{i}\) is denoted as \(X^{\left(i \right)}\) (\(i = 1,2\)). Similar notation applies to \(N\). Then, we have

$$\begin{aligned} & J\left({x_{1},n_{1};u,w} \right) - J\left({x_{2},n_{2};u,w} \right) \\ & \quad = \hbox{E}\left[{\sum\limits_{i \ge 1} {e^{{- \delta \tau_{i}}} \eta_{i} \left\{{\left({X_{{\tau_{i} - 0}}^{\left(1 \right)}} \right)^{\beta} - \left({X_{{\tau_{i} - 0}}^{\left(2 \right)}} \right)^{\beta}} \right\}}} \right] \\ & \qquad + \frac{1}{2}\hbox{E}\left[{\int_{0}^{+ \infty} {e^{- \delta s} \left({\psi \left({N_{s}^{\left(1 \right)}} \right) - \psi \left({N_{s}^{\left(2 \right)}} \right)} \right)w_{s}^{2} {\text{d}}s}} \right] \\ & \quad \quad + {\text{E}}\left[{\int_{0}^{+ \infty} {e^{- \delta s} \vartheta \left({N_{s}^{\left(1 \right)} \left({X_{s}^{\left(1 \right)}} \right)^{\kappa} - N_{s}^{\left(2 \right)} \left({X_{s}^{\left(2 \right)}} \right)^{\kappa}} \right){\text{d}}s}} \right]. \\ \end{aligned}$$
(54)

The right-hand side is bounded from above by the quantity

$$\begin{aligned} & \hbox{E}\left[{\sum\limits_{i \ge 1} {e^{{- \delta \tau_{i}}} \left| {\left({X_{{\tau_{i} - 0}}^{\left(1 \right)}} \right)^{\beta} - \left({X_{{\tau_{i} - 0}}^{\left(2 \right)}} \right)^{\beta}} \right|}} \right] \\ & \quad + \frac{{w_{\max}^{2}}}{2}{\text{E}}\left[{\int_{0}^{+ \infty} {e^{- \delta s} \left| {\psi \left({N_{s}^{\left(1 \right)}} \right) - \psi \left({N_{s}^{\left(2 \right)}} \right)} \right|{\text{d}}s}} \right] \\ & \quad + \vartheta {\text{E}}\left[{\int_{0}^{+ \infty} {e^{- \delta s} \left| {N_{s}^{\left(1 \right)} \left({X_{s}^{\left(1 \right)}} \right)^{\kappa} - N_{s}^{\left(2 \right)} \left({X_{s}^{\left(2 \right)}} \right)^{\kappa}} \right|{\text{d}}s}} \right]. \\ \end{aligned}$$
(55)

Therefore, we obtain

$$\begin{aligned} & J\left({x_{1},n_{1};u,w} \right) - J\left({x_{2},n_{2};u,w} \right) \\ & \quad \le {\text{E}}\left[{\sum\limits_{i \ge 1} {e^{{- \delta \tau_{i}}} \left| {\left({X_{{\tau_{i} - 0}}^{\left(1 \right)}} \right)^{\beta} - \left({X_{{\tau_{i} - 0}}^{\left(2 \right)}} \right)^{\beta}} \right|}} \right] \\&\qquad+ \frac{{w_{\max}^{2}}}{2}{\text{E}}\left[{\int_{0}^{+ \infty} {e^{- \delta s} \left| {\psi \left({N_{s}^{\left(1 \right)}} \right) - \psi \left({N_{s}^{\left(2 \right)}} \right)} \right|{\text{d}}s}} \right] \\ & \quad \quad + \vartheta {\text{E}}\left[{\int_{0}^{+ \infty} {e^{- \delta s} \left| {N_{s}^{\left(1 \right)} \left({X_{s}^{\left(1 \right)}} \right)^{\kappa} - N_{s}^{\left(2 \right)} \left({X_{s}^{\left(2 \right)}} \right)^{\kappa}} \right|{\text{d}}s}} \right]. \\ \end{aligned}$$
(56)

The three expectations appearing in (56) must now be evaluated. First, by path-wise uniqueness, the estimate

$$\begin{aligned} \hbox{E}\left[{\left| {\psi \left({N_{t}^{\left(1 \right)}} \right) - \psi \left({N_{t}^{\left(2 \right)}} \right)} \right|} \right] & \le \hbox{E}\left[{C_{\psi} \left| {N_{t}^{\left(1 \right)} - N_{t}^{\left(2 \right)}} \right|^{\gamma}} \right] \\ & \le C_{1} \left| {n_{1} - n_{2}} \right|^{\gamma} \\ \end{aligned}$$
(57)

with a positive constant \(C_{1}\) follows, which leads to

$$\begin{aligned} \hbox{E}\left[{\int_{0}^{+ \infty} {e^{- \delta s} \left| {\psi \left({N_{s}^{\left(1 \right)}} \right) - \psi \left({N_{s}^{\left(2 \right)}} \right)} \right|{\text{d}}s}} \right] & = \int_{0}^{+ \infty} {e^{- \delta s} {\text{E}}\left[{\left| {\psi \left({N_{s}^{\left(1 \right)}} \right) - \psi \left({N_{s}^{\left(2 \right)}} \right)} \right|} \right]{\text{d}}s} \\ & \le C_{1} \int_{0}^{+ \infty} {e^{- \delta s} \left| {n_{1} - n_{2}} \right|^{\gamma} {\text{d}}s} \\ & = \frac{{C_{1}}}{\delta}\left| {n_{1} - n_{2}} \right|^{\gamma}, \\ \end{aligned}$$
(58)

where the expectation and the integral are interchanged by Fubini's theorem. Here, the coefficient \(C_{1}\) can be taken sufficiently large so that it is independent of the controls, because the intervention (4) is linear with respect to \(\eta_{i}\), as in Guo and Wu (2009).

The first expectation of (56) is evaluated as

$$\begin{aligned} {\text{E}}\left[{\sum\limits_{i \ge 1} {e^{{- \delta \tau_{i}}} \left| {\left({X_{{\tau_{i} - 0}}^{\left(1 \right)}} \right)^{\beta} - \left({X_{{\tau_{i} - 0}}^{\left(2 \right)}} \right)^{\beta}} \right|}} \right] & \le {\text{E}}\left[{\sum\limits_{i = 1}^{I} {e^{{- \delta \tau_{i}}} \left| {\left({X_{{\tau_{i} - 0}}^{\left(1 \right)}} \right)^{\beta} - \left({X_{{\tau_{i} - 0}}^{\left(2 \right)}} \right)^{\beta}} \right|}} \right] \\ & = \sum\limits_{i = 1}^{I} {{\text{E}}\left[{e^{{- \delta \tau_{i}}} \left| {\left({X_{{\tau_{i} - 0}}^{\left(1 \right)}} \right)^{\beta} - \left({X_{{\tau_{i} - 0}}^{\left(2 \right)}} \right)^{\beta}} \right|} \right]}. \\ \end{aligned}$$
(59)

By Proposition 1 combined with the argument in the proof of Lemma 3.1 of Davis et al. (2010), we have

$${\text{E}}\left[ {\left| {X_{s}^{\left( 1 \right)} - X_{s}^{\left( 2 \right)} } \right|} \right] \le \sqrt {{\text{E}}\left[ {\left( {X_{s}^{\left( 1 \right)} - X_{s}^{\left( 2 \right)} } \right)^{2} } \right]} \le e^{{\left( {\mu + \sigma^{2} /2} \right)s}} \left| {x_{1} - x_{2} } \right| ,\quad s \ge 0 .$$
(60)
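As a sanity check, the moment estimate (60) can be probed by Monte Carlo simulation. The sketch below assumes the body-weight dynamics take the logistic form \({\text{d}}X = \mu X(1 - X){\text{d}}t + \sigma X(1 - X){\text{d}}B_{t}\), which is the form implied by the Itô computation in (61); the parameter values, step sizes, and sample counts are illustrative and not taken from the paper.

```python
import math
import random

# Euler-Maruyama simulation of two copies of the (assumed) body-weight SDE
#   dX = mu*X*(1-X) dt + sigma*X*(1-X) dB,
# driven by common Brownian increments, to check the bound (60):
#   E|X^(1)_s - X^(2)_s| <= exp((mu + sigma^2/2) s) * |x1 - x2|.
random.seed(0)
mu, sigma = 0.5, 0.3          # illustrative drift and noise amplitudes
x1, x2 = 0.2, 0.6             # two initial body weights in [0, 1] (K = 1)
T, M, P = 1.0, 200, 2000      # horizon, time steps, sample paths
dt = T / M

total = 0.0
for _ in range(P):
    a, b = x1, x2
    for _ in range(M):
        dB = random.gauss(0.0, math.sqrt(dt))  # common noise for both copies
        a += mu * a * (1 - a) * dt + sigma * a * (1 - a) * dB
        b += mu * b * (1 - b) * dt + sigma * b * (1 - b) * dB
        a = min(max(a, 0.0), 1.0)  # Proposition 1: paths stay in [0, 1]
        b = min(max(b, 0.0), 1.0)
    total += abs(a - b)

lhs = total / P                                   # sample mean of |X1 - X2|
rhs = math.exp((mu + 0.5 * sigma**2) * T) * abs(x1 - x2)
print(lhs <= rhs)
```

The common-noise coupling mirrors the path-wise comparison used in the proof; with the seed fixed the run is reproducible.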

On the other hand, application of Itô’s formula to \(Y_{s}^{\left( i \right)} = \left( {X_{s}^{\left( i \right)} } \right)^{\beta }\) (\(i = 1,2\)) yields

$$\begin{aligned} {\text{d}}Y_{t}^{\left( i \right)} & = Y_{t}^{\left( i \right)} \left\{ {\beta \mu \left( {1 - X_{t}^{\left( i \right)} } \right) - \frac{{\sigma^{2} }}{2}\beta \left( {1 - \beta } \right)\left( {1 - X_{t}^{\left( i \right)} } \right)^{2} } \right\}{\text{d}}t \\ & \quad + Y_{t}^{\left( i \right)} \beta \sigma \left( {1 - X_{t}^{\left( i \right)} } \right){\text{d}}B_{t} \\ & = Y_{t}^{\left( i \right)} \left( {F_{1} \left( {X_{t}^{\left( i \right)} } \right){\text{d}}t + F_{2} \left( {X_{t}^{\left( i \right)} } \right){\text{d}}B_{t} } \right) \\ \end{aligned}$$
(61)

with

$$F_{1} \left( {X_{t}^{\left( i \right)} } \right) = \beta \mu \left( {1 - X_{t}^{\left( i \right)} } \right) - \frac{{\sigma^{2} }}{2}\beta \left( {1 - \beta } \right)\left( {1 - X_{t}^{\left( i \right)} } \right)^{2} ,\quad F_{2} \left( {X_{t}^{\left( i \right)} } \right) = \beta \sigma \left( {1 - X_{t}^{\left( i \right)} } \right).$$
(62)

Then, again as in the proof of Lemma 3.1 of Davis et al. (2010), we have

$$\begin{aligned} & {\text{E}}\left[ {\left( {Y_{s}^{\left( 1 \right)} - Y_{s}^{\left( 2 \right)} } \right)^{2} } \right] - \left( {x_{1}^{\beta } - x_{2}^{\beta } } \right)^{2} \\ & \quad \le 2{\text{E}}\left[ {\int_{0}^{s} {\left( {Y_{t}^{\left( 1 \right)} - Y_{t}^{\left( 2 \right)} } \right)\left( {Y_{t}^{\left( 1 \right)} F_{1} \left( {X_{t}^{\left( 1 \right)} } \right) - Y_{t}^{\left( 2 \right)} F_{1} \left( {X_{t}^{\left( 2 \right)} } \right)} \right){\text{d}}t} } \right] \\ & \quad + {\text{E}}\left[ {\int_{0}^{s} {\left( {Y_{t}^{\left( 1 \right)} F_{2} \left( {X_{t}^{\left( 1 \right)} } \right) - Y_{t}^{\left( 2 \right)} F_{2} \left( {X_{t}^{\left( 2 \right)} } \right)} \right)^{2} {\text{d}}t} } \right],\quad s \ge 0 \\ \end{aligned}$$
(63)

By Fubini's theorem and Proposition 1 (\(0 \le Y_{t}^{\left( i \right)} ,X_{t}^{\left( i \right)} \le 1\)), the expectations on the right-hand side of (63) are estimated from above as

$$\begin{aligned} & {\text{E}}\left[ {\int_{0}^{s} {\left( {Y_{t}^{\left( 1 \right)} - Y_{t}^{\left( 2 \right)} } \right)\left( {Y_{t}^{\left( 1 \right)} F_{1} \left( {X_{t}^{\left( 1 \right)} } \right) - Y_{t}^{\left( 2 \right)} F_{1} \left( {X_{t}^{\left( 2 \right)} } \right)} \right){\text{d}}t} } \right] \\ & \quad = {\text{E}}\left[ {\int_{0}^{s} {\left( {Y_{t}^{\left( 1 \right)} - Y_{t}^{\left( 2 \right)} } \right)\left( {\left( {Y_{t}^{\left( 1 \right)} - Y_{t}^{\left( 2 \right)} } \right)F_{1} \left( {X_{t}^{\left( 1 \right)} } \right) + Y_{t}^{\left( 2 \right)} \left( {F_{1} \left( {X_{t}^{\left( 1 \right)} } \right) - F_{1} \left( {X_{t}^{\left( 2 \right)} } \right)} \right)} \right){\text{d}}t} } \right] \\ & \quad \le \beta \mu \int_{0}^{s} {{\text{E}}\left[ {\left( {Y_{t}^{\left( 1 \right)} - Y_{t}^{\left( 2 \right)} } \right)^{2} } \right]{\text{d}}t} + \left( {\beta \mu + \sigma^{2} \beta \left( {1 - \beta } \right)} \right)\int_{0}^{s} {{\text{E}}\left[ {\left| {X_{t}^{\left( 1 \right)} - X_{t}^{\left( 2 \right)} } \right|} \right]{\text{d}}t} \\ \end{aligned}$$
(64)

and

$$\begin{aligned} & {\text{E}}\left[ {\int_{0}^{s} {\left( {Y_{t}^{\left( 1 \right)} F_{2} \left( {X_{t}^{\left( 1 \right)} } \right) - Y_{t}^{\left( 2 \right)} F_{2} \left( {X_{t}^{\left( 2 \right)} } \right)} \right)^{2} {\text{d}}t} } \right] \\ & \quad = {\text{E}}\left[ {\int_{0}^{s} {\left( {\left( {Y_{t}^{\left( 1 \right)} - Y_{t}^{\left( 2 \right)} } \right)F_{2} \left( {X_{t}^{\left( 1 \right)} } \right) - Y_{t}^{\left( 2 \right)} \left( {F_{2} \left( {X_{t}^{\left( 2 \right)} } \right) - F_{2} \left( {X_{t}^{\left( 1 \right)} } \right)} \right)} \right)^{2} {\text{d}}t} } \right] \\ & \quad \le 2{\text{E}}\left[ {\int_{0}^{s} {\left( {\left( {Y_{t}^{\left( 1 \right)} - Y_{t}^{\left( 2 \right)} } \right)^{2} \left( {F_{2} \left( {X_{t}^{\left( 1 \right)} } \right)} \right)^{2} + \left( {Y_{t}^{\left( 2 \right)} } \right)^{2} \left( {F_{2} \left( {X_{t}^{\left( 2 \right)} } \right) - F_{2} \left( {X_{t}^{\left( 1 \right)} } \right)} \right)^{2} } \right){\text{d}}t} } \right] \\ & \quad \le 2\beta^{2} \sigma^{2} \int_{0}^{s} {{\text{E}}\left[ {\left( {Y_{t}^{\left( 1 \right)} - Y_{t}^{\left( 2 \right)} } \right)^{2} } \right]{\text{d}}t} + 2\beta^{2} \sigma^{2} \int_{0}^{s} {{\text{E}}\left[ {\left( {X_{t}^{\left( 1 \right)} - X_{t}^{\left( 2 \right)} } \right)^{2} } \right]{\text{d}}t} . \\ \end{aligned}$$
(65)

By (64) and (65), (63) reduces to

$$\begin{aligned} & {\text{E}}\left[ {\left| {Y_{s}^{\left( 1 \right)} - Y_{s}^{\left( 2 \right)} } \right|^{2} } \right] - \left| {x_{1}^{\beta } - x_{2}^{\beta } } \right|^{2} \le 2\left( {\beta \mu + \beta^{2} \sigma^{2} } \right)\int_{0}^{s} {{\text{E}}\left[ {\left( {Y_{t}^{\left( 1 \right)} - Y_{t}^{\left( 2 \right)} } \right)^{2} } \right]{\text{d}}t} \\ & \quad + 2\left( {\beta \mu + \sigma^{2} \beta \left( {1 - \beta } \right)} \right)\int_{0}^{s} {{\text{E}}\left[ {\left| {X_{t}^{\left( 1 \right)} - X_{t}^{\left( 2 \right)} } \right|} \right]{\text{d}}t} \\ & \quad + 2\beta^{2} \sigma^{2} \int_{0}^{s} {{\text{E}}\left[ {\left( {X_{t}^{\left( 1 \right)} - X_{t}^{\left( 2 \right)} } \right)^{2} } \right]{\text{d}}t} ,\quad s \ge 0. \\ \end{aligned}$$
(66)

Therefore, there exists a sufficiently large constant \(C_{2} > 0\) such that

$$\begin{aligned} & {\text{E}}\left[ {\left| {\left( {X_{s}^{\left( 1 \right)} } \right)^{\beta } - \left( {X_{s}^{\left( 2 \right)} } \right)^{\beta } } \right|} \right] = {\text{E}}\left[ {\left| {Y_{s}^{\left( 1 \right)} - Y_{s}^{\left( 2 \right)} } \right|} \right] \\ & \quad \le \sqrt {{\text{E}}\left[ {\left| {Y_{s}^{\left( 1 \right)} - Y_{s}^{\left( 2 \right)} } \right|^{2} } \right]} \le e^{{\left( {\beta \mu + \beta^{2} \sigma^{2} } \right)s}} \left| {x_{1}^{\beta } - x_{2}^{\beta } } \right| \\ & \quad + C_{2} \left( {e^{{\left( {\mu /2 + \sigma^{2} /4} \right)s}} \left| {x_{2} - x_{1} } \right|^{1/2} + e^{{\left( {\mu + \sigma^{2} /2} \right)s}} \left| {x_{2} - x_{1} } \right|} \right),\quad s \ge 0. \\ \end{aligned}$$
(67)

Thus, for a sufficiently large \(\delta\) (Assumption 2), (67) together with (60) gives

$${\text{E}}\left[ {\left| {e^{ - \delta s} \left( {X_{s}^{\left( 1 \right)} } \right)^{\beta } - e^{ - \delta s} \left( {X_{s}^{\left( 2 \right)} } \right)^{\beta } } \right|} \right] \le \left| {x_{1}^{\beta } - x_{2}^{\beta } } \right| + C_{2} \left| {x_{2} - x_{1} } \right|^{1/2} + C_{2} \left| {x_{2} - x_{1} } \right|,\quad s \ge 0.$$
(68)

Substituting (68) into (59) yields

$$\begin{aligned} {\text{E}}\left[ {\sum\limits_{i \ge 1} {e^{{ - \delta \tau_{i} }} \left| {\left( {X_{{\tau_{i} - 0}}^{\left( 1 \right)} } \right)^{\beta } - \left( {X_{{\tau_{i} - 0}}^{\left( 2 \right)} } \right)^{\beta } } \right|} } \right] & \le \sum\limits_{i = 1}^{I} {{\text{E}}\left[ {e^{{ - \delta \tau_{i} }} \left| {\left( {X_{{\tau_{i} - 0}}^{\left( 1 \right)} } \right)^{\beta } - \left( {X_{{\tau_{i} - 0}}^{\left( 2 \right)} } \right)^{\beta } } \right|} \right]} \\ & = \sum\limits_{i = 1}^{I} {{\text{E}}\left[ {\left| {e^{{ - \delta \tau_{i} }} \left( {X_{{\tau_{i} - 0}}^{\left( 1 \right)} } \right)^{\beta } - e^{{ - \delta \tau_{i} }} \left( {X_{{\tau_{i} - 0}}^{\left( 2 \right)} } \right)^{\beta } } \right|} \right]} \\ & \le I\left( {\left| {x_{1}^{\beta } - x_{2}^{\beta } } \right| + C_{2} \left| {x_{2} - x_{1} } \right|^{1/2} + C_{2} \left| {x_{2} - x_{1} } \right|} \right). \\ \end{aligned}$$
(69)

For the last expectation in (56), we have

$$\begin{aligned} \left| {N_{s}^{\left( 1 \right)} \left( {X_{s}^{\left( 1 \right)} } \right)^{\kappa } - N_{s}^{\left( 2 \right)} \left( {X_{s}^{\left( 2 \right)} } \right)^{\kappa } } \right| & \le \left| {N_{s}^{\left( 1 \right)} \left( {X_{s}^{\left( 1 \right)} } \right)^{\kappa } - N_{s}^{\left( 1 \right)} \left( {X_{s}^{\left( 2 \right)} } \right)^{\kappa } } \right| \\ & \quad + \left| {N_{s}^{\left( 1 \right)} \left( {X_{s}^{\left( 2 \right)} } \right)^{\kappa } - N_{s}^{\left( 2 \right)} \left( {X_{s}^{\left( 2 \right)} } \right)^{\kappa } } \right| \\ & \le \left| {N_{s}^{\left( 1 \right)} } \right|\left| {\left( {X_{s}^{\left( 1 \right)} } \right)^{\kappa } - \left( {X_{s}^{\left( 2 \right)} } \right)^{\kappa } } \right| + \left| {\left( {X_{s}^{\left( 2 \right)} } \right)^{\kappa } } \right|\left| {N_{s}^{\left( 1 \right)} - N_{s}^{\left( 2 \right)} } \right| \\ & \le \left| {\left( {X_{s}^{\left( 1 \right)} } \right)^{\kappa } - \left( {X_{s}^{\left( 2 \right)} } \right)^{\kappa } } \right| + \left| {N_{s}^{\left( 1 \right)} - N_{s}^{\left( 2 \right)} } \right|. \\ \end{aligned}$$
(70)

By (69) and a discussion analogous to (57), (70) leads to

$${\text{E}}\left[ {\int_{0}^{ + \infty } {e^{ - \delta s} \left| {N_{s}^{\left( 1 \right)} \left( {X_{s}^{\left( 1 \right)} } \right)^{\kappa } - N_{s}^{\left( 2 \right)} \left( {X_{s}^{\left( 2 \right)} } \right)^{\kappa } } \right|{\text{d}}s} } \right] \le C_{2} \left( {\left| {x_{1}^{\kappa } - x_{2}^{\kappa } } \right| + \left| {n_{1} - n_{2} } \right|} \right)$$
(71)

with \(C_{2} > 0\) that is taken to be larger if necessary, by Assumption 2. Combining (58), (69), and (71) leads to

$$\begin{aligned} J\left( {x_{1} ,n_{1} ;u,w} \right) - J\left( {x_{2} ,n_{2} ;u,w} \right) & \le I\left( {\left| {x_{1}^{\beta } - x_{2}^{\beta } } \right| + C_{2} \left| {x_{2} - x_{1} } \right|^{1/2} + C_{2} \left| {x_{2} - x_{1} } \right|} \right) \\ & \quad + \frac{{C_{1} w_{\hbox{max} }^{2} }}{2\delta }\left| {n_{1} - n_{2} } \right|^{\gamma } \\ & \quad + \vartheta C_{2} \left( {\left| {x_{1}^{\kappa } - x_{2}^{\kappa } } \right| + \left| {n_{1} - n_{2} } \right|} \right) \\ & \equiv G\left( {x_{1} ,x_{2} ,n_{1} ,n_{2} } \right), \\ \end{aligned}$$
(72)

where \(G\) is continuous. Then, we have

$$\begin{aligned} J\left( {x_{1} ,n_{1} ;u,w} \right) & \le J\left( {x_{2} ,n_{2} ;u,w} \right) + G\left( {x_{1} ,x_{2} ,n_{1} ,n_{2} } \right) \\ & \le \Phi \left( {x_{2} ,n_{2} } \right) + G\left( {x_{1} ,x_{2} ,n_{1} ,n_{2} } \right) \\ \end{aligned}$$
(73)

and thus

$$\Phi \left( {x_{1} ,n_{1} } \right) \le \Phi \left( {x_{2} ,n_{2} } \right) + G\left( {x_{1} ,x_{2} ,n_{1} ,n_{2} } \right) .$$
(74)

Similarly, exchanging \(\left( {x_{1} ,n_{1} } \right)\) and \(\left( {x_{2} ,n_{2} } \right)\) in (73) gives

$$\Phi \left( {x_{2} ,n_{2} } \right) \le \Phi \left( {x_{1} ,n_{1} } \right) + G\left( {x_{1} ,x_{2} ,n_{1} ,n_{2} } \right).$$
(75)

Combining (74) and (75) gives

$$\left| {\Phi \left( {x_{1} ,n_{1} } \right) - \Phi \left( {x_{2} ,n_{2} } \right)} \right| \le G\left( {x_{1} ,x_{2} ,n_{1} ,n_{2} } \right).$$
(76)

The proof is thus completed.□

Proof of Proposition 3-2

Fix an admissible \(w \in {\mathcal{W}}\). First, substituting the null control \(u \equiv 0\), where \(\eta_{i} = 0\) (\(i = 0,1,2,3,\ldots\)), into \(J\) yields \(0 \le \Phi\). Second, substituting the control \(\eta_{0} = n\), \(\tau_{0} = 0\), \(\eta_{i} = 0\) (\(i = 1,2,3,\ldots\)) into \(J\) yields \(- B\left({x,n} \right) \le \Phi\), which leads to the lower bound \(\hbox{max} \left\{{0, - B\left({x,n} \right)} \right\} \le \Phi \left({x,n} \right)\). Finally, we have the estimate

$$\begin{aligned} \Phi \left({x,n} \right) & = \mathop {\sup}\limits_{{u \in {\mathcal{U}}}} \mathop {\inf}\limits_{{w \in {\mathcal{W}}}} J\left({x,n;u,w} \right) \\ & \le \mathop {\sup}\limits_{{u \in {\mathcal{U}}}} \hbox{E}\left[{\vartheta \int_{0}^{+ \infty} {e^{- \delta s} N_{s} X_{s}^{\kappa} {\text{d}}s} + \sum\limits_{i \ge 1} {e^{{- \delta \tau_{i}}} \eta_{i} X_{{\tau_{i} - 0}}^{\beta}}} \right] \\ & \le \mathop {\sup}\limits_{{u \in {\mathcal{U}}}} \hbox{E}\left[{\frac{\vartheta}{\delta}n + \sum\limits_{{i \ge 0,\tau_{i} \ge 0}} {\eta_{i}}} \right] \\ & \le \left({1 + \frac{\vartheta}{\delta}} \right)n, \\ \end{aligned}$$
(77)

which is the upper bound.□

Proof of Proposition 3-3

The statement immediately follows from the order property \({\mathcal{U}}\left({n_{1}} \right) \subset {\mathcal{U}}\left({n_{2}} \right)\) for \(0 \le n_{1} \le n_{2} \le 1\) and the increasing property of the performance index with respect to \(N\) and \(X\).□

Proof of Proposition 3-4

The proof of the proposition is the same as those of Propositions 2.3 through 2.5 of Yoshioka and Yaegashi (2018b).

Proof of Proposition 4

The case \(\alpha = 1\) is trivial, and we therefore assume \(0 < \alpha < 1\). An elementary calculation shows that the bivariate function \(F\left({\eta_{1},\eta_{2}} \right)\) for \(\eta_{1},\eta_{2} > 0\) is concave and attains its maximum value \(2^{\alpha} - 2 < 0\) at \(\left({\eta_{1},\eta_{2}} \right) = \left({1,1} \right)\). This shows that the right-hand side of inequality (28) is negative for \(\eta_{1},\eta_{2} > 0\), which completes the proof.□
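Proposition 4 rests on the concavity of the power cost for \(0 < \alpha < 1\), which implies the sub-additivity \(\left({\eta_{1} + \eta_{2}} \right)^{\alpha} \le \eta_{1}^{\alpha} + \eta_{2}^{\alpha}\) highlighted in the abstract. A minimal numerical check; the exponents and grid below are illustrative:

```python
import itertools

# Verify (eta1 + eta2)**alpha <= eta1**alpha + eta2**alpha on a grid,
# the concavity/sub-additivity property underlying Proposition 4.
grid = [i / 20 for i in range(1, 21)]   # eta in (0, 1]
for alpha in (0.25, 0.5, 0.75):         # illustrative exponents in (0, 1)
    for e1, e2 in itertools.product(grid, grid):
        assert (e1 + e2) ** alpha <= e1 ** alpha + e2 ** alpha + 1e-12
print("sub-additivity of eta**alpha verified on the grid")
```

The strict inequality away from the axes (e.g. \(2^{1/2} < 2\) at \(\eta_{1} = \eta_{2} = 1\)) is what makes splitting one migration into two strictly costlier.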

Proof of Proposition 5

We have \(F\left({1,1} \right) = 2^{\alpha} - 2 > 0\), implying that the condition (28) fails if \(\frac{{k_{0}}}{{k_{1}}}\) is sufficiently small.□

Proof of Proposition 6

For \(\left({x,\eta} \right) \in \left[{0,1} \right] \times \left({0,1} \right]\), \(B\) is evaluated from below as

$$\begin{aligned} B\left({x,\eta} \right) & = k_{0} + k_{1} \eta^{\alpha} - \eta x^{\beta} \\ & \ge k_{0} + k_{1} \eta^{\alpha} - \eta. \\ \end{aligned}$$
(78)

By an elementary calculation, we have

$$\mathop {\inf}\limits_{\eta} \left\{{k_{0} + k_{1} \eta^{\alpha} - \eta} \right\} = k_{0} + \hbox{min} \left\{{0,k_{1} - 1, - k_{1}^{{\frac{- 1}{\alpha - 1}}} \alpha^{{\frac{- \alpha}{\alpha - 1}}} \left({\alpha - 1} \right)} \right\}.$$
(79)

If \(k_{0}\) is sufficiently large, then

$$\mathop {\min}\limits_{\eta} B\left({x,\eta} \right) \ge k_{0} + \hbox{min} \left\{{0,k_{1} - 1, - k_{1}^{{\frac{- 1}{\alpha - 1}}} \alpha^{{\frac{- \alpha}{\alpha - 1}}} \left({\alpha - 1} \right)} \right\} > 0,$$
(80)

implying that the null control \(u \equiv 0\) is optimal, which completes the proof.□
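The closed-form infimum (79) can be cross-checked by brute force. The sketch below minimizes \(g(\eta) = k_{0} + k_{1} \eta^{\alpha} - \eta\) on a fine grid; taking \([0, 1]\) as the admissible range of \(\eta\) and the choice of parameter pairs are assumptions made here for illustration:

```python
# Brute-force check of the closed-form infimum (79) for
#   g(eta) = k0 + k1 * eta**alpha - eta,  eta in [0, 1] (assumed domain),
# against  k0 + min{0, k1 - 1, -k1**(-1/(alpha-1)) * alpha**(-alpha/(alpha-1)) * (alpha-1)}.
def formula_79(k0, k1, alpha):
    crit = -(k1 ** (-1.0 / (alpha - 1.0))) * (alpha ** (-alpha / (alpha - 1.0))) * (alpha - 1.0)
    return k0 + min(0.0, k1 - 1.0, crit)

N = 200000  # grid resolution
for k0, k1, alpha in [(0.3, 0.8, 0.5), (0.3, 1.0, 2.0), (0.1, 1.5, 0.75)]:
    grid_min = min(k0 + k1 * (i / N) ** alpha - (i / N) for i in range(N + 1))
    assert abs(grid_min - formula_79(k0, k1, alpha)) < 1e-6
print("grid minima agree with (79)")
```

The three branches of the minimum correspond to the endpoints \(\eta = 0\) and \(\eta = 1\) and to the interior critical point, so the formula covers both the concave (\(0 < \alpha < 1\)) and convex (\(\alpha > 1\)) cases.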

Proof of Proposition 7

Substituting \(\Phi \left({x,n} \right) = c\left(x \right)n^{z}\) with a constant \(z\) into (15) under the stated assumptions yields

$$\begin{aligned} 0 & = \left({\left({\delta + z\lambda} \right)c - z\mu x\left({1 - x} \right)c^{\prime} - \frac{{\sigma^{2}}}{2}z\left({z - 1} \right)x^{2} \left({1 - x} \right)^{2} c^{\prime \prime}} \right)n^{z} \\ & \quad - \vartheta nx^{\kappa} + \frac{1}{2}w^{*} zcn^{z} \\ \end{aligned}$$
(81)

with

$$w^{*} = \hbox{min} \left\{{w_{\max},\hbox{max} \left\{{0,\frac{cz}{{\psi_{0}}}n^{z - \gamma}} \right\}} \right\}.$$
(82)

The relationship (81) holds for small \(n\) if \(z = 1\), leading to the desired result.
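Formula (82) is simply the unconstrained worst-case optimizer truncated onto the admissible band \([0, w_{\max}]\). A minimal sketch; all parameter values below are illustrative:

```python
# Truncation (82): w* = min(w_max, max(0, c*z/psi0 * n**(z - gamma))).
def w_star(c, z, n, psi0, gamma, w_max):
    return min(w_max, max(0.0, c * z / psi0 * n ** (z - gamma)))

# Interior case: the unconstrained optimizer lies inside [0, w_max].
print(w_star(c=1.0, z=1.0, n=0.25, psi0=2.0, gamma=0.5, w_max=0.4))   # 0.25
# Clipped case: a large coefficient saturates at w_max.
print(w_star(c=10.0, z=1.0, n=0.25, psi0=2.0, gamma=0.5, w_max=0.4))  # 0.4
```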

Proof of Proposition 8

Showing that the value function \(\Phi\) satisfies the viscosity property in \(\left({0,1} \right) \times \left({0,1} \right)\) is straightforward; it can be proven essentially by following the argument in the proof of Theorem 9.8 in Øksendal and Sulem (2005), because of the regularity and boundedness assumptions on the coefficients in the SDEs of the population dynamics and on the performance index \(J\). The condition for \(nx = 0\) is also straightforward to check, since the value function \(\Phi\) equals 0 when \(nx = 0\). It therefore remains to check the condition on the boundaries \(n = 1\) and \(x = 1\), which is non-trivial, unlike the interior case. For the sake of brevity, define

$$\begin{aligned} {\mathcal{L}}_{w} f & = \delta f - \mu x\left({1 - x} \right)\frac{\partial f}{\partial x} + \lambda n\frac{\partial f}{\partial n} - \frac{{\sigma^{2}}}{2}x^{2} \left({1 - x} \right)^{2} \frac{{\partial^{2} f}}{{\partial x^{2}}} \\ & \quad + wn\frac{\partial f}{\partial n} - \frac{\psi \left(n \right)}{2}w^{2} - \vartheta nx^{\kappa} \\ \end{aligned}$$
(83)

for generic sufficiently regular \(f = f\left({x,n} \right)\) and \(w \in {\mathcal{W}}\). Then, we have

$${\mathcal{L}}f = \mathop {\sup}\limits_{{w \in {\mathcal{W}}}} {\mathcal{L}}_{w} f.$$
(84)

A key step in the proof for \(n = 1\) is that the process \(\left({X_{t},N_{t}} \right)\) is valued in \(\bar{\Omega}\) by Proposition 1 and Eqs. (3) and (4), without further constraints. This fact enables us to formally apply the existing proofs based on dynamic programming principles for controlled diffusion processes. The proof for this case proceeds in essentially the same way as that of Theorem 9.8 in Øksendal and Sulem (2005), with minor modifications specific to the present problem. Notice that we have

$$\Phi - {\mathcal{M}}\Phi \ge 0\quad {\text{in}}\;\bar{\Omega}.$$
(85)

This is proven in the same way as Lemma 9.7 of Øksendal and Sulem (2005), based on an argument by contradiction using \(\varepsilon\)-optimal controls.

The proof for \(n = 1\) is as follows; the proof for \(x = 1\) is not presented here, since it is essentially the same. The viscosity sub-solution property is checked first. Set a point \(\left({x,n} \right) = \left({x_{0},1} \right)\) with \(0 < x_{0} < 1\) and take a test function \(\varphi\) for viscosity sub-solutions that fulfills the conditions of Definition 2. We must prove the inequality

$$\hbox{min} \left\{{{\mathcal{L}}\varphi,\Phi - {\mathcal{M}}\Phi} \right\} \le 0\quad {\text{at}}\;\left({x_{0},1} \right).$$
(86)

If \(\Phi - {\mathcal{M}}\Phi = 0\) at \(\left({x_{0},1} \right)\), then (86) trivially holds. Therefore, we assume \(\Phi - {\mathcal{M}}\Phi > 0\) at this point, and it suffices to show

$${\mathcal{L}}\varphi \le 0\quad {\text{at}}\;\left({x_{0},1} \right).$$
(87)

Since \(\tau_{1}^{*}\) is a stopping time, we have either \(\tau_{1}^{*} = 0\) a.s. or \(\tau_{1}^{*} > 0\) a.s. If \(\tau_{1}^{*} = 0\) a.s., then the process \(N_{t}\) controlled by the optimizer \(u^{*}\) immediately jumps from \(n = 1\) to some \(n = \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{n}\) with \(0 \le \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{n} < 1\). Therefore,

$$\begin{aligned} \Phi \left({x_{0},1} \right) & = J\left({x_{0},1;u^{*},w^{*}} \right) \\ & = J\left({x_{0},1;\bar{u}^{*},w^{*}} \right) + e^{{- \delta \tau_{1}}} \left\{{- B\left({x_{0},\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{n}} \right)} \right\} \\ & \le \Phi \left({x_{0},\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{n}} \right) + e^{{- \delta \tau_{1}}} \left\{{- B\left({x_{0},\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{n}} \right)} \right\} \\ & \le \mathop {\sup}\limits_{{\eta \in \left[{0,1} \right]}} \left\{{\Phi \left({x_{0},\eta} \right) + e^{{- \delta \tau_{1}}} \left\{{- B\left({x_{0},\eta} \right)} \right\}} \right\} \\ & = {\mathcal{M}}\Phi \left({x_{0},1} \right), \\ \end{aligned}$$
(88)

where \(\bar{u}^{*} = \left({\tau_{2}^{*};\eta_{2}^{*},\tau_{3}^{*};\eta_{3}^{*}, \ldots} \right)\). This is a contradiction, since \(\Phi - {\mathcal{M}}\Phi > 0\) at \(\left({x_{0},1} \right)\); hence \(\tau_{1}^{*} = 0\) a.s. is impossible. If \(\tau_{1}^{*} > 0\) a.s., then choose \(R_{0} < + \infty\) and a sufficiently small \(\rho_{0} > 0\), and define

$$\bar{\tau} \equiv \hbox{min} \left\{{\tau_{1}^{*},\hbox{min} \left\{{R_{0},\inf \left\{{t > 0:\left| {X_{t} - x_{0}} \right| + \left| {N_{t} - 1} \right| \ge \rho_{0}} \right\}} \right\}} \right\}.$$
(89)

By definition, we have \({\text{E}}\left[{\bar{\tau}} \right] > 0\). By the dynamic programming principle, for any \(\varepsilon_{0} > 0\), there exists an \(\varepsilon_{0}\)-optimal control \(u_{0} \in {\mathcal{U}}\) such that

$$\begin{aligned} & \Phi \left({x_{0},1} \right) \\ & \quad = \mathop {\sup}\limits_{{u \in {\mathcal{U}}}} \mathop {\inf}\limits_{{w \in {\mathcal{W}}}} \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {e^{- \delta s} \left({\vartheta N_{s} X_{s}^{\kappa} + \frac{{\psi \left({N_{s}} \right)}}{2}\left({w_{s}} \right)^{2}} \right){\text{d}}s} + e^{{- \delta \bar{\tau}}} \Phi \left({X_{{\bar{\tau}}},N_{{\bar{\tau}}}} \right)} \right] \\ & \quad \le \mathop {\inf}\limits_{{w \in {\mathcal{W}}}} \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {e^{- \delta s} \left({\vartheta N_{s} X_{s}^{\kappa} + \frac{{\psi \left({N_{s}} \right)}}{2}\left({w_{s}} \right)^{2}} \right){\text{d}}s} + e^{{- \delta \bar{\tau}}} \Phi \left({X_{{\bar{\tau}}},N_{{\bar{\tau}}}} \right)} \right] + \varepsilon_{0} \hbox{E}\left[{\bar{\tau}} \right], \\ \end{aligned}$$
(90)

where \(u = u_{0}\) in the second line of (90). Since \(\varphi \ge \Phi\), combining (90) and the Dynkin formula (Theorem 1.24 of Øksendal and Sulem 2005) for the process \(e^{- \delta s} \varphi \left({X_{s},N_{s}} \right)\) gives

$$\begin{aligned} & \Phi \left({x_{0},1} \right) \\ & \quad \le \mathop {\inf}\limits_{{w \in {\mathcal{W}}}} \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {e^{- \delta s} \left({\vartheta N_{s} X_{s}^{\kappa} + \frac{{\psi \left({N_{s}} \right)}}{2}\left({w_{s}} \right)^{2}} \right){\text{d}}s} + e^{{- \delta \bar{\tau}}} \varphi \left({X_{{\bar{\tau}}},N_{{\bar{\tau}}}} \right)} \right] + \varepsilon_{0} \hbox{E}\left[{\bar{\tau}} \right] \\ & \quad = \varphi \left({x_{0},1} \right) + \mathop {\inf}\limits_{{w \in {\mathcal{W}}}} \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {\left({- {\mathcal{L}}_{{w_{s}}} \varphi \left({X_{s},N_{s}} \right)} \right)e^{- \delta s} {\text{d}}s}} \right] + \varepsilon_{0} \hbox{E}\left[{\bar{\tau}} \right]. \\ \end{aligned}$$
(91)

Using the assumption \(\Phi \left({x_{0},1} \right) = \varphi \left({x_{0},1} \right)\) leads to

$$\begin{aligned} - \varepsilon_{0} \hbox{E}\left[{\bar{\tau}} \right] & \le \mathop {\inf}\limits_{{w \in {\mathcal{W}}}} \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {\left({- {\mathcal{L}}_{{w_{s}}} \varphi \left({X_{s},N_{s}} \right)} \right)e^{- \delta s} {\text{d}}s}} \right] \\ & = - \mathop {\sup}\limits_{{w \in {\mathcal{W}}}} \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {{\mathcal{L}}_{{w_{s}}} \varphi \left({X_{s},N_{s}} \right)e^{- \delta s} {\text{d}}s}} \right] \\ & = - \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {{\mathcal{L}}_{{w_{s}^{*}}} \varphi \left({X_{s},N_{s}} \right)e^{- \delta s} {\text{d}}s}} \right] \\ \end{aligned}$$
(92)

and thus

$$- \varepsilon_{0} \le - \frac{1}{{E\left[{\bar{\tau}} \right]}}E\left[{\int_{0}^{{\bar{\tau}}} {e^{- \delta s} {\mathcal{L}}_{{w_{s}^{*}}} \varphi \left({X_{s},N_{s}} \right){\text{d}}s}} \right].$$
(93)

Since \(\varepsilon_{0}\) can be taken arbitrarily small, letting \(\rho_{0} \to + 0\) in (93) gives

$${\mathcal{L}}\varphi \left({x_{0},1} \right) \le 0,$$
(94)

which is the desired result. Indeed, if \({\mathcal{L}}\varphi \left({x_{0},1} \right) > 0\), then the inequality \({\mathcal{L}}\varphi > 0\) holds in a neighborhood of the point \(\left({x_{0},1} \right)\) (this neighborhood is contained in \(\Omega\)), and for sufficiently small \(\varepsilon_{0}\) and \(\rho_{0}\) the inequality (93) is then violated, which is a contradiction.

Second, the viscosity super-solution property is checked. Take a test function \(\varphi\) for viscosity super-solutions that fulfills the conditions of Definition 2. We have to show the inequality

$$\hbox{min} \left\{{{\mathcal{L}}\varphi,\Phi - {\mathcal{M}}\Phi} \right\} \ge 0\quad {\text{at}}\;\left({x_{0},1} \right).$$
(95)

Since we always have \(\Phi - {\mathcal{M}}\Phi \ge 0\), it is sufficient to show

$${\mathcal{L}}\varphi \ge 0\quad {\text{at}}\;\left({x_{0},1} \right).$$
(96)

Take the null control \(u \in {\mathcal{U}}\) with \(\tau_{1} \to + \infty\), and set \(\bar{\tau} = \hbox{min} \left\{{\tau,\rho_{0}} \right\}\) for a constant \(\rho_{0} > 0\). We have \({\text{E}}\left[{\bar{\tau}} \right] > 0\) by \(0 < x_{0} < 1\) and \(n = 1\). Since \(\varphi \le \Phi\), combining the dynamic programming principle and the Dynkin formula gives

$$\begin{aligned} \Phi \left({x_{0},1} \right) & \ge \mathop {\inf}\limits_{{w \in {\mathcal{W}}}} \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {e^{- \delta s} \left({\vartheta N_{s} X_{s}^{\kappa} + \frac{{\psi \left({N_{s}} \right)}}{2}\left({w_{s}} \right)^{2}} \right){\text{d}}s} + e^{{- \delta \bar{\tau}}} \Phi \left({X_{{\bar{\tau}}},N_{{\bar{\tau}}}} \right)} \right] \\ & \ge \mathop {\inf}\limits_{{w \in {\mathcal{W}}}} \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {e^{- \delta s} \left({\vartheta N_{s} X_{s}^{\kappa} + \frac{{\psi \left({N_{s}} \right)}}{2}\left({w_{s}} \right)^{2}} \right){\text{d}}s} + e^{{- \delta \bar{\tau}}} \varphi \left({X_{{\bar{\tau}}},N_{{\bar{\tau}}}} \right)} \right] \\ & = \varphi \left({x_{0},1} \right) + \mathop {\inf}\limits_{{w \in {\mathcal{W}}}} \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {\left({- {\mathcal{L}}_{{w_{s}}} \varphi \left({X_{s},N_{s}} \right)} \right)e^{- \delta s} {\text{d}}s}} \right] \\ \end{aligned}$$
(97)

and thus

$$0 \ge \mathop {\inf}\limits_{{w \in {\mathcal{W}}}} \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {\left({- {\mathcal{L}}_{w} \varphi \left({X_{s},N_{s}} \right)} \right)e^{- \delta s} {\text{d}}s}} \right] = - \hbox{E}\left[{\int_{0}^{{\bar{\tau}}} {e^{- \delta s} {\mathcal{L}}_{{w_{s}^{*}}} \varphi \left({X_{s},N_{s}} \right){\text{d}}s}} \right].$$
(98)

Dividing both sides of (98) by \({\text{E}}\left[{\bar{\tau}} \right]\) and letting \(\rho_{0} \to + 0\) gives the desired result (96).□

Proof of Proposition 10

The monotonicity statement directly follows from the positive coefficient condition by Proposition 9. The consistency follows from the fact that the one-sided first-order and exponential discretizations used in the present scheme are consistent for smooth solutions, which suffices to guarantee consistency in the above-mentioned sense.□
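To illustrate the positive coefficient condition behind the monotonicity argument, the following is a generic sketch (not the paper's actual scheme): a one-sided (upwind) first-order discretization of a stationary advection–reaction term \(\delta u + f(x)u_x\). The grid, drift \(f\), and discount \(\delta\) below are hypothetical placeholders. Upwinding yields a matrix with positive diagonal and non-positive off-diagonal entries, which is exactly the positive-coefficient structure that implies monotonicity.

```python
import numpy as np

def upwind_matrix(x, f, delta):
    """Assemble the one-sided (upwind) finite-difference matrix for
    delta*u + f(x)*u_x on a uniform grid x (illustrative sketch)."""
    m = len(x)
    h = x[1] - x[0]
    A = np.zeros((m, m))
    for i in range(m):
        A[i, i] += delta
        if f(x[i]) >= 0:          # drift to the right: backward difference
            if i > 0:
                A[i, i] += f(x[i]) / h
                A[i, i - 1] -= f(x[i]) / h
        else:                     # drift to the left: forward difference
            if i < m - 1:
                A[i, i] += -f(x[i]) / h
                A[i, i + 1] -= -f(x[i]) / h
    return A

# Hypothetical grid and coefficients for demonstration only
x = np.linspace(0.0, 1.0, 33)
A = upwind_matrix(x, lambda s: s - 0.5, delta=0.1)

# Positive coefficient condition: positive diagonal, non-positive off-diagonals
off = A - np.diag(np.diag(A))
print(np.all(np.diag(A) > 0), np.all(off <= 0))
```

Choosing the difference direction according to the sign of the drift is what keeps every off-diagonal contribution non-positive regardless of the sign of \(f\); a centered difference would not have this property.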

Appendix 2

A convergence study of the present finite difference scheme is carried out to demonstrate that the computational resolution \(m_{x} = m_{n} = 256\) employed in the main text is sufficiently fine for the investigations in this paper. Table 2 summarizes the differences between the reference solution with \(m_{x} = m_{n} = 1024\) and numerical solutions with coarser resolutions \(m_{x} = m_{n}\). The reference solution is used to analyze convergence because the present HJBQVI is not exactly solvable. The differences between the reference and numerical solutions are measured in the \(l^{2}\) and \(l^{\infty}\) norms. Table 2 indicates that the errors between the numerical solution with \(m_{x} = m_{n} = 256\) and the reference solution are sufficiently small compared with the magnitude of \(\Phi\), which justifies the employed resolution. The results in Table 2 also imply that the convergence rate of the present finite difference scheme is almost first order, consistent with the theoretical estimate (Oberman 2006).

Table 2 \(l^{2}\) and \(l^{\infty}\) norms and their convergence rates between the numerical solutions (\(m_{x} = m_{n} = m\)) and the reference solution (\(m = 1024\))
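The rate computation behind a table of this kind can be sketched as follows; the resolutions and error values below are made-up placeholders, not the paper's data. For a grid-refinement sequence, the observed order \(p\) between consecutive resolutions is estimated from \(p = \log(e_{\text{coarse}}/e_{\text{fine}})/\log(m_{\text{fine}}/m_{\text{coarse}})\).

```python
import numpy as np

def convergence_rates(ms, errors):
    """Observed convergence orders between consecutive resolutions,
    given errors against a common fine reference solution."""
    rates = []
    for k in range(1, len(ms)):
        rates.append(np.log(errors[k - 1] / errors[k]) /
                     np.log(ms[k] / ms[k - 1]))
    return rates

# Hypothetical l2 errors against a reference solution (placeholder values)
ms = [32, 64, 128, 256]              # grid resolutions m_x = m_n = m
errs = [8.0e-3, 4.1e-3, 2.0e-3, 1.1e-3]

rates = convergence_rates(ms, errs)
print([round(r, 2) for r in rates])  # roughly first order
```

A sequence of rates clustering near 1 is what "almost first order" means in the text; halving the grid spacing roughly halves the error.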

Next, the computed \(\Phi\) is examined against the theoretical upper and lower bounds. Preliminary investigations showed that the lower bound is far sharper than the upper bound and that the computed \(\Phi\) clearly satisfies the upper bound; therefore, only the lower bound is examined here. Figure 14 shows the difference between the computed \(\Phi\) and the lower bound, namely \(\Delta \Phi = \Phi_{\text{computed}} - \Phi_{\text{lower bound}}\) with \(m_{x} = m_{n} = 256\), which theoretically should be nonnegative. Figure 14 shows that the violation of the lower bound is very small and that the bound is sharp, in particular for small \(x\) or large \(n\). Table 3 shows the minimum value of \(\Delta \Phi\) over the computational domain. The computational results presented in Fig. 14 and Table 3 demonstrate that the present numerical scheme does not exactly satisfy the lower bound, but the violation is of order \(O\left({\rho^{- 1}} \right)\) and can thus be made sufficiently small. For example, \(\rho = O\left(m \right)\) is a reasonable choice from the viewpoint of computational accuracy of the proposed discretization method.

Fig. 14

The difference \(\Delta \Phi\) between the computed \(\Phi\) and the lower bound with \(\rho = 10^{8}\) and \(m_{x} = m_{n} = 256\)

Table 3 The minimum value of \(\Delta \Phi\) with respect to \(\rho\) over the computational domain
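The lower-bound check itself reduces to a pointwise minimum over the computational grid, as in the following sketch. The grid size, the value arrays, and the penalization parameter \(\rho\) here are fabricated placeholders, constructed only so that the computed solution sits above the bound up to an \(O(\rho^{-1})\) defect, mimicking the behavior reported in Table 3.

```python
import numpy as np

def worst_violation(phi_computed, phi_lower):
    """Minimum of dPhi = Phi_computed - Phi_lower over the grid;
    a negative value measures the worst violation of the lower bound."""
    return (phi_computed - phi_lower).min()

rng = np.random.default_rng(0)
rho = 1.0e8                                   # hypothetical penalization parameter

# Hypothetical grid data: lower bound plus a small positive slack,
# degraded by an O(1/rho) penalization defect
phi_lower = rng.random((65, 65))
phi_computed = phi_lower + 1.0e-3 * rng.random((65, 65)) - 1.0 / rho

v = worst_violation(phi_computed, phi_lower)
print(v)
```

In this construction the worst violation is bounded below by a constant times \(\rho^{-1}\), so increasing \(\rho\) drives it toward zero, which is the qualitative behavior the text describes.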


About this article


Cite this article

Yoshioka, H. A stochastic differential game approach toward animal migration. Theory Biosci. 138, 277–303 (2019). https://doi.org/10.1007/s12064-019-00292-4
