1 Introduction

The rise of electric vehicles (EVs) seems unstoppable, owing to factors such as the decreasing cost of batteries and various policy decisions [25]. Currently, the bottlenecks are the ability to charge a battery at a fast rate and the number of charging stations, but these bottlenecks are expected to move toward the current grid infrastructure. This is illustrated in [24], where the authors evaluate the impact of the energy transition on a real distribution grid in a field study, based on a scenario for the year 2025. They confront a local low-voltage grid with electric vehicles and ovens and show that charging a small number of EVs is enough to burn a fuse. Additional evidence of congestion is reported in [12]. This paper proposes to model and analyze such congestion using so-called layered queueing networks. Layered networks are queueing networks in which some entities in the system have a dual role; for example, servers (in our context: parking spaces with EV chargers) become customers to a higher layer (here: the power grid). Layered queueing networks allow us to analyze the interaction of two sources of congestion: first, the limited number of available spaces with charging stations (as not all cars find a space), and second, the limited amount of power that the grid is able to feed to the charging station [24].

We consider a charging station (or parking lot) with finitely many parking spaces. Each space has an EV charger connected to the power grid. EVs arrive at the charging station randomly in order to get charged. If an EV finds an available space, it enters the parking lot and charging starts immediately. Each EV has a random parking time and a random charging time, and it leaves the parking lot only when its parking time expires; i.e., if it finishes charging before its parking time expires, it remains at its space without consuming power until the parking time ends. Both the charging rate per vehicle and the total charging rate of the station are assumed to be limited. Thus, the charging rate of uncharged EVs depends on the number of cars charging simultaneously. Finally, we assume that the available power is shared equally among all cars that need charging, and that the power the grid can deliver is constant.

Using queueing terminology, our model can be described as a two-layered queueing network. An EV enters the charging station and connects its battery to an EV charger. In our context, EVs play the role of customers, while EV chargers are the servers. Thus, the system of EVs and EV chargers can be viewed as the first layer. Moreover, EV chargers are connected to the power grid. Thus, at the second layer, active EV chargers act as jobs that are served simultaneously by the power grid, which plays the role of a single server.

This paper focuses on the performance analysis of this system under Markovian assumptions. Specifically, we are interested in the fraction of fully charged EVs in the charging station, which equals the probability that an EV leaves the charging station with a fully charged battery. A mostly heuristic description of some partial results in this paper has appeared in [2]. We start with the steady-state analysis of the original system, for which we find explicit bounds for the fraction of fully charged EVs. To do so, we study three special cases of the original system: (i) There is enough power for all EVs, (ii) there are enough parking spaces for all EVs, and (iii) the parking lot is full. In these cases, we are able to find the explicit joint steady-state distribution of the total number of EVs and the number of not fully charged EVs in the charging station, which we call the vector process.

In order to improve the bounds for the fraction of fully charged EVs, we next develop a fluid approximation for the number of uncharged EVs in the parking lot. The mathematical results here are closely related to results on processor-sharing queues with impatience [22]. However, the model here is more complicated as there is a limited number of spaces in the system and fully charged cars may not leave immediately as they are still parked.

We then move to diffusion approximations, working in three asymptotic regimes. First, we consider the Halfin–Whitt regime, in which we prove a limit theorem for the vector process, showing that it converges to a two-dimensional reflected Ornstein–Uhlenbeck (OU) process with piecewise linear drift. Then, we consider an overloaded regime for the process describing the number of total EVs in the system. In this case, the limit reduces to a one-dimensional OU process with piecewise linear drift. Finally, we approximate the vector process by a two-dimensional OU process when the parking times are sufficiently large. The mathematical results here are based on martingale arguments [32].

EVs can be charged in several ways. Our setup can be seen as an example of slow charging, in which drivers typically park their EV and are not physically present during charging (but are busy shopping, working, sleeping, etc.). For queueing models focusing on fast charging, we refer to [5, 48]; both papers consider a gradient scheduler to control delays. Next, [47] presents a queueing model for battery swapping, while [40] is an early paper on a queueing analysis of EV charging, focusing on designing safety control rules (in terms of voltage drops) with minimal communication overhead.

Although EV charging is a relatively new topic, the engineering literature on it is already vast and still growing; we can only provide a small sample here. The focus of [39] is on a specific parking lot and presents an algorithm for optimally managing a large number of plug-in EVs. Algorithms to minimize the impact of plug-in EV charging on the distribution grid are proposed in [38]. In [31], the overall charging demand of plug-in EVs is considered. Mathematical models where vehicles communicate beforehand with the grid to convey information about their charging status are studied in [37]. In [30], EVs are the central object and a dynamic program is formulated that prescribes how EVs should charge their battery using price signals.

In addition, layered queueing networks have been successfully applied in the analysis of communication networks and manufacturing systems. These are queueing networks where some entities in the system have a dual role. In such systems, the dynamics in layers are correlated and the service speeds vary over time. Layered queueing networks can be characterized by separate layers (see [36, 45]) or simultaneous layers (such as our model) [3]. In the first case, customers receive service with some delay. An application where layered networks with separate layers appear is manufacturing systems, for example, [17, 18]. On the other hand, in layered networks with simultaneous layers, customers receive service from the different layers simultaneously. Layered networks with simultaneous layers have applications in communication networks, for example, Web-based multitiered system architectures. In such environments, different applications compete for access to shared infrastructure resources, both at the software level and at the hardware level. For background, see [42, 43].

The paper is organized as follows: In Sect. 2, we provide a detailed model description—in particular, we introduce our stochastic model and we define the system dynamics. Next, in Sect. 3, we present some explicit bounds in steady state for the fraction of fully charged EVs. Section 4 contains several asymptotic approximations. First, a fluid approximation is presented; we then derive diffusion limits and approximations in three asymptotic regimes. Numerical validations are presented in Sect. 5. Finally, all proofs are gathered in Sect. 6.

2 Model

In this section, we provide a detailed formulation of our model and explain various notational conventions that are used in the remainder of this work.

2.1 Preliminaries

We use the following notational conventions: All vectors and matrices are denoted by bold letters. Further, \({\mathbb {R}}\) is the set of real numbers, \({\mathbb {R}}_+\) is the set of nonnegative real numbers and \({\mathbb {N}}\) is the set of strictly positive integers. For real numbers x and y, we define \(x{{\,\mathrm{\vee }\,}}y:=\max \{x,y\}\) and \(x{{\,\mathrm{\wedge }\,}}y:=\min \{x,y\}\). Furthermore, \(\varvec{I}\) represents the identity matrix and \(\varvec{e}\) and \(\varvec{e}_0\) are vectors consisting of 1’s and 0’s, respectively, the dimensions of which are clear from the context. Also, \(\varvec{e}_i\) is the vector whose \(i^\text {th}\) element is 1 and the rest are all 0.

Let \((\varOmega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space. For \(T>0,\) let \({\mathcal {D}}[0,T]^2:={\mathcal {D}}[0,T]\times {\mathcal {D}}[0,T]\) be the two-dimensional Skorokhod space, i.e., the space of two-dimensional real-valued functions on [0, T] that are right continuous with left limits, endowed with the \(J_1\) topology; cf. [11]. Observe that as all candidate limit objects we consider are continuous, we only need to work with the uniform topology. It is well known that the space \(({\mathcal {D}}[0,T]^2,J_1)\) is a complete and separable metric space (i.e., a Polish metric space) [7]. We denote by \({\mathcal {B}}({\mathcal {D}}[0,T]^2)\) the Borel \(\sigma \)-algebra of \({\mathcal {D}}[0,T]^2\). We assume that all the processes are defined from \((\varOmega ,{\mathcal {F}},{\mathbb {P}})\) to \(({\mathcal {D}}[0,T]^2,{\mathcal {B}}({\mathcal {D}}[0,T]^2)).\) Further, we write \(X(\cdot ):=\{X(t), t\ge 0\}\) to represent a stochastic process and \(X(\infty )\) to represent a stochastic process in steady state. Moreover, \(\overset{d}{=}\) and \(\overset{d}{\rightarrow }\) denote equality and convergence in distribution (weak convergence). For two random variables XY, we write \(X\le _{st} Y\) (stochastic ordering) if \({\mathbb {P}}\left( X>a\right) \le {\mathbb {P}}\left( Y>a\right) \) for any \(a\in {\mathbb {R}}\). Further, \(\varPhi (\cdot )\) and \(\phi (\cdot )\) represent the cumulative distribution function (cdf) and the probability density function (pdf) of the standard normal distribution, respectively. Finally, let \(C^2_b(G)\) denote the space of twice continuously differentiable functions on G such that their first- and second-order derivatives are bounded.

2.2 Model description

We consider a charging station with \(K>0\) parking spaces. Each space has an EV charger which is connected to the power grid. EVs arrive independently at the charging station according to a Poisson process with rate \(\lambda \). They have a random charging requirement and a random parking time denoted by B and D, respectively. The random variables B and D are assumed to be mutually independent and exponentially distributed with rates \(\mu \) and \(\nu \), respectively. If an EV finishes its charging, it remains at its space without consuming power until its parking time expires. We call these EVs fully charged EVs. Thus, EVs leave the system only after their parking time expires, which implies that an EV may leave the system without its battery being fully charged. Furthermore, if all spaces are occupied, a newly arriving EV does not enter the system but leaves immediately. As such, the total number of vehicles in the system can be modeled by an Erlang loss system, though we need a more detailed description of the state space.

We denote by \(Q(t)\in \{0,1,\ldots , K\}\) the total number of EVs (charged and uncharged) in the system at time \(t\ge 0\), where Q(0) is the initial number of EVs. Further, we denote by \(Z(t)\in \{0,1,\ldots , Q(t)\}\) the number of EVs without a fully charged battery at time t and by Z(0) the number of such vehicles initially in the system. Thus, \(C(t)=Q(t)-Z(t)\) represents the number of EVs with a fully charged battery at time t.

The power consumed by the parking lot is limited and depends on the number of uncharged EVs at time t. We let it be given by the power allocation function \(L:{\mathbb {R}}_{+} \rightarrow {\mathbb {R}}_{+} \),

$$\begin{aligned} L(Z(t)):= Z(t){{\,\mathrm{\wedge }\,}}M. \end{aligned}$$

We assume that the parameter M is given and that \(0<M\le K\). For example, the parameter M can depend on the contract between the power grid and the charging station. Alternatively, M can be thought of as the maximum number of EVs the charging station can charge at a maximum rate, where without loss of generality we can assume that the maximum rate is one. The model is illustrated in Fig. 1.

Fig. 1 A charging station with K EV chargers

Finally, note that the processes \(Q(\cdot )\), \(Z(\cdot )\), and \(C(\cdot )\) depend on K and M. We write \(Q^K_M(\cdot )\), \(Z^K_M(\cdot )\), and \(C^K_M(\cdot )\), when we wish to emphasize this. It is clear from our context that the two-dimensional process \(\left\{ (Q(t),Z(t)): t\ge 0\right\} \) is Markov. The transition rates in the interior and on the boundary are shown in Fig. 2.
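As an illustration (not part of the paper), the Markov process \((Q(t),Z(t))\) can be simulated directly from the four transition rates just described. The sketch below is a minimal Gillespie-type simulator under Markovian assumptions; the function name `simulate` and all parameter names are ours.

```python
import random

def simulate(lam, mu, nu, K, M, T=20_000.0, seed=1):
    """Gillespie-type simulation of the Markov process (Q(t), Z(t)).

    Transitions (cf. Fig. 2):
      arrival (only if Q < K), rate lam:        (Q, Z) -> (Q+1, Z+1)
      charging completion, rate mu*min(Z, M):   (Q, Z) -> (Q, Z-1)
      uncharged EV departs, rate nu*Z:          (Q, Z) -> (Q-1, Z-1)
      fully charged EV departs, rate nu*(Q-Z):  (Q, Z) -> (Q-1, Z)

    Returns the time averages of Q and Z, estimating E[Q(inf)] and E[Z(inf)].
    """
    rng = random.Random(seed)
    q = z = 0
    t = area_q = area_z = 0.0
    while t < T:
        r_arr = lam if q < K else 0.0
        r_chg = mu * min(z, M)
        r_dep_z = nu * z
        r_dep_c = nu * (q - z)
        total = r_arr + r_chg + r_dep_z + r_dep_c  # > 0 whenever lam > 0
        dt = rng.expovariate(total)
        area_q += q * dt
        area_z += z * dt
        t += dt
        u = rng.random() * total
        if u < r_arr:
            q, z = q + 1, z + 1                    # arrival
        elif u < r_arr + r_chg:
            z -= 1                                 # EV becomes fully charged
        elif u < r_arr + r_chg + r_dep_z:
            q, z = q - 1, z - 1                    # uncharged EV departs
        else:
            q -= 1                                 # fully charged EV departs
    return area_q / t, area_z / t
```

Such a simulator can serve as a sanity check against the explicit results of Sect. 3; for instance, with \(M=K\) the time average of Q should approach the Erlang loss mean.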

Fig. 2 Transition rates in the interior (left) and on the boundary (right) of the process \(\left\{ (Q(t),Z(t)): t\ge 0\right\} \)

2.2.1 Alternative model description in the case of infinitely many parking spaces

Here, we give an alternative description of our model in the case where there are infinitely many parking spaces; i.e., \(K=\infty \). In this case, the model can be described as a tandem queue with impatient customers; see Fig. 3. EVs arrive at the charging station, which has M servers, and charging starts immediately. There are two possible scenarios. In the first, an EV gets fully charged before its parking time D expires and moves to the second queue, which has an infinite number of servers; this happens with rate \(\mu (Z(t){{\,\mathrm{\wedge }\,}}M)\). In the second queue, EVs are served with rate \(\nu C(t)\). In the second scenario, an EV abandons its charging because its parking time has expired (and thus leaves the first queue impatiently); this happens with rate \(\nu Z(t)\). Note that the total “rate in” of the system is \(\lambda \), and the total “rate out” is \(\nu (Z(t)+C(t))=\nu Q(t)\). In other words, Q(t) describes the number of customers in an \(M/M/ \infty \) queue; i.e., its steady-state distribution is Poisson with mean \(\lambda /\nu \). As we will see in Proposition 3.3, the process describing the number of uncharged EVs in the system (i.e., \(Z(\cdot )\)) behaves as a modified Erlang-A queue. The transition rates are shown in Fig. 4.

Fig. 3 Model description in the case of infinitely many parking spaces

Fig. 4 Transition rates of the process \(Z(\cdot )\) (Erlang-A)

2.3 System dynamics

In this section, we introduce the dynamics that describe the evolution of the system. We avoid a rigorous sample-path construction of the stochastic processes, and we refer to [11, 32] for background.

For a constant r, let \(N_{r}(\cdot )\) be a Poisson process with rate r. The total number of EVs in the system at time \(t\ge 0\), Q(t), is given by

$$\begin{aligned} Q(t) ={}& Q(0)+N_{\lambda } \left( \int _{0}^{t} {\mathbb {1}}_{\{Q(s)<K\}} \mathrm{d}s\right) -N_{\nu ,1}\left( \int _{0}^{t}Z(s) \mathrm{d}s\right) \\ &-N_{\nu ,2}\left( \int _{0}^{t}C(s) \mathrm{d}s\right) , \end{aligned}$$
(2.1)

where \(N_{\lambda }(\cdot )\), \(N_{\nu ,1}(\cdot )\), and \(N_{\nu ,2}(\cdot )\) are independent Poisson processes. Here, the number of EVs that arrive at the charging station during the time interval [0, t] is given by the process \(N_{\lambda }\left( \int _{0}^{t} {\mathbb {1}}_{\{Q(s)<K\}} \mathrm{d}s\right) \), \(N_{\nu ,1}\left( \int _{0}^{t}Z(s) \mathrm{d}s\right) \) is the number of uncharged EVs that depart up to time t, and \(N_{\nu ,2}\left( \int _{0}^{t}C(s) \mathrm{d}s\right) \) counts the departures of fully charged EVs up to time t. Hence, \(N_{\nu ,1}\left( \int _{0}^{t}Z(s) \mathrm{d}s\right) +N_{\nu ,2}\left( \int _{0}^{t}C(s) \mathrm{d}s\right) \) is the total number of departures until time t (irrespective of whether the EVs are fully charged or not). In other words, and by the properties of the Poisson process, we have

$$\begin{aligned} N_{\nu ,1}\left( \int _{0}^{t}Z(s) \mathrm{d}s\right) +N_{\nu ,2}\left( \int _{0}^{t}C(s) \mathrm{d}s\right) \overset{d}{=} N_{\nu }\left( \int _{0}^{t}Q(s) \mathrm{d}s\right) , \end{aligned}$$
(2.2)

and hence (2.1) describes the population in a well-known Erlang loss queue [29].

Another important process is the number of uncharged EVs in the system, Z(t), which can be written in the following form:

$$\begin{aligned} Z(t) ={}& Z(0)+N_{\lambda } \left( \int _{0}^{t} {\mathbb {1}}_{\{Q(s)<K\}} \mathrm{d}s\right) -N_{\mu }\left( \int _{0}^{t} L(Z(s)) \mathrm{d}s\right) \\ &-N_{\nu ,1}\left( \int _{0}^{t} Z(s) \mathrm{d}s\right) , \end{aligned}$$
(2.3)

where \(N_{\mu }\left( \int _{0}^{t} L(Z(s)) \mathrm{d}s\right) \) is the number of EVs that get fully charged during [0, t] and is independent of the aforementioned Poisson processes.

Finally, the process which describes the number of fully charged EVs is given by

$$\begin{aligned} C(t)=Q(t)-Z(t)=C(0) +N_{\mu }\left( \int _{0}^{t} L(Z(s)) \mathrm{d}s\right) -N_{\nu ,2}\left( \int _{0}^{t} C(s) \mathrm{d}s\right) . \end{aligned}$$

Observe that in the case \(K=\infty \), (2.3) is reduced to the Erlang-A system [21, 49]. All the previous equations hold almost surely and are defined on the same probability space.

It is clear that the vector process \((Q(\cdot ), Z(\cdot ))\) constitutes a two-dimensional Markov process. In the sequel, we are interested in finding the joint stationary distribution of \((Q(\cdot ), Z(\cdot ))\) and in deriving the fraction of fully charged EVs. Although the computation of the exact joint distribution does not seem promising, we are able to obtain exact bounds for the fraction of fully charged EVs in the next section.

3 Explicit bounds

The goal of this section is to give explicit results on some performance measures. In an EV charging setting, one may be interested in finding the fraction of EVs that get fully charged. This is an important performance measure from the point of view of both the drivers and the manager of the charging station. Note that the fraction of EVs that get fully charged (in steady state) equals \(P_{s}=\frac{\nu {\mathbb {E}}\left[ C^K_M(\infty )\right] }{\nu {\mathbb {E}}\left[ Q^K_M(\infty )\right] } =\frac{{\mathbb {E}}\left[ C^K_M(\infty )\right] }{{\mathbb {E}}\left[ Q^K_M(\infty )\right] }= 1-\frac{{\mathbb {E}}\left[ Z^K_M(\infty )\right] }{{\mathbb {E}}\left[ Q^K_M(\infty )\right] }\), since in equilibrium \(\nu {\mathbb {E}}\left[ C^K_M(\infty )\right] \) is the departure rate of fully charged EVs and \(\nu {\mathbb {E}}\left[ Q^K_M(\infty )\right] \) is the departure rate of all EVs. Further, \(P_s\) gives the probability that a vehicle leaves the charging station with a fully charged battery.

For the general model (i.e., for \(K< \infty \) and \(M<\infty \)) given in Sect. 2, define the steady-state probabilities p(qz) \(:=\lim _{t\rightarrow \infty } {\mathbb {P}}\left( Q^K_M(t)=q,Z^K_M(t)=z\right) \). For simplicity, we use p(qz) instead of \(p^K_M(q,z)\). These steady-state probabilities are characterized by the following balance equations: For integers \(0\le z\le q\le K\), we have that

$$\begin{aligned}&(q\nu {\mathbb {1}}_{\{q>0\}}+\lambda {\mathbb {1}}_{\{q<K\}}+\mu L(z) {\mathbb {1}}_{\{z>0\}})p(q,z) = \lambda {\mathbb {1}}_{\{z>0\}} p(q-1,z-1)\nonumber \\&\quad +\,(z+1)\nu {\mathbb {1}}_{\{q<K\}} p(q+1,z+1) + \mu L(z+1) {\mathbb {1}}_{\{q\ne z\}}p(q,z+1)\nonumber \\&\quad +\,(q-z+1)\nu {\mathbb {1}}_{\{q<K\}}p(q+1,z). \end{aligned}$$
(3.1)
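For small instances, the balance equations can be checked numerically. The following sketch (ours, not part of the paper) computes the stationary law of \((Q,Z)\) by uniformized power iteration on the finite state space \(\{(q,z): 0\le z\le q\le K\}\), using the transition rates of Sect. 2; the function name `stationary` is a hypothetical label.

```python
def stationary(lam, mu, nu, K, M, iters=20_000):
    """Stationary law of (Q, Z) via uniformized power iteration."""
    states = [(q, z) for q in range(K + 1) for z in range(q + 1)]
    idx = {s: i for i, s in enumerate(states)}

    def rates(q, z):
        out = []
        if q < K:
            out.append(((q + 1, z + 1), lam))             # arrival
        if z > 0:
            out.append(((q, z - 1), mu * min(z, M)))      # charging completion
            out.append(((q - 1, z - 1), nu * z))          # uncharged departure
        if q > z:
            out.append(((q - 1, z), nu * (q - z)))        # fully charged departure
        return out

    C = lam + mu * M + nu * K + 1.0                       # uniformization constant
    p = [1.0 / len(states)] * len(states)
    for _ in range(iters):
        new = [0.0] * len(states)
        for i, (q, z) in enumerate(states):
            out_rate = 0.0
            for s2, r in rates(q, z):
                new[idx[s2]] += p[i] * r / C
                out_rate += r
            new[i] += p[i] * (1.0 - out_rate / C)         # self-loop mass
        p = new
    return {s: p[i] for i, s in enumerate(states)}
```

The resulting probabilities satisfy (3.1) up to numerical precision, and can be compared against the closed-form special cases derived below.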

A closed-form solution of the balance equations for any K and any M does not seem possible. However, we are able to obtain explicit solutions in some special cases. Below, we derive bounds for \(P_s\) based on three different cases: (i) There is enough power for everyone (\(M=K\)), (ii) there are enough parking spaces for everyone (\(K=\infty \)), (iii) a full parking lot (\(Q(t)\equiv K\)). In the next proposition, we give upper and lower bounds for the fraction of EVs that get fully charged.

Proposition 3.1

Let \(C^{K}_M(\infty )\) and \(Q^{K}_M(\infty )\) be the number of fully charged EVs and the total number of EVs in steady state for any K and any M. We have that

$$\begin{aligned} \frac{{\mathbb {E}}\left[ C^{\infty }_M(\infty )\right] }{{\mathbb {E}}\left[ Q^{\infty }_M(\infty )\right] } \le \frac{{\mathbb {E}}\left[ C^{K}_M(\infty )\right] }{{\mathbb {E}}\left[ Q^{K}_M(\infty )\right] } \le \frac{{\mathbb {E}}\left[ C^{K}_K(\infty )\right] }{{\mathbb {E}}\left[ Q^{K}_M(\infty )\right] }. \end{aligned}$$
(3.2)

Moreover, an additional lower bound is given by

$$\begin{aligned} \frac{{\mathbb {E}}\left[ Q^{K}_M(\infty )\right] -{\mathbb {E}}\left[ Z_{f}(\infty )\right] }{{\mathbb {E}}\left[ Q^{K}_M(\infty )\right] } \le \frac{{\mathbb {E}}\left[ C^{K}_M(\infty )\right] }{{\mathbb {E}}\left[ Q^{K}_M(\infty )\right] }, \end{aligned}$$
(3.3)

where \(Z_{f}(\cdot )\) is defined in Sect. 3.3.

The proof of this proposition makes use of coupling arguments and stochastic ordering of random variables, and it is given in Sect. 6. We now briefly present the solution of the balance equations for the three special cases described above.

3.1 Enough power for everyone

Assume that K is finite and that there is enough power for all EVs to be charged at a maximum rate, i.e., \(M=K\). In this case, the allocation function takes the form \(L(Z(t))=Z(t)\), and the balance equations can be solved explicitly; the solution is given below.

Proposition 3.2

Let \(K<\infty \) and \(M=K\); then, the solution \(p_e(\cdot ,\cdot )\) to the balance Eq. (3.1) is given by the following (Binomial) distribution:

$$\begin{aligned} p_e(q,z):=p_Q(q) \frac{q!}{z!(q-z)!} \Big (\frac{\mu }{\nu +\mu }\Big )^{q-z} \Big (\frac{\nu }{\nu +\mu }\Big )^{z}, \end{aligned}$$
(3.4)

where

$$\begin{aligned} p_Q(q):=\sum _{z=0}^{q}p_e(q,z)=\frac{1}{q!}\left( \frac{\lambda }{\nu }\right) ^qp_Q(0). \end{aligned}$$
(3.5)

Moreover, the probability of an empty system is given by

$$\begin{aligned} p_e(0,0)=p_Q(0)= \left( \sum _{i=0}^{K}\frac{1}{i!} \left( \frac{\lambda }{\nu }\right) ^i\right) ^{-1}. \end{aligned}$$
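The distribution of Proposition 3.2 is straightforward to evaluate: the marginal of Q is a truncated Poisson (Erlang loss) law and, conditionally on \(Q=q\), Z is Binomial\((q,\nu /(\nu +\mu ))\). A minimal computational sketch (ours; the function name `p_e_matrix` is a hypothetical label):

```python
from math import comb, factorial

def p_e_matrix(lam, mu, nu, K):
    """Joint steady-state law p_e(q, z) of (Q, Z) when M = K (Prop. 3.2)."""
    a = lam / nu                                    # offered load of the loss system
    norm = sum(a**i / factorial(i) for i in range(K + 1))
    p_Q = [a**q / factorial(q) / norm for q in range(K + 1)]  # Eq. (3.5)
    s = nu / (nu + mu)                              # Binomial success probability
    return {(q, z): p_Q[q] * comb(q, z) * s**z * (1 - s)**(q - z)
            for q in range(K + 1) for z in range(q + 1)}      # Eq. (3.4)
```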

3.2 Enough parking spaces for everyone

In the second case, we assume that there are infinitely many parking spaces; i.e., \(K=\infty \) and \(M<\infty \). In this case, all EVs can find a free position and the process \(Z(\cdot )\) can be modeled as a Markov process itself, whose transition rates are given in Fig. 4. We see in the next proposition that the process \(Z(\cdot )\) behaves as a modified Erlang-A model with M servers [49]. The main difference here is that EVs can leave the system even while they are in service (i.e., while getting charged).

Proposition 3.3

For \(z=0,1,\ldots \), let \(p_Z(z):=\lim _{t \rightarrow \infty }{\mathbb {P}}\left( Z^{\infty }_M(t)=z\right) \) be the stationary distribution of the Markov process \(\{Z^{\infty }_M(t), t\ge 0\}\). It is given by

$$\begin{aligned} p_Z(z)= {\left\{ \begin{array}{ll} \frac{1}{z!}\left( \frac{\lambda }{\nu +\mu }\right) ^z p_Z(0), &{} \quad \text{ if } \ z\le M, \\ \frac{1}{M!}\left( \frac{\lambda }{\nu +\mu }\right) ^M\prod \nolimits _{k=M+1}^{z} \frac{\lambda }{M\mu +k\nu } p_Z(0), &{} \quad \text{ if } \ z>M, \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} p_Z(0)=\left( \sum _{j=0}^{M} \frac{1}{j!}\left( \frac{\lambda }{\nu +\mu }\right) ^j + \sum _{j=M+1}^{\infty } \frac{1}{M!} \left( \frac{\lambda }{\nu +\mu }\right) ^M\prod _{k=M+1}^{j} \frac{\lambda }{M\mu +k\nu } \right) ^{-1}. \end{aligned}$$
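Numerically, \(p_Z\) follows from the one-step birth–death recursion \(p_Z(z)=p_Z(z-1)\,\lambda /(\mu (z\wedge M)+\nu z)\), which reproduces both branches of the formula above; the infinite sum must be truncated at some level. A short sketch (ours; `p_Z` and the truncation level `zmax` are our labels, and the truncation point is an assumption):

```python
def p_Z(lam, mu, nu, M, zmax=400):
    """Stationary law of Z when K = infinity (Prop. 3.3), truncated at zmax.

    Uses the birth-death recursion p(z) = p(z-1) * lam / (mu*min(z, M) + nu*z),
    which matches the two-branch closed form above.
    """
    w = [1.0]
    for z in range(1, zmax + 1):
        w.append(w[-1] * lam / (mu * min(z, M) + nu * z))
    total = sum(w)
    return [x / total for x in w]
```

The truncation error is negligible here because the tail ratios decay like \(\lambda /(\nu z)\), i.e., super-exponentially.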

3.3 A full parking lot

Finally, we consider the case where the parking lot is always full; i.e., the total number of EVs (uncharged and charged) is equal to the number of parking spaces. Roughly speaking, we assume that the arrival rate is infinite and that we replace (immediately) each departing EV by a newly arriving EV, which we assume to be uncharged. Hence, the total number of EVs always remains constant and equal to K. In other words, the original two-dimensional stochastic model reduces to a one-dimensional model. For this model, we find its steady-state distribution below. This result yields an upper bound for the number of uncharged EVs in the original system and hence a lower bound for the fraction of EVs that get fully charged. As we shall see later, the result in this section plays a crucial role in the study of the diffusion limit in the overloaded regime. Also, in the numerics, we see that a modification of the full parking lot case gives a very good approximation for the fraction of fully charged EVs.

Under these assumptions, all newly arriving EVs are uncharged and so it turns out that the process describing the number of uncharged EVs in the system, \(\{Z_{f}(t), t\ge 0\}\), is a birth–death process. In particular, the birth rate is \(\nu (K-Z_{f}(t))\) and the death rate is equal to \(\mu (Z_{f}(t){{\,\mathrm{\wedge }\,}}M)\). The steady-state distribution of the aforementioned birth–death process is given in the following proposition.

Proposition 3.4

The steady-state distribution of the Markov process \(\{Z_{f}(t),\)\(t\ge 0\}\) is given by

$$\begin{aligned} \pi _{f}(z)= {\left\{ \begin{array}{ll} \left( \frac{\mu }{\nu }\right) ^{M-z} \frac{\prod _{j=0}^{M-z-1} (M-j)}{\prod _{j=1}^{M-z} (K-M+j)} \pi _{f}(M), &{}\quad \text{ if } \ 0 \le z< M, \\ \frac{1}{M^{z-M}} \left( \frac{\nu }{\mu }\right) ^{z-M} {\prod }_{j=0}^{z-M-1} (K-M-j) \pi _{f}(M), &{} \quad \text{ if } \ M\le z \le K, \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} \pi _{f}(M)&=\Bigg (\sum _{l=0}^{M-1} \left( \frac{\mu }{\nu }\right) ^{M-l} \frac{\prod _{j=0}^{M-l-1} (M-j)}{\prod _{j=1}^{M-l} (K-M+j)}\\&\qquad +\sum _{l=M}^{K} \frac{1}{M^{l-M}} \left( \frac{\nu }{\mu }\right) ^{l-M} \prod _{j=0}^{l-M-1} (K-M-j) \Bigg )^{-1}. \end{aligned}$$
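Rather than transcribing the closed form, \(\pi _f\) can be computed directly from the birth–death rates stated above (birth \(\nu (K-z)\), death \(\mu (z\wedge M)\)), which also provides a cross-check of the displayed formula. A minimal sketch (ours; `pi_f` is a hypothetical name):

```python
def pi_f(mu, nu, K, M):
    """Stationary law of Z_f for the full parking lot (Prop. 3.4), computed
    from the birth-death rates: birth nu*(K - z), death mu*min(z, M)."""
    w = [1.0]
    for z in range(1, K + 1):
        # detailed balance: w[z] = w[z-1] * birth(z-1) / death(z)
        w.append(w[-1] * nu * (K - z + 1) / (mu * min(z, M)))
    total = sum(w)
    return [x / total for x in w]
```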

In Sect. 5, we validate these bounds in the three regimes: moderately, critically, and overloaded. As we will see, the bounds are not very close in general. For this reason, we move to asymptotic approximations.

4 Asymptotic approximations

In this section, we present asymptotic approximations. First we focus on the fluid approximation, and then we move to three diffusion approximations. Consider a family of systems indexed by \(n\in {\mathbb {N}},\) where n tends to infinity, with the same basic structure as that of the system described in Sect. 2. To indicate the position in the sequence of systems, a superscript n will be appended to the system parameters and processes. In the remainder of this section, we assume that \({\mathbb {E}}\left[ Q^n(0)\right] \) and \({\mathbb {E}}\left[ Z^n(0)\right] \) are finite. Finally, the proofs of the limit theorems are based on martingale arguments and are given in Sects. 6.2–6.5. We give a rigorous proof for Theorem 4.5 in Sect. 6.3, and we omit the full details for the other proofs.

4.1 Fluid approximation

Here we study a fluid model, which is a deterministic model that can be thought of as a formal law of large numbers approximation under appropriate scaling. We develop a fluid approximation for finite K, following a similar approach as in [22]. The main differences here are that the system has finitely many servers and that the state space consists of two regions: \(\{Z(t)>M\}\) and \(\{Z(t)\le M\}\).

To obtain a nontrivial fluid limit, we assume that the capacity of power in the \(n^{\text {th}}\) system is given by nM, the arrival rate by \(n\lambda \), the number of parking spaces by nK, and we do not scale the time. The fluid scaling of the process describing the number of uncharged EVs in the charging station is given by \(\frac{Z^n(\cdot )}{n}\). This scaling gives rise to the following definition of a fluid model.

Definition 4.1

(Fluid model) A continuous function \(z(t): {\mathbb {R}}_+ \rightarrow [0,K]\) is a fluid model solution if it satisfies the ODE

$$\begin{aligned} z'(t)= \lambda {{\,\mathrm{\wedge }\,}}\nu K- \nu z(t) - \mu (z(t){{\,\mathrm{\wedge }\,}}M), \end{aligned}$$
(4.1)

for \(t\in [0,t^*)\), where \(t^*=\inf \{s\ge 0: z(s)=0\}\) and \(z(t) \equiv 0\) for \(t\ge t^*\).

Note that (4.1) can be written as \(z'(t)= R(z(t))\), where \(R(z)=\lambda {{\,\mathrm{\wedge }\,}}\nu K- \nu z - \mu (z{{\,\mathrm{\wedge }\,}}M)\). Further, the operator \(R(\cdot )\) is Lipschitz continuous on \({\mathbb {R}}_+\), which guarantees that (4.1) has a unique solution. In the proof of Proposition 4.2, we shall see that if the initial state of the fluid model solution is \(z(0)\in [0,K]\), then \(z(t)\le K\) for any \(t\ge 0\). This last statement ensures that our fluid model is well defined.

Next, we see that the fluid model solution can arise as a limit of the fluid-scaled process \(\frac{Z^n(\cdot )}{n}\). The proof of the following proposition is based on martingale arguments and is given in Sect. 6.2.

Proposition 4.1

If \(\frac{Z^n(0)}{n} \overset{d}{\rightarrow }z(0)\) and \(\frac{Q^n(0)}{n} \overset{d}{\rightarrow }K\), then we have that \(\frac{Z^n(\cdot )}{n} \overset{d}{\rightarrow }z(\cdot )\), as \(n \rightarrow \infty \). Moreover, the deterministic function \(z(\cdot )\) satisfies (4.1).

Remark 4.1

We point out that we can extend the previous proposition to the case \(\frac{Q^n(0)}{n} \overset{d}{\rightarrow }q(0)\le K\). This requires a modification of Definition 4.1. In particular, we need to replace the quantity \((\lambda {{\,\mathrm{\wedge }\,}}\nu K) t\) by \(\lambda t-y(t)\), where \(y(t)=\int _{0}^{t}{\mathbb {1}}_{\{q(s)=K\}}\mathrm{d}y(s)\) and q(s) is the fluid approximation of an Erlang loss queue, given by \(q(s)=\min \{q^*+(q(0)-q^*)\hbox {e}^{-\nu s},K \}\), with \(q^*=\min \{\lambda /\nu , K\}\); see [35, Propositions 6.17 and 6.21]. Then, Proposition 4.1 holds for any \(q(0)\le K\). Note that this assumption does not affect the analysis later, as the invariant point of this extended fluid model is independent of the initial state q(0).

Moreover, the next proposition states that the fluid model solution converges to the unique invariant point as time goes to infinity.

Proposition 4.2

Let B and D be exponential random variables with rates \(\mu \) and \(\nu \). We have that, for any \(z(0)\in [0,K]\), \(z(t) \rightarrow z^*\) exponentially fast as \(t \rightarrow \infty \). In addition, \(z^*\) is given by the unique positive solution to the following fixed-point equation:

$$\begin{aligned} z^* = (\lambda {{\,\mathrm{\wedge }\,}}\nu K) {\mathbb {E}}\left[ \min \left\{ D , B \max \left\{ 1, \frac{ z^*}{M}\right\} \right\} \right] . \end{aligned}$$
(4.2)

In the proof of Proposition 4.2, we shall see that if \(z(0)=z^*\) then \(z(t)=z^*\) for any \(t\ge 0\), i.e., \(z^*\) is the unique invariant point of (4.1). The point \(z^*\) can be viewed as an approximation of the expected number of uncharged EVs in the system for the original (stochastic) model. Observing that the quantity \({\mathbb {E}}\left[ \min \left\{ D , B \max \left\{ 1, \frac{ z^*}{M}\right\} \right\} \right] \) is the expected sojourn time of an uncharged EV in the system and that the quantity \((\lambda {{\,\mathrm{\wedge }\,}}\nu K)\) plays the role of the arrival rate, (4.2) can be seen as a version of Little's law. Further, if we allow a processor-sharing discipline and infinitely many servers (i.e., \(L(\cdot ) \equiv 1\) and \(K=\infty \)), then (4.2) reduces to [22, Eq. (4.1)].
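Under the exponential assumptions, \({\mathbb {E}}[\min \{D, Bc\}]=1/(\nu +\mu /c)\) for a constant \(c\ge 1\), so (4.2) can be solved in closed form by distinguishing the cases \(z^*\le M\) and \(z^*>M\). The sketch below (ours; function names `z_star` and `fluid_ode` are hypothetical labels) computes \(z^*\) this way and confirms it by integrating the ODE (4.1) with forward Euler:

```python
def z_star(lam, mu, nu, K, M):
    """Invariant point z* of the fluid model (4.1) for exponential B and D.

    Setting the right-hand side of (4.1) to zero gives two cases, which
    agree with the fixed-point equation (4.2)."""
    inflow = min(lam, nu * K)
    z = inflow / (nu + mu)              # all uncharged EVs charge at full rate
    if z <= M:
        return z
    return (inflow - mu * M) / nu       # congested: only M EVs charge at full rate

def fluid_ode(lam, mu, nu, K, M, z0, T=50.0, dt=1e-3):
    """Forward-Euler integration of z'(t) = min(lam, nu*K) - nu*z - mu*min(z, M)."""
    z = z0
    for _ in range(int(T / dt)):
        z += dt * (min(lam, nu * K) - nu * z - mu * min(z, M))
    return z
```

Consistently with Proposition 4.2, the ODE trajectory converges to \(z^*\) exponentially fast from any starting point in [0, K].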

Remark 4.2

We shall see in the proof of Proposition 4.2 that the invariant point \(z^*\) has a simpler form than (4.2) but the latter holds much more generally. If the random variables B and D are generally distributed and possibly dependent with \({\mathbb {E}}\left[ B{{\,\mathrm{\wedge }\,}}D\right] <\infty \), then (4.2) still holds. The mathematical analysis then requires the use of measure-valued processes, which is beyond of the scope of the current work; for a heuristic approach, see [4]. Thus, we present the proofs only under Markovian assumptions.

To ensure that \(z^*\) is indeed a fluid approximation, we show that we can interchange the fluid and the steady-state limits. First, note that \(Z(\cdot )\) has a limiting distribution. To see this, observe that \(Z(\cdot )\) is bounded almost surely from above by the queue length of an Erlang-A queue with M servers and infinite buffer. Alternatively, we can bound it by the queue length of an \(M/G/\infty \) queue. Now, using the same arguments as in [34], we conclude that \(Z(\cdot )\) is a regenerative process and that there exists a stationary limit, \(Z(\infty )\). The next proposition says that the stationary scaled sequence of random variables converges to the unique invariant point \(z^*\).

Proposition 4.3

The stationary fluid-scaled sequence of random variables \(\frac{Z^n(\infty )}{n}\) is tight and \(\frac{Z^n(\infty )}{n} \overset{d}{\rightarrow }z^*\), as \(n \rightarrow \infty \).

Note that the rate at which customers are accepted in an Erlang loss queue is known and equal to \(\lambda (1-B(\lambda /\nu ,K))\), where \(B(\lambda /\nu ,K)\) is the blocking probability in a loss system with K servers and traffic intensity \(\lambda /\nu \). Furthermore, \(\lambda (1-B(\lambda /\nu ,K))\) is asymptotically exact for our fluid approximation in the sense that \(\lambda (1-B(n\lambda /\nu ,nK))\rightarrow (\lambda {{\,\mathrm{\wedge }\,}}\nu K)\), as \(n \rightarrow \infty \). To improve the fluid approximation, we replace \((\lambda {{\,\mathrm{\wedge }\,}}\nu K)\) by \(\lambda (1-B(\lambda /\nu ,K))\), leading to

$$\begin{aligned} z^* = \lambda (1-B(\lambda /\nu ,K)) {\mathbb {E}}\left[ \min \left\{ D , B \max \left\{ 1, \frac{ z^*}{M}\right\} \right\} \right] . \end{aligned}$$
(4.3)

Heuristically, we assume that an EV sees the system in stationarity throughout its sojourn and we use Little’s law and a version of the snapshot principle [33].
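For the refined approximation (4.3), the blocking probability can be evaluated with the standard Erlang-B recursion \(B(\rho ,0)=1\), \(B(\rho ,k)=\rho B(\rho ,k-1)/(k+\rho B(\rho ,k-1))\), and the fixed point solved as before. A sketch under the same exponential assumptions (function names and parameters are ours; K is taken to be an integer):

```python
def erlang_b(rho, K):
    """Blocking probability of an Erlang loss system with K servers and
    offered load rho, via the numerically stable recursion."""
    b = 1.0
    for k in range(1, K + 1):
        b = rho * b / (k + rho * b)
    return b

def solve_z_star_refined(lam, nu, mu, K, M, tol=1e-12):
    """Fixed point of (4.3): the rate (lam ∧ nu*K) of the plain fluid model
    is replaced by the accepted rate lam * (1 - B(lam/nu, K));
    B ~ Exp(mu) and D ~ Exp(nu) as before."""
    a = lam * (1.0 - erlang_b(lam / nu, K))
    z = 0.0
    for _ in range(100_000):
        z_new = a / (nu + mu / max(1.0, z / M))
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```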

Having found the fluid approximation for the number of uncharged EVs in the charging station, we derive the fluid approximation for the fraction of EVs that get successfully charged. Let \({\overline{P}}_s\) denote the probability that an EV leaves the parking lot with a fully charged battery in the fluid model. This is given by \({\overline{P}}_s={\mathbb {P}}\left( D > B \max \{ 1, z^*/M\}\right) \), where \(z^*\) is the unique solution of (4.3). Under our assumptions, an explicit expression for this probability can be found. That is,

$$\begin{aligned} {\overline{P}}_s= {\left\{ \begin{array}{ll} \frac{\mu }{\nu +\mu }, &{} z^*\le M, \\ \frac{\mu M}{\lambda (1-B(\lambda /\nu ,K))}, &{} z^*> M. \end{array}\right. } \end{aligned}$$
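For exponential B and D, \({\mathbb {P}}(D>cB)=\mu /(\mu +\nu c)\) with \(c=\max \{1,z^*/M\}\); when \(z^*>M\), the fixed-point equation (4.3) gives \(\nu z^*+\mu M=\lambda (1-B(\lambda /\nu ,K))\), which yields the second branch. The two routes can be cross-checked numerically; a self-contained sketch (names and parameters are ours, for illustration):

```python
def erlang_b(rho, K):
    # Stable Erlang-B recursion for the blocking probability.
    b = 1.0
    for k in range(1, K + 1):
        b = rho * b / (k + rho * b)
    return b

def success_probability(lam, nu, mu, K, M):
    """Solve (4.3) for z*, then return P_s computed two ways:
    directly as P(D > cB) = mu/(mu + nu*c) with c = max(1, z*/M),
    and via the closed-form case distinction displayed above."""
    a = lam * (1.0 - erlang_b(lam / nu, K))
    z = 0.0
    for _ in range(200_000):
        z_new = a / (nu + mu / max(1.0, z / M))
        if abs(z_new - z) < 1e-13:
            z = z_new
            break
        z = z_new
    c = max(1.0, z / M)
    direct = mu / (mu + nu * c)
    closed = mu / (nu + mu) if z <= M else mu * M / a
    return z, direct, closed
```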

We now focus on the fluid approximation for the number of uncharged EVs when the parking lot is full (see Sect. 3.3). Analogously to Definition 4.1, we can define a fluid model, which we call \(z_f(\cdot )\).

Proposition 4.4

Assume that the scaled parking spaces and the scaled power capacity are given by \(K^n=K n\) and \(M^n=M n\), respectively. If \(\frac{Z_{f}^n(0)}{n} \overset{d}{\rightarrow }z_f(0)\), we have that \(\frac{Z_{f}^n(\cdot )}{n} \overset{d}{\rightarrow }z_f(\cdot )\) as \(n \rightarrow \infty \), and \(z_f(t)\rightarrow z_{f}^*\) as \(t \rightarrow \infty \). Further, the limits can be interchanged and \(z_{f}^*\) is given by the following formula:

$$\begin{aligned} z_{f}^*= {\left\{ \begin{array}{ll} \frac{\nu K}{\nu +\mu }, &{} \quad \hbox {if } z_{f}^*\le M, \\ \frac{\nu K-\mu M}{\nu }, &{} \quad \hbox {if } z_{f}^*>M. \end{array}\right. } \end{aligned}$$
(4.4)

We give a heuristic derivation of (4.4), skipping the proof, which follows a procedure similar to that of the general case. The intuition behind (4.4) is as follows: Let \(S^n(B)\) be the sojourn time (in steady state) of an uncharged EV in the \(n^{th}\) system. By Little’s law, we have that

$$\begin{aligned} {\mathbb {E}}\left[ Z_{f}^n(\infty )\right] =\nu {\mathbb {E}}\left[ K^n-Z_{f}^n(\infty )\right] {\mathbb {E}}\left[ S^n(B)\right] . \end{aligned}$$
(4.5)

By the discussion after Theorem 2.4 in [23], because we observe the system in steady state at time 0, the number of uncharged EVs hardly changes for large enough n. By the snapshot principle, we have that \(S^n(B)=B \frac{Z_{f}^n(\infty )}{Z_{f}^n(\infty ){{\,\mathrm{\wedge }\,}}M^n}+o(n)= B\left( \frac{Z_{f}^n(\infty )}{M^n}{{\,\mathrm{\vee }\,}}1\right) +o(n)\). Substituting the last expression into (4.5) and dividing by n yields

$$\begin{aligned} {\mathbb {E}}\left[ \frac{Z_{f}^n(\infty )}{n}\right] =\nu {\mathbb {E}}\left[ K-\frac{Z_{f}^n(\infty )}{n}\right] {\mathbb {E}}\left[ B \left( \frac{Z_{f}^n(\infty )}{M n}{{\,\mathrm{\vee }\,}}1\right) +\frac{o(n)}{n}\right] . \end{aligned}$$

Now, taking the limit as n goes to infinity leads to

$$\begin{aligned} z_{f}^*= \frac{\nu (K-z_{f}^*) \left( \frac{z_{f}^*}{M }{{\,\mathrm{\vee }\,}}1\right) }{\mu }= \frac{\nu (K-z_{f}^*)z_{f}^*}{\mu \left( z_{f}^*{{\,\mathrm{\wedge }\,}}M \right) }. \end{aligned}$$

Finally, \(z_{f}^*\) is given by the following fixed-point equation:

$$\begin{aligned} \mu \left( z_{f}^*{{\,\mathrm{\wedge }\,}}M \right) = \nu (K-z_{f}^*), \end{aligned}$$

and solving this last equation leads to (4.4).
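Since the left side \(\mu (z\wedge M)\) is nondecreasing and the right side \(\nu (K-z)\) is strictly decreasing in z, the fixed-point equation has a unique solution, and exactly one branch of (4.4) is self-consistent (note that \(\frac{\nu K}{\nu +\mu }\le M\) if and only if \(\frac{\nu K-\mu M}{\nu }\le M\)). A short sketch (names and parameters are ours, for illustration):

```python
def z_f_star(nu, mu, K, M):
    """Unique solution of mu*(z ∧ M) = nu*(K - z), i.e., formula (4.4)."""
    z1 = nu * K / (nu + mu)            # candidate assuming z_f* <= M
    if z1 <= M:
        return z1
    return (nu * K - mu * M) / nu      # otherwise the z_f* > M branch applies

# Examples: nu=mu=1, K=4, M=3 -> 2.0 (first branch);
#           nu=mu=1, K=10, M=2 -> 8.0 (second branch).
```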

We shall see in the numerical examples in Sect. 5 that the fluid approximation captures the fraction of fully charged EVs well in most cases. However, especially in the underloaded regime and for a small number of EV chargers, the error becomes larger. In the next section, we move to diffusion approximations.

4.2 Diffusion approximations

In this section, we show diffusion limit theorems and diffusion approximations for the process describing the number of uncharged EVs and the total number of EVs in the parking lot (the vector process). To do this, we follow the strategy set up in [32] using the martingale representation.

First, we work in the Halfin–Whitt regime (see Sect. 4.2.1). Using the “square-root staffing rule” to scale the system parameters, we extend [32, Theorem 7.1] and obtain a limit which is a reflected two-dimensional OU process with piecewise linear drift. Then, we derive an equation which characterizes its steady-state distribution, the so-called basic adjoint relation (BAR). However, it turns out that the computation of the steady-state distribution is a hard problem which is beyond the scope of this paper and remains open.

The second asymptotic regime we consider here is an overloaded regime (Sect. 4.2.2). Assuming that the process describing the total number of EVs is in an overloaded regime and using the “square-root staffing rule” to scale the total power capacity in the system, we can show that the scaled vector process converges weakly to a one-dimensional limit. Thus, we can compute its steady-state distribution.

Finally, in Sect. 4.2.3, we focus on the case where the parking times of the EVs are sufficiently large. We give a heavy traffic limit and a two-dimensional approximation for the vector process.

4.2.1 Diffusion approximation in the Halfin–Whitt regime

The main goal in this section is to prove a two-dimensional diffusion limit for the vector process. For \(-\infty< \beta , \kappa < \infty \), consider the following scaling:

  1. \(\lambda ^n=n(\nu +\mu )\),

  2. \(M^n=\frac{\lambda ^n}{\nu +\mu }+\beta \sqrt{n}\),

  3. \(K^n=\frac{\lambda ^n}{\nu }+\kappa \sqrt{n}\).

Define a sequence of diffusion-scaled processes \({\hat{Q}}^n(\cdot ):=\frac{Q^n(\cdot )-\frac{\lambda ^n}{\nu }}{\sqrt{n}}\) and \({\hat{Z}}^n(\cdot ):=\frac{Z^n(\cdot )-\frac{\lambda ^n}{\nu +\mu }}{\sqrt{n}}\). The allocation function in the \(n^{\text {th}}\) system is given by \(L^n(Z^n(\cdot )):= Z^n(\cdot ){{\,\mathrm{\wedge }\,}}M^n\). We can then prove the following theorem.

Theorem 4.5

(Diffusion limit in the Halfin–Whitt regime) If \(({\hat{Z}}^n(0),{\hat{Q}}^n(0))\overset{d}{\rightarrow }({\hat{Z}}(0),{\hat{Q}}(0))\) as \(n \rightarrow \infty \), then \(({\hat{Z}}^n(\cdot ),{\hat{Q}}^n(\cdot ))\overset{d}{\rightarrow }({\hat{Z}}(\cdot ),{\hat{Q}}(\cdot ))\). The limit satisfies the following two-dimensional stochastic differential equation:

$$\begin{aligned} \begin{aligned} \left( \begin{array}{cc} \mathrm{d}{\hat{Z}}(t) \\ \mathrm{d}{\hat{Q}}(t) \end{array}\right)&= \left( \begin{array}{cc} b_1({\hat{Z}}(t)) \\ b_2({\hat{Q}}(t)) \end{array}\right) \mathrm{d}t +\left( \begin{array}{cc} \sqrt{2 (\nu +\mu )}&{} 0 \\ 0 &{} \sqrt{2 (\nu +\mu )} \end{array}\right) \left( \begin{array}{cc} \mathrm{d}W_{{\hat{Z}}}(t) \\ \mathrm{d}W_{{\hat{Q}}}(t) \end{array}\right) \\&\qquad - \left( \begin{array}{cc} \mathrm{d}{\hat{Y}}(t) \\ \mathrm{d}{\hat{Y}}(t) \end{array}\right) , \end{aligned} \end{aligned}$$
(4.6)

where \(b_1(x)=-\,\mu (x {{\,\mathrm{\wedge }\,}}\beta )-\nu x\) and \(b_2(x)=-\,\nu x\). Further, \(W_{{\hat{Z}}}(\cdot )\) and \(W_{{\hat{Q}}}(\cdot )\) are driftless, univariate Brownian motions such that \(2(\nu +\mu ){\mathbb {E}}\left[ W_{{\hat{Z}}}(t) W_{{\hat{Q}}}(t)\right] = (2\nu +\mu ) t\). In addition, \({\hat{Y}}(\cdot )\) is the unique nondecreasing nonnegative process such that (4.6) holds and \(\int _{0}^{\infty } {\mathbb {1}}_{\{{{\hat{Q}}}(t)<\kappa \}}d {\hat{Y}}(t)=0\).
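The limit (4.6) is straightforward to simulate, which is useful as a sanity check given that its steady-state distribution is not available in closed form. In an Euler–Maruyama scheme, the driving Brownian motions have correlation \(\frac{2\nu +\mu }{2(\nu +\mu )}\), and the common regulator can be discretized by pushing \(\hat{Q}\) back to \(\kappa \) whenever it overshoots and subtracting the same increment from \(\hat{Z}\). A rough sketch (ours, not part of the paper; all parameter values illustrative):

```python
import math, random

def simulate_limit(nu, mu, beta, kappa, n_steps=1000, dt=1e-3, seed=1):
    """Euler-Maruyama discretization of (4.6). Returns sampled paths of
    (Z_hat, Q_hat); the regulator increment keeps Q_hat <= kappa and is
    subtracted from both coordinates, mimicking the common reflection term."""
    rng = random.Random(seed)
    rho = (2 * nu + mu) / (2 * (nu + mu))   # correlation of W_Z and W_Q
    sig = math.sqrt(2 * (nu + mu))
    sdt = math.sqrt(dt)
    z, q = 0.0, 0.0
    zs, qs = [z], [q]
    for _ in range(n_steps):
        g1 = rng.gauss(0.0, 1.0)
        g2 = rho * g1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        z += (-mu * min(z, beta) - nu * z) * dt + sig * sdt * g1  # drift b_1
        q += -nu * q * dt + sig * sdt * g2                        # drift b_2
        dY = max(0.0, q - kappa)   # Y_hat increases only when Q_hat hits kappa
        z -= dY
        q -= dY
        zs.append(z)
        qs.append(q)
    return zs, qs
```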

Adapting [32, Sect. 7.3], we can show that the last theorem also holds if we allow the arrival process to be a general stochastic process under the assumption that it satisfies the functional central limit theorem.

The proof of Theorem 4.5 is given in Sect. 6.3.1 and is organized as follows:

  1. We first establish a continuity result and show the existence and uniqueness of the candidate limit (Proposition 6.1).

  2. We then rewrite the system dynamics using appropriate martingales and filtrations; see Eqs. (6.15), (6.16), and Proposition 6.2.

  3. Next, we show in Proposition 6.3 that the corresponding fluid-scaled processes converge weakly to deterministic functions.

  4. Finally, the proof of Theorem 4.5 is completed by applying the martingale central limit theorem in [20] and Proposition 6.1.

Next, we focus on characterizing the joint steady-state distribution of the limit given by (4.6). Our approach is to find a functional equation which describes the joint steady-state distribution, the so-called basic adjoint relation. The next step is to use the BAR in order to obtain a key relation for the moment-generating function of the vector process. The piecewise linear drift and the existence of the reflection in (4.6) make the key relation complicated, and its analysis is beyond the scope of this paper.

For any \(t\ge 0\), we know that \({\hat{Z}}(t)\in {\mathbb {R}}\) and \({\hat{Q}}(t)\le \kappa \). It is more convenient to transform the previous processes such that \({\hat{Z}}(t)\in {\mathbb {R}}\) and \(\kappa -{\hat{Q}}(t)\ge 0\). To do so, we recall that \(b_2(x)=-\,\nu x\). Thus, the diffusion limit can be written in the following integral form—see (4.6):

$$\begin{aligned} {\hat{Z}}(t)&= {\hat{Z}}(0)+ \int _{0}^{t} b_1({\hat{Z}}(s))\mathrm{d}s+ \sqrt{2(\nu +\mu )}W_{{\hat{Z}}}(t)-{\hat{Y}}(t),\\ {\hat{Q}}(t)&= {\hat{Q}}(0)- \nu \int _{0}^{t} {\hat{Q}}(s)\mathrm{d}s+ \sqrt{2(\nu +\mu )}W_{{\hat{Q}}}(t)-{\hat{Y}}(t), \end{aligned}$$

where \({\hat{Y}}(\cdot )\) is defined in Theorem 4.5. Multiplying by \((-1)\), adding and subtracting the terms \(\kappa \) and \(\nu \kappa t\) in the last equation, we obtain

$$\begin{aligned} \kappa - {\hat{Q}}(t) =\kappa - {\hat{Q}}(0) +\nu \int _{0}^{t}\left( \kappa -(\kappa - {\hat{Q}}(s))\right) \mathrm{d}s -\sqrt{2(\nu +\mu )}W_{{\hat{Q}}}(t)+{\hat{Y}}(t). \end{aligned}$$

Defining \({\hat{Q}}_{\kappa }(t):=\kappa -{\hat{Q}}(t)\) for \(t \ge 0\), we have that

$$\begin{aligned} {\hat{Q}}_{\kappa }(t)&= {\hat{Q}}_{\kappa }(0)+ \int _{0}^{t} b_{\kappa }({\hat{Q}}_{\kappa }(s))\mathrm{d}s- \sqrt{2(\nu +\mu )}W_{{\hat{Q}}}(t) +{\hat{Y}}(t), \end{aligned}$$

where \(b_{\kappa }(x)=\nu (\kappa -x)\). The process \({\hat{Q}}_{\kappa }(t)\) represents the number of available spots in the parking lot at time \(t\ge 0\) (after scaling and after taking the limit as n goes to infinity). Furthermore, \({\hat{Y}}(t)\) increases if and only if \({\hat{Q}}_{\kappa }(t)=0\). Define \(\varvec{X}(\cdot ):=({\hat{Z}}(\cdot ),{\hat{Q}}_{\kappa }(\cdot ) )\), and note that each component of \(\varvec{X}(\cdot )\) is a semimartingale. Let \(G=\{\varvec{x}:=(x_1,x_2)\in {\mathbb {R}}^2: x_2>0\}\). The boundary and the closure of G are given by \(\partial G=\{\varvec{x}\in {\mathbb {R}}^2: x_2=0\}\) and \({\bar{G}}=G \cup \partial G\), respectively. Now, observe that \(\varvec{X}(t)\in {\bar{G}}\) for any \(t\ge 0\). A geometrical representation of the space G and its boundary is shown in Fig. 5.

Fig. 5 The space G and its boundary for \(\beta >0\)

Before we continue with the derivation of the BAR, we note some properties of the process \(\hat{Y}(\cdot )\), which is known as the regulator. It is known that \((\hat{Q}(\cdot ), \hat{Y}(\cdot ))\) satisfies a one-dimensional reflection mapping (or one-dimensional Skorokhod problem). The regulator \(\hat{Y}(\cdot )\) is continuous, nondecreasing, and has the property

$$\begin{aligned} \int _{0}^{\infty } {\mathbb {1}}_{\{{{\hat{Q}}}_{\kappa }(t)>0\}}d {\hat{Y}}(t)=0, \end{aligned}$$

or equivalently, for all \(t\ge 0\),

$$\begin{aligned} {\hat{Y}}(t)=\int _{0}^{t} {\mathbb {1}}_{\{\hat{Q}_{\kappa }(s)=0\}}d {\hat{Y}}(s). \end{aligned}$$

By [11, Theorem 6.1], almost all paths of the regulator are Lipschitz continuous on the space \(\{x(\cdot ) \in D(0,\infty ), x(0)\ge 0\}\) under the uniform topology, and hence absolutely continuous. From the latter, it follows that \(\hat{Y}(\cdot )\) is of bounded variation. Moreover, by the proof of [46, Theorem 2.2] (see also [6, Corollary 6]), there exists a (positive) constant w such that

$$\begin{aligned} {\hat{Y}}(t)= w\int _{0}^{t} {\mathbb {1}}_{\{ \hat{Q}_{\kappa }(s)=0\}}\mathrm{d}s. \end{aligned}$$
(4.7)

For more details, we refer to [8, Lemma 3.1] and [28].

In the sequel, we focus on deriving a functional equation which characterizes the steady-state distribution \(\pi (\cdot ,\cdot )\) of the process \(\{\varvec{X}(t), t\ge 0\}\), provided that it exists. To handle the boundary of the space G, we define a measure \(\sigma \) on \(({{\bar{G}}}, {\mathfrak {B}}({{\bar{G}}}))\) given by

$$\begin{aligned} \sigma (B)= {\mathbb {E}}^{\pi }\left[ \int _{0}^{1} {\mathbb {1}}_{\{\varvec{X}(s) \in B\}} d {\hat{Y}}(s)\right] , \ B \in {\mathfrak {B}}({{\bar{G}}}). \end{aligned}$$
(4.8)

Further, it follows by (4.7) that \(\sigma (B)\le w {\mathbb {E}}^{\pi }[ {\hat{Y}}(1)]<\infty \), which yields that \(\sigma \) is a finite measure. Moreover, we define it (for simplicity) on \({{\bar{G}}}\), but as \({\hat{Y}}(\cdot )\) increases only on the boundary \(\partial G\), the measure concentrates on \(\partial G\). In other words, \(\sigma \) is a finite boundary measure.

Using [27, Theorem 1 and Remark 5.2] and Itô calculus, the BAR takes the following form:

$$\begin{aligned} \begin{aligned} \int _{{{\bar{G}}}}{\mathcal {L}}f(\varvec{x}) \pi (\mathrm{d} \varvec{x}) - \int _{{{\bar{G}}}}\frac{\partial f}{\partial x_1}(\varvec{x}) \sigma (d \varvec{x}) + \int _{{{\bar{G}}}}\frac{\partial f}{\partial x_2}(\varvec{x}) \sigma (\mathrm{d} \varvec{x})=0, \end{aligned} \end{aligned}$$
(4.9)

where the boundary measure \(\sigma \) is defined in (4.8) and \({\mathcal {L}}\) is the second-order operator, i.e.,

$$\begin{aligned} {\mathcal {L}}f(\varvec{x})&= b_1(x_1) \frac{\partial f}{\partial x_1}(\varvec{x}) +b_{\kappa }(x_2) \frac{\partial f}{\partial x_2}(\varvec{x}) +(\nu +\mu ) \frac{\partial ^2 f}{\partial x_1 \partial x_1}(\varvec{x})\\&\qquad +(\nu +\mu ) \frac{\partial ^2 f}{\partial x_2 \partial x_2}(\varvec{x}) -(2\nu +\mu ) \frac{\partial ^2 f}{\partial x_1 \partial x_2}(\varvec{x}). \end{aligned}$$

The next step is to derive a key relation between the moment-generating functions of \(\pi \) and \(\sigma \). Let us define the two-dimensional moment-generating function (MGF) of \(\pi \),

$$\begin{aligned} G^{\pi }(\varvec{\theta }):={\mathbb {E}}^{\pi }[\hbox {e}^{\varvec{\theta }\cdot \varvec{X}(\infty ) }]= \int _{{{\bar{G}}}} \hbox {e}^{\varvec{\theta }\cdot \varvec{x} } \pi (\mathrm{d}\varvec{x}), \end{aligned}$$

for \(\varvec{\theta }:=(\theta _1,\theta _2)\in {\mathbb {R}}^2\), and \(\varvec{\theta }\cdot \varvec{x}:=\theta _1x_1+\theta _2x_2\). In the same way, we define the one-dimensional MGF of \(\sigma \),

$$\begin{aligned} G^{\sigma }(\theta _1):= \int _{{{\bar{G}}}} \hbox {e}^{\theta _1 x_1 } \sigma (d \varvec{x}). \end{aligned}$$

Further, define the set \(\varTheta =\{\varvec{\theta }\in {\mathbb {R}}^2: G^{\pi }(\varvec{\theta })<\infty ,\ G^{\sigma }(\theta _1)<\infty \}\), which we assume to be nonempty. Assuming that \(\varvec{\theta }\in \varTheta \) and adapting [16, Lemma 4.1], we derive the following key relation:

$$\begin{aligned}&-\mu \beta \theta _1 {\mathbb {E}}^{\pi } [{\mathbb {1}}_{\{Z(\infty )>\beta \}} \hbox {e}^{\varvec{X} (\infty ) \cdot \varvec{\theta }}] -\mu \theta _1 {\mathbb {E}}^{\pi } [Z(\infty ){\mathbb {1}}_{\{Z(\infty )\le \beta \}} \hbox {e}^{\varvec{X}(\infty ) \cdot \varvec{\theta }}] -\nu \theta _1 G_{\theta _1}^\pi (\varvec{\theta })\nonumber \\&\quad -\nu \theta _2 G_{\theta _2}^\pi (\varvec{\theta }) +\gamma (\varvec{\theta }) G^\pi (\varvec{\theta }) +(\theta _2-\theta _1) G^\sigma (\theta _1)=0, \end{aligned}$$
(4.10)

where \(\gamma (\varvec{\theta })=\nu \kappa \theta _2 +(\nu +\mu ) (\theta _1^2+\theta _2^2) -(2\nu +\mu ) \theta _1 \theta _2\) and \(G_{\theta _i}^\pi (\cdot )\) denotes the derivative with respect to \(\theta _i\), \(i=1,2\).

Equation (4.10) is rather complicated due to the piecewise linear term and the existence of the boundary measure. Although the analysis of (4.10) is beyond the scope of the current paper, we conjecture that the Wiener–Hopf method [13] and boundary value techniques [14] may be applied.

It turns out that (4.10) remains quite complicated even if we assume \(K=\infty \), i.e., no boundary measure. Contrary to the one-dimensional case [9], the steady-state distribution of \((Z(\cdot ), Q(\cdot ))\) cannot be written as a linear combination of two distributions. To see this, define \(\pi _{-}(\varvec{x})\) to be a bivariate normal distribution with mean vector \(\varvec{\mu }_{-}= (0,0)\) and covariance matrix \( \varvec{\varSigma }_{-}=\left( \begin{array}{cc} 1 &{} 1 \\ 1 &{} \frac{\nu +\mu }{\nu } \\ \end{array} \right) \). In addition, let \(\pi _{+}(\varvec{x})\) be a bivariate normal distribution with mean vector \(\varvec{\mu }_{+}=(- \mu \beta /\nu ,0) \) and covariance matrix \(\varvec{\varSigma }_{+}=\left( \begin{array}{cc} \frac{\nu +\mu }{\nu } &{} \quad \frac{2\nu +\mu }{2\nu } \\ \frac{2\nu +\mu }{2\nu } &{} \quad \frac{\nu +\mu }{\nu } \\ \end{array} \right) \). The distributions \(\pi _{-}\) and \(\pi _{+}\) correspond to the solution of the Kolmogorov forward equations (or Fokker–Planck equations) of (4.6) with drift function \(-(\nu +\mu )x\) and \(-\mu \beta -\nu x \), respectively. Adapting [9], we define \(\pi _{\infty }(\varvec{x}):= c_1 \pi _{-}(\varvec{x}){\mathbb {1}}_{\{x_1\le \beta \}}+ c_2 \pi _{+}(\varvec{x}){\mathbb {1}}_{\{x_1> \beta \}}\), where the constants \(c_1\), \(c_2\) are given by [21, Eqs. (3.9), (3.10)]. Namely, we have that

$$\begin{aligned} c_1= & {} \left( \varPhi (\beta )+ \sqrt{\frac{\nu +\mu }{\nu }} \exp \left\{ \frac{\mu \beta ^2}{2\nu }\right\} \left( 1-\varPhi \left( \sqrt{\frac{\nu +\mu }{\nu }}\beta \right) \right) \right) ^{-1},\\ c_2= & {} c_1\sqrt{\frac{\nu +\mu }{\nu }}\exp \left\{ \frac{\mu \beta ^2}{2\nu }\right\} , \end{aligned}$$

where \(\varPhi (\cdot )\) denotes the cumulative distribution function of the standard normal distribution. It can easily be verified that \(\pi _{\infty }\) is indeed a probability distribution, but it does not satisfy the correct marginal distribution of \(\hat{Q}(\infty )\), as shown in Fig. 6. It is well known that \(\hat{Q}(\infty )\) follows a normal distribution with zero mean and variance \(\frac{\nu +\mu }{\nu }\). For a discussion on this topic, see [15]. In the sequel, we move to different asymptotic regimes.

Fig. 6 Marginal pdf of \(\hat{Q}(\infty )\) and the Normal\((0,\frac{\nu +\mu }{\nu })\) pdf for \(\beta =0\) and \(K=\infty \)
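The constants \(c_1,c_2\) are exactly the normalizing constants: the \(x_1\)-marginal of \(\pi _{-}\) is standard normal and that of \(\pi _{+}\) is normal with mean \(-\mu \beta /\nu \) and variance \(\frac{\nu +\mu }{\nu }\), so the total mass of \(\pi _{\infty }\) is \(c_1\varPhi (\beta )+c_2\left( 1-\varPhi \left( \sqrt{\tfrac{\nu +\mu }{\nu }}\beta \right) \right) =1\). A quick numerical confirmation (function names ours; parameter values illustrative):

```python
import math

def Phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def total_mass(nu, mu, beta):
    """c1 * P(x1 <= beta under pi_-) + c2 * P(x1 > beta under pi_+),
    with c1, c2 as displayed above; should equal 1."""
    s = math.sqrt((nu + mu) / nu)
    e = math.exp(mu * beta ** 2 / (2.0 * nu))
    c1 = 1.0 / (Phi(beta) + s * e * (1.0 - Phi(s * beta)))
    c2 = c1 * s * e
    mean_plus, sd_plus = -mu * beta / nu, s   # x1-marginal of pi_+
    return c1 * Phi(beta) + c2 * (1.0 - Phi((beta - mean_plus) / sd_plus))
```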

4.2.2 Diffusion approximation in an overloaded parking lot

In this section, we study an overloaded parking lot. First, we show a diffusion limit in the case that the parking lot is always full (see Sect. 3.3). We then show that the diffusion-scaled vector process for the original system collapses to this one-dimensional limit. Our motivation in this section comes from results in [1]. Specifically, the authors there show that under an appropriate scaling (including parameter and time scaling), the number of empty spaces in an overloaded parking lot behaves like an M/M/1 queue. However, here we need a modification of this result in which the time scaling is dropped.

First, we define the dynamical equation that describes the evolution of the process of the number of uncharged EVs when the parking lot is always full. Let \(N_{\nu }^f(\cdot )\) and \(N_{\mu }^f(\cdot )\) denote two independent Poisson processes with rates \(\nu \) and \(\mu \), respectively. For any \(t\ge 0\), we have that

$$\begin{aligned} Z_{f}(t)= Z_{f}(0)+N_{\nu }^f\left( \int _{0}^{t}(K-Z_{f}(s))\mathrm{d}s\right) -N_{\mu }^f\left( \int _{0}^{t}(Z_{f}(s){{\,\mathrm{\wedge }\,}}M)\, \mathrm{d}s\right) , \end{aligned}$$
(4.11)

where \(Z_{f}(0)\le K\) almost surely.

Next, we introduce our asymptotic regime. Take \(K^n\) and \(M^n\) such that \(K^n=nK\) and \(M^n=\frac{\nu }{\nu +\mu }K^n+\sqrt{n}\beta \), where \(K,\beta \ge 0\). The following proposition gives a diffusion limit for the scaled process describing the number of uncharged EVs, i.e., \(Z_{f}(\cdot )\).

Proposition 4.6

Define the scaled process \(\hat{Z}^n_f(\cdot ):=\frac{Z_{f}^n(\cdot )-\frac{\nu }{\nu +\mu }K^n}{\sqrt{n}}\). If \(\hat{Z}^n_f(0)\overset{d}{\rightarrow }\hat{Z}_f(0)\), then \(\hat{Z}^n_f(\cdot )\overset{d}{\rightarrow }\hat{Z}_f(\cdot )\), where the limit satisfies the following stochastic differential equation:

$$\begin{aligned} \mathrm{d}\hat{Z}_f(t)= v(\hat{Z}_f(t))\mathrm{d}t+ \sqrt{\frac{2 \nu \mu K }{\nu +\mu }} \mathrm{d}W(t). \end{aligned}$$

Moreover, the drift function is given by \(v(x)= {\left\{ \begin{array}{ll} -(\nu +\mu )x, &{} \text{ if } x\le \beta , \\ -\nu x-\mu \beta , &{} \text{ if } x>\beta , \end{array}\right. }\) and \(W(\cdot )\) is a standard Brownian motion.

Next, we give the steady-state distribution of the process \(\hat{Z}_f(\cdot )\). This can be done by following [9, Eq. (3)]. Define the following truncated normal probability density functions:

$$\begin{aligned} \pi _{f}^-(x)= \frac{\phi \left( \frac{x}{\sigma _1}\right) }{\sigma _1 \varPhi \left( \frac{\beta }{\sigma _1}\right) }, \text{ for } x\le \beta , \text{ and } \pi _{f}^+(x)= \frac{\phi \left( \frac{x+\frac{\mu \beta }{\nu }}{\sigma _2}\right) }{\sigma _2\left( 1- \varPhi \left( \frac{\beta +\frac{\mu \beta }{\nu }}{\sigma _2}\right) \right) }, \text{ for } x> \beta , \end{aligned}$$

where \(\sigma _1^2=\frac{1}{\nu +\mu }\frac{ \nu \mu K }{\nu +\mu }\) and \(\sigma _2^2=\frac{ \mu K }{\nu +\mu }\). Now, the stationary pdf of \(\hat{Z}_f(\infty )\) is given by

$$\begin{aligned} \pi _{f}(x)= d_1 \pi _{f}^-(x) {\mathbb {1}}_{\{x\le \beta \}}+d_2 \pi _{f}^+(x) {\mathbb {1}}_{\{x> \beta \}}, \end{aligned}$$
(4.12)

for \(x\in {\mathbb {R}}\). Moreover, the constants are \(d_1=\frac{1}{1+r}\) and \(d_2=1-d_1\) with \(r=\frac{\sigma _1^2}{\sigma _2^2}\frac{\pi _{f}^{-}(\beta )}{\pi _{f}^{+}(\beta )}\).

Having studied the system when it is always full, we now move to the original stochastic model. The first step is to find a relation between the process that gives the number of uncharged EVs and the process that gives the empty parking spaces. Recall that the total number of EVs in the system is given by the following equation:

$$\begin{aligned} Q(t)= & {} Q(0)+N_{\lambda } \left( \int _{0}^{t} {\mathbb {1}}_{\{Q(s)<K\}} \mathrm{d}s\right) -N_{\nu ,1}\left( \int _{0}^{t}Z(s) \mathrm{d}s\right) \nonumber \\&-N_{\nu ,2}\left( \int _{0}^{t} \left( Q(s)-Z(s)\right) \mathrm{d}s\right) , \end{aligned}$$

and the number of uncharged EVs is

$$\begin{aligned} Z(t)&=Z(0)+N_{\lambda } \left( \int _{0}^{t} {\mathbb {1}}_{\{Q(s)<K\}} \mathrm{d}s\right) -N_{\mu }\left( \int _{0}^{t} Z(s){{\,\mathrm{\wedge }\,}}M \mathrm{d}s\right) \\&\qquad -N_{\nu ,1}\left( \int _{0}^{t} Z(s) \mathrm{d}s\right) . \end{aligned}$$

Define the stochastic process that describes the number of empty parking spaces in the system, \(E(t):=K-Q(t)\), \(t\ge 0\). Using the definition of the process \(E(\cdot )\), we have that the system dynamics can be rewritten as follows:

$$\begin{aligned} E(t)= & {} E(0)-N_{\lambda } \left( \int _{0}^{t} {\mathbb {1}}_{\{E(s)>0\}} \mathrm{d}s\right) +N_{\nu ,1}\left( \int _{0}^{t}Z(s) \mathrm{d}s\right) \nonumber \\&+\,N_{\nu ,2}\left( \int _{0}^{t} \left( K-E(s)-Z(s)\right) \mathrm{d}s\right) , \end{aligned}$$
(4.13)
$$\begin{aligned} Z(t)= & {} Z(0)+N_{\lambda } \left( \int _{0}^{t} {\mathbb {1}}_{\{E(s)>0\}} \mathrm{d}s\right) -N_{\mu }\left( \int _{0}^{t} Z(s){{\,\mathrm{\wedge }\,}}M \mathrm{d}s\right) \nonumber \\&-\,N_{\nu ,1}\left( \int _{0}^{t} Z(s) \mathrm{d}s\right) . \end{aligned}$$
(4.14)

By (4.13), it follows that

$$\begin{aligned} N_{\lambda } \left( \int _{0}^{t} {\mathbb {1}}_{\{E(s)>0\}} \mathrm{d}s\right)&= E(0)-E(t) +N_{\nu ,1}\left( \int _{0}^{t}Z(s) \mathrm{d}s\right) \\&\quad +N_{\nu ,2}\left( \int _{0}^{t} \left( K-E(s)-Z(s)\right) \mathrm{d}s\right) . \end{aligned}$$

Substituting the last equation into (4.14) yields

$$\begin{aligned} \begin{aligned} Z(t)&=Z(0)+E(0)-E(t) + N_{\nu ,2}\left( \int _{0}^{t} \left( K-E(s)-Z(s)\right) \mathrm{d}s\right) \\&\quad -N_{\mu }\left( \int _{0}^{t} Z(s){{\,\mathrm{\wedge }\,}}M \mathrm{d}s\right) . \end{aligned} \end{aligned}$$
(4.15)

The last relation and an asymptotic bound for the process \(E^n(\cdot )\) (see Proposition 6.4) are the core elements we use to prove the main result in this section.

Theorem 4.7

Assume that \(\lambda ^n=\lambda n\), \(K^n= K n \), \(M^n=\frac{\nu }{\nu +\mu }K^n+\beta \sqrt{n}\), and \({\mathbb {P}}\left( E^n(0)=0\right) =1\). Further, we assume \(\nu K< \lambda \). If \(\frac{Z^n(0)-\frac{\nu }{\nu +\mu }K^n}{\sqrt{n}}\overset{d}{\rightarrow }\hat{Z}_f(0)\), then

$$\begin{aligned} \frac{Z^n(\cdot )-\frac{\nu }{\nu +\mu }K^n}{\sqrt{n}} \overset{d}{\rightarrow }\hat{Z}_f(\cdot ), \text{ as } n\rightarrow \infty , \end{aligned}$$

where the process \(\hat{Z}_f(\cdot )\) is given in Proposition 4.6.

A diffusion approximation for the expected number of uncharged EVs in the original system in an overloaded regime is now given by

$$\begin{aligned} {\mathbb {E}}\left[ Z_{f}(\infty )\right] \approx \sqrt{K}{\mathbb {E}}\left[ \hat{Z}_{f}(\infty )\right] +\frac{\nu }{\nu +\mu }K, \end{aligned}$$
(4.16)

and, by using (4.12), we have that

$$\begin{aligned} {\mathbb {E}}\left[ \hat{Z}_{f}(\infty )\right] =d_1\int _{-\infty }^{\beta } x\,\pi _{f}^-(x)\mathrm{d}x+d_2\int _{\beta }^{\infty } x\,\pi _{f}^+(x)\mathrm{d}x. \end{aligned}$$
(4.17)
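The integrals in (4.17) can be evaluated numerically. A sketch (ours) that assumes, consistently with the drift \(v\) of Proposition 4.6, that \(\pi _f^-\) is the \(N(0,\sigma _1^2)\) density truncated to \((-\infty ,\beta ]\) and \(\pi _f^+\) is the \(N(-\mu \beta /\nu ,\sigma _2^2)\) density truncated to \((\beta ,\infty )\); all parameter values are illustrative:

```python
import math

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_Zf_hat(nu, mu, K, beta, n=20000):
    """E[Z_f_hat(inf)] from (4.12) and (4.17) by trapezoidal integration.
    Assumes pi_f^- is truncated N(0, s1^2) on x <= beta and pi_f^+ is
    truncated N(-mu*beta/nu, s2^2) on x > beta."""
    s1 = math.sqrt(nu * mu * K) / (nu + mu)
    s2 = math.sqrt(mu * K / (nu + mu))
    m2 = -mu * beta / nu
    pm = lambda x: phi(x / s1) / (s1 * Phi(beta / s1))
    pp = lambda x: phi((x - m2) / s2) / (s2 * (1.0 - Phi((beta - m2) / s2)))
    r = (s1 ** 2 / s2 ** 2) * (pm(beta) / pp(beta))
    d1 = 1.0 / (1.0 + r)
    d2 = 1.0 - d1

    def trapz(f, a, b):
        h = (b - a) / n
        s = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            s += f(a + i * h)
        return s * h

    lo = min(0.0, beta) - 10.0 * s1        # covers the lower Gaussian tail
    hi = max(beta, m2) + 10.0 * s2         # covers the upper Gaussian tail
    mass = d1 * trapz(pm, lo, beta) + d2 * trapz(pp, beta, hi)
    mean = d1 * trapz(lambda x: x * pm(x), lo, beta) \
         + d2 * trapz(lambda x: x * pp(x), beta, hi)
    return mean, mass

# Diffusion approximation (4.16) for, e.g., nu=1, mu=2, K=9, beta=0.5:
m, mass = mean_Zf_hat(1.0, 2.0, 9.0, 0.5)
approx_EZf = math.sqrt(9.0) * m + 1.0 * 9.0 / (1.0 + 2.0)
```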

The asymptotic regime for an overloaded system leads to a one-dimensional approximation. In the next section, motivated by [41], we consider an asymptotic regime where we scale the parking times, which leads to a two-dimensional diffusion approximation.

4.2.3 Diffusion approximation for small parking rates

In this section, we study a diffusion approximation in the case where the parking rate \(\nu \) is “small.” First, we focus on the system with infinitely many parking spaces and show a heavy traffic limit theorem; see Sect. 2.2.1 for an alternative model description when \(K=\infty \). In this case, the limit is a two-dimensional OU process with reflection. Then, under an overload assumption (for the uncharged EVs), we derive a two-dimensional OU limit process; the same limit is obtained if we assume a sufficiently large number of parking spaces.

Assume that \(K=\infty \). Define the traffic intensity for this model as \(\rho :=\frac{\lambda }{\mu M}\). Let \(\mu , M\) be fixed. Further, define \(\nu ^n=\frac{1}{n}\) and \(\lambda ^n=\mu M\left( 1-\frac{c}{\sqrt{n}}\right) \) for some constant c. Note that \(\sqrt{n}(1-\rho ^n)=c\), which is our heavy traffic assumption. Moreover, define the diffusion-scaled process as follows:

$$\begin{aligned} {\tilde{Z}}^n(t):= \frac{Z^n(nt)}{\sqrt{n}} \quad \text {and} \quad {\tilde{Q}}^n(t):= \frac{ Q^n(nt)-\mu M n}{\sqrt{n}}. \end{aligned}$$

The next proposition states a heavy traffic result for the two-dimensional scaled process.

Proposition 4.8

(Heavy traffic) Assume that \(({{\tilde{Z}}}^n(0),{{\tilde{Q}}}^n(0))\)\(\overset{d}{\rightarrow }({{\tilde{Z}}}(0),{{\tilde{Q}}}(0))\) as \(n \rightarrow \infty \). We have that \(({{\tilde{Z}}}^n(\cdot ),{{\tilde{Q}}}^n(\cdot )) \overset{d}{\rightarrow }({{\tilde{Z}}}(\cdot ),{{\tilde{Q}}}(\cdot ))\) and that the limit satisfies the following two-dimensional stochastic differential equation:

$$\begin{aligned} \left( \begin{array}{cc} d{{\tilde{Z}}}(t) \\ d{{\tilde{Q}}}(t) \end{array}\right)&= -\left( \begin{array}{cc} c\mu M+{{\tilde{Z}}}(t) \\ c\mu M+{{\tilde{Q}}}(t) \end{array}\right) \mathrm{d}t + \left( \begin{array}{cc} \sqrt{2\mu M} &{}\quad 0 \\ 0 &{} \quad \sqrt{2\mu M}\\ \end{array} \right) \left( \begin{array}{cccc} \mathrm{d}W_{{{\tilde{Z}}}}(t) \\ \mathrm{d}W_{{{\tilde{Q}}}}(t) \end{array}\right) \\&\qquad + \left( \begin{array}{cc} d{{\tilde{Y}}}(t) \\ 0 \end{array}\right) , \end{aligned}$$

where \({\tilde{Y}}(t)\) satisfies the relation \(\int _{0}^{\infty } {\mathbb {1}}_{\{{\tilde{Z}}(t)>0\}}d{\tilde{Y}}(t)=0\). Further, \(W_{{{\tilde{Z}}}}(\cdot )\) and \(W_{{{\tilde{Q}}}}(\cdot )\) are driftless, univariate Brownian motions such that \({\mathbb {E}}\left[ W_{{{\tilde{Z}}}}(t) W_{{{\tilde{Q}}}}(t)\right] = t/2\).

Observe that the limit process in the last proposition depends on the reflection at zero, which makes the calculation of the joint distribution hard. We next consider an overloaded regime for the number of uncharged EVs. In this regime, the process \({{\tilde{Z}}}^n(\cdot )\) does not reach zero for large enough n [41]. To this end, let \(\lambda , \mu , M\) be fixed with \(\lambda >\mu M\) and \(\nu ^n=1/n\). Slightly modifying the scaled processes, i.e.,

$$\begin{aligned} {\tilde{Z}}^n_o(t):= \frac{ Z^n(nt)-(\lambda -\mu M)n}{\sqrt{n}} \quad \text {and} \quad {\tilde{Q}}^n_o(t):= \frac{ Q^n(nt)-\lambda n}{\sqrt{n}}, \end{aligned}$$

we are able to show the following proposition.

Proposition 4.9

Let \(\lambda >\mu M\). If \(({{\tilde{Z}}}^n_o(0),{{\tilde{Q}}}^n_o(0)) \overset{d}{\rightarrow }({{\tilde{Z}}}_o(0),{{\tilde{Q}}}_o(0))\) as \(n \rightarrow \infty \), then \(({{\tilde{Z}}}^n_o(\cdot ),{{\tilde{Q}}}^n_o(\cdot )) \overset{d}{\rightarrow }({{\tilde{Z}}}_o(\cdot ),{{\tilde{Q}}}_o(\cdot ))\). The diffusion limit satisfies the following two-dimensional stochastic differential equation:

$$\begin{aligned} \left( \begin{array}{cc} d{{\tilde{Z}}}_o(t) \\ d{{\tilde{Q}}}_o(t) \end{array}\right) = -\left( \begin{array}{cc} {{\tilde{Z}}}_o(t) \\ {{\tilde{Q}}}_o(t) \end{array}\right) \mathrm{d}t + \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \sqrt{\lambda }&{} -\sqrt{\mu M}&{} -\sqrt{\lambda -\mu M} &{} 0 \\ \sqrt{\lambda }&{} 0&{} -\sqrt{\lambda -\mu M} &{} -\sqrt{\mu M} \end{array}\right) \mathrm{d}\varvec{W}(t), \end{aligned}$$

where \(\varvec{W}(\cdot )=(W_1(\cdot ),W_2(\cdot ),W_3(\cdot ),W_4(\cdot ))^{\mathrm{T}}\), with \(W_i(\cdot )\) independent standard Brownian motions.

Note that we derive the same limit if we assume that \(K<\infty \) and we scale the number of parking spaces in the \(n^{th}\) system, \(K^n\), such that \(\frac{K^n-\lambda n}{\sqrt{n}}\rightarrow \infty \), as \(n \rightarrow \infty \). In this case, the fraction of time that the scaled process \({{\tilde{Q}}}_o^n(\cdot )\) spends on the boundary is negligible. This is made rigorous in the following lemma.

Lemma 4.10

For \(T>0\), we have that for any \(\epsilon >0\) there exists \(n_{\epsilon }\) such that

$$\begin{aligned} {\mathbb {P}}\left( \sup _{t\le T} {\tilde{Q}}^n_o(t)< \frac{K^n-\lambda n}{\sqrt{n}}\right) > 1-\epsilon , \end{aligned}$$

for all \(n>n_{\epsilon } \).

Remark 4.3

The sequence \(\{{\tilde{Q}}^n_o(t), t\ge 0\}\) is stochastically bounded, as it converges in distribution in \(({\mathcal {D}}[0,\infty ),J_1)\), which is a complete and separable metric space [44, Corollary 3.1]. Lemma 4.10 then follows, and it holds true with any sequence \(R^n\) diverging to infinity in place of \(\frac{K^n-\lambda n}{\sqrt{n}}\). Finally, note that we only need the weak convergence of the process \({\tilde{Q}}^n_o(\cdot )\) and the fact that the quantity \(\frac{K^n-\lambda n}{\sqrt{n}}\) goes to infinity.

The joint steady-state distribution of \(({{\tilde{Z}}}_o(\cdot ),{{\tilde{Q}}}_o(\cdot ))\), say \(\pi _o(\cdot ,\cdot )\), is given by a bivariate normal distribution with mean \(\varvec{\mu }=\left( 0, 0\right) \) and covariance matrix \(\varvec{\varSigma }=\left( \begin{array}{cc} \frac{\lambda }{\nu } &{} \frac{2\lambda -\mu M}{2\nu } \\ \frac{2\lambda -\mu M}{2\nu } &{} \frac{\lambda }{\nu } \\ \end{array} \right) \). Note that \(\varvec{\varSigma }\) is indeed a covariance matrix as it is positive definite. To see this, observe that

$$\begin{aligned} \begin{aligned} \det ({\varvec{\varSigma }})=\frac{1}{4\nu ^2}(4\lambda ^2-4\lambda ^2-\mu ^2 M^2 +4\lambda \mu M)>\frac{1}{4\nu ^2}(-\mu ^2 M^2+4\mu ^2 M^2)>0, \end{aligned} \end{aligned}$$

where the first inequality holds by the assumption \(\lambda >\mu M\).

Now, for parameters of the original system such that \(\lambda >\mu M\), sufficiently “small” \(\nu \), and \(K>\lambda /\nu \), we suggest the following diffusion approximation:

$$\begin{aligned} {\mathbb {E}}\left[ Z(\infty )\right]&\approx \frac{{\mathbb {E}}\left[ {\tilde{Z}}^n_o(\infty )\right] }{\sqrt{\nu }}+ \frac{(\lambda -\mu M)}{\nu },\\ {\mathbb {E}}\left[ Q(\infty )\right]&\approx \frac{{\mathbb {E}}\left[ {\tilde{Q}}^n_o(\infty )\right] }{\sqrt{\nu }}+ \frac{\lambda }{\nu }. \end{aligned}$$

5 Numerical evaluation

In this section, we validate numerically the previous bounds and approximations for three cases: the moderately (\(\lambda <\nu K\)), critically (\(\lambda =\nu K\)), and overloaded (\(\lambda >\nu K\)) systems. We focus on the expected number of uncharged EVs in the system and the probability that an EV leaves the charging station with a fully charged battery (the success probability). In all the numerical examples, we solve the flow balance equation (3.1) using standard numerical methods and we let \(\nu =1\). Finally, the relative error is calculated by the following formula: \(\text{ RE }=\frac{|{\mathbb {E}}\left[ Z(\infty )\right] -{\mathbb {E}}\left[ Z^{ap}(\infty )\right] |}{{\mathbb {E}}\left[ Z(\infty )\right] }100\%\), where \({\mathbb {E}}\left[ Z(\infty )\right] \) denotes the expected number of uncharged EVs in the original system, obtained by solving the two-dimensional Markov process, and \({\mathbb {E}}\left[ Z^{ap}(\infty )\right] \) denotes the expected number of uncharged EVs under the aforementioned approximations.
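The relative error used throughout this section is a direct transcription of the formula above; a minimal Python helper:

```python
def relative_error(exact, approx):
    """Relative error RE = |E[Z] - E[Z^ap]| / E[Z] * 100%, as defined in Sect. 5.

    `exact` is the value from the two-dimensional Markov process, `approx` the
    value from one of the approximations; `exact` must be nonzero.
    """
    return abs(exact - approx) / exact * 100.0
```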

First, we evaluate the fluid approximation. Table 1 gives the relative error between the expected number of uncharged EVs for the original system and the fluid approximation given in (4.2) for different values of the number of parking spaces K and for \(\mu =1\). For a given K, we give only the maximum relative error for \(0<M\le K\). As expected, the relative error decreases as \(\lambda \) and K increase. In Tables 2, 3, and 4, we present the relative error between the expected number of uncharged EVs for the original system and the modified fluid approximation given in Eq. (4.3) for different values of the charging rate: \(\mu =1/2,1,3/2\). Not surprisingly, the relative error is much smaller in this case. For high values of \(\lambda \) and K, the relative error is approximately 2–5% rather than 10–20%. In addition, the modified fluid approximation seems to be reasonable also in the moderate regime.

Table 1 Evaluation of the original fluid approximation for \(\mu =1\)
Table 2 Evaluation of the modified fluid approximation for \(\mu =1/2\)
Table 3 Evaluation of the modified fluid approximation for \(\mu =1\)
Table 4 Evaluation of the modified fluid approximation for \(\mu =3/2\)
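For reference, the fluid invariant point \(z^*\) of (4.2) admits the closed form derived in the proof of Proposition 4.2 (Eq. (6.11)): with effective arrival rate \(\lambda \wedge \nu K\), it equals \(z_1=\frac{\lambda \wedge \nu K}{\nu +\mu }\) when \(z_1\le M\), and \(z_2=\frac{\lambda \wedge \nu K-\mu M}{\nu }\) otherwise. A minimal Python transcription:

```python
def fluid_fixed_point(lam, nu, mu, M, K):
    """Invariant point z* of the fluid limit, per Eq. (6.11) in the proof of
    Proposition 4.2: z1 if z1 <= M, else z2, with effective rate min(lam, nu*K)."""
    rate = min(lam, nu * K)
    z1 = rate / (nu + mu)       # equilibrium of the region z <= M
    z2 = (rate - mu * M) / nu   # equilibrium of the region z > M
    return z1 if z1 <= M else z2
```

For example, with \(\lambda =4\), \(\nu =\mu =1\), \(K=10\), the fixed point is 2 when \(M=3\) and 3 when \(M=1\).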

Next, we evaluate the approximation in the case of a full parking lot (see Sect. 3.3) and the diffusion approximation in an overloaded regime given by (4.16). To improve the approximations, we directly modify them by replacing the parameter K by the expected total number of EVs in the original system, i.e., \(\lambda (1-B(\lambda /\nu ,K))/\nu \). In particular, the modified \({\mathbb {E}}\left[ Z_{f}(\infty )\right] \) is calculated based on the stationary distribution in Proposition 3.4 after replacing K by \(\lambda (1-B(\lambda /\nu ,K))/\nu \). Moreover, the modified diffusion approximation in an overloaded regime is given by

$$\begin{aligned} {\mathbb {E}}\left[ Z_{f}(\infty )\right] \approx \sqrt{\frac{\lambda (1-B(\lambda /\nu ,K))}{\nu }} {\mathbb {E}}\left[ \hat{Z}_{f}(\infty )\right] +\frac{\lambda (1-B(\lambda /\nu ,K))}{\nu +\mu }, \end{aligned}$$

where \({\mathbb {E}}\left[ \hat{Z}_{f}(\infty )\right] \) is given in (4.17). Tables 5, 6, and 7 give the relative error for the modified \({\mathbb {E}}\left[ Z_{f}(\infty )\right] \) and \(\mu =1/2,1,3/2\), respectively. As expected, the relative error decreases as \(\lambda \) and K increase for all values of the charging rate \(\mu \). Furthermore, this approximation results in small relative errors in all regimes \((<3\%)\). The (prelimit) approximation \({\mathbb {E}}\left[ Z_{f}(\infty )\right] \) is better than the modified diffusion approximation in (4.16), as we see in Tables 8, 9, and 10.

Table 5 Evaluation of the modified \({\mathbb {E}}\left[ Z_{f}(\infty )\right] \) for \(\mu =1/2\)
Table 6 Evaluation of the modified \({\mathbb {E}}\left[ Z_{f}(\infty )\right] \) for \(\mu =1\)
Table 7 Evaluation of the modified \({\mathbb {E}}\left[ Z_{f}(\infty )\right] \) for \(\mu =3/2\)
Table 8 Evaluation of the modified diffusion approximation in an overloaded regime for \(\mu =1/2\)
Table 9 Evaluation of the modified diffusion approximation in an overloaded regime for \(\mu =1\)
Table 10 Evaluation of the modified diffusion approximation in an overloaded regime for \(\mu =3/2\)
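The modified approximations above replace K by the expected total number of EVs, \(\lambda (1-B(\lambda /\nu ,K))/\nu \). Assuming \(B(a,K)\) denotes the Erlang-B blocking probability (consistent with the Erlang loss marginal of Proposition 3.2), it can be evaluated with the standard numerically stable recursion; a short Python sketch:

```python
def erlang_b(a, K):
    """Erlang-B blocking probability B(a, K) for offered load a and K servers,
    via the standard stable recursion B(a, k) = a*B(a, k-1) / (k + a*B(a, k-1))."""
    b = 1.0                      # B(a, 0) = 1
    for k in range(1, K + 1):
        b = a * b / (k + a * b)
    return b

def modified_offered_load(lam, nu, K):
    """Expected total number of EVs, lam*(1 - B(lam/nu, K))/nu, used in place of K
    in the modified approximations of Sect. 5."""
    return lam * (1.0 - erlang_b(lam / nu, K)) / nu
```

For instance, \(B(1,1)=1/2\) and \(B(1,2)=1/5\), so with \(\lambda =\nu =1\) and \(K=1\) the modified load is \(0.5\).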

In the sequel, we depict the bounds in (3.2), the modified bound in (3.3)—where we replace K by \(\lambda (1-B(\lambda /\nu ,K))/\nu \)—(dotted line), the modified fluid approximation (4.3) (dashed line), and the modified diffusion approximation in (4.16) (dash-dot line). Note that the modified bound in (3.3) becomes

$$\begin{aligned} \frac{\lambda (1-B(\lambda /\nu ,K))/\nu -{\mathbb {E}}\left[ Z_{f}(\infty )\right] }{\lambda (1-B(\lambda /\nu ,K))/\nu }, \end{aligned}$$

where the modified \({\mathbb {E}}\left[ Z_{f}(\infty )\right] \) is as discussed earlier. In Figs. 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, and 18, the vertical axes give the probability that an EV leaves the parking lot with a fully charged battery (the success probability) and the horizontal axes give the ratio M/K. For each regime, we plot the success probability for \(K=10,20,30,50\) and \(\mu =1\). In the moderate regime, the lower bound (\(K=\infty \)) is very close to the exact success probability for large numbers of parking spaces. This is not surprising, because the time that the process spends on the boundary is negligible in this case. The fluid approximation seems to be quite good in most of the cases. Finally, note that the modified bound (3.3) does not give a lower bound for the original system. However, it seems to be the best approximation in all the cases, even in the moderate regime.

Fig. 7 \(K=10\) and \(\lambda =0.8K\)
Fig. 8 \(K=20\) and \(\lambda =0.8K\)
Fig. 9 \(K=30\) and \(\lambda =0.8K\)
Fig. 10 \(K=50\) and \(\lambda =0.8K\)
Fig. 11 \(K=10\) and \(\lambda =K\)
Fig. 12 \(K=20\) and \(\lambda =K\)
Fig. 13 \(K=30\) and \(\lambda =K\)
Fig. 14 \(K=50\) and \(\lambda =K\)
Fig. 15 \(K=10\) and \(\lambda =1.2K\)
Fig. 16 \(K=20\) and \(\lambda =1.2K\)
Fig. 17 \(K=30\) and \(\lambda =1.2K\)
Fig. 18 \(K=50\) and \(\lambda =1.2K\)

6 Proofs

6.1 Proofs for Sect. 3

Proof of Proposition 3.1

First, we show the upper bound. Recall that the probability (in stationarity) that an EV leaves with a full battery is given by \(P_s=\frac{{\mathbb {E}}\left[ C^K_M(\infty )\right] }{{\mathbb {E}}\left[ Q^K_M(\infty )\right] }\). As the parking times do not depend on M, we have that \({\mathbb {E}}\left[ Q^K_M(\infty )\right] ={\mathbb {E}}\left[ Q^K_K(\infty )\right] \), and hence the upper bound follows from \({\mathbb {E}}\left[ C^K_M(\infty )\right] \le {\mathbb {E}}\left[ C^K_K(\infty )\right] \), which is equivalent to \({\mathbb {E}}\left[ Z^K_M(\infty )\right] \ge {\mathbb {E}}\left[ Z^K_K(\infty )\right] \) (as the total number of EVs is the same in both systems). We now proceed to show the last inequality using coupling arguments. Fix a sample path \(\omega \in \varOmega \). Assume for simplicity that \(Z^K_M(0) = Z^{K}_K(0)\le M\), and take identical arrival, charging, and parking times for both systems. Since the invariant distributions do not depend on the initial conditions, this assumption on the initial conditions is without loss of generality. Further, define \(T:=\inf \{t\ge 0:Z^{K}_M(t)=M \}\). Note that the arrival process is the same for both systems, as the total number of EVs is the same in both. Hence, \(Z^K_M(t)=Z^{K}_K(t)\) for \(t\le T\), since the charging rates coincide up to time T. Moreover, using the fact that \(Z^K_M(t)\le K\) and \(M\le K\), the following inequality holds for any \(t>T\):

$$\begin{aligned} \frac{Z^K_M(t){{\,\mathrm{\wedge }\,}}M}{Z^K_M(t)} \le \frac{Z^K_M(t){{\,\mathrm{\wedge }\,}}K}{Z^K_M(t)}=1= \frac{Z^K_K(t){{\,\mathrm{\wedge }\,}}K}{Z^K_K(t)}. \end{aligned}$$

That is, the charging rate in the system (KM) is always bounded above by the charging rate in the system (KK) (the latter is constant and equal to one). Hence, any customer that has departed from the uncharged pool in the system (KM) has already departed in the system (KK). In other words, \(Z^{K}_M(t)\ge Z^{K}_K(t) \) for any \(t\ge 0\), and removing the conditioning on the sample path \(\omega \), we derive \(Z^{K}_M(t)\ge _{st} Z^{K}_K(t) \). By the existence of the stationary distribution, we obtain \(Z^{K}_M(\infty )\ge _{st} Z^{K}_K(\infty ) \), and hence \({\mathbb {E}}\left[ Z^K_M(\infty )\right] \ge {\mathbb {E}}\left[ Z^K_K(\infty )\right] \), which proves the upper bound in (3.2).

We move now to the proof of the lower bound in (3.2). First, we show that \(Z^{\infty }_M(\infty ) \ge _{st} Z^{K}_M(\infty )\) by using coupling arguments. Fix a sample path \(\omega \in \varOmega \). Assume that \(Z^{\infty }_M(0) = Z^{K}_M(0)\), and take identical arrival, charging, and parking times for both systems. Define \(T^*=\inf \{t>0:Q^{K}_M(t)=K \}\). It follows that \(Z^{\infty }_M(t) = Z^{K}_M(t)\) for \(t\le T^*\). After time \(T^*\), the arrivals blocked in the loss queue will enter the queue with infinitely many parking spaces. That is, \(Z^{\infty }_M(t)\ge Z^{K}_M(t) \) for all \(t\ge 0\). Removing now the conditioning on the sample path \(\omega \), we derive \(Z^{\infty }_M(t)\ge _{st} Z^{K}_M(t) \) for all \(t\ge 0\), and by the existence of the stationary distribution, we have that \(Z^{\infty }_M(\infty )\ge _{st} Z^{K}_M(\infty )\). Hence, we have the following inequality for the sojourn time of an uncharged EV in the system (in stationarity): \(S^{\infty }_M(B)\ge _{st} S^{K}_M(B)\). That is, \( {\mathbb {E}}\left[ S^{K}_M(B)\right] \le {\mathbb {E}}\left[ S^{\infty }_M(B)\right] \). By Little’s law, we obtain that \({\mathbb {E}}\left[ S^{K}_M(B)\right] =\frac{{\mathbb {E}}\left[ Z^{K}_M(\infty )\right] }{\lambda (1-B(\lambda /\nu ,K))} =\frac{1}{\nu }\frac{{\mathbb {E}}\left[ Z^{K}_M(\infty )\right] }{{\mathbb {E}}\left[ Q^{K}_M(\infty )\right] }\) and \({\mathbb {E}}\left[ S^{\infty }_M(B)\right] =\frac{{\mathbb {E}}\left[ Z^{\infty }_M(\infty )\right] }{\lambda } =\frac{1}{\nu }\frac{{\mathbb {E}}\left[ Z^{\infty }_M(\infty )\right] }{{\mathbb {E}}\left[ Q^{\infty }_M(\infty )\right] }\). Hence, the lower bound in (3.2) follows.

It remains to show (3.3). Let \((Q_{\lambda }(\cdot ),Z_{\lambda }(\cdot ))\) denote the total number of EVs and the number of uncharged EVs if the arrival rate is \(\lambda \). First, using coupling arguments, we prove that if \(\lambda _1\le \lambda _2\), then \(Q_{\lambda _1}(t) \le _{st} Q_{\lambda _2}(t)\) and \(Z_{\lambda _1}(t) \le _{st} Z_{\lambda _2}(t)\) for any \(t\ge 0\). Assume the following coupling: if an arrival occurs in the system with arrival rate \(\lambda _1\), it also occurs in the system with arrival rate \(\lambda _2\). Hence, as \(\lambda _1\le \lambda _2\), there are more arrivals in the second system. Further, we assume that all other parameters, i.e., \(\mu \), \(\nu \), M, K, are equal in both systems. Assume that both systems start empty and define \(T^{**}=\inf \{t>0:Q_{\lambda _2}(t)=K \}\). As the second system sees more arrivals, we have that \(Q_{\lambda _1}(t) \le Q_{\lambda _2}(t)\) and \(Z_{\lambda _1}(t) \le Z_{\lambda _2}(t)\), for \(t\le T^{**}\). By the Markovian assumptions, the residual charging and parking times are exponential with rates \(\mu \) and \(\nu \). That is, at any new event after time \(T^{**}\), we can resample the charging and parking times, and hence the probability of a departure in the system with arrival rate \(\lambda _1\) is greater than or equal to the probability of a departure in the system with arrival rate \(\lambda _2\). In other words, for \(t\ge 0\) and for \(x>0\), \({\mathbb {P}}\left( Q_{\lambda _2}(t) \le x\right) \le {\mathbb {P}}\left( Q_{\lambda _1}(t) \le x\right) \) and \({\mathbb {P}}\left( Z_{\lambda _2}(t) \le x\right) \le {\mathbb {P}}\left( Z_{\lambda _1}(t) \le x\right) \). The last relation is equivalent to \(Q_{\lambda _1}(t) \le _{st} Q_{\lambda _2}(t)\) and \(Z_{\lambda _1}(t) \le _{st} Z_{\lambda _2}(t)\), for \(t\ge 0\). 
In the sequel, we see that \(Z_{f}(\cdot )\) can arise as the limit of \(Z_{\lambda }(\cdot )\) as \(\lambda \rightarrow \infty \), assuming that \(Q_{\lambda }(0)\overset{d}{\rightarrow }K\) and \(Z_{\lambda }(0)\overset{d}{\rightarrow }Z_{f}(0)\). To see this, first observe that \(Q_{\lambda }(\cdot )\overset{d}{\rightarrow }K\) as \(\lambda \rightarrow \infty \). Now, combining (2.1) and (2.3), we have that

$$\begin{aligned} Z_{\lambda }(t)&=Z_{\lambda }(0)-Q_{\lambda }(0)+Q_{\lambda }(t) + N_{\nu ,2}\left( \int _{0}^{t} \left( Q_{\lambda }(s)-Z_{\lambda }(s)\right) \mathrm{d}s\right) \\&\quad -N_{\mu }\left( \int _{0}^{t}Z_{\lambda }(s){{\,\mathrm{\wedge }\,}}M \mathrm{d}s\right) . \end{aligned}$$

Taking \(\lambda \rightarrow \infty \) and using the continuous mapping theorem, we have that \(Z_{\lambda }(\cdot )\overset{d}{\rightarrow }Z_{\infty }(\cdot )\), where

$$\begin{aligned} Z_{\infty }(t)&=Z_{f}(0)+ N_{\nu ,2}\left( \int _{0}^{t} \left( K-Z_{\infty }(s)\right) \mathrm{d}s\right) -N_{\mu }\left( \int _{0}^{t}Z_{\infty }(s){{\,\mathrm{\wedge }\,}}M \mathrm{d}s\right) \\&\overset{d}{=}Z_{f}(t), \end{aligned}$$

and where the last equality follows by (4.11). Furthermore, \(Z_{\lambda }(\cdot )\) is stochastically nondecreasing in \(\lambda \). That is, \(Z_{\lambda }(t)\le _{st} Z_{f}(t)\) for any \(t\ge 0\) and, by the existence of the stationary distributions, we obtain \(Z_{\lambda }(\infty )\le _{st} Z_{f}(\infty )\). By the last inequality, it follows that \({\mathbb {E}}\left[ Z_{\lambda }(\infty )\right] \le {\mathbb {E}}\left[ Z_{f}(\infty )\right] \), and hence (3.3). \(\square \)

Proof of Proposition 3.2

Note that the distribution \(p_Q(q)\) corresponds to the stationary distribution of a one-dimensional Erlang loss system. Furthermore, by [29, Sect. 1.3], we know that

$$\begin{aligned} p_Q(q)=\frac{1}{q!}\left( \frac{\lambda }{\nu }\right) ^qp_Q(0), \text{ where } p_Q(0)= \left( \sum _{i=0}^{K}\frac{1}{i!} \left( \frac{\lambda }{\nu }\right) ^i\right) ^{-1}. \end{aligned}$$

Thus, the probability of an empty system is \(p_e(0,0)=p_Q(0)\).

As the solution of the balance equations of a Markov process is unique, it suffices to show that \(p_e(q,z)\) for \(z\le q\) satisfies the flow balance equation (3.1); this completes the proof of the proposition. First, we note the relations between \(p_e(q+a,z+b)\) and \(p_e(q,z)\) for \(a,b \in \{-1,0,1\}\). By (3.5), we obtain that

$$\begin{aligned} \begin{aligned} p_Q(q)=&\frac{1}{q(q-1)!}\frac{\lambda }{\nu }\left( \frac{\lambda }{\nu }\right) ^{q-1}p_Q(0) =\frac{1}{q}\frac{\lambda }{\nu }p_Q(q-1). \end{aligned} \end{aligned}$$
(6.1)

Now, applying the previous relation in (3.4), we have that

$$\begin{aligned} \begin{aligned} p_e(q-1,z-1)&= q \frac{\nu }{\lambda }p_Q(q) \frac{(q-1)!}{(z-1)!(q-z)!} \left( \frac{\mu }{\nu +\mu }\right) ^{q-z} \left( \frac{\nu }{\nu +\mu }\right) ^{z-1} \\&= z\frac{\nu }{\lambda }\frac{\nu +\mu }{\nu }p_Q(q) \frac{q!}{z!(q-z)!} \left( \frac{\mu }{\nu +\mu }\right) ^{q-z} \left( \frac{\nu }{\nu +\mu }\right) ^{z}\\&=z\frac{\nu +\mu }{\lambda }p_e(q,z). \end{aligned} \end{aligned}$$

Working analogously, we derive the following relations:

$$\begin{aligned} p_e(q-1,z-1)&= z\frac{\nu +\mu }{\lambda }p_e(q,z), \end{aligned}$$
(6.2)
$$\begin{aligned} p_e(q+1,z+1)&= \frac{1}{z+1} \frac{\lambda }{\nu +\mu }p_e(q,z), \end{aligned}$$
(6.3)
$$\begin{aligned} p_e(q,z+1)&= \frac{q-z}{z+1} \frac{\nu }{\mu }p_e(q,z), \end{aligned}$$
(6.4)
$$\begin{aligned} p_e(q+1,z)&= \frac{1}{q-z+1} \frac{\lambda }{\nu }\frac{\mu }{\nu +\mu }p_e(q,z). \end{aligned}$$
(6.5)

Using the above equations and recalling that \(L(z)=z\) when \(M=K\), the right-hand side of (3.1) for \(0<z,q<K\) and \(z\ne q\) can be written as follows:

$$\begin{aligned} \begin{aligned}&\left( z(\nu +\mu )+ \frac{\nu (z+1)}{z+1} \frac{\lambda }{\nu +\mu }+ \nu (z+1)\frac{q-z}{z+1}+ \frac{(q-z+1)}{q-z+1}\frac{\lambda \mu }{\nu +\mu }\right) p_e(q,z)\\&\quad = \left( z(\nu +\mu ) +\frac{\lambda \nu }{\nu +\mu } +(q-z)\nu + \frac{\lambda \mu }{\nu +\mu }\right) p_e(q,z) =\left( q\nu +\lambda +z \mu \right) p_e(q,z). \end{aligned} \end{aligned}$$

That is, \(p_e(q,z)\) satisfies (3.1) for \(0<z,q<K\) and \(z\ne q\). To show that \(p_e(q,z)\) satisfies (3.1) for \(0<q<K\) and \(z=0\), we apply (3.4) and (6.1) in the right-hand side of (3.1). This leads to

$$\begin{aligned} \begin{aligned}&(q+1)\nu p_Q(q+1) \left( \frac{\mu }{\nu +\mu }\right) ^{q+1}+ \nu p_Q(q+1) (q+1)\left( \frac{\mu }{\nu +\mu }\right) ^{q}\frac{\nu }{\nu +\mu }\\&\quad \quad +\mu p_Q(q) q \left( \frac{\mu }{\nu +\mu }\right) ^{q-1}\frac{\nu }{\nu +\mu }\\&\quad =\Bigg ( \lambda \left( \frac{\mu }{\nu +\mu }\right) ^{q+1}+ \lambda \left( \frac{\mu }{\nu +\mu }\right) ^{q}\frac{\nu }{\nu +\mu }+ \mu q \left( \frac{\mu }{\nu +\mu }\right) ^{q-1}\frac{\nu }{\nu +\mu }\Bigg ) p_Q(q)\\&\quad =\left( \lambda + \mu q \frac{\nu +\mu }{\mu }\frac{\nu }{\nu +\mu }\right) \left( \frac{\mu }{\nu +\mu }\right) ^q p_Q(q) = (\lambda + q\nu )p_e(q,0). \end{aligned} \end{aligned}$$

In the same way, we show that the right-hand side of (3.1) for \(0<z<K\) and \(q=K\) becomes

$$\begin{aligned}&\bigg (\lambda K \frac{\nu }{\lambda } \frac{(K-1)!}{(z-1)!(K-z)!} \Big (\frac{\mu }{\nu +\mu }\Big )^{K-z}\Big (\frac{\nu }{\nu +\mu }\Big )^{z-1}\\&\quad +\mu (z+1)\frac{K!}{(z+1)!(K-z-1)!}\Big (\frac{\mu }{\nu +\mu }\Big )^{K-z-1} \Big (\frac{\nu }{\nu +\mu }\Big )^{z+1} \bigg ) p_Q(K). \end{aligned}$$

The last quantity is equal to

$$\begin{aligned}&\left( z \nu \frac{\nu +\mu }{\nu } + (K-z)\mu \frac{\nu }{\nu +\mu }\frac{\nu +\mu }{\mu }\right) \frac{K!}{z!(K-z)!} \Big (\frac{\mu }{\nu +\mu }\Big )^{K-z}\Big (\frac{\nu }{\nu +\mu }\Big )^{z} p_Q(K)\\&\quad =(K \nu + z \mu ) p_e(K,z). \end{aligned}$$

Using again relations (6.2), (6.3), and (6.5), the right-hand side of (3.1) is written, for \(q<K\) and \(q=z\), as follows:

$$\begin{aligned} \left( \lambda q \frac{\nu +\mu }{\lambda }+ \frac{\nu \lambda }{\nu +\mu } + \frac{\lambda \mu }{\nu +\mu }\right) p_e(q,q)&=\left( q (\nu +\mu )+ \frac{\lambda \nu }{\nu +\mu }+ \frac{\lambda \mu }{\nu +\mu }\right) p_e(q,q)\\&=(q(\nu +\mu )+\lambda ) p_e(q,q). \end{aligned}$$

Using again relations (6.2)–(6.5), it follows immediately that \(p_e(q,z)\) satisfies Eq. (3.1) also for the remaining cases, i.e., \((q,z)=(0,0)\), \((q,z)=(K,K)\), and finally \((q,z)=(K,0)\). \(\square \)
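The product form established in the proof above (for the case \(M=K\)) can be checked numerically: \(p_e(q,z)=p_Q(q)\left( {\begin{array}{c}q\\ z\end{array}}\right) \big (\frac{\mu }{\nu +\mu }\big )^{q-z}\big (\frac{\nu }{\nu +\mu }\big )^{z}\), i.e., an Erlang loss marginal for Q combined with a binomial thinning over the uncharged EVs. A Python sketch with illustrative parameter values:

```python
from math import comb, factorial

def p_e_matrix(lam, nu, mu, K):
    """Stationary distribution p_e(q, z) of Proposition 3.2 (case M = K):
    Erlang-loss marginal for Q, binomial split of each EV into charged/uncharged."""
    a = lam / nu
    weights = [a**q / factorial(q) for q in range(K + 1)]
    p_Q0 = 1.0 / sum(weights)                    # Erlang loss normalization
    p = [[0.0] * (K + 1) for _ in range(K + 1)]  # p[q][z], zero for z > q
    for q in range(K + 1):
        pq = weights[q] * p_Q0                   # marginal p_Q(q)
        for z in range(q + 1):
            p[q][z] = (pq * comb(q, z)
                       * (mu / (nu + mu)) ** (q - z)
                       * (nu / (nu + mu)) ** z)
    return p
```

The rows sum to the Erlang loss marginal \(p_Q(q)\), and the whole matrix sums to one.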

Proof of Proposition 3.3

We show that \(Z^{\infty }_M(\cdot )\) behaves as a modified Erlang-A queue. Although we adapt the proof in [49, Sect. 6.6.1], we briefly describe it here for completeness. We first write the flow balance equations for the Markov process \(Z^{\infty }_M(\cdot )\) and then solve them. The balance equations are given by

$$\begin{aligned} {\left\{ \begin{array}{ll} (\lambda +z(\nu +\mu ))p_Z(z)=\lambda p_Z(z-1)+(z+1)(\nu +\mu ) p_Z(z+1), \\ (\lambda +M\mu +z\nu )p_Z(z)=\lambda p_Z(z-1)+(M \mu +(z+1)\nu ) p_Z(z+1), \end{array}\right. } \end{aligned}$$
(6.6)

for \(0<z<M\) and \(z\ge M\), respectively. For \(z=0\), we have that

$$\begin{aligned} \lambda p_Z(0)=(\nu +\mu ) p_Z(1). \end{aligned}$$

Using the last equation and (6.6), we derive inductively the following relations:

$$\begin{aligned} \lambda p_Z(z-1)=z(\nu +\mu )p_Z(z),\quad \text {if}\ z<M, \end{aligned}$$

and

$$\begin{aligned} (M\mu +z\nu )p_Z(z)=\lambda p_Z(z-1) ,\quad \text {if}\ z \ge M. \end{aligned}$$

The balance equations now can be simplified as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} \lambda p_Z(z)=(z+1)(\nu +\mu ) p_Z(z+1),&{} \text{ if } z<M,\\ \lambda p_Z(z)=(M \mu +(z+1)\nu ) p_Z(z+1),&{} \text{ if } z\ge M. \end{array}\right. } \end{aligned}$$
(6.7)

Observe that we can directly solve the system (6.7). For \(z<M\), it is easy to see that

$$\begin{aligned} p_Z(z) = \frac{1}{z!}\left( \frac{\lambda }{\nu +\mu }\right) ^z p_Z(0). \end{aligned}$$
(6.8)

We show that, for \(z=M\), the solution of (6.7) is also given by the last formula. By the first equation of (6.7) for \(z=M-1\) and (6.8), we obtain the following equation:

$$\begin{aligned} p_Z(M) = \frac{1}{M}\frac{\lambda }{\nu +\mu } p_Z(M-1)= \frac{1}{M!}\left( \frac{\lambda }{\nu +\mu }\right) ^M p_Z(0). \end{aligned}$$

It remains to find the solution in the case \(z>M\). We do so by induction. Note that, by the second equation of (6.7) for \(z=M\), we have that

$$\begin{aligned} p_Z(M+1) = \frac{\lambda p_Z(M)}{M\mu +(M+1)\nu }= \frac{\lambda }{M\mu +(M+1)\nu } \frac{1}{M!}\left( \frac{\lambda }{\nu +\mu }\right) ^M p_Z(0). \end{aligned}$$

Finally, it is easy to verify that the solution of (6.7) for \(z>M\) is given by

$$\begin{aligned} p_Z(z) = \frac{1}{M!}\left( \frac{\lambda }{\nu +\mu }\right) ^M \prod _{k=M+1}^{z} \frac{\lambda }{M\mu +k\nu } p_Z(0). \end{aligned}$$

The probability of an empty system (there are no uncharged vehicles in the parking lot) can be found by the normalization condition, and it is given by (3.3). Finally, we show that the infinite summation in (3.3) converges. To this end, note that \(L(z)\mu +z\nu \ge z \min \{\nu ,\mu \}\). Applying the last observation in (3.3), we have that

$$\begin{aligned} \begin{aligned}&\sum _{j=0}^{M} \frac{1}{j!}\Big (\frac{\lambda }{\nu +\mu }\Big )^j + \sum _{j=M+1}^{\infty } \frac{1}{M!} \Big (\frac{\lambda }{\nu +\mu }\Big )^M\prod _{k=M+1}^{j}\frac{\lambda }{M\mu +k\nu }\\&\quad \le \sum _{j=0}^{\infty } \frac{1}{j!} \Big (\frac{\lambda }{\min \{\nu ,\mu \}}\Big )^j = \exp \Big \{\frac{\lambda }{\min \{\nu ,\mu \}} \Big \}. \end{aligned} \end{aligned}$$

\(\square \)
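The stationary distribution of Proposition 3.3 translates directly into a numerical routine: the closed form (6.8) below M and the product form above M. The truncation level `z_max` in the following Python sketch is a numerical choice (the true state space is infinite, but the proof shows the normalizing sum converges):

```python
from math import factorial

def p_Z(lam, nu, mu, M, z_max=200):
    """Stationary distribution of Z^infty_M (Proposition 3.3), truncated at z_max.

    Returns [p_Z(0), ..., p_Z(z_max)], normalized over the truncated range:
    p_Z(z) = rho^z / z! * p_Z(0) for z <= M, with rho = lam/(nu + mu), and
    p_Z(z) = p_Z(z-1) * lam / (M*mu + z*nu) for z > M.
    """
    rho = lam / (nu + mu)
    w = []
    for z in range(z_max + 1):
        if z <= M:
            w.append(rho**z / factorial(z))
        else:
            w.append(w[-1] * lam / (M * mu + z * nu))
    total = sum(w)
    return [x / total for x in w]
```

The recursion satisfies the simplified balance equations (6.7) by construction.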

Proof of Proposition 3.4

First, we write the balance equations for the one-dimensional birth–death process \(\{Z_f(t), t \ge 0\}\). These are given by

$$\begin{aligned} {\left\{ \begin{array}{ll} \left( \nu (K-z)+\mu z \right) \pi _{f}(z) = ( \nu (K-z+1)\pi _{f}(z-1))+\mu (z+1) \pi _{f}(z+1),\\ \left( \nu (K-z)+\mu M \right) \pi _{f}(z)= ( \nu (K-z+1)\pi _{f}(z-1))+\mu M \pi _{f}(z+1), \end{array}\right. } \end{aligned}$$

for \(0<z<M\) and \(M\le z<K\), respectively. On the boundary, the following equations hold:

$$\begin{aligned} \nu K \pi _{f}(0) = \mu \pi _{f}(1) \text{ and } \mu M \pi _{f}(K)=\nu \pi _{f}(K-1). \end{aligned}$$

Note that the balance equations can be simplified to

$$\begin{aligned} {\left\{ \begin{array}{ll} \nu (K-z) \pi _{f}(z) = \mu (z+1) \pi _{f}(z+1), &{} \text{ if } 0\le z<M,\\ \mu M \pi _{f}(z)=\nu (K-z+1)\pi _{f}(z-1), &{} \text{ if } M\le z\le K. \end{array}\right. } \end{aligned}$$
(6.9)

Applying \(z=M-1\) in the first equation of (6.9), we obtain

$$\begin{aligned} \pi _{f}(M-1)=\frac{\mu M}{\nu (K-M+1)}\pi _{f}(M), \end{aligned}$$

and recursively we have that

$$\begin{aligned} \pi _{f}(M-i)=\left( \frac{\mu }{\nu }\right) ^i \frac{\prod _{j=0}^{i-1} (M-j)}{\prod _{j=1}^{i} (K-M+j)} \pi _{f}(M), \text{ if } 0<i \le M. \end{aligned}$$

Changing the variable \(z=M-i \) in the last equation yields

$$\begin{aligned} \pi _{f}(z)=\left( \frac{\mu }{\nu }\right) ^{M-z} \frac{\prod _{j=0}^{M-z-1} (M-j)}{\prod _{j=1}^{M-z} (K-M+j)} \pi _{f}(M), \text{ if } 0 \le z< M. \end{aligned}$$

Working analogously, by the second equation of (6.9) we derive

$$\begin{aligned} \pi _{f}(M+i)=\frac{1}{M^i} \left( \frac{\nu }{\mu }\right) ^i \prod _{j=0}^{i-1} (K-M-j)\pi _{f}(M) , \text{ if } 0\le i \le K-M, \end{aligned}$$

which leads to

$$\begin{aligned} \pi _{f}(z)=\frac{1}{M^{z-M}} \left( \frac{\nu }{\mu }\right) ^{z-M}\ \prod _{j=0}^{z-M-1} (K-M-j) \pi _{f}(M), \text{ if } M\le z \le K. \end{aligned}$$

Finally, \(\pi _{f}(M)\) is determined by the normalization equation \(\sum _{z=0}^{K}\pi _{f}(z)=1\). \(\square \)

6.2 Proofs for Sect. 4.1

Proof of Proposition 4.1

In this proof, we use martingale arguments. Define the filtration \({\mathcal {F}}^n_t:=\sigma \left( Z^n(0), Q^n(0), Z^n(s), Q^n(s):0\le s\le t \right) \) augmented by including all the null sets for \(t\ge 0\) and \(n\ge 1\). Applying the fluid scaling to the dynamical equation (2.3), we have that

$$\begin{aligned} Z^n(t)&=Z^n(0)+N_{\lambda } \left( n \int _{0}^{t} {\mathbb {1}}_{\{\frac{Q^n(s)}{n}<K\}} \mathrm{d}s\right) -N_{\mu }\left( n\int _{0}^{t} \frac{Z^n(s)}{n}{{\,\mathrm{\wedge }\,}}M \mathrm{d}s\right) \\&\quad -N_{\nu ,1}\left( n\int _{0}^{t} \frac{Z^n(s)}{n} \mathrm{d}s\right) . \end{aligned}$$

Defining the operator \(\bar{{\mathcal {M}}}_r^n=\frac{1}{n}\left( N_r(\cdot )-r \cdot \right) \) and following [32], we can write

$$\begin{aligned} \frac{Z^n(t)}{n}&=\frac{Z^n(0)}{n}+ \bar{{\mathcal {M}}}^n_{\lambda } \left( n t\right) -\bar{{\mathcal {M}}}^n_{\mu }\left( n\int _{0}^{t} \frac{Z^n(s)}{n}{{\,\mathrm{\wedge }\,}}M \mathrm{d}s\right) \\&\qquad -\bar{{\mathcal {M}}}^n_{\nu ,1}\left( n\int _{0}^{t} \frac{Z^n(s)}{n} \mathrm{d}s\right) -\frac{1}{n}\int _{0}^{t} {\mathbb {1}}_{\{\frac{Q^n(s)}{n}=K\}} \mathrm{d}N(\lambda n s) +\lambda t\\&\qquad -\mu \int _{0}^{t} \frac{Z^n(s)}{n}{{\,\mathrm{\wedge }\,}}M \mathrm{d}s -\nu \int _{0}^{t} \frac{Z^n(s)}{n} \mathrm{d}s. \end{aligned}$$

The term \(\frac{1}{n}\int _{0}^{t} {\mathbb {1}}_{\{\frac{Q^n(s)}{n}=K\}} \mathrm{d}N(\lambda n s)\) denotes the number of EVs that are lost due to finding the system full under the fluid scaling. By [26, Proposition 4.11], this is a weakly convergent sequence. Further, by [35, Propositions 6.17 and 6.21] and by the assumption \(\frac{Q^n(0)}{n} \overset{d}{\rightarrow }K\), we have that \(\frac{Q^n(\cdot )}{n}\overset{d}{\rightarrow }q(\cdot )\equiv K\) if \(\lambda /\nu \ge K\). On the other hand, if \(\lambda /\nu < K\), we have that \(q(t)<K \) for \(t>0\). Now, by the proof of [26, Theorem 3.6] (see [26, Eqs. (3.34) and (3.40)]), it turns out that

$$\begin{aligned} \frac{1}{n}\int _{0}^{t} {\mathbb {1}}_{\{\frac{Q^n(s)}{n}=K\}} \mathrm{d}N(\lambda n s) \overset{d}{\rightarrow }\max \{\lambda -\nu K,0\} t. \end{aligned}$$

Further, observing that \(\bar{{\mathcal {M}}}^n_{r}(\cdot )\) are zero mean martingales with respect to the filtration \({\mathcal {F}}_t^n\) for any \(n\in {\mathbb {N}}\) (cf. [32]) and taking \(n \rightarrow \infty \), we derive that \(\frac{Z^n(\cdot )}{n} \overset{d}{\rightarrow }z(\cdot )\). Moreover, the limit function is characterized by the following functional equation:

$$\begin{aligned} z(t)=z(0)+\lambda t -\max \{\lambda -\nu K,0\} t -\mu \int _{0}^{t} z(s){{\,\mathrm{\wedge }\,}}M \mathrm{d}s -\nu \int _{0}^{t}z(s) \mathrm{d}s, \end{aligned}$$

which is equivalent to (4.1). \(\square \)

Proof of Proposition 4.2

It is not hard to solve the ODE (4.1) explicitly in the two regions, namely

$$\begin{aligned} z(t)= {\left\{ \begin{array}{ll} \frac{\lambda {{\,\mathrm{\wedge }\,}}\nu K}{\nu +\mu }+ \left( z(0)-\frac{\lambda {{\,\mathrm{\wedge }\,}}\nu K}{\nu +\mu }\right) \hbox {e}^{-(\nu +\mu )t} , &{} \text{ if } z(t)\le M,\\ \frac{\lambda {{\,\mathrm{\wedge }\,}}\nu K-\mu M}{\nu }+ \left( z(0)-\frac{\lambda {{\,\mathrm{\wedge }\,}}\nu K-\mu M}{\nu }\right) \hbox {e}^{-\nu t} , &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$
(6.10)

Define \(z_1:=\frac{\lambda {{\,\mathrm{\wedge }\,}}\nu K}{\nu +\mu }\) and \(z_2:=\frac{\lambda {{\,\mathrm{\wedge }\,}}\nu K-\mu M}{\nu }\). First, note that for a given initial state \(z(0)\in [0,K]\), we have that \(z(t)\le K\) for any \(t\ge 0\). To see this, observe that if \(z(t)\le M\), then by the model assumptions \(z(t)\le M \le K\). On the other hand, if \(z(t)> M\), we have that

$$\begin{aligned} z(t)=z_2+ \left( z(0)-z_2\right) \hbox {e}^{-\nu t} \le z_2 \le K, \end{aligned}$$

if \(z(0)-z_2\le 0\) and

$$\begin{aligned} z(t)=z_2+ \left( z(0)-z_2\right) \hbox {e}^{-\nu t} \le z_2 +z(0)-z_2=z(0)\le K, \end{aligned}$$

if \(z(0)-z_2\ge 0\). So, for any \(t\ge 0\), it follows that \(z(t)\le K\) and hence Definition 4.1 is well defined.

In the sequel, we show that z(t) converges as t goes to infinity to the following point:

$$\begin{aligned} z= {\left\{ \begin{array}{ll} z_1, &{} \text{ if } z_1\le M,\\ z_2 , &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$
(6.11)

Then, we show that z is unique and, using the Markovian assumptions, we see that it is equivalent to (4.2), and hence \(z=z^*\). Also, observe that \(z^*\) is an invariant point. Indeed, if we set \(z(0)=z^*\), then by (6.10) we have that \(z(t)=z^*\) for \(t\ge 0\).

First, assume that \(z_1\le M\). If \(z(0)\le M\), then \(z(t)\le M\) for any \(t\ge 0\). To see this, note that if \(z(0)-z_1\le 0\) then we have that

$$\begin{aligned} z_1+ \left( z(0)-z_1\right) \hbox {e}^{-(\nu +\mu )t} \le z_1 \le M. \end{aligned}$$

On the other hand, if the quantity \(z(0)-z_1\) is positive, we show that there does not exist \(t^*>0\) such that \(z(t^*)>M\). Supposing that there exists \(t^*\) such that \(z(t^*)>M\), we have that

$$\begin{aligned} z_1+\left( z(0)-z_1\right) \hbox {e}^{-(\nu +\mu )t^*}>M; \end{aligned}$$

by assumption \(z(0)-z_1>0\), so the last inequality leads to

$$\begin{aligned} t^* < -\frac{1}{\nu +\mu } \ln \left( \frac{M-z_1}{z(0)-z_1} \right) . \end{aligned}$$

Now, by the assumption \(z(0)\le M\), we obtain \(\frac{M-z_1}{z(0)-z_1}\ge 1\) and hence \(t^*\le 0\), which yields a contradiction. Next, we assume that \(z(0)>M\). In this case, we show that there exists \(t^*_2\) such that \(z(t^*_2)\le M\). First, observe that if \(z_1\le M\) then \(z_2\le M\). Hence, \(z(0)-z_2>M-z_2\ge 0\). Now, we note that

$$\begin{aligned} z_2+\left( z(0)-z_2\right) \hbox {e}^{-\nu t}\le M \end{aligned}$$

leads to

$$\begin{aligned} t> -\frac{1}{\nu } \ln \left( \frac{M-z_2}{z(0)-z_2} \right) >0, \end{aligned}$$

due to \(z(0)-z_2>M-z_2\). Let \(t^*_2\) be the first time that \(z(t)\le M\) starting from a point \(z(0)>M\). Taking \(z(t^*_2)\) as the new initial point, we conclude that if \(z_1\le M\) then \(z(t)\le M\) for \(t\ge t^*_2\) and hence

$$\begin{aligned} z(t)=z_1+ \left( z(t^*_2)-z_1\right) \hbox {e}^{-(\nu +\mu )(t-t^*_2)}, \text{ for } t\ge t^*_2, \end{aligned}$$

which yields

$$\begin{aligned} |z(t)-z_1|\le \left( z(t^*_2)-z_1\right) \hbox {e}^{-(\nu +\mu )(t-t^*_2)} \le \left( z(0)-z_1\right) \hbox {e}^{-(\nu +\mu )(t-t^*_2)}, \text{ for } t\ge t^*_2. \end{aligned}$$

This concludes the proof for the case \(z_1\le M\). The case \(z_1>M\) (in which case \(z_2>M\)) follows the same logic.
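As a numerical illustration (not part of the proof), one can integrate the fluid dynamics \(z'(t)=(\lambda \wedge \nu K)-\nu z(t)-\mu (z(t)\wedge M)\), which is our reading of the ODE behind the two exponential-decay regimes in (6.10), with the stationary points \(z_1=(\lambda \wedge \nu K)/(\nu +\mu )\) and \(z_2=((\lambda \wedge \nu K)-\mu M)/\nu \) read off from the decay rates \(\nu +\mu \) and \(\nu \). Starting above M with \(z_1\le M\), the trajectory crosses M in finite time and settles at \(z_1\):

```python
# Sanity check: integrate z'(t) = a - nu*z(t) - mu*min(z(t), M), where a
# stands for (lambda ∧ nu*K); parameter values are illustrative.
a, nu, mu, M = 2.0, 1.0, 1.0, 1.5
z1 = a / (nu + mu)      # stationary point of the z <= M regime (here z1 = 1 <= M)
z2 = (a - mu * M) / nu  # stationary point of the z > M regime

def integrate(z0, T=20.0, dt=1e-3):
    """Forward-Euler integration; returns z(T) and the first time z(t) <= M."""
    z, t, crossing = z0, 0.0, None
    while t < T:
        if crossing is None and z <= M:
            crossing = t
        z += dt * (a - nu * z - mu * min(z, M))
        t += dt
    return z, crossing

z_final, t_star = integrate(z0=3.0)  # start above M, as in the second case
```

With these values the crossing time matches the closed form \(-\frac{1}{\nu }\ln \big (\frac{M-z_2}{z(0)-z_2}\big )\approx 0.92\).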

Now, we prove that the solution of (4.2) is equal to z. To this end, observe that

$$\begin{aligned} B \max \left\{ 1, \frac{ z^*}{M}\right\} \overset{d}{=}B', \end{aligned}$$

where \(B'\) is an exponential random variable with rate \(\mu \min \big \{1, \frac{M}{z^*}\big \}\), i.e., \({\mathbb {E}}\left[ B'\right] = \frac{1}{\mu }\max \big \{1, \frac{z^*}{M}\big \}\) (multiplying \(B\) by \(\max \{1, z^*/M\}\ge 1\) scales its mean up by that factor). We note that (4.2) can be written as

$$\begin{aligned} z^* = (\lambda {{\,\mathrm{\wedge }\,}}\nu K) {\mathbb {E}}\left[ \min \left\{ D , B \max \left\{ 1, \frac{ z^*}{M}\right\} \right\} \right]&=(\lambda {{\,\mathrm{\wedge }\,}}\nu K) {\mathbb {E}}\left[ \min \{D , B'\}\right] \\&=(\lambda {{\,\mathrm{\wedge }\,}}\nu K)\frac{1}{\nu +\mu \min \left\{ 1, \frac{M}{z^*}\right\} }. \end{aligned}$$

Solving the last equation yields \(z^*=z\).

To conclude the proof of Proposition 4.2, it remains to show the uniqueness of the invariant point \(z^*\). In other words, we show that (6.11) [and hence (4.2)] has a unique solution. It is not hard to see that if \(z_1< M\), then \(z_2< M\). So, \(z_2\) cannot be a solution of (6.11); that is, \(z_1\) is the unique solution of (6.11). On the other hand, if \(z_1>M\) [i.e., \(z_1\) is not a solution of (6.11)], then \(z_2>M\). That is, \(z_2\) is the unique solution of (6.11). Finally, if \(z_1=M\), then we have that \(z_2=z_1=M\). In any case, (6.11) has a unique solution. \(\square \)
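The case analysis can be checked numerically. The sketch below uses the closed forms \(z_1=a/(\nu +\mu )\) and \(z_2=(a-\mu M)/\nu \), our reading of the two stationary points (with \(a\) shorthand for \(\lambda \wedge \nu K\)), and verifies over a small parameter sweep that the point selected by (6.11) solves the fixed-point equation \(z=a/(\nu +\mu \min \{1,M/z\})\), while the other candidate does not (away from the boundary case \(z_1=M\), where the two coincide):

```python
# Uniqueness check for the fixed-point equation z = a/(nu + mu*min(1, M/z));
# parameter values are illustrative.
import itertools

def is_fixed_point(z, a, nu, mu, M, tol=1e-9):
    return z > 0 and abs(z - a / (nu + mu * min(1.0, M / z))) < tol

results_ok = True
for a, nu, mu, M in itertools.product([1.0, 3.0, 10.0], [0.5, 1.0], [0.5, 2.0], [1.0, 4.0]):
    z1, z2 = a / (nu + mu), (a - mu * M) / nu
    z = z1 if z1 <= M else z2            # the selection rule (6.11)
    results_ok &= is_fixed_point(z, a, nu, mu, M)
    if abs(z1 - M) > 1e-6:               # away from the boundary case z1 = M
        other = z2 if z1 <= M else z1
        results_ok &= not is_fixed_point(other, a, nu, mu, M)
```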

Proof of Proposition 4.3

Let \(\frac{Z^n(\infty )}{n}\) be the stationary fluid-scaled number of uncharged EVs. We know that \(0 \le Z^n(\infty )\le K^n\), which yields \(\frac{Z^n(\infty )}{n}\le K\) almost surely. In other words, the sequence of random variables \(\frac{Z^n(\infty )}{n}\) is stochastically bounded in \({\mathbb {R}}\) and hence it is tight. Now, we consider the process \(\{Z^n(t), t\ge 0\}\) starting at point \(Z^n(\infty )\); that is, \(Z^n(t) \overset{d}{=} Z^n(\infty )\) for any \(t\ge 0\). Since \(\frac{Z^n(\infty )}{n}\) is tight, every subsequence has a further convergent subsequence, say \(\frac{Z^{{\bar{n}}}(\infty )}{{\bar{n}}}\), such that \(\frac{Z^{{\bar{n}}}(\infty )}{{\bar{n}}}\overset{d}{\rightarrow }{\bar{z}}^*\), as \({\bar{n}}\rightarrow \infty \). We now have that, for any \(t\ge 0\),

$$\begin{aligned} \frac{Z^{{\bar{n}}}(t)}{{\bar{n}}}\overset{d}{=} \frac{Z^{{\bar{n}}}(\infty )}{{\bar{n}}}\overset{d}{\rightarrow }{\bar{z}}^*, \text{ as } {\bar{n}}\rightarrow \infty , \end{aligned}$$

and so \({\bar{z}}^*\) is an invariant point. By the uniqueness of the invariant point, we derive \({\bar{z}}^*=z^*\). This concludes the proof. \(\square \)

6.3 Proofs for Sect. 4.2

6.3.1 Proof of Theorem 4.5

We start the analysis by establishing a continuity result, which can be proved by using results in [32].

Proposition 6.1

Let \(t\ge 0\) and \(-\infty< \kappa < \infty \). Consider the following system:

$$\begin{aligned} \begin{aligned} x_1(t)&=b_1+g_1(t)+\int _{0}^{t} h_1(x_1(s))\mathrm{d}s-y(t), \\ x_2(t)&=b_2+g_2(t)+\int _{0}^{t} h_2(x_2(s))\mathrm{d}s-y(t), \end{aligned} \end{aligned}$$
(6.12)

where \(b_i\) are positive constants, \(h_i: {\mathbb {R}}\rightarrow {\mathbb {R}}\) satisfy \(h_i(0)=0\) and are Lipschitz continuous functions for \(i=1,2\), and \(x_2(t)\le \kappa \). In addition, \(y(\cdot )\) is a nondecreasing nonnegative function in \(D[0,\infty )\) such that (6.12) holds and \(\int _{0}^{\infty } {\mathbb {1}}_{\{ x_2(t)<\kappa \}}\mathrm{d}y(t)=0\). Given \(b_i \in {\mathbb {R}}\) and \(g_i(\cdot ) \in D[0,\infty )\), we have that the system (6.12) has a unique solution \((x_1(\cdot ),x_2(\cdot ),y(\cdot ))\). Moreover, the functions \((x_1(\cdot ),x_2(\cdot ),y(\cdot ))\) are continuous in \(D[0,\infty )^3\) if \(D[0,\infty )\) is endowed with the uniform topology over bounded intervals or the \(J_1\) topology.

Proof of Proposition 6.1

First, observe that the function \(y(\cdot )\) is independent of the function \(x_1(\cdot )\). We know by [32, Theorem 7.3] that the second equation of (6.12) has a unique solution \((x_2(\cdot ),y(\cdot ))\) and that \(x_2(\cdot )\) and \(y(\cdot )\) are continuous in \(D[0,\infty )\) (endowed with the uniform topology over bounded intervals or the \(J_1\) topology). Furthermore, we have that \(y(\cdot )\), \(g_2(\cdot ) \in D[0,\infty )\) which implies \(y(\cdot )+g_2(\cdot ) \in D[0,\infty )\). The last observation together with [32, Theorem 4.1] implies that the first equation of (6.12) has a unique continuous solution. That is, system (6.12) has a unique solution \((x_1(\cdot ),x_2(\cdot ),y(\cdot ))\) and each function is continuous.\(\square \)

In order to continue our analysis, we need to define appropriate filtrations. Take the following filtrations, for \(n\ge 1\):

$$\begin{aligned} {\mathcal {F}}_{t,1}^{n}&=\sigma \Bigg (Z^n(0), N_{\lambda ^n} (s), N_{\mu }\left( \int _{0}^{s} L^n(Z^n(z)) \mathrm{d}z\right) , N_{\nu ,1}\left( \int _{0}^{s} Z^n(z) \mathrm{d}z\right) \\&\qquad \qquad : 0 \le s \le t \Bigg ) \end{aligned}$$

and

$$\begin{aligned} {\mathcal {F}}_{t,2}^{n}=\sigma \left( Q^n(0), N_{\lambda ^n} (s), N_{\nu }\left( \int _{0}^{s} Q^n(z) \mathrm{d}z\right) : 0 \le s \le t \right) . \end{aligned}$$

In the sequel, we work with the filtrations

$$\begin{aligned} {\mathcal {F}}_{t}^{n}=\sigma \left( {\mathcal {F}}_{t,1}^{n}, {\mathcal {F}}_{t,2}^{n} \right) , \end{aligned}$$

augmented by including all the null sets for \(t\ge 0\) and \(n\ge 1\).

Now, notice that the system dynamics (2.1) and (2.3) can be rewritten in the following form:

$$\begin{aligned} \begin{aligned} Q^n(t)=Q^n(0)+N_{\lambda ^n} (t) -N_{\nu }\left( \int _{0}^{t}Q^n(s) \mathrm{d}s\right) -Y^n(t), \end{aligned} \end{aligned}$$

and

$$\begin{aligned} Z^n(t)&=Z^n(0)+N_{\lambda ^n} (t) -N_{\mu }\left( \int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s\right) -N_{\nu ,1}\left( \int _{0}^{t} Z^n(s) \mathrm{d}s\right) \\&\quad -Y^n(t), \end{aligned}$$

where \(Y^n(t)=\int _{0}^{t} {\mathbb {1}}_{\{Q^n(s)=K^n\}}\mathrm{d}N_{\lambda ^n} (s)\). The process \(Y^n(t)\) counts the customers lost up to time \(t\) in the \(n^{\text {th}}\) system because all parking spaces (chargers) are occupied. Defining the operator \({\mathcal {M}}_{r}(\cdot ):=N_{r} (\cdot )-(r\cdot )\), where “r” indicates the rate of the Poisson process \(N_{r} (\cdot )\), the system dynamics take the following form:

$$\begin{aligned} \begin{aligned} Q^n(t)&=Q^n(0) +{\mathcal {M}}_{\lambda ^n}(t) -{\mathcal {M}}_{\nu }\left( \int _{0}^{t}Q^n(s) \mathrm{d}s\right) +\lambda ^n t -\nu \int _{0}^{t}Q^n(s) \mathrm{d}s \\&\quad -Y^n(t), \end{aligned} \end{aligned}$$
(6.13)

and

$$\begin{aligned} \begin{aligned} Z^n(t)&=Z^n(0) +{\mathcal {M}}_{\lambda ^n}(t) -{\mathcal {M}}_{\mu }\left( \int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s\right) -{\mathcal {M}}_{\nu ,1}\left( \int _{0}^{t} Z^n(s) \mathrm{d}s\right) \\&\quad +\lambda ^n t -\mu \int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s -\nu \int _{0}^{t} Z^n(s) \mathrm{d}s -Y^n(t). \end{aligned} \end{aligned}$$
(6.14)

In order to derive appropriate equations (in the prelimit) for the diffusion-scaled processes, subtract and add the terms \(n \frac{\nu +\mu }{\nu }\), \(n \frac{\nu +\mu }{\nu } t\) in (6.13) and the terms n, \(n \mu t\), \(n \nu t\) in (6.14), and then divide both by \(\sqrt{n}\). Recalling that \(L^n(Z^n(t))= Z^n(t){{\,\mathrm{\wedge }\,}}M^n\), \(\lambda ^n=n(\nu +\mu )\), \(M^n=\frac{\lambda ^n}{\nu +\mu }+\beta \sqrt{n}\) and observing that \(M^n-n=\beta \sqrt{n}\) and \(\lambda ^n t- n(\nu +\mu )t=0 \), we obtain the following equations for the diffusion-scaled processes \({\hat{Q}}^n(\cdot )\) and \({\hat{Z}}^n(\cdot )\):

$$\begin{aligned} {\hat{Q}}^n(t)={\hat{Q}}^n(0) + \mathcal {{\hat{M}}}_{\lambda ^n}^n(t) - \mathcal {{\hat{M}}}_{\nu }^n \left( \int _{0}^{t}Q^n(s) \mathrm{d}s\right) -\nu \int _{0}^{t}{\hat{Q}}^n(s) \mathrm{d}s -{\hat{Y}}^n(t),\nonumber \\ \end{aligned}$$
(6.15)

and

$$\begin{aligned} {\hat{Z}}^n(t)=&{\hat{Z}}^n(0) + \mathcal {{\hat{M}}}_{\lambda ^n}^n (t) -\mathcal {{\hat{M}}}_{\mu }^n \left( \int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s\right) - \mathcal {{\hat{M}}}_{\nu ,1}^n\left( \int _{0}^{t} Z^n(s) \mathrm{d}s\right) \nonumber \\&-\mu \int _{0}^{t} ({\hat{Z}}^n(s){{\,\mathrm{\wedge }\,}}\beta ) \mathrm{d}s -\nu \int _{0}^{t} {\hat{Z}}^n(s) \mathrm{d}s -{\hat{Y}}^n(t), \end{aligned}$$
(6.16)

where \(\mathcal {{\hat{M}}}_{r}^n(\cdot ):= \frac{ {\mathcal {M}}_{r}(\cdot )}{\sqrt{n}}\) and the scaling for the process \({\hat{Y}}^n(\cdot )\) is analogous. The following proposition shows that the processes \(\mathcal {{\hat{M}}}_{r}^n(\cdot )\) are martingales.

Proposition 6.2

Under the assumptions \({\mathbb {E}}\left[ Z^n(0)\right] < \infty \) and \({\mathbb {E}}\left[ Q^n(0)\right] < \infty \), we have that the processes \(\mathcal {{\hat{M}}}_{\lambda ^n}^n (\cdot )\), \(\mathcal {{\hat{M}}}_{\mu }^n(\cdot )\), \(\mathcal {{\hat{M}}}_{\nu ,1}^n(\cdot )\), and \(\mathcal {{\hat{M}}}_{\nu }^n(\cdot )\) are square-integrable martingales with respect to the filtration \({\mathcal {F}}^{n}:=\{{\mathcal {F}}_{t}^{n}, t \ge 0\}\). Their associated predictable quadratic variations, denoted by \(\langle \cdot \rangle \), are

$$\begin{aligned} \langle \mathcal {{\hat{M}}}_{\lambda ^n}^n(t)\rangle&= \frac{\lambda ^n}{n} t=(\nu +\mu )t, \end{aligned}$$
(6.17)
$$\begin{aligned} \left\langle \mathcal {{\hat{M}}}_{\mu }^n \left( \int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s\right) \right\rangle&= \mu \frac{\int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s}{n}, \end{aligned}$$
(6.18)
$$\begin{aligned} \left\langle \mathcal {{\hat{M}}}_{\nu ,1}^n\left( \int _{0}^{t} Z^n(s) \mathrm{d}s\right) \right\rangle&= \nu \frac{\int _{0}^{t} Z^n(s) \mathrm{d}s}{n}, \end{aligned}$$
(6.19)
$$\begin{aligned} \left\langle \mathcal {{\hat{M}}}_{\nu }^n\left( \int _{0}^{t} Q^n(s) \mathrm{d}s\right) \right\rangle&= \nu \frac{\int _{0}^{t} Q^n(s) \mathrm{d}s}{n}, \end{aligned}$$
(6.20)

and \({\mathbb {E}}\left[ \langle \mathcal {{\hat{M}}}_{r}^n(t)\rangle \right] <\infty \), for \(t\ge 0\) and \(n \ge 1\).

Proof

Fix \(n\ge 1\). The result for the process \(\mathcal {{\hat{M}}}_{\lambda ^n}^n (\cdot )\) follows immediately by applying [32, Lemma 3.1].

Now, note that by the system dynamics (2.1), (2.3), the fact that \(Q^n(t)=Z^n(t)+C^n(t)\) and (2.2), we have that

$$\begin{aligned} \mathcal {{\hat{M}}}_{\nu }^n\left( \int _{0}^{t} Q^n(s) \mathrm{d}s\right) = \mathcal {{\hat{M}}}_{\nu ,1}^n\left( \int _{0}^{t} Z^n(s) \mathrm{d}s\right) +\mathcal {{\hat{M}}}_{\nu ,2}^n\left( \int _{0}^{t} C^n(s) \mathrm{d}s\right) , \end{aligned}$$
(6.21)

where

$$\begin{aligned} \mathcal {{\hat{M}}}_{\nu ,2}^n\left( \int _{0}^{t} C^n(s) \mathrm{d}s\right) = \frac{ N_{\nu ,2}^n\left( \int _{0}^{t} C^n(s) \mathrm{d}s\right) -\nu \int _{0}^{t} C^n(s) \mathrm{d}s }{\sqrt{n}}. \end{aligned}$$

Since \(N_{\nu ,1}(\cdot )\) and \(N_{\nu ,2}(\cdot )\) are independent Poisson processes, [32, Lemma 3.1] implies that \(\mathcal {{\hat{M}}}_{\nu ,i}^n(\cdot )\) are \({\mathcal {F}}^{n}\)-martingales for \(i=1,2\). Observe that \(\mathcal {{\hat{M}}}_{\nu ,2}^n(\cdot )\) is adapted to the filtration \({\mathcal {F}}^{n}\) as the latter contains all the information about the processes \(Q^n(\cdot )\) and \(Z^n(\cdot )\) for fixed n. This is enough to determine the process \(C^n(\cdot )\) at any \(t\ge 0\). Using (6.21), the assumptions \({\mathbb {E}}\left[ Z^n(0)\right] , {\mathbb {E}}\left[ Q^n(0)\right] < \infty \), the inequalities

$$\begin{aligned} Z^n(t)&\le Z^n(0)+N_{\lambda ^n}(t), \nonumber \\ Q^n(t)&\le Q^n(0)+N_{\lambda ^n}(t), \end{aligned}$$
(6.22)

and adapting the proof in [32, Lemma 3.4], we obtain that, for fixed \(n\ge 1\), the following moment conditions are satisfied:

$$\begin{aligned}&{\mathbb {E}}\left[ \int _{0}^{t} Q^n(s) \mathrm{d}s\right]< \infty ,\qquad {\mathbb {E}}\left[ \int _{0}^{t} Z^n(s) \mathrm{d}s\right]< \infty , \\&{\mathbb {E}}\left[ \int _{0}^{t} C^n(s) \mathrm{d}s\right] \le {\mathbb {E}}\left[ \int _{0}^{t} Q^n(s) \mathrm{d}s\right]< \infty , \\&{\mathbb {E}}\left[ \int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s\right] = {\mathbb {E}}\left[ \int _{0}^{t} Z^n(s){{\,\mathrm{\wedge }\,}}M^n \mathrm{d}s\right] < \infty . \end{aligned}$$

Also, again by [32, Lemma 3.4], we derive that the following moments related to the Poisson processes are finite:

$$\begin{aligned}&{\mathbb {E}}\left[ N_{\nu }^n\left( \int _{0}^{t} Q^n(s) \mathrm{d}s\right) \right]< \infty ,\qquad {\mathbb {E}}\left[ N_{\nu ,1}^n\left( \int _{0}^{t} Z^n(s) \mathrm{d}s\right) \right]< \infty , \\&{\mathbb {E}}\left[ N_{\nu ,2}^n\left( \int _{0}^{t} C^n(s) \mathrm{d}s\right) \right]< \infty , \qquad {\mathbb {E}}\left[ N_{\mu }^n\left( \int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s\right) \right] < \infty . \end{aligned}$$

Now, the result follows by the conclusion of the proof of [32, Theorem 7.1]. To this end, observe that \(\left( \int _{0}^{t} Z^n(s) \mathrm{d}s, \int _{0}^{t} C^n(s) \mathrm{d}s, \int _{0}^{t} Z^n(s){{\,\mathrm{\wedge }\,}}M^n \mathrm{d}s\right) \) is a \({\mathcal {F}}^{n}\)-stopping time. Thus, applying [20, Chapter 2, Theorem 8.7], we derive that

$$\begin{aligned} \Big (\mathcal {{\hat{M}}}_{\lambda ^n}^n (t), \mathcal {{\hat{M}}}_{\mu }^n\Big (\int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s\Big ), \mathcal {{\hat{M}}}_{\nu ,1}^n\Big (\int _{0}^{t} Z^n(s) \mathrm{d}s\Big ), \mathcal {{\hat{M}}}_{\nu ,2}^n\Big (\int _{0}^{t} C^n(s) \mathrm{d}s\Big )\Big ) \end{aligned}$$

is an \({\mathcal {F}}^{n}\)-martingale. Finally, note that by (6.21), since both martingales on its right-hand side are adapted to the filtration \({\mathcal {F}}^{n}\), we obtain that \(\mathcal {{\hat{M}}}_{\nu }^n\left( \int _{0}^{t} Q^n(s) \mathrm{d}s\right) \) is also an \({\mathcal {F}}^{n}\)-martingale. \(\square \)

In order to apply the martingale central limit theorem, we first need to show that the corresponding fluid-scaled processes converge to a deterministic function (step 3), i.e.,

$$\begin{aligned} {{\bar{Z}}}^n(\cdot ):= \frac{Z^n(\cdot )}{n} \overset{d}{\rightarrow }e(\cdot ), \end{aligned}$$
(6.23)

and

$$\begin{aligned} {{\bar{Q}}}^n(\cdot ):= \frac{Q^n(\cdot )}{n} \overset{d}{\rightarrow }\frac{\nu +\mu }{\nu }e(\cdot ), \end{aligned}$$
(6.24)

where the function \(e: [0,\infty )\rightarrow [0,\infty )\) is defined by \(e(t)\equiv 1\). The following proposition presents the fluid limit.

Proposition 6.3

If \( {{\bar{Z}}}^n(0):= \frac{Z^n(0)}{n} \overset{d}{\rightarrow }e(0) \) and \( {{\bar{Q}}}^n(0):= \frac{Q^n(0)}{n} \overset{d}{\rightarrow }\frac{\nu +\mu }{\nu }e(0), \) then (6.23) and (6.24) hold as \(n \rightarrow \infty \).

Proof

We prove the fluid limits using the martingale representations (6.15) and (6.16). If the sequences \(\{{\hat{Z}}^n(\cdot )\}\) and \(\{{\hat{Q}}^n(\cdot )\}\) are stochastically bounded in \(D[0,\infty )\), then by [32, Lemma 5.9] we have that

$$\begin{aligned} \frac{{\hat{Z}}^n(\cdot )}{\sqrt{n}} \overset{d}{\rightarrow }\eta (\cdot ) \text{ and } \frac{{\hat{Q}}^n(\cdot )}{\sqrt{n}} \overset{d}{\rightarrow }\eta (\cdot ), \end{aligned}$$

where the function \(\eta : [0,\infty )\rightarrow [0,\infty )\) is defined by \(\eta (t)\equiv 0\). Note that the last limits are equivalent to (6.23) and (6.24).

The diffusion-scaled processes \({\hat{Z}}^n(\cdot )\) and \({\hat{Q}}^n(\cdot )\) have the martingale representation (6.15) and (6.16). In order to prove that they are stochastically bounded in \(D[0,\infty )\), it is enough to show that the corresponding martingales are stochastically bounded in \(D[0,\infty )\); see [32, Lemma 5.5]. By [32, Lemma 5.8] the martingales are stochastically bounded if the sequences of their predictable quadratic variations (6.17)–(6.20) are stochastically bounded in \({\mathbb {R}}\) for each \(t\ge 0\). To prove that the sequences of the predictable quadratic variations of the martingales in expressions (6.15) and (6.16) are stochastically bounded in \({\mathbb {R}}\), we use (6.22), [32, Lemma 6.2] and the fact that for any \(t\ge 0\), \(\frac{Q^n(t)}{n}\le \frac{K^n}{n}< \infty \).

The predictable quadratic variation of the arrival process, (6.17), is obviously bounded as it is a deterministic function. For (6.18), we have that

$$\begin{aligned} \mu \frac{\int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s}{n}\le \mu t \frac{M^n}{n} =\mu t\left( 1+\frac{\beta }{\sqrt{n}}\right) \le \mu t(1+\beta ). \end{aligned}$$

The result for (6.19) follows by applying (6.22) and [32, Lemma 6.2]. Finally, applying the inequality \(\frac{Q^n(t)}{n}\le \frac{K^n}{n}\) in (6.20), we obtain

$$\begin{aligned} \nu \frac{\int _{0}^{t} Q^n(s) \mathrm{d}s}{n}\le \nu t \frac{K^n}{n}= \nu t\left( \frac{\nu +\mu }{\nu }+\frac{\kappa }{\sqrt{n}}\right) \le \nu t\left( \frac{\nu +\mu }{\nu }+\kappa \right) . \end{aligned}$$

\(\square \)
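Proposition 6.3 can be illustrated by simulating the Markov dynamics directly. The sketch below (not used in the proofs) encodes our reading of (2.1) and (2.3) as event rates, under the scalings \(\lambda ^n=n(\nu +\mu )\), \(M^n=n+\beta \sqrt{n}\), and \(K^n=\frac{\nu +\mu }{\nu }n+\kappa \sqrt{n}\); the fluid-scaled state should settle near \(\big (1,\frac{\nu +\mu }{\nu }\big )\). Parameter values are illustrative.

```python
import random

def simulate(n, nu=1.0, mu=1.0, beta=0.5, kappa=0.5, T=5.0, seed=1):
    """Gillespie simulation of (Z^n(t), Q^n(t)); returns (Z^n(T)/n, Q^n(T)/n)."""
    rng = random.Random(seed)
    lam = n * (nu + mu)                        # lambda^n = n(nu + mu)
    M = n + beta * n ** 0.5                    # M^n
    K = (nu + mu) / nu * n + kappa * n ** 0.5  # K^n
    Z, Q, t = n, int(K), 0.0                   # start near the fluid limit
    while t < T:
        r_charge = mu * min(Z, M)   # charging completions
        r_dep_u = nu * Z            # uncharged EVs departing
        r_dep_c = nu * (Q - Z)      # fully charged EVs departing
        total = lam + r_charge + r_dep_u + r_dep_c
        t += rng.expovariate(total)
        u = rng.random() * total
        if u < lam:
            if Q < K:               # arrivals finding a full lot are lost
                Z, Q = Z + 1, Q + 1
        elif u < lam + r_charge:
            Z -= 1                  # an EV finishes charging but stays parked
        elif u < lam + r_charge + r_dep_u:
            Z, Q = Z - 1, Q - 1
        else:
            Q -= 1
    return Z / n, Q / n

z_bar, q_bar = simulate(n=400)  # fluid limits: 1 and (nu+mu)/nu = 2
```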

Before we move to the final step of the proof of Theorem 4.5, we make a remark about the fluid limit of the number of fully charged EVs in the system, which we need later.

Remark 6.1

Note that the diffusion-scaled process \(\{{\hat{C}}^n(\cdot )\}\) is also stochastically bounded in \(D[0,\infty )\) and the fluid limit is given by the difference \(\frac{\nu +\mu }{\nu }-1=\frac{\mu }{\nu }\).

Now, we are ready to put all the pieces together, leading to the last step of the proof of Theorem 4.5.

Proof of Theorem 4.5

By Proposition 6.3 and the continuous mapping theorem [19, Theorem 1.2], we have that, as \(n\rightarrow \infty \),

$$\begin{aligned}&\langle \mathcal {{\hat{M}}}_{\lambda ^n}^n(t)\rangle \rightarrow (\nu +\mu )t, \quad \left\langle \mathcal {{\hat{M}}}_{\mu }^n \left( \int _{0}^{t} L^n(Z^n(s)) \mathrm{d}s\right) \right\rangle \rightarrow \mu t,\\&\left\langle \mathcal {{\hat{M}}}_{\nu ,1}^n\left( \int _{0}^{t} Z^n(s) \mathrm{d}s\right) \right\rangle \rightarrow \nu t , \quad \left\langle \mathcal {{\hat{M}}}_{\nu }^n\left( \int _{0}^{t} Q^n(s) \mathrm{d}s\right) \right\rangle \rightarrow (\nu +\mu )t. \end{aligned}$$

Applying the martingale central limit theorem in [20], we have that

$$\begin{aligned} \begin{aligned}&\left( \mathcal {{\hat{M}}}_{\lambda ^n}^n(\cdot ), \mathcal {{\hat{M}}}_{\mu }^n (\cdot ), \mathcal {{\hat{M}}}_{\nu ,1}^n(\cdot ), \mathcal {{\hat{M}}}_{\nu }^n(\cdot ) \right) \overset{d}{\rightarrow }\\&\quad \left( \sqrt{\nu +\mu }W_{\lambda }(\cdot ), \sqrt{\mu }W_{\mu }(\cdot ), \sqrt{\nu }W_{\nu ,1}(\cdot ), \sqrt{\nu +\mu }W_{\nu }(\cdot ) \right) , \end{aligned} \end{aligned}$$
(6.25)

where \(W_{\lambda }(\cdot )\), \(W_{\mu }(\cdot )\), \(W_{\nu ,1}(\cdot )\), and \(W_{\nu }(\cdot )\) are (nonindependent) standard Brownian motions. It is essential to observe that, by (6.21) and Remark 6.1, we have that

$$\begin{aligned} \sqrt{\nu +\mu }W_{\nu }\overset{d}{=} \sqrt{\nu }W_{\nu ,1}+\sqrt{\mu }W_{\nu ,2}, \end{aligned}$$

where now \(W_{\lambda }(\cdot )\), \(W_{\mu }(\cdot )\), \(W_{\nu ,i}(\cdot )\), \(i=1,2\), are independent standard Brownian motions. Furthermore, by the properties of Brownian motion, we obtain

$$\begin{aligned} \sqrt{\nu +\mu }W_{\lambda }-\sqrt{\mu }W_{\mu }-\sqrt{\nu }W_{\nu ,1} \overset{d}{=}&\sqrt{2(\nu +\mu )} W_{{\hat{Z}}}, \end{aligned}$$
(6.26)
$$\begin{aligned} \sqrt{\nu +\mu }W_{\lambda }-\sqrt{\nu }W_{\nu ,1}-\sqrt{\mu }W_{\nu ,2} \overset{d}{=}&\sqrt{2(\nu +\mu )} W_{{\hat{Q}}}, \end{aligned}$$
(6.27)

where \(W_{\hat{Z}}(\cdot )\) and \(W_{\hat{Q}}(\cdot )\) are nonindependent standard Brownian motions. Further, we have that

$$\begin{aligned} {\mathbb {E}}\left[ W_{{\hat{Z}}}(t) W_{{\hat{Q}}}(t)\right] = \frac{ {\mathbb {E}}\left[ (\nu +\mu )W_{\lambda }(t)^2\right] + {\mathbb {E}}\left[ \nu W_{\nu ,1}(t)^2\right] }{2(\nu +\mu )} =\frac{(2\nu +\mu )}{2(\nu +\mu )}t. \end{aligned}$$

In addition, by [32, Theorem 7.3], we know that \((Q^n(\cdot ), Y^n(\cdot ))\) satisfies a one-dimensional reflection mapping; see [11, Sect. 6.2] for background on the reflection mapping. That is, \(Y^n(\cdot )\) is the unique nondecreasing nonnegative process such that \(Q^n(t)\le K^n\), (6.15) holds and

$$\begin{aligned} \int _{0}^{\infty } {\mathbb {1}}_{\{Q^n(t)<K^n\}}\mathrm{d}Y^n(t)=0, \end{aligned}$$

which is equivalent to

$$\begin{aligned} \int _{0}^{\infty } {\mathbb {1}}_{\{{\hat{Q}}^n(t)<\kappa \}}\mathrm{d}Y^n(t)=0. \end{aligned}$$
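For readers unfamiliar with the one-dimensional reflection mapping invoked here, a simplified discrete sketch (plain Skorokhod map at an upper barrier \(\kappa \), with hypothetical Gaussian increments and without the drift term of (6.15)) illustrates the two defining properties: the reflected path never exceeds \(\kappa \), and the regulator increases only at the barrier.

```python
# Discrete upper-barrier reflection: given a free path x, set
# y(t) = sup_{s<=t} (x(s) - kappa)^+ and q = x - y.
import random

rng = random.Random(11)
kappa = 2.0
x, run_sup = 0.0, 0.0
path_q, path_y = [], []
for _ in range(10_000):
    x += rng.gauss(0.0, 0.1)            # free (unreflected) increment
    run_sup = max(run_sup, x - kappa)   # running sup of (x - kappa)
    y = max(run_sup, 0.0)               # regulator y(t)
    path_q.append(x - y)
    path_y.append(y)

max_q = max(path_q)
# complementarity: count increases of y while q is strictly below the barrier
increases_off_barrier = sum(
    1 for i in range(1, len(path_y))
    if path_y[i] > path_y[i - 1] and path_q[i] < kappa - 1e-9
)
```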

Now, combining Proposition 6.1, Proposition 6.2, (6.25), (6.26), and (6.27), we derive that

$$\begin{aligned} ( {\hat{Z}}^n(\cdot ),{\hat{Q}}^n(\cdot ), {\hat{Y}}^n(\cdot ) ) \overset{d}{\rightarrow }({\hat{Z}}(\cdot ), {\hat{Q}}(\cdot ), {\hat{Y}}(\cdot )) \quad \text {in}\quad D[0,\infty )^3, \end{aligned}$$

where the vector \(({\hat{Z}}(\cdot ),{\hat{Q}}(\cdot ), {\hat{Y}}(\cdot ))\) is characterized by (4.6). This concludes the proof of Theorem 4.5. \(\square \)

6.4 Proofs for Sect. 4.2.2

Proof of Proposition 4.6

Adding and subtracting the terms \(\frac{\nu }{\nu +\mu }K^n\), \( \frac{\mu \nu }{\nu +\mu }K^n t\), \(\frac{\nu ^2}{\nu +\mu }K^nt\), and the means of the Poisson processes in (4.11), we have that

$$\begin{aligned} Z_{f}^n(t)-\frac{\nu }{\nu +\mu }K^n&= Z_{f}^n(0)-\frac{\nu }{\nu +\mu }K^n+ \Bigg (N_{\nu }^f\left( n \int _{0}^{t}\left( \frac{K^n-Z_{f}^n(s)}{n}\right) \mathrm{d}s\right) \\&\qquad - \nu n \int _{0}^{t}\left( \frac{K^n-Z_{f}^n(s)}{n}\right) \mathrm{d}s \Bigg ) -\Bigg (N_{\mu }^f\left( \mu n \int _{0}^{t} \frac{Z_{f}^n(s)}{n}{{\,\mathrm{\wedge }\,}}\frac{M^n}{n} \mathrm{d}s\right) \\&\qquad - \mu n \int _{0}^{t} \frac{Z_{f}^n(s)}{n}{{\,\mathrm{\wedge }\,}}\frac{M^n}{n} \mathrm{d}s \Bigg ) +\nu \int _{0}^{t}\left( K^n-Z_{f}^n(s)+\frac{\nu }{\nu +\mu }K^n\right) \mathrm{d}s \\&\qquad - \mu \int _{0}^{t} \left( Z_{f}^n(s)-\frac{\nu }{\nu +\mu }K^n\right) {{\,\mathrm{\wedge }\,}}\sqrt{n}\beta \mathrm{d}s-(\nu +\mu ) \frac{\nu }{\nu +\mu }K^n t. \end{aligned}$$

Recalling that \(\hat{{\mathcal {M}}}_{r}^n(\cdot ):=\frac{N_{r}^f (\cdot )-(r\cdot )}{\sqrt{n}}\) and dividing the last equation by \(\sqrt{n}\), we have that

$$\begin{aligned} \hat{Z}^n_f(t)&= \hat{Z}^n_f(0)+ \hat{{\mathcal {M}}}_{\nu }^n\left( n \int _{0}^{t}\frac{K^n-Z_{f}^n(s)}{n}\mathrm{d}s\right) -\hat{{\mathcal {M}}}_{\mu }^n\left( n \int _{0}^{t} \frac{Z_{f}^n(s)}{n}{{\,\mathrm{\wedge }\,}}\frac{M^n}{n} \mathrm{d}s\right) \\&\qquad -\nu \int _{0}^{t} \hat{Z}^n_f(s)\mathrm{d}s- \mu \int _{0}^{t} \hat{Z}^n_f(s){{\,\mathrm{\wedge }\,}}\beta \mathrm{d}s+\frac{1}{\sqrt{n}} \left( \nu K^n-\nu K^n\right) t. \end{aligned}$$

Observing that the quantity \(\int _{0}^{t}\frac{K^n-Z_{f}^n(s)}{n}\mathrm{d}s\) is stochastically bounded and allowing \(n \rightarrow \infty \) in the last equation, we derive

$$\begin{aligned} \begin{aligned} \hat{Z}_f(t)=&-\nu \int _{0}^{t} \hat{Z}_f(s)\mathrm{d}s - \mu \int _{0}^{t} \hat{Z}_f(s){{\,\mathrm{\wedge }\,}}\beta \mathrm{d}s+ \sqrt{\nu \left( K-\frac{\nu K}{\nu +\mu }\right) }W_1(t)\\&-\sqrt{\mu \left( \frac{\nu K}{\nu +\mu }\right) }W_2(t), \end{aligned} \end{aligned}$$

where \(W_1(t)\) and \(W_2(t)\) are (independent) standard Brownian motions. Finally, by the properties of Brownian motion, we get

$$\begin{aligned} \sqrt{\nu \left( K-\frac{\nu K}{\nu +\mu }\right) }W_1(t) -\sqrt{\mu \left( \frac{\nu K}{\nu +\mu }\right) }W_2(t) \overset{d}{=} \sqrt{\frac{2 \nu \mu K }{\nu +\mu }}W(t), \end{aligned}$$

where W(t) is a standard Brownian motion. \(\square \)
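The variance bookkeeping in the last step is elementary but easy to get wrong; the identity \(\nu \big (K-\frac{\nu K}{\nu +\mu }\big )+\mu \frac{\nu K}{\nu +\mu }=\frac{2\nu \mu K}{\nu +\mu }\) can be checked numerically over a few illustrative parameter choices:

```python
# Numerical check of nu*(K - nu*K/(nu+mu)) + mu*(nu*K/(nu+mu)) = 2*nu*mu*K/(nu+mu)
import itertools

identity_holds = True
for nu, mu, K in itertools.product([0.5, 1.0, 2.5], repeat=3):
    lhs = nu * (K - nu * K / (nu + mu)) + mu * (nu * K / (nu + mu))
    rhs = 2.0 * nu * mu * K / (nu + mu)
    identity_holds &= abs(lhs - rhs) < 1e-12
```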

6.4.1 Proof of Theorem 4.7

The rest of this section gives a proof of Theorem 4.7. We first show a bound for the process \(E^n(\cdot )\).

Proposition 6.4

Let \(T>0\). We have that, for any \(\epsilon >0\), there exists \(n_{\epsilon }\) such that

$$\begin{aligned} {\mathbb {P}}\left( \sup _{0\le t\le T} E^n(t) \le L(nT)^{1/4}+ L\log (nT) \right) >1-\epsilon , \end{aligned}$$

for any \(n \ge n_{\epsilon }\), where L is a positive constant.

Before we proceed to the proof of Proposition 6.4, we show a preliminary result.

Lemma 6.5

Let \(E_{M}(t)\) denote the queue length process in an M/M/1 queue at time \(t\ge 0\), with arrival rate \(\nu K\) and service rate \(\lambda \) such that \(\nu K<\lambda \). For any \(T>0\), we have that

$$\begin{aligned} \sup \limits _{0\le t\le nT} E_M(t) \le L(nT)^{1/4}+L\log (nT), \end{aligned}$$

almost surely as \(n \rightarrow \infty \), where L is a positive constant.

Proof

The proof of this lemma is based on results in [11]. By [11, Theorem 6.16], there exists a reflected (at zero) Brownian motion, \({\tilde{E}}_M(\cdot )\) with drift \(\nu K-\lambda <0\) such that

$$\begin{aligned} \sup \limits _{0\le t\le nT}| E_M(t)-{\tilde{E}}_M(t)|= o\left( (nT)^{1/4}\right) , \end{aligned}$$

or equivalently

$$\begin{aligned} \sup \limits _{0\le t\le nT}| E_M(t)-{\tilde{E}}_M(t)|\le L'(nT)^{1/4}, \end{aligned}$$
(6.28)

for all \(L'>0\), almost surely as \(n \rightarrow \infty \). Further, by [11, Theorem 6.3],

$$\begin{aligned} \sup \limits _{0\le t\le nT}|{\tilde{E}}_M(t)|= O\left( \log (nT)\right) , \end{aligned}$$

or alternatively there exists \(L>0\) such that

$$\begin{aligned} \sup _{0\le t \le nT} |{\tilde{E}}_M(t)|\le L \log (nT), \end{aligned}$$
(6.29)

almost surely as \(n \rightarrow \infty \).

Now, using the triangle inequality leads to

$$\begin{aligned} \sup \limits _{0\le t\le nT} E_M(t)&= \sup \limits _{0\le t\le nT}| E_M(t)-{\tilde{E}}_M(t)+{\tilde{E}}_M(t)|\\&\le \sup \limits _{0\le t\le nT}| E_M(t)-{\tilde{E}}_M(t)| +\sup \limits _{0\le t\le nT} |{\tilde{E}}_M(t)|. \end{aligned}$$

Applying (6.28) (by choosing \(L'=L\)) and (6.29) in the last inequality, we have that

$$\begin{aligned} \sup \limits _{0\le t\le nT} E_M(t) \le L (nT)^{1/4}+L\log (nT), \end{aligned}$$

almost surely as \(n \rightarrow \infty \). \(\square \)
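The \(L(nT)^{1/4}+L\log (nT)\) bound of Lemma 6.5 is asymptotic, but its order of magnitude is easy to illustrate by simulating a stable M/M/1 queue over a long horizon (a sketch with hypothetical rates \(\nu K=1\), \(\lambda =2\), and an illustrative constant, not the L of the lemma):

```python
import math
import random

def mm1_running_max(arr_rate, srv_rate, horizon, seed=7):
    """Gillespie simulation of an M/M/1 queue; returns the running maximum
    of the queue length over [0, horizon]."""
    rng = random.Random(seed)
    q, t, sup_q = 0, 0.0, 0
    while t < horizon:
        total = arr_rate + (srv_rate if q > 0 else 0.0)
        t += rng.expovariate(total)
        if rng.random() * total < arr_rate:
            q += 1
            sup_q = max(sup_q, q)
        else:
            q -= 1
    return sup_q

nT = 10_000.0  # plays the role of the horizon nT
sup_q = mm1_running_max(arr_rate=1.0, srv_rate=2.0, horizon=nT)
L = 5.0        # illustrative constant
bound = L * nT ** 0.25 + L * math.log(nT)
```

For a stable queue the running maximum grows logarithmically, so it sits far below the bound here.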

Now, we are ready to show Proposition 6.4.

Proof of Proposition 6.4

Let \(E_{M}^n(t)\) denote the queue length process in an M/M/1 queue at time \(t\ge 0\), with arrival rate \(n \nu K\) and service rate \(n \lambda \). Using standard coupling arguments, it can be shown that \(\sup \limits _{0\le t\le T} E^n(t) \le \sup \limits _{0\le t\le T} E^n_M(t)\) almost surely. Further, we have that \(\sup \limits _{0\le t\le T} E^n_M(t)\overset{d}{=} \sup \limits _{0\le t\le nT} E_M(t)\). Hence, for any \(q^n>0\),

$$\begin{aligned} {\mathbb {P}}\left( \sup \limits _{0\le t\le T} E^n(t) \le q^n \right) \ge {\mathbb {P}}\left( \sup \limits _{0\le t\le nT} E_M(t) \le q^n \right) . \end{aligned}$$

Choosing \(q^n=L(nT)^{1/4}+L\log (nT)\) and applying Lemma 6.5, we have that for any \(\epsilon >0\) there exists \(n_{\epsilon }\) such that

$$\begin{aligned} {\mathbb {P}}\left( \sup \limits _{0\le t\le nT} E_M(t) \le L(nT)^{1/4}+L\log (nT) \right) >1-\epsilon , \end{aligned}$$

for \(n>n_\epsilon \). This concludes the proof as \(\epsilon \) is arbitrary. \(\square \)

Remark 6.2

Define the sample path set \({\mathcal {G}}^n\subseteq \varOmega \) such that

$$\begin{aligned} {\mathcal {G}}^n:=\left\{ \omega \in \varOmega : \sup \limits _{0\le t\le T} E^n(t) \le L(nT)^{1/4}+L\log (nT) \right\} . \end{aligned}$$

By Proposition 6.4, it follows that \({\mathbb {P}}\left( {\mathcal {G}}^n\right) \rightarrow 1\), as \(n \rightarrow \infty \). In the sequel, we assume that \(\omega \in {\mathcal {G}}^n\).

Proof of Theorem 4.7

Rewriting (4.15) for the \(n^{th}\) system and using the assumption \({\mathbb {P}}\left( E^n(0)=0\right) =1\) yields

$$\begin{aligned} Z^n(t)&=Z^n(0)-E^n(t) + N_{\nu ,2}\left( \int _{0}^{t} \left( K^n-E^n(s)-Z^n(s)\right) \mathrm{d}s\right) \\&\qquad -N_{\mu }\left( \int _{0}^{t} Z^n(s){{\,\mathrm{\wedge }\,}}M^n \mathrm{d}s\right) . \end{aligned}$$

Let \(T>0\) and \(\omega \in {\mathcal {G}}^n\). We have that

$$\begin{aligned} 0\le \frac{E^n(t)}{\sqrt{n}} \le \sup \limits _{0\le t\le T} \frac{E^n(t)}{\sqrt{n}} \le \frac{L(nT)^{1/4}+L\log (nT)}{\sqrt{n}}\rightarrow 0, \end{aligned}$$

as \(n\rightarrow \infty \). The result now follows by adapting the proof of Proposition 4.6. \(\square \)

6.5 Proofs for Sect. 4.2.3

Proof of Proposition 4.8

First, we note that it is enough to show the result for \(M=1\); the general case then follows by replacing \(\mu \) with \(\mu M\), see [41, Remark 5].

Scale the time by n, and add and subtract the means of the Poisson processes in (2.1) and (2.3) to obtain

$$\begin{aligned} Q^n(n t)&=Q^n(0)+ \left( N_{\lambda ^n} (n t) - \lambda ^n n t \right) -\Bigg ( N_{\nu ^n}\left( n \int _{0}^{t}Q^n(ns) \mathrm{d}s \right) \\&\qquad - n \nu ^n \int _{0}^{t}Q^n(ns)\mathrm{d}s \Bigg )+\lambda ^n n t- n \nu ^n \int _{0}^{t}Q^n(ns)\mathrm{d}s, \end{aligned}$$

and

$$\begin{aligned}&Z^n(n t)=Z^n(0)+\left( N_{\lambda ^n} (n t) -\lambda ^n n t\right) -\Bigg ( N_{\mu }\left( n \int _{0}^{t} {\mathbb {1}}_{\{Z^n(ns)>0\}} \mathrm{d}s \right) \\&\quad -n \mu \int _{0}^{t} {\mathbb {1}}_{\{Z^n(ns)>0\}} \mathrm{d}s\Bigg ) -\left( N_{\nu ^n,1}\left( n\int _{0}^{t} Z^n(ns) \mathrm{d}s \right) -n \nu ^n\int _{0}^{t} Z^n(ns) \mathrm{d}s \right) \\&\quad +\lambda ^n n t-n \mu \int _{0}^{t} {\mathbb {1}}_{\{Z^n(ns)>0\}} \mathrm{d}s -n \nu ^n\int _{0}^{t} Z^n(ns) \mathrm{d}s. \end{aligned}$$

Define \({\bar{Q}}^n(t)=Q^n(nt)/n\), \({\bar{Z}}^n(t)=Z^n(nt)/n\), and recall that \({\mathcal {M}}_{r}(\cdot ):=N_{r} (\cdot )-(r\cdot )\). Adding and subtracting the terms \(\mu n\) and \(\lambda ^n n t\) in the first equation yields

$$\begin{aligned} Q^n(n t)- \mu n&=Q^n(0)- \mu n +{\mathcal {M}}_{\lambda ^n}(n t) -{\mathcal {M}}_{1}\left( n\int _{0}^{t}{\bar{Q}}^n(s) \mathrm{d}s\right) -c\mu \sqrt{n} t\\&\quad - \int _{0}^{t} \left( Q^n(ns)-\mu n\right) \mathrm{d}s, \end{aligned}$$

and

$$\begin{aligned} Z^n(n t)&=Z^n(0) +{\mathcal {M}}_{\lambda ^n }(n t) -{\mathcal {M}}_{\mu }\left( n \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)>0\}} \mathrm{d}s\right) \\&\qquad -{\mathcal {M}}_{1,1}\left( n \int _{0}^{t} {\bar{Z}}^n(s) \mathrm{d}s\right) +\mu \left( 1-\frac{c}{\sqrt{n}}\right) n t -\mu n \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)>0\}} \mathrm{d}s\\&\qquad - \int _{0}^{t} \left( Z^n(n s)\right) \mathrm{d}s. \end{aligned}$$

Dividing the last equations by \(\sqrt{n}\) and observing that

$$\begin{aligned} \mu \left( 1-\frac{c}{\sqrt{n}}\right) n t -\mu n \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)>0\}} \mathrm{d}s= -c\mu \sqrt{n} t+\mu n \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)=0\}}\mathrm{d}s, \end{aligned}$$

we obtain

$$\begin{aligned} {{\tilde{Q}}}^n(t)={{\tilde{Q}}}^n(0) +\hat{ {\mathcal {M}}}^n_{\lambda ^n}(n t) -\hat{{\mathcal {M}}}^n_{1}\left( n\int _{0}^{t}{\bar{Q}}^n(s) \mathrm{d}s\right) -c\mu t- \int _{0}^{t} {\tilde{Q}}^n(s) \mathrm{d}s, \end{aligned}$$

and

$$\begin{aligned} {\tilde{Z}}^n(t)&={\tilde{Z}}^n(0) +\hat{{\mathcal {M}}}^n_{\lambda ^n }(nt) -\hat{{\mathcal {M}}}^n_{\mu }\left( n \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)>0\}} \mathrm{d}s\right) \\&\qquad -\hat{{\mathcal {M}}}^n_{1,1}\left( n\int _{0}^{t} {\bar{Z}}^n(s) \mathrm{d}s\right) -c\mu t+\mu \sqrt{n} \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)=0\}} \mathrm{d}s - \int _{0}^{t} {\tilde{Z}}^n( s)\mathrm{d}s. \end{aligned}$$

Now, taking the limit \(n\rightarrow \infty \) and using the reflection mapping [11], we derive

$$\begin{aligned} d{\tilde{Q}}(t)&=-\,(c\mu +{\tilde{Q}}(t)) \mathrm{d}t+\sqrt{2\mu } \mathrm{d}W_{{{\tilde{Q}}}}(t) ,\\ d{\tilde{Z}}(t)&=-\,(c\mu +{\tilde{Z}}(t)) \mathrm{d}t+ \sqrt{2\mu } \mathrm{d}W_{{{\tilde{Z}}}}(t) +d{\tilde{Y}}(t), \end{aligned}$$

where \(\int _{0}^{\infty } {\mathbb {1}}_{\{{\tilde{Z}}(t)>0\}}d{\tilde{Y}}(t)=0\). Further, \({\mathbb {E}}\left[ W_{{{\tilde{Z}}}}(t) W_{{{\tilde{Q}}}}(t)\right] = \frac{\mu M t}{2 \mu M}=t/2\). \(\square \)
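The limit \({\tilde{Q}}\) is an Ornstein–Uhlenbeck process: the reversion rate is 1 and the diffusion coefficient is \(\sqrt{2\mu }\), so its stationary law is normal with mean \(-c\mu \) and variance \(\mu \). An Euler–Maruyama sketch with illustrative parameters recovers these moments (\({\tilde{Z}}\) would additionally require the reflection term \({\tilde{Y}}\)):

```python
import random

def simulate_ou(c=0.5, mu=2.0, dt=1e-3, T=1000.0, burn=100.0, seed=3):
    """Euler-Maruyama for dX = -(c*mu + X) dt + sqrt(2*mu) dW; returns the
    time-average mean and variance after a burn-in period."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    s1 = s2 = 0.0
    n_samp = 0
    sigma = (2.0 * mu * dt) ** 0.5  # per-step noise scale
    while t < T:
        x += -(c * mu + x) * dt + sigma * rng.gauss(0.0, 1.0)
        t += dt
        if t > burn:
            s1 += x
            s2 += x * x
            n_samp += 1
    mean = s1 / n_samp
    var = s2 / n_samp - mean ** 2
    return mean, var

m, v = simulate_ou()
expected_mean, expected_var = -0.5 * 2.0, 2.0  # -c*mu and mu for c=0.5, mu=2
```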

Proof of Proposition 4.9

First, as in Proposition 4.8, we note that it is enough to show the result for \(M=1\) and then replace \(\mu \) by \(\mu M\). Scale the time by n, and add and subtract the means of the Poisson processes in (2.1) and (2.3) to obtain

$$\begin{aligned} Q^n(n t)&=Q^n(0)+ \left( N_{\lambda } (n t) - \lambda n t \right) -\Bigg ( N_{\nu ^n}\left( n \int _{0}^{t}Q^n(ns) \mathrm{d}s \right) \\&\quad - n \nu ^n \int _{0}^{t}Q^n(ns)\mathrm{d}s \Bigg )+\lambda n t- n \nu ^n \int _{0}^{t}Q^n(ns)\mathrm{d}s, \end{aligned}$$

and

$$\begin{aligned}&Z^n(n t)=Z^n(0)+\left( N_{\lambda } (n t) -\lambda n t\right) -\bigg ( N_{\mu }\left( n \int _{0}^{t} {\mathbb {1}}_{\{Z^n(ns)>0\}} \mathrm{d}s \right) \\&\quad -n \mu \int _{0}^{t} {\mathbb {1}}_{\{Z^n(ns)>0\}} \mathrm{d}s \Bigg ) - \left( N_{\nu ^n,1}\left( n\int _{0}^{t} Z^n(ns) \mathrm{d}s \right) -n \nu ^n\int _{0}^{t} Z^n(ns) \mathrm{d}s \right) \\&\qquad +\lambda n t-n \mu \int _{0}^{t} {\mathbb {1}}_{\{Z^n(ns)>0\}} \mathrm{d}s -n \nu ^n\int _{0}^{t} Z^n(ns) \mathrm{d}s. \end{aligned}$$
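Recall that \({\mathcal {M}}\) denotes the compensated (martingale) part of the corresponding Poisson process, and that a hat with superscript n indicates diffusion scaling; for instance,

$$\begin{aligned} {\mathcal {M}}_{\lambda }(t)=N_{\lambda }(t)-\lambda t, \qquad \hat{{\mathcal {M}}}^n_{\lambda }(t)=\frac{{\mathcal {M}}_{\lambda }(t)}{\sqrt{n}}, \end{aligned}$$

and analogously for the other Poisson processes; this is exactly the grouping carried out in the displays above.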

Adding and subtracting the terms \(\lambda n\) and \(\lambda n t\) in the first equation and the terms \((\lambda -\mu ) n\) and \((\lambda -\mu ) n t\) in the second equation yields

$$\begin{aligned} Q^n(n t)- \lambda n&=Q^n(0)- \lambda n +{\mathcal {M}}_{\lambda }(n t) -{\mathcal {M}}_{1}\left( n\int _{0}^{t}{\bar{Q}}^n(s) \mathrm{d}s\right) \\&\qquad - \int _{0}^{t} \left( Q^n(ns)-\lambda n\right) \mathrm{d}s, \end{aligned}$$

and

$$\begin{aligned} Z^n(n t)-(\lambda -\mu ) n&=Z^n(0)-(\lambda -\mu ) n +{\mathcal {M}}_{\lambda }(n t) \\&\qquad -{\mathcal {M}}_{\mu }\left( n \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)>0\}} \mathrm{d}s\right) -{\mathcal {M}}_{1,1}\left( n \int _{0}^{t} {\bar{Z}}^n(s) \mathrm{d}s\right) \\&\qquad +\lambda n t-(\lambda -\mu ) n t-\mu n \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)>0\}} \mathrm{d}s\\&\qquad - \int _{0}^{t} \left( Z^n(n s)-(\lambda -\mu ) n\right) \mathrm{d}s. \end{aligned}$$

Dividing the last equations by \(\sqrt{n}\) and observing that

$$\begin{aligned} \lambda n t-(\lambda -\mu ) n t -\mu n \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)>0\}} \mathrm{d}s= \mu n \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)=0\}}\mathrm{d}s, \end{aligned}$$

we obtain

$$\begin{aligned} {{\tilde{Q}}}^n_o(t)={{\tilde{Q}}}^n_o(0) +\hat{ {\mathcal {M}}}^n_{\lambda }(n t) -\hat{{\mathcal {M}}}^n_{1}\left( n\int _{0}^{t}{\bar{Q}}^n(s) \mathrm{d}s\right) - \int _{0}^{t} {\tilde{Q}}^n_o(s) \mathrm{d}s, \end{aligned}$$

and

$$\begin{aligned} {\tilde{Z}}^n_o(t)&={\tilde{Z}}^n_o(0) +\hat{{\mathcal {M}}}^n_{\lambda }(n t) -\hat{{\mathcal {M}}}^n_{\mu }\left( n \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)>0\}} \mathrm{d}s\right) \\&\qquad -\hat{{\mathcal {M}}}^n_{1,1}\left( n\int _{0}^{t} {\bar{Z}}^n(s) \mathrm{d}s\right) +\mu \sqrt{n} \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)=0\}} \mathrm{d}s - \int _{0}^{t} {\tilde{Z}}^n_o(s)\mathrm{d}s. \end{aligned}$$

Since the system is overloaded, it follows that \(\sqrt{n} \int _{0}^{t} {\mathbb {1}}_{\{{\bar{Z}}^n(s)=0\}} \mathrm{d}s \rightarrow 0\) as \(n \rightarrow \infty \) [41]. Now, taking the limit \(n\rightarrow \infty \), we derive

$$\begin{aligned} d{\tilde{Q}}_o(t)&=-\,{\tilde{Q}}_o(t) \mathrm{d}t+\sqrt{\lambda }\mathrm{d}W_1(t) - \sqrt{\lambda }\mathrm{d}W(t),\\ d{\tilde{Z}}_o(t)&=-\,{\tilde{Z}}_o(t) \mathrm{d}t+ \sqrt{\lambda }\mathrm{d}W_1(t)-\sqrt{\mu }\mathrm{d}W_2(t)-\sqrt{\lambda -\mu }\mathrm{d}W_3(t), \end{aligned}$$

and we can write \(\sqrt{\lambda }W(t)\overset{d}{=} \sqrt{\lambda -\mu }W_3(t)+\sqrt{\mu }W_4(t)\), where the \(W_i\) are independent standard Brownian motions. \(\square \)
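Both limits are Ornstein–Uhlenbeck processes with unit relaxation rate, so each has stationary variance equal to half its infinitesimal variance. This can be checked numerically with a short Euler–Maruyama run; the sketch below is illustrative only, and the value \(\sigma =2\) as well as the discretization parameters are arbitrary choices.

```python
import numpy as np

def ou_stationary_variance(sigma, T=2000.0, dt=0.01, seed=1):
    """Euler-Maruyama estimate of the stationary variance of the
       Ornstein-Uhlenbeck process dX = -X dt + sigma dW,
       whose closed-form stationary variance is sigma**2 / 2."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    sqdt = np.sqrt(dt)
    x = 0.0
    xs = np.empty(n)
    for i in range(n):
        x += -x * dt + sigma * sqdt * rng.standard_normal()
        xs[i] = x
    return xs[n // 10:].var()  # discard the transient from x = 0

est = ou_stationary_variance(sigma=2.0)
print(est)  # close to sigma**2 / 2 = 2.0
```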

7 Concluding remarks

This paper proposes modelling an electric vehicle charging system by using layered queueing networks. We develop several bounds and approximations for the number of uncharged EVs in the system and for the probability that an EV leaves the charging station with a fully charged battery. The numerical examples indicate that a modification of the approximation for a full parking lot yields a good approximation. Further, the fluid approximation performs well in most cases, and we expect that the diffusion approximation in the Halfin–Whitt regime improves upon it. Unfortunately, an exact (or even numerical) solution of (4.9) seems very hard to obtain.

From an application standpoint, it is important to relax various model assumptions. If parking and charging times are given by (possibly dependent) generally distributed random variables B and D, a measure-valued fluid model can be developed by extending [22]. In addition, the model can be enriched with time-varying arrival rates, multiple EV types, and multiple parking lots, thus extending results in [34]. Moreover, the distribution grid (low-voltage network) plays a crucial role and should be included in the model; see [4] for a heuristic approach. For another application in EV charging that includes the distribution grid, see [10], where simulation results are presented for a Markovian model.