1 Introduction

Modern communication networks are complex and handle huge amounts of data. This is especially true close to the backbone of the network, where large numbers of connections share the same resources. The design and operation of these networks greatly benefit from tractable theoretical models that are able to describe and predict the performance of the system. In order to obtain such tractable models, a common practice is to represent the network’s nodes as single-server queues with an appropriate service discipline. Moreover, given the high level of traffic aggregation, it is appealing to approximate the incoming traffic to the network by Gaussian processes [1, 2]. Since these networks are often operated in a regime where the packet loss probabilities are very small, there is a need for understanding their large-deviations behavior.

While a queueing network with Gaussian inputs is a rather streamlined model, the analysis of its large-deviations behavior is notoriously difficult outside the case of an isolated queue, which has been thoroughly studied [3,4,5,6]. The main reason for this is that after the (initially Gaussian) incoming traffic goes through the first queue, it is no longer Gaussian. Then, when it is fed to a different queue, the analysis of this queue is significantly harder. For the special case of two queues in tandem, with work arriving only to the first queue and all the departing work of the first queue going into the second one, a useful trick involving subtracting the first queue (which has Gaussian input) from the sum of both queues (which behaves exactly as a single-server queue with a Gaussian input) yields a tractable analysis of the second queue in the tandem [7], even though it does not have a Gaussian input; see also the more refined approach in [8] based on the delicate busy-period analysis developed in [9]. However, this trick does not work for more complex networks (not even for two queues in tandem with inputs to both queues, or when not all departures from the first queue join the second one [10]). Another factor that further complicates the analysis of complex networks is the fact that the input processes to the different queues can be correlated. This becomes a problem when the outputs of queues with correlated inputs are merged into another queue.

In this paper, we consider acyclic networks of single-server queues, where work arrives to the queues as (possibly correlated) Gaussian processes, and where the work departing from each queue is deterministically split among its neighbors, with a fraction of it leaving the system altogether. This deterministic split of the departing work was also considered in, for example, [11], and it is particularly suitable for modeling single-class networks (where all work is essentially exchangeable), or for modeling networks where all work needs to be routed to the same node (and thus where the splitting of departure streams is only performed to load balance the network).

In terms of our approach, this paper fits in the framework of the analysis of a single Gaussian queue [5], and the subsequent analysis of tandem, priority, and generalized processor sharing queues [7, 12]; we refer to [13] for a textbook account on Gaussian queues. In terms of our scope, this paper is perhaps most similar to [11], where the authors obtained large-deviations results for acyclic networks of G/G/1 queues. However, in that paper, there were certain limitations regarding the correlation structure of the input processes (in that they have to be independent across different queues), and regarding the structure of the network (in that any two directed paths cannot meet in more than one node).

1.1 Our contribution

In this paper, we generalize the analysis of a pair of queues in tandem, fed by a single Gaussian process [7], to acyclic networks of single-server queues, fed by (possibly correlated) Gaussian processes. As in [7], we assume that the arrival processes are the superposition of n i.i.d. (multi-dimensional) Gaussian processes, and scale the processing rates of the servers by a factor of n, which corresponds to the so-called ‘many-sources regime.’ In this regime, for any given node i, we work toward characterizing the asymptotic exponential decay rate of its ‘overflow probability,’ that is, the limit

$$\begin{aligned} - \lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q^{(n)}_i > nb\right) , \end{aligned}$$
(1)

where \(Q^{(n)}_i\) is the steady-state queue length at the ith node, and b is any positive threshold. In particular:

  1. (i)

    We obtain a general lower bound on the asymptotic exponential decay rate by leveraging the power of a generalized version of Schilder’s theorem (Theorem 3).

  2. (ii)

    Under additional technical conditions, we prove the tightness of the lower bound by finding the most likely sample paths, and showing that the corresponding asymptotic exponential decay rates coincide with the lower bound (Theorems 4, 5, and 6).

  3. (iii)

    We show that if the input processes to the different queues are nonnegatively correlated, non-short-range-dependent fractional Brownian motions, and if the processing rates are large enough, then the asymptotic exponential decay rates of the queues coincide with those of isolated queues with appropriate Gaussian inputs (Theorem 7).

1.2 Organization of the paper

The paper is organized as follows: In Sect. 2, we introduce some notation, the network model, and a few preliminaries on large-deviations theory. In Sect. 3, we present our main results. In Sect. 4, we introduce an interesting example where the large-deviations behavior of any queue in the network coincides with the behavior of a single-server queue with Gaussian input. Finally, we conclude in Sect. 5.

2 Model and preliminaries

In this section, we introduce some notation and the queueing network model that we analyze, and we present a few preliminaries on sample-path large-deviations theory.

2.1 Notation for underlying graph

Given a directed graph \(G=(V,E)\), and a node \(i\in V\), we introduce the following notation: Let

$$\begin{aligned} \mathcal {N}_\mathrm{in}(i) \triangleq \big \{j\in V : (j,i)\in E \big \} \end{aligned}$$

be the set of all inbound neighbors of i. Let

$$\begin{aligned} \mathcal {P}_m(i) \triangleq \bigcup _{l=m}^{|V|} \Big \{ r \in V^l : r_l=i, \text { and } (r_\ell ,r_{\ell +1})\in E,\,\, \forall \, \ell \le l-1 \Big \} \end{aligned}$$

be the set of all directed paths that contain at least m nodes, and end at node i. In particular, note that the trivial path (i) is only in \(\mathcal {P}_1(i)\). For any path \(r\in \mathcal {P}_2(i)\), let \(r_+\in \mathcal {P}_1(i)\) be the path that results from removing the node \(r_1\) from the path r. Finally, for any path \(r\in \mathcal {P}_1(i)\), let |r| be the number of nodes that it contains.
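
For concreteness (a toy illustration of this notation only), consider a tandem of three nodes with edges (1, 2) and (2, 3). Then \(\mathcal {P}_1(3)=\{(3),\,(2,3),\,(1,2,3)\}\) and \(\mathcal {P}_2(3)=\{(2,3),\,(1,2,3)\}\); moreover, \((1,2,3)_+=(2,3)\), \((2,3)_+=(3)\), and \(|(1,2,3)|=3\).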

2.2 Queueing network

In this subsection, we introduce the basic structure of our queueing network. Consider a directed acyclic graph with k nodes, and a scaling parameter \(n\in \mathbb {Z}_+\). Each node i of the graph is equipped with a single server with rate \(n\mu _i\), and a queue with infinite capacity. Work arrives to the network in a number of stochastic processes, \(A^{(n)}_1(\cdot ), \ldots ,A^{(n)}_k(\cdot )\), with stationary increments and positive rates \(n\lambda _1, \ldots ,n\lambda _k\), respectively. (More details about these processes are given in Sect. 2.3.) In particular, \(A^{(n)}_i(\cdot )\) is the stream of work that enters the network at node i. Work departing from node i is split deterministically so that, for each edge (i, j) with \(i\ne j\), a fraction \(p_{i,j}\in [0,1]\) is routed to node j. The remaining fraction of the work departing from node i, denoted by \(p_{i,i}\in [0,1]\), leaves the network; evidently, \(\sum _{j} p_{i,j} = 1\) for every i. In order to simplify notation, for any directed path r, let us denote

$$\begin{aligned} \Pi _r \triangleq \prod \limits _{\ell =1}^{|r|-1} p_{r_\ell ,r_{\ell +1}}. \end{aligned}$$

In particular, we have \(\Pi _{(i)}=1\).

For \(s\le t\), we interpret

$$\begin{aligned} A^{(n)}_i(s,t)\triangleq A^{(n)}_i(t)-A^{(n)}_i(s) \end{aligned}$$

as the amount of exogenous work that arrived to the ith node during the time interval (s, t]. Let \(D^{(n)}_i(s,t)\) be the amount of work that departed the ith node during (s, t]. Then, the total amount of work arriving to the ith node during (s, t] is

$$\begin{aligned} I^{(n)}_i(s,t) \triangleq A^{(n)}_i(s,t) + \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} D^{(n)}_j(s,t), \end{aligned}$$
(2)

recalling that \(\mathcal {N}_\mathrm{in}(i)\) is the set of inbound neighbors of i. Furthermore, for \(t\in \mathbb {R}\), Reich’s formula states that the amount of remaining work in the ith queue at time t (also called the ‘queue length’) is given by

$$\begin{aligned} Q^{(n)}_i(t)\triangleq \sup \limits _{s< t} \left\{ I^{(n)}_i(s,t) - n\mu _i (t-s) \right\} . \end{aligned}$$
(3)

Moreover, we evidently have

$$\begin{aligned} D^{(n)}_i(s,t)&= Q^{(n)}_i(s) + I^{(n)}_i(s,t) - Q^{(n)}_i(t). \end{aligned}$$
(4)
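
To make the dynamics in (2)–(4) concrete, the following minimal Python sketch simulates a slotted (discrete-time) analogue of the network, in which the supremum in Reich’s formula (3) is replaced by the corresponding Lindley recursion. The three-node topology, the rates, and the Gaussian noise level are illustrative assumptions only, and the Gaussian increments reflect the fluid abstraction used in this paper (in particular, they may be negative).

```python
import numpy as np

# Slotted sketch of the dynamics in Eqs. (2)-(4) for an illustrative
# three-node acyclic network; all parameter values are hypothetical.
rng = np.random.default_rng(0)

T = 100_000                      # number of time slots
mu = np.array([1.2, 1.0, 1.5])   # service rates (already scaled by n)
lam = np.array([0.5, 0.3, 0.2])  # exogenous mean input rates
p = np.array([[0.0, 0.6, 0.4],   # p[i, j]: fraction of node i's output routed to node j
              [0.0, 0.0, 0.7],   # (the remainder of each row leaves the network)
              [0.0, 0.0, 0.0]])

k = len(mu)
Q = np.zeros(k)                  # current queue lengths
Q_sum = np.zeros(k)

for _ in range(T):
    A = lam + 0.2 * rng.standard_normal(k)    # exogenous Gaussian work in this slot
    D = np.zeros(k)
    for i in range(k):                        # nodes are indexed in topological order
        I_i = A[i] + p[:, i] @ D              # Eq. (2): local plus routed upstream work
        Q_new = max(Q[i] + I_i - mu[i], 0.0)  # Lindley recursion, slotted analogue of (3)
        D[i] = Q[i] + I_i - Q_new             # Eq. (4): work departing node i in this slot
        Q[i] = Q_new
    Q_sum += Q

print("time-average queue lengths:", Q_sum / T)
```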

Since we are interested in the steady state of the queue lengths, we need to ensure that the service rate of each server is strictly larger than the total arrival rate to its node. This is enforced by imposing the following assumption.

Assumption 1

For each \(i\in \{1, \ldots ,k\}\), we have

$$\begin{aligned} \sum \limits _{r\in \mathcal {P}_1(i)} \lambda _{r_1} \Pi _r < \mu _i. \end{aligned}$$

Note that, even under Assumption 1, the existence and uniqueness of k-dimensional processes \(D^{(n)}(\cdot )\), \(I^{(n)}(\cdot )\), and \(Q^{(n)}(\cdot )\) that satisfy Eqs. (2), (3), and (4) is not immediate. This is shown in Sect. 3.1, by expressing them as functionals of the exogenous arrival processes \(A^{(n)}_1(\cdot ), \ldots ,A^{(n)}_k(\cdot )\).
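
As an illustration of how Assumption 1 can be checked in practice, the following Python sketch enumerates all directed paths ending at each node of a small acyclic network and evaluates \(\sum _{r\in \mathcal {P}_1(i)} \lambda _{r_1} \Pi _r\). The graph, rates, and routing fractions are illustrative assumptions, not data from this paper.

```python
# Sketch of a check for Assumption 1: for every node i, verify
# sum_{r in P_1(i)} lambda_{r_1} * Pi_r < mu_i.  The recursion below
# terminates because the graph is assumed acyclic.

mu  = {1: 1.2, 2: 1.0, 3: 1.5}                  # service rates (illustrative)
lam = {1: 0.5, 2: 0.3, 3: 0.2}                  # exogenous input rates (illustrative)
p   = {(1, 2): 0.6, (1, 3): 0.4, (2, 3): 0.7}   # p[(i, j)]: fraction routed from i to j

nodes = sorted(mu)
in_nbrs = {i: [j for (j, l) in p if l == i] for i in nodes}

def paths_to(i):
    """All directed paths r ending at node i, including the trivial path (i)."""
    result = [(i,)]
    for j in in_nbrs[i]:
        for r in paths_to(j):
            result.append(r + (i,))
    return result

def Pi(r):
    """Product of routing fractions along path r (equals 1 for the trivial path)."""
    out = 1.0
    for a, b in zip(r, r[1:]):
        out *= p[(a, b)]
    return out

for i in nodes:
    total = sum(lam[r[0]] * Pi(r) for r in paths_to(i))
    print(f"node {i}: total input rate {total:.3f} "
          f"{'<' if total < mu[i] else '>='} mu = {mu[i]}")
```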

2.3 Gaussian arrival processes

In this subsection, we specify the nature of the exogenous arrivals to the network. Let \(\{X^{(j)}(\cdot )\}_{j\in {\mathbb Z}_+}\) be a sequence of i.i.d. k-dimensional Gaussian processes with continuous sample paths and stationary increments, and with \(X^{(j)}(0)=(0, \ldots ,0)\), for all \(j\in {\mathbb Z}_+\). Each one of these k-dimensional processes is characterized by its drift vector \(\lambda =(\lambda _1, \ldots ,\lambda _k)\), where

$$\begin{aligned} \lambda \triangleq \mathbb {E}\left[ X^{(1)}(1)\right] , \end{aligned}$$

and by its covariance matrix \(\Sigma :\mathbb {R}^2\rightarrow \mathbb {R}^{k\times k}\), where

$$\begin{aligned} \Sigma _{i,j}(t,s) = {\mathbb C}\mathrm{ov}\left( X_i^{(1)}(t),\, X_j^{(1)}(s) \right) . \end{aligned}$$

Throughout this paper, we assume that the process \(A^{(n)}(\cdot )\triangleq \big ( A^{(n)}_1(\cdot ), \ldots ,A^{(n)}_k(\cdot ) \big )\) is a k-dimensional Gaussian process such that

$$\begin{aligned} A^{(n)}_i(\cdot ) = \sum \limits _{j=1}^n X_i^{(j)}(\cdot ), \end{aligned}$$
(5)

for all \(i\in \{1, \ldots ,k\}\). Therefore, \(A^{(n)}(\cdot )\) also has continuous sample paths and stationary increments, and satisfies \(A^{(n)}(0)=(0, \ldots ,0)\). Moreover, the k-variate process \(A^{(n)}(\cdot )\) has drift vector \(n\lambda \), and covariance matrix \(n\Sigma \).

Remark 1

Equation (5) corresponds to the setting where the arrival processes are a superposition of individual streams, which is also called the ‘many-sources regime’ [14].
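
As a small illustration of the scaling in (5), the following sketch samples one coordinate of \(A^{(n)}\) on a finite time grid. Since the sources are i.i.d. Gaussian, the superposition has drift \(n\lambda _i\) and covariance \(n\Sigma _{i,i}\), so a single draw from the scaled law suffices. The fractional-Brownian-motion covariance used below is only an illustrative choice of \(\Sigma _{i,i}\), and all numerical values are hypothetical.

```python
import numpy as np

n, lam_i, H, sigma = 100, 0.5, 0.75, 1.0
t = np.linspace(0.01, 10.0, 200)          # time grid (t > 0)

def fbm_cov(t, s):
    # fBm covariance with Hurst index H and variance parameter sigma^2
    return 0.5 * sigma**2 * (abs(t)**(2*H) + abs(s)**(2*H) - abs(t - s)**(2*H))

Sigma = fbm_cov(t[:, None], t[None, :])
L = np.linalg.cholesky(Sigma + 1e-10 * np.eye(len(t)))   # jitter for numerical stability
centred = L @ np.random.default_rng(1).standard_normal(len(t))

# One sample path of A_i^{(n)} on the grid: drift n*lambda_i plus sqrt(n) times
# a centered Gaussian path with covariance Sigma (so total covariance n*Sigma).
A_n = n * lam_i * t + np.sqrt(n) * centred
print(A_n[:5])
```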

Finally, we impose the following assumption, which is required for the generalized version of Schilder’s theorem (introduced in the next subsection) to hold.

Assumption 2

  1. (i)

    The covariance matrix \(\Sigma \) is differentiable.

  2. (ii)

    For every \(i,j\in \{1, \ldots ,k\}\), we have

    $$\begin{aligned} \lim \limits _{t^2+s^2\rightarrow \infty } \frac{\Sigma _{i,j}(t,s)}{t^2+s^2} = 0. \end{aligned}$$

2.4 Sample-path large deviations

In this paper, our aim is to study the limit

$$\begin{aligned} \mathbb {I}_i(b) \triangleq - \lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q^{(n)}_i > nb\right) , \end{aligned}$$
(6)

where \(Q^{(n)}_i\) is the steady-state queue length of the ith node, and \(\mathbb {I}:\mathbb {R}_+\rightarrow \mathbb {R}_+^k\) is a function that only depends on the server rates \(\mu \triangleq (\mu _1, \ldots ,\mu _k)\), on the routing matrix p, on the drift vector \(\lambda \), and on the covariance matrix \(\Sigma \). In order to do this, we rely on a sample-path large deviations principle for centered Gaussian processes, based on the generalized Schilder’s theorem. Before stating this theorem, we introduce its framework.

First, we introduce the sample-path space

$$\begin{aligned}&\Omega ^k \triangleq \left\{ \omega :\mathbb {R}\rightarrow \mathbb {R}^k, \,\,\right. \\&\qquad \qquad \left. \text {continuous},\,\, \omega (0)=(0, \ldots ,0),\,\,\lim \limits _{t\rightarrow \infty } \frac{\Vert \omega (t)\Vert _2}{1+|t|}=\lim \limits _{t\rightarrow -\infty } \frac{\Vert \omega (t)\Vert _2}{1+|t|}=0 \right\} , \end{aligned}$$

equipped with the norm

$$\begin{aligned} \Vert \omega \Vert _{\Omega ^k} \triangleq \sup \left\{ \frac{\Vert \omega (t)\Vert _2}{1+|t|} : t\in \mathbb {R} \right\} , \end{aligned}$$

which is a separable Banach space [15]. Next, we introduce the Reproducing Kernel Hilbert Space (rkhs) \(\mathcal {R}^k\subset \Omega ^k\) (see [16] for more details) induced by using the covariance matrix \(\Sigma (\cdot ,\cdot )\) as the kernel. In order to define it, we start from the smaller space

$$\begin{aligned} \mathcal {R}^k_* \triangleq \text {span}\left\{ \Sigma (t, \cdot )v : t\in \mathbb {R},\, v\in \mathbb {R}^k \right\} , \end{aligned}$$

with the inner product \(\langle \cdot , \cdot \rangle _{\mathcal {R}^k}\) defined as

$$\begin{aligned} \big \langle \Sigma (t,\cdot )u,\, \Sigma (s,\cdot )v \big \rangle _{\mathcal {R}^k} \triangleq u^\top \Sigma (t,s)v, \end{aligned}$$

for all \(t,s\in \mathbb {R}\) and \(u,v\in \mathbb {R}^k\). The closure of \(\mathcal {R}^k_*\) with respect to the topology induced by its inner product is the rkhs \(\mathcal {R}^k\). Using this inner product and its corresponding norm \(\Vert \cdot \Vert _{\mathcal {R}^k}\), we define a rate function by

$$\begin{aligned} \mathbb {I}(\omega ) \triangleq {\left\{ \begin{array}{ll} \frac{1}{2}\Vert \omega \Vert ^2_{\mathcal {R}^k}, &{} \text {if } \omega \in \mathcal {R}^k,\\ \infty , &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
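
For instance (a standard example, included here only for illustration), if \(k=1\) and \(\Sigma (t,s)=\frac{\sigma ^2}{2}\big (|t|+|s|-|t-s|\big )\) is the covariance of a Brownian motion with variance parameter \(\sigma ^2\), then \(\mathcal {R}^1\) consists of the absolutely continuous paths \(\omega \) with \(\omega (0)=0\) and square-integrable derivative, and

$$\begin{aligned} \mathbb {I}(\omega ) = \frac{1}{2\sigma ^2}\int _{\mathbb {R}} \dot{\omega }(t)^2\, \mathrm {d}t, \end{aligned}$$

which recovers the rate function in the classical version of Schilder’s theorem.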

Remark 2

In [12, 15], the authors defined an appropriate multi-dimensional rkhs as the product of single-dimensional spaces that use the individual variance functions as kernels. There, this could be done because the different coordinates of the multi-dimensional Gaussian process of interest were assumed independent. In our case, since the coordinates of our Gaussian process of interest need not be independent, we define the multi-dimensional space directly, using the whole covariance matrix as the kernel. When the coordinates are indeed independent, both definitions are equivalent.

Under the framework defined above, the following sample-path large deviations principle holds.

Theorem 1

(Generalized Schilder [17]) Under Assumption 2, the following holds:

  1. (i)

    For any closed set \(F\subset \Omega ^k\),

    $$\begin{aligned} \limsup \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \,\cdot \,}{n} \in F \right) \le -\inf \limits _{\omega \in F} \big \{ \mathbb {I}(\omega ) \big \}. \end{aligned}$$
  2. (ii)

    For any open set \(G\subset \Omega ^k\),

    $$\begin{aligned} \liminf \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( \frac{A^{(n)}(\cdot ) -n\lambda \,\cdot \,}{n} \in G \right) \ge -\inf \limits _{\omega \in G} \big \{ \mathbb {I}(\omega ) \big \}. \end{aligned}$$

Schilder’s theorem typically only gives implicit results, as it is often hard to explicitly compute the infimum over the set of sample paths. However, as in [7, 12, 15], we will leverage the properties of our rkhs to obtain explicit results.

3 Main results

In this section, we will establish large-deviations results for the steady-state queue-length distributions. In particular, we will use Theorem 1 to show that for any \(i\in \{1, \ldots ,k\}\), and for every \(b>0\), the limit

$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q^{(n)}_i > nb\right) \end{aligned}$$
(7)

exists, and to find (tight) bounds for it. The first step is to express this probability as a function of the Gaussian arrival processes (Sect. 3.1), and to show that the limit exists (Sect. 3.2). Second, we obtain a general lower bound for this limit (Sect. 3.3), and prove that it is tight under additional technical assumptions (Sect. 3.4). The arguments largely follow the structure of the analysis of the second queue in a tandem [7], but without the simplifications that come from having only two queues in tandem, with arrivals only to the first one.

3.1 Overflow probability as a function of the arrival processes

In this subsection, we obtain a set \(\mathcal {E}_i(b)\) of sample paths such that

$$\begin{aligned} \mathbb {P}\left( Q^{(n)}_i > nb\right) = \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \,\cdot \,}{n} \in \mathcal {E}_i(b)\right) . \end{aligned}$$

By Reich’s formula, we have

$$\begin{aligned} \mathbb {P}\left( Q^{(n)}_i> nb\right)&= \mathbb {P}\left( \sup \limits _{t< 0} \left\{ I^{(n)}_i(t,0) + n\mu _i t \right\}> nb\right) \\&= \mathbb {P}\left( \exists \, t< 0 : I^{(n)}_i(t,0) + n\mu _i t > nb\right) , \end{aligned}$$

where \(I^{(n)}_i(t,0)\) is the total amount of work that arrived to the ith queue in the time interval (t, 0]. If i is a node with no inbound neighbors, i.e., if \(\mathcal {N}_\mathrm{in}(i)=\emptyset \), we have that \(I^{(n)}_i(t,0)=-A^{(n)}_i(t)\), and thus

$$\begin{aligned}&\mathbb {P}\left( Q^{(n)}_i> nb\right) \\&\quad = \mathbb {P}\left( \exists \, t< 0 : n\mu _i t -A^{(n)}_i(t)> nb\right) \\&\quad = \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \,\cdot \,}{n} \in \Big \{f\in \Omega ^k : \exists \, t< 0,\, (\mu _i-\lambda _i) t - f_i(t) > b \Big \}\right) . \end{aligned}$$

In this case, a large-deviations analysis can be performed through a straightforward application of Schilder’s theorem (this is exactly the same as in the case of an isolated Gaussian queue [5]). However, in general, the input process is the sum of the local Gaussian arrival process and the departure processes of its inbound neighbors, which are not Gaussian. In the following lemma, we obtain the input process as a functional of the exogenous arrival processes of all the upstream nodes.

Lemma 1

For each \(i\in \{1, \ldots ,k\}\), and for all \(t<0\), we have

$$\begin{aligned} I^{(n)}_i(t,0)&= A^{(n)}_i(t,0) + \sup \limits _{{\varvec{t}}\in \mathcal {T}_i(t)} \left\{ \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r},0)+ n\mu _{r_1}({\varvec{t}}_{r}-{\varvec{t}}_{r_+}) \right] \Pi _r \right\} \nonumber \\&\quad - \sup \limits _{{\varvec{s}}\in \mathcal {T}_i(0)} \left\{ \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{s}}_{r},0)+ n\mu _{r_1}({\varvec{s}}_{r}-{\varvec{s}}_{r_+}) \right] \Pi _r \right\} , \end{aligned}$$
(8)

where

$$\begin{aligned} \mathcal {T}_i(t)&\triangleq \Big \{ {\varvec{t}}\in \mathbb {R}^{\mathcal {P}_1(i)}: {\varvec{t}}_i = t \quad \text {and} \quad {\varvec{t}}_r < {\varvec{t}}_{r_+}, \,\, \forall \, r\in \mathcal {P}_2(i) \Big \}. \end{aligned}$$

The proof is given in Appendix A, and consists of solving a recursive equation on the input processes by using induction on the maximum length of paths that end in node i.

Remark 3

Let \({\varvec{t}}^*\) and \({\varvec{s}}^*\) be finite optimizers of the two suprema in (8) over the closure of their domains. These have the following interpretation: for each path \(r\in \mathcal {P}_2(i)\), the time \({\varvec{t}}^*_{r}\) (respectively, \({\varvec{s}}^*_{r}\)) is the starting point of the busy period of the \(r_1\)th queue that contains the time \({\varvec{t}}^*_{r_+}\) (respectively, \({\varvec{s}}^*_{r_+}\)). Then, since \({\varvec{t}}_i=t <0\) and \({\varvec{s}}_i=0\), it follows that \({\varvec{t}}^*_{r}\le {\varvec{s}}^*_{r}\), for all \(r\in \mathcal {P}_1(i)\). Combining this with (8), and using the continuity of \(A^{(n)}(\cdot )\), we obtain

$$\begin{aligned} I^{(n)}_i(t,0)&= A^{(n)}_i(t,0) - n\left( \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} \mu _j p_{j,i} \right) t \\&\quad + \sup \limits _{{\varvec{t}}\in \mathcal {T}_i}\left\{ \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r},0)\right. \right. \\&\quad \left. \left. +\,\, n \left( \mu _{r_1} - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} \mu _j p_{j,r_1} \right) {\varvec{t}}_{r} \right] \Pi _r \right. \\&\quad \left. - \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \left\{ \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{s}}_{r},0)\right. \right. \right. \\&\quad \left. \left. \left. +\,\, n\left( \mu _{r_1} - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} \mu _j p_{j,r_1} \right) {\varvec{s}}_{r} \right] \Pi _r \right\} \right\} , \end{aligned}$$

where

$$\begin{aligned} \mathcal {T}_i&\triangleq \Big \{ {\varvec{t}}\in \mathbb {R}^{\mathcal {P}_1(i)}: {\varvec{t}}_i< 0 \quad \text {and} \quad {\varvec{t}}_r< {\varvec{t}}_{r_+}, \,\, \forall \, r\in \mathcal {P}_2(i) \Big \},\\ \mathcal {S}_i({\varvec{t}})&\triangleq \Big \{ {\varvec{s}}\in \mathbb {R}^{\mathcal {P}_1(i)}: {\varvec{s}}_i=0 \quad \text {and} \quad {\varvec{t}}_{r}< {\varvec{s}}_r < {\varvec{s}}_{r_+}, \,\, \forall \, r\in \mathcal {P}_2(i) \Big \}. \end{aligned}$$

Note that the continuity of \(A^{(n)}(\cdot )\) is what allows us to have the condition \({\varvec{t}}_{r} < {\varvec{s}}_{r}\) instead of \({\varvec{t}}_{r} \le {\varvec{s}}_{r}\). This distinction will be convenient later.

We now state the main result of this subsection.

Theorem 2

For each \(i\in \{1, \ldots ,k\}\), and for every \(b>0\), we have

$$\begin{aligned} \mathbb {P}\left( Q_i^{(n)} > b n \right)&= \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \, \cdot \,}{n} \in \mathcal {E}_i(b) \right) , \end{aligned}$$
(9)

where

$$\begin{aligned} \mathcal {E}_i(b)&\triangleq \left\{ f\in \Omega ^k: \exists \, {\varvec{t}}\in \mathcal {T}_i : \forall \, {\varvec{s}}\in \mathcal {S}_i({\varvec{t}}),\,\, f_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r \right. \\&\qquad \left. > b - \sum \limits _{r\in \mathcal {P}_1(i)} \left[ \left( \mu _{r_1}-\lambda _{r_1} - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} \mu _j p_{j,r_1}\right) \big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big )\right] \Pi _r \right\} . \end{aligned}$$

The proof follows immediately from Reich’s formula and Lemma 1, and it is given in Appendix B.

3.2 Decay rate of the overflow probability

In this subsection, we establish the existence of the limit

$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) , \end{aligned}$$

for all \(b>0\). Recall that Theorem 2 states that \(\mathbb {P}( Q_i^{(n)} > b n)\) satisfies (9), where \(\mathcal {E}_i(b)\) is an open set of the path space \(\Omega ^k\). Then, by Schilder’s theorem (Theorem 1), we have

$$\begin{aligned} -\liminf \limits _{n\rightarrow \infty } \frac{1}{n}\log \left( \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \,\cdot \,}{n} \in \mathcal {E}_i(b) \right) \right)&\le \inf \limits _{f\in \mathcal {E}_i(b)} \big \{ \mathbb {I}(f) \big \}, \end{aligned}$$

and

$$\begin{aligned} -\limsup \limits _{n\rightarrow \infty } \frac{1}{n}\log \left( \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \,\cdot \,}{n} \in \overline{\mathcal {E}_i(b)} \right) \right)&\ge \inf \limits _{f\in \overline{\mathcal {E}_i(b)}} \big \{ \mathbb {I}(f) \big \}. \end{aligned}$$

Then, the existence of the limit follows once we show that \(\mathcal {E}_i(b)\) is an \(\mathbb {I}\)-continuity set, which is stated in the following proposition. The proof follows along the lines of the proof of [7, Thm. 3.1], and it is thus omitted.

Proposition 1

For each \(i\in \{1, \ldots ,k\}\), and for every \(b>0\), we have

$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \left( \mathbb {P}\left( Q_i^{(n)} > b n \right) \right) = \inf \limits _{f\in \overline{\mathcal {E}_i(b)}} \big \{ \mathbb {I}(f) \big \} = \inf \limits _{f\in \mathcal {E}_i(b)} \big \{ \mathbb {I}(f) \big \}. \end{aligned}$$
(10)

Now that the existence of the decay rate of interest in (6) has been established, in the following subsections we focus on finding lower and upper bounds on it.

3.3 Lower bound on the decay rate

In this subsection, we present a general lower bound for the asymptotic exponential decay rate of the overflow probability in steady state. We start by introducing some notation. Given a vector v and a scalar a, we write \(v-a\) for \(v-(a, \ldots ,a)\). For each node \(i\in \{1, \ldots ,k\}\), we denote

$$\begin{aligned} \hat{A}_i(t)&\triangleq \frac{A_i^{(n)}(t) - n\lambda _i t}{\sqrt{n}}. \end{aligned}$$

Note that \(\hat{A}(\cdot )\) is a k-dimensional Gaussian process with zero mean and covariance matrix \(\Sigma \). For each node \(i\in \{1, \ldots ,k\}\), define

$$\begin{aligned} \overline{\lambda }_i&\triangleq \sum \limits _{r\in \mathcal {P}_1(i)} \lambda _{r_1} \Pi _r, \\ \bar{A}_i({\varvec{s}},{\varvec{t}})&\triangleq \sum \limits _{r\in \mathcal {P}_1(i)} \Big [ \hat{A}_{r_1}({\varvec{t}}_{r}) - \hat{A}_{r_1}({\varvec{s}}_{r}) \Big ] \Pi _r. \end{aligned}$$

Moreover, let us define the functions

$$\begin{aligned} k^b_i({\varvec{t}},{\varvec{s}})&\triangleq \mathbb {E}\left[ \left. \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{s}}) \,\right| \, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) =b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \right] ,\\ h^b_i({\varvec{t}},{\varvec{s}})&\triangleq \mathbb {E}\left[ \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{s}}) \,\left| \, \bar{A}_i({\varvec{s}},{\varvec{t}}) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \right. \right] , \end{aligned}$$

where

$$\begin{aligned} c_i({\varvec{t}},{\varvec{s}})&\triangleq \left( \overline{\lambda }_i - \lambda _i - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} \mu _j p_{j,i} \right) {\varvec{t}}_i\\&\quad +\sum \limits _{r\in \mathcal {P}_2(i)} \left( \mu _{r_1}-\lambda _{r_1} - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} \mu _j p_{j,r_1} \right) \big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big ) \Pi _r. \end{aligned}$$

Note that \(c_i({\varvec{t}},{\varvec{t}}-{\varvec{t}}_i)=0\).

Using the above notation, we now state our lower bound.

Theorem 3

Under Assumptions 1 and 2, for each \(i\in \{1, \ldots ,k\}\) and for every \(b > 0\),

$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) \ge \inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \Big \{ \mathbb {I}_i^b({\varvec{t}},{\varvec{s}}) \Big \}, \end{aligned}$$

where

$$\begin{aligned} \mathbb {I}_i^b({\varvec{t}},{\varvec{s}}) \triangleq {\left\{ \begin{array}{ll} \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )}, \quad \text {if } \ k^b_i({\varvec{t}},{\varvec{s}}) < c_i({\varvec{t}},{\varvec{s}}), \\ \qquad \qquad \quad \qquad \qquad \qquad \qquad \text {or} \ \quad {\varvec{s}}={\varvec{t}}-{\varvec{t}}_i, \\ \frac{\Big [b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{s}},{\varvec{t}}) \Big )}, \quad \text {if } \ h^b_i({\varvec{t}},{\varvec{s}})> c_i({\varvec{t}},{\varvec{s}}), \\ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )} + \frac{\Big [ k^b_i({\varvec{t}},{\varvec{s}})- c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{s}},{\varvec{t}}) \,\Big |\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big )}, \quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
(11)

The proof is given in Appendix C, and it essentially consists of two steps. First, we decompose the event \(\mathcal {E}_i(b)\) given in Theorem 2 as a union of intersections of simpler events that only involve the sample paths at fixed times, and we upper bound the probability of each intersection by the probability of its least likely event. Then, we use Cramér’s theorem to obtain the decay rate of this least likely event by solving the additional quadratic optimization problem that arises from its application.
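
As a sanity check (a specialization included here for illustration), consider a node i with no inbound neighbors, so that \(\mathcal {P}_1(i)=\{(i)\}\) and \(\mathcal {P}_2(i)=\emptyset \). Then \(\overline{\lambda }_i=\lambda _i\), \(c_i\equiv 0\), \(\bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}})=\hat{A}_i({\varvec{t}}_i)\), and \(\mathcal {S}_i({\varvec{t}})\) reduces to the single point \({\varvec{t}}-{\varvec{t}}_i\), so that only the first case of (11) is active. The bound of Theorem 3 then reads

$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) \ge \inf \limits _{t<0} \left\{ \frac{\Big [ b - \left( \mu _i-\lambda _i\right) t \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \hat{A}_i(t) \Big )} \right\} , \end{aligned}$$

which is the decay rate of an isolated queue with Gaussian input [5]; moreover, the condition of Theorem 4 below is vacuous in this case, so the bound is tight.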

Remark 4

As part of the proof of Theorem 3, it is established that the conditions \(k^b_i({\varvec{t}},{\varvec{s}}) < c_i({\varvec{t}},{\varvec{s}})\) (or \({\varvec{s}}={\varvec{t}}-{\varvec{t}}_i\)) and \(h^b_i({\varvec{t}},{\varvec{s}})>c_i({\varvec{t}},{\varvec{s}})\) cannot be satisfied at the same time. As a result, the three cases in the definition of \(\mathbb {I}_i^b({\varvec{t}},{\varvec{s}})\) are disjoint.

Remark 5

The lower bound in Theorem 3 generalizes the lower bound given in [7, Corollary 3.5], not only by generalizing the network structure from a set of tandem queues to any acyclic network of queues, but also by removing a concavity assumption on the square root of the variance of the input processes. However, the removal of this assumption makes the expression of the lower bound more convoluted, even if we restrict it to the case of a pair of queues in tandem.

Remark 6

It is worth highlighting that, even if the bound of Theorem 3 is not tight, it still yields an asymptotic upper bound on the overflow probability, which can be used as a performance guarantee in applications.

3.4 Tightness of the lower bound

In this subsection, we obtain conditions under which the lower bound in Theorem 3 is tight. We present three results, one for each of the cases in the definition of \(\mathbb {I}_i^b({\varvec{t}},{\varvec{s}})\) in (11), with different technical conditions for each case.

Let \(({\varvec{t}}^*,{\varvec{s}}^*)\) be an optimizer of (11) over the closure of its domain. We first establish that, if the optimum of (11) is achieved in the first case, then the lower bound of Theorem 3 is tight under an additional technical condition. This is formalized in the following theorem.

Theorem 4

Under Assumptions 1 and 2, the following holds. If

$$\begin{aligned} k^b_i\left( {\varvec{t}}^*,{\varvec{s}}\right) < c_i\left( {\varvec{t}}^*,{\varvec{s}}\right) , \end{aligned}$$
(12)

for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\) such that \({\varvec{s}} \ne {\varvec{t}}^*- {\varvec{t}}^*_i\), then

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) = -\inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \left\{ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \right\} . \end{aligned}$$

The proof is given in Appendix D, and it essentially consists of two steps. First, we identify a most likely sample path in the least likely event of the intersection given in the decomposition of the event \(\mathcal {E}_i(b)\) that was used in the proof of Theorem 3. Then, we show that under the assumptions imposed this most likely sample path is in all the sets featuring in the intersection, thus implying optimality in \(\mathcal {E}_i(b)\).

Since the condition in (12) involves an optimizer \({\varvec{t}}^*\) of (11), it is generally hard to verify. In the following lemma, we present a sufficient condition that is easier to verify.

Lemma 2

A sufficient condition for (12) to hold is that

$$\begin{aligned} k^b_i\left( \tilde{\varvec{t}},{\varvec{s}}\right) < c_i\left( \tilde{\varvec{t}},{\varvec{s}}\right) , \end{aligned}$$
(13)

for all \({\varvec{s}}\in \mathcal {S}_i\left( \tilde{\varvec{t}}\right) \) such that \({\varvec{s}}\ne \tilde{\varvec{t}}- \tilde{\varvec{t}}_i\), where

$$\begin{aligned} \tilde{\varvec{t}} \in \underset{{\varvec{t}}\in \overline{\mathcal {T}_i}}{\arg \min }\left\{ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \right\} . \end{aligned}$$

The proof is given in Appendix E.

Remark 7

Although the condition in (13) looks almost the same as the original one in (12), the key simplification is that (13) only involves \(\tilde{\varvec{t}}\), which is an optimizer of an easier optimization problem, instead of \({\varvec{t}}^*\).

We now present the second result of this subsection. It asserts that, if the optimum of (11) is achieved in the second case, then the lower bound of Theorem 3 is tight under an additional technical condition.

Theorem 5

Under Assumptions 1 and 2, the following holds: Suppose that

$$\begin{aligned}&\mathbb {E}\left[ \bar{A}_i({\varvec{s}},{\varvec{s}}^*) \,\left| \, \bar{A}_i({\varvec{s}}^*,{\varvec{t}}^*) = b -\left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i^*\right. \right. \\&\quad \qquad \qquad \qquad \qquad \qquad \qquad \quad \left. \left. - c_i({\varvec{t}}^*,{\varvec{s}}^*) \right. \right] \ge c_i({\varvec{t}}^*,{\varvec{s}}^*) - c_i({\varvec{t}}^*,{\varvec{s}}), \end{aligned}$$

for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\). If

$$\begin{aligned} h^b_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) > c_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \end{aligned}$$

then

$$\begin{aligned}&\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) \\&\quad = -\inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \left\{ \frac{\Big [b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } \right\} . \end{aligned}$$

The proof is analogous to the proof of Theorem 4, and it is thus omitted.

Remark 8

Note that the second condition in Theorem 5 is satisfied if the first one is satisfied with strict inequality for \({\varvec{s}}={\varvec{t}}^*-{\varvec{t}}_i^*\).

Finally, we show that if the optimum of (11) is achieved in the third case, then the lower bound of Theorem 3 is tight under a different additional technical condition.

Theorem 6

Under Assumptions 1 and 2, the following holds: Suppose that

$$\begin{aligned}&\mathbb {E}\left[ \left. \bar{A}_i({\varvec{s}},{\varvec{s}}^*) \,\right| \, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) = b - \left( \mu _i-\overline{\lambda }_i\right) t^*_i;\,\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{s}}^*)\right. \nonumber \\&\quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \,\,\left. =c_i({\varvec{t}}^*,{\varvec{s}}^*) \right] \ge c_i({\varvec{t}}^*,{\varvec{s}}^*) - c_i({\varvec{t}}^*,{\varvec{s}}), \end{aligned}$$
(14)

for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\). If

$$\begin{aligned} h^b_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \le c_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) , \qquad \text {and} \qquad k^b_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \ge c_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) , \end{aligned}$$

then

$$\begin{aligned}&\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) \\&\quad = -\inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \left\{ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )}\right. \\&\qquad \left. + \frac{\Big [ k^b_i({\varvec{t}},{\varvec{s}})- c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{s}},{\varvec{t}}) \,\Big |\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big )} \right\} . \end{aligned}$$

The structure of the proof is the same as the proof of Theorem 4, and it is given in Appendix F.

4 Example: equivalence to a single server queue

In this section, we show that if the input process is a multivariate fractional Brownian motion with non-short-range dependence and nonnegative correlation between its coordinates, and if the service rates are sufficiently large, then the large deviations behavior of any fixed queue in the network is the same as if all inputs to upstream queues were inputs to the queue itself. This phenomenon was also observed in [7] for the second queue in a tandem, and here we generalize the conditions under which it occurs.

4.1 Preliminaries on multivariate fractional Brownian motions

Consider the case where the exogenous arrival process \(A^{(n)}(\cdot )\) is a multivariate fractional Brownian motion (mfBm). Since each coordinate is a real-valued fBm, for each \(i\in \{1, \ldots ,k\}\), and for every \(t<s<0\), we have

$$\begin{aligned} {\mathbb C}\mathrm{ov}\left( A^{(n)}_i(t),\,\, A^{(n)}_i(s)\right) = \frac{\sigma _i^2}{2} \Big [ |t|^{2H_i} + |s|^{2H_i} - |s-t|^{2H_i} \Big ], \end{aligned}$$

where \(H_i\in (0,1)\) is its Hurst index, and

$$\begin{aligned} \sigma _i \triangleq \sqrt{ \mathbb {V}\text {{ar}}\left( A^{(n)}_i(1)\right) } \end{aligned}$$

is its standard deviation. Furthermore, it is known [18] that for each \(i,j \in \{1, \ldots ,k\}\), and for every \(t<s<0\), we have

$$\begin{aligned}&{\mathbb C}\mathrm{ov}\left( A^{(n)}_i(t),\,\, A^{(n)}_j(s)\right) \nonumber \\&\quad = {\left\{ \begin{array}{ll} \frac{\sigma _i \sigma _j}{2} \Big [ (\rho _{i,j}-\eta _{i,j})|t|^{H_i+H_j} + (\rho _{i,j}+\eta _{i,j})|s|^{H_i+H_j} - (\rho _{i,j}-\eta _{i,j})|s-t|^{H_i+H_j} \Big ], \\ \quad \text { if } H_i+H_j\ne 1, \\ \frac{\sigma _i \sigma _j}{2} \Big [ \rho _{i,j}\big (|t| + |s| - |s-t| \big ) + \eta _{i,j}\big ( s\log |s| - t\log |t| -(s-t)\log |s-t| \big ) \Big ], \\ \quad \text { if } H_i+H_j= 1, \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} \rho _{i,j} \triangleq {\mathbb C}\mathrm{orr}\,\left( A^{(n)}_i(1),\,\, A^{(n)}_j(1)\right) \end{aligned}$$

are their correlation coefficients, and \(\eta _{i,j}=-\eta _{j,i}\in \mathbb {R}\) represents the inter-correlation in time between the two coordinates. Note that, contrary to the single-dimensional fBm, an mfBm need not be time-reversible. In particular, an mfBm is time-reversible if and only if \(\eta _{i,j}=0\) for all i, j [19, Prop. 6]. Moreover, the parameters \(\eta _{i,j}\) have the following interpretation [19]:

  1. (i)

    If the one-dimensional fBms are short-range dependent (i.e., if \(H_i,H_j<1/2\)), then they are either short-range interdependent if \(\rho _{i,j}\ne 0\) or \(\eta _{i,j}\ne 0\), or independent if \(\rho _{i,j}=\eta _{i,j}=0\). This also holds when \(H_i+H_j<1\), even if one of them is larger than or equal to 1/2.

  2. (ii)

    If the one-dimensional fBms are long-range dependent (i.e., if \(H_i,H_j>1/2\)), then they are either long-range interdependent if \(\rho _{i,j}\ne 0\) or \(\eta _{i,j}\ne 0\), or independent if \(\rho _{i,j}=\eta _{i,j}=0\). This also holds when \(H_i+H_j>1\), even if one of them is smaller than or equal to 1/2.

  3. (iii)

    If the one-dimensional fBms are Brownian motions (i.e., if \(H_i=H_j=1/2\)), then they are either long-range interdependent if \(\eta _{i,j}\ne 0\), or independent if \(\eta _{i,j}=0\). This also holds whenever \(H_i+H_j=1\), even if neither of them is equal to 1/2.
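
For reference, the following sketch evaluates the cross-covariance formula displayed above in the non-logarithmic case \(H_i+H_j\ne 1\), for \(t<s<0\) as in the display; all parameter values are illustrative assumptions.

```python
def mfbm_cov(t, s, H_i, H_j, sigma_i, sigma_j, rho_ij, eta_ij):
    """Cov(A_i(t), A_j(s)) for an mfBm, assuming H_i + H_j != 1 and t < s < 0."""
    Hs = H_i + H_j
    return 0.5 * sigma_i * sigma_j * (
        (rho_ij - eta_ij) * abs(t) ** Hs
        + (rho_ij + eta_ij) * abs(s) ** Hs
        - (rho_ij - eta_ij) * abs(s - t) ** Hs
    )

# Example: two nonnegatively correlated, long-range-dependent coordinates
print(mfbm_cov(t=-2.0, s=-1.0, H_i=0.7, H_j=0.8,
               sigma_i=1.0, sigma_j=1.5, rho_ij=0.3, eta_ij=0.0))
```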

4.2 Nonnegatively correlated, non-short-range-dependent inputs

We now present the main result of this section.

Theorem 7

Fix some node i. Suppose that \(H_j=H \ge 1/2\), for all \(j\in \{1, \ldots ,k\}\), that \(\eta _{j,l}=0\), for all \(j,l\in \{1, \ldots ,k\}\), and that \(\rho _{j,l}\ge 0\), for all \(j,l\in \{1, \ldots ,k\}\). Moreover, suppose that

$$\begin{aligned}&\min \left\{ \mu _j - \lambda _j -\sum \limits _{l\in \mathcal {N}_\mathrm{in}(j)} \mu _l p_{l,j} : j\ne i \right\} >\nonumber \\&\quad \sup \limits _{\alpha \in (0,1)^{|\mathcal {P}_2(i)|}} \left\{ \frac{ \sum \limits _{r\in \mathcal {P}_2(i)} \left( \sigma _{r_1}\sigma _i\rho _{r_1,i} + \sum \limits _{r'\in \mathcal {P}_2(i)} \sigma _{r_1}\sigma _{r'_1}\rho _{r_1,r'_1} \Pi _{r'}\right) \left[ \left( \alpha _r\right) ^{2H}+ 1 - \left( 1- \alpha _r\right) ^{2H}\right] \Pi _r }{ \left( \sum \limits _{r\in \mathcal {P}_2(i)} \alpha _r \Pi _r \right) } \right\} \left( \frac{\mu _i-\overline{\lambda }_i}{2H \overline{\sigma }_i^2}\right) , \end{aligned}$$
(15)

where

$$\begin{aligned} \overline{\sigma }_i^2 \triangleq \sigma _i^2 + \sum \limits _{r\in \mathcal {P}_2(i)} \left( 2 \sigma _{r_1}\sigma _i\rho _{r_1,i} + \sum \limits _{r'\in \mathcal {P}_2(i)} \sigma _{r_1}\sigma _{r'_1}\rho _{r_1,r'_1} \Pi _{r'}\right) \Pi _r. \end{aligned}$$

Then, for every \(b>0\),

$$\begin{aligned} - \lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q^{(n)}_i > nb \right) = \frac{1}{2\overline{\sigma }_i^2}\left( \frac{b}{1-H} \right) ^{2-2H}\left( \frac{\mu _i - \overline{\lambda }_i}{H} \right) ^{2H}. \end{aligned}$$

The proof is given in Appendix G, and it amounts to checking that Theorem 4 applies in this case, and then computing the exact decay rate.
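
As a small numerical illustration, the closed-form decay rate of Theorem 7 can be evaluated directly once \(\mu _i\), \(\overline{\lambda }_i\), \(\overline{\sigma }_i^2\), and H are known (for instance, by enumerating paths as sketched in Sect. 2.2); the values below are hypothetical.

```python
def decay_rate(b, H, mu_i, bar_lambda_i, bar_sigma2_i):
    """Right-hand side of Theorem 7."""
    return (1.0 / (2.0 * bar_sigma2_i)) \
        * (b / (1.0 - H)) ** (2.0 - 2.0 * H) \
        * ((mu_i - bar_lambda_i) / H) ** (2.0 * H)

print(decay_rate(b=5.0, H=0.75, mu_i=1.5, bar_lambda_i=0.82, bar_sigma2_i=2.0))
```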

Remark 9

Note that this decay rate is the same as the one that we would obtain in a single-server queue with processing rate \(\mu _i\) and input

$$\begin{aligned} \sum \limits _{r\in \mathcal {P}_1(i)} A^{(n)}_{r_1}(\cdot )\Pi _r. \end{aligned}$$

This means that, under the assumptions of Theorem 7, the queues upstream of node i are ‘transparent.’ In particular, this implies that the most likely overflow path is one along which all upstream queues remain empty.

Remark 10

In the case of a pair of queues in tandem with arrivals only to the first queue, the condition in (15) is the same as the one obtained in the analysis of tandem queues in [7].

5 Conclusions

We have considered an acyclic network of queues with (possibly correlated) Gaussian inputs and static routing, and characterized the large deviations behavior of the steady-state queue length in each queue of the network. We achieved this by defining an appropriate multi-dimensional reproducing kernel Hilbert space, and using Schilder’s theorem to obtain lower and upper bounds for the asymptotic exponential decay rate. This generalizes previous results, which focused on isolated queues and two-queue tandem systems (with arrivals only to the first queue).

While the results that we obtain are quite general both in terms of the network structure and in terms of the correlation structure among the arrival processes to the different nodes, there are still interesting open problems. For instance:

  1. (i)

    While we considered essentially only single-class traffic with a deterministic split of the work departing from each server, it would be interesting to extend our results to multi-class networks, where the servers are shared by using, for example, the generalized processor sharing discipline [12].

  2. (ii)

    While we only obtained large-deviations results for each queue separately, it would be interesting to obtain similar results for the joint queue lengths.