1 Introduction

Atmospheric cluster formation processes [22], in which certain species of gas molecules (called monomers) can stick together and eventually produce macroscopic particles, are an important component in cloud formation and radiation scattering. These cluster formation processes are modelled with the so-called General Dynamic Equation (GDE) [22]. Under atmospheric conditions, the particle clusters are often aggregates of various molecular species, formed by collisions of several different monomer types; cf. [36, 44] for more details and examples. Accordingly, in the GDE one needs to label clusters not only by the total number of monomers in them but also by counting each monomer type. This results in multicomponent labels for the concentration vector, with nonlinear interactions between the components. Another feature of the GDE, largely absent from previous mathematical work on coagulation equations, is the presence of an external monomer source term. Such sources are nevertheless important for atmospheric phenomena (for more details about the chemical and physical origin and relevance of the sources we refer for instance to [16, 27]), although they have been barely considered in the mathematical literature.

In this work, we focus on the effect that the addition of a source term has on solutions of standard one-component coagulation equations. This is by no means to imply that multicomponent coagulation equations would not have interesting new mathematical features, but these will be the focus of a separate work. Here, we consider only one species of monomers, and we are interested in the distribution of the concentration of clusters formed out of these monomers. Let \(n_\alpha \ge 0\) denote the concentration of clusters with \(\alpha \in {{\mathbb {N}}}\) monomers.

Considering the regime in which the precise spatial structure and loss of particles by deposition are not important, the GDE yields the following nonlinear evolution equation for the concentrations \(n_\alpha \):

$$\begin{aligned}&\partial _{t}n_{\alpha } =\frac{1}{2}\sum _{0< \beta<\alpha }K_{\alpha -\beta ,\beta }n_{\alpha -\beta }n_{\beta }-n_{\alpha }\sum _{\beta>0}K_{\alpha ,\beta }n_{\beta } \nonumber \\&\quad +\,\sum _{\beta >0}\Gamma _{\alpha +\beta ,\alpha }n_{\alpha +\beta }-\frac{1}{2} \sum _{0<\beta <\alpha }\Gamma _{\alpha ,\beta }n_{\alpha }+ s_{\alpha }\,. \end{aligned}$$
(1.1)

The coefficients \(K_{\alpha ,\beta }\) describe the coagulation rate at which two clusters of sizes \(\alpha \) and \(\beta \) join into a cluster of size \(\alpha +\beta \), as dictated by mass conservation. Analogously, the coefficients \(\Gamma _{\alpha ,\beta }\) describe the fragmentation rate of clusters of size \(\alpha \) into two clusters of sizes \(\beta \) and \(\alpha -\beta \). We denote by \(s_{\alpha }\) the (external) source of clusters of size \(\alpha \). In applications, typically only monomers or small clusters are being produced, so we assume that the function \(\alpha \mapsto s_\alpha \) has a bounded, non-empty support. In what follows, we make one further simplification and consider only cases in which fragmentation can also be ignored, \(\Gamma _{\alpha ,\beta }=0\); the reasoning behind this choice is discussed in Section 1.1. An overview of the currently available mathematical results for coagulation-fragmentation models can be found in [9, 30].

Therefore, we are led to study the evolution equation

$$\begin{aligned} \partial _{t}n_{\alpha }=\frac{1}{2}\sum _{\beta <\alpha }K_{\alpha -\beta ,\beta }n_{\alpha -\beta }n_{\beta }-n_{\alpha }\sum _{\beta >0}K_{\alpha ,\beta }n_{\beta } + s_{\alpha }\,. \end{aligned}$$
(1.2)

In this paper, we are concerned with the existence or nonexistence of steady state solutions to (1.2) for general coagulation rate kernels K, including in particular the physically relevant kernels discussed in Section 1.1. The source is here assumed to be localized on the “left boundary” of the system, that is, at small cluster sizes. Such source terms often lead to nontrivial stationary solutions towards which the time-dependent solutions evolve as time increases. These stationary solutions are nonequilibrium steady states, since they involve a steady flux of matter from the source into the system. The characterization of nonequilibrium stationary states exhibiting transport phenomena is one of the central problems in statistical mechanics.

The main result of this paper gives a contribution in this direction. More precisely, we address the question of existence of such stationary solutions to (1.2). We prove that for a large class of kernels—including in particular the diffusion limited aggregation kernel given in (1.9)—stationary solutions to (1.2) yielding a constant flux of monomers towards clusters with large sizes exist. By contrast, for a different class of kernels—including the free molecular coagulation kernel with the form (1.7)—such stationary solutions do not exist.

In the case of collision kernels for which stationary nonequilibrium solutions to (1.2) exist, we can even compute the rate of formation of macroscopic particles, which we identify here with infinitely large particles, from an analysis of the properties of these stationary solutions, cf. Section 2.1. We find that in this case the main mechanism of transport of monomers to large clusters corresponds to coagulation between clusters with comparable sizes, cf. Lemma 6.1, Section 6.

The non-existence of such stationary solutions under a monomer source for a general class of coagulation kernels allowing coagulation at arbitrary cluster sizes is one of the novelties of our work. It has been pointed out in Remark 8.1 of [12] that for kernels \(K_{\alpha ,\beta }\) which vanish if \(\alpha >1\) or \(\beta >1\), and sources \(s_{\alpha }\) which are different from zero for \(\alpha {\geqq }2\), stationary solutions of (1.2) cannot exist. Although the example in [12] refers to the continuous counterpart of (1.2) (cf. (1.3)), the argument works similarly for discrete kernels. The example of non-existence of stationary solutions in [12] relies on the fact that coagulation does not take place for sufficiently large particles and therefore cannot compensate for the addition of particles due to the source term \(s_{\alpha }\). In the class of kernels considered in this paper coagulation takes place for all particle sizes, and therefore the nonexistence of steady states must be due to a different reason. At first glance this result might appear counterintuitive, since this non-existence result includes kernels for which the dynamics seems to be well-posed. Hence, one needs to explain what happens at large times to the monomers injected into the system. Our results suggest that for such kernels the aggregation of monomers with large clusters is so fast that it cannot be compensated by the constant addition of monomers described by the injection term \(s_\alpha \). The cluster concentration \(n_{\alpha }\) would then converge to zero as \(t\rightarrow \infty \) for bounded \(\alpha \), even though \(n_{\alpha }=0\) is not a stationary solution to (1.2) when \((s_{\beta })\ne 0\).

We remark that our non-existence result for stationary solutions includes in particular the so-called free molecular kernel (cf. (1.6) below), derived from kinetic theory, which is commonly used for microscopic computations involving aerosols (cf. for instance [36]).

In this paper we consider, in addition to the stationary solutions of (1.2), the stationary solutions of the continuous counterpart of (1.2),

$$\begin{aligned}&\partial _{t}f(x,t)=\frac{1}{2}\int _{0}^{x}K\left( x-y,y\right) f\left( x-y,t\right) f\left( y,t\right) \text {d}y\nonumber \\&\quad -\,\int _{0}^{\infty }K\left( x,y\right) f\left( x,t\right) f\left( y,t\right) \text {d}y+\eta \left( x\right) . \end{aligned}$$
(1.3)

In fact, we will allow f and \(\eta \) in this equation to be positive measures. This will make it possible to study the continuous and discrete equations simultaneously, using Dirac \(\delta \)-functions to connect \(f(\xi )\) and \(n_\alpha \) via the formula \(f(\xi ) d\xi = \sum _{\alpha =1}^\infty n_\alpha \delta (\xi - \alpha )d\xi \).

In most of the mathematical studies of the coagulation equation to date, it has been assumed that the injection terms \(s_{\alpha }\) and \(\eta \left( x\right) \) are absent. In the case of homogeneous kernels, that is, kernels satisfying

$$\begin{aligned} K(rx,ry)=r^\gamma K(x,y) \end{aligned}$$
(1.4)

for any \(r>0\), the long time asymptotics of the solutions of (1.3) with \(\eta \left( x\right) =0\) might be expected to be self-similar for a large class of initial data. This has been rigorously proved in [32] for the particular choices of kernels \(K( x,y) =1\) and \(K( x,y) =x+y\). In the case of discrete problems, the distribution of clusters \( n_{\alpha }\) has also been proved to behave in self-similar form for large times and for a large class of initial data if the kernel is constant, \( K_{\alpha ,\beta }=1\), or additive, \(K_{\alpha ,\beta }=\alpha +\beta \) [32]. For these kernels it is possible to find explicit representation formulas for the solutions of (1.2), (1.3) using Laplace transforms.

For general homogeneous kernels, the construction of explicit self-similar solutions is no longer possible. However, the existence of self-similar solutions of (1.3) with \(\eta =0\) has been proved for certain classes of homogeneous kernels K(x,y) using fixed point methods. These solutions might have a finite monomer density (that is, \(\int _{0}^{\infty }xf\left( x,t\right) \text {d}x<\infty \)) as in [18, 21], or infinite monomer density (that is, \(\int _{0}^{\infty }xf \left( x,t\right) \text {d}x=\infty \)) as in [3, 4, 34, 35]. Similar strategies can be applied to other kinetic equations [25, 26, 33].

Problems like (1.2), (1.3) with nonzero injection terms \(s_{\alpha }\), \( \eta \left( x\right) \) have been much less studied, both in the physical and in the mathematical literature. In [10] it has been observed, using a combination of asymptotic analysis arguments and numerical simulations, that solutions of (1.2), (1.3) with a finite monomer density behave in self-similar form for long times and for a class of homogeneous coagulation kernels, even for source terms which depend on time following a power law \(t^{\omega }\). Coagulation equations with sources have also been considered in [31] using Renormalization Group methods, leading to predictions of analogous self-similar behaviour. Concerning the rigorous mathematical literature, in [12] the existence of stationary solutions has been obtained in the case of bounded kernels. Well-posedness of the time-dependent problem for a class of homogeneous coagulation kernels with homogeneity \(\gamma \in [0,2]\) has been proven in [17]. For the constant kernel, the stability of the corresponding solutions has been proven using Laplace transform methods (cf. [12]). Convergence to equilibrium for a class of coagulation equations containing also growth terms as well as sources has been studied in [23, 24]. Analogous stability results for coagulation equations of the form (1.1), but containing an additional removal term on the right-hand side of the form \(-r_{\alpha }n_{\alpha },\ r_{\alpha }>0\), have been obtained in [28].

In this paper we study the stationary solutions of (1.2), (1.3) for coagulation kernels satisfying

$$\begin{aligned}&c_{1} w(x,y){\leqq }K( x,y) {\leqq }c_{2}w(x,y)\,, \qquad w(x,y) := x^{\gamma +\lambda }y^{-\lambda }+ y ^{\gamma +\lambda }x^{-\lambda } \,, \end{aligned}$$
(1.5)

for some \(c_1,c_2>0\) and for all \(x,y>0\). The weight function w depends on two real parameters: the homogeneity parameter \(\gamma \) and the “off-diagonal rate” parameter \(\lambda \). The parameter \(\gamma \) determines the behaviour of the kernel K under scaling of the particle size, while the parameter \(\lambda \) measures how relevant the coagulation events between particles of different sizes are. However, let us stress that we do not assume the kernel K itself to be homogeneous, even though the weight function w is.
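Since the kernels K we consider need not be homogeneous, whether a given kernel satisfies (1.5), and with which parameters, has to be checked case by case. For the explicit kernels discussed below, a quick numerical consistency check is to sample the ratio K/w on a logarithmic grid. The following is a minimal sketch of such a check (our own illustration; the helper name `kernel_bounds` and the sample grid are not part of the paper):

```python
import numpy as np

def weight(x, y, gamma, lam):
    """Weight w(x, y) = x^{gamma+lam} y^{-lam} + y^{gamma+lam} x^{-lam} from (1.5)."""
    return x**(gamma + lam) * y**(-lam) + y**(gamma + lam) * x**(-lam)

def kernel_bounds(K, gamma, lam, grid=None):
    """Estimate constants c1, c2 with c1*w <= K <= c2*w by sampling K/w on a log grid.

    A finite sample cannot prove the bounds, but for continuous kernels that are
    asymptotically comparable to w it gives a reliable numerical indication.
    """
    if grid is None:
        grid = np.logspace(-6, 6, 400)          # cluster sizes from 1e-6 to 1e6
    X, Y = np.meshgrid(grid, grid)
    ratio = K(X, Y) / weight(X, Y, gamma, lam)
    return ratio.min(), ratio.max()             # numerical estimates of c1, c2
```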

The main result of this paper is the following: given \(\eta \) compactly supported there exists at least one nontrivial stationary solution to the problem (1.3) if and only if \(|\gamma +2\lambda |<1\). In particular, if \(|\gamma +2\lambda |\ge 1\) no such stationary solutions can exist. Note that the parameters \(\gamma \) and \(\lambda \) are arbitrary real numbers and they may be negative or greater than one here. Therefore, these results do not depend on having global well-posedness of mass-preserving solutions for the time-dependent problem (1.3). In particular, our theorems cover ranges of parameters for which the solutions to the time-dependent problem (1.3) can exhibit gelation in finite or zero time. A detailed description of the current state of the art concerning wellposedness and gelation results can be found in [1]. At a first glance, the fact that the existence of stationary solutions of (1.3) does not depend on having or not solutions for the time dependent problem might appear surprising. However, the reason for this becomes clearer if we notice that the homogeneity of the kernel is one of the main factors determining the wellposedness of the time dependent problem (1.3). On the other hand, the homogeneity of the kernel K is not an essential property of the stationary solution problem as it can be seen (cf. [11]) noticing that if f is a stationary solution of (1.3), then \(x^{\theta }f\left( x\right) \) is a stationary solution of (1.3) with kernel \(\frac{K\left( x,y\right) }{\left( xy\right) ^{\theta }}\) and the same source \(\eta \). This new kernel satisfies (1.5) with \(\gamma \) and \(\lambda \) replaced by \(\left( \gamma -2\theta \right) \) and \(\lambda +\theta \) respectively. In particular, we can so obtain kernels with arbitrary homogeneity and having basically the same steady states, up to the product by a power law.

We also prove in this paper the analog of these existence and non-existence results for the discrete coagulation equation (1.2). Moreover, we derive upper and lower estimates, as well as regularity results, for the stationary solutions to (1.2), (1.3) for the range of parameters for which these solutions exist, that is \(|\gamma +2\lambda |<1\). Finally, we also describe the asymptotics for large cluster sizes of these stationary solutions.

1.1 On the Choice of Coagulation and Fragmentation Rate Functions

Although we do not keep track of any spatial structure, the coagulation rates \(K_{\alpha ,\beta }\) do depend on the specific mechanism which is responsible for the aggregation of the clusters. These coefficients need to be computed, for example using kinetic theory, and the result depends on what is assumed about the particle sizes and on the processes driving the motion of the clusters.

For instance, in the case of electrically neutral particles with a size much smaller than the mean free path between two consecutive collisions of the clusters, the coagulation kernel is (cf. [22])

$$\begin{aligned} K_{\alpha ,\beta }=\left( \frac{3}{4\pi }\right) ^{\frac{1}{6}}\sqrt{6k_{B}T} \left( \frac{1}{m( \alpha ) }+\frac{1}{m( \beta ) } \right) ^{\frac{1}{2}}\left( V( \alpha )^{\frac{1}{3}} +V( \beta ) ^{\frac{1}{3}}\right) ^{2}, \end{aligned}$$
(1.6)

where \(V( \alpha ) \) and \(m( \alpha ) \) are respectively the volume and the mass of a cluster of size \( \alpha \). We denote by \(k_{B}\) the Boltzmann constant and by T the absolute temperature, and if \(m_1\) is the mass of one monomer, we have above \(m(\alpha )= m_1 \alpha \). In the derivation, one also assumes a spherical shape of the clusters. If the particles are distributed inside the sphere with a uniform mass density \(\rho \), assumed to be independent of the cluster size, we also have \(V(\alpha )=\frac{m_1}{\rho } \alpha \). By changing the time-scale we can set all the physical constants to one. Finally, it is possible to define a continuum function K(x,y) by setting \(\alpha =x\), \(\beta =y\) in the above formula. We call this function the free molecular coagulation kernel, given explicitly by

$$\begin{aligned} K( x,y) =\big (x^{\frac{1}{3}}+y^{\frac{1}{3}}\big )^{2}\big ( x^{-1}+y^{-1}\big )^{\frac{1}{2}}. \end{aligned}$$
(1.7)

It is now straightforward to check that with the parameter choice \(\gamma =\frac{1}{6}\), \(\lambda =\frac{1}{2}\) there are \(c_1,c_2>0\) such that (1.5) holds for all \(x,y>0\). Since here \(\gamma + 2 \lambda = \frac{7}{6}>1\), the free molecular kernel belongs to the second category, for which no stationary injection solutions exist.
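For instance, using the `kernel_bounds` helper sketched after (1.5) (an illustration of ours, not part of the paper), this comparability can be checked numerically:

```python
# Free molecular kernel (1.7): gamma = 1/6, lambda = 1/2, so gamma + 2*lambda = 7/6 > 1.
K_fm = lambda x, y: (x**(1/3) + y**(1/3))**2 * (1/x + 1/y)**0.5
c1, c2 = kernel_bounds(K_fm, gamma=1/6, lam=1/2)
print(c1, c2)   # finite positive estimates, consistent with (1.5)
```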

Another often encountered example is diffusion limited aggregation, which was already studied in the original work by Smoluchowski [40]. Suppose that there is a background of non-aggregating neutral particles which makes the cluster paths between their collisions resemble Brownian motion. Then one arrives at the coagulation kernel

$$\begin{aligned} K_{\alpha ,\beta }=\frac{2k_{B}T}{3\mu }\left( \frac{1}{ V( \alpha ) ^{\frac{1}{3}}}+\frac{1}{ V( \beta )^{\frac{1}{3}}}\right) \left( V( \alpha )^{\frac{1}{3}} + V( \beta ) ^{\frac{1}{3}}\right) , \end{aligned}$$
(1.8)

where \(\mu >0\) is the viscosity of the gas in which the clusters move.

As before, we then set \(V(\alpha )=\frac{m_1}{\rho } \alpha \) and define a continuum function K(x,y) by setting \(\alpha =x\), \(\beta =y\) on the right hand side of (1.8). The constants may then be collected together, and after rescaling time one may use the following kernel function

$$\begin{aligned} K( x,y)= \left( x^{-\frac{1}{3}} + y^{-\frac{1}{3}} \right) \left( x^{\frac{1}{3}}+ y^{\frac{1}{3}}\right) \,, \end{aligned}$$
(1.9)

which we call here the diffusive coagulation kernel or Brownian kernel. In this case, for the parameter choice \(\gamma =0\), \(\lambda =\frac{1}{3}\) there are \(c_1,c_2>0\) such that (1.5) holds for all \(x,y>0\). Since here we have \(0<\gamma + 2 \lambda = \frac{2}{3}<1\), the diffusive kernel belongs to the first category, for which stationary injection solutions exist.
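For the kernel (1.9) the comparison (1.5) can in fact be verified by a one-line computation: expanding the product gives

$$\begin{aligned} K\left( x,y\right) =2+\left( \frac{x}{y}\right) ^{\frac{1}{3}}+\left( \frac{y}{x}\right) ^{\frac{1}{3}}=2+w\left( x,y\right) \,, \end{aligned}$$

with w as in (1.5) for \(\gamma =0\), \(\lambda =\frac{1}{3}\). Since \(w(x,y){\geqq }2\) for all \(x,y>0\) by the arithmetic–geometric mean inequality, we obtain \(w{\leqq }K{\leqq }2w\), so that (1.5) holds with \(c_{1}=1\) and \(c_{2}=2\).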

Several other coagulation kernels can be found in the physical and chemical literature. For instance, the derivation of the free molecular kernel (1.6) and the Brownian kernel (1.8) is discussed in [22]. The derivation of coagulation kernels describing the aggregation between charged and neutral particles can be found in [41]. Applications of these three kernels to specific problems in chemistry can be found for instance in [36].

Concerning the fragmentation coefficients \(\Gamma _{\alpha ,\beta }\), it is commonly assumed in the physics and chemistry literature that these coefficients are related to the coagulation coefficients by means of the following detailed balance condition (cf. for instance [36])

$$\begin{aligned} \Gamma _{\alpha +\beta ,\beta }=\frac{P_{\text {ref}}}{k_{B}T}\exp \!\left( \frac{\Delta G_{\text {ref},\alpha +\beta }-\Delta G_{\text {ref},\alpha }-\Delta G_{\text {ref},\beta }}{k_{B}T} \right) K_{\alpha ,\beta }, \end{aligned}$$
(1.10)

where \(\Delta G_{\text {ref},\alpha }\) is the Gibbs energy of formation of the cluster \(\alpha \) and \(P_{\text {ref}}\) is the reference pressure at which these energies of formation are calculated. Since we assume the coagulation kernel to be symmetric, \( K_{\alpha ,\beta }=K_{\beta ,\alpha }\), the fragmentation coefficients then satisfy the symmetry requirement \(\Gamma _{\alpha +\beta ,\alpha }=\Gamma _{\alpha +\beta ,\beta }\) for all \(\alpha ,\beta \in {\mathbb {N}}\).

In the processes of particle aggregation, usually the formation of larger particles is energetically favourable, which means that

$$\begin{aligned} \Delta G_{\text {ref},\alpha +\beta }\ll \Delta G_{\text {ref},\alpha }+\Delta G_{\text {ref},\beta }\,. \end{aligned}$$

Under this assumption, it follows from (1.10) that

$$\begin{aligned} \Gamma _{\alpha +\beta ,\beta }\ll K_{\alpha ,\beta }\,, \end{aligned}$$

and then we might expect to approximate the solutions of (1.1) by the solutions of (1.2). The description of the precise conditions on the Gibbs free energy \(\Delta G_{\text {ref},\alpha }\) which would allow one to make this approximation rigorous is an interesting mathematical problem that we do not address in the present paper. Therefore, we restrict our analysis here to the coagulation equations (1.2) and (1.3).

1.2 Notations and Plan of the Paper

Let I be any interval such that \(I \subset {{\mathbb {R}}}_+=[0,\infty )\). We reserve the notation \({{\mathbb {R}}}_*\) for the case \(I=(0,\infty )\). We will denote by \(C_{c}(I )\) the space of compactly supported continuous functions on I and by \(C_b(I)\) the space of functions that are continuous and bounded on I. Unless mentioned otherwise, we endow both spaces with the standard supremum norm. Then \(C_b(I)\) is a Banach space and \(C_{c}(I )\) is a subspace of it. We denote the completion of \(C_{c}(I)\) in \(C_b(I)\) by \(C_0(I)\), which is then also a Banach space. For example, \(C_0({{\mathbb {R}}}_+)\) is the space of continuous functions vanishing at infinity, and \(C_{0}(I )=C_{c}(I )=C_b(I)\) if I is a finite, closed interval.

Moreover, we denote by \( {\mathcal {M}}_{+}(I) \) the space of nonnegative Radon measures on I. Since I is locally compact, \( {\mathcal {M}}_{+}(I) \) can be identified with the space of positive linear functionals on \(C_{c}(I )\) via the Riesz–Markov–Kakutani theorem. For a measure \(\mu \in {\mathcal {M}}_{+}(I) \), we denote its total variation norm by \(\Vert \mu \Vert \) and recall that, since the measure is positive, we have \(\Vert \mu \Vert =\mu (I)\). Unless I is a closed finite interval, not all of these measures need to be bounded. The collection of bounded, positive measures is denoted by \({\mathcal {M}}_{+,b}(I):=\{\mu \in {\mathcal {M}}_{+}(I) \,|\,\mu (I)<\infty \}\), and we denote the collection of bounded complex Radon measures by \({\mathcal {M}}_{b}(I)\). We recall that the total variation indeed defines a norm in \({\mathcal {M}}_{b}(I)\), and this space is a Banach space which can be identified with the dual space \(C_{0}(I )^*=C_c(I)^*\). In addition, \({\mathcal {M}}_{+,b}(I)\) is a norm-closed subset of \({\mathcal {M}}_{b}(I)\). Alternatively, we can endow both \({\mathcal {M}}_{b}(I)\) and \({\mathcal {M}}_{+,b}(I)\) with the \(*\)–weak topology, which is generated by the functionals \(\langle \varphi ,\mu \rangle =\int _I \varphi (x) \mu (d x)\) with \(\varphi \in C_c(I)\).

We will use \(\eta (x) \text {d}x\) and \(\eta (\text {d}x)\) interchangeably to denote elements of these measure spaces. The notation \(\eta ( \text {d}x) \) will be preferred when performing integrations or when we want to emphasize that the measure might not be absolutely continuous with respect to the Lebesgue measure.

For the sake of notational simplicity, in some of the proofs we will resort to a generic constant C which may change from line to line.

The paper is structured as follows: in Section 2 we discuss the types of solutions considered here and we state the main results. In Section 3 we prove the existence of steady states for the coagulation equation with source in the continuum case (1.3) assuming \(|\gamma +2\lambda |<1\). We prove the complementary nonexistence of stationary solutions to (1.3) for \(|\gamma +2\lambda |{\geqq }1\) in Section 4. The analogous existence and nonexistence results for the discrete model (1.2) are collected into Section 5. In Section 6 we derive several further estimates for the solutions of both continuous and discrete models, including also estimates for moments of the solutions. These estimates imply in particular that the only relevant collisions are those between particles of comparable sizes. Finally, in Section 7 we prove that the stationary solutions of the discrete model (1.2) behave as the solutions of the continuous model (1.3) for large cluster sizes.

2 Setting and Main Results

2.1 Different Types of Stationary Solutions for Coagulation Equations

The stationary solutions to the discrete equation (1.2) satisfy

$$\begin{aligned} 0=\frac{1}{2}\sum _{\beta <\alpha }K_{\alpha -\beta ,\beta }n_{\alpha -\beta } n_{\beta }-n_{\alpha }\sum _{\beta >0}K_{\alpha ,\beta }n_{\beta }+s_\alpha , \end{aligned}$$
(2.1)

where \(\alpha \in {\mathbb {N}}\) and \(s_\alpha \) is supported on a finite set of integers. Analogously, in the continuous case, the stationary solutions to (1.3) satisfy

$$\begin{aligned} 0=\frac{1}{2}\int _{0}^{x}K\left( x-y,y\right) f\left( x-y\right) f\left( y\right) \text {d}y-\int _{0}^{\infty }K\left( x,y\right) f\left( x\right) f\left( y\right) \text {d}y+\eta \left( x\right) , \end{aligned}$$
(2.2)

where the source term \(\eta \left( x\right) \) is compactly supported in \([1, \infty )\). Although we write the equation using a notation where f and \(\eta \) are given as functions, the equation can be extended in a natural manner to allow for measures. The details of the construction are discussed in Section 3 and the explicit weak formulation may be found in (2.15).

We remark that equation (2.1) can be written as

$$\begin{aligned} J_\alpha \left( n\right) - J_{\alpha -1}\left( n\right) = \alpha s_\alpha \,, \qquad \text { for } \alpha {\geqq }1\,, \end{aligned}$$
(2.3)

where we define \(J_0(n)=0\) and, for \(\alpha {\geqq }1\), we set

$$\begin{aligned} J_\alpha (n) = \sum \limits _{\beta =1}^\alpha \sum \limits _{\gamma =\alpha -\beta +1}^\infty K(\beta ,\gamma ) \beta n_\beta n_\gamma \,. \end{aligned}$$

Notice that we will use the notations \( K_{\beta ,\gamma }\) and \( K(\beta ,\gamma )\) interchangeably. On the other hand, for sufficiently regular functions f, equation (2.2) can similarly be written as

$$\begin{aligned} \partial _{x}J\left( x;f\right) =x\eta \left( x\right) , \end{aligned}$$
(2.4)

where

$$\begin{aligned} J\left( x;f\right) =\int _{0}^{x}\text {d}y\int _{x-y}^{\infty }\text {d}z K(y,z) y f\left( y\right) f\left( z\right) . \end{aligned}$$
(2.5)

This implies that the fluxes \(J_\alpha (n)\) and J(x;f) are constant for \(\alpha \) and x sufficiently large, due to the fact that s is supported in a finite set and \(\eta \) is compactly supported, and we prove in Lemma 2.8 that this property continues to hold even when f is a measure. If \(s_\alpha \) or \(\eta (x)\) decay sufficiently fast for large values of \(\alpha \) or x, then \(J_\alpha (n)\) or J(x;f) converges to a positive constant as \(\alpha \) or x tends to infinity.
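The identity (2.3) is simply the flux (telescoping) form of (2.1): multiplying the coagulation terms of (2.1) at size \(\alpha '\) by \(\alpha '\) and summing over \(\alpha '{\leqq }\alpha \) yields exactly \(-J_\alpha (n)\). The following short sketch (our own illustration, with an arbitrary concentration vector truncated at a maximal size N and the Brownian kernel as a placeholder) verifies this algebraic identity numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40                                   # truncation: n_alpha = 0 for alpha > N
n = np.zeros(2 * N + 1)                  # index = cluster size, entry 0 unused
n[1:N + 1] = rng.random(N)
K = lambda a, b: 2.0 + (a / b) ** (1 / 3) + (b / a) ** (1 / 3)   # Brownian kernel (1.9)

def coag(alpha):
    """Coagulation terms of (2.1) at size alpha (without the source term)."""
    gain = 0.5 * sum(K(alpha - b, b) * n[alpha - b] * n[b] for b in range(1, alpha))
    loss = n[alpha] * sum(K(alpha, b) * n[b] for b in range(1, N + 1))
    return gain - loss

def J(alpha):
    """Flux J_alpha(n) as defined below (2.3); J(0) = 0."""
    return sum(K(b, g) * b * n[b] * n[g]
               for b in range(1, alpha + 1)
               for g in range(alpha - b + 1, N + 1))

# alpha * (coagulation terms) = -(J_alpha - J_{alpha-1}); at a steady state this gives (2.3).
for alpha in range(1, 2 * N + 1):
    assert abs(alpha * coag(alpha) + J(alpha) - J(alpha - 1)) < 1e-8
```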

Given that other concepts of stationary solutions are found in the physics literature, we will call the solutions of (2.1) and (2.2) stationary injection solutions. In this paper we will be mainly concerned with these solutions. The physical meaning of these solutions, when they exist, is that it is possible to transport monomers towards large clusters at the same rate at which the monomers are added into the system.

For comparison, let us also discuss briefly other concepts of stationary solutions and their relation with the stationary injection solutions. One case often considered in the physics literature is that of constant flux solutions (cf. [42]). These are solutions of (2.2) with \(\eta \equiv 0\) satisfying

$$\begin{aligned} J(x;f)=J_0\,, \quad \text {for } x>0\,, \end{aligned}$$
(2.6)

where \(J_0 \in {{\mathbb {R}}}_+\) and J(x;f) is defined in (2.5). Explicit stationary solutions for coagulation equations have been obtained and discussed in [13,14,15, 38, 39]. In these papers the collision kernel K under consideration is not homogeneous. In the case of homogeneous kernels K there is an explicit method to obtain power-law solutions of (2.2) by means of some transformations of the domain of integration that were introduced by Zakharov in order to study the solutions of the Weak Turbulence kinetic equations (cf. [45, 46]). Zakharov’s method has been applied to coagulation equations in [7].

Alternatively, we can obtain power law solutions of (2.6) using the homogeneity \(\gamma \) of the kernel (cf. (1.4)). Indeed, suppose that \(f\left( x\right) =c_{s}\left( x\right) ^{-\alpha }\) for some \(c_{s}\) positive and \(\alpha \in {{\mathbb {R}}}\). Using the homogeneity of the kernel K we obtain

$$\begin{aligned} J(x;f)=G\left( \alpha \right) \left( c_{s}\right) ^{2}\left( x\right) ^{3+\gamma -2\alpha } \end{aligned}$$

under the assumption that

$$\begin{aligned} G\left( \alpha \right) =\int _{0}^{1}\text {d}y\int _{1-y}^{\infty }\text {d}zK\left( y,z\right) \left( y\right) ^{1-\alpha }\left( z\right) ^{-\alpha } <\infty \,. \end{aligned}$$
(2.7)

Using (2.6), we then obtain \(\alpha = (3+\gamma )/2\) and \(c_{s}=\sqrt{\frac{J_{0}}{G\left( \alpha \right) }}\). Therefore, (2.7) holds if and only if \(|\gamma +2\lambda | <1\). Notice that (2.7) yields a necessary and sufficient condition to have a power law solution of (2.6). However, one should not assume that all solutions of (2.6) are given by a power law; indeed, we have preliminary evidence that there exist smooth homogeneous kernels satisfying (1.5) for which there are non-power-law solutions to (2.6).
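To see where the condition \(|\gamma +2\lambda |<1\) comes from, one may insert the bounds (1.5) with \(\alpha =(3+\gamma )/2\) into (2.7) and compute the inner integral in z first. Up to constant factors, the two terms of the weight produce the Beta-type integrals

$$\begin{aligned} \int _{0}^{1}y^{1-\alpha +\gamma +\lambda }\left( 1-y\right) ^{1-\alpha -\lambda }\text {d}y \qquad \text {and}\qquad \int _{0}^{1}y^{1-\alpha -\lambda }\left( 1-y\right) ^{1-\alpha +\gamma +\lambda }\text {d}y\,, \end{aligned}$$

provided that the z-integrals converge at infinity, which requires \(\alpha +\lambda >1\) and \(\alpha -\gamma -\lambda >1\). With \(\alpha =(3+\gamma )/2\), the convergence of the z-integrals and of the Beta integrals reduces precisely to \(\gamma +2\lambda >-1\) and \(\gamma +2\lambda <1\), that is, to \(|\gamma +2\lambda |<1\).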

Finally, let us mention one more type of solution associated with the discrete coagulation equation (2.1) that has some physical interest. This is the boundary value problem in which the concentration of monomers is given and the coagulation equation (2.1) is satisfied for clusters containing two or more monomers (\(\alpha {\geqq }2\)). The problem then becomes

$$\begin{aligned} 0= & {} \frac{1}{2}\sum _{\beta <\alpha }K_{\alpha -\beta ,\beta }n_{\alpha -\beta } n_{\beta }-n_{\alpha }\sum _{\beta >0}K_{\alpha ,\beta }n_{\beta }\,, \quad \text {for }\alpha {\geqq }2\, \nonumber \\ n_{1}= & {} c_{1}, \end{aligned}$$
(2.8)

where \(c_1 >0\) is given.

Notice that if we can solve the injection problem (2.1) for some source \(s=s_1\delta _{\alpha ,1} \) with \(s_1 >0\), then we can solve the boundary value problem (2.8) for any \(c_1>0\). Indeed, let us denote by \(N_\alpha (s_1)\), \(\alpha \in {{\mathbb {N}}}\), the solution to (2.1) with source \(s=s_1\delta _{\alpha ,1} \). Then equation (2.1) for \(\alpha =1\) reduces to

$$\begin{aligned} N_1(s_1) \sum _{\beta >0} K_{1\beta }N_\beta (s_1) = s_1\, . \end{aligned}$$

This implies that \(0<N_1(s_1) <\infty \). Then the solution to (2.8) is given by

$$\begin{aligned} n_\alpha = c_1\frac{N_\alpha (s_1)}{N_1(s_1)}. \end{aligned}$$

Moreover, if we can solve (2.1) for some \(s_1 > 0\), then we can solve (2.1) for arbitrary values of \(s_1\), due to the fact that if n is a solution of (2.1) with source s, then for any \(\Lambda >0\), \(\sqrt{\Lambda }\,n\) solves (2.1) with source \(\Lambda s\).
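This scaling property is immediate to verify: the coagulation terms of (2.1) are quadratic in n, so substituting \(\sqrt{\Lambda }\,n\) multiplies them by \(\Lambda \), and the stationary equation is preserved precisely when the source is multiplied by the same factor,

$$\begin{aligned}&\frac{1}{2}\sum _{\beta <\alpha }K_{\alpha -\beta ,\beta }\big (\sqrt{\Lambda }\,n_{\alpha -\beta }\big )\big (\sqrt{\Lambda }\,n_{\beta }\big )-\big (\sqrt{\Lambda }\,n_{\alpha }\big )\sum _{\beta >0}K_{\alpha ,\beta }\big (\sqrt{\Lambda }\,n_{\beta }\big )+\Lambda s_{\alpha }\nonumber \\&\quad =\Lambda \left( \frac{1}{2}\sum _{\beta <\alpha }K_{\alpha -\beta ,\beta }n_{\alpha -\beta }n_{\beta }-n_{\alpha }\sum _{\beta >0}K_{\alpha ,\beta }n_{\beta }+s_{\alpha }\right) =0\,. \end{aligned}$$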

In this paper we will consider the problems (2.1) and (2.2) in Sections 2 to 6. In Section 7 we prove that a rescaled version of the solutions to (2.1) and (2.2) behaves for large cluster sizes as a solution to (2.6). We will not discuss solutions to (2.8) in this paper.

In this paper we will study the solutions of (2.1), (2.2) for kernels \(K\left( x,y\right) \) which behave for large clusters as \(x^{\gamma +\lambda } y^{-\lambda }+y^{\gamma +\lambda }x^{-\lambda }\) for suitable coefficients \(\gamma ,\lambda \in {\mathbb {R}}\) in the case of equation (2.2), as well as their discrete counterpart in the case of (2.1). (See the next subsection for the precise assumptions on the kernels, in particular (2.11), (2.12).) The main result that we prove in this paper is that the equations (2.1), (2.2) with nonvanishing source terms \(s_{\alpha },\ \eta \), respectively, have a solution if \(\left| \gamma +2\lambda \right| <1\) and have no solutions at all if \(\left| \gamma +2\lambda \right| {\geqq }1.\) The heuristic idea behind this result is easy to grasp. We will describe it in the case of equation (2.2), since the main ideas are similar for (2.1).

The equation (2.2) can be reformulated as (2.4), (2.5). Since \(\eta \) is compactly supported we obtain that \(J\left( x;f\right) \) is a constant \(J_{0}>0\) for x sufficiently large. The homogeneity of the kernel \(K\left( x,y\right) =x^{\gamma +\lambda }y^{-\lambda }+y^{\gamma +\lambda }x^{-\lambda }\) suggests that the solutions of the equation \(J\left( x;f\right) =J_{0}\) should behave as \(f\left( x\right) \approx Cx^{-\frac{\gamma +3}{2}}\) for large \(x,\ \)with \(C>0.\) Actually this statement holds in a suitable sense that will be made precise later. However, this asymptotic behaviour for \(f\left( x\right) \) cannot take place if \(\left| \gamma +2\lambda \right| {\geqq }1\) because the integral in (2.5) would be divergent. Therefore, solutions to (2.2) can only exist for \(\left| \gamma +2\lambda \right| <1.\)

2.2 Definition of Solution and Main Results

We restrict our analysis to the kernels satisfying (1.5), or at least one of the inequalities there. To account for all the relevant cases, let us summarize the assumptions on the kernel slightly differently here. We always assume that

$$\begin{aligned} K:{\mathbb {R}}_*\times {\mathbb {R}}_*\rightarrow {\mathbb {R}}_{+}\, ,\quad K \text { is continuous} \,, \end{aligned}$$
(2.9)

and for all \(x,y\),

$$\begin{aligned} K(x,y){\geqq }0\, ,\qquad K(x,y)=K(y,x)\, . \end{aligned}$$
(2.10)

We also only consider kernels for which one may find \(\gamma ,\lambda \in {{\mathbb {R}}}\) such that at least one of the following holds: there is \(c_1>0\) such that for all \((x,y)\in {\mathbb {R}}_*^{2}\),

$$\begin{aligned} K\left( x,y\right) {\geqq }c_{1}\left( x^{\gamma +\lambda }y^{-\lambda }+y^{\gamma +\lambda }x^{-\lambda }\right) \,, \end{aligned}$$
(2.11)

and/or there is \(c_2>0\) such that for all \((x,y)\in {\mathbb {R}}_*^{2}\)

$$\begin{aligned} K\left( x,y\right) {\leqq }c_{2}\left( x^{\gamma +\lambda }y^{-\lambda }+y^{\gamma +\lambda }x^{-\lambda }\right) \,. \end{aligned}$$
(2.12)

The class of kernels satisfying all of the above assumptions includes many of the most commonly encountered coagulation kernels. It includes in particular the Smoluchowski (or Brownian) kernel (cf. (1.9)) and the free molecular kernel (cf. (1.7)).

The source rate is assumed to be given by \(\eta \in {\mathcal {M}}_{+}\left( {\mathbb {R}} _*\right) \) and to satisfy

$$\begin{aligned} {\mathrm{supp}}\left( \eta \right) \subset \left[ 1,L_{\eta }\right] \text { for some } L_{\eta }{\geqq }1\, . \end{aligned}$$
(2.13)

Note that then we always have \(\eta \left( {\mathbb {R}} _*\right) <\infty \), that is, the measure \(\eta \) is bounded.

We study the existence of stationary injection solutions to equation (1.3) in the following precise sense:

Definition 2.1

Assume that \(K:{{{\mathbb {R}}}}_*^{2}\rightarrow {{\mathbb {R}} }_{+}\) is a continuous function satisfying (2.10) and the upper bound (2.12). Assume further that \(\eta \in {\mathcal {M}}_{+}\left( {\mathbb {R}}_*\right) \) satisfies (2.13). We will say that \(f\in {\mathcal {M}}_{+}\left( {\mathbb {R}}_*\right) ,\) satisfying \(f\left( \left( 0,1\right) \right) =0\) and

$$\begin{aligned} \int _{{{\mathbb {R}}}_* }x^{\gamma +\lambda }f\left( \text {d}x\right) + \int _{{{\mathbb {R}}}_* }x^{-\lambda }f\left( \text {d}x\right) <\infty \,, \end{aligned}$$
(2.14)

is a stationary injection solution of (1.3) if the following identity holds for any test function \(\varphi \in C_{c}({{{\mathbb {R}}}}_*)\):

$$\begin{aligned}&\frac{1}{2}\int _{{{\mathbb {R}}}_*}\int _{{{\mathbb {R}}}_*} K\left( x,y\right) \left[ \varphi \left( x+y\right) -\varphi \left( x\right) -\varphi \left( y\right) \right] f\left( \text {d}x\right) f\left( \text {d}y\right) \nonumber \\&\qquad +\int _{{{\mathbb {R}}}_*}\varphi \left( x\right) \eta \left( \text {d}x\right) =0\,. \end{aligned}$$
(2.15)

Remark 2.2

Definition 2.1, or a discrete version of it, will be used throughout most of the paper (cf. Sections 2 to 6). In Section 7, we will use a more general notion of a stationary injection solution to (1.3), considering source terms \(\eta \) which satisfy \( {\mathrm{supp}} \eta \subset [a, b]\) for some given constants a and b such that \(0<a<b\). Then we require that \(f\in {\mathcal {M}}_{+}\left( {\mathbb {R}}_*\right) \) and \(f((0, a))=0\), in addition to (2.14). Note that for such measures we have \(\int _{{{\mathbb {R}}}_*} f(\text {d}x) = \int _{[a,\infty )} f(\text {d}x)\). The generalized case is straightforwardly reduced to the above setup by rescaling space via the change of variables \(x'=x/a\).

The condition \(f\left( \left( 0,1\right) \right) =0\) is a natural requirement for stationary solutions of the coagulation equation, given that \(\eta \left( \left( 0,1\right) \right) =0\). As we show next, the second condition, the integrability requirement (2.14), is the minimal one needed to have well-defined integrals in the coagulation operator.

First, note that all the integrals appearing in (2.15) are well defined for any \(\varphi \in C_{c}\left( {\mathbb {R}}_*\right) \) with \({\mathrm{supp}} \varphi \subset ( 0,L]\), because we can then restrict the domain of integration to the set \(\left\{ \left( x,y\right) \in \left[ 1,L\right] \times \left[ 1,\infty \right) \right\} \) in the term containing \(\varphi \left( x\right) \), and to the set \(\left\{ \left( x,y\right) \in \left[ 1,L\right] ^{2}\right\} \) in the term containing \(\varphi \left( x+y\right) \). In addition, (2.12) implies that \(K\left( x,y\right) {\leqq }{\tilde{C}}_{L}[y^{\gamma + \lambda }+y^{-\lambda }]\) for \(\left( x,y\right) \in \left[ 1,L\right] \times \left[ 1,\infty \right) \). Therefore,

$$\begin{aligned}&\int _{{\mathbb {R}}_*}\int _{{\mathbb {R}}_*}K\left( x,y\right) \left| \varphi \left( x+y\right) \right| f\left( \text {d}x\right) f\left( \text {d}y\right) {\leqq }C_{L}\left( \int _{\left[ 1,L\right] }f\left( \text {d}x\right) \right) ^{\! 2}\,, \\&\int _{{\mathbb {R}}_*}\int _{{\mathbb {R}}_*}K\left( x,y\right) \left| \varphi \left( x\right) \right| f\left( \text {d}x\right) f\left( \text {d}y\right) \\&\quad {\leqq }C_{L}\left( \int _{{\mathbb {R}}_*}y^{\gamma +\lambda }f\left( \text {d}y\right) +\int _{{\mathbb {R}}_*}y^{-\lambda }f\left( \text {d}y\right) \right) \int _{\left[ 1,L\right] }f\left( \text {d}x\right) \,, \end{aligned}$$

where \(C_{L}\) depends on \(\varphi \), \(\gamma \), and \(\lambda \). Then, the assumption (2.14) in the Definition 2.1 implies that all the integrals appearing in (2.15) are convergent.

We now state the main results of this paper.

Theorem 2.3

Assume that K satisfies (2.9)– (2.12) and \(| \gamma +2\lambda | <1.\) Let \(\eta \ne 0 \) satisfy (2.13). Then, there exists a stationary injection solution \(f\in {\mathcal {M}}_{+}\left( {\mathbb {R}}_*\right) \), \(f\ne 0\), to  (1.3) in the sense of Definition 2.1.

Theorem 2.4

Suppose that \(K\left( x,y\right) \) satisfies  (2.9)–(2.12) as well as \(| \gamma +2\lambda | {\geqq }1.\) Let us assume also that \(\eta \ne 0\) satisfies (2.13). Then, there is no solution of (1.3) in the sense of Definition 2.1.

Remark 2.5

Notice that the free molecular kernel defined in (1.7) satisfies (2.10)–(2.12) with \(\gamma =\frac{1}{6},\ \lambda =\frac{1}{2}\). Then, since \(\gamma +2\lambda >1\), we are in the hypotheses of Theorem 2.4, which implies that there are no solutions of (1.3) in the sense of Definition 2.1 for the kernel (1.7) and any \(\eta \ne 0\) satisfying (2.13). On the other hand, in the case of the Brownian kernel defined in (1.9) with \(\gamma = 0 \) and \(\lambda = \frac{1}{3}\) the assumptions of Theorem 2.3 hold, and nontrivial stationary injection solutions in the sense of Definition 2.1 exist for each \(\eta \ne 0\) satisfying (2.13).

Remark 2.6

We observe that if \(\eta =0\), there is a trivial stationary solution to  (1.3) given by \(f=0\). On the other hand, if \(\eta \ne 0\), then \(f=0\) cannot be a solution.

Remark 2.7

Assumption (2.13) is motivated by specific problems in chemistry [36] which have a source of monomers \(s_{\alpha }=s\delta _{\alpha ,1}\) only. However, in all the results of this paper this assumption could be replaced by the more general condition

$$\begin{aligned} {\mathrm{supp}}\left( \eta \right) \subset \left[ 1,\infty \right) ,\ \ \int _{\left[ 1,\infty \right) }x\eta \left( \text {d}x\right) <\infty \end{aligned}$$
(2.16)

and in the discrete case, the analogous condition (5.2) could be replaced by \(\sum _{\alpha =1}^{\infty }\alpha s_{\alpha }<\infty \). Indeed, it is easily seen that the only property of the source term \(\eta \) that is used in the arguments of the proofs, both in the existence and non-existence results, is that:

$$\begin{aligned} \frac{J}{2}{\leqq }\int _{\left[ 1,L_{\eta }\right] }x\eta \left( \text {d}x\right) {\leqq }J\ \ ,\ \ \text {where }\int _{\left[ 1,\infty \right) }x\eta \left( \text {d}x\right) =J\in \left( 0,\infty \right) \end{aligned}$$
(2.17)

for some \(L_{\eta }\) sufficiently large, or an analogous condition in the discrete case, which follows immediately from (2.16). Moreover, it seems feasible to extend the support of \(\eta \) to all positive real numbers \({\mathbb {R}}_{*}\) by assuming suitable smallness conditions for f and \(\eta \) near the origin (for instance in the form of a bounded moment) in order to avoid fluxes of particle volume coming from \(x=0\). This would, however, lead us to issues different from the main ones considered in this paper, and therefore we do not pursue this case further here.

The flux of mass from small to large particles at the stationary state is computed in the next lemma for the above measure-valued solutions. In comparison to (2.5), one then needs to refine the definition by using a right-closed interval for the first integration and an open interval for the second integration, as stated in (2.18) below.

Lemma 2.8

Suppose that the assumptions of Theorem 2.3 hold. Let f be a stationary injection solution in the sense of Definition 2.1. Then f satisfies for any \(R>0\)

$$\begin{aligned} \int _{ (0,R]}\int _{(R-x,\infty )} K(x,y)x f(\text {d}x) f(\text {d}y) = \int _{(0,R]} x \eta (\text {d}x) \, . \end{aligned}$$
(2.18)

Remark 2.9

Note that if \(R{\geqq }L_\eta \), the right-hand side of (2.18) is always equal to \(J=\int _{[1,L_\eta ]} x \eta (\text {d}x)>0\). Therefore, the flux is constant in regions involving only large cluster sizes.

Proof

If \(R<1\), both sides of (2.18) are zero, and the equality holds. Consider then some \(R\ge 1\) and for all \(\varepsilon \) with \(0<\varepsilon <R\) choose some \(\chi _\varepsilon \in C_c^\infty ({{\mathbb {R}}}_*)\) such that \(0\le \chi _\varepsilon \le 1\), \(\chi _\varepsilon (x) = 1\), for \(1\le x {\leqq }R\), and \(\chi _\varepsilon (x) = 0\), for \(x {\geqq }R+\varepsilon \). Then for each \(\varepsilon \) we may define \(\varphi (x) = x \chi _\varepsilon (x) \) and thus obtain a valid test function \(\varphi \in C_{c}({{{\mathbb {R}}}}_*)\). Since then (2.15) holds, we find that for all \(\varepsilon \)

$$\begin{aligned}&\frac{1}{2}\int _{{{\mathbb {R}}}_* }\int _{{{\mathbb {R}}}_* }K\left( x,y\right) \left[ (x+y) \chi _\varepsilon (x+y) -x \chi _\varepsilon (x) - y \chi _\varepsilon (y) \right] f\left( \text {d}x\right) f\left( \text {d}y\right) \nonumber \\&\quad +\, \int _{{{\mathbb {R}}}_*}x\chi _\varepsilon (x)\eta (\text {d}x)=0\,. \end{aligned}$$
(2.19)

The first term can be rewritten as follows:

$$\begin{aligned}&\frac{1}{2}\iint _{ \{(x,y)| x+y> R\}} K\left( x,y\right) \left[ (x+y) \chi _\varepsilon (x+y) -x \chi _\varepsilon (x) - y \chi _\varepsilon (y) \right] f\left( \text {d}x\right) f\left( \text {d}y\right) \\&\quad = \frac{1}{2}\iint _{ \{(x,y)| x+y> R,\ x{\leqq }R,\ y{\leqq }R\}} K\left( x,y\right) \left[ (x+y) \chi _\varepsilon (x+y) -x - y \right] f\left( \text {d}x\right) f\left( \text {d}y\right) \\&\qquad +\,\frac{1}{2}\iint _{ \{(x,y)| x> R,\ y{\leqq }R \}} K\left( x,y\right) \left[ (x+y) \chi _\varepsilon (x+y) -x \chi _\varepsilon (x) - y \right] f\left( \text {d}x\right) f\left( \text {d}y\right) \\&\qquad +\,\frac{1}{2}\iint _{ \{(x,y)| y> R,\ x {\leqq }R \}} K\left( x,y\right) \left[ (x+y) \chi _\varepsilon (x+y) -x - y \chi _\varepsilon (y) \right] f\left( \text {d}x\right) f\left( \text {d}y\right) \\&\qquad +\,\frac{1}{2}\iint _{ \{(x,y)| y> R,\ x>R \}} K\left( x,y\right) \left[ -x \chi _\varepsilon (x) - y \chi _\varepsilon (y) \right] f\left( \text {d}x\right) f\left( \text {d}y\right) . \end{aligned}$$

We readily see that the terms involving \(\chi _\varepsilon \) on the right hand side tend to zero as \(\varepsilon \) tends to zero due to the fact that for Radon measures \(\mu \) the integrals \(\int _{[a-\varepsilon , a)}d\mu \) and \(\int _{(a,a+\varepsilon ]} d\mu \) converge to 0 as \(\varepsilon \) tends to zero. Then we obtain from (2.19) that

$$\begin{aligned}&\frac{1}{2}\iint \limits _{ \{(x,y)| x+y> R,\ x {\leqq }R,\ y {\leqq }R \}} K\left( x,y\right) (x+y)f\left( \text {d}x\right) f\left( \text {d}y\right) \\&\quad +\, \frac{1}{2}\iint \limits _{ \{(x,y)| x> R,\ y {\leqq }R \}} K\left( x,y\right) y f\left( \text {d}x\right) f\left( \text {d}y\right) \\&\quad +\,\frac{1}{2}\iint \limits _{ \{(x,y)| y> R,\ x {\leqq }R \}} K\left( x,y\right) x f\left( \text {d}x\right) f\left( \text {d}y\right) = \int _{(0,R]}x\eta (\text {d}x)\,. \end{aligned}$$

Rearranging the terms we obtain

$$\begin{aligned}&\frac{1}{2}\iint \limits _{ \{(x,y)| x+y> R,\ x {\leqq }R \}} K\left( x,y\right) x f\left( \text {d}x\right) f\left( \text {d}y\right) \\&\qquad +\frac{1}{2}\iint \limits _{ \{(x,y)| x+y> R,\ y {\leqq }R \}} K\left( x,y\right) y f\left( \text {d}x\right) f\left( \text {d}y\right) \\&\quad =\int _{(0,R]}x\eta (\text {d}x)\, , \end{aligned}$$

which implies (2.18): by the symmetry of K, exchanging the roles of x and y shows that the two integrals on the left-hand side are equal, and each of them equals the left-hand side of (2.18). \(\square \)

The following Lemma will be used several times throughout the paper to convert bounds for certain “running averages” into uniform bounds of integrals. The function \(\varphi \) below is included mainly for later convenience.

Lemma 2.10

Suppose \(a>0\) and \(b \in (0,1)\), and assume that \(R\in (0,\infty ]\) is such that \(R\ge a\). Consider some \(f \in \mathcal {M_+}({{\mathbb {R}}}_*)\) and \(\varphi \in C({{\mathbb {R}}}_*)\), with \(\varphi \ge 0\).

  1.

    Suppose \(R<\infty \), and assume that there is \(g \in L^1([a,R])\) such that \(g\ge 0\) and

    $$\begin{aligned} \frac{1}{z}\int _{[bz,z]} \varphi (x) f(\text {d}x) {\leqq }g(z)\,, \quad \text {for } z \in [a,R] \,. \end{aligned}$$
    (2.20)

    Then

    $$\begin{aligned} \int _{[a,R]} \varphi (x) f(\text {d}x) {\leqq }\frac{\int _{[a,R]}g(z)\text {d}z}{\vert \ln b\,\vert } + R g(R)\,. \end{aligned}$$
    (2.21)
  2.

    Consider some \(r\in (0,1)\), and assume that \(a/r\le R<\infty \). Suppose that (2.20) holds for \(g(z)=c_0 z^q\), with \(q\in {{\mathbb {R}}}\) and \(c_0\ge 0\). Then there is a constant \(C>0\), which depends only on r, b and q, such that

    $$\begin{aligned} \int _{[a,R]} \varphi (x) f(\text {d}x) {\leqq }C c_0 \int _{[a,R]} z^q \text {d}z \,. \end{aligned}$$
    (2.22)
  3.

    If \(R=\infty \) and there is \(g \in L^1([a,\infty ))\) such that \(g\ge 0\) and

    $$\begin{aligned} \frac{1}{z}\int _{[bz,z]} \varphi (x) f(\text {d}x) {\leqq }g(z)\,, \quad \text {for } z \ge a \,, \end{aligned}$$
    (2.23)

    then

    $$\begin{aligned} \int _{[a,\infty )} \varphi (x) f(\text {d}x) {\leqq }\frac{\int _{[a,\infty )}g(z)\text {d}z}{\vert \ln b\,\vert }\,. \end{aligned}$$
    (2.24)

Proof

We first prove the general case in item 1. Assume thus that \(R<\infty \) and that \(g\ge 0\) is such that (2.20) holds. We recall that then \(0<a\le R\). If \(a\ge b R\), we can estimate

$$\begin{aligned} \int _{[a,R]} \varphi (x) f(\text {d}x) {\leqq }\int _{[bR,R]} \varphi (x) f(\text {d}x){\leqq }Rg(R)\,, \end{aligned}$$

using the assumption (2.20) with \(z=R\). Thus (2.21) holds in this case since \(g\ge 0\).

Otherwise, we have \(0<a<b R\). By assumption, the constant \(C_1 := \int _{[a,R]} g(z) \text {d}z{\geqq }0\) is finite. Integrating (2.20) over z from a to R, we obtain

$$\begin{aligned} \int _{[a,R]}\int _{[bz,z]}\frac{1}{z} \varphi (x)f(\text {d}x) \text {d}z{\leqq }C_1\, . \end{aligned}$$

The iterated integral satisfies the assumptions of Fubini’s theorem, and thus it can be written as an integral over the set

$$\begin{aligned}&\{(z,x)\ |\ a {\leqq }z{\leqq }R, \ bz {\leqq }x {\leqq }z \}\\&\quad = \{(z,x)\ |\ ba{\leqq }x{\leqq }R, \ \max \{a,x\} {\leqq }z {\leqq }\min \{\frac{1}{b}x,R\} \}\\&\quad \supset \{(z,x)\ |\ a{\leqq }x{\leqq }bR, \ x {\leqq }z {\leqq }\frac{1}{b}x \}\,. \end{aligned}$$

Therefore, after using Fubini’s theorem to obtain an integral where z-integration comes first, we find that

$$\begin{aligned} \int _{[a,bR]} \int _{[x, x/b]}\frac{1}{z} \varphi (x) f(\text {d}x ) \text {d}z \ {\leqq }\ C_1. \end{aligned}$$

The integral over z yields \(\ln (x/(bx))=\vert \ln b\,\vert \), and thus \(\int _{[a,bR]} \varphi (x) f(\text {d}x ) {\leqq }\ C_1/ \vert \ln b\,\vert \). To get an estimate for the integral over [bRR], we use (2.20) for \(z=R\). Hence, (2.21) follows also in this case which completes the proof of the first item.

For item 2, let us assume that \(0<r<1\), \(a\le r R<\infty \), and that (2.20) holds for \(g(z)=c_0 z^q\), with \(q\in {{\mathbb {R}}}\) and \(c_0\ge 0\). Since then \(g\in L^1([a,R])\) and \(g\ge 0\), we can conclude from the first item that (2.21) holds. Thus we only need to find a suitable bound for the second term therein, namely for \(R g(R)=c_0 R^{q+1}\). By changing the integration variable from z to \(y=z/R\), we find

$$\begin{aligned} \int _{[a,R]} z^q \text {d}z= R^{q+1}\int _{[a/R,1]} y^q \text {d}y \ge R^{q+1}\int _{[r,1]} y^q \text {d}y\,. \end{aligned}$$

Here, \(C':=\int _{[r,1]} y^q \text {d}y\) satisfies \(0<C'<\infty \) for any choice of \(q\in {{\mathbb {R}}}\). Therefore, we can now conclude that (2.22) holds for \(C=|\ln b\,|^{-1} + 1/C'\) which depends only on q, b, and r.

For item 3, let us suppose that \(R=\infty \) and \(g \in L^1([a,\infty ))\) is such that \(g\ge 0\) and (2.23) holds. Then for all integers \(n\ge a\) we necessarily have \(\inf _{x\ge n} (x g(x))=0\), since otherwise g would not be integrable. Therefore, there is a sequence \(R_n\rightarrow \infty \) such that \(\lim _{n\rightarrow \infty } R_n g(R_n)=0\). We apply item 1 with \(R=R_n\), and taking \(n\rightarrow \infty \) proves that (2.24) holds. This completes the proof of the Lemma. \(\square \)

3 Existence Results: Continuous Model

Our first goal is to prove the existence of a stationary injection solution (cf. Theorem  2.3) under the assumption \(|\gamma + 2\lambda |<1\). This will be accomplished in three steps: We first prove in Proposition 3.6 existence and uniqueness of time-dependent solutions for a particular class of compactly supported continuous kernels. Considering these solutions at large times allows us to prove in Proposition 3.10 existence of stationary injection solutions for this class of kernels using a fixed point argument. We then extend the existence result to general unbounded kernels supported in \({ {\mathbb {R}}}_*^{2}\) and satisfying (2.10)– (2.12) with \(| \gamma +2\lambda | <1\).

Compactly supported continuous kernels are automatically bounded from above but, for the first two results, we will also assume that the kernel has a uniform lower bound on the support of the source. To pass to the limit and include the more general kernel functions, it will be necessary to control the dependence of the solutions both on these bounds and on the size of the support of the kernel. To fix the notations, let us first choose an upper bound \(L_\eta \) for the support of the source, that is, a constant satisfying (2.13). In the first two Propositions, we will consider kernel functions which are continuous, non-negative, have compact support, and for which we may find \(R_{*}{\geqq }L_{\eta }\) and \(a_1\), \(a_2\) such that \(0<a_{1}<a_{2}\) and \( K(x,y)\in [a_{1},a_{2}]\) for \((x,y)\in [1,2R_{*}]^{2}\). This allows us to prove first that the time-evolution is well-defined, in Proposition 3.6, and then, in Proposition 3.10, the existence of stationary injection solutions for this class of kernels using a fixed point argument. The proofs include sufficient control of the dependence of the solutions on the cut-off parameters to remove the restrictions and obtain the result in Theorem 2.3.

In fact, not only do we regularize the kernel, but we also introduce a cut-off for the coagulation gain term. This will guarantee that the equation is well-posed and has solutions whose support never extends beyond the interval \([1,2R_{*}]\). To this end, let us choose \(\zeta _{R_{*}}\in C\left( {\mathbb {R}}_*\right) \) such that \(0\le \zeta _{R_{*}}\le 1\), \(\zeta _{R_{*}}\left( x\right) =1\) for \(0{\leqq }x{\leqq }R_{*}\), and \(\zeta _{R_{*}}\left( x\right) =0\) for \(x{\geqq }2R_{*}\). We then regularize the time evolution equation (1.3) as

$$\begin{aligned}&\partial _{t}f(x,t)=\frac{\zeta _{R_{*}\!}(x) }{2}\int _{\left( 0,x\right] }K( x-y,y) f( x-y,t) f( y,t) \text {d}y \nonumber \\&\quad -\, \int _{{{{{\mathbb {R}}}}_*}}\! K( x,y) f( x,t) f( y,t) \text {d}y+\eta ( x) \,. \end{aligned}$$
(3.1)

As we show later, this will result in a well-posedness theory such that any solution of (3.1) has the following property: \(f\left( \cdot ,t\right) \) is supported on the interval \(\left[ 1,2R_{*}\right] \) for each \(t{\geqq }0\). Let us also point out that since we are interested in solutions f such that \(f\left( \left( 0,1\right) ,t\right) =0\), the above integral \(\int _{\left( 0,x \right] }\left( \cdot \cdot \cdot \right) \) can be replaced by \(\int _{\left[ 1,x-1\right] }\left( \cdot \cdot \cdot \right) \) if \(x {\geqq }1\).
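To illustrate how the cut-off acts, here is a minimal numerical sketch (entirely our own; it uses a discrete size grid, an explicit Euler time step, the Brownian kernel (1.9), a monomer source and a piecewise linear \(\zeta _{R_{*}}\) as placeholders) of the truncated evolution (3.1). Because the gain term is multiplied by \(\zeta _{R_{*}}\), no concentration is ever created at sizes above \(2R_{*}\), so the numerical support stays inside \([1,2R_{*}]\):

```python
import numpy as np

R_star, dt, steps = 50, 1e-3, 2000
sizes = np.arange(1, 4 * R_star + 1)                # discrete sizes 1, ..., 4 R_*
K = lambda x, y: 2.0 + (x / y) ** (1 / 3) + (y / x) ** (1 / 3)   # Brownian kernel (1.9)
zeta = np.clip((2 * R_star - sizes) / R_star, 0.0, 1.0)          # 1 on [1, R_*], 0 beyond 2 R_*
eta = np.zeros_like(sizes, dtype=float)
eta[0] = 1.0                                        # monomer source, s_1 = 1

Kmat = K(sizes[:, None], sizes[None, :])
n = np.zeros_like(eta)
for _ in range(steps):
    gain = np.zeros_like(n)
    for a in range(2, len(sizes) + 1):              # gain at size a from pairs (a - b, b)
        b = np.arange(1, a)
        gain[a - 1] = 0.5 * np.sum(Kmat[a - b - 1, b - 1] * n[a - b - 1] * n[b - 1])
    loss = n * (Kmat @ n)
    n = n + dt * (zeta * gain - loss + eta)

assert n[sizes > 2 * R_star].max() == 0.0           # support never leaves [1, 2 R_*]
```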

Assumption 3.1

Consider a fixed source term \(\eta \in {\mathcal {M}}_{+}\left( {{{\mathbb {R}}}}_*\right) \) and assume that \(L_\eta \ge 1\) satisfies (2.13). Suppose \(R_{*}\), \(a_1\), \(a_2\), and T are constants for which \(R_{*}>L_\eta \), \(0<a_{1}<a_{2}\), and \(T>0\). Suppose \(K:{ {\mathbb {R}}}_*^{2}\rightarrow {{\mathbb {R}}}_{+}\) is a continuous, non-negative, symmetric function such that \(K(x,y)\le a_2\) for all \(x,y\), and we also have \(K(x,y)\in [a_{1},a_{2}]\) for \((x,y)\in [1,2R_{*}]^{2}\), and \(K(x,y)=0\) if \(x{\geqq }4R_{*}\) or \(y{\geqq }4R_{*}\). Moreover, we assume that there is given a function \(\zeta _{R_{*}}\) such that \(\zeta _{R_{*}}\in C\left( {\mathbb {R}}_*\right) \), \(0\le \zeta _{R_{*}}\le 1\), \(\zeta _{R_{*}}\left( x\right) =1\) for \(0{\leqq }x{\leqq }R_{*}\), and \(\zeta _{R_{*}}\left( x\right) =0\) for \(x{\geqq }2R_{*}\).

We will now study measure-valued solutions of the regularized problem (3.1) in an integrated form. To this end, we use a fairly strong notion of continuous differentiability although uniqueness of the regularized problem might hold in a larger class. However, since we cannot prove uniqueness after the regularization has been removed, it is not a central issue here.

Definition 3.2

Suppose Y is a normed space, \(S\subset Y\), and \(T>0\). We use the notation \(C^1([0,T],S;Y)\) for the collection of maps \(f:\left[ 0,T\right] \rightarrow S\) such that f is continuous and there is \({\dot{f}}\in C([0,T],Y)\) for which the Fréchet derivative of f at any \(t\in \left( 0,T\right) \) is given by \({\dot{f}}(t)\).

We also drop the normed space Y from the notation if it is obvious from the context, in particular, if \(S={\mathcal {M}}_{+,b}(I)\) and \(Y=C_0(I)^*\) or \(Y=S\).

Clearly, if \(f\in C^1([0,T],S;Y)\), the function \({\dot{f}}\) above is unique and it can be found by requiring that for all \(t\in \left( 0,T\right) \)

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \frac{\Vert f(t+\varepsilon )-f(t)-\varepsilon {\dot{f}}(t)\Vert _Y}{|\varepsilon |} = 0\,, \end{aligned}$$

and then taking the left and right limits to obtain the values \({\dot{f}}(0)\) and \({\dot{f}}(T)\). What is sometimes relaxed in similar notations is the existence of the left and right limits.

Definition 3.3

Suppose that Assumption 3.1 holds. Consider some initial data \(f_{0}\in {\mathcal {M}}_{+}({{\mathbb {R}}}_*)\) for which \(f_{0}\left( \left( 0,1\right) \cup \left( 2R_{*},\infty \right) \right) =0\). Then \(f_0\in {\mathcal {M}}_{+,b}({{\mathbb {R}}}_*)\).

We will say that \(f\in {C^{1}(\left[ 0,T \right] ,{\mathcal {M}}_{+,b}({{\mathbb {R}}}_*))}\) satisfying \(f\left( \cdot ,0\right) =f_{0}\left( \cdot \right) \) is a time-dependent solution of (3.1) if the following identity holds for any test function \(\varphi \in C^{1}(\left[ 0,T\right] ,C_{c}\left( {{\mathbb {R}}}_*\right) )\) and all \(0<t<T\),

$$\begin{aligned}&\frac{d}{dt}\int _{{{{\mathbb {R}}}_*}}\varphi \left( x,t\right) f\left( \text {d}x,t\right) -\int _{{{{\mathbb {R}}}_*}}{\dot{\varphi }}\left( x,t\right) f\left( \text {d}x,t\right) \nonumber \\&\quad =\frac{1}{2}\int _{{{{\mathbb {R}}}_*}}\int _{{{{\mathbb {R}}}_*}}K\left( x,y\right) \left[ \varphi \left( x+y,t\right) \zeta _{R_{*}}\left( x+y\right) -\varphi \left( x,t\right) -\varphi \left( y,t\right) \right] \nonumber \\&\quad f\left( \text {d}x,t\right) f\left( \text {d}y,t\right) \nonumber \\&\qquad +\int _{{{{\mathbb {R}}}_*}}\varphi \left( x,t\right) \eta \left( \text {d}x\right) . \end{aligned}$$
(3.2)

Remark 3.4

Note that for any such solution f, by continuity of \(t\mapsto f(\cdot ,t)\) and compactness of [0, T], one automatically has

$$\begin{aligned} \sup \limits _{t\in [0,T]}\left( \int _{{{{\mathbb {R}}}_*}}f\left( \text {d}x,t\right) \right) <\infty \,, \end{aligned}$$
(3.3)

since \(\Vert f\Vert =f({{\mathbb {R}}}_*)\). Let us also point out that whenever \(\varphi \in C^{1}(\left[ 0,T\right] ,C_{c}\left( {{\mathbb {R}}}_*\right) )\) and \(f\in {C^{1}(\left[ 0,T\right] ,{\mathcal {M}}_{+,b}({{\mathbb {R}}}_*))}\), the map \(t\mapsto \int _{{{{\mathbb {R}}}_*}}\varphi \left( x,t\right) f\left( \text {d}x,t\right) \) indeed belongs to \(C^{1}(\left[ 0,T\right] ,{\mathbb {R}})\). Thus the derivative on the left hand side of (3.2) is defined in the usual sense and, in fact, it is equal to \(\int _{{{{\mathbb {R}}}_*}}\varphi \left( x,t\right) {\dot{f}}\left( \text {d}x,t\right) \). In addition, there is sufficient regularity that, after integrating (3.2) over the interval \(\left[ 0,t \right] \), we obtain

$$\begin{aligned}&\int _{{{{\mathbb {R}}}_*}}\varphi \left( x,t\right) f\left( \text {d}x,t\right) -\int _{{{{\mathbb {R}}}_*}}\varphi \left( x,0\right) f_{0}\left( \text {d}x\right) \nonumber \\&\quad =\int _{0}^{t}\text {d}s\int _{{{{\mathbb {R}}}_*}}{\dot{\varphi }}\left( x,s\right) f\left( \text {d}x,s\right) \nonumber \\&\qquad +\frac{1}{2}\int _{0}^{t}\text {d}s\int _{{{{\mathbb {R}}}_*}}\int _{{{{\mathbb {R}}}_*} }K\left( x,y\right) \left[ \varphi \left( x+y,s\right) \zeta _{R_{*}}\left( x+y\right) -\varphi \left( x,s\right) -\varphi \left( y,s\right) \right] \nonumber \\&\quad f\left( \text {d}x,s\right) f\left( \text {d}y,s\right) \nonumber \\&\qquad +\int _{0}^{t}\text {d}s\int _{{{{\mathbb {R}}}_*}}\varphi \left( x,s\right) \eta \left( \text {d}x\right) . \end{aligned}$$
(3.4)
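The differentiability claim above rests on the elementary splitting (our addition; for \(\varepsilon \ne 0\))

$$\begin{aligned}&\frac{1}{\varepsilon }\left( \int _{{{{\mathbb {R}}}_*}}\varphi \left( x,t+\varepsilon \right) f\left( \text {d}x,t+\varepsilon \right) -\int _{{{{\mathbb {R}}}_*}}\varphi \left( x,t\right) f\left( \text {d}x,t\right) \right) \\&\quad =\int _{{{{\mathbb {R}}}_*}}\frac{\varphi \left( x,t+\varepsilon \right) -\varphi \left( x,t\right) }{\varepsilon }f\left( \text {d}x,t+\varepsilon \right) +\int _{{{{\mathbb {R}}}_*}}\varphi \left( x,t\right) \frac{f\left( \text {d}x,t+\varepsilon \right) -f\left( \text {d}x,t\right) }{\varepsilon }\,, \end{aligned}$$

whose two terms converge, as \(\varepsilon \rightarrow 0\), to \(\int _{{{{\mathbb {R}}}_*}}{\dot{\varphi }}\left( x,t\right) f\left( \text {d}x,t\right) \) and \(\int _{{{{\mathbb {R}}}_*}}\varphi \left( x,t\right) {\dot{f}}\left( \text {d}x,t\right) \), respectively, by the assumed differentiability of \(\varphi \) and f together with the uniform bound (3.3); integrating the resulting identity in time then gives (3.4).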

We can also define weak stationary solutions of (3.1). It is straightforward to check that if \(F(\text {d}x)\) is a stationary solution, then \(f(\text {d}x ,t)=F(\text {d}x)\) is a solution to (3.4) with initial condition \(f_0(\text {d}x)=F(\text {d}x)\).

Definition 3.5

Suppose that Assumption 3.1 holds. We will say that \(f\in {{\mathcal {M}}_{+}({{\mathbb {R}}}_*)},\) satisfying \(f((0,1) \cup (2R_*,\infty ))=0\) is a stationary injection solution of (3.1) if the following identity holds for any test function \(\varphi \in C_{c}\left( {{\mathbb {R}}} _*\right) \):

$$\begin{aligned} 0= & {} \frac{1}{2}\int _{{{\mathbb {R}}}_*}\int _{{{\mathbb {R}}}_*}K\left( x,y\right) \left[ \varphi \left( x+y\right) \zeta _{R_{*}}\left( x+y\right) -\varphi \left( x\right) -\varphi \left( y\right) \right] f\left( \text {d}x\right) f\left( \text {d}y\right) \nonumber \\&+\,\int _{{{\mathbb {R}}}_*}\varphi \left( x\right) \eta \left( \text {d}x\right) . \end{aligned}$$
(3.5)

Proposition 3.6

Suppose that Assumption 3.1 holds. Then, for any initial condition \(f_{0}\) satisfying \(f_{0}\in {\mathcal {M}}_{+}({{\mathbb {R}}}_*)\), \( f_{0}\left( \left( 0,1\right) \cup \left( 2R_{*},\infty \right) \right) =0\) there exists a unique time-dependent solution \(f\in {C^{1}(\left[ 0,T\right] ,{\mathcal {M}}_{+,b}({{\mathbb {R}}}_*))}\) to (3.1) which solves it in the classical sense. Moreover, we have

$$\begin{aligned} f\left( \left( 0,1\right) \cup \left( 2R_{*},\infty \right) ,t\right) =0\, ,\quad \text {for }0{\leqq }t \le T\,, \end{aligned}$$
(3.6)

and the estimate

$$\begin{aligned} \int _{{{\mathbb {R}}}_*}f(\text {d}x,t){\leqq }\int _{{{\mathbb {R}}}_*}f_0(\text {d}x)+ C t \, ,\quad 0 {\leqq }t {\leqq }T \, \end{aligned}$$
(3.7)

holds for \(C = \int _{{{\mathbb {R}}}_*}\eta (\text {d}x)\ge 0\), which is independent of \(f_{0}\), t, and T.

Remark 3.7

We remark that the lower estimate \(K(x,y){\geqq }a_{1}>0\) will not be used in the proof of Proposition 3.6. However, this assumption will be used later in the proof of the existence of stationary injection solutions.

Proof

In this proof we skip some standard computations which may be found in [43, Section 5]. We define \({\mathcal {X}}_{R_{*}}=\left\{ f\in {\mathcal {M}}_{+}({{\mathbb {R}}} _*):f\left( \left( 0,1\right) \cup \left( 2R_{*},\infty \right) \right) =0\right\} \). Since \([1,2R_{*}]\) is compact, for any \(f\in {\mathcal {X}}_{R_{*}}\) we have \(f({{\mathbb {R}}}_*)<\infty \), and thus \({\mathcal {X}}_{R_{*}}\subset {\mathcal {M}}_{+,b}({\mathbb {R}}_*)\). For \(f\in {\mathcal {M}}_{+,b}({\mathbb {R}}_*)\), we clearly have \(f\in {\mathcal {X}}_{R_{*}}\) if and only if \(\int \varphi (x) f(d x)=0\) for all \(\varphi \in C_0({{\mathbb {R}}}_*)\) whose support lies in \(\left( 0,1\right) \cup \left( 2R_{*},\infty \right) \). Therefore, \({\mathcal {X}}_{R_{*}}\) is a closed subset both in the \(*-\)weak and norm topology of \(C_0({{\mathbb {R}}}_*)^*={\mathcal {M}}_{b}({{\mathbb {R}}}_*)\).

For the rest of this proof, we endow \({\mathcal {X}}_{R_{*}}\) with the norm topology which makes it into a complete metric space. We look for solutions f in the subset \(X:=C( \left[ 0,T\right] , {\mathcal {X}}_{R_{*}})\) of the Banach space \(C\left( \left[ 0,T\right] , {\mathcal {M}}_{b}({{\mathbb {R}}}_*)\right) \). The space X is endowed with the norm

$$\begin{aligned} \left\| f\right\| _{T}=\sup _{0{\leqq }t{\leqq }T}\left\| f\left( \cdot ,t\right) \right\| \,. \end{aligned}$$

By the uniform limit theorem, X is then also a complete metric space.

We now reformulate (3.1) as the following integral equation acting on \({\mathcal {X}}_{R_{*}}\): for \(0\le t\le T\), \(x\in {{\mathbb {R}}}_*\), and \(f\in X\) we first define a function

$$\begin{aligned} a\left[ f\right] \left( x,t\right) =\int _{{{\mathbb {R}}}_*}K\left( x,y\right) {f\left( \text {d}y,t\right) } \,, \end{aligned}$$
(3.8)

and using this we obtain a measure, written for convenience using the function notation,

$$\begin{aligned}&{\mathcal {T}}\left[ f\right] \left( x,t\right) := f_{0}\left( x\right) e^{-\int _{0}^{t}a\left[ f\right] \left( x, s\right) \text {d}s} +\eta \left( x\right) \int _{0}^{t}e^{-\int _{s}^{t}a\left[ f\right] \left( x,\xi \right) d\xi }\text {d}s \nonumber \\&\quad +\frac{\zeta _{R_{*}}(x) }{2}\int _{0}^{t}e^{-\int _{s}^{t}a\left[ f\right] \left( x,\xi \right) d\xi }\int _{0}^{x}K\left( x-y,y\right) f\left( x-y,s\right) f\left( y,s\right) \text {d}y \text {d}s \,. \end{aligned}$$
(3.9)

Notice that the definition (3.8) is indeed pointwise well defined and yields a function \((x,s)\mapsto a\left[ f\right] \left( x,s\right) \) which is continuous and non-negative for any \(f\in X\). Moreover, we claim that if \(f\in X\), then (3.9) defines a measure in \({\mathcal {M}}_{+}({{\mathbb {R}}} _*)\) for each \(t\in [0,T]\), and that in addition \({\mathcal {T}}\left[ f\right] \in X\). The only non-obvious term is the one on the right-hand side containing \(\int _{0}^{x}K\left( x-y,y\right) f\left( x-y,s\right) f\left( y,s\right) \text {d}y\). We first explain how this term defines a continuous linear functional on \(C_c\left( {{\mathbb {R}}}_*\right) \). Define \(g(x,s)=\frac{\zeta _{R_{*}}(x) }{2}e^{-\int _{s}^{t}a\left[ f\right] \left( x,\xi \right) d\xi }\), which is a jointly continuous function with \(g(x,s)=0\) if \(x\ge 2 R_{*}\). Given \(\varphi \in C_{c}\left( {{\mathbb {R}}}_*\right) \) we then set

$$\begin{aligned}&\left\langle \varphi ,\int _{0}^{t} g(\cdot ,s)\int _{0}^{\cdot }K\left( \cdot -y,y\right) f\left( \cdot -y,s\right) f\left( y,s\right) \text {d}y \text {d}s \right\rangle \nonumber \\&\quad = \int _{0}^{t} \int _{{{\mathbb {R}}}_*} \left[ \int _{{{\mathbb {R}}}_*}K\left( x,y\right) g(x+y,s) \varphi \left( x+y\right) f\left( \text {d}x,s\right) \right] f\left( \text {d}y,s\right) \text {d}s \, . \end{aligned}$$
(3.10)

Here the right-hand side of (3.10) is well defined since \( f\left( \cdot ,s\right) \in {\mathcal {X}}_{R_{*}}\) for each \(s\in \left[ 0,t\right] \). Moreover, it defines a continuous linear functional from \(C_c\left( {{\mathbb {R}}}_*\right) \) to \({\mathbb {R}}\), and thus is associated with a unique positive Radon measure. Finally, if \(\varphi (x)=0\) for \(1\le x\le 2 R_{*}\), then \(g(x+y,s) \varphi \left( x+y\right) =0\) whenever \(x+y{\geqq }1\); since \(x+y\ge 2\) on the support of \(f\left( \text {d}x,s\right) f\left( \text {d}y,s\right) \), the right hand side of (3.10) is zero. Therefore, the measure belongs to \({\mathcal {X}}_{R_{*}}\) for all t. Continuity in t follows straightforwardly.

The operator \({\mathcal {T}}\left[ \cdot \right] \) defined in (3.9) is thus a mapping from \(C([0,T],{\mathcal {X}}_{R_{*}})\) to \(C([0,T],{\mathcal {X}} _{R_{*}})\) for each \(T>0.\) We now claim that it is a contractive mapping from the complete metric space

$$\begin{aligned} {X}_{T} :=\left\{ f\in X \, |\, \left\| f-f_{0}\right\| _{T}{\leqq }1\right\} \end{aligned}$$

to itself if T is sufficiently small. This follows by means of standard computations using the assumption \(K\left( x,y\right) {\leqq }a_{2}\), as well as the inequality \(\left| e^{-x_{1}}-e^{-x_{2}}\right| {\leqq }\left| x_{1}-x_{2}\right| \) valid for \(x_{1}{\geqq }0,\ x_{2}{\geqq }0.\)

Therefore, there exists a unique solution of \(f={\mathcal {T}}\left[ f\right] \) in \({X}_{T}\) assuming that T is sufficiently small. Notice that \(f{\geqq }0\) by construction.

In order to show that the obtained solution can be extended to arbitrarily long times we first notice that if \(f={\mathcal {T}}\left[ f\right] \), then \(f\in C^{1}(\left[ 0,T\right] ,{\mathcal {X}}_{R_{*}})\) and the definition in (3.9) implies that f satisfies (3.1). Integrating this equation with respect to the x variable, we obtain the following estimates:

$$\begin{aligned}&\partial _{t}\left( \int _{{{\mathbb {R}}}_* }f\left( \text {d}x,t\right) \right) \nonumber \\&\quad {\leqq }\frac{1}{2}\int _{{{\mathbb {R}}}_* }f\left( \text {d}y,t\right) \int _{{ {\mathbb {R}}}_*}K\left( x,y\right) f\left( \text {d}x,t\right) -\int _{{{\mathbb {R}}}_* }f\left( \text {d}y,t\right) \int _{{{\mathbb {R}}}_* }K\left( x,y\right) f\left( \text {d}x,t\right) \nonumber \\&\qquad + \int _{{{\mathbb {R}}}_*}\eta \left( \text {d}x\right) \nonumber \\&\quad = -\frac{1}{2}\int _{{{\mathbb {R}}}_* }f\left( \text {d}y,t\right) \int _{{ {\mathbb {R}}}_*}K\left( x,y\right) f\left( \text {d}x,t\right) +\int _{{{\mathbb {R}}} _* }\eta \left( \text {d}x\right) \nonumber \\&\quad {\leqq }\int _{{{\mathbb {R}}}_*}\eta \left( \text {d}x\right) , \end{aligned}$$
(3.11)

whence (3.7) follows. We can then extend the solution to arbitrarily long times \(T>0\) using standard arguments. After this, the uniqueness of the solution in \(C^{1}(\left[ 0,T\right] ,{\mathcal {M}}_{+,b}({{\mathbb {R}}}_*))\) follows by a standard Grönwall estimate. \(\square \)
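For the reader's convenience we record, schematically, the form of the Grönwall estimate used in this last step (this sketch is ours): if f and g are two such solutions with the same initial data, the corresponding Duhamel formulas, the bound \(K\left( x,y\right) {\leqq }a_{2}\), the inequality \(\left| e^{-x_{1}}-e^{-x_{2}}\right| {\leqq }\left| x_{1}-x_{2}\right| \), and the a priori mass bound (3.7) yield an estimate of the form

$$\begin{aligned} \left\| f\left( \cdot ,t\right) -g\left( \cdot ,t\right) \right\| {\leqq }C\int _{0}^{t}\left\| f\left( \cdot ,s\right) -g\left( \cdot ,s\right) \right\| \text {d}s\,,\quad 0{\leqq }t{\leqq }T\,, \end{aligned}$$

with a constant C depending only on \(a_{2}\), T, \(\Vert f_{0}\Vert \), and \(\Vert \eta \Vert \); Grönwall's inequality then forces \(f=g\) on \(\left[ 0,T\right] \).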

Remark 3.8

Notice that using the inequality \(K(x,y){\geqq }a_{1}>0\) we can strengthen (3.11) into the estimate

$$\begin{aligned} \partial _{t}\left( \int _{{{\mathbb {R}}}_*}f\left( \text {d}x,t\right) \right) {\leqq }- \frac{a_{1}}{2}\left( \int _{{{\mathbb {R}}}_* }f\left( \text {d}x,t\right) \right) ^{2}+\int _{{{\mathbb {R}}}_*}\eta \left( \text {d}x\right) . \end{aligned}$$

Since the right hand side is negative whenever \(\int _{{{\mathbb {R}}}_*}f\left( \text {d}x,t\right) >\left( \frac{2}{a_{1}}\int _{{{\mathbb {R}}}_*}\eta \left( \text {d}x\right) \right) ^{\frac{1}{2}}\), a comparison argument yields an estimate stronger than (3.7), namely,

$$\begin{aligned} \int _{{{\mathbb {R}}}_* }f\left( \text {d}x,t\right) {\leqq }\max \left\{ \int _{ { {\mathbb {R}}}_*}f_{0}\left( \text {d}x\right) ,\left( \frac{2}{a_{1}}\int _{{{\mathbb {R}}} _*}\eta \left( \text {d}x\right) \right) ^{\frac{1}{2}} \right\} . \end{aligned}$$
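The following minimal numerical sketch (ours; it plays no role in the proofs) illustrates this a priori bound on a discrete, truncated analogue of (3.1) with a constant kernel and a pure monomer source; the sizes, kernel value, and source used below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Discrete, truncated analogue of the regularized equation (3.1):
# cluster sizes alpha = 1,...,N with N = 2*Rstar, constant kernel K = a1,
# a source of monomers only, and a cutoff zeta damping the gain term.
Rstar = 20
N = 2 * Rstar
a1 = 1.0                                        # constant kernel value (illustrative)
s = np.zeros(N)
s[0] = 1.0                                      # monomer source, total injection rate 1
alpha = np.arange(1, N + 1)
zeta = np.clip(2.0 - alpha / Rstar, 0.0, 1.0)   # zeta = 1 up to Rstar, 0 at 2*Rstar

def rhs(t, n):
    gain = np.zeros(N)
    for a in range(2, N + 1):                   # gain for clusters of size alpha = a
        b = np.arange(1, a)                     # beta = 1,...,a-1
        gain[a - 1] = 0.5 * a1 * np.dot(n[b - 1], n[a - b - 1]) * zeta[a - 1]
    loss = a1 * n * n.sum()                     # loss by coagulation with any cluster
    return gain - loss + s

sol = solve_ivp(rhs, (0.0, 50.0), np.zeros(N), rtol=1e-8, atol=1e-10)
total = sol.y.sum(axis=0)
print("max total number over time:", total.max())
print("bound of Remark 3.8       :", np.sqrt(2 * s.sum() / a1))
```

In this toy setting the same differential inequality as above holds for the total number of clusters, so the printed maximum stays below \(\sqrt{2}\approx 1.41\) (up to solver tolerance), in agreement with the bound of Remark 3.8.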

We now prove that the solutions obtained in Proposition 3.6 are weak solutions in the sense of Definition 3.3.

Proposition 3.9

Suppose that the assumptions in Proposition 3.6 hold. Then, the solution f obtained is a weak solution of (3.1) in the sense of Definition 3.3.

Proof

Multiplying (3.1) by a test function \(\varphi \in C^{1}\left( \left[ 0,T\right] ,C\left( {{\mathbb {R}}}_*\right) \right) \) with \(T>0\), and using the action of the convolution term on a test function as in (3.10), we obtain

$$\begin{aligned}&\int _{{{\mathbb {R}}}_*}\varphi \left( x,t\right) {\dot{f}}\left( \text {d}x,t\right) \nonumber \\&\quad =\frac{1}{2}\int _{{{\mathbb {R}}}_*}\int _{{{\mathbb {R}}}_*}K\left( x,y\right) \left[ \varphi \left( x+y,t\right) \zeta _{R_{*}}\left( x+y\right) \right. \nonumber \\&\quad \left. -\,\varphi \left( x,t\right) -\varphi \left( y,t\right) \right] f\left( \text {d}x,t\right) f\left( \text {d}y,t\right) \nonumber \\&\qquad +\int _{{{\mathbb {R}}}_*} {\varphi \left( x,t\right) } \eta \left( \text {d}x\right) . \end{aligned}$$
(3.12)

As mentioned earlier, the left-hand side can be rewritten as

$$\begin{aligned} \int _{{{\mathbb {R}}}_*}\varphi \left( x,t\right) {\dot{f}}\left( \text {d}x,t\right) =\frac{d}{dt}\int _{{{\mathbb {R}}}_*}\varphi \left( x,t\right) f\left( \text {d}x,t\right) -\int _{{{\mathbb {R}}}_*} {{\dot{\varphi }}} \left( x,t\right) f\left( \text {d}x,t\right) . \end{aligned}$$

Therefore, f satisfies (3.2) in Definition 3.3. \(\square \)

We will use in the following the dynamical system notation \(S\left( t\right) \) for the map

$$\begin{aligned} S\left( t\right) f_{0}=f\left( \cdot ,t\right) , \end{aligned}$$
(3.13)

where f is the solution of (3.1) obtained in Proposition 3.6. Note that by uniqueness \(S\left( t\right) \) has the following semigroup property:

$$\begin{aligned} S\left( t_{1}+t_{2}\right) =S\left( t_{1}\right) S\left( t_{2}\right) \text { for each }t_{1},t_{2}\in {\mathbb {R}}_+ . \end{aligned}$$
(3.14)

The operators \(S\left( t\right) \) define a mapping

$$\begin{aligned} S\left( t\right) :{\mathcal {X}}_{R_{*}}\rightarrow {\mathcal {X}}_{R_{*}}\ \text {for each }t{\geqq }0, \end{aligned}$$
(3.15)

where \({\mathcal {X}}_{R_{*}}=\left\{ f\in {\mathcal {M}}_{+}({{\mathbb {R}}} _*):f\left( \left( 0,1\right) \cup \left( 2R_{*},\infty \right) \right) =0\right\} \), as before.

We can now prove the following result:

Proposition 3.10

Under the assumptions of Proposition 3.6, there exists a stationary injection solution \({\hat{f}} \in {\mathcal {M}}_{+}({{\mathbb {R}}}_*)\) to (3.1) as defined in Definition 3.5.

Proof

We provide below a proof of the statement but skip over some standard technical computations. Further details about these technical estimates can be found in [43, Section 5].

We first construct an invariant region for the evolution equation (3.1). Let \(f_0 \in {\mathcal {X}}_{R_{*}} \) and set \(f(t)=S(t)f_0\). In particular, f satisfies (3.2). Let us then choose a time-independent test function \(\varphi \in C_{c}\left( {{\mathbb {R}}}_*\right) \) with \(\varphi (x)=1\) for \(1\le x{\leqq }2R_{*}\). Similarly to (3.11), and using the fact that \(f(\cdot ,t)\) has support in \([1,2 R_*]\), the lower bound for K implies the estimate

$$\begin{aligned} \frac{d}{dt}\int _{[1,2R_{*}]}f\left( \text {d}x,t\right) {\leqq }-\frac{a_{1}}{2} \left( \int _{[1,2R_{*}]}f\left( \text {d}x,t\right) \right) ^{2}+c_{0}, \end{aligned}$$

where \(c_{0}=\int _{{{\mathbb {R}}}_*}\eta \left( \text {d}x\right) \). As in Remark 3.8, inspecting the sign of the right hand side we then find that if we choose any \(M{\geqq }\sqrt{\frac{2c_{0}}{a_{1}}}\), then the following set is invariant under the time-evolution:

$$\begin{aligned} {\mathcal {U}}_{M}=\left\{ f\in {\mathcal {X}}_{R_{*}}:\int _{[1,2R_{*}]}f(\text {d}x){\leqq }M\right\} \, . \end{aligned}$$
(3.16)

Moreover, \({\mathcal {U}}_{M}\) is compact in the \(*-\)weak topology by the Banach–Alaoglu Theorem (cf. [5]), since it is the intersection of the \(*-\)weakly closed set \({\mathcal {X}}_{R_{*}}\) with the closed ball \(\Vert f\Vert \le M\).

Consider the operator \(S(t):{\mathcal {X}}_{R_{*}}\rightarrow {\mathcal {X}} _{R_{*}}\) defined in (3.13). We now endow \({\mathcal {X}}_{R_{*}}\) with the \(*-\)weak topology and prove that S(t) is continuous. Due to Proposition 3.9 we have that \(f(\cdot ,t)=S(t)f_{0}\) satisfies (3.4) for any test function \( \varphi \in C^{1}\left( \left[ 0,T\right] ,C_c\left( {{\mathbb {R}}}_* \right) \right) \), \(0{\leqq }t{\leqq }T\) with \(T>0\) arbitrary. Let \(f_{0},{\hat{f}} _{0}\in {\mathcal {X}}_{R_{*}}\). We write \(f(\cdot ,t)=S(t)f_{0}\) and \({\hat{f}}(\cdot ,t)=S(t){\hat{f}}_{0}.\) Using (3.4) and subtracting the corresponding equations for f and \({\hat{f}}\), we obtain

$$\begin{aligned}&\int _{{{\mathbb {R}}}_*}\varphi \left( x,t\right) (f\left( \text {d}x,t\right) - {\hat{f}}\left( \text {d}x,t\right) )-\int _{{{\mathbb {R}}}_*}\varphi \left( x,0\right) (f_{0}\left( \text {d}x\right) -{\hat{f}}_{0}\left( \text {d}x\right) ) \nonumber \\&\quad =\int _{0}^{t}\text {d}s\int _{{{\mathbb {R}}}_*}(f\left( \text {d}x,t\right) -{\hat{f}}\left( \text {d}x,t\right) )\left( {\dot{\varphi }} \left( x,s\right) +{\mathcal {L}}\left[ \varphi \right] \left( x,s\right) \right) , \end{aligned}$$
(3.17)

where

$$\begin{aligned}&\quad {\mathcal {L}}\left[ \varphi \right] \left( x,s\right) =\frac{1}{2}\int _{{ {\mathbb {R}}}_*}K\left( x,y\right) \left[ \varphi \left( x+y,s\right) \zeta _{R_{*}}\left( x+y\right) -\varphi \left( x,s\right) -\varphi \left( y,s\right) \right] \\&(f\left( \text {d}y,s\right) +\,{\hat{f}}\left( dy,s\right) ) \,. \end{aligned}$$

For the derivation of (3.17), we have used symmetry properties under the transformation \(x\leftrightarrow y\): clearly, \(K\left( x,y\right) \left[ \varphi \left( x+y,s\right) \zeta _{R_{*}}\left( x+y\right) -\varphi \left( x,s\right) -\varphi \left( y,s\right) \right] \) is symmetric, while \(\left[ f\left( \text {d}x,s\right) {\hat{f}}\left( \text {d}y,s\right) -f\left( \text {d}y,s\right) {\hat{f}}\left( \text {d}x,s\right) \right] \) is antisymmetric, and hence their product integrates to zero.
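To make the symmetrization fully explicit (our addition), note the elementary identity

$$\begin{aligned} f\otimes f-{\hat{f}}\otimes {\hat{f}}=\frac{1}{2}\left[ \left( f-{\hat{f}}\right) \otimes \left( f+{\hat{f}}\right) +\left( f+{\hat{f}}\right) \otimes \left( f-{\hat{f}}\right) \right] \,, \end{aligned}$$

and observe that, by the \(x\leftrightarrow y\) symmetry of the integrand, both terms on the right give equal contributions; this is precisely what produces the factor \(f\left( \text {d}y,s\right) +{\hat{f}}\left( \text {d}y,s\right) \) in \({\mathcal {L}}\left[ \varphi \right] \).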

Consider then an arbitrary \(\psi \in C_c\left( {{\mathbb {R}}}_*\right) \). We claim that there is a test function \(\varphi \in C^{1}\left( \left[ 0,t\right] ,C_{c}\left( { {\mathbb {R}}}_*\right) \right) \) such that

$$\begin{aligned} {\dot{\varphi }} \left( x,s\right) +{\mathcal {L}}\left[ \varphi \right] \left( x,s\right) =0\ \ \text {for }0{\leqq }s{\leqq }t\, , \ { x\ge 1}\,, \quad \text {with\ \ }\varphi \left( \cdot ,t\right) =\psi \left( \cdot \right) . \end{aligned}$$
(3.18)

Given such a function \(\varphi \), since f and \({\hat{f}}\) have no support on (0, 1), equation (3.17) implies

$$\begin{aligned} \int _{{{\mathbb {R}}}_*}\psi \left( x\right) (f\left( \text {d}x,t\right) -{\hat{f}} \left( \text {d}x,t\right) )=\int _{{{\mathbb {R}}}_*}\varphi \left( x,0\right) (f_{0}\left( \text {d}x\right) -{\hat{f}}_{0}\left( \text {d}x\right) ) \,. \end{aligned}$$
(3.19)

Therefore, if such a function \(\varphi \) exists for every \( \psi \in C_c\left( {{\mathbb {R}}}_*\right) \), then the quantity at time t, \(\left| \int _{{{\mathbb {R}}}_*}\psi \left( x\right) (f\left( \text {d}x,t\right) -{\hat{f}}\left( \text {d}x,t\right) )\right| \), becomes arbitrarily small whenever the quantity at time 0, \(\left| \int _{[1, {2R_{*}}]}\varphi \left( x,0\right) (f_{0}\left( \text {d}x\right) -{\hat{f}}_{0}\left( \text {d}x\right) )\right| \), is made sufficiently small. In particular, this property can be used to prove that whenever \(f(t)=S(t)f_0\) belongs to a \(*-\)weak open set U, one can find a \(*-\)weak open neighbourhood V of \(f_0\) such that \(S(t){\hat{f}}_{0} \in U\) for any \({\hat{f}}_{0}\in V\). Hence, the \(*-\)weak continuity of \( S\left( t\right) \) would then follow.

In order to conclude the proof of the continuity of S(t) in the \(*-\) weak topology it only remains to prove the existence of \(\varphi \in C^{1}\left( \left[ 0,t\right] ,C_{c}\left( { {\mathbb {R}}}_*\right) \right) \) satisfying (3.18) for a fixed \( \psi \in C_c\left( {{\mathbb {R}}}_*\right) \). First, let us choose \(a\in (0,1)\) and \(b\ge 4R_{*}\) so that the support of \(\psi \) is contained in \(I_0:=[a,b]\). We now construct \(\varphi \) as a solution to an evolution equation in the Banach space \(Y:= \left\{ h\in C({{\mathbb {R}}}_*)\,:\, h(x)=0\text { if }x\le a\text { or }x\ge b\right\} \) which is a closed subspace of \(C_0({{\mathbb {R}}}_*)\).

More precisely, we now look for solutions \(\varphi \in {\tilde{X}}:= C([0,t],Y)\), endowed with the weighted norm \(\Vert \varphi \Vert _M := \sup _{x\in {{\mathbb {R}}}_*,\, s\in [0,t]}|\varphi (x,s)| \mathrm{e}^{M (s-t)}\). The parameter \(M>0\) is chosen sufficiently large, as explained later.

Clearly, \(\psi \in Y\). To regularize the small values of x, we choose a function \(\phi _a\in C({{\mathbb {R}}}_*)\) such that \(0\le \phi _a\le 1\), \(\phi _a(x)=0\) if \(x\le a\), and \(\phi _a(x)=1\) if \(x\ge 1\), and then define

$$\begin{aligned} \tilde{{\mathcal {L}}}[\varphi ](x,s) := \phi _a(x) {\mathcal {L}}[\varphi ](x,s)\,,\quad x>0\,,\ 0\le s\le t\,. \end{aligned}$$

Now, if \(\varphi \in {\tilde{X}}\), we have \(\tilde{{\mathcal {L}}}[\varphi ](x,s)=0\) both if \(x\le a\) (due to the factor \(\phi _a\)) and if \(x\ge b\), since \(K(x,y)=0\) if \(x\ge b \ge 4 R_*\). In addition, the assumptions guarantee that \(x\mapsto {\mathcal {L}}[\varphi ](x,s)\) is continuous, so we find that \( \tilde{{\mathcal {L}}}[\varphi ](\cdot ,s)\in Y\) for any fixed s.

We look for solutions \(\varphi \) as fixed points satisfying \(\varphi ={\mathcal {A}}[\varphi ]\), where

$$\begin{aligned} {\mathcal {A}}[\varphi ](x,s) := \psi (x) + \int _s^t \tilde{{\mathcal {L}}}[\varphi ](x,s') \text {d}s'\,, \quad x>0\,,\ 0\le s\le t\,. \end{aligned}$$

A straightforward computation, using the uniform bounds of total variation norms of f and \({\hat{f}}\), shows that \({\mathcal {A}}\) is a map from \({\tilde{X}}\) to itself. In addition, since

$$\begin{aligned} |{\mathcal {A}}[\varphi _1](x,s)-{\mathcal {A}}[\varphi _2](x,s)| \le \frac{3a_2}{2} \left( \Vert f\Vert _t+\Vert {\hat{f}}\Vert _t \right) \Vert \varphi _1-\varphi _2\Vert _{M} \int _s^t \mathrm{e}^{-M(s'-t)} \text {d}s'\,, \end{aligned}$$

we find that \({\mathcal {A}}\) is also a contraction if we fix M so that \(M> \frac{3a_2}{2} \left( \Vert f\Vert _t+\Vert {\hat{f}}\Vert _t \right) \). Thus by the Banach fixed point theorem, there is a unique \(\varphi \in {\tilde{X}}\) such that \(\varphi ={\mathcal {A}}[\varphi ]\). This choice satisfies (3.18), at least for \(x\ge 1\), and hence completes the proof of continuity of S(t).
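For completeness (our addition), the contraction property follows from the elementary computation

$$\begin{aligned} \mathrm{e}^{M(s-t)}\int _s^t \mathrm{e}^{-M(s'-t)} \text {d}s' =\frac{1-\mathrm{e}^{-M(t-s)}}{M}{\leqq }\frac{1}{M}\,, \end{aligned}$$

which, combined with the displayed bound above, gives \(\Vert {\mathcal {A}}[\varphi _1]-{\mathcal {A}}[\varphi _2]\Vert _{M}\le \frac{3a_2}{2M} \left( \Vert f\Vert _t+\Vert {\hat{f}}\Vert _t \right) \Vert \varphi _1-\varphi _2\Vert _{M}\), and the right hand side is strictly smaller than \(\Vert \varphi _1-\varphi _2\Vert _{M}\) precisely under the stated choice of M.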

We next prove that also \(t\mapsto S\left( t\right) f_{0}\) is continuous in the \(*-\)weak topology. Let \(t_{1},t_{2}\in \left[ 0,T \right] \) with \(t_{1}<t_{2}.\) Let \(\varphi \in C_{c}\left( {{\mathbb {R}}} _*\right) .\) Using (3.4) we obtain:

$$\begin{aligned}&\int _{{{{\mathbb {R}}}_*}}\varphi \left( x\right) \left[ f\left( \text {d}x,t_{2}\right) -f\left( \text {d}x,t_{1}\right) \right] \\&\quad =\frac{1}{2}\int _{t_{1}}^{t_{2}}\text {d}s\int _{{{{\mathbb {R}}}_*}}\int _{{{\mathbb { R}}_*}}K\left( x,y\right) \left[ \varphi \left( x+y\right) \zeta _{R_{*}}\left( x+y\right) -\varphi \left( x\right) -\varphi \left( y\right) \right] \\&\qquad f\left( \text {d}x,s\right) f\left( dy,s\right) +\int _{t_{1}}^{t_{2}}\text {d}s\int _{{{{\mathbb {R}}}_*}}\varphi \left( x\right) \eta \left( \text {d}x\right) . \end{aligned}$$

Thus using the bound \(\left\| f\right\| _{T}<\infty \) we obtain

$$\begin{aligned} \left| \int _{{{{\mathbb {R}}}_*}}\varphi \left( x\right) \left[ f\left( \text {d}x,t_{2}\right) -f\left( \text {d}x,t_{1}\right) \right] \right| {\leqq }C \left( t_{2}-t_{1}\right) \Vert \varphi \Vert \,, \end{aligned}$$
(3.20)

where the constant C does not depend on \(t_1\), \(t_2\) or \(\varphi \). Therefore, the mapping \(t\mapsto S\left( t\right) f_{0}\) is continuous in the \(*-\)weak topology.

We can now conclude the proof of Proposition 3.10. As proven above, for any fixed t, the operator \(S(t):{\mathcal {U}}_{M}\rightarrow {\mathcal {U}}_{M}\) is continuous and \({\mathcal {U}}_{M}\) is convex and compact when endowed with the \(*-\)weak topology. By the Schauder fixed point theorem, for every \( \delta >0\) there exists a fixed point \({\hat{f}}_{\delta }\) of \(S(\delta )\) in \({\mathcal {U}}_{M}\). In addition, \({\mathcal {U}}_{M}\) is metrizable and hence sequentially compact. As shown in [18, Theorem 1.2], these properties imply that there is \({\hat{f}}\) such that \(S(t){\hat{f}} = {\hat{f}}\) for all t. Thus \({\hat{f}}\) is a stationary injection solution to (3.1). \(\square \)

We now prove Theorem 2.3.

Proof of Theorem 2.3 (existence)

Given a kernel K(x,y) satisfying (2.11), (2.12), it can be rewritten as

$$\begin{aligned} K\left( x,y\right) =\left( x+y\right) ^{\gamma }\Phi \left( \frac{x}{x+y},x\right) , \end{aligned}$$
(3.21)

where

$$\begin{aligned} \frac{C_{1}}{s^{p} \left( 1-s\right) ^{p}} {\leqq }\Phi \left( s,x\right) {\leqq }\frac{C_{2}}{s^{p}\left( 1-s\right) ^{p}},\quad {(s,x)\in (0,1)\times {{\mathbb {R}}}_{*}}, \end{aligned}$$
(3.22)

with \(p=\max \left\{ \lambda ,-\left( \gamma +\lambda \right) \right\} \) and the constants \(C_1>0\), \(C_2<\infty \) independent of x. Notice that the dependence of the function \(\Phi \) on x is due to the fact that we are not assuming the kernel K(x,y) to be a homogeneous function.

By definition of p, we have \(\gamma + 2 p =|\gamma +2\lambda |\ge 0\), and thus always \(p {\geqq }-\frac{\gamma }{2}\). On the other hand, by assumption, \(|\gamma +2\lambda |<1\), and thus also \(p<\frac{1-\gamma }{2}\). Conversely, we observe that kernels of the form (3.21) satisfying (3.22) with \(p {\geqq }-\frac{\gamma }{2}\) also satisfy (2.11), (2.12).
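For the reader's convenience (our addition), the identity used here is simply

$$\begin{aligned} \gamma +2p=\gamma +2\max \left\{ \lambda ,-\left( \gamma +\lambda \right) \right\} =\max \left\{ \gamma +2\lambda ,-\left( \gamma +2\lambda \right) \right\} =\left| \gamma +2\lambda \right| \,. \end{aligned}$$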

We use two levels of truncations. First, given \(\varepsilon \) with \(0<\varepsilon <1\) we define

$$\begin{aligned} K_{\varepsilon }\left( x,y\right) =\min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y}, x\right) +\varepsilon \,, \end{aligned}$$
(3.23)

where \(\Phi _\varepsilon \) is smooth, non-negative, and bounded by \(\frac{A}{\varepsilon ^{\sigma }}\) everywhere, and satisfies

$$\begin{aligned} \Phi _{\varepsilon }\left( s, x\right) = {\left\{ \begin{array}{ll} \Phi \left( s, x\right) \,, &{} \text {if }\Phi \left( s, x\right) {\leqq }\frac{A}{\varepsilon ^{\sigma }}\,,\\ 0\,, &{} \text {if }\Phi \left( s, x\right) {\geqq }\frac{2A}{\varepsilon ^{\sigma }}\,. \end{array}\right. } \end{aligned}$$
(3.24)

Here A is a large constant independent of \(\varepsilon \); we take \(A=1\) when \(\Phi \) is unbounded and, when \(\Phi \) is bounded, we take A sufficiently large in a way that will become clear in the proof. Concerning \(\sigma \), we take \(\sigma =0\) if \(p{\leqq }0\) (for any \(\gamma \)); \(\sigma >0\) arbitrarily small if \(p>0\) and \(\gamma {\leqq }0\); and \(0< \sigma <\frac{p}{\gamma }\) if \(p>0\) and \(\gamma >0\). We then have

$$\begin{aligned} 0{\leqq }\Phi _{\varepsilon }\left( s, x\right) {\leqq }C_{2}\min \left\{ \frac{1}{s^{\lambda }}\frac{1}{\left( 1-s\right) ^{\lambda }}+s^{\gamma +\lambda }\left( 1-s\right) ^{\gamma +\lambda },\frac{A}{C_{2}\varepsilon ^{\sigma }}\right\} \,. \end{aligned}$$
(3.25)

The second level of truncation is to define

$$\begin{aligned} K_{\varepsilon ,R_{*}}\left( x,y\right) =K_{\varepsilon }\left( x,y\right) \omega _{R_{*}}\left( x,y\right) , \end{aligned}$$
(3.26)

where \(\omega _{R_{*}}\in C^{\infty }_{0}({{\mathbb {R}}}^2_{+})\), \(0\le \omega _{R_*}\le 1\), and

$$\begin{aligned} \omega _{R_{*}}(x,y) = {\left\{ \begin{array}{ll} 1\,, &{}\text {if } (x,y)\in [0,2R_{*} ]^2\,, \\ 0\,, &{}\text {if } x{\geqq }4 R_{*} \text { or } y{\geqq }4R_{*} \,. \end{array}\right. } \end{aligned}$$

Notice that, if \(\gamma {\leqq }0\), the truncation \(\min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \) in (3.23) does not have any effect, because we are only interested in the region where \(x{\geqq }1\) and \(y{\geqq }1\), due to the fact that the solutions we construct satisfy \(f((0,1))=0\).

By Proposition 3.10, for every \(\varepsilon \) and \(R_*\) there exists a stationary injection solution \(f_{\varepsilon ,R_{*}}\) satisfying

$$\begin{aligned}&\frac{1}{2}\int _{{{{\mathbb {R}}}_{*}}}\int _{{{{\mathbb {R}}}_{*}}}K_{\varepsilon ,R_{*}}\left( x,y\right) \left[ \varphi \left( x+y\right) \zeta _{R_{*}}\left( x+y\right) -\varphi \left( x\right) -\varphi \left( y\right) \right] f_{\varepsilon ,R_{*}}\left( \text{ d }x\right) \nonumber \\&\quad f_{\varepsilon ,R_{*}}\left( dy\right) +\int _{{{{\mathbb {R}}}_{*}}}\varphi \left( x\right) \eta \left( \text{ d }x\right) =0\, , \end{aligned}$$
(3.27)

for any test function \(\varphi \in {C}_{c}{({{\mathbb {R}}}_{*})}\). As in the proof of Lemma 2.8 consider any \(z,\delta >0\) and take \(\chi _\delta \in C^\infty ({{\mathbb {R}}}_+)\) satisfying \(\chi _\delta (x) = 1\) if \(x {\leqq }z\), and \(\chi _\delta (x) = 0\) if \(x {\geqq }z+\delta \). Then \(\varphi (x) = x \chi _\delta (x) \) is a valid non-negative test function. Since \(\zeta _{R_*}\le 1\), we may employ the inequality \(\varphi \left( x+y\right) \zeta _{R_{*}}\left( x+y\right) \le \varphi \left( x+y\right) \) in (3.27), and conclude that for these test functions

$$\begin{aligned}&\frac{1}{2}\int _{{{{\mathbb {R}}}_{*}}}\int _{{{{\mathbb {R}}}_{*}}}K_{\varepsilon ,R_{*}}\left( x,y\right) \left[ \varphi \left( x+y\right) -\varphi \left( x\right) -\varphi \left( y\right) \right] f_{\varepsilon ,R_{*}}\left( \text {d}x\right) f_{\varepsilon ,R_{*}}\left( dy\right) \\&\quad +\,\int _{{{{\mathbb {R}}}_{*}}}\varphi \left( x\right) \eta \left( \text {d}x\right) \ge 0\,. \end{aligned}$$

Using the equalities derived in the proof of Lemma 2.8 and taking \(\delta \rightarrow 0\) then proves that

$$\begin{aligned} \int _{\left( 0,z\right] }\int _{( z-x,\infty ) }K_{\varepsilon ,R_{*}}\left( x,y\right) xf_{\varepsilon ,R_{*}}\left( \text {d}x\right) f_{\varepsilon ,R_{*}}\left( dy\right) \le \int _{(0,z]}x\eta \left( \text {d}x\right) \,, \quad \text {for }z>0\, . \end{aligned}$$
(3.28)

A lower bound for the left hand side and an upper bound for the right hand side of (3.28), both independent of \(R_{*}\), are computed next. Since \({\mathrm{supp}} \eta \subset [1,L_\eta ]\) and \(\Vert \eta \Vert \) is finite, we have

$$\begin{aligned} \int _{(0,z]} x\eta \left( \text {d}x\right) {\leqq }\int _{[1,L_\eta ]} x\eta \left( \text {d}x\right) =:c, \end{aligned}$$
(3.29)

where c is a constant independent of \(R_{*}\), bounded by \(L_{\eta } \Vert \eta \Vert \). On the other hand, we have \(K_{\varepsilon ,R_{*}}\left( x,y\right) {\geqq }\varepsilon >0\) for \(\left( x,y\right) \in \left[ 1,2 R_{*}\right] ^{2}.\) Then,

$$\begin{aligned} \varepsilon \int _{\left( 0,z\right] }\int _{(z-x,2 R_{*}] }xf_{\varepsilon ,R_{*}}\left( \text {d}x\right) f_{\varepsilon ,R_{*}}\left( dy\right) {\leqq }c\quad \text {if }0<z\le 2 R_*\, . \end{aligned}$$

Using that here

$$\begin{aligned}{}[2z/3,z]^{2}\subset \left\{ \left( x,y\right) \in {\mathbb {R}}_{+}^{2}:0< x{\leqq }z,\ z-x< y{\leqq }2 R_* \right\} \, , \end{aligned}$$

we obtain

$$\begin{aligned} \varepsilon \iint _{[2z/3,z]^{2}}xf_{\varepsilon ,R_{*}}(\text {d}x)f_{\varepsilon ,R_{*}}(dy){\leqq }c\quad \text {if }0<z\le 2 R_*\, . \end{aligned}$$

Since \(x{\geqq }2z/3\) in the domain of integration, we obtain

$$\begin{aligned} 2z/3 \left( \int _{[2z/3,z]}f_{\varepsilon ,R_{*}}(\text {d}x)\right) ^{2}{\leqq }\frac{c}{\varepsilon }, \end{aligned}$$

which implies that

$$\begin{aligned} \frac{1}{z}\int _{[2z/3,z]}f_{\varepsilon ,R_{*}}(\text {d}x){\leqq }\frac{C_{\varepsilon }}{z^{3/2}},\ \ \ 0< z{\leqq }2 R_{*}, \end{aligned}$$
(3.30)

where \(C_{\varepsilon }\) is a constant which depends on \(\varepsilon \) but is independent of \(R_{*}\); for instance, the computation above allows the choice \(C_{\varepsilon }=\left( \frac{3c}{2\varepsilon }\right) ^{1/2}\). Since the right hand side is integrable on \([1,2R_{*}]\), Lemma 2.10 may be employed to obtain a bound

$$\begin{aligned} \int _{[1,2 R_*]}f_{\varepsilon ,R_{*}}(\text {d}x){\leqq }\frac{2 C_\varepsilon }{\ln (3/2)} + C_\varepsilon \frac{1}{\sqrt{2 R_{*}}}\,. \end{aligned}$$
(3.31)

Since the support of \(f_{\varepsilon ,R_{*}}\) lies in \([1,2 R_{*}]\) we find that for all \(R_{*}\ge 1\)

$$\begin{aligned} \int _{{{\mathbb {R}}}_*}f_{\varepsilon ,R_{*}}(\text {d}x){\leqq }{\bar{C}}_{\varepsilon }\,, \end{aligned}$$
(3.32)

where \({\bar{C}}_{\varepsilon }\) is a constant independent of \(R_{*}\). Repeating the same argument with an arbitrary lower limit \(y\in [1,2R_{*}]\), we also obtain the decay bound

$$\begin{aligned} \int _{[y,\infty )}f_{\varepsilon ,R_{*}}(\text {d}x){\leqq }\bar{C}_{\varepsilon } y^{-\frac{1}{2}}\,, \end{aligned}$$
(3.33)

which obviously extends to \(y> 2R_{*}\) since \(f_{\varepsilon ,R_{*}}((2R_{*},\infty ))=0\).

Thus, estimate (3.32) implies that for each \(\varepsilon \) the family of solutions \(\{f_{\varepsilon ,R_{*}}\}_{R_{*}\ge 1}\) is contained in a closed ball of radius \({\bar{C}}_{\varepsilon }\) in \({\mathcal {M}}_{+,b}\left( {\mathbb {R}}_*\right) \). This is a sequentially compact set in the \(*-\)weak topology, and thus by taking a subsequence if needed, we can find \(f_{\varepsilon }\in {\mathcal {M}}_{+,b}\left( {\mathbb {R}}_*\right) \) such that \(f_{\varepsilon }\left( \left( 0,1\right) \right) =0\) and

$$\begin{aligned} f_{\varepsilon ,R_{*}^{n}}\rightharpoonup f_{\varepsilon }\text { as } n\rightarrow \infty \text { in the }*-\text {weak topology} \end{aligned}$$
(3.34)

with \(R_{*}^{n}\rightarrow \infty \) as \(n\rightarrow \infty \). Note that then we can use the earlier “step-like” test functions and the bounds (3.32) and (3.33) to conclude that also the limit measure satisfies similar estimates, namely,

$$\begin{aligned} \int _{\left( 0,\infty \right) }f_{\varepsilon }(\text {d}x){\leqq }\bar{C}_{\varepsilon }\,, \qquad \int _{\left[ y,\infty \right) }f_{\varepsilon }(\text {d}x){\leqq }{\bar{C}}_{\varepsilon }y^{-\frac{1}{2}}\,, \quad \text {if } y\ge 1\,. \end{aligned}$$
(3.35)

Consider next a fixed test function \(\varphi \in C_{c}({{\mathbb {R}}}_*)\). Now for all large enough values of n, we have \(\varphi \left( x+y\right) \zeta _{R^n_{*}}\left( x+y\right) =\varphi \left( x+y\right) \) everywhere, since the support of \(\varphi \) is bounded. We claim that as \(n\rightarrow \infty \), the limit of (3.27) is given by

$$\begin{aligned} \frac{1}{2}\int _{{{\mathbb {R}}}_*^{2}}K_{\varepsilon }\left( x,y\right) [\varphi (x+y)-\varphi (x)-\varphi (y)]f_{\varepsilon }\left( \text {d}x\right) f_{\varepsilon }\left( dy\right) +\int _{{{\mathbb {R}}}_*}\varphi (x)\eta \left( \text {d}x\right) =0. \end{aligned}$$
(3.36)

Since \(f_{\varepsilon ,R_{*}^{n}}\) has support in \([1,2 R_{*}^{n}]\), it follows that we may always replace \(K_{\varepsilon ,R_{*}^{n}}(x,y)\) in (3.27) by \(K_{\varepsilon }(x,y)\) without altering the value of the integral. By the above observations, it suffices to show that

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{{{\mathbb {R}}}_*^{2}}\phi ( x,y) \mu _n(\text {d}x)\mu _n(dy) = \int _{{{\mathbb {R}}}_*^{2}}\phi ( x,y) f_{\varepsilon }\left( \text {d}x\right) f_{\varepsilon }\left( dy\right) \, , \end{aligned}$$
(3.37)

for \(\mu _n(\text {d}x):=f_{\varepsilon ,R^n_{*}}\left( \text {d}x\right) \) and

$$\begin{aligned} \phi ( x,y) := K_{\varepsilon }\left( x,y\right) [\varphi (x+y)-\varphi (x)-\varphi (y)]\,. \end{aligned}$$

Note that although \(\phi \in C_b({{\mathbb {R}}}_*^2)\), it typically would not have compact support. However, the earlier tail estimates suffice to control the large values of x, y, as we show in detail next.

We prove (3.37) by showing that every subsequence has a further subsequence along which the limit holds. For notational convenience, let \(\mu _n\) denote the first subsequence and consider an arbitrary \(\varepsilon '>0\). We first regularize the support of \(\phi \) by choosing a function \(g:{{\mathbb {R}}}_+\rightarrow [0,1]\) which is continuous and for which \(g(r)=1\), for \(r\le 1\), and \(g(r)=0\), for \(r\ge 2\). We set \(\phi _M(x,y) := g\left( \frac{x}{M}\right) g\left( \frac{y}{M}\right) g\left( \frac{1}{M x}\right) g\left( \frac{1}{M y}\right) \phi (x,y)\). Then for every M, we have \(\phi _M\in C_c({{\mathbb {R}}}_*^2)\) and thus it is uniformly continuous. By (3.35), we may use the dominated convergence theorem to conclude that \(\int \phi _M(x,y)f_{\varepsilon }\left( \text {d}x\right) f_{\varepsilon }\left( dy\right) \rightarrow \int \phi (x,y)f_{\varepsilon }\left( \text {d}x\right) f_{\varepsilon }\left( dy\right) \) as \(M\rightarrow \infty \). Thus for all sufficiently large M, we have \(\big |\int \phi _M(x,y)f_{\varepsilon }\big ( \text {d}x\big ) f_{\varepsilon }\big ( dy\big )- \int \phi (x,y)f_{\varepsilon }\big ( \text {d}x\big ) f_{\varepsilon }( dy)\big |<\varepsilon '\). On the other hand, by the decay bound in (3.33) we can find a constant C which does not depend on \(R_*\) and for which \(\big |\int \phi _M(x,y)\mu _n(\text {d}x)\mu _n(dy)-\int \phi (x,y)\mu _n(\text {d}x)\mu _n(dy)\big |\le C M^{-\frac{1}{2}}\). We fix \(M=M(\varepsilon ')\) to be a value such that also this second bound is less than \(\varepsilon '\) for all n.

In order to conclude the proof of (3.37) it only remains to show that \(\int \phi _{M}\left( x,y\right) \mu _{n}\left( \text {d}x\right) \mu _{n}\left( dy\right) \) converges to \(\int \phi _{M}\left( x,y\right) f_{\varepsilon }\left( \text {d}x\right) f_{\varepsilon }\left( dy\right) \) as \(n\rightarrow \infty .\) This is just a consequence of the fact that the convergence \(\mu _{n}\left( \text {d}x\right) \rightharpoonup f_{\varepsilon }\left( \text {d}x\right) \) as \(n\rightarrow \infty \) in the \(*-\)weak topology of the space \({\mathcal {M}} _{+,b}\left( \left[ \frac{1}{2M},2M\right] \right) \) implies the convergence \(\mu _{n}\left( \text {d}x\right) \mu _{n}\left( dy\right) \rightharpoonup f_{\varepsilon }\left( \text {d}x\right) f_{\varepsilon }\left( dy\right) \) as \(n\rightarrow \infty \) in the \(*-\)weak topology of the space \({\mathcal {M}}_{+,b}\left( \left[ \frac{1}{2M},2M\right] ^{2}\right) .\) This result can be found for instance in [2, Theorem 3.2] for probability measures; the case of arbitrary bounded measures follows by a simple rescaling argument.

Since \(f_\varepsilon \) is then a stationary solution to (1.3) with \(K=K_{\varepsilon }\), we can apply Lemma 2.8 directly, and conclude that

$$\begin{aligned} \int _{\left( 0,z\right] }xf_{\varepsilon }\left( \text {d}x\right) \int _{( z-x,\infty ) }K_{\varepsilon }\left( x,y\right) f_{\varepsilon }\left( dy\right) {\leqq }c\quad \text {if }z>0\, , \end{aligned}$$
(3.38)

where c is defined in (3.29) and is independent of \(\varepsilon \). We now observe that (3.22) and (3.23)–(3.24) imply for all sufficiently small \(\varepsilon \)

$$\begin{aligned} K_{\varepsilon }(x,y){\geqq }\varepsilon +C_{0}\min \{z^{\gamma },\frac{1}{\varepsilon }\} \quad \text {for}\ \ (x,y)\in \left[ \frac{z}{2},z\right] ^2 \end{aligned}$$

where \(C_{0}>0\) is independent of \(\varepsilon \) and we used that \(\frac{x}{x+y}\in \left[ \frac{1}{3}, \frac{2}{3}\right] \). Combining this estimate with (3.38) as well as the fact that

$$\begin{aligned}{}[2z/3,z]^{2}\subset \left\{ \left( x,y\right) \in {\mathbb {R}}_{+}^{2}:0< x{\leqq }z,\ z-x< y<\infty \right\} \ \end{aligned}$$

we obtain

$$\begin{aligned} \left( \varepsilon +C_{0}\min \{z^{\gamma },\frac{1}{\varepsilon }\} \right) \frac{2}{3} z \left( \int _{[2z/3,z]}f_{\varepsilon }(\text {d}x)\right) ^{2}{\leqq }c\quad \text {for all }z\in (0,\infty ). \end{aligned}$$

Therefore, we obtain the following estimates for the measures \(f_{\varepsilon }\left( \text {d}x\right) \):

$$\begin{aligned}&\frac{1}{z}\int _{\left[ \frac{2z}{3},z\right] }f_{\varepsilon }\left( \text {d}x\right) {\leqq }\frac{{\tilde{C}}}{z^{\frac{3}{2}}}\left( \frac{1}{\min \left( z^{\gamma },\frac{1}{\varepsilon }\right) }\right) ^{\frac{1}{2}}, \quad \text {for all }z\in (0,\infty )\,, \end{aligned}$$
(3.39)
$$\begin{aligned}&\frac{1}{z}\int _{\left[ \frac{2z}{3},z\right] }f_{\varepsilon }\left( \text {d}x\right) {\leqq }\frac{{\tilde{C}}}{z^{\frac{3}{2}}\sqrt{\varepsilon }}, \quad \text {for all }z\in (0,\infty )\,, \end{aligned}$$
(3.40)

where \({\tilde{C}}\) is independent of \(\varepsilon \).

Consider first the case \(\gamma \le 0\) and recall that then \(p\ge 0\) and \(z^\gamma \le 1\le \frac{1}{\varepsilon }\) for \(z\ge 1\), so that \(\min (z^{\gamma },\frac{1}{\varepsilon })=z^{\gamma }\) in (3.39). Since \(f_\varepsilon ((0,1))=0\) and \(x^{\gamma +p}{\leqq }C z^{\gamma +p}\) for \(x\in \left[ \frac{2z}{3},z\right] \), the bound (3.39) then implies that for all \(z\ge 1\) we have

$$\begin{aligned} \frac{1}{z}\int _{\left[ \frac{2z}{3},z\right] } x^{\gamma +p}f_{\varepsilon }\left( \text {d}x\right) {\leqq }C z^{\frac{\gamma +2p-3}{2}}\,. \end{aligned}$$
(3.41)

Since \(\gamma +2 p <1\), Lemma 2.10 implies then that for all \(y\ge 1\),

$$\begin{aligned} \int _{[y,\infty ) } x^{\gamma +p}f_{\varepsilon }\left( \text {d}x\right) {\leqq }C y^{-\frac{1-\gamma -2p}{2}}\, , \end{aligned}$$
(3.42)

where the constant C does not depend on \(\varepsilon \). In particular, the measures \(x^{\gamma +p} f_{\varepsilon }\left( \text {d}x\right) \) then belong to a \(*\)-weak compact set, and there exists \(F\in {\mathcal {M}}_{+,b}\left( {\mathbb {R}}_{*}\right) \) such that

$$\begin{aligned} x^{\gamma +p} f_{\varepsilon _n}\left( \text {d}x\right) \rightharpoonup F\left( \text {d}x\right) \text { as }n\rightarrow \infty \text { in the }*-\text {weak topology} \end{aligned}$$
(3.43)

for some sequence \(( \varepsilon _{n}) _{n\in {\mathbb {N}}}\) with \(\lim _{n\rightarrow \infty }\varepsilon _{n}=0\). We denote \(f\left( \text {d}x\right) = x^{-\gamma -p} F\left( \text {d}x\right) \), and then \(f \in {\mathcal {M}}_{+}\left( {\mathbb {R}}_{*}\right) \). In addition, \(f((0,1))=0\) and it satisfies the tail estimate

$$\begin{aligned} \int _{[y,\infty ) } x^{\gamma +p}f\left( \text {d}x\right) = \int _{[y,\infty ) } F\left( \text {d}x\right) {\leqq }C y^{-\frac{1-\gamma -2p}{2}}\,, \quad y\ge 1\,. \end{aligned}$$
(3.44)

It remains to consider the case \(\gamma >0\). Then (3.39) implies that

$$\begin{aligned}&\frac{1}{z}\int _{\left[ \frac{2z}{3},z\right] }f_{\varepsilon }\left( \text {d}x\right) {\leqq }{\tilde{C}} z^{-\frac{(\gamma +3)}{2}} \,, \quad 1\le z\le \varepsilon ^{-\frac{1}{\gamma }}\,, \end{aligned}$$
(3.45)
$$\begin{aligned}&\frac{1}{z}\int _{\left[ \frac{2z}{3},z\right] }f_{\varepsilon }\left( \text {d}x\right) {\leqq }\frac{{\tilde{C}} \sqrt{\varepsilon }}{z^{\frac{3}{2}}}\,,\quad z> \varepsilon ^{-\frac{1}{\gamma }}\,. \end{aligned}$$
(3.46)

Using these bounds in item 3 of Lemma 2.10 then implies that, for all \(y\ge 1\),

$$\begin{aligned} \int _{[y,\infty ) } f_{\varepsilon }\left( \text {d}x\right) {\leqq }C \left( y^{-\frac{1+\gamma }{2}}+ \left( \frac{\varepsilon }{y}\right) ^{\frac{1}{2}}\right) \, \end{aligned}$$
(3.47)

where the constant C does not depend on \(\varepsilon \). Hence, in this case the family of measures \(\left\{ f_{\varepsilon }\right\} _{\varepsilon >0}\) is contained in a \(*\)-weak compact set in \({\mathcal {M}}_{+,b}\left( {\mathbb {R}}_{*}\right) \). Therefore, there exists \(f\in {\mathcal {M}}_{+,b}\left( {\mathbb {R}}_{*}\right) \) such that

$$\begin{aligned} f_{\varepsilon _{n}}\rightharpoonup f\text { as }n\rightarrow \infty \text { in the }*-\text {weak topology} \end{aligned}$$
(3.48)

for some sequence \(\left( \varepsilon _{n}\right) _{n\in {\mathbb {N}}}\) with \(\lim _{n\rightarrow \infty }\varepsilon _{n}=0.\)

To obtain better tail bounds for the limit measure, let us first observe that by (3.45), there is a constant C such that for all \(\varepsilon \)

$$\begin{aligned} \frac{1}{z}\int _{\left[ \frac{2z}{3},z\right] } x^{\gamma +p}f_{\varepsilon }\left( \text {d}x\right) {\leqq }C z^{\frac{\gamma +2p-1}{2}-1}\,, \quad 1\le z\le \varepsilon ^{-\frac{1}{\gamma }}\,. \end{aligned}$$

Therefore, applying item 2 of Lemma 2.10 with \(r=\frac{1}{2}\), and using the assumption \(\gamma +2 p <1\), we can adjust the constant C so that

$$\begin{aligned} \int _{[a,\varepsilon ^{-\frac{1}{\gamma }}] } x^{\gamma +p}f_{\varepsilon }\left( \text {d}x\right) {\leqq }C a^{-\frac{1-(\gamma +2p)}{2}}\,, \quad 1\le a\le \frac{1}{2}\varepsilon ^{-\frac{1}{\gamma }}\,. \end{aligned}$$
(3.49)

Let then \(y,R\ge 1\) be such that \(y<R\) but they are otherwise arbitrary. We choose a test function \(\varphi \in C_c({{\mathbb {R}}}_*)\), such that \(0{\leqq }\varphi {\leqq }1\), \(\varphi (x)=1\) for \(y{\leqq }x{\leqq }R\), and \(\varphi (x)=0\) for \(x{\geqq }2R\) and for \(x\le \frac{1}{2} y\). Then, if also \(\varepsilon \le (2R)^{-\gamma }\), we have

$$\begin{aligned} \int _{{{\mathbb {R}}}_*}\varphi (x) x^{\gamma +p}f_{\varepsilon }\left( \text {d}x\right) \le \int _{[\frac{1}{2}y,2R]} x^{\gamma +p}f_{\varepsilon }\left( \text {d}x\right) \le C 2^{\frac{1-(\gamma +2p)}{2}}y^{-\frac{1-(\gamma +2p)}{2}} \,, \end{aligned}$$

where for values \(y\le 2\) the estimate follows by using \(f_{\varepsilon }((0,1))=0\) and then \(a=1\) in (3.49). Applying this with \(\varepsilon =\varepsilon _n\) and then taking \(n\rightarrow \infty \) proves that

$$\begin{aligned} \int _{[y,R]} x^{\gamma +p}f\left( \text {d}x\right) \le \int _{{{\mathbb {R}}}_*}\varphi (x) x^{\gamma +p}f\left( \text {d}x\right) \le C 2^{\frac{1-(\gamma +2p)}{2}}y^{-\frac{1-(\gamma +2p)}{2}} \,. \end{aligned}$$

Here we may take \(R\rightarrow \infty \), and using monotone convergence theorem we can conclude that f satisfies a tail estimate identical to the earlier case with \(\gamma \le 0\), namely, also for \(\gamma >0\) we can find a constant C such that

$$\begin{aligned} \int _{[y,\infty ) } x^{\gamma +p}f\left( \text {d}x\right) {\leqq }C y^{-\frac{1-\gamma -2p}{2}}\,, \quad y\ge 1\,. \end{aligned}$$
(3.50)

It only remains to take the limit \(\varepsilon _{n}\rightarrow 0\) in (3.36). Suppose that \(\varphi \in C_{c}\left( {\mathbb {R}}_{*}\right) . \) Then, in the term containing \(\varphi (x+y)\) we have that the integrand is different from zero only in a bounded region. Using then that for any \(q\in {{\mathbb {R}}}\) we have \( \lim _{\varepsilon \rightarrow 0} (x y)^{q} K_{\varepsilon }\left( x,y\right) = (x y)^{q} K\left( x,y\right) \) uniformly on compact subsets of \({{\mathbb {R}}}_*^2\), as well as (3.43) and (3.48), we obtain that the limit of that term is

$$\begin{aligned} \int _{(0,\infty )^{2}}K\left( x,y\right) \varphi (x+y)f\left( \text {d}x\right) f\left( dy\right) . \end{aligned}$$

The terms containing \(\varphi \left( x\right) \) or \(\varphi \left( y\right) \) can be treated analogously due to the symmetry under the transformation \( x\leftrightarrow y.\) We then consider the limit of the term containing \( \varphi \left( x\right) \) where \(\varphi \in C_{c}\left( {\mathbb {R}}_{*}\right) .\) Our goal is to show that the contribution to the integral from the region \(\{y{\geqq }M\}\), where M is very large, can be made arbitrarily small as \(M\rightarrow \infty \), uniformly in \(\varepsilon .\) Suppose that M is chosen sufficiently large, so that the support of \(\varphi \) is contained in \(\left( 0,M\right) \). We then have the following identity:

$$\begin{aligned}&\int _{{\mathbb {R}}_{*}^{2}\cap \left\{ y{\geqq }M\right\} }K_{\varepsilon }\left( x,y\right) \varphi \left( x\right) f_{\varepsilon }\left( \text {d}x\right) f_{\varepsilon }\left( dy\right) \\&\quad =\int _{{\mathbb {R}}_{*}^{2}\cap \left\{ y{\geqq }M\right\} }\left[ \min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y}, x\right) +\varepsilon \right] \varphi \left( x\right) f_{\varepsilon }\left( \text {d}x\right) f_{\varepsilon }\left( dy\right) \, . \end{aligned}$$

Given that only values with \(x{\geqq }1\) may contribute, and x is in a bounded region contained in \(\left[ 1,M\right] \), we obtain, using (3.39), an estimate

$$\begin{aligned}&\int _{{\mathbb {R}}_{*}^{2}\cap \left\{ y{\geqq }M\right\} }K_{\varepsilon }\left( x,y\right) \varphi \left( x\right) f_{\varepsilon }\left( \text {d}x\right) f_{\varepsilon }\left( dy\right) \nonumber \\&\quad {\leqq }C \sup _{x\in {\mathrm{supp}} \varphi } \int _{\left\{ y{\geqq }M\right\} } \min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y}, x\right) f_{\varepsilon }\left( dy\right) \nonumber \\&\qquad +C\varepsilon \int _{\left\{ y{\geqq }M\right\} }f_{\varepsilon }\left( dy\right) \,. \end{aligned}$$
(3.51)

Using (3.40) we can bound the second term uniformly,

$$\begin{aligned} C\varepsilon \int _{\left\{ y{\geqq }M\right\} }f_{\varepsilon }\left( dy\right) {\leqq }C\sqrt{\varepsilon }\int _{\left\{ y{\geqq }M\right\} }\frac{dy}{y^{\frac{3}{2}}}{\leqq }\frac{C\sqrt{\varepsilon }}{M^{\frac{1}{2}}}\,, \end{aligned}$$

where the constant C is always independent of \(\varepsilon \), although it might need to be adjusted at each inequality. Therefore, the second term on the right hand side of (3.51) tends to zero as \(\varepsilon \rightarrow 0.\)

In order to estimate the first term we need to consider separately different ranges of the values of the exponents p and \(\gamma \). We claim that for \(1{\leqq }x {\leqq }C_0 {\leqq }M,\) \(y{\geqq }M\), where \({\mathrm{supp}} \varphi \subset [0,C_0]\), the following estimates hold for some constants C, \(C_*>0\) which do not depend on \(\varepsilon ,\) M:

  1.

    If \(\gamma {\leqq }0\) and \(p{\leqq }0\) we have

    $$\begin{aligned} \min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y}, x\right) {\leqq }C \, . \end{aligned}$$
    (3.52)
  2.

    If \(\gamma >0\) and \(p{\leqq }0\) we have

    $$\begin{aligned}&\min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y}, x\right) \nonumber \\&\quad {\leqq }C\left( y^{\gamma +\lambda }+y^{-\lambda }\right) \chi _{\left\{ y\le \left( \frac{1}{\varepsilon }\right) ^{\frac{1}{\gamma }}\right\} }+\frac{C}{\varepsilon }\left( y^{\lambda }+y^{-\gamma -\lambda }\right) \chi _{\left\{ y>\left( \frac{1}{\varepsilon }\right) ^{\frac{1}{\gamma }}\right\} }\, , \end{aligned}$$
    (3.53)

    where \(\chi _{U}\) is the characteristic function of the set U.

  3.

    If \(\gamma {\leqq }0\) and \(p>0\) we have

    $$\begin{aligned} \min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y}, x\right) {\leqq }C\left( y^{\gamma +\lambda }+y^{-\lambda }\right) \chi _{\left\{ y{\leqq }C_{*} \varepsilon ^{-\frac{\sigma }{p}}\right\} }\,. \end{aligned}$$
    (3.54)
  4.

    If \(\gamma >0\) and \(p>0\) we have

    $$\begin{aligned} \min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y}, x\right) {\leqq }C\left( y^{\gamma +\lambda }+y^{-\lambda }\right) \chi _{\left\{ y{\leqq }C_{*} \varepsilon ^{-\frac{\sigma }{p}}\right\} }\, . \end{aligned}$$
    (3.55)

In the case 1 we use the fact that, since \(p{\leqq }0,\) we have \(\sigma =0.\) Then (3.25) implies that \(\Phi _{\varepsilon }\left( s,x\right) {\leqq }C.\) On the other hand, using that \(\gamma {\leqq }0\) and \(x+y{\geqq }x{\geqq }1\) we have \(\min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} {\leqq }1\) whence (3.52) follows.

In the case 2, we use the fact that since \(p{\leqq }0\) we have \(\sigma =0.\) Moreover, since \(x{\leqq }M\) and \(y{\geqq }M\) we have that \(s=\frac{x}{x+y}{\leqq }\frac{1}{2}.\) Given that \(p=\max \left\{ \lambda ,-\left( \gamma +\lambda \right) \right\} {\leqq }0\) we have \(\lambda {\leqq }0,\) \(\left( \gamma +\lambda \right) {\geqq }0.\) Then (3.25) implies that \(\Phi _{\varepsilon }\left( s,x\right) {\leqq }C\left( s^{\gamma +\lambda }+s^{-\lambda }\right) \). Using then that \(y{\leqq }\left( x+y\right) {\leqq }2y\) as well as \(1{\leqq }x{\leqq }C_{0}{\leqq }M{\leqq }y\) we obtain \(\Phi _{\varepsilon }\left( s,x\right) {\leqq }C\left( \frac{x^{\gamma +\lambda }}{\left( y\right) ^{\gamma +\lambda }} +\frac{x^{-\lambda }}{\left( y\right) ^{-\lambda }}\right) {\leqq }C\left( y^{\lambda }+y^{-\gamma -\lambda }\right) .\) On the other hand, in order to estimate \(\min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \) we use the fact that since \(1{\leqq }x{\leqq }C_{0}{\leqq }M{\leqq }y\) we have \(\min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} {\leqq }C\min \left\{ y^{\gamma },\frac{1}{\varepsilon }\right\} .\) Considering separately the cases \(y{\leqq }\left( \frac{1}{\varepsilon }\right) ^{\frac{1}{\gamma }}\) and \(y>\left( \frac{1}{\varepsilon }\right) ^{\frac{1}{\gamma }}\) we obtain

$$\begin{aligned} \min \left\{ y^{\gamma },\frac{1}{\varepsilon }\right\} {\leqq }C\left( y^{\gamma }\chi _{\left\{ y{\leqq }\left( \frac{1}{\varepsilon }\right) ^{\frac{1}{\gamma }}\right\} }+\frac{1}{\varepsilon }\chi _{\left\{ y>\left( \frac{1}{\varepsilon }\right) ^{\frac{1}{\gamma } }\right\} }\right) . \end{aligned}$$

Multiplying the estimates obtained for \(\Phi _{\varepsilon }\left( s,x\right) \) and \(\min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \) we derive (3.53).

We now consider cases 3 and 4. In case 4 we recall that \(0<\sigma <\frac{p}{\gamma }\), while in case 3 \(\sigma >0\) is arbitrary. Using (3.22) and (3.24) we obtain that \(\Phi _{\varepsilon }\left( s,x\right) =0\) if \(s{\leqq }C \varepsilon ^{\frac{\sigma }{p}}.\) Using that \(\frac{x}{2y}{\leqq }s=\frac{x}{x+y}{\leqq }\frac{x}{y}\) and that \(1{\leqq }x{\leqq }C_{0}\) it follows that \(\Phi _{\varepsilon }\left( s,x\right) =0\) for \(y>C_{*} \left( \frac{1}{ \varepsilon }\right) ^{\frac{\sigma }{p}}\) for some \(C_{*}>0.\) On the other hand, (3.22) and (3.24) as well as the fact that \(s{\leqq }\frac{1}{2}\) imply also that \(\Phi _{\varepsilon }\left( s,x\right) {\leqq }\frac{C}{s^{p}}.\) We then have

$$\begin{aligned} \Phi _{\varepsilon }\left( s,x\right) {\leqq }\frac{C}{s^{p}}\chi _{\left\{ y{\leqq }C_{*} \left( \frac{1}{ \varepsilon }\right) ^{\frac{\sigma }{p}}\right\} }\,. \end{aligned}$$
(3.56)

We now remark that in the case 3, since \(\gamma {\leqq }0\) and \(y{\geqq }1\) we have \(\left( x+y\right) ^{\gamma }{\leqq }\frac{1}{\varepsilon }\) whence \(\min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} =\left( x+y\right) ^{\gamma }{\leqq }y^{\gamma }.\) Combining this inequality with (3.56) we then obtain

$$\begin{aligned} \Phi _{\varepsilon }\left( s,x\right) \min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} {\leqq }\frac{C}{s^{p}}y^{\gamma } \chi _{\left\{ y{\leqq }C_{*} \left( \frac{1}{ \varepsilon }\right) ^{\frac{\sigma }{p}}\right\} }\ . \end{aligned}$$
(3.57)

In the case 4 we can derive a similar estimate. To this end we use that, since \(\sigma <\frac{p}{\gamma }\) and \(\gamma >0,\) we have \(\Phi _{\varepsilon }\left( s,x\right) =0\) if \(y^{\gamma }{\geqq }\frac{1}{\varepsilon }\) and \(\varepsilon \) is sufficiently small, because then \(y{\geqq }\frac{1}{\varepsilon ^{\frac{1}{\gamma }}}{\geqq }C_{*} \left( \frac{1}{ \varepsilon }\right) ^{\frac{\sigma }{p}}.\) Then \(\Phi _{\varepsilon }\left( s,x\right) \min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} {\leqq }C\Phi _{\varepsilon }\left( s,x\right) \min \left\{ y^{\gamma },\frac{1}{\varepsilon }\right\} {\leqq }C\Phi _{\varepsilon }\left( s,x\right) y^{\gamma }{\leqq }\frac{C}{s^{p}}y^{\gamma }\chi _{\left\{ y{\leqq }C_{*} \left( \frac{1}{ \varepsilon }\right) ^{\frac{\sigma }{p}}\right\} }\,,\) which yields again the inequality (3.57) obtained in the case 3. Using then that \(p=\max \left\{ \lambda ,-\left( \gamma +\lambda \right) \right\} >0\) and \(y{\geqq }1\) we obtain \(y^{p}{\leqq }y^{\lambda }+y^{-\gamma -\lambda }\), whence both (3.54) and (3.55) follow.

We can now estimate the first term on the right hand side of (3.51) in all the cases. We first observe that in the cases 1, 3 and 4 (cf. (3.52), (3.54), (3.55)) the region where the integrand is non-zero is contained in the set \(V_{\gamma ,\varepsilon ,M}=\left\{ y\in {\mathbb {R}}_{*}: y {\geqq }M,\ y^{\gamma }{\leqq }\frac{1}{\varepsilon }\right\} .\) This follows immediately in the cases (3.52), (3.54), since in those cases \(\gamma {\leqq }0\) and then \(y^{\gamma }{\leqq }1{\leqq }\frac{1}{\varepsilon }.\) In the case of (3.55) we remark that due to the presence of the characteristic function on the right of (3.55) the region is restricted to the set \(\left\{ 1{\leqq }y{\leqq }C_{*} \varepsilon ^{-\frac{\sigma }{p}}\right\} .\) Since in this case \(\gamma >0\) it follows that this set is the same as \(\left\{ 1{\leqq }y^{\gamma }{\leqq }C_{*} \varepsilon ^{-\frac{\sigma \gamma }{p}}\right\} .\) Using then that \(0<\sigma <\frac{p}{\gamma }\) it follows that \(C_{*} \varepsilon ^{-\frac{\sigma \gamma }{p}}{\leqq }\frac{1}{\varepsilon }\) for \(\varepsilon \) sufficiently small. Then, the region of non-zero integrand is contained in \(V_{\gamma ,\varepsilon ,M}\) also in this case, as claimed. We now remark that for any \(y\in V_{\gamma ,\varepsilon ,M}\) we have \(\min \left\{ y^{\gamma },\frac{1}{\varepsilon }\right\} =y^{\gamma }.\) Then (3.39) implies that

$$\begin{aligned} \frac{1}{y}\int _{\left[ \frac{2y}{3},y\right] }f_{\varepsilon }\left( \text {d}x\right) {\leqq }\frac{{\tilde{C}}}{y^{\frac{3+\gamma }{2}}}\,, \quad \text {if } y \in V_{\gamma ,\varepsilon ,M}\ . \end{aligned}$$
(3.58)

We then obtain, using (3.52), (3.54), (3.55), the following estimate for the first term on the right hand side of (3.51) in the cases 1, 3 and 4

$$\begin{aligned} \sup _{x\in {\mathrm {supp}}\,\varphi }\int _{\left\{ y{\geqq }M\right\} } \min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y},x\right) f_{\varepsilon }\left( dy\right) \nonumber \\ {\leqq }C\int _{V_{\gamma ,\varepsilon ,M}}\left( y^{\gamma +\lambda }+y^{-\lambda }\right) f_{\varepsilon }\left( dy\right) \ . \end{aligned}$$
(3.59)

Notice that in cases 3 and 4, estimate (3.59) follows from (3.54), (3.55). In case 1 we use that \(p=\max \left\{ \lambda ,-\left( \gamma +\lambda \right) \right\} {\leqq }0.\) Then \(\left( \gamma +\lambda \right) {\geqq }0\) and \(-\lambda {\geqq }0\), and we can then use (3.52) to show that \(\min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y},x\right) {\leqq }C\left( y^{\gamma +\lambda }+y^{-\lambda }\right) \) in the region of integration. Therefore (3.59) follows also in this case.

For any power \(q\in {{\mathbb {R}}}\) there is a constant C such that \(x^q\le C y^q\) if \(y\in V_{\gamma ,\varepsilon ,M}\) and \(\frac{2}{3}y\le x\le y\). Considering first case 4, in which \(\gamma >0\), we next assume that \(\varepsilon \) is sufficiently small so that \(\varepsilon ^{-\frac{1}{\gamma }}\ge 2M\). Then, we can combine (3.58) and (3.59) with item 2 of Lemma 2.10 to obtain

$$\begin{aligned}&\sup _{x\in {\mathrm {supp}}\,\varphi }\int _{\left\{ y{\geqq }M\right\} } \min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y},x\right) f_{\varepsilon }\left( dy\right) \\&\quad {\leqq }C\int _{ V_{\gamma ,\varepsilon ,M} }\left( \frac{y^{\gamma +\lambda }+y^{-\lambda }}{y^{\frac{3+\gamma }{2}}}\right) dy {\leqq }C\int _{\left\{ y{\geqq }M\right\} }\left( y^{\frac{\gamma }{2}+\lambda -\frac{3}{2}}+y^{-\left( \lambda +\frac{\gamma }{2}\right) -\frac{3}{2}}\right) dy {\leqq }\frac{C}{M^{b}} \end{aligned}$$

with \(b>0\) since \(\left| \gamma + 2\lambda \right| <1\). In cases 1 and 3, we have \(\gamma \le 0\) and \(V_{\gamma ,\varepsilon ,M}=[M,\infty )\), so we may then apply item 3 of Lemma 2.10 and conclude that the same sequence of inequalities holds also in those cases. Thus these terms can be made arbitrarily small by taking first \(\limsup _{\varepsilon \rightarrow 0}\) and then \(M\rightarrow \infty \).
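As a sanity check of the exponent bookkeeping above (not part of the proof), the decay rates of the two tail integrals can be computed symbolically. The following Python sketch, using sympy, restates the exponents of the integrand on \(\left\{ y{\geqq }M\right\} \) and confirms that both rates are positive precisely when \(-1<\gamma +2\lambda <1\); all names and the computation itself are ours, for illustration only.

```python
# Illustration only: symbolic check that the two tail integrals above converge
# with a positive rate b whenever |gamma + 2*lambda| < 1.
import sympy as sp

g, l = sp.symbols('gamma lambda', real=True)

# Exponents of the two power-law terms of the integrand on {y >= M}.
e1 = g / 2 + l - sp.Rational(3, 2)       # from y^(gamma+lambda) / y^((3+gamma)/2)
e2 = -(l + g / 2) - sp.Rational(3, 2)    # from y^(-lambda) / y^((3+gamma)/2)

# For e < -1, int_M^infty y^e dy = M^(e+1)/(-(e+1)), i.e. decay rate b = -(e+1) > 0.
b1 = sp.together(-(e1 + 1))              # equals (1 - gamma - 2*lambda)/2
b2 = sp.together(-(e2 + 1))              # equals (1 + gamma + 2*lambda)/2
print(b1, b2)
# Both rates are positive exactly when -1 < gamma + 2*lambda < 1, so b = min(b1, b2) > 0.
```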

It only remains to examine in detail case 2, in which (3.53) holds. In this case we obtain

$$\begin{aligned}&\int _{\left\{ y{\geqq }M\right\} }\min \left\{ \left( x+y\right) ^{\gamma },\frac{1}{\varepsilon }\right\} \Phi _{\varepsilon }\left( \frac{x}{x+y}, x\right) f_{\varepsilon }\left( dy\right) \\&\quad {\leqq }C\int _{ V_{\gamma ,\varepsilon ,M} }\left( y^{\gamma +\lambda }+y^{-\lambda }\right) f_{\varepsilon }\left( dy\right) +\frac{C}{\varepsilon } \int _{\left\{ y{\geqq }\left( \frac{1}{\varepsilon }\right) ^{\frac{1}{\gamma } }\right\} }\left( y^{\lambda }+y^{-\gamma -\lambda }\right) f_{\varepsilon }\left( dy\right) \ . \end{aligned}$$

The first integral can be estimated by \(\frac{C}{M^{b}}\) with \(b>0\), arguing as before (using \(\left| \gamma +2\lambda \right| <1\)). It remains to estimate the last integral. We have \(p=\max \left\{ \lambda ,-\left( \gamma +\lambda \right) \right\} {\leqq }0,\) whence \(\lambda {\leqq }0\) and \(-\left( \gamma +\lambda \right) {\leqq }0.\) In this case we also have \(\gamma >0.\) Hence, the second integral can be estimated using the tail bound in (3.47) as follows:

$$\begin{aligned}&\frac{C}{\varepsilon } \int _{\left\{ y{\geqq }\left( \frac{1}{\varepsilon }\right) ^{\frac{1}{\gamma } }\right\} }\left( y^{\lambda }+y^{-\gamma -\lambda }\right) f_{\varepsilon }\left( dy\right) \le C \left( \varepsilon ^{-1-\frac{\lambda }{\gamma }}+ \varepsilon ^{-1+\frac{\gamma +\lambda }{\gamma }}\right) \int _{\left\{ y{\geqq }\left( \frac{1}{\varepsilon }\right) ^{\frac{1}{\gamma } }\right\} } f_{\varepsilon }\left( dy\right) \\&\quad {\leqq }C \left( \varepsilon ^{-\frac{\gamma +\lambda }{\gamma }}+ \varepsilon ^{\frac{\lambda }{\gamma }}\right) \varepsilon ^{\frac{1+\gamma }{2\gamma }} =C\left[ \varepsilon ^{\frac{1}{2\gamma }\left( 1-2\lambda -\gamma \right) }+\varepsilon ^{\frac{1}{2\gamma }\left( 1+2\lambda +\gamma \right) }\right] . \end{aligned}$$

Thus the integral converges to zero as \(\varepsilon \rightarrow 0\) since \(\vert \gamma +2\lambda \vert <1\).
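The exponent algebra in the last display can likewise be verified symbolically. The short sketch below (an illustration only, not part of the argument) checks that the two products of powers of \(\varepsilon \) simplify to the stated exponents; their positivity for \(\gamma >0\) and \(|\gamma +2\lambda |<1\) is then immediate.

```python
# Illustration only: verify the epsilon-exponent algebra of the preceding display.
import sympy as sp

g, l = sp.symbols('gamma lambda', real=True)
e_tail = (1 + g) / (2 * g)   # exponent coming from the tail bound (3.47)

# eps^(-(gamma+lambda)/gamma) * eps^(e_tail) should equal eps^((1-2*lambda-gamma)/(2*gamma)),
# and eps^(lambda/gamma) * eps^(e_tail) should equal eps^((1+2*lambda+gamma)/(2*gamma)).
check1 = sp.simplify(-(g + l) / g + e_tail - (1 - 2 * l - g) / (2 * g))
check2 = sp.simplify(l / g + e_tail - (1 + 2 * l + g) / (2 * g))
print(check1, check2)   # both differences are 0
```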

Therefore, we can take the limit \(\varepsilon _{n}\rightarrow 0\) as \(n\rightarrow \infty \) in (3.36) with an arbitrarily large M. The limit \(M\rightarrow \infty \) can then be taken using the assumed bounds on K and the tail estimates (3.44) or (3.50). This yields

$$\begin{aligned} \int _{( 0,\infty )^{2}}K\left( x,y\right) [\varphi (x+y)-\varphi (x)-\varphi (y)]f\left( \text {d}x\right) f\left( dy\right) +\int _{(0,\infty )}\varphi (x)\eta \left( \text {d}x\right) =0\, \end{aligned}$$
(3.60)

for any \(\varphi \in C_{c}\left( {\mathbb {R}}_{*}\right) .\) In particular, \(f\ne 0\) due to \(\eta \ne 0\). Taking the limit of (3.39) as \(\varepsilon \rightarrow 0\) we arrive at

$$\begin{aligned} \frac{1}{z}\int _{[2z/3,z]}f(\text {d}x){\leqq }\frac{\widetilde{C}}{z^{3/2+\gamma /2}}\ \ \text { for all }z\in (0,\infty ), \end{aligned}$$

which implies

$$\begin{aligned} \frac{1}{z}\int _{[2z/3,z]}x^{\mu } f(\text {d}x){\leqq }{{\widetilde{C}}} \frac{z^{\mu } }{z^{3/2+\gamma /2}}\ \ \text { for all }z\in (0,\infty ) \end{aligned}$$

for any \(\mu \in {{\mathbb {R}}}\). From Lemma 2.10 we obtain the boundedness of the moment of order \(\mu \) as follows:

$$\begin{aligned} \int _{(0,\infty )}x^{\mu } f(\text {d}x)< \infty , \end{aligned}$$
(3.61)

for any \(\mu \) satisfying \(\mu < \frac{\gamma + 1}{2}\). In particular, since \(|\gamma + 2\lambda | <1\), the moments of orders \(\mu = -\lambda \) and \(\mu = \gamma + \lambda \) are finite (indeed, \(-\lambda < \frac{\gamma +1}{2}\) is equivalent to \(\gamma +2\lambda >-1\), and \(\gamma +\lambda < \frac{\gamma +1}{2}\) is equivalent to \(\gamma +2\lambda <1\)), which proves (2.14).

4 Nonexistence Result: Continuous Model

The rationale behind the proof of Theorem 2.4 is the following. The solutions of (2.2) satisfy (2.6) for large values of x with \(J\left( x;f\right) \) as in (2.5) and \(J_{0}=\int x\eta \left( \text {d}x\right) \). A detailed analysis of the contributions of the different regions of integration, using also the assumption (2.14), which is the minimal assumption required to define a solution of (2.2), shows that \(J\left( x;f\right) \) can be approximated for large values of x as

$$\begin{aligned} \int \int _{\left\{ y+z>x,\ y{\leqq }x\right\} \cap \left\{ z{\leqq }\delta y\right\} }yK\left( y,z\right) f\left( y\right) f\left( z\right) dy\text {d}z\simeq J_{0}, \end{aligned}$$
(4.1)

where \(\delta >0\) can be chosen arbitrarily small. By assumption \(K\left( y,z\right) \approx y ^{\gamma +\lambda }z^{-\lambda }+ z ^{\gamma +\lambda }y^{-\lambda }.\) Suppose that \(\gamma +\lambda {\geqq }0>-\lambda ,\) since the other ranges of exponents can be studied with slight modifications of the arguments. Notice that the assumption \(\left| \gamma +2\lambda \right| {\geqq }1\) then implies, since \(\gamma +2\lambda {\geqq }0,\) that \(\gamma +2\lambda {\geqq }1.\) Then \(\gamma +\lambda {\geqq }1-\lambda \) and (2.14) implies that

$$\begin{aligned} \int _{1}^{\infty }f\left( z\right) z^{1-\lambda }\text {d}z<\infty \ . \end{aligned}$$
(4.2)

Moreover, we can approximate (4.1), using the form of the region of integration, as

$$\begin{aligned} x ^{\gamma +\lambda +1}\int \int _{\left\{ y+z>x,\ y{\leqq }x\right\} \cap \left\{ z{\leqq }\delta y\right\} }f\left( y\right) f\left( z\right) z^{-\lambda }dy\text {d}z\simeq J_{0}\ . \end{aligned}$$
(4.3)

We define \(F\left( x\right) =\int _{x}^{\infty }f\left( y\right) dy\). This integral is well defined due to (2.14) and the fact that \(\gamma +\lambda {\geqq }0.\) We can then approximate (4.3) for large values of x, as

$$\begin{aligned} \int _{1}^{\delta x}\left[ F\left( x-z\right) -F\left( x\right) \right] f\left( z\right) z^{-\lambda }\text {d}z\simeq \frac{J_{0}}{ x ^{\gamma +\lambda +1}}\ . \end{aligned}$$
(4.4)

Equation (4.4) can be thought of as a nonlocal differential equation. Due to (4.2) we can formally approximate (4.4) for large values of x as

$$\begin{aligned} -\frac{dF}{\text {d}x}\simeq \frac{J_{0}}{\int _{1}^{\infty }f\left( z\right) z^{1-\lambda }\text {d}z}\ \frac{1}{ x ^{\gamma +\lambda +1}}\ . \end{aligned}$$
(4.5)

Therefore, using the definition of F we formally obtain that \(f\left( x\right) \simeq \frac{C}{x ^{\gamma +\lambda +1}}\) as \(x\rightarrow \infty .\) However, this implies that \(\int _{1}^{\infty } x^{\gamma +\lambda }f\left( x\right) \text {d}x=\infty ,\) which contradicts (2.14). This argument is formal: instead of approximating \(K\left( y,z\right) \) by means of power laws we must use the inequalities (2.11), (2.12). The solutions of (4.4) can be estimated in terms of the solutions of (4.5) by means of maximum principle arguments, which are described in Lemmas 4.1 and 4.2 below.
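Before turning to the rigorous argument, the scaling heuristic above can be illustrated numerically. The following Python sketch (not part of the proof) takes sample exponents with \(\gamma +2\lambda {\geqq }1\), inserts the decay \(f(x)=x^{-(\gamma +\lambda +1)}\) suggested by (4.5), and shows that the moment of order \(\gamma +\lambda \) grows like \(\log X\), so it cannot be finite as required by (2.14); all numerical choices are ours.

```python
# Illustration only: with f(x) ~ x^(-(gamma+lambda+1)) as suggested by (4.5),
# the moment of order gamma+lambda appearing in (2.14) diverges logarithmically.
import numpy as np

gamma, lam = 0.8, 0.3                # sample exponents with gamma + 2*lam = 1.4 >= 1
a = gamma + lam                      # f(x) = x^(-(a+1)), so x^a * f(x) = 1/x

for X in (1e2, 1e4, 1e6, 1e8):
    x = np.logspace(0.0, np.log10(X), 200001)
    y = x ** a * x ** (-(a + 1.0))   # integrand of the (gamma+lambda)-moment
    moment = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))   # trapezoid rule
    print(f"X = {X:.0e}: moment up to X = {moment:.3f},  log X = {np.log(X):.3f}")
# The integral tracks log X, i.e. it diverges as X -> infinity.
```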

Lemma 4.1

Let a and b be constants satisfying \(a{\geqq }0\) and \((a-b){\geqq }1\). Let \(F: {{\mathbb {R}}}_* \rightarrow {{\mathbb {R}}}\) be a right-continuous non-increasing function satisfying \(F(R){\geqq }0\), for all \(R> 0\). Assume that \(f \in {\mathcal {M}}_+({{\mathbb {R}}}_*)\) satisfies \(f([1,\infty ))> 0\) and

$$\begin{aligned} \int _{[1,\infty )} x^a f(\text {d}x) < \infty \,. \end{aligned}$$
(4.6)

There exists \(\delta _0\in (0,1)\) which depends only on a such that the following result holds:

If \(0<\delta \le \delta _0\), \(R_0> 1/\delta \), and \(C>0\) are such that

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ F\left( R-y\right) -F\left( R\right) \right] y^{b}f\left( dy\right) {\leqq }-\frac{C}{R^{a +1}}\,,\quad \text {for }R{\geqq }R_{0}, \end{aligned}$$
(4.7)

then there are \(R_0' {\geqq }R_0\) and \(B>0\) which depend only on a, f, \(\delta \), \(R_0\), and C, such that if \(a>0\) then

$$\begin{aligned} F\left( R\right) {\geqq }\frac{B}{R^{a}}\, , \quad \text{ for } R{\geqq }R_{0}'. \end{aligned}$$
(4.8)

Else, if \(a=0\), then

$$\begin{aligned} F\left( R\right) {\geqq }B \log (R)\, , \quad \text {for } R{\geqq }R_{0}'. \end{aligned}$$
(4.9)

Proof

Since F is non-increasing and right-continuous, we have

$$\begin{aligned} F\left( R^{-}\right) =\lim _{\rho \rightarrow R^{-}}F\left( \rho \right) {\geqq }\lim _{\rho \rightarrow R^{+}}F\left( \rho \right) ={F\left( R\right) }\,. \end{aligned}$$
(4.10)

For the proof, let us first point out that we can increase \(R_0\) while keeping \(\delta \) and C fixed if needed.

We first consider the case of \(a>0\), and prove that in this case the choice \(\delta _0 := 1-(3/4)^{1/(1+a)}\in (0,1)\) will suffice. From now on, we assume that \(\delta \) is fixed to a value such that \(0<\delta \le \delta _0\).

We use a comparison argument. To this end, we construct an auxiliary function

$$\begin{aligned} F_{*}\left( R\right) =\frac{2B}{R^{a}} \end{aligned}$$

with \(B>0\) to be determined. We choose B in order to have

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ F_{*}\left( R-y\right) -F_{*}\left( R\right) \right] y^{b}f\left( dy\right) {\geqq }-\frac{C}{ R^{a+1}}\ \ \text {for }R{\geqq }R_{0}\ . \end{aligned}$$
(4.11)

Therefore, the goal is to impose

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ \frac{2B}{\left( R-y\right) ^{a}}-\frac{2B}{R^{a}}\right] y^{b}f\left( dy\right) {\geqq }-\frac{C}{R^{a+1}}\ \ \text {for } R{\geqq }R_{0} \,. \end{aligned}$$
(4.12)

Since \((1-\delta )^{a+1}\ge \frac{1}{2}\), we have for any \(R\ge R_0>1/\delta \) and \(y\in \left[ 1,\delta R\right] \)

$$\begin{aligned} \frac{1}{\left( R-y\right) ^{a}}-\frac{1}{R^{a}} {\leqq }\frac{2 a y}{R^{a+1}} \,. \end{aligned}$$

Thus,

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ \frac{2B}{\left( R-y\right) ^{a}}-\frac{2B}{R^{a}}\right] y^{b}f\left( dy\right) {\geqq }-\frac{4aB}{R^{a+1}} \int _{\left[ 1,\delta R\right] }y^{1+b}f\left( dy\right) . \end{aligned}$$

On the other hand,

$$\begin{aligned} \int _{\left[ 1,\delta R\right] }y^{1+b}f\left( dy\right) {\leqq }D, \end{aligned}$$

where \(D=\int _{\left[ 1,\infty \right) }y^{1+b}f\left( dy\right) \) is a well-defined, strictly positive constant due to \(b+1\le a\), (4.6) and \(f\ne 0.\) Therefore, choosing

$$\begin{aligned} B=\frac{C }{4D a }, \end{aligned}$$

we obtain that (4.12) holds.
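The elementary inequality behind this choice of B can be spot-checked numerically. The sketch below (illustration only; the values of a, \(\delta \), R and y are sampled at random under the hypotheses above) verifies \(\frac{1}{\left( R-y\right) ^{a}}-\frac{1}{R^{a}}{\leqq }\frac{2 a y}{R^{a+1}}\) for \(0<\delta {\leqq }\delta _0\), \(R>1/\delta \) and \(y\in \left[ 1,\delta R\right] \).

```python
# Illustration only: random spot-check of the Taylor-type bound used above,
#   1/(R-y)^a - 1/R^a <= 2*a*y / R^(a+1)
# for a > 0, 0 < delta <= delta_0 = 1-(3/4)^(1/(1+a)), R > 1/delta, 1 <= y <= delta*R.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100000):
    a = rng.uniform(0.01, 5.0)
    delta0 = 1.0 - 0.75 ** (1.0 / (1.0 + a))
    delta = rng.uniform(1e-6, delta0)
    R = rng.uniform(1.0 / delta, 10.0 / delta)
    y = rng.uniform(1.0, delta * R)
    lhs = (R - y) ** (-a) - R ** (-a)
    rhs = 2.0 * a * y / R ** (a + 1.0)
    assert lhs <= rhs
print("inequality verified on all sampled (a, delta, R, y)")
```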

For the next step, we require that \(f([1,\delta R_0])>0\). If needed, this can be accomplished by increasing \(R_0\), since \(f([1,\delta R_0])\) approaches \(f([1,\infty ))>0\) as \(R_0\rightarrow \infty \) by the dominated convergence theorem.

To prove (4.8), we argue by contradiction. Suppose that there exists \(R_{1}{\geqq }R_{0}\) such that \(F\left( R_{1}\right) <\frac{B}{\left( R_{1}\right) ^{a }}.\) Then, using that \(F\left( R\right) \) is decreasing, we obtain that

$$\begin{aligned} F\left( R\right) <\frac{B}{\left( R_{1}\right) ^{a }}\,, \quad \text { for } R \in \left[ R_{1},\frac{R_{1}}{1-\delta }\right] \,. \end{aligned}$$
(4.13)

We define

$$\begin{aligned} G\left( R\right) =F_{*}\left( R\right) -\frac{B}{2}\frac{1}{\left( R_{1}\right) ^{a }}-F\left( R\right) . \end{aligned}$$

Combining (4.7) and (4.11) we obtain that

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ G\left( R-y\right) -G\left( R\right) \right] y^{b }f\left( dy\right) {\geqq }0\ \ \text {for all }R{\geqq }R_{0} \ . \end{aligned}$$
(4.14)

Using (4.13) we obtain

$$\begin{aligned} G\left( R\right)&=F_{*}\left( R\right) -\frac{B}{2}\frac{1}{\left( R_{1}\right) ^{a }}-F\left( R\right)>\frac{2B}{R^{a }}-\frac{B}{2}\frac{1}{\left( R_{1}\right) ^{a }}-\frac{B}{\left( R_{1}\right) ^{a }} \nonumber \\&{\geqq }B\left( \frac{2\left( 1-\delta \right) ^{a }}{\left( R_{1}\right) ^{a }}-\frac{3}{2}\frac{1}{\left( R_{1}\right) ^{a}}\right) >0, \text { for } R\in \left[ R_{1},\frac{R_{1}}{1-\delta }\right] , \end{aligned}$$
(4.15)

since \(\delta >0\) is sufficiently small so that \(\left( 1-\delta \right) ^{a }>\left( 1-\delta \right) ^{a+1}\ge \frac{3}{4}\). Notice that since \( F_{*}\left( R\right) \) and \(\frac{B}{2}\frac{1}{\left( R_{1}\right) ^{a }}\) are continuous functions we have that G is right continuous and (4.10) implies

$$\begin{aligned} G\left( R^{-}\right) =\lim _{\rho \rightarrow R^{-}}G\left( \rho \right) {\leqq }\lim _{\rho \rightarrow R^{+}}G\left( \rho \right) =G\left( R\right) \,. \end{aligned}$$
(4.16)

We define \(R_{2}\) as

$$\begin{aligned} R_{2}=\inf \left\{ \rho {\geqq }R_{1}:G\left( \rho \right) {\leqq }0\right\} \,. \end{aligned}$$

Suppose first that \(R_{2}<\infty .\) By definition \(G(R_2^+) {\leqq }0\). Since G is right-continuous, then \(G(R_2) {\leqq }0\). From (4.16) and the definition of \(R_2\), \(G(R_2) {\geqq }G(R_2^-) {\geqq }0\). Therefore, necessarily \(G(R_2)=0\). From (4.15) we also have that \(R_2>\frac{R_1}{1-\delta }\) and

$$\begin{aligned} G(R)>0 \text { for } R \in [R_1,R_2). \end{aligned}$$
(4.17)

For \(y \in [1,\delta R_2]\), we have that \((R_2-y) \in [R_1,R_2)\), therefore \(G(R_2-y) >0\). Since \(f([1,\delta R_2]) \ge f([1,\delta R_0])>0\), this implies

$$\begin{aligned} -\int _{\left[ 1,\delta R_{2}\right] }\left[ G\left( R_{2}-y\right) -G\left( R_{2}\right) \right] y^{b}f\left( dy\right) <0, \end{aligned}$$

which contradicts (4.14). Then \(R_{2}=\infty \) whence \(G\left( R\right) {\geqq }0\) for all \(R{\geqq }R_{1}\). Therefore,

$$\begin{aligned} F\left( R\right) {\leqq }F_{*}\left( R\right) -\frac{B}{2}\frac{1}{\left( R_{1}\right) ^{a }}\quad \text { for }R{\geqq }R_{1}\,. \end{aligned}$$

However, this inequality implies that \(F\left( R\right) <0\) for R large enough, but this contradicts the definition of F. Therefore,

$$\begin{aligned} F\left( R\right) {\geqq }\frac{B}{R^{a}}\ \ \text {if } R{\geqq }R_{0}, \end{aligned}$$

which concludes the proof for \(a>0\). Note that \(R_0\) in this formula might have been increased compared to the value in the original assumptions, hence it is denoted by \(R'_0\) in the conclusions of the Lemma.

We now consider the case \(a=0\). In this case, we prove that the choice \(\delta _0 := \frac{1}{2}\) will suffice. We assume that \(\delta \) is fixed to a value such that \(0<\delta \le \delta _0\), and that \(R_0\) is sufficiently large so that \(R_0>\frac{1}{1-\delta }\) and \(f([1,\delta R_0])>0\), as before.

We construct an auxiliary function

$$\begin{aligned} F_{*}\left( R\right) =- B \log (R) \end{aligned}$$

with \(B>0\) to be determined by the requirement that

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ F_{*}\left( R-y\right) -F_{*}\left( R\right) \right] y^{b}f\left( dy\right) {\geqq }-\frac{C}{R}\ \ \text {for }R{\geqq }R_{0} \ . \end{aligned}$$
(4.18)

Therefore, we need to impose

$$\begin{aligned} \int _{\left[ 1,\delta R\right] }\left[ B\log (R-y)-B\log (R)\right] y^{b}f\left( dy\right) {\geqq }-\frac{C}{R}\ \ \text {for } R{\geqq }R_{0} \ . \end{aligned}$$
(4.19)

Since \(0<\delta \le \frac{1}{2}\), we have for all \(R>1/\delta \) and \(y\in \left[ 1,\delta R\right] \) an estimate

$$\begin{aligned} \log (R-y)-\log (R) {\geqq }- \frac{ 2y}{R}\,. \end{aligned}$$

Thus,

$$\begin{aligned} \int _{\left[ 1,\delta R\right] }\left[ B\log (R-y)-B\log (R)\right] y^{b}f\left( dy\right) {\geqq }-\frac{2B}{R}\int _{\left[ 1,\delta R\right] }y^{1+b}f\left( dy\right) . \end{aligned}$$

Here,

$$\begin{aligned} \int _{\left[ 1,\delta R\right] }y^{1+b}f\left( dy\right) {\leqq }D, \end{aligned}$$

where \(D=\int _{\left[ 1,\infty \right) }y^{1+b}f\left( dy\right) \) is a well-defined strictly positive constant due to \(b+1\le a\), (4.6) and \(f\ne 0.\) Therefore, choosing

$$\begin{aligned} B=\frac{C }{2D }, \end{aligned}$$

we obtain that (4.19) holds.

To prove (4.9), we again argue by contradiction. Suppose that there exists \(R_{1}{\geqq }R_{0}\) such that \(F\left( R_{1}\right) <B\log ( R_{1}).\) Then, using that \(F\left( R\right) \) is decreasing, we obtain that

$$\begin{aligned} F\left( R\right) <B\log (R_{1}) \text { for } R \in \left[ R_{1},\frac{R_{1}}{1-\delta }\right] . \end{aligned}$$
(4.20)

We define

$$\begin{aligned} G\left( R\right) =F_{*}\left( R\right) +3B\log ( R_{1})-F\left( R\right) . \end{aligned}$$

Combining (4.7) and (4.18) we obtain that

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ G\left( R-y\right) -G\left( R\right) \right] y^{b }f\left( dy\right) {\geqq }0\ \ \text {for all }R{\geqq }R_{0}\ . \end{aligned}$$
(4.21)

Using (4.20) we obtain

$$\begin{aligned} G\left( R\right)&=F_{*}\left( R\right) +3 B\log ( R_{1}) -F\left( R\right)> - B\log (R)+3B\log (R_1)-B\log (R_1) \nonumber \\&{\geqq }B\left( -\log (\frac{R_1}{1-\delta })+2\log (R_1) \right) = B\log (R_1(1-\delta ))>0, \nonumber \\&\text { for } R\in \left[ R_{1},\frac{R_{1}}{1-\delta }\right] \,, \end{aligned}$$
(4.22)

where in the last step we used the property that \(R_1\ge R_0>\frac{1}{1-\delta }\). Notice that since \(F_{*}\left( R\right) \) and \(3B\log ( R_{1})\) are continuous functions we have that G is right continuous and (4.10) implies

$$\begin{aligned} G\left( R^{-}\right) =\lim _{\rho \rightarrow R^{-}}G\left( \rho \right) {\leqq }\lim _{\rho \rightarrow R^{+}}G\left( \rho \right) =G\left( R\right) \ . \end{aligned}$$
(4.23)

We define \(R_{2}\) as

$$\begin{aligned} R_{2}=\inf \left\{ \rho {\geqq }R_{1}:G\left( \rho \right) {\leqq }0\right\} \ . \end{aligned}$$

Using the same reasoning as in the case \(a>0\) we obtain that \(R_{2}=\infty \), and thus \(G\left( R\right) > 0\) for all \(R{\geqq }R_{1}.\) Therefore,

$$\begin{aligned} F\left( R\right) {\leqq }F_{*}\left( R\right) + 3B\log (R_1) \quad \text { for }R{\geqq }R_{1} \ . \end{aligned}$$

However, this inequality implies that \(F\left( R\right) <0\) for R large enough, but this contradicts the definition of F. Therefore,

$$\begin{aligned} F\left( R\right) {\geqq }B\log (R)\,,\ \ \text {if \ } R{\geqq }R_{0} \ , \end{aligned}$$

which concludes the proof. \(\square \)

Lemma 4.2

Let a and b be constants satisfying \(a<0\) and \((a-b){\geqq }1\). Assume that \(F: {{\mathbb {R}}}_* \rightarrow {{\mathbb {R}}}\) is a right-continuous non-decreasing function and \(f \in {\mathcal {M}}_+({{\mathbb {R}}}_*)\) satisfies \(f([1,\infty ))> 0\) and

$$\begin{aligned} \int _{[1,\infty )} x^a f(\text {d}x) < \infty \ . \end{aligned}$$
(4.24)

There exists \(\delta _0\in (0,1)\) which depends only on a such that the following result holds:

If \(0<\delta \le \delta _0\), \(R_0> 1/\delta \), and \(C>0\) are such that \(F(R_0)>0\) and

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ F\left( R-y\right) -F\left( R\right) \right] y^{b}f\left( dy\right) {\geqq }\frac{C}{R^{a +1}}\ \ \text {for }R{\geqq }R_{0}, \end{aligned}$$
(4.25)

then there are \(R_0'{\geqq }R_0\) and \(B>0\) which only depend on a, f, \(\delta \), \(R_0\), and C, such that

$$\begin{aligned} F\left( R\right) {\geqq }\frac{B}{R^{a}}\, , \quad \text {for } R{\geqq }R_{0}'. \end{aligned}$$
(4.26)

Proof

Since F is non-decreasing and right-continuous, we have

$$\begin{aligned} F\left( R^{-}\right) =\lim _{\rho \rightarrow R^{-}}F\left( \rho \right) {\leqq }\lim _{\rho \rightarrow R^{+}}F\left( \rho \right) =F\left( R^{+}\right) = F(R)\, . \end{aligned}$$
(4.27)

We assume \(\delta \) is fixed and satisfies \(0<\delta \le \delta _0\). We will show that \(\delta _0=\frac{1}{2}\in (0,1)\) works in this case. Note that if \(R'_0\ge R_0\), then also \(F(R'_0)\ge F(R_0)>0\) since F is increasing. Therefore, as in the previous proof, we can increase \(R_0\) while keeping \(\delta \) and C fixed if needed. In particular, we may assume that \(f([1,\delta R_0])>0\), as before.

We again use a comparison argument. To this end, we construct an auxiliary function

$$\begin{aligned} F_{*}\left( R\right) =\frac{A }{R^{a}}, \end{aligned}$$

where \(A > 0\) is a constant to be determined. We choose A in order to have

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ F_{*}\left( R-y\right) -F_{*}\left( R\right) \right] y^{b}f\left( dy\right) {\leqq }\frac{C}{R^{a+1}}\ \ \text {for }R{\geqq }R_{0} \ . \end{aligned}$$
(4.28)

Therefore, we need to impose

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ \frac{A }{ (R-y) ^{a}}-\frac{A }{R^{a}}\right] y^{b}f\left( dy\right) {\leqq }\frac{C}{R^{a+1}}\ \ \text {for } R{\geqq }R_{0} \ . \end{aligned}$$
(4.29)

Let us next show that a constant A for the above inequality may be found for the values of \(\delta \) considered here. Since \(0<\delta \le \delta _0=\frac{1}{2}\), we have \((1-\delta )^{|a|-1}\le 2\) for \(a\in [-1,0)\). If \(a<-1\), the function \(x\mapsto x^{|a|-1}\) is increasing, and thus Taylor’s theorem implies

$$\begin{aligned} {\left( \frac{1}{R^{a}}-\frac{1}{\left( R-y\right) ^{a}}\right) {\leqq }2 \vert a \vert y \frac{1}{R^{a+1}}} \end{aligned}$$

whenever \(y\in \left[ 1,\delta R\right] \). Thus,

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ \frac{A }{\left( R-y\right) ^{a}}-\frac{A }{R^a}\right] y^{b}f\left( dy\right) {\leqq }\frac{2 A |a|}{R^{a+1}} \int _{\left[ 1,\delta R\right] }y^{1+b}f\left( dy\right) \end{aligned}$$

for any \(A>0\). For \(R>1/\delta \) we obtain that

$$\begin{aligned} \int _{\left[ 1,\delta R\right] }y^{1+b}f\left( dy\right) {\leqq }D, \end{aligned}$$

where \(D=\int _{\left[ 1,\infty \right) }y^{1+b}f\left( dy\right) \) is a well-defined positive constant due to \(b+1\le a\), (4.24) and \(f\ne 0.\) Therefore, choosing

$$\begin{aligned} 0< A {\leqq }\frac{ C }{2 D |a| } \end{aligned}$$
(4.30)

we obtain that (4.29) holds.
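As for Lemma 4.1, the Taylor-type bound used in this choice of A can be spot-checked numerically; the sketch below (illustration only, with randomly sampled parameters satisfying the hypotheses) verifies \(\frac{1}{R^{a}}-\frac{1}{\left( R-y\right) ^{a}}{\leqq }\frac{2\vert a\vert y}{R^{a+1}}\) for \(a<0\), \(0<\delta {\leqq }\frac{1}{2}\), \(R>1/\delta \) and \(y\in \left[ 1,\delta R\right] \).

```python
# Illustration only: random spot-check of the bound used above for a < 0,
#   1/R^a - 1/(R-y)^a <= 2*|a|*y / R^(a+1)
# with 0 < delta <= 1/2, R > 1/delta and 1 <= y <= delta*R.
import numpy as np

rng = np.random.default_rng(2)
for _ in range(100000):
    a = -rng.uniform(0.01, 5.0)
    delta = rng.uniform(1e-6, 0.5)
    R = rng.uniform(1.0 / delta, 10.0 / delta)
    y = rng.uniform(1.0, delta * R)
    lhs = R ** (-a) - (R - y) ** (-a)
    rhs = 2.0 * abs(a) * y / R ** (a + 1.0)
    assert lhs <= rhs
print("inequality verified on all sampled (a, delta, R, y)")
```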

Next we will prove (4.26). We define

$$\begin{aligned} G\left( R\right) =F(R) - F_{*}\left( R\right) . \end{aligned}$$

Combining (4.25) and (4.28) we obtain that

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ G\left( R-y\right) -G\left( R\right) \right] y^{b }f\left( dy\right) {\geqq }0\ \ \text {for all }R{\geqq }R_{0}. \end{aligned}$$
(4.31)

Since F is increasing and \(F(R_0)>0\), then \(F(R){\geqq }F(R_0)>0\) for all \(R{\geqq }R_0\). Then \(G(R){\geqq }F(R_0)-\frac{A}{R^{a}}\) for any \(R{\geqq }R_0\). Therefore, choosing A sufficiently small and satisfying also (4.30), we obtain

$$\begin{aligned} G(R)>0 \quad \text {for } R \in \left[ R_0,\frac{R_0}{1-\delta }\right] . \end{aligned}$$
(4.32)

Since \(F_{*}\) is continuous, we have that G is right continuous and (4.27) implies

$$\begin{aligned} G\left( R^{-}\right) =\lim _{\rho \rightarrow R^{-}}G\left( \rho \right) {\leqq }\lim _{\rho \rightarrow R^{+}}G\left( \rho \right) ={G\left( R\right) } \ . \end{aligned}$$
(4.33)

We define \(R_{2}\) as

$$\begin{aligned} R_{2}=\inf \left\{ \rho {\geqq }R_0: G\left( \rho \right) {\leqq }0\right\} \ . \end{aligned}$$

Suppose first that \(R_{2}<\infty .\) By definition, \(G(R_2^+) {\leqq }0\). Since G is right-continuous, then \(G(R_2) {\leqq }0\). From (4.33), \(G(R_2) {\geqq }G(R_2^-) {\geqq }0\). Therefore, necessarily \(G(R_2)=0\). From (4.32) we also have that \(R_2>\frac{R_0}{1-\delta }\) and

$$\begin{aligned} G(R)>0 \text { for } R \in [R_0,R_2). \end{aligned}$$
(4.34)

For \(y \in [1,\delta R_2]\), we have that \((R_2-y) \in [R_0,R_2)\), therefore \(G(R_2-y) >0\). Since \(f([1,\delta R_2]){\geqq }f([1,\delta R_0])>0\), this implies

$$\begin{aligned} -\int _{\left[ 1,\delta R_{2}\right] }\left[ G\left( R_{2}-y\right) -G\left( R_{2}\right) \right] y^{b }f\left( dy\right) <0 \end{aligned}$$

which contradicts (4.31). Then \(R_{2}=\infty \) whence \(G\left( R\right) > 0\) for all \(R{\geqq }R_{0}.\) Therefore,

$$\begin{aligned} F\left( R\right) {\geqq }F_*(R) = \frac{A}{R^{a}}\ \ \text {for } R{\geqq }R_{0}, \end{aligned}$$

which proves (4.26) with \(B=A\). \(\square \)

Proof of Theorem 2.4 (non-existence)

We argue by contradiction. Suppose that \(f\in {\mathcal {M}}_{+}\left( {\mathbb {R}}_*\right) \) satisfies \(f\left( \left( 0,1\right) \right) =0\) as well as (2.14), and is a stationary injection solution of (1.3) in the sense of Definition 2.1. Then, from Lemma 2.8, and using also that \(f\left( \left( 0,1\right) \right) =0,\) we obtain

$$\begin{aligned} -\int _{\left[ 1,R\right] }f\left( \text {d}x\right) \int _ {\left( R-x ,\infty \right) \cap \left[ 1,\infty \right) } {K\left( x,y\right) x f\left( dy\right) } +\int _{ [1,R]}x\eta \left( \text {d}x\right) =0,\ R{\geqq }1 \ . \end{aligned}$$
(4.35)

Then we introduce a function \(J:{\mathbb {R}}_{*}\rightarrow {\mathbb {R}}_{+}\) defined by

$$\begin{aligned} J\left( R\right) =\iint _{\Sigma _{R}}K\left( x,y\right) xf\left( \text {d}x\right) f\left( dy\right) , \end{aligned}$$
(4.36)

where

$$\begin{aligned} \Sigma _{R}=\left\{ x{\geqq }1,\ y{\geqq }1:x+y>R,\ x{\leqq }R\right\} \ . \end{aligned}$$

We notice that the function J is constant if \(R{\geqq }L_{\eta },\) that is

$$\begin{aligned} J\left( R\right) =J\left( L_{\eta }\right) \text { for }R{\geqq }L_{\eta }. \end{aligned}$$
(4.37)

Suppose that \(\eta \) is different from zero. Then (4.35) implies that \(J\left( L_{\eta }\right) =\int _{{\mathbb {R}}_{+}}x\eta \left( \text {d}x\right) >0.\) If \((\gamma + 2\lambda ) {\geqq }1\), we define \(a:= \gamma +\lambda \) and \(b:= -\lambda \), else, if \((\gamma + 2\lambda ) {\leqq }-1\), we define \(a:= -\lambda \) and \(b:= \gamma +\lambda \). The assumption \(|\gamma + 2\lambda |{\geqq }1\) becomes \(a-b{\geqq }1\) in both cases. By (2.14) we have

$$\begin{aligned} \int _{\left[ 1,\infty \right) }x^{a }f\left( \text {d}x\right) <\infty \ . \end{aligned}$$
(4.38)

We now prove that the main contribution to the integral J(R) in (4.36) as \( R\rightarrow \infty \) is due to the portion of the region of integration where x is close to R and y is order one. To this end, let us consider parameters \(\delta \) which satisfy \(0<\delta <\delta _0\) for the value \(\delta _0=\delta _0(a)\) given by Lemma 4.1 if \(a\ge 0\), or by Lemma 4.2 if \(a<0\). We then define the domains

$$\begin{aligned} D_{\delta }^{\left( 1\right) }&=\left\{ x{\geqq }1,\ y{\geqq }1:y{\leqq }\delta x\right\} \ , \\ D_{\delta }^{\left( 2\right) }&=\left\{ x{\geqq }1,\ y{\geqq }1:y>\delta x\right\} \ . \end{aligned}$$

We then write

$$\begin{aligned} J\left( R\right)&=J_{1}\left( R\right) +J_{2}\left( R\right) \text { with} \\ J_{k}\left( R\right)&=\iint _{\Sigma _{R}\cap D_{\delta }^{\left( k\right) }}\left[ K\left( x,y\right) x\right] f\left( \text {d}x\right) f\left( dy\right) ,\ \ k=1,2 \ . \end{aligned}$$

We estimate first \(J_{2}\left( R\right) \) for large values of R. Using (2.12) we obtain

$$\begin{aligned} 0{\leqq }J_{2}\left( R\right) {\leqq }c_{2}\iint _{\Sigma _{R}\cap D_{\delta }^{\left( 2\right) }}\left( x^{a }y^{b }+y^{a }x^{b }\right) xf\left( \text {d}x\right) f\left( dy\right) \ . \end{aligned}$$

Using that \(\left( a-b \right) >0\) we obtain that in the region \( D_{\delta }^{\left( 2\right) }\) we have \(x^{a}y^{b}{\leqq }\delta ^{b-a}y^{a}x^{b}\). Therefore,

$$\begin{aligned} J_{2}\left( R\right) {\leqq }C_{\delta }\iint _{\Sigma _{R}\cap D_{\delta }^{\left( 2\right) }}\left( y^{a}x^{1+b}\right) f\left( \text {d}x\right) f\left( dy\right) . \end{aligned}$$

Notice that \(\Sigma _{R}\cap D_{\delta }^{\left( 2\right) }\subset \left[ 1,R \right] \times \left[ \frac{\delta R}{1+\delta },\infty \right) ,\) whence

$$\begin{aligned} J_{2}\left( R\right) {\leqq }C_{\delta }\int _{\left[ 1,R\right] }x^{1+b }f\left( \text {d}x\right) \int _{\left[ \frac{\delta R}{1+\delta },\infty \right) }y^{a}f\left( dy\right) \ . \end{aligned}$$

Given that \(\left( a-b \right) {\geqq }1\) we obtain, taking into account (4.38),

$$\begin{aligned} \int _{\left[ 1,R\right] }x^{1+b }f\left( \text {d}x\right) {\leqq }\int _{\left[ 1,\infty \right) }x^{a }f\left( \text {d}x\right) <\infty \ . \end{aligned}$$

Moreover, using again (4.38), it follows that

$$\begin{aligned} \lim _{R\rightarrow \infty }\int _{\left[ \frac{\delta R}{1+\delta },\infty \right) }y^{a }f\left( dy\right) =0. \end{aligned}$$

This implies that the contribution due to \(J_{2}\) vanishes in the limit \(R\rightarrow \infty \), namely

$$\begin{aligned} \lim _{R\rightarrow \infty }J_{2}\left( R\right) =0. \end{aligned}$$

Therefore, (4.37) implies that

$$\begin{aligned} \lim _{R\rightarrow \infty }J_{1}\left( R\right) =J\left( L_\eta \right) \ . \end{aligned}$$

For the next step, let us remark that for \((x,y) \in \Sigma _{R}\cap D_{\delta }^{\left( 1\right) }\) we have \(x>R-y {\geqq }R-\delta R\) and therefore \((1-\delta )R< x{\leqq }R.\) In this region we also have \(y^{a }x^{b }{\leqq }\delta ^{a-b }x^{a }y^{b }.\) Combining this with (2.12) and using the above bounds for x, we obtain

$$\begin{aligned} K\left( x,y\right) x{\leqq }c_3\left( 1+\delta ^{|a-b| }\right) R^{a +1}y^{b }\ \ ,\ \ \ \left( x,y\right) \in \Sigma _{R}\cap D_{\delta }^{\left( 1\right) } \end{aligned}$$

where \(c_3>0\) can be chosen independent of \(\delta \) as soon as \(\delta {\leqq }\frac{1}{2}\), which we assume in the following. Then

$$\begin{aligned} \liminf _{R\rightarrow \infty }\left( R^{a +1}\iint _{\Sigma _{R}\cap D_{\delta }^{\left( 1\right) }} y^{b }f\left( \text {d}x\right) f\left( dy\right) \right) {\geqq }\frac{J\left( L_\eta \right) }{c_3\left( 1+\delta ^{|a-b| }\right) } \ . \end{aligned}$$

Notice that, if \(R>\max \left\{ 1/\delta ,1/(1-\delta )\right\} \),

$$\begin{aligned} \Sigma _{R}\cap D_{\delta }^{\left( 1\right) }\subset \left\{ \left( x,y\right) :1{\leqq }y{\leqq }\delta R,\ R<x+y,\ 1 {\leqq }x{\leqq }R\right\} , \end{aligned}$$

whence

$$\begin{aligned} \int _{\left[ 1,\delta R\right] }y^{b}f\left( dy\right) \int _{\left( R-y,R\right] }f\left( \text {d}x\right) {\geqq }\frac{J\left( L_\eta \right) }{2c_3\left( 1+\delta ^{|a-b|}\right) }\frac{1}{R^{a+1}} \end{aligned}$$
(4.39)

for \(R{\geqq }R_{0}\) with \(R_{0}\) large enough.

The rest of the proof is divided into two cases: \(a{\geqq }0\) and \(a<0\).

Suppose first that \(a{\geqq }0\). Due to (4.38) we may define the function

$$\begin{aligned} F\left( R\right) =\int _{\left( R,\infty \right) }f\left( \text {d}x\right) \ \ ,\ \ R{\geqq }1 \ . \end{aligned}$$
(4.40)

Note that the function \(R\rightarrow F\left( R\right) \) is right continuous, that is \(F\left( R\right) =F\left( R^+\right) = \lim _{\rho \rightarrow R^{+}}F\left( \rho \right) .\) Moreover, F is non-increasing and \(F(R){\geqq }0\), for all \(R{\geqq }1\). Using (4.40) we can rewrite (4.39) as

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ F\left( R-y\right) -F\left( R\right) \right] y^{b}f\left( dy\right) {\leqq }-\frac{J\left( L_\eta \right) }{ 2c_3\left( 1+\delta ^{|a-b| }\right) }\frac{1}{R^{a +1}}\ \ \text {for }R{\geqq }R_{0}. \end{aligned}$$

From Lemma 4.1, it then follows that

$$\begin{aligned} F\left( R\right) {\geqq }\frac{B}{R^{a}}\ \ \text {if } R{\geqq }R_{0},\ \text { for } a>0 \end{aligned}$$
(4.41)

and

$$\begin{aligned} F\left( R\right) {\geqq }B \log (R)\ \ \text {if } R{\geqq }R_{0},\ \text { for } a=0 \end{aligned}$$
(4.42)

for some constant \(B>0\).

In the case where \(a>0\), we use (4.38) and (4.41) to obtain

$$\begin{aligned} \int _{[1,\infty )} x^{a}f(\text {d}x)= & {} \int _{[1,R]}x^{a}f(\text {d}x)+ \int _{(R,\infty )}x^{a}f(\text {d}x) \\&{\geqq }&\int _{[1,R]}x^{a}f(\text {d}x)+ R^a\int _{(R,\infty )}f(\text {d}x) \\&{\geqq }&\int _{[1,R]}x^{a}f(\text {d}x)+ B \ . \end{aligned}$$

By taking the limit \(R \rightarrow \infty \) we obtain that \(B{\leqq }0\) which leads to a contradiction.

In the case where \(a=0\), (4.42) yields

$$\begin{aligned} \int _{(R,\infty )} f(\text {d}x)&{\geqq }&B \log (R). \end{aligned}$$

By taking the limit \(R \rightarrow \infty \) we obtain, using (4.38), that the left-hand side converges to zero, while the right-hand side diverges, which leads to a contradiction.

Suppose now that \(a<0\). We define the function F by

$$\begin{aligned} F\left( R\right) =\int _{[1,R] }f\left( \text {d}x\right) \ \ ,\ \ R{\geqq }1. \end{aligned}$$
(4.43)

The function \(R\rightarrow F\left( R\right) \) is right continuous and non-decreasing. Since \(f \ne 0\), then \(F(R)> 0\), for all \(R{\geqq }R_0\), for \(R_0\) large enough. Using (4.43) we can rewrite (4.39) as

$$\begin{aligned} -\int _{\left[ 1,\delta R\right] }\left[ F\left( R\right) -F\left( R-y\right) \right] y^{b }f\left( dy\right) {\leqq }-\frac{J\left( L_\eta \right) }{ 2c_3\left( 1+\delta ^{|a-b| }\right) }\frac{1}{R^{a +1}}\ \ \text {for }R{\geqq }R_{0}. \end{aligned}$$

From Lemma 4.2, it follows that there are \(B>0\) and \(R'_0\ge R_0\) such that

$$\begin{aligned} F\left( R\right) {\geqq }\frac{B}{R^{a}}\ \ \text {if } R{\geqq }R'_{0} \, . \end{aligned}$$
(4.44)

From (4.44) it follows that for all \(R>M\) we have

$$\begin{aligned}&B {\leqq }R^a \int _{[1,R]} f(\text {d}x) {\leqq }R^a \int _{[1,M]} f(\text {d}x) + \int _{[M,R]} x^a f(\text {d}x) {\leqq }R^a \int _{[1,M]} f(\text {d}x) \\&\quad +\, \int _{[M,\infty )} x^a f(\text {d}x). \end{aligned}$$

Using that \(a < 0\), we first let \(R \rightarrow \infty \) and then \(M\rightarrow \infty \) to obtain that \(B {\leqq }0\), which leads to a contradiction. \(\square \)

5 Existence and Non-existence Results: Discrete Model

5.1 Setting and Main Results

We consider the following discrete coagulation equation with source:

$$\begin{aligned} \partial _{t}n_{\alpha }=\frac{1}{2}\sum _{\beta <\alpha }K_{\alpha -\beta ,\beta }n_{\alpha -\beta }n_{\beta }-n_{\alpha }\sum _{\beta >0}K_{\alpha ,\beta }n_{\beta }+ s_{\alpha }, \end{aligned}$$
(5.1)

where \(\alpha \in {\mathbb {N}}=\{1,2,\dots \}\). We assume that the sequence \(s=(s_{\alpha })_{\alpha \in {{\mathbb {N}}}}\) satisfies

$$\begin{aligned} s_{\alpha }{\geqq }0 \;\; \forall \, \alpha \in {{\mathbb {N}}} \quad \text {and} \quad {\mathrm{supp}} s \subset \{1,2,\dots , L_s\}. \end{aligned}$$
(5.2)

We consider coagulation kernels \(K_{\alpha ,\beta }: {{\mathbb {N}}}^2 \rightarrow {{\mathbb {R}}}_{+}\) defined on the integers satisfying the same conditions as before:

$$\begin{aligned} K_{\alpha ,\beta }{\geqq }0,\ \ \ K_{\alpha ,\beta }=K_{\beta ,\alpha }, \end{aligned}$$
(5.3)
$$\begin{aligned} K_{\alpha ,\beta } {\geqq }c_{1}\left( \alpha ^{\gamma +\lambda }\beta ^{-\lambda }+\beta ^{\gamma +\lambda }\alpha ^{-\lambda }\right) \end{aligned}$$
(5.4)

and

$$\begin{aligned} K_{\alpha ,\beta } {\leqq }c_{2}\left( \alpha ^{\gamma +\lambda }\beta ^{-\lambda }+\beta ^{\gamma +\lambda }\alpha ^{-\lambda }\right) \end{aligned}$$
(5.5)

for \((\alpha ,\beta )\in {\mathbb {N}}^{2}\), with \(0<c_{1}{\leqq }c_{2}<\infty \). Similarly to the continuous case, we will try to construct steady states for the coagulation equation (5.1) yielding the transfer of particles to infinity. More precisely, we consider stationary injection solutions to the discrete coagulation equation (5.1).

Definition 5.1

Assume that \(K:{{\mathbb {N}}}^{2}\rightarrow {{\mathbb {R}}}_{+}\) is a function satisfying (5.3) and (5.5). Assume further that \(s=(s_{\alpha })_{\alpha =1 }^{\infty }\) is a sequence in \({{\mathbb {R}}}\) satisfying (5.2). We will say that \((n_{\alpha })_{\alpha =1}^{\infty }\) satisfying

$$\begin{aligned} \sum _{\alpha =1 }^{\infty } \alpha ^{\gamma +\lambda }n_{\alpha } + \sum _{\alpha =1 }^{\infty } \alpha ^{-\lambda }n_{\alpha } <\infty \end{aligned}$$
(5.6)

is a stationary injection solution of (5.1) if the following identity holds for any test sequence with finite support \( (\varphi _{\alpha })_{\alpha =1}^{\infty }\):

$$\begin{aligned} \frac{1}{2}\sum _{\beta }\sum _{\alpha }K_{\alpha ,\beta }n_{\alpha }n_{\beta } \left[ \varphi _{\alpha +\beta }-\varphi _{\alpha }-\varphi _{\beta }\right] +\sum _{ \beta }s_{\beta }\varphi _{\beta }=0 . \end{aligned}$$
(5.7)

The existence and non-existence of stationary injection solutions for the discrete model are stated in the following two theorems.

Theorem 5.2

Assume that \(K:{{\mathbb {N}}}^{2}\rightarrow {{\mathbb {R}}}_{+}\) satisfies (5.3)– (5.5) and \(| \gamma +2\lambda | <1.\) Let \(s \ne 0 \) satisfy (5.2). Then, there exists a stationary injection solution \( (n_{\alpha })_{\alpha =1}^{\infty }\) to (5.1) in the sense of Definition 5.1 satisfying \(n_{\alpha }{\geqq }0\) for all \(\alpha \).

Theorem 5.3

Suppose that \(K:{{\mathbb {N}}}^{2}\rightarrow {{\mathbb {R}}}_{+}\) satisfies (5.3)–(5.5) and \(|\gamma +2\lambda | {\geqq }1.\) Let us assume also that \(s \ne 0\) satisfies (5.2). Then, there is no stationary injection solution of (5.1) in the sense of Definition 5.1.

5.2 Existence Result

We first consider equations of the form (5.1) but with the sequence \((n_\alpha (t))_{\alpha \in {{\mathbb {N}}}} \) supported in \(I:=\{1,2,\dots ,R_{*} \}\) for each \(t{\geqq }0.\) Therefore, (5.1) becomes

$$\begin{aligned} \partial _{t}n_{\alpha }=\frac{1}{2}\sum _{\beta {\leqq }\alpha -1}K_{\alpha -\beta ,\beta }n_{\alpha -\beta }n_{\beta }-n_{\alpha }\sum _{\beta {\leqq }R_{*}}K_{\alpha ,\beta }n_{\beta }+\sum _{ \beta {\leqq }R_*}s_{\beta }\delta _{\alpha ,\beta }, \end{aligned}$$
(5.8)

where \(\delta _{\alpha ,\beta }=1\) if \(\alpha =\beta \), and \(\delta _{\alpha ,\beta }=0\) otherwise. Let \((\varphi _{\alpha })_{\alpha \in I}\) be an arbitrary test function such that \(\varphi _{\alpha }:[0,T]\rightarrow {{\mathbb {R}}}\) is continuously differentiable for any \(\alpha \). Multiplying (5.8) by \(\varphi _{\alpha }\) and summing over \(\alpha \) we obtain the weak formulation of (5.8):

$$\begin{aligned}&\frac{d}{dt}\left( \sum _{\alpha {\leqq }R_{*}}{n_\alpha (t)\varphi _\alpha (t)}\right) -\sum _{\alpha {\leqq }R_{*}}{n_\alpha (t)\dot{ \varphi }_\alpha (t) } \nonumber \\&\quad =\frac{1}{2}\sum _{\beta {\leqq }R_{*}}\sum _{\alpha {\leqq }R_{*}}K_{\alpha ,\beta }{n_\alpha (t)}{n_\beta (t)}\left[ {\varphi _{\alpha +\beta }(t)} \chi _{\left\{ \alpha +\beta {\leqq }R_{*}\right\} }-\varphi _{\alpha }(t)-\varphi _{\beta }(t)\right] \nonumber \\&\quad +\sum _{\beta {\leqq }R_{*}}s_{\beta }\varphi _{\beta }(t), \end{aligned}$$
(5.9)

where \({{\dot{\varphi }}}\) denotes the time-derivative of \(\varphi \) and \(\chi _{\left\{ \alpha +\beta {\leqq }R_{*}\right\} } \) is the characteristic function of the set \(\left\{ \alpha +\beta {\leqq }R_{*}\right\} .\)

The approximation (5.8) is known as the non-conservative approximation of the coagulation equation (5.1). This equation and its weak formulation (5.9) have been extensively used in the study of the mathematical properties of the coagulation equations (cf. for instance [8, 29]).

Our first goal is to prove the well-posedness of (5.8).

Proposition 5.4

Assume that \(1<R_{*}<\infty \) and that \( K:I^{2}\rightarrow {{\mathbb {R}}}_{+}\) is a function satisfying (5.3) and (5.4). Assume further that \( s=(s_{\alpha })_{\alpha \in I}\) satisfies (5.2). Let \( (n_{\alpha }\left( 0\right) )_{\alpha \in I}\) be the initial condition. Then, there exists a unique solution \((n_{\alpha }\left( t\right) )_{\alpha \in I},\) with \(n_{\alpha }: (0,\infty )\rightarrow {{\mathbb {R}}}_{+}\) continuously differentiable for any \(\alpha \), which solves (5.8) in the classical sense.

Proof

The proof of this statement relies on classical arguments of the theory of ordinary differential equations. We just outline the main steps. To simplify the notation we define

$$\begin{aligned} g_{\alpha }:{{\mathbb {R}}}_{+}^{R_{*}}\rightarrow {{\mathbb {R}}} \quad \text {for}\;\; \alpha =1,\dots , R_{*} \end{aligned}$$

such that

$$\begin{aligned} g_{\alpha }(\xi _1,\dots , \xi _{R_{*}})=\frac{1}{2}\sum _{\beta {\leqq }\alpha -1}K_{\alpha -\beta ,\beta }\xi _{\alpha -\beta }\xi _{\beta }-\xi _{\alpha } \sum _{\beta {\leqq }R_{*}}K_{\alpha ,\beta }\xi _{\beta }+\sum _{\beta {\leqq }R_*}s_{\beta }\delta _{\alpha ,\beta }. \end{aligned}$$

Then, we can rewrite (5.8) as

$$\begin{aligned} \partial _t n_{\alpha }=g_{\alpha }(n_1,\dots , n_{R_{*}}), \end{aligned}$$

with initial condition \(n_{\alpha }\left( 0\right) .\) We observe that the functions \(g_{\alpha }\) are polynomials, therefore they are locally Lipschitz continuous. Thus, due to the Picard-Lindelöf theorem there exists a unique continuously differentiable solution \((n_{\alpha }\left( t\right) )_{\alpha \in I}\) on a maximal time interval \([0,T_{*})\).

Moreover, since \(K_{\alpha ,\beta }{\geqq }0,\) \(s_{\beta }{\geqq }0\) by assumption and \(n_{\alpha }(0){\geqq }0,\) it easily follows that \(n_{\alpha }{\geqq }0\) in \([0,T_{*})\) for any \(\alpha =1,\dots ,R_{*}\). The fact that the solutions of (5.8) are globally defined in time follows from the estimate

$$\begin{aligned} \partial _t\left( \sum _{\alpha =1}^{R_*} n_\alpha \right) {\leqq }\sum _{\alpha =1}^{R_*} s_\alpha . \end{aligned}$$
(5.10)

\(\square \)

Next we show the existence of stationary injection solutions to (5.8), that is, of time-independent solutions of (5.8).

Proposition 5.5

Under the assumptions of Proposition 5.4, there exists a stationary injection solution \(({{\hat{n}}}_{\alpha })_{\alpha \in I}\) to (5.8) satisfying \({{\hat{n}}}_{\alpha }{\geqq }0\) for any \(\alpha \in I\).

Proof

We first construct an invariant region for the evolution equation (5.8). From Proposition 5.4 there exists a unique solution to (5.8), \((n_{\alpha }\left( t\right) )_{\alpha \in I}\), with \(n_{\alpha }:(0,\infty )\rightarrow {{\mathbb {R}}} _{+} \) continuously differentiable for any \(\alpha \). In particular, \((n_{\alpha }\left( t\right) )_{\alpha \in I}\) satisfies (5.9). Choosing \(\varphi _{\alpha }=1\), using the bound \(\chi _{\left\{ \alpha +\beta {\leqq }R_{*}\right\} }{\leqq }1\) and the lower bound \(a_{1}=\min _{\alpha ,\beta \in I}K_{\alpha ,\beta }\), we obtain

$$\begin{aligned} \frac{d}{dt}\sum _{\alpha {\leqq }R_{*}}n_{\alpha }(t){\leqq }-\frac{a_{1}}{2} \left( \sum _{\alpha {\leqq }R_{*}}n_{\alpha }(t)\right) ^{2}+c_{0} \end{aligned}$$

where \(c_{0}=\sum _{\beta {\leqq }R_{*}}s_{\beta }\). Notice that \(a_{1}>0\) because we assume that (5.4) holds. We then obtain the invariant region

$$\begin{aligned} {\mathcal {U}}_{M}=\left\{ (n_{\alpha })_{\alpha \in I}\in {{\mathbb {R}}} ^{R_{*}}:\sum _{\alpha {\leqq }R_{*}}n_{\alpha }{\leqq }M\right\} , \end{aligned}$$
(5.11)

with \(M{\geqq }\sqrt{\frac{2c_{0}}{a_{1}}}\). Moreover, \({\mathcal {U}}_{M}\) is compact and convex. Consider the operator \(S(t):{{\mathbb {R}}}^{R_{*}}\rightarrow { {\mathbb {R}}}^{R_{*}}\) defined by \(n_{\alpha }(t)=S(t)n_{\alpha }(0)\). This operator is continuous by standard results on the continuous dependence of solutions of ODEs on their initial data (cf. [6]). Since the functions \(n_{\alpha }(t)\) solve a first order ODE, they are also continuous in time. Then, the mapping \(t\rightarrow S(t)n_{\alpha }(0)\) is continuous.

We can now conclude the proof of Proposition 5.5. The operator \(S(t):{\mathcal {U}}_{M}\rightarrow {\mathcal {U}}_{M}\) is continuous and \({\mathcal {U}}_{M}\) is convex and compact. Then, Brouwer’s Theorem (cf. [19]) implies that for all \(\delta >0\), there exists a fixed-point \({\hat{n}}_{\delta }\) of \(S(\delta )\) in \({\mathcal {U}}_{M}\). Arguing as in the last paragraph of the proof of Theorem 2.3 we conclude that there exists \({\hat{n}}\in {\mathcal {U}}_{M} \) such that \(S(t) {\hat{n}} = {\hat{n}}\) for all \(t{\geqq }0\), which implies that \({\hat{n}}\) is a stationary injection solution to (5.8). \(\square \)
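The truncated dynamics (5.8) and its stationary state can easily be explored numerically. The following Python sketch (not part of the paper) uses scipy to integrate (5.8) for a sample kernel of the form appearing in (5.4)–(5.5) with \(c_1=c_2=1\), a pure monomer source and \(R_*=50\); the run illustrates the non-negativity and global boundedness from Proposition 5.4 and the relaxation towards a stationary injection solution as in Proposition 5.5. All parameter values are arbitrary choices made for illustration.

```python
# Illustration only: integrate the truncated system (5.8) for a sample kernel and source.
import numpy as np
from scipy.integrate import solve_ivp

R_star = 50
gamma, lam = -0.5, 0.2                        # sample exponents with |gamma + 2*lam| < 1
alpha = np.arange(1, R_star + 1, dtype=float)
K = alpha[:, None] ** (gamma + lam) * alpha[None, :] ** (-lam)
K = K + K.T                                   # K_{a,b} = a^(g+l) b^(-l) + b^(g+l) a^(-l)
s = np.zeros(R_star)
s[0] = 1.0                                    # monomer source only, supp s = {1}

def rhs(t, n):
    dn = np.zeros_like(n)
    for a in range(2, R_star + 1):            # gain term: (1/2) sum_{b < a} K_{a-b,b} n_{a-b} n_b
        b = np.arange(1, a)
        dn[a - 1] = 0.5 * np.sum(K[a - b - 1, b - 1] * n[a - b - 1] * n[b - 1])
    return dn - n * (K @ n) + s               # loss term and source as in (5.8)

sol = solve_ivp(rhs, (0.0, 2000.0), np.zeros(R_star), rtol=1e-6, atol=1e-9)
n_final = sol.y[:, -1]
print("residual |rhs| at final time:", np.max(np.abs(rhs(0.0, n_final))))
print("minimal concentration:", n_final.min())   # non-negative up to solver tolerance
```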

We now prove Theorem 5.2.

Proof of Theorem 5.2 (existence)

We just sketch the argument since it is an adaptation of the proof of Theorem 2.3. For notational convenience we rewrite the kernel \(K_{\alpha ,\beta }=K(\alpha ,\beta )\) in the form (3.21), where now x, \(y\in {{\mathbb {N}}}\). Throughout this proof we will also use the notation \(n_\alpha =n(\alpha )\) and \(n_\beta =n(\beta )\). The function \(\Phi (s,x)\) is defined on a subset of the rational numbers contained in the interval (0, 1) and satisfies (3.22) in this domain of definition. We then define the kernel \(K_{\varepsilon }(x,y)\) as in (3.23) and \(K_{\varepsilon ,R_{*}}(x,y)\) as in (3.26). Hence, using Proposition 5.5, there exists a stationary injection solution \(n_{\varepsilon ,R_{*}}\) satisfying

$$\begin{aligned}&\frac{1}{2}\sum \limits _{\beta {\leqq }R_{*}}\sum \limits _{\alpha {\leqq }R_{*}}K_{\varepsilon ,R_{*}}(\alpha ,\beta )n_{\varepsilon ,R_{*}}(\alpha )n_{\varepsilon ,R_{*}}(\beta )\left[ \varphi _{\alpha +\beta }\chi _{\left\{ \alpha +\beta {\leqq }R_{*}\right\} }-\varphi _\alpha -\varphi _\beta \right] \nonumber \\&\quad +\,\sum _{\beta {\leqq }R_{*}}s_{\beta }\varphi _\beta =0 \end{aligned}$$
(5.12)

for any test function \(\varphi : {{\mathbb {N}}}\rightarrow {{\mathbb {R}}}\) compactly supported.

Choosing \(\varphi \) of the form \(\varphi _{\alpha }=\alpha \psi _{\alpha }\) for some compactly supported function \(\psi : {{\mathbb {N}}}\rightarrow {{\mathbb {R}}}\), we obtain

$$\begin{aligned}&\varphi _{\alpha +\beta } \chi _{\{ \alpha +\beta {\leqq }R_{*}\}} -\varphi _\alpha -\varphi _\beta \\&\quad =\alpha (\psi _{ \alpha +\beta }\chi _{\{ \alpha +\beta {\leqq }R_{*}\}} -\psi _{\alpha })+ \beta (\psi _{ \alpha +\beta }\chi _{\{ \alpha +\beta {\leqq }R_{*}\}} -\psi _\beta ). \end{aligned}$$

Symmetrizing we arrive at

$$\begin{aligned}&\sum _{\beta {\leqq }R_{*}}\sum _{\alpha {\leqq }R_{*}}K_{\varepsilon ,R_{*}}(\alpha ,\beta )n_{\varepsilon ,R_{*}}(\alpha )n_{\varepsilon ,R_{*}}(\beta )\left[ \alpha (\psi _{ \alpha +\beta }\chi _{\{ \alpha +\beta {\leqq }R_{*}\}} -\psi _\alpha )\right] \\&\quad +\sum _{\alpha {\leqq }R_{*}}\alpha \psi _\alpha s_\alpha =0. \end{aligned}$$

Let us assume that

$$\begin{aligned} \psi _\alpha =0\text { for }\alpha {\geqq }R_{*}. \end{aligned}$$

For such test functions we have \(\psi _{\alpha +\beta }\chi _{\{ \alpha +\beta {\leqq }R_{*}\}} =\psi _{\alpha +\beta }\), therefore,

$$\begin{aligned} \sum _{\beta {\leqq }R_{*}}\sum _{\alpha {\leqq }R_{*}}K_{\varepsilon ,R_{*}}(\alpha ,\beta )n_{\varepsilon ,R_{*}}(\alpha )n_{\varepsilon ,R_{*}}(\beta )\left[ \alpha (\psi _{ \alpha +\beta }-\psi _\alpha )\right] +\sum _{\alpha {\leqq }R_{*}}\alpha \psi _\alpha s_\alpha =0. \end{aligned}$$

Choosing a test function \(\psi _\alpha = \chi _{\{\alpha {\leqq }z\}}\) we obtain

$$\begin{aligned} \sum _{\alpha {\leqq }z} \alpha n_{\varepsilon ,R_{*}}(\alpha )\sum _{\beta > z-\alpha }K_{\varepsilon ,R_{*}}(\alpha ,\beta )n_{\varepsilon ,R_{*}}(\beta ) = \sum _{\alpha {\leqq }z} \alpha s_\alpha ,\ \ z\in \left( 0,R_{*}\right) . \end{aligned}$$

We can then argue as in the proof of (3.32) to obtain

$$\begin{aligned} \sum _{\alpha < R_*} n_{\varepsilon ,R_{*}}(\alpha ) {\leqq }\bar{C}_{\varepsilon }. \end{aligned}$$

Therefore there exists a subsequence \(R_{*}^{n}\rightarrow \infty \) and \((n_{\varepsilon }(\cdot ) )\in \ell ^{1}({{\mathbb {N}}})\) such that \(n_{\varepsilon , R^n_{*}}(\alpha ) \rightarrow n_{\varepsilon }(\alpha )\) for any \(\alpha \in {{\mathbb {N}}}\). Moreover, \(\sum _{\alpha } n_{\varepsilon }(\alpha ) {\leqq }\bar{C}_{\varepsilon }\) and for any bounded test function \(\varphi : {{\mathbb {N}}}\rightarrow {{\mathbb {R}}}_+\), \(n_{\varepsilon }\) satisfies

$$\begin{aligned} \frac{1}{2}\sum _{\beta }\sum _{\alpha }K_{\varepsilon }(\alpha ,\beta )n_{\varepsilon }(\alpha )n_{\varepsilon }(\beta )\left[ \varphi _{\alpha +\beta }-\varphi _\alpha -\varphi _\beta \right] +\sum _{\beta }s_{\beta }\varphi _\beta =0. \end{aligned}$$
(5.13)

Following the same reasoning as in the derivation of (3.39) and (3.40) in the proof of Theorem 2.3 we then arrive at

$$\begin{aligned} \frac{1}{\beta }\sum _{\alpha \in \left[ \frac{2\beta }{3},\beta \right] \cap {{\mathbb {N}}}}n_{\varepsilon }(\alpha )&{\leqq }&\frac{c}{\beta ^{3/2}}\left( \frac{1}{\min \left\{ \beta ^{\gamma },\frac{1}{\varepsilon }\right\} }\right) ^{1/2} \nonumber \\&{\leqq }&\frac{C}{ \beta ^{3/2} \sqrt{\varepsilon }},\ \ \text { for all }\beta \in {{\mathbb {N}}}. \end{aligned}$$
(5.14)

Then, taking subsequences, we obtain that there exists a limit sequence \((n(\alpha ))_{\alpha \in {{\mathbb {N}}}}\) such that \(n_{\varepsilon _{n}}(\alpha )\rightarrow n(\alpha )\) as \(n\rightarrow \infty \) with \(\varepsilon _n\rightarrow 0\) as \(n\rightarrow \infty \) for any \(\alpha \in {{\mathbb {N}}}\).

Definition (3.23) implies that \(\lim _{\varepsilon \rightarrow 0}K_{\varepsilon }\left( \alpha ,\beta \right) =K\left( \alpha ,\beta \right) \) uniformly in compact sets. Taking now the limit as \(n\rightarrow \infty \) in (5.13) we obtain that n satisfies:

$$\begin{aligned} \frac{1}{2}\sum _{\beta }\sum _{\alpha }K(\alpha ,\beta )n(\alpha )n(\beta )\left[ \varphi _{\alpha +\beta }-\varphi _\alpha -\varphi _\beta \right] +\sum _{\beta }s_{\beta }\varphi _\beta =0, \end{aligned}$$
(5.15)

for every compactly supported test function \(\varphi \). The only difficulty in doing so is to control the contribution due to the regions with \(\beta {\geqq }M\), with M large, in the sums

$$\begin{aligned} \sum _{\beta }\sum _{\alpha }K(\alpha ,\beta )n_{\varepsilon }(\alpha )\varphi _\alpha n_{\varepsilon }(\beta ). \end{aligned}$$

This can be done by arguing exactly as in the proof of Theorem 2.3, distinguishing the cases (3.52)-(3.55) and replacing the integrals by sums.

Moreover, taking the limit of (5.14) as \(\varepsilon \rightarrow 0\) we arrive at:

$$\begin{aligned} \frac{1}{\beta }\sum _{\alpha \in [2\beta /3,\beta ]\cap {{\mathbb {N}}}}n(\alpha ){\leqq }\frac{C}{\beta ^{3/2+\gamma /2}}\ \ \text { for all }\beta \in {{\mathbb {N}}}, \end{aligned}$$

which implies

$$\begin{aligned} \frac{1}{\beta }\sum _{\alpha \in [2\beta /3,\beta ]\cap {{\mathbb {N}}}} \alpha ^{\gamma + \lambda } n(\alpha ){\leqq }{\bar{C}} \beta ^{\gamma + \lambda - 3/2-\gamma /2}\ \ \text { for all }\beta \in {{\mathbb {N}}}, \end{aligned}$$

which implies (5.6) after summing over a geometric sequence of values of \(\beta \), using that \(-1<\gamma +2\lambda <1\). \(\square \)

Remark 5.6

We notice that in the paper [37] it has been proved that there exists a unique stationary solution of a problem that can be reformulated as a solution of (5.1) for the explicit kernel \(K_{\alpha ,\beta }=\alpha \beta .\)

5.3 Non-existence Result

We first give an example of a construction of a continuous kernel \({{\widetilde{K}}}\) which interpolates the values of the discrete kernel \(K_{\alpha ,\beta }\) and satisfies (2.11)-(2.12). Let \(K_{\alpha , \beta }\) satisfy (5.3)-(5.5) and let w denote the corresponding weight function in (1.5). We define the continuous kernel \({\widetilde{K}}: ({{\mathbb {R}}}_*)^2 \rightarrow {{\mathbb {R}}}_+\) by setting

$$\begin{aligned} {{\widetilde{K}}}(x,y) = \sum _{\alpha ,\beta =1}^\infty K_{\alpha ,\beta } \zeta _\varepsilon (x-\alpha ) \zeta _\varepsilon (y-\beta )+ c_1 \left( \zeta _\varepsilon (x) + \zeta _\varepsilon (y)\right) w(x,y)\,, \quad x,y>0\,, \end{aligned}$$
(5.16)

where \(\varepsilon <1/2\) and \(\zeta _\varepsilon \) is a continuous non-negative function satisfying \(\zeta _\varepsilon (x)= 0\) for \(|x| {\geqq }1/2+\varepsilon \), \(\zeta _\varepsilon (x)= 1\) for \(|x| {\leqq }1/2-\varepsilon \), and affine in each of the intervals \((1/2-\varepsilon ,1/2+\varepsilon )\) and \((-1/2-\varepsilon ,-1/2+\varepsilon )\). We remark that the series in (5.16) is convergent at any x and y since it contains at most four non-zero terms.
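The construction (5.16) is completely explicit and can be transcribed directly into code. The following sketch (an illustration, not part of the paper) implements \(\zeta _\varepsilon \) and \({{\widetilde{K}}}\) for a given discrete kernel; the power-law weight w used in the example usage is an assumed sample form standing in for the function in (1.5).

```python
# Illustration only: direct transcription of the interpolation (5.16).
import numpy as np

def zeta_eps(x, eps=0.25):
    """Continuous cut-off: 1 for |x| <= 1/2-eps, 0 for |x| >= 1/2+eps, affine in between."""
    return float(np.clip((0.5 + eps - abs(x)) / (2.0 * eps), 0.0, 1.0))

def K_tilde(x, y, K_discrete, w, c1=1.0, eps=0.25):
    """Evaluate (5.16); only the (at most four) nearby integer pairs contribute."""
    total = 0.0
    for a in range(max(1, int(np.floor(x - 0.5 - eps))), int(np.ceil(x + 0.5 + eps)) + 1):
        for b in range(max(1, int(np.floor(y - 0.5 - eps))), int(np.ceil(y + 0.5 + eps)) + 1):
            total += K_discrete(a, b) * zeta_eps(x - a, eps) * zeta_eps(y - b, eps)
    return total + c1 * (zeta_eps(x, eps) + zeta_eps(y, eps)) * w(x, y)

# Example usage with an assumed sample weight and the corresponding discrete kernel.
gamma, lam = -0.5, 0.2
w = lambda x, y: x ** (gamma + lam) * y ** (-lam) + y ** (gamma + lam) * x ** (-lam)
K_discrete = lambda a, b: w(float(a), float(b))
print(K_tilde(2.0, 3.0, K_discrete, w), K_discrete(2, 3))   # the two values agree at integer points
```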

The function \({{\widetilde{K}}}\) is continuous, non-negative and symmetric as it is written as a sum of functions with the same properties. We now show that \({{\widetilde{K}}}\) satisfies the growth bounds (2.11)-(2.12) with the same exponents \(\gamma , \lambda \) of the discrete kernel \(K_{\alpha ,\beta }\), although possibly for different constants \(c_1\) and \(c_2\). Let us observe first that, if \(x<\frac{1}{2}\) or \(y<\frac{1}{2}\), the second term in (5.16) is proportional to w(xy) and thus already provides a suitable lower bound. The upper bound may also be checked to hold then, after possibly adjusting \(c_2\) from the value in (5.5). Hence, we assume \(x,y\ge \frac{1}{2}\) in the following.

For each \(\alpha ,\beta \in {{\mathbb {N}}}\), we have from (5.16) that \({{\widetilde{K}}}(\alpha ,\beta )=K_{\alpha ,\beta }\). Therefore \(\widetilde{K}(x,y)\) satisfies (2.11)-(2.12) for \(x=\alpha \) and \(y=\beta \). If \(x\in [\alpha -1/2, \alpha +1/2]\), \(y\in [\beta -1/2, \beta +1/2]\) we have that \(\frac{1}{2}K_{\alpha ,\beta } {\leqq }{{\widetilde{K}}}(x,y) {\leqq }\sum _{i,j=-1,0,1 } K_{\alpha +i,\beta +j} + c_1 \left( \zeta _\varepsilon (x) + \zeta _\varepsilon (y)\right) w(x,y)\), where we set \(K_{0,j}=K_{j,0}=0\) for \(j\in {{\mathbb {N}}}\). Using the bounds (5.4), (5.5), and the monotonicity properties of w, this implies that there exist positive constants \(c_1\) and \(c_2\) such that \({{\widetilde{K}}}(x,y)\) satisfies (2.11)-(2.12).

Lemma 5.7

Assume that

  • \(K: {{\mathbb {N}}}^2 \rightarrow {{\mathbb {R}}}_+\) is a function satisfying (5.3) and (5.5),

  • \({{\widetilde{K}}}: {{\mathbb {R}}}_*^2 \rightarrow {{\mathbb {R}}}_+ \) is a continuous interpolation of K satisfying (2.10) and (2.12), i.e., \({{\widetilde{K}}} \in C({{\mathbb {R}}}_*^2) \) and \(K_{\alpha , \beta } = {{\widetilde{K}}}(\alpha ,\beta )\),

  • \(s = (s_\alpha )_{\alpha \in {{\mathbb {N}}}} \) satisfies \(s \ne 0\) and (5.2),

  • \((n_\alpha )_{\alpha \in {{\mathbb {N}}}}\) is a stationary injection solution to (5.1) in the sense of Definition 5.1.

Let \(f, \eta \in {\mathcal {M}}_+({{\mathbb {R}}}_*)\) be defined by \( f(\text {d}x)=\sum _{\alpha =1}^\infty n_\alpha \delta _{\alpha }(\text {d}x)\) and \( \eta (\text {d}x)= \sum _{\alpha =1}^\infty s_{\alpha }\delta _{\alpha }(\text {d}x) \), where \(\delta _{\alpha } \in {\mathcal {M}}_+({{\mathbb {R}}}_*)\) is the Dirac measure at \(\alpha \). Then f is a stationary injection solution to the continuous coagulation equation (1.3) in the sense of Definition 2.1 with the kernel \({{\widetilde{K}}}\) and source \(\eta \).

Proof

We first notice that \(\eta \) satisfies (2.13) with \(L_\eta = L_s\) and \({{\widetilde{K}}}\) satisfies (2.10) and (2.12). Therefore \({{\widetilde{K}}}\) and \(\eta \) satisfy the assumptions of Definition 2.1.

For \(f \in {\mathcal {M}}_+({{\mathbb {R}}}_*)\) such that \(f=\sum _{\alpha =1}^\infty \delta _{\alpha } n_\alpha \), we have that \(f((0,1))=0\). Since \((n_\alpha )_{\alpha \in {{\mathbb {N}}}}\) is a stationary injection solution in the sense of Definition 5.1, it satisfies (5.6). Using (5.6) and Fubini’s theorem to exchange the sum and the integral, we obtain

$$\begin{aligned} \infty> & {} \sum _{\alpha =1 }^{\infty } \alpha ^{\gamma +\lambda }n_{\alpha } + \sum _{\alpha =1 }^{\infty } \alpha ^{-\lambda }n_{\alpha } \\= & {} \sum _{\alpha =1}^\infty \int _{\left( 0,\infty \right) }x^{\gamma +\lambda } n_\alpha { \delta _{\alpha }(\text {d}x) } + \sum _{\alpha =1}^\infty \int _{\left( 0,\infty \right) }x^{-\lambda } n_\alpha { \delta _{\alpha }(\text {d}x) } \\= & {} \int _{\left( 0,\infty \right) }x^{\gamma +\lambda }\sum _{\alpha =1}^\infty n_\alpha { \delta _{\alpha }(\text {d}x) }+ \int _{\left( 0,\infty \right) }x^{-\lambda }\sum _{\alpha =1}^\infty n_\alpha { \delta _{\alpha }(\text {d}x) } \\= & {} \int _{\left( 0,\infty \right) }x^{\gamma +\lambda }f\left( \text {d}x\right) + \int _{\left( 0,\infty \right) }x^{-\lambda }f\left( \text {d}x\right) , \end{aligned}$$

which proves (2.14). For any test function \(\varphi \in C_{c}({{\mathbb {R}}}_*)\) we use Fubini’s theorem to exchange the sum and the integral yielding

$$\begin{aligned} \int _{ (0,\infty )} \varphi \left( x\right) \eta \left( \text {d}x\right)= & {} \int _{(0,\infty )}\varphi \left( x\right) \sum _{\alpha =1}^\infty s_{\alpha } { \delta _{\alpha }(\text {d}x) }\nonumber \\= & {} \sum _{\alpha =1}^\infty \int _{ (0,\infty )}\varphi \left( x\right) s_{\alpha }{ \delta _{\alpha }(\text {d}x) } =\sum _{ \alpha =1 }^\infty s_{\alpha }\varphi _{\alpha }. \end{aligned}$$
(5.17)

Using (2.14) we have that for any test function \(\varphi \in C_{c}({{\mathbb {R}}}_*)\),

$$\begin{aligned} \iint _{ (0,\infty )^2}{{\widetilde{K}}}\left( x,y\right) \left[ \varphi \left( x+y\right) -\varphi \left( x\right) -\varphi \left( y\right) \right] f\left( \text {d}x\right) f\left( dy\right) < \infty \end{aligned}$$

and thus the integral is well-defined. Using again Fubini’s theorem we obtain that

$$\begin{aligned}&\frac{1}{2}\iint _{(0,\infty )^2}{{\widetilde{K}}}\left( x,y\right) \left[ \varphi \left( x+y\right) -\varphi \left( x\right) -\varphi \left( y\right) \right] f\left( \text {d}x\right) f\left( dy\right) \nonumber \\&\quad = \frac{1}{2}\sum _{\beta }\sum _{\alpha }K_{\alpha ,\beta }n_{\alpha }n_{\beta }\left[ \varphi _{\alpha +\beta }-\varphi _{\alpha }-\varphi _{\beta }\right] . \end{aligned}$$
(5.18)

Adding (5.17) to (5.18) and using (5.7), we obtain (2.15), which concludes the proof. \(\square \)

Let \({\mathcal {C}} \subset {\mathcal {M}}_+({{\mathbb {R}}}_*)\) be the set of positive bounded Radon measures supported on the natural numbers, that is,

$$\begin{aligned} {\mathcal {C}} = \{ f \in {\mathcal {M}}_+({{\mathbb {R}}}_*)\ | \ f = \sum _{\beta = 1}^\infty n_\beta {\delta _{\beta }},\ n_\beta {\geqq }0,\ \beta \in {{\mathbb {N}}}\}. \end{aligned}$$

Proof of Theorem 5.3 (non-existence)

We first recall the interpolation construction from the beginning of this Section, which allows us to extend the discrete kernel, satisfying the bounds (5.4)-(5.5), to a continuous interpolation kernel \({{\widetilde{K}}}\) such that (2.11)-(2.12) hold. From Theorem 2.3 there is no stationary injection solution to (1.3) in the sense of Definition 2.1. In particular, by Lemma 5.7, there is no solution in the subset of discrete measures \({\mathcal {C}}\), which concludes the proof. \(\square \)

6 Estimates and Regularity

In order to derive upper and lower estimates for the measure f we need detailed estimates for the fluxes J defined on the left hand side of (2.18) in Lemma 2.8. That is, we consider the function

$$\begin{aligned} J(z) = \iint _{\Omega _z} K\left( x,y\right) xf \left( \text {d}x\right) f\left( dy\right) \,, \quad z >0, \end{aligned}$$

where

$$\begin{aligned} \Omega _z := \{(x,y)\ |\ 0< x {\leqq }z ,\ y > z-x \}. \end{aligned}$$
(6.1)

Given \(\delta >0\), we introduce a partition of \(({{\mathbb {R}}}_+)^2= \Sigma _1(\delta ) \cup \Sigma _2(\delta ) \cup \Sigma _3(\delta ) \) by

$$\begin{aligned} \Sigma _1(\delta ) = \{(x,y)\ |\ y > x/\delta \}\,,\quad \Sigma _2(\delta ) = \{(x,y)\ |\ \delta x{\leqq }y {\leqq }x/\delta \}\,,\nonumber \\ \Sigma _3(\delta ) = \{(x,y)\ |\ y < \delta x \} \,, \end{aligned}$$
(6.2)

and we then define for \(j=1,2,3\)

$$\begin{aligned} J_j(z,\delta ) = \iint _{\Omega _z \cap \Sigma _j(\delta )}K\left( x,y\right) xf \left( \text {d}x\right) f\left( dy\right) \ \text {for }z >0. \end{aligned}$$
(6.3)

Clearly, \(J(z) = \sum _{j=1}^3 J_j(z,\delta ) \) for any choice of \(\delta \).
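As an illustration (a worked example which is not needed in what follows), consider the constant kernel \(K\equiv 2\) (so \(\gamma =\lambda =0\)) and the power-law measure \(f(\text {d}x)=\frac{1}{2\sqrt{\pi }}\,x^{-3/2}\text {d}x\). A direct computation gives \(J(z)=1\) for every \(z>0\), while the partial fluxes can be evaluated explicitly:

$$\begin{aligned} J_1(z,\delta )= \frac{1}{\pi }\left( 2\arcsin \sqrt{\frac{\delta }{1+\delta }} + \sqrt{\delta }\,\ln \frac{1+\delta }{\delta }\right) ,\qquad J_3(z,\delta )= \frac{1}{\pi }\left( \pi - 2\arcsin \frac{1}{\sqrt{1+\delta }} - \frac{\ln (1+\delta )}{\sqrt{\delta }}\right) , \end{aligned}$$

independently of z. Both terms are of order \(\sqrt{\delta }\,\ln (1/\delta )\) as \(\delta \rightarrow 0\), so that \(J_2(z,\delta )\rightarrow J(z)\); this is the behaviour quantified for general kernels in Lemma 6.1 below.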

The following Lemma will be used to prove that the contributions to the integral defining the fluxes coming from the regions \( \Sigma _1(\delta )\) and \( \Sigma _3(\delta ) \) are small for \(\delta \) sufficiently small.

Lemma 6.1

Let K satisfy (2.9)–(2.12) and \(| \gamma +2\lambda | <1.\) Suppose that \(f \in {\mathcal {M}}_+({{\mathbb {R}}}_*)\) satisfies

$$\begin{aligned} \frac{1}{z}\int _{[z/2,z]} f(\text {d}x) {\leqq }\frac{ A }{z^{(\gamma + 3)/2}}\ \ \text { for all } z>0. \end{aligned}$$
(6.4)

Then for every \(\varepsilon >0\) there exists a \( \delta _{\varepsilon }>0\) depending on \(\varepsilon \) as well as on \(\gamma ,\ \lambda \) and on the constants \(c_1,\ c_2\) in (2.11)–(2.12) but independent of A such that for any \(\delta {\leqq }\delta _{\varepsilon }\) we have that

$$\begin{aligned} \sup _{z>0} J_1(z,\delta ) {\leqq }\varepsilon A^2 \end{aligned}$$
(6.5)

and

$$\begin{aligned} \sup _{R>0}\frac{1}{R}\int _{[R,2R]} J_3(z,\delta )\text {d}z {\leqq }\varepsilon A^2. \end{aligned}$$
(6.6)

Proof

We set \(\theta := 1/\delta >1\). In order to estimate the contribution due to the region \( \Sigma _1(\delta ) \cap \Omega _z\) we define

$$\begin{aligned} D(z,\theta ):= \{(x,y)\ |\ 0< x {\leqq }z,\ \max \{\theta x ,z/2\} {\leqq }y \}. \end{aligned}$$

First suppose that \(2\lambda +\gamma {\geqq }0 \). Using the upper bound for K given in (2.12) and the fact that \(\Omega _z \cap \Sigma _1(\delta ) \subset D(z,\theta )\) we obtain that

$$\begin{aligned} \iint _{\Omega _z \cap \Sigma _1(\delta ) }K\left( x,y\right) xf \left( \text {d}x\right) f\left( dy\right) {\leqq }2 c_2 \iint _{D(z,\theta ) } x^{1-\lambda }y^{\gamma +\lambda } f \left( \text {d}x\right) f\left( dy\right) .\ \end{aligned}$$

Assuming \(0<x\le z\) we denote \(a:=\max \{\theta x ,z/2\}\). Then we can employ item 3 of Lemma 2.10 and the upper bound (6.4) to estimate

$$\begin{aligned} \int _{[a,\infty )} y^{\gamma +\lambda } f\left( dy\right) \le A \frac{2^{|\gamma +\lambda |}}{\nu \ln 2} a^{-\nu }\,, \end{aligned}$$

where \(\nu :=\frac{1-|2\lambda +\gamma |}{2}=\frac{1-(2\lambda +\gamma )}{2}>0\). Therefore, by Fubini’s theorem, we can now conclude that

$$\begin{aligned} \iint _{\Omega _z \cap \Sigma _1(\delta )} K\left( x,y\right) x f \left( \text {d}x\right) f\left( dy\right) {\leqq }C A \int _{(0,z]} \max \{\theta x ,z/2\}^{-\nu } x^{1-\lambda } f \left( \text {d}x\right) \,. \end{aligned}$$
(6.7)

Denoting \(\varphi (x)=\max \{\theta x ,z/2\}^{-\nu } x^{1-\lambda } \), it follows from (6.4) that

$$\begin{aligned} \frac{1}{y}\int _{[y/2,y]} \varphi (x) f(\text {d}x) {\leqq }2^\nu \max \{\theta y ,z\}^{-\nu } 2^{|1-\lambda |} A y^{\nu -1}\,, \quad \text { for all } y>0. \end{aligned}$$

Thus by item 1 of Lemma 2.10, we find that for any \(a'\in (0,z]\),

$$\begin{aligned} \int _{[a',z]} \varphi (x) f \left( \text {d}x\right) \le \frac{ 2^{|1-\lambda |+\nu } }{\ln 2} A \int _{[a',z]} \max \{\theta y ,z\}^{-\nu } y^{\nu -1} dy + 2^{|1-\lambda |+\nu } A \max \{\theta ,1\}^{-\nu } \,. \end{aligned}$$

In the inequality above we have \(\max \{\theta ,1\}=\theta >1\). The limit \(a'\rightarrow 0\) of the right hand side is finite, and thus we can use the monotone convergence theorem to conclude that

$$\begin{aligned} \int _{(0,z]} \varphi (x) f \left( \text {d}x\right) \le 2^{|1-\lambda |+\nu } A \left( \frac{1}{\ln 2} \int _{0}^z \max \{\theta y ,z\}^{-\nu } y^{\nu -1} dy + \theta ^{-\nu }\right) \,. \end{aligned}$$
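The remaining integral can be computed by splitting it at \(y=z/\theta \), where the maximum switches between z and \(\theta y\):

$$\begin{aligned} \int _{0}^z \max \{\theta y ,z\}^{-\nu } y^{\nu -1} dy = z^{-\nu }\int _{0}^{z/\theta } y^{\nu -1} dy + \theta ^{-\nu }\int _{z/\theta }^{z} y^{-1} dy = \theta ^{-\nu }\left( \frac{1}{\nu } + \ln \theta \right) . \end{aligned}$$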

Inserting the result in (6.7) yields

$$\begin{aligned} \iint _{\Omega _z \cap \Sigma _1(\delta )} K\left( x,y\right) x f \left( \text {d}x\right) f\left( dy\right) {\leqq }C A^2 \left( 1+ \frac{1}{\nu } + \ln \theta \right) \theta ^{-\nu } \,. \end{aligned}$$

Since \(\nu >0\), the factor multiplying \(A^2\) converges to 0 as \(\theta \rightarrow \infty \), that is, as \(\delta \rightarrow 0\). Therefore, for any \(\varepsilon >0\) there is \(\delta _\varepsilon >0\) such that (6.5) holds for all \(0<\delta \le \delta _\varepsilon \).

In the case where \(\gamma +2\lambda <0\) we have \(\nu =\frac{1+2\lambda +\gamma }{2}>0\). The above steps can then be repeated simply by exchanging the exponents \(\gamma +\lambda \) and \(-\lambda \) therein. We find

$$\begin{aligned}&\iint _{\Omega _z \cap \Sigma _1(\delta ) }K\left( x,y\right) xf \left( \text {d}x\right) f\left( dy\right) {\leqq }2 c_2 \iint _{D(z,\theta ) } x^{1+\gamma +\lambda }y^{-\lambda } f \left( \text {d}x\right) f\left( dy\right) \\&\quad \le C A^2 \left( 1+ \frac{1}{\nu } + \ln \theta \right) \theta ^{-\nu }\,. \end{aligned}$$

Thus also in this case (6.5) holds for all sufficiently small \(\delta \).

To study the region \(\Sigma _3(\delta )\), we return to the case \(\gamma +2\lambda \ge 0\) and assume also \(\delta \le \frac{1}{4}\). Then we have that

$$\begin{aligned} \Omega _z \cap \Sigma _3(\delta ) \subset \{(x,y)\ |\ 0 < y {\leqq }\delta z,\ z-y {\leqq }x {\leqq }z \}\,. \end{aligned}$$

In particular, if \((x,y)\in \Omega _z \cap \Sigma _3(\delta )\), we have \(x\ge (1-\delta )z\ge (\delta ^{-1}-1)y> y\). We integrate (6.3) in z from R to 2R, and using (2.12) we obtain

$$\begin{aligned} I_3:= & {} \int _{[R,2R]}\iint _{\Omega _z \cap \Sigma _3(\delta ) } K\left( x,y\right) xf \left( \text {d}x\right) f\left( dy\right) \text {d}z \\&{\leqq }&2 c_2 \int _{[R,2R]} \int _{(0,\delta z]} \int _{[z-y,z]} x^{1+\gamma +\lambda }y^{-\lambda }f \left( \text {d}x\right) f\left( dy\right) \text {d}z . \end{aligned}$$

Notice that in the region of integration we have \( R/2 {\leqq }x {\leqq }z{\leqq }2R\) since \(\delta \le \frac{1}{2}\). Therefore, \((0,\delta z] \subset (0,2\delta R]\). Thus there exists a constant \(C>0\) independent of \(\delta \) and R such that

$$\begin{aligned} I_3&{\leqq }&C R^{1+\gamma +\lambda } \int _{[R,2R]} \int _{(0,2 \delta R]} \int _{[z-y,z]} y^{-\lambda }f \left( \text {d}x\right) f\left( dy\right) \text {d}z. \end{aligned}$$

Using now Fubini’s theorem, as well as the fact that \(\{(x,z)\ |\ z-y{\leqq }x {\leqq }z,\ R {\leqq }z {\leqq }2R \} \subset \{(x,z)\ |\ R/2 {\leqq }x {\leqq }2R, \ x{\leqq }z{\leqq }x+y \}\) if \(0<y\le R/2\), we obtain

$$\begin{aligned} I_3&{\leqq }&C R^{1+\gamma +\lambda }\int _{(0,2\delta R]} y^{-\lambda } f\left( dy\right) \int _{[R/2,2R]}f\left( \text {d}x\right) \int _{[x,x+y]} \text {d}z \\= & {} C R^{1+\gamma +\lambda }\int _{(0,2\delta R]} y^{1-\lambda }f\left( dy\right) \int _{[R/2,2R]} f\left( \text {d}x\right) . \end{aligned}$$

We estimate the integral with respect to the x-variable using the bound (6.4). The integral over y can be estimated using item 2 of Lemma 2.10 after a regularization \(a'\rightarrow 0\); the regularized integrals converge since \(|\gamma +2\lambda |<1\) implies \(\nu >0\). We then obtain

$$\begin{aligned} I_3 {\leqq }CA^2 R^{1+\gamma + \lambda } \delta ^\nu R^{\nu } R^{-\frac{\gamma +1}{2}}= C A^2 \delta ^\nu R, \end{aligned}$$

where \(\nu = \frac{1-|2\lambda +\gamma |}{2}>0\), as before, and C is an adjusted constant independent of R and \(\delta \). Thus the prefactor of \(A^2 R\) goes to zero as \(\delta \rightarrow 0\), and we may conclude that for any \(\varepsilon >0\) there is \(\delta _\varepsilon >0\) such that (6.6) holds for all \(0<\delta \le \delta _\varepsilon \). Taking the smallest of the cutoffs \(\delta _\varepsilon \) obtained for (6.5) and (6.6), we find a value such that both inequalities are valid whenever \(0<\delta \le \delta _\varepsilon \).

In the case where \(\gamma +2\lambda <0\), also (6.6) can be checked as in the first case, by exchanging the exponents \(\gamma +\lambda \) and \(-\lambda \) in the above. \(\square \)

In this Section we work under the assumptions of Theorem 2.3, stated next as Assumption 6.2, which guarantee the existence of a stationary injection solution f in the sense of Definition 2.1.

Assumption 6.2

Let K satisfy (2.9)–(2.12) and suppose \(| \gamma +2\lambda | <1.\) Let \(\eta \ne 0 \) satisfy (2.13). Let \(f \in {\mathcal {M}}_+({{\mathbb {R}}}_*)\), \(f\ne 0\) be a stationary injection solution to (1.3) in the sense of Definition 2.1 with \(f((0,a))=0\), for some \(a>0\) (cf. Remark 2.2).

Under Assumption 6.2 we obtain, from Lemma 2.8, that f satisfies:

$$\begin{aligned} \iint _{\Omega _z }K\left( x,y\right) xf \left( \text {d}x\right) f\left( dy\right) = \int _{(0,z]}x\eta \left( \text {d}x\right) \ \text {for }z > 0. \end{aligned}$$
(6.8)

Notice that Lemma 6.1 shows that the contributions of the regions \(\Sigma _j(\delta ) \cap \Omega _z\) with \(j=1,3\) to the fluxes defined in (2.5) are small for \(\delta \) sufficiently small. Hence the flux of particles in the size space is dominated by the region \(\Sigma _2(\delta ) \cap \Omega _z\), that is, by the collisions between particles of comparable size.

Proposition 6.3

Suppose that Assumption 6.2 holds. Let J be the constant \(J= \int _{(0,L_\eta ]} x \eta (\text {d}x)\). Then

$$\begin{aligned} \frac{1}{z}\int _{[z/2,z]} f(\text {d}x) {\leqq }\frac{ C_1\sqrt{J} }{z^{(\gamma + 3)/2}}\ \ \text { for all } z>0. \end{aligned}$$
(6.9)

Moreover, there exists a constant b, with \(0<b<1\) and depending on \(\gamma ,\ \lambda \) and on the constants \(c_1,\ c_2\) in (2.11)-(2.12) such that

$$\begin{aligned} \frac{1}{z}\int _{ (bz,z]} f(\text {d}x) {\geqq }\frac{ C_2\sqrt{J} }{z^{(\gamma + 3)/2}}\ \ \text { for all } z{\geqq }\frac{L_\eta }{\sqrt{b}} \ . \end{aligned}$$
(6.10)

The constants \(C_1, \ C_2\) that appear in (6.9) and (6.10) depend on \(\gamma ,\ \lambda \) and on the constants \(c_1,\ c_2\) in (2.11)-(2.12).

Proof

Using Lemma 2.8 we obtain that (6.8) holds. We first prove the upper bound (6.9). Using that \([2z/3,z]^2 \subset \Omega _z\), where \(\Omega _z\) is as in (6.1), we obtain

$$\begin{aligned} \iint _{\left[ 2z/3,z\right] ^2} K\left( x,y\right) xf \left( \text {d}x\right) f\left( dy\right) {\leqq }J. \end{aligned}$$

Using the lower bound (2.11) for K and the fact that x and y are of the same order as z in the domain of integration, we obtain

$$\begin{aligned} z^{\gamma +1} \left( \int _{\left[ 2z/3,z\right] } f \left( \text {d}x\right) \right) ^2 {\leqq }C^2 J \end{aligned}$$

for some positive constant C which depends only on K. Equivalently,

$$\begin{aligned} \frac{1}{z}\int _{\left[ 2z/3,z\right] } f \left( \text {d}x\right) {\leqq }\frac{ C \sqrt{J}}{z^{(\gamma +3)/2}},\ \text {for }z\in (0,\infty ), \end{aligned}$$
(6.11)

which proves the upper estimate using that \([z/2,z]\subset [4z/9,2z/3] \cup [2z/3,z] \).

Using \(J= \int _{(0,L_\eta ]} x \eta (\text {d}x)\) in (6.8) as well as the definition of \(J_j\) in (6.1)–(6.3) we obtain

$$\begin{aligned} J = \sum _{j=1}^3 J_j(z,\delta ),\ \ z{\geqq }L_\eta . \end{aligned}$$
(6.12)

Integrating (6.12) with respect to z in [R, 2R], using the upper estimate (6.9) as well as Lemma 6.1 applied with \(A:=C \sqrt{J}\), where C is the constant in (6.9), we obtain that for \(\delta >0\) sufficiently small, depending only on \(\gamma ,\ \lambda \) and on the constants \(c_1,\ c_2\) in (2.11)–(2.12), and for \(\varepsilon {\leqq }\frac{1}{4C^2}\), the following chain of inequalities holds:

$$\begin{aligned} \frac{JR}{2}&{\leqq }&\left( J-2\varepsilon A^2\right) R {\leqq }\int _{[R,2R]} J_2(z,\delta ) \text {d}z \\&{\leqq }&\int _{[R,2R]}\iint _{\Omega _z \cap \Sigma _2(\delta )} K\left( x,y\right) xf \left( \text {d}x\right) f\left( dy\right) \text {d}z, \quad R {\geqq }L_\eta \ . \end{aligned}$$

A simple geometrical argument shows that there exists a constant b, \(0<b<1\), depending only on \(\delta \) (and therefore on \(\gamma ,\ \lambda ,\ c_1\) and \(c_2\)) such that \(\underset{z \in [R,2R]}{\bigcup } (\Omega _z \cap \Sigma _2(\delta )) \subset (\sqrt{b}R, R/\sqrt{b}]^2 \). (For a fixed z and \((x,y)\in \Omega _z \cap \Sigma _2(\delta )\) one finds \((\delta ^{-1}+1)^{-1} z<x,y\le \delta ^{-1} z\); thus for example \(b = \frac{\delta ^2}{4}\) would suffice.) Moreover, for every \((x,y) \in (\sqrt{b}R, R/\sqrt{b}]^2 \) we have \(x K(x,y) {\leqq }C R^{\gamma +1}\), with C depending only on \(\gamma ,\ \lambda ,\ c_1\) and \(c_2\). Then

$$\begin{aligned} \frac{JR}{2} {\leqq }C R R^{\gamma +1} \left( \int _{(\sqrt{b}R, R/\sqrt{b}]}f\left( \text {d}x\right) \right) ^2, \end{aligned}$$

whence \(1/R \int _{(\sqrt{b}R,R/{\sqrt{b}}]} f(\text {d}x) {\geqq }C \sqrt{J} R^{-(\gamma +3)/2}\) for \(R{\geqq }L_\eta \). Thus (6.10) follows after substituting \(R/\sqrt{b}\) by z. \(\square \)

In the next Corollary we obtain the moment estimates for a stationary injection solution, when it exists.

Corollary 6.4

Suppose that Assumption 6.2 holds. Then we have the following moment estimates:

  a) \( \int _{{\mathbb {R}}_{*}}x^{\mu }f\left( \text {d}x\right)< \infty \quad \text {for }\quad \mu < \frac{\gamma + 1}{2} \),

  b) \(\int _{{\mathbb {R}}_{*}}x^{\frac{\gamma +1}{2}}f\left( \text {d}x\right) = \infty \).

Proof

a) The boundedness of moments of order \(\mu \) for \(\mu < \frac{\gamma +1}{2}\) has already been obtained in the proof of Theorem 2.3 in Section 3 equation (3.61). Notice also that a) is an easy consequence of (6.9) and Lemma 2.10.

b) Using the lower bound (6.10) and multiplying by \(z^{(\gamma +3)/2}\) we obtain

$$\begin{aligned} C_2\sqrt{J} {\leqq }z^{(\gamma +1)/2} \int _{(bz,z]}f(\text {d}x) {\leqq }C \int _{(bz,z]}x^{(\gamma +1)/2} f(\text {d}x),\quad z {\geqq }\frac{L_\eta }{\sqrt{b}} \end{aligned}$$

for some constant \(C>0\). In particular, for any natural number n satisfying \(b^{-n}\ge L_\eta / \sqrt{b}\) and for \(z=b^{-n}\) we have that \(C_2\sqrt{J} {\leqq }C \int _{(b^{1-n},b^{-n}]}x^{(\gamma +1)/2} f(\text {d}x)\). Summing in n we finally obtain the result b). \(\square \)

Remark 6.5

Notice that for \(\gamma >1\) Corollary 6.4 a) implies that the first moment \(\int _{{{\mathbb {R}}}_*} xf(\text {d}x)\) is finite. Therefore the stationary injection solutions can be interpreted in this case as solutions having a finite number of monomers for which the source of monomers \(\eta (x)\) is balanced with the flux of monomers towards infinity. This is closely related to the phenomenon of gelation, which takes place for \(\gamma >1\), in which it is possible to have solutions with a finite number of monomers having a flux of monomers towards infinity. Notice that for \(\gamma <1\), by Corollary 6.4 b), \(\int _{{{\mathbb {R}}}_*} xf(\text {d}x)\) is infinite. We further observe that the existence or non-existence of stationary injection solutions is independent of whether the corresponding kernels yield mass conservation or gelation.

Remark 6.6

We observe that for \(\gamma >-1\), Corollary 6.4 implies that the number of clusters \(\int _{{{\mathbb {R}}}_*} f(\text {d}x)\) associated with a stationary injection solution is finite, and Proposition 6.3 together with Lemma 2.10 yields the integral estimates

$$\begin{aligned} \frac{ C_1\sqrt{J}}{z^{(\gamma + 1)/2}} {\leqq }\int _{[z,\infty )} f(\text {d}x) {\leqq }\frac{ C_2\sqrt{J}}{z^{(\gamma + 1)/2}} \quad \text {for } z {\geqq }L_{\eta }, \end{aligned}$$

where \(J= \int _{(0,L_\eta ]} x \eta (\text {d}x)\) and \(0< C_1 {\leqq }C_2 \).
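For instance, the upper estimate follows by covering \([z,\infty )\) with the dyadic blocks \([2^{k}z,2^{k+1}z]\) and applying (6.9) to each of them (we write C for the constant in (6.9)):

$$\begin{aligned} \int _{[z,\infty )} f(\text {d}x) {\leqq }\sum _{k=0}^{\infty }\int _{[2^{k}z,2^{k+1}z]} f(\text {d}x) {\leqq }C\sqrt{J}\sum _{k=0}^{\infty }\left( 2^{k+1}z\right) ^{-(\gamma +1)/2} = \frac{C\sqrt{J}}{\left( 2^{(\gamma +1)/2}-1\right) z^{(\gamma +1)/2}}\,, \end{aligned}$$

where the geometric series converges because \(\gamma >-1\). The lower estimate follows from (6.10) applied at \(z'=z/b\), since \((bz',z'] = (z,z/b]\subset [z,\infty )\).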

Remark 6.7

The result in Corollary 6.4 has been obtained in [12] in the case of bounded kernels.

The next Corollary contains the estimates for a stationary injection solution in the discrete case.

Corollary 6.8

Assume that \(K:{{\mathbb {N}}}^{2}\rightarrow {{\mathbb {R}}}_{+}\) satisfies (5.3)- (5.5) and \(| \gamma +2\lambda | <1.\) Let \(s \ne 0 \) satisfy (5.2). Let \((n_\alpha )_{\alpha =1}^{\infty }\) be a stationary injection solution to (5.1) in the sense of Definition 5.1. Then

$$\begin{aligned} \frac{ C_1 \sqrt{J}}{z^{(\gamma + 3)/2}} {\leqq }\frac{1}{z}\sum _{\alpha \in {{\mathbb {N}}}\cap [z/2,z]} n_\alpha {\leqq }\frac{ C_2\sqrt{J}}{z^{(\gamma + 3)/2}}\ \ \text { for all } z{\geqq }L_{s}, \end{aligned}$$
(6.13)

where \(J=\sum _\alpha \alpha s_\alpha \) and the constants \(0< C_1 {\leqq }C_2 \) are independent of s.

Proof

The results follow directly from Lemma 5.7 and Proposition 6.3. \(\square \)
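The estimate (6.13) can be observed numerically in the simplest solvable case. For the constant kernel \(K_{\alpha ,\beta }\equiv 2\) (so \(\gamma =\lambda =0\)) and a pure monomer source \(s_1=1\), summing the stationary equations over \(\alpha \) gives \(\sum _\beta n_\beta =\sqrt{\sum _\alpha s_\alpha }=1\), and the stationary equation then determines the \(n_\alpha \) recursively. The following Python sketch (an illustration only, with an arbitrary truncation size) recovers the predicted decay exponent \(-(\gamma +3)/2=-\frac{3}{2}\):

```python
# Numerical illustration of (6.13) (a sketch only, not part of the proofs).
# Constant kernel K_{a,b} = 2 (gamma = lambda = 0), pure monomer source s_1 = 1.
# Summing the stationary equations gives N := sum_b n_b = sqrt(sum_a s_a) = 1,
# and then  2 * N * n_a = sum_{0<b<a} n_{a-b} n_b + s_a  determines n_a recursively.
import numpy as np

A_MAX = 4000                              # truncation of the cluster sizes (arbitrary)
s = np.zeros(A_MAX + 1)
s[1] = 1.0
N = np.sqrt(s.sum())                      # zeroth moment, exact for the constant kernel
n = np.zeros(A_MAX + 1)
for a in range(1, A_MAX + 1):
    gain = np.dot(n[1:a], n[a-1:0:-1])    # sum_{b=1}^{a-1} n_b * n_{a-b}
    n[a] = (gain + s[a]) / (2.0 * N)

alpha = np.arange(1, A_MAX + 1)
exponent = np.polyfit(np.log(alpha[100:]), np.log(n[101:]), 1)[0]
print("fitted decay exponent:", exponent)     # approximately -(gamma+3)/2 = -1.5
print("n_a * a^{3/2} at a =", A_MAX, ":", n[A_MAX] * A_MAX**1.5)
```

The fitted exponent is close to \(-\tfrac{3}{2}\), in agreement with (6.13).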

Finally, we show that the solutions to the continuous problem, when they exist, are absolutely continuous measures with densities in \(C^k((0,\infty ))\), provided that the source \(\eta \) and the kernel K are \(C^k\) functions and that the derivatives of K satisfy suitable growth conditions.

Lemma 6.9

Suppose that Assumption 6.2 holds with \(\eta \in L^\infty ((0,\infty ))\). Let \(L_0 {\geqq }\frac{1}{2}\) and \(0<\rho < \frac{1}{8}\). Assume that there exists \(A > 0\) such that

$$\begin{aligned} \int _{[x_0-r,x_0+r]}f(\text {d}x) {\leqq }Ar \end{aligned}$$
(6.14)

for all \(r{\leqq }\rho \) and for all \(x_0 \in [\frac{1}{4}, L_0]\). Then there exists a constant \(B>0\), depending on \(L_0,\ \eta \) and A but independent of r, such that

$$\begin{aligned} \int _{[x_0-r,x_0+r]}f(\text {d}x) {\leqq }B( A^2+\Vert \eta \Vert _{L^\infty }) r \end{aligned}$$
(6.15)

for any \(x_0 \in [\frac{1}{4},L_0+1]\) and \(r {\leqq }\rho /2\).

Proof

Using (2.15), we obtain for all \(\varphi \in C_c({{\mathbb {R}}}_*)\) that

$$\begin{aligned} \int _{{{\mathbb {R}}}_*} \varphi (x) \alpha (x) f(\text {d}x) = \int _{{{\mathbb {R}}}_*}\varphi \left( x\right) \eta \left( \text {d}x\right) + \frac{1}{2}\int _{{{\mathbb {R}}}_*}\int _{{{\mathbb {R}}}_*} K\left( x,y\right) \varphi \left( x+y\right) f\left( \text {d}x\right) f\left( dy\right) , \end{aligned}$$
(6.16)

where

$$\begin{aligned} \alpha (x) = \int _{{{\mathbb {R}}}_*} K(x,y) f(dy). \end{aligned}$$
(6.17)

The continuity and the lower estimate for the kernel K (cf. (2.11) and (2.9)) imply that \(\alpha (x) {\geqq }\alpha _{L_0}>0, \) for all \(x \in [\frac{1}{8},L_0+1]\). Using an approximation argument as in Lemma 2.8, we may use in (6.16) a test function \(\varphi (x) = \chi _{[x_0-r, x_0+r]}(x)\). Using the boundedness of \(\eta \) we obtain

$$\begin{aligned}&\int _{[x_0-r, x_0+r]}f(\text {d}x) \nonumber \\&\quad {\leqq }\frac{1}{\alpha _{L_0}} \left( 2 \Vert \eta \Vert _{L^\infty } r + \frac{1}{2}\int _{{{\mathbb {R}}}_*}\int _{{{\mathbb {R}}}_*} K\left( x,y\right) \chi _{[x_0-r, x_0+r]}(x+y) f\left( \text {d}x\right) f\left( dy\right) \right) .\nonumber \\ \end{aligned}$$
(6.18)

We now use a geometrical argument to show that for every \(x_0 \in [\frac{1}{4}, L_0+1] \) and \(r < \frac{\rho }{2} \) there exists a set \(\{ \xi _\ell \}_{\ell \in { I}} \subset {{\mathbb {R}}}_+\) such that \(\# {I} {\leqq }\frac{L_0+1}{r}\) and

$$\begin{aligned} \{(x,y) \ |\ |x+y- x_0| {\leqq }r \} \subset \bigcup _{\ell \in {I}} Q_\ell , \end{aligned}$$

with \(Q_\ell =[\xi _\ell - 2r,\xi _\ell +2r] \times [x_0-\xi _\ell - 2r,x_0- \xi _\ell +2r]\) and \(\xi _\ell {\leqq }x_0 \) for all \(\ell \in {I}\).

This can be seen by locating points \(\left\{ \left( \xi _{\ell },x_{0}-\xi _{\ell }\right) \right\} _{\ell \in I}\) along the segment \(\{ \left( x,y\right) :x+y=x_{0},\ x{\geqq }0,\ y{\geqq }0\} \) such that \(\mathrm{dist}\left( \xi _{\ell },\left\{ \xi _{j}\right\} _{j\in I}\backslash \left\{ \xi _{\ell }\right\} \right) =r.\) Then the union of the cubes \(Q_{\ell }\) covers the strip \(\left\{ \left( x,y\right) :\left| x+y-x_{0}\right| {\leqq }r,\ x{\geqq }0,\ y{\geqq }0\right\} .\) Using the boundedness of K for \(x{\geqq }1,\ y {\geqq }1\) and \(x+y {\leqq }L_0 + 1 + \frac{\rho }{2}\), as well as the assumption \(f((0,1))=0\), we obtain

$$\begin{aligned} \int _{[x_0-r, x_0+r]}f(\text {d}x) {\leqq }\frac{1}{\alpha _{L_0}} \left( 2 \Vert \eta \Vert _{L^\infty } r + C \sum _{\ell \in { I}} \iint _{Q_\ell } f\left( \text {d}x\right) f\left( dy\right) \right) , \end{aligned}$$

where C depends on K and \(L_0\). Using (6.14) it follows that \(\iint _{Q_\ell } f\left( \text {d}x\right) f\left( dy\right) {\leqq }4A^2 r^2 \). Then, since \(\# I {\leqq }\frac{L_0+1}{r}\), we get

$$\begin{aligned} \int _{[x_0-r, x_0+r]}f(\text {d}x) {\leqq }\frac{1}{\alpha _{L_0}} \left( 2 \Vert \eta \Vert _{L^\infty } r + 4 A^2 C (L_0+1)r \right) . \end{aligned}$$

Hence (6.15) follows. \(\square \)

Proposition 6.10

Suppose that Assumption 6.2 holds with \(\eta \in C((0,\infty ))\). Then \(f \in C((0,\infty ))\).

In addition, suppose that for some \(k{\geqq }1\) we have that \(\eta \in C^k((0,\infty ))\), \( K \in C^k((0,\infty )^2)\) and that for every \(P>1\) there exists a constant \(C_P\) such that

$$\begin{aligned} \left| \frac{\partial ^\ell K}{\partial x^\ell } (x,y)\right| {\leqq }C_P[y^{-\lambda } + y^{\gamma +\lambda }],\ \ \ \forall x \in [1,P],\ y\in (0,\infty ), \ 1 {\leqq }\ell {\leqq }k. \end{aligned}$$
(6.19)

Then \(f \in C^k((0,\infty ))\).

Proof

Suppose that \(\eta \in C((0,\infty ))\). Using that \(f((0,1))=0\) it follows that \(\int _{[x_0-r, x_0+r]}f(\text {d}x) =0\) for all \(x_0 \in [ 1/8, 1/2]\) and \(r {\leqq }\rho =1/8\). Given any \(M>1/8\), it then follows from Lemma 6.9 that \(\int _{[x_0-r, x_0+r]}f(\text {d}x) {\leqq }C_M r\) for any \(x_0 \in [ 1/8, M]\) and \(r {\leqq }\rho _M\), with \(\rho _M>0\) sufficiently small. Then, since every null set can be covered by a countable union of intervals of arbitrarily small total length, we have that f is absolutely continuous with respect to the Lebesgue measure. Thus \(f(\text {d}x)=f \text {d}x\) for some \(f\in L^1_{loc}({{\mathbb {R}}}_{+})\). Moreover, \(f(x_0)=\lim _{r\rightarrow 0} \frac{1}{2r} \int _{[x_0-r,x_0+r]}f(\text {d}x)\) for almost every \(x_0\in {{\mathbb {R}}}_ {+}\), whence \(f(x_0){\leqq }C_{M}\) for almost every \(x_0\in [1/8,M]\). Since M is arbitrary and \(f=0\) on (0, 1), this gives \(f \in L_{\text {loc}}^\infty ({{\mathbb {R}}}_+).\) Using also the weak formulation (6.16) it follows that

$$\begin{aligned} f(x)= & {} \frac{1}{\alpha (x)}[\eta (x)+\frac{1}{2} \int _0^x K(x-y,y) f(x-y)f(y)dy]\, \\= & {} \frac{1}{\alpha (x)}[\eta (x)+ \int _0^{x/2} K(x-y,y) f(x-y)f(y)dy]\,, \end{aligned}$$

with \(\alpha \) given in (6.17). Then the continuity \(f \in C((0,\infty ))\) can be obtained by induction, taking as starting point the fact that \(f(x)=0\) for \(0 {\leqq }x {\leqq }1\). The fact that \(f \in C^k((0,\infty ))\) when \(\eta \in C^k((0,\infty )) \) and (6.19) holds follows in a similar manner. \(\square \)
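The second expression for f above also suggests a simple numerical scheme: on a grid, f(x) can be computed by marching in x, since the integral only involves values of f at points strictly smaller than x. Below is a minimal Python sketch under simplifying assumptions which are not part of the statement above (constant kernel \(K\equiv 2\), source \(\eta ={\mathbf {1}}_{[1,2]}\), for which \(\alpha (x)=2\int f\,\text {d}y\) and, summing the equation, \(\int f\,\text {d}y=\sqrt{\int \eta \,\text {d}y }=1\); the grid parameters are arbitrary choices). The computed density exhibits the \(x^{-(\gamma +3)/2}\) decay of Proposition 6.3:

```python
# Sketch (illustration only): marching scheme for the stationary density, based on
#   f(x) = ( eta(x) + 2*int_0^{x/2} f(x-y) f(y) dy ) / alpha(x),   alpha(x) = 2*M0,
# with the constant kernel K = 2, source eta = 1_{[1,2]}, and M0 = sqrt(int eta) = 1.
import numpy as np

h, x_max = 0.02, 100.0                     # grid spacing and truncation (arbitrary)
x = np.arange(0.0, x_max, h)
eta = ((x >= 1.0) & (x <= 2.0)).astype(float)
M0 = 1.0                                   # zeroth moment, exact for this kernel/source
f = np.zeros_like(x)
for i in range(1, len(x)):
    jmax = i // 2                          # y = h, 2h, ..., approx x_i/2
    conv = h * np.dot(f[i-1:i-jmax-1:-1], f[1:jmax+1]) if jmax >= 1 else 0.0
    f[i] = (eta[i] + 2.0 * conv) / (2.0 * M0)

mask = x > 30.0                            # fit the tail exponent, expected about -3/2
print("fitted exponent:", np.polyfit(np.log(x[mask]), np.log(f[mask]), 1)[0])
```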

7 Convergence of Discrete to Continuous Model

We start by defining the notion of a constant flux solution (cf. Section 2.1).

Definition 7.1

Assume that \(K:{{\mathbb {R}}}_{*}^{2}\rightarrow {{\mathbb {R}} }_{+}\) is a continuous function satisfying (2.10) and (2.12). We will say that \(f\in {\mathcal {M}}_+\left( 0,\infty \right) \) is a constant flux solution of (1.3) with \(\eta \equiv 0\) if the following identity holds for some constant \(J{\geqq }0\) and for any \(z>0\):

$$\begin{aligned} \int _{(0,z]}\int _{(z-y, \infty )}y K\left( x,y\right) f\left( \text {d}x\right) f\left( dy\right) =J. \end{aligned}$$
(7.1)

Remark 7.2

Note that in Definition 7.1 we use measures \(f\in {\mathcal {M}}_+\left( 0,\infty \right) \), so that the measure of an interval of the form (0, a) may be infinite for any \(a>0\). Notice in particular that for a measure \(f\in {\mathcal {M}}_+\left( 0,\infty \right) \) the left hand side of (7.1) could be infinite.

Our goal is to prove that for a large class of kernels \(K_{\alpha ,\beta }\) satisfying (5.3)-(5.5), the stationary injection solutions to the discrete problem (5.1) can be approximated for large cluster sizes by constant flux solutions of the continuous problem (1.3) in the sense of Definition 7.1. Since we proved in Theorems 5.2-5.3 that stationary injection solutions to (5.1) exist if and only if \(|\gamma +2\lambda |<1\), we will assume this condition in the rest of this Section. To this end, for each \(R>0\) we construct stationary injection solutions \(f_R\) to (1.3) with some suitable kernel \(K_R\) and \(\eta _R\) satisfying \( {\mathrm{supp}} \eta _R \subseteq [1/R, L_\eta /R]\) (cf. Remark 2.2).

Let \(K_{\alpha , \beta }\) satisfy (5.3)-(5.5) with \(\left| \gamma +2\lambda \right| <1\) and s satisfy (5.2). Let \((n_\alpha )_{\alpha \in {{\mathbb {N}}}}\) be a discrete stationary injection solution to (5.1) in the sense of Definition 5.1. For each \(R>0\), we define the measure \(f_R \in {\mathcal {M}}({{\mathbb {R}}}_*) \) by

$$\begin{aligned} f_R(\text {d}x) = R^{(1+\gamma )/2}\sum _{\alpha =1}^\infty n_{\alpha } \delta _{\alpha /R}(\text {d}x) , \end{aligned}$$
(7.2)

and the continuous kernel \(K_R: ({{\mathbb {R}}}_*)^2 \rightarrow {{\mathbb {R}}}_+\) by

$$\begin{aligned} K_R(x,y) = R^{-\gamma } \sum _{\alpha ,\beta =1}^\infty K_{\alpha ,\beta } \zeta _\varepsilon (Rx-\alpha ) \zeta _\varepsilon (Ry-\beta ) {+ c_1 \left( \zeta _\varepsilon (Rx) + \zeta _\varepsilon (Ry)\right) w(x,y)\,,} \end{aligned}$$
(7.3)

where w denotes the weight function in (1.5), \(\varepsilon <1/2\), and \(\zeta _\varepsilon \) is a continuous nonnegative function satisfying \(\zeta _\varepsilon (x)= 0\) for \(|x| {\geqq }1/2+\varepsilon \), \(\zeta _\varepsilon (x)= 1\) for \(|x| {\leqq }1/2-\varepsilon \), and affine on each of the intervals \((1/2-\varepsilon ,1/2+\varepsilon )\) and \((-1/2-\varepsilon ,-1/2+\varepsilon )\). Moreover, we define the source \(\eta _R \in {\mathcal {M}}({{\mathbb {R}}}_*) \) with \({\mathrm{supp}} \eta _R \subseteq [1/R,L_\eta /R]\) by

$$\begin{aligned} \eta _R(\text {d}x) = R\sum _{\alpha =1}^\infty s_{\alpha } \delta _{\alpha /R}(\text {d}x). \end{aligned}$$
(7.4)

Lemma 7.3

The kernel \(K_R\) satisfies (2.9)-(2.12) with \(\left| \gamma +2\lambda \right| <1\), uniformly in R. The measure \(f_R\) defined as in (7.2) is a stationary injection solution to (1.3) in the sense of Definition 2.1 (cf. Remark 2.2) satisfying (2.14) and (2.15) with the kernel \(K_R\) and the source \(\eta _R\) given by (7.3) and (7.4) respectively.

Proof

We first notice that the function \(K_R\) is continuous, non-negative and symmetric as it is written as a sum of functions with the same properties. Next we will show that \(K_R\) satisfies the growth bounds (2.11)-(2.12) with the same exponents \(\gamma , \lambda \) as the discrete kernel \(K_{\alpha ,\beta }\). In particular these exponents satisfy \(\left| \gamma +2\lambda \right| <1\). If \(Rx <\frac{1}{2}\) or \(Ry < \frac{1}{2}\), the second term in (7.3) is proportional to \(w(x,y)\) and thus it provides a suitable lower bound. The upper bound also holds after possibly adjusting \(c_2\) in (5.5). Hence, we may assume \(Rx,Ry {\geqq }\frac{1}{2}\) in the following. For each \(\alpha ,\beta \in {{\mathbb {N}}}\), we have from (7.3) that \(K_R(\alpha /R,\beta /R)=R^{-\gamma }K_{\alpha ,\beta }\). Therefore \(K_R(x,y)\) satisfies (2.11)-(2.12) for \(x=\alpha /R\) and \(y=\beta /R\) uniformly in R due to the assumptions (5.4)-(5.5) on \(K_{\alpha ,\beta }\). For \(x\in [(\alpha -1/2)/R, (\alpha +1/2)/R]\), \(y\in [(\beta -1/2)/R, (\beta +1/2)/R]\) we have that \(\frac{1}{2} R^{-\gamma }K_{\alpha ,\beta } {\leqq }K_R(x,y) {\leqq }R^{-\gamma } \sum _{i,j=-1,0,1 } K_{\alpha +i,\beta +j} + c_1 \left( \zeta _\varepsilon (Rx) + \zeta _\varepsilon (Ry)\right) w(x,y)\), where we set \(K_{0,j}=K_{j,0}=0,\) for \(j\in {{\mathbb {N}}}\). Together with the bounds (5.4)-(5.5) and the monotonicity properties of w, this implies that there exist positive constants \(c_1\) and \(c_2\), independent of R, such that \(K_R(x,y)\) satisfies (2.11)-(2.12), which concludes the first part of the Lemma.

Next we substitute the expressions for \(f_R\), \(K_R\) and \(\eta _R\) in the weak formulation (2.15) and perform a change of variables \(\xi = Rx\) and \(\theta = Ry\). We then obtain an expression where all the terms are multiplied by the same factor R. Using then that \(\zeta _{\varepsilon }(0) = 1\) and \(\zeta _{\varepsilon }(m) = 0\) for every nonzero integer m, we obtain that the weak formulation of the continuous problem (2.15) reduces to the weak formulation of the discrete problem (5.7). \(\square \)
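For the reader’s convenience, the last step can be written out explicitly. Since \(f_R\) and \(\eta _R\) only charge the points \(\alpha /R\), where \(K_R(\alpha /R,\beta /R)=R^{-\gamma }K_{\alpha ,\beta }\) (the w-term in (7.3) vanishes at these points because \(\zeta _\varepsilon (\alpha )=0\) for \(\alpha \in {{\mathbb {N}}}\)), one finds, writing \(\varphi _\alpha :=\varphi (\alpha /R)\), that

$$\begin{aligned}&\frac{1}{2}\iint _{(0,\infty )^2} K_R\left( x,y\right) \left[ \varphi \left( x+y\right) -\varphi \left( x\right) -\varphi \left( y\right) \right] f_R\left( \text {d}x\right) f_R\left( \text {d}y\right) + \int _{(0,\infty )}\varphi \left( x\right) \eta _R\left( \text {d}x\right) \\&\quad = R\left( \frac{1}{2}\sum _{\alpha ,\beta }K_{\alpha ,\beta }n_{\alpha }n_{\beta }\left[ \varphi _{\alpha +\beta }-\varphi _{\alpha }-\varphi _{\beta }\right] + \sum _{\alpha }s_{\alpha }\varphi _{\alpha }\right) , \end{aligned}$$

because \(R^{-\gamma }\bigl ( R^{(1+\gamma )/2}\bigr )^2 = R\); the right hand side vanishes by (5.7).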

Theorem 7.4

Let \(K_{\alpha ,\beta }:{{\mathbb {N}}}^{2}\rightarrow {{\mathbb {R}}}_{+}\) be a kernel satisfying (5.3)-(5.5) with \(| \gamma +2\lambda | <1\) and let the sequence \(s=(s_{\alpha })_{\alpha \in {{\mathbb {N}}}}\) satisfy (5.2). Let \((n_\alpha )_\alpha \) be a solution of the stationary problem (5.1) in the sense of Definition 5.1. Let \(f_R, K_R\) and \(\eta _R\) be as in (7.2), (7.3) and (7.4), respectively. Assume that there exists \(K \in C(({{\mathbb {R}}}_*)^2)\) such that \(K_R \rightarrow K \) as \(R \rightarrow \infty \) uniformly on compact sets of \((0,\infty )^2\). Consider the family of stationary injection solutions defined above \((f_R)_{R>0}\). Then for any sequence \(( R_n)_{n\in {{\mathbb {N}}}}\) such that \(\lim _{n \rightarrow \infty } R_{n} = \infty \) there exists a subsequence \(( R_{n_k})_{k\in {{\mathbb {N}}}}\) and \(f \in {\mathcal {M}}(0,\infty )\) (that might depend on the subsequence) such that

$$\begin{aligned} \forall \varphi \in C_c(0,\infty ),\ \int _{{{\mathbb {R}}}_*} \varphi (x) f_{R_{n_k}}(\text {d}x) \rightarrow \int _{{{\mathbb {R}}}_*} \varphi (x) f(\text {d}x) \text { as } k \rightarrow \infty \end{aligned}$$
(7.5)

and f is a constant flux solution to (1.3) in the sense of Definition 7.1 with \(J = \sum _{\alpha =1}^\infty \alpha s_\alpha \).

Remark 7.5

Note that a priori we may expect that the only constant flux solutions in the sense of Definition 7.1 are power laws. We will see in [20] that there are homogeneous kernels K that satisfy the upper and lower bounds (2.11)–(2.12) for which this is not true. Therefore the limit measure f can be different for different subsequences \((f_{R_{n_k}})_k\) in (7.5).

Remark 7.6

The assumption \(K_R \rightarrow K \) as \(R \rightarrow \infty \) means that the discrete kernel \(K_{\alpha ,\beta }\) behaves like the continuous kernel K for large values of \(\alpha ,\ \beta \). For instance, in the case of the kernel \(K_{\alpha ,\beta }=\frac{\alpha ^{\gamma +\lambda }}{\beta ^{\lambda }} +\frac{\beta ^{\gamma +\lambda }}{\alpha ^{\lambda }},\) the function \(K_{R}\) defined by means of (7.3) converges to \(K\left( x,y\right) =\frac{x^{\gamma +\lambda }}{y^{\lambda }}+\frac{y^{\gamma +\lambda }}{x^{\lambda }}\) as \(R\rightarrow \infty .\) A large class of kernels \(K_{\alpha ,\beta }\) for which the convergence \(K_{R}\rightarrow K\) as \(R\rightarrow \infty \) takes place can be obtained by restricting a continuous homogeneous kernel \(K=K\left( x,y\right) \) to integer values, i.e., \(K_{\alpha ,\beta }=K\left( \alpha ,\beta \right) \) for \(\alpha ,\beta \in {\mathbb {N}}\).
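This convergence can also be checked numerically. The following Python sketch (an illustration only; the exponents, the value of \(\varepsilon \) and the evaluation grid are arbitrary choices) builds \(K_R\) from (7.3) for the kernel of the previous paragraph and measures its distance to K on a compact set away from the origin; the relative error decreases as R grows.

```python
# Sketch (illustration only): the interpolated kernel K_R of (7.3) for the kernel
# of Remark 7.6, K_{a,b} = a^{g+l} b^{-l} + b^{g+l} a^{-l}, and a numerical check
# that K_R -> K(x,y) = x^{g+l} y^{-l} + y^{g+l} x^{-l} as R grows.
import numpy as np

g, l, eps = 0.2, 0.1, 0.25            # gamma, lambda and the parameter of zeta_eps

def zeta(u):
    """1 for |u| <= 1/2 - eps, 0 for |u| >= 1/2 + eps, affine in between."""
    return np.clip((0.5 + eps - np.abs(u)) / (2.0 * eps), 0.0, 1.0)

def K_disc(a, b):
    return a**(g + l) * b**(-l) + b**(g + l) * a**(-l)

def K_R(x, y, R):
    # Only lattice points with |R*x - a| < 1/2 + eps contribute to the sum in (7.3);
    # the w-term vanishes here since R*x, R*y >= 1 > 1/2 + eps.
    a_rng = range(max(1, int(R * x) - 1), int(R * x) + 3)
    b_rng = range(max(1, int(R * y) - 1), int(R * y) + 3)
    total = sum(K_disc(a, b) * zeta(R * x - a) * zeta(R * y - b)
                for a in a_rng for b in b_rng)
    return R**(-g) * total

K_cont = lambda x, y: x**(g + l) * y**(-l) + y**(g + l) * x**(-l)
pts = [(x, y) for x in np.linspace(0.5, 5.0, 10) for y in np.linspace(0.5, 5.0, 10)]
for R in (10, 100, 1000):
    err = max(abs(K_R(x, y, R) - K_cont(x, y)) / K_cont(x, y) for x, y in pts)
    print(f"R = {R:5d}   max relative error on [0.5,5]^2: {err:.3e}")
```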

Proof of Theorem 7.4

Using the expression (7.2) for \(f_R\) and the upper estimate in Corollary 6.8 we obtain

$$\begin{aligned} \frac{1}{z}\int _{[z/2,z]} f_R(\text {d}x) = \frac{R^{(\gamma +3)/2}}{Rz}\sum _{\alpha \in [R z/2,Rz]} n_\alpha \ {\leqq }\ \frac{C \sqrt{J}}{z^{(\gamma +3)/2}},\ z>0 \end{aligned}$$
(7.6)

for some positive constant C independent of R. Note that this estimate is valid for \(Rz {\geqq }1\); for \(0<Rz< 1\) it holds trivially because the sum is empty. Therefore \(\{ {f_R}_{|_K} \}_{R>0}\) is precompact in \({\mathcal {M}}_+(K)\) for any \(K \subset (0,\infty )\) compact, where \(|_K\) denotes the restriction to K. Given the sequence of compact sets \(I_n = [2^{-n},2^n] \subset (0,\infty )\), we then obtain, using a diagonal sequence argument, that there is a subsequence of measures \(( f_{R_{n_k}})_{k\in {{\mathbb {N}}}}\) and a measure \(f\in {\mathcal {M}}_+(0,\infty )\) such that (7.5) holds. Moreover,

$$\begin{aligned} \frac{1}{z}\int _{[z/2,z]} f(\text {d}x) {\leqq }\ \frac{C \sqrt{J}}{z^{(\gamma +3)/2}},\ z>0. \end{aligned}$$
(7.7)

Now we prove that f is a constant flux solution in the sense of Definition 7.1. For any test function \(\varphi \in C_c(0,\infty )\), since \(f_R\) is a stationary injection solution, we have from Lemma 2.8 that \(f_R\) satisfies

$$\begin{aligned} \int _{(L_\eta /R,\infty )}\text {d}z \varphi (z)\int _{ (0,z]}\int _{(z-x,\infty )} K_R(x,y)x f_R(\text {d}x) f_R(dy) = J \int _{(L_\eta /R,\infty )}\text {d}z \varphi (z), \end{aligned}$$
(7.8)

where \(J=\int _{(0,\infty )} x \eta _R(\text {d}x) = \sum _{\alpha =1}^\infty \alpha s_\alpha >0\) is independent of R. We now rewrite (7.8) using the domain of integration \(\Omega _z\) defined in (6.1) as well as the domains \(\Sigma _1(\delta ),\ \Sigma _2(\delta )\) and \(\Sigma _3(\delta )\) for \(\delta >0\) defined in (6.2). We use also the partial fluxes \(J_j,\ j=1,2,3\) defined in (6.3). In order to make explicit the dependence of these fluxes on the kernel K and the measure f, we will write them as \(J_j(z,\delta ; K,f)\) in the rest of this proof. Therefore (7.8) becomes

$$\begin{aligned} \sum \limits _{j=1}^3\int _{(L_\eta /R,\infty )}\text {d}z \varphi (z) J_j(z,\delta ; K_R,f_R) = J \int _{ (L_\eta /R,\infty )} \text {d}z \varphi (z) \end{aligned}$$
(7.9)

for any \(\varphi \in C_c(0,\infty )\).

Let \(\varepsilon >0\) be arbitrarily small. Since the kernels \(K_R\), \(R{\geqq }1\), satisfy (2.9)-(2.12) with constants \(c_1,\ c_2\) in (2.11)-(2.12) independent of R (cf. Lemma 7.3), we can apply Lemma 6.1 combined with (7.6) to obtain

$$\begin{aligned} \left| \sum _{j \in \{1,3\}} \int _{(L_\eta /R,\infty )} \text {d}z \varphi (z) J_j(z,\delta ;K_R,f_R) \right| {\leqq }C\varepsilon J \Vert \varphi \Vert _{L^\infty (0,\infty )}, \end{aligned}$$
(7.10)

where \(C>0\) is independent of R.

For every compact set \(K\subset (0,\infty )\) we have that \(\underset{z \in K}{\bigcup } (\Sigma _2(\delta ) \cap \Omega _z)\) is contained in a compact subset of \((0,\infty )^2\). Then, using (7.5), the uniform convergence \(K_{R}\rightarrow K\) on compact sets of \((0,\infty )^2\), and the fact that \(\varphi \) is compactly supported, we obtain

$$\begin{aligned} \lim _{k\rightarrow \infty } \int _{(0,\infty )}\text {d}z \varphi (z) J_2(z,\delta ; K_{R_{n_k}},f_{R_{n_k}}) = \int _{(0,\infty )}\text {d}z \varphi (z) J_2(z,\delta ;K,f) \end{aligned}$$

for any test function \(\varphi \in C_c(0,\infty )\). Then using (7.9)-(7.10) we arrive at

$$\begin{aligned} \left| \int _{(0,\infty )}\text {d}z \varphi (z) J_2(z,\delta ;K,f) - J\int _{(0,\infty )}\text {d}z \varphi (z) \right| {\leqq }C\varepsilon J \Vert \varphi \Vert _{L^\infty (0,\infty )}. \end{aligned}$$

Using Lemma 6.1 and (7.7) again, we deduce that

$$\begin{aligned} \left| \sum _{j \in \{1,3\}} \int _{(0,\infty )} \text {d}z \varphi (z) J_j(z,\delta ;K,f) \right| {\leqq }C\varepsilon J \Vert \varphi \Vert _{L^\infty (0,\infty )}, \end{aligned}$$

whence

$$\begin{aligned}&\left| \int _{(0,\infty )}\text {d}z \varphi (z)\int _{ (0,z]}\int _{(z-x,\infty )} K(x,y)x f(\text {d}x) f(dy) - J \int _{(0,\infty )}\text {d}z \varphi (z)\right| \\&{\leqq }C\varepsilon J \Vert \varphi \Vert _{L^\infty (0,\infty )} \end{aligned}$$

for any \(\varphi \in C_c(0,\infty )\). Then, since \(\varepsilon \) is arbitrarily small and \(\varphi \) is an arbitrary test function, f is a constant flux solution in the sense of Definition 7.1 and the result follows. \(\square \)

Remark 7.7

We notice that, arguing as in the proof of Theorem 7.4, if \(K:{{\mathbb {R}}}_{*}^{2}\rightarrow {{\mathbb {R}}}_{+}\) satisfies (2.10)-(2.12) and \(f\in {\mathcal {M}}_+\left( 0,\infty \right) \) is a constant flux solution in the sense of Definition 7.1, then (7.7) holds; in particular,

$$\begin{aligned} \frac{1}{z}\int _{[z/2,z]} f(\text {d}x) {\leqq }\ \frac{C \sqrt{J}}{z^{(\gamma +3)/2}},\ z>0. \end{aligned}$$