1 Vlasov–Poisson Near a Point Charge

This article is devoted to the study of the time evolution and asymptotic behavior of a three dimensional gas of charged particles (a plasma) that interact with a point charge. Under suitable assumptions this system can be described via a measure M on \({\mathbb {R}}^3_{x}\times {\mathbb {R}}^3_v\) that is transported by the long-range electrostatic force field created by the charge distribution, resulting in the Vlasov–Poisson system

$$\begin{aligned} \begin{aligned} \partial _tM+\hbox {div}_{x,v}\left( M{\mathfrak {V}}\right) =0,\qquad {\mathfrak {V}}=v\cdot \nabla _x+\nabla _x\phi \cdot \nabla _v,\qquad \Delta _x\phi =\int _{{\mathbb {R}}^3_v}Mdv.\qquad \end{aligned} \end{aligned}$$
(1.1)

Since this equation is rotationally invariant, the Dirac mass \(M_{eq}=\delta =\delta _{(0,0)}(x,v)\) is a formal stationary solution and we propose to investigate its stability. We consider initial data of the form \(M=q_c\delta +q_g\mu ^2_0dxdv\), where \(q_c>0\) is the charge of the Dirac mass and \(q_g>0\) is the charge per particle of the gas, which results in purely repulsive interactions. We track the singular and the absolutely continuous parts of a solution as \(M(t)=q_c\delta _{({\bar{x}}(t),{\bar{v}}(t))}+q_g\mu ^2(t)dxdv\), which formally yields the coupled system

$$\begin{aligned} \begin{aligned} \left( \partial _t+v\cdot \nabla _x+\frac{q}{2}\frac{x-{\bar{x}}(t)}{\vert x-{\bar{x}}(t)\vert ^3}\cdot \nabla _v\right) \mu +\lambda \nabla _x\psi \cdot \nabla _v\mu&=0,\qquad \Delta _x\psi =\varrho =\int _{{\mathbb {R}}^3_v}\mu ^2dv,\\ \frac{d{\bar{x}}}{dt}={\bar{v}},\qquad \frac{d{\bar{v}}}{dt}={\overline{q}}\nabla _x\psi ({\bar{x}}) \end{aligned} \end{aligned}$$
(1.2)

where \(\lambda =q_g^2/(\epsilon _0m_g)>0\), \(q=q_cq_g/(2\pi \epsilon _0 m_g)>0\), \({\overline{q}}=q_cq_g/(\epsilon _0 m_c)>0\) are positive constants.

1.1 Main result

Our main result concerns (1.2) with radial initial data, where the point charge is located at the origin. In this case, the densities are considered with respect to the reference measure \(\delta (x\wedge v)\cdot dxdv\) instead of the Lebesgue measure dxdv. For sufficiently small initial charge distributions \(\mu \) we establish the existence and uniqueness of global, strong solutions and we describe their asymptotic behavior as a modified scattering dynamic. While our full result can be most adequately stated in more adapted “action-angle” variables (see Theorem 1.6 below), for the sake of readability we begin here by giving a (weaker, slightly informal) version in standard Cartesian coordinates:

Theorem 1.1

Given any radial initial data \(\mu _0\in C^1_c({\mathbb {R}}^*_+\times {\mathbb {R}})\), there exists \(\varepsilon ^*>0\) such that for any \(0<\varepsilon <\varepsilon ^*\), there exists a unique global strong solution of (1.2) with initial data

$$\begin{aligned} ({\bar{x}}(t=0),{\bar{v}}(t=0))=(0,0),\qquad \mu (x,v,t=0)=\varepsilon \mu _0(\vert x\vert ,\text {sign}(x\cdot v)\vert v\vert ). \end{aligned}$$

Moreover, the electric field decays pointwise and there exists an asymptotic profile \(\mu _\infty \in L^2({\mathbb {R}}^*_+\times {\mathbb {R}})\) and a Lagrangian map \((\mathcal {R},\mathcal {V}):{\mathbb {R}}_+^*\times {\mathbb {R}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}_+^*\times {\mathbb {R}}\) along which the density converges pointwise:

$$\begin{aligned} \begin{aligned} \mu (\mathcal {R}(r,\nu ,t),\mathcal {V}(r,\nu ,t),t)\rightarrow \mu _\infty (r,\nu ),\qquad t\rightarrow \infty . \end{aligned} \end{aligned}$$

Remark 1.2

  1. (1)

    Our main theorem is in fact much more precise and requires fewer assumptions, but is better stated in adapted “action angle” variables. We refer to Theorem 1.6.

  2. (2)

    The Lagrangian map can be written in terms of an asymptotic “electric field profile” \(\mathcal {E}_\infty \):

    $$\begin{aligned} \begin{aligned} \mathcal {R}(r,\nu ,t)&=t\sqrt{\nu ^2+\frac{q}{r}}-\frac{rq}{2(q+r\nu ^2)}\ln (t)+\lambda \mathcal {E}_\infty (\sqrt{\nu ^2+\frac{q}{r}})\ln (t)+O(1),\\ \mathcal {V}(r,\nu ,t)&=\sqrt{\nu ^2+\frac{q}{r}}-\frac{rq}{2(q+r\nu ^2)}\frac{1}{t}+O(\frac{\ln t}{t^2}). \end{aligned} \end{aligned}$$

    The first term corresponds to conservation of the energy along trajectories, the second term comes from a linear correction and the third term on the first line comes from a nonlinear correction to the position. This can be compared with the asymptotic behavior close to vacuum in [21, 31] by setting \(q=0\).
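    For orientation, setting \(q=0\) formally in these formulas removes the Kepler corrections and leaves

    $$\begin{aligned} \begin{aligned} \mathcal {R}(r,\nu ,t)=t\vert \nu \vert +\lambda \mathcal {E}_\infty (\vert \nu \vert )\ln (t)+O(1),\qquad \mathcal {V}(r,\nu ,t)=\vert \nu \vert +O\Big (\frac{\ln t}{t^2}\Big ), \end{aligned} \end{aligned}$$

    i.e. free transport corrected by a logarithmic shift of the position, which is the form of the modified scattering dynamic near vacuum.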

1.1.1 Prior work

In the absence of a point charge, the Vlasov–Poisson system has been extensively studied and we only refer to [2, 15, 25, 33, 35] for references on global wellposedness and dispersion analysis, to [8, 13, 21, 31] for more recent results describing the asymptotic behavior, to [14, 34] for book references and to [3] for a historical review.

The presence of a point charge introduces singular electric fields and significantly complicates the analysis. Nevertheless, global existence and uniqueness of strong solutions when the support of the density is separated from the point charge have been established in [27], see also [5] and references therein, while global existence of weak solutions for more general support was proved in [10] with subsequent improvements in [23, 24, 28]. We also refer to [9] where “Lagrangian solutions” are studied and to [6, 7] for works in the case of attractive interactions. Concentration, creation of a point charge and subsequent lack of uniqueness were studied in a related system for ions in 1d, see [26, 36]. To the best of our knowledge, there are no works concerning the asymptotic behavior of such solutions.

The existence and stability of other equilibria have been considered for the Vlasov–Poisson system with repulsive interactions, most notably in connection with Landau damping [1, 4, 12, 18, 30]. In the case of Vlasov–Poisson with attractive interactions, there are many more equilibria and their linear and nonlinear (in)stability have been studied [16, 17, 22, 29, 32], but the analysis of asymptotic stability is very challenging. We also refer to [20] which studies the stability of a Dirac mass in the context of the 2d Euler equation.

1.1.2 Our approach

In previous works on (1.2), the Lagrangian approach allows one to integrate the solutions along characteristics but faces the problem of a singular electric field, while a purely Eulerian method leads to poor control of the solutions, which makes it difficult to study the asymptotic behavior. In this paper, we introduce a different method based on the decomposition of the Hamiltonian to rewrite (1.2) as

$$\begin{aligned} \begin{aligned} \partial _t\mu +\{\mathcal {H}_{0}+\mathcal {H}_{pert},\mu \}=0, \end{aligned} \end{aligned}$$

where the linearized Hamiltonian \(\mathcal {H}_0\) is given in (2.1) and the nonlinear Hamiltonian \(\mathcal {H}_{pert}\) corresponds to the self-generated electrostatic potential (1.17). In short, our approach combines a Lagrangian analysis of the linearized problem with an Eulerian PDE framework in the nonlinear analysis, all the while respecting the symplectic structure. This amounts to considering solutions as superpositions of measures on each trajectory of the linearized flow instead of measures on the whole phase space.

On a technical level, one faces the two difficulties of a singular transport field and the nonlinearity separately: the singular electric field created by the point charge is present in the linearized equation coming from \(\mathcal {H}_0\), which is integrated exactly. The nonlinearity comes from the perturbed Hamiltonian \(\mathcal {H}_{pert}\), but this leads to a simple nonlinear equation, with a nonlinearity which is smoothing.

More precisely, in a first step we analyze the characteristic equations of the linear problem associated to (1.2). These turn out to be the classical ODEs of the Kepler problem, which can be integrated in adapted “action-angle” coordinates. In these, the geometry of the characteristic curves is straightened and the linear flow is solved explicitly as a linear map. To treat the nonlinear problem, we conjugate by the linear flow and study the resulting unknown in an Eulerian, \(L^2\) based PDE framework, based on energy estimates as in our recent work on the vacuum case [21]. This allows us to propagate the required regularity and moments to obtain a global strong solution. Moreover, we can readily identify the asymptotic dynamic: in a mixing type mechanism, the dependence on the “angles” is eliminated from the asymptotic electrostatic fields, and the scattering of solutions is modified by a field defined in terms of the “actions”.

We remark on some features and context of our techniques.

  1. (1)

    Since the system (1.2) is Hamiltonian and we solve the linearized system through a canonical change of unknown (i.e. a diffeomorphism respecting the symplectic structure), the nonlinear problem becomes quite simple after conjugation, see (1.16).

  2. (2)

    The moments we propagate are conserved by the linearized flow, unlike the physical moments in \(\langle r\rangle \), \(\langle v\rangle \). In fact, even in the nonlinear problem it is quite direct to globally propagate moments in action-angle variables, which already gives the existence of global weak solutions.

  3. (3)

    The asymptotic dynamic is easy to exhibit in action-angle variables through inspection of the formulas for the asymptotic electrostatic fields (see (1.18)).

  4. (4)

    It is notable that we do not require any separation between the point charge and the continuous distribution \(\mu \), addressing a question raised in [10, p. 376] (see also [27]).

  5. (5)

    We expect the methods presented here, based on integration of the linearized equation through “action-angle” coordinates, to be broadly applicable, both for local existence of rough solutions and especially for the analysis of long time behavior whenever the linearized equation corresponds to a completely integrable ODE without closed trajectories. This should include a large number of radial problems for plasmas since \(1+1\) Hamiltonian ODEs can be integrated by phase portrait.

  6. (6)

    The usefulness of action-angle variables for the Vlasov–Poisson equation was already exhibited in [11, 16, 19] where the authors produce a large class of 1d BGK-type waves which are linearly stable.

1.1.3 Remarks on the physical setup

Our primary interest here is the interaction of a gas of ions or electrons with a (similarly) charged particle, subject only to electrostatic forces. In this case, up to rescaling, we may assume that \(\lambda =1\) in (1.2).

Taking into account gravitational effects, we may also consider the more general case of a large charged and massive point particle with mass \(m_c\) and charge \(q_c\) interacting with a gas of small particles with mass-per-particle \(m_g\) and charge-per-particle \(q_g\) subject to both gravitational and electrostatic forces. In this case, our result holds whenever the principal gas-point particle interaction is repulsive, i.e. when (in appropriate physical units)

$$\begin{aligned} q_cq_g> m_cm_g, \end{aligned}$$
(1.3)

whereas (due to the small data assumption on the gas at initial time) the gas-gas interaction may be repulsive \((\lambda =1)\) or attractive \((\lambda =-1\)) depending on the sign of \((q_g)^2-(m_g)^2\).
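Concretely, in units where both the Coulomb and the gravitational constants are normalized to one, the radial force between two point particles with charges \(q_1,q_2\) and masses \(m_1,m_2\) is

$$\begin{aligned} F(r)=\frac{q_1q_2-m_1m_2}{r^2}, \end{aligned}$$

so that the point charge–gas interaction is repulsive exactly when (1.3) holds, while the gas–gas coupling carries the sign of \((q_g)^2-(m_g)^2\).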

The situation that is beyond the scope of our analysis is the case when the inequality in (1.3) is reversed and some trajectories of the linearized system are closed. Note that in this case, even the local existence theory is incomplete.

1.2 Overview and ideas of the proof

To clarify the passage to the radially symmetric setting, we denote here by \((x,p)\in {\mathbb {R}}^3\times {\mathbb {R}}^3\) the three-dimensional phase space variables. We note that in the particular case of radial initial conditions,

$$\begin{aligned} {\left\{ \begin{array}{ll} \mu ( {x},{ p},t=0)&{}=\mu _0(\left| x\right|,\hbox {sign}(x\cdot p)\left|p\right|)\\ ({\bar{x}},{\bar{p}})(t=0)&{}=(0,0), \end{array}\right. } \end{aligned}$$
(1.4)

the Dirac mass in (1.2) does not move (i.e. \({\bar{x}}(t)={\bar{p}}(t)=0\)) and the continuous particle distribution \(\mu (t)\) of the solution is a radial function. The equations (1.2) reduce to the following system for \(\mu ({x},{p},t)\):

$$\begin{aligned} \left( \partial _t\!+\!{ p}\cdot \nabla _{x}\!+\!\frac{q}{2}\frac{x}{\vert x\vert ^3}\cdot \nabla _{p}\right) \mu +\lambda \nabla _{x}\psi \cdot \nabla _{p}\mu =0,\qquad \Delta _{x}\psi =\varrho \!=\!\int _{{\mathbb {R}}^3_{p}}\mu ^2d{p}.\qquad \end{aligned}$$
(1.5)

By a slight abuse of notation we write \(r:=\left|x\right|\) and \(\varrho (r,t)=\varrho (x,t)\); by radiality, the electric field \(E:=-\nabla _{x}\psi \) of the ensemble can be computed as

$$\begin{aligned} E(x,t)=-\nabla _{x}\psi (x,t)=-\partial _r\psi (r,t)\frac{x}{r},\qquad \partial _r\psi (r,t)=\frac{1}{r^2}\int _{s=0}^r\varrho (s,t)s^2ds.\qquad \end{aligned}$$
(1.6)

1.2.1 The “radial” phase space

Since, as discussed, the equations (1.2) are invariant under rotations and we will work with spherically symmetric data, it is more convenient to work on the phase space \((r,v)\in {\mathbb {R}}_{+}^*\times {\mathbb {R}}\) (rather than \((x, p)\in {\mathbb {R}}^3\times {\mathbb {R}}^3\)). Note that \(r^2v^2drdv\) is the natural measure corresponding to that of radially symmetric functions on \({\mathbb {R}}^3\times {\mathbb {R}}^3\), and hence we will work with the new density

$$\begin{aligned} \varvec{\mu }(r,v,t):=rv\mu (r,v,t). \end{aligned}$$
(1.7)

This is chosen such that the (conserved) mass is the square of the \(L^2\) norm of both \(\varvec{\mu }\) on \({\mathbb {R}}_+^*\times {\mathbb {R}}\) and \(\mu \) on \({\mathbb {R}}^3\times {\mathbb {R}}^3\), i.e. we have

$$\begin{aligned} \iint _{r,v}\varvec{\mu }^2(r,v,t)\, drdv=\iint _{x, p}\mu ^2(x, p,t)\, dxdp=\iint _{x,p}\mu ^2_0(x,p)\,dxdp. \end{aligned}$$
(1.8)

Moreover, the equations for \(\varvec{\mu }\) simply read

$$\begin{aligned} \begin{aligned}&\left( \partial _t+v\partial _r+\frac{q}{2r^2}\partial _v\right) \varvec{\mu }=\lambda {\varvec{E}}\partial _v\varvec{\mu },\\&\qquad {\varvec{E}}(r,t):=-\partial _r\psi (r,t)=\frac{1}{r^2}\int _{s=0}^r\varvec{\varrho }(s,t)\,ds,\quad \varvec{\varrho }(s,t)=\int \varvec{\mu }^2(s,v,t)\,dv. \end{aligned} \end{aligned}$$
(1.9)

This equation is Hamiltonian and can be equivalently written as

$$\begin{aligned} \begin{aligned} 2\partial _t\varvec{\mu }+\{\mathcal {H},\varvec{\mu }\}=0,\qquad \mathcal {H}(r,v):=v^2+\frac{q}{r}-2\lambda \psi (r),\qquad \{f,g\}&:=\partial _vf\partial _rg-\partial _rf\partial _vg, \end{aligned} \end{aligned}$$
(1.10)

which leads to the conservation of energy

$$\begin{aligned} \begin{aligned} \mathbf{\mathcal {H}}_{total}(t)&:=\iint \mathcal {H}(r,v)\cdot \varvec{\mu }^2\,drdv=\iint \left( v^2+\frac{q}{ r}\right) \cdot \varvec{\mu }^2\,drdv+\lambda \int {\varvec{E}}^2\cdot r^2dr=\mathbf{\mathcal {H}}_{total}(0). \end{aligned} \end{aligned}$$

1.2.2 The linearized system

In order to study (1.9), we first consider the linearized equation for a function \(f(r,v,t)\):

$$\begin{aligned} \begin{aligned} \left( \partial _t+v\partial _r+\frac{q}{2r^2}\partial _v\right) f=0. \end{aligned} \end{aligned}$$
(1.11)

This linear transport equation can be solved directly via its characteristic equations

$$\begin{aligned} {\dot{r}}=v,\qquad {\dot{v}}=\frac{q}{2r^2}. \end{aligned}$$
(1.12)

One recognizes here the classical Kepler problem in the radial setting, which can be integrated using generalized “action-angle” coordinates \((\theta , a)\) (see also Fig. 1).

Fig. 1. The trajectories of solutions of (1.12), on the left in physical space (r, v) coordinates, on the right in action-angle coordinates \((\theta ,a)\) from Lemma 1.3. The large dots correspond to points that are uniformly spaced in \(\theta \) for a fixed a (right), and their image in (r, v) (left).

Lemma 1.3

There exists a canonical transformation to “action-angle” coordinates:

$$\begin{aligned} {\mathbb {R}}_+^*\times {\mathbb {R}}\ni (r,v)\mapsto (\Theta (r,v),\mathcal {A}(r,v))\in {\mathbb {R}}\times {\mathbb {R}}_+^*\end{aligned}$$
(1.13)

with inverse \((R(\theta ,a),V(\theta ,a))\), such that f solves (1.11) if and only if

$$\begin{aligned} g(\theta ,a,t):=f(R(\theta ,a),V(\theta ,a),t) \end{aligned}$$

solves the free streaming equation

$$\begin{aligned} \left( \partial _t+a\partial _\theta \right) g=0. \end{aligned}$$
(1.14)
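Explicitly, the free streaming equation (1.14) is solved by translation in the angle variable,

$$\begin{aligned} g(\theta ,a,t)=g(\theta -ta,a,0), \end{aligned}$$

and it is this linear flow that we conjugate by in (1.15) below.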

1.2.3 Nonlinear analysis

Since (1.14) can be solved directly, we conjugate by this change of variables to stabilize the linearized system, thus defining \(\gamma \) as follows:

$$\begin{aligned} \begin{aligned} \gamma (\theta ,a,t)&:=\varvec{\mu }(R(\theta +ta,a),V(\theta +ta,a),t),\\ \varvec{\mu }(r,v,t)&=\gamma (\Theta (r,v)-t\mathcal {A}(r,v),\mathcal {A}(r,v),t). \end{aligned} \end{aligned}$$
(1.15)

Since the change of variable preserves the symplectic structure, we find that the full nonlinear problem (1.9) is equivalent to

$$\begin{aligned} \partial _t\gamma =\lambda \{{\widetilde{\Psi }},\gamma \},\qquad \{f,g\}=\partial _af\partial _\theta g-\partial _\theta f\partial _ag. \end{aligned}$$
(1.16)

Here the potential can be expressed in action angle coordinates as follows:

$$\begin{aligned} {\widetilde{\Psi }}(\theta ,a,t):=\iint _{\vartheta ,\alpha }\frac{1}{\max \{R(\theta +ta,a),R(\vartheta +t\alpha ,\alpha )\}}\gamma ^2(\vartheta ,\alpha ,t)\, d\vartheta d\alpha . \end{aligned}$$
(1.17)

Remark 1.4

While the trajectories of the linear equation (2.2) are straight lines in action-angle variables, in physical variables they correspond to an incoming ray followed by an outgoing one traced at varying velocities.

For the nonlinear problem this creates extra challenges, as interactions can occur over vastly disparate spatial scales. As Fig. 2 below illustrates, in some regimes the evolution \(R(\theta +ta,a)\) is not a simple function of \((\theta ,a)\) from which one of the two variables can be recovered once the other is known (see also Lemma 2.5 below).

Fig. 2. Fronts \(R(ta,a)\) of particles in the nonlinear problem, starting at \(\theta =0\).

It remains to study solutions to (1.16). The dispersion mechanism is accounted for through the conjugation with the linearized flow and we hope to show that this picture remains true when we add the remaining nonlinear contribution, i.e. we expect that solutions to (1.16) do not change too much over time. We first use a bootstrap argument to propagate strong norms, which suffices to obtain global existence and decay of the electric field:

Proposition 1.5

There exists \(\varepsilon ^*\) such that for all \(0<\varepsilon _0\le \varepsilon _1\le \delta <\varepsilon ^*\), the following holds. Let \(\gamma \) be a solution to (1.16) with initial data \(\gamma _0\) on \(0\le t\le T\) and assume that for \(0\le t\le T\),

$$\begin{aligned} \begin{aligned} \Vert \left( a^{-20}+\theta ^{20}+a^{20}\right) \gamma _0\Vert _{L^2_{\theta ,a}}+\Vert (a+a^{-1})\partial _\theta \gamma _0\Vert _{L^2_{\theta ,a}}+\Vert a\partial _a\gamma _0\Vert _{L^2_{\theta ,a}}&\le \varepsilon _0,\\ \Vert \left( a^{-20}+\theta ^{20}+a^{20}\right) \gamma (t)\Vert _{L^2_{\theta ,a}}+\Vert (a+a^{-1})\partial _\theta \gamma (t)\Vert _{L^2_{\theta ,a}}+\Vert a\partial _a\gamma (t)\Vert _{L^2_{\theta ,a}}&\le \varepsilon _1\langle t\rangle ^{\delta },\\ \end{aligned} \end{aligned}$$

then in fact

$$\begin{aligned} \begin{aligned} \Vert \left( a^{-20}+a^{20}\right) \gamma \Vert _{L^2_{\theta ,a}}+\Vert (a+a^{-1})\partial _\theta \gamma \Vert _{L^2_{\theta ,a}}&\le \varepsilon _0+\varepsilon _1^\frac{3}{2},\\ \Vert \theta ^{20}\gamma \Vert _{L^2_{\theta ,a}}+\Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}&\le \varepsilon _0+\varepsilon _1^\frac{3}{2} t^{\delta }. \end{aligned} \end{aligned}$$

This in turn allows us to investigate the asymptotic behavior. It can easily be formally deduced once one observes that, given the bounds propagated by the bootstrap, on the support of the density, one has

$$\begin{aligned} \begin{aligned} \vert R(\theta +ta,a)-ta\vert =o(t), \end{aligned} \end{aligned}$$

so that one expects that \({\widetilde{\Psi }}\) is asymptotically independent of \(\theta \):

$$\begin{aligned} \begin{aligned} {\widetilde{\Psi }}(\theta ,a,t)&\simeq \frac{1}{t}\int _\alpha \frac{1}{\max \{a,\alpha \}}\mathcal {Z}(\alpha )\, d\alpha =\frac{1}{t}{ \Phi }(a),\qquad \mathcal {Z}(\alpha ):=\lim _{t\rightarrow \infty }\int \gamma ^2(\theta ,\alpha ,t)\, d\theta . \end{aligned} \end{aligned}$$

As a consequence, (1.16) becomes a perturbation of a shear equation:

$$\begin{aligned} \partial _t\gamma =\frac{\lambda }{t}\partial _a{ \Phi }\cdot \partial _\theta \gamma . \end{aligned}$$
(1.18)

which can easily be integrated. This can be made rigorous under appropriate assumptions on the initial data and it leads to our main result.
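For orientation, (1.18) can be integrated explicitly: a is conserved along its characteristics and \(\theta \) drifts logarithmically, so that formally

$$\begin{aligned} \begin{aligned} \gamma (\theta ,a,t)=\gamma \big (\theta +\lambda \ln (t/t_0)\,\partial _a{\Phi }(a),a,t_0\big ),\qquad \partial _a{ \Phi }(a)=-\frac{1}{a^2}\int _{\{\alpha \le a\}}\mathcal {Z}(\alpha )\,d\alpha , \end{aligned} \end{aligned}$$

which is the origin of the logarithmic correction by \(\lambda \ln t\cdot \mathcal {E}_\infty (a)\) in (1.20) below.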

Theorem 1.6

There exists \(\varepsilon _0>0\) such that any initial data \(\gamma _0\) satisfying

$$\begin{aligned} \Vert (\theta ^{20}+a^{20}+a^{-20})(\gamma _0,\partial _\theta \gamma _0,\partial _a\gamma _0)\Vert _{L^2_{\theta ,a}}\le \varepsilon \le \varepsilon _0 \end{aligned}$$
(1.19)

leads to a unique global solution \(\gamma (t)\in C^1_tL^2_{\theta ,a}\cap C^0_tH^1_{\theta ,a}\) of (1.16) with an electric field \((\partial _\theta {\widetilde{\Psi }},\partial _a{\widetilde{\Psi }})\) which decays in time. In addition, there exists \(\gamma _\infty \in L^2_{\theta ,a}\cap C^0_\theta L^2_a\) such that

$$\begin{aligned} \begin{aligned} \Vert \gamma (\theta +\lambda \ln t\cdot \mathcal {E}_\infty (a),a,t)-\gamma _\infty (\theta ,a)\Vert _{L^2_{\theta ,a}}\lesssim \varepsilon t^{-\frac{1}{10}}, \end{aligned} \end{aligned}$$
(1.20)

where

$$\begin{aligned} \mathcal {E}_\infty (a):=\frac{1}{a^2}\iint {\mathfrak {1}}_{\{\alpha \le a\}}\gamma ^2_\infty (\vartheta ,\alpha )\, d\vartheta d\alpha . \end{aligned}$$
(1.21)

This implies Theorem 1.1 upon setting \(\mu _\infty (r,\nu ):=\gamma _\infty (\Theta (r,\nu ),\mathcal {A}(r,\nu ))\). The expansion of the asymptotic dynamics in Remark 1.2 is then obtained via our analysis of the action angle variables in Sect. 2.1 below.

We expect that the number of moments in (1.19) can be significantly reduced. It is interesting that the decay of the electric field can be obtained under much weaker assumptions (see Lemma 3.1), but it is unclear to us how much asymptotic information can be recovered in this case.

1.3 Organization of the paper

In Sect. 2, we study the ODE associated to the linearized equation and establish a number of geometric results and bounds on relevant quantities for the nonlinear problem. In Sect. 3, we study the nonlinear equation; we establish the moment bootstrap for weak solutions in Sect. 3.1.1, the derivative bootstrap for strong solutions in Sect. 3.1.2 and prove the modified scattering in Sect. 3.2 by first obtaining a weak-strong limit for scattering data in Sect. 3.2.1 and finally the convergence of the particle density in Sect. 3.2.2.

We emphasize that one can obtain decay of moments for weak solutions in a self-contained way using only the results of Sects. 2.1, 2.2 and 3.1.1.

2 Linearized Equation

The goal of this section is to integrate the linearized problem (2.2) and to prove various estimates for the corresponding transfer functions. Lemma 1.3 follows easily from Lemma 2.1 below.

2.1 Straightening the linear characteristics

The linearization of (1.10) at \(\varvec{\mu }=0\) is the Hamiltonian differential equation associated to

$$\begin{aligned} \mathcal {H}_0(r,v):=v^2+\frac{q}{r}, \end{aligned}$$
(2.1)

namely

$$\begin{aligned} \begin{aligned} \left( \partial _t+v\partial _r+\frac{q}{2r^2}\partial _v\right) \mu =0. \end{aligned} \end{aligned}$$
(2.2)

This is now a linear transport equation, which can be integrated easily once we know the trajectories of the corresponding ODE:

$$\begin{aligned} {\dot{r}}=v,\qquad {\dot{v}}=\frac{q}{2r^2}. \end{aligned}$$
(2.3)

2.1.1 Radial trajectories

Since we consider a \(1+1\) Hamiltonian system (2.3), the trajectories can be integrated by phase portrait. Letting \(\mathcal {A}=\sqrt{\mathcal {H}_0}\), we can explicitly integrate the resulting equation

$$\begin{aligned} {\dot{r}}=\sqrt{\mathcal {A}^2-\frac{q}{r}} \end{aligned}$$

by starting the “clock” \(\theta =0\) at the periapsis (i.e. the point of closest approach): Let

$$\begin{aligned} \begin{aligned} \mathcal {A}(r,v)&=\sqrt{v^2+\frac{q}{r}},\qquad \Theta (r,v)=\frac{v}{\vert v\vert }r_{min}G(\frac{r}{r_{min}}),\qquad r_{min}(r,v)=\frac{q}{v^2+\frac{q}{r}},\\ R(\theta ,a)&=r_{min}H(\frac{\vert \theta \vert }{r_{min}}),\qquad V(\theta ,a)=\frac{\theta }{\vert \theta \vert }\sqrt{a^2-\frac{q}{R}},\qquad r_{min}(a)=\frac{q}{a^2}, \end{aligned} \end{aligned}$$
(2.4)

where \(G:(1,\infty )\rightarrow {\mathbb {R}}\) and \(H:{\mathbb {R}}_+\rightarrow (1,\infty )\) satisfy

$$\begin{aligned} \begin{aligned} G(1)&=0,\quad G^\prime (s)=\left[ 1-\frac{1}{s}\right] ^{-\frac{1}{2}},\qquad G(s)=\sqrt{s(s-1)}+\ln \left( \sqrt{s}+\sqrt{s-1}\right) ,\\ H(x)&=G^{-1}(x),\quad H^\prime (x)=\sqrt{\frac{H(x)-1}{H(x)}}. \end{aligned} \end{aligned}$$
(2.5)

These functions and related ones are studied in more detail in Sect. 2.1.2. We can now solve the linear problem (2.2) via a canonical change of variable:

Lemma 2.1

The change of variables

$$\begin{aligned} (r,v)\mapsto (\Theta (r,v),\mathcal {A}(r,v)) \end{aligned}$$
(2.6)

in (2.4) defines a canonical diffeomorphism of the phase space (r, v) (with inverse \((R(\theta ,a),V(\theta ,a))\) as in (2.4)), which linearizes the flow in the sense that for the flow map \(\Phi ^t(r,v)\) associated to the Hamiltonian ODEs (2.3) we have

$$\begin{aligned} \Theta (\Phi ^t(r,v))-\Theta (r,v)=t\mathcal {A}(r,v),\qquad \mathcal {A}(\Phi ^t(r,v))=\mathcal {A}(r,v). \end{aligned}$$
(2.7)

Moreover, we have

$$\begin{aligned} \det \frac{\partial (\Theta ,\mathcal {A})}{\partial (r,v)}=\det \frac{\partial (R,V)}{\partial (\theta ,a)}=1. \end{aligned}$$

Proof

That \((\Theta ,\mathcal {A})\) and (R, V) are inverse can be checked directly once one observes that \(r_{min}\) is consistent: \(r_{min}(\mathcal {A}(r,v))=r_{min}(r,v)\) and \(r_{min}(a)=r_{min}(R(\theta ,a),V(\theta ,a))\). It is direct to check that \(\mathcal {A}\) is conserved along the flow. Moreover,

$$\begin{aligned} \begin{aligned} \frac{d}{dt}\Theta (r(t),v(t))&=\vert v\vert \left( 1-\frac{r_{min}}{r}\right) ^{-\frac{1}{2}}+\frac{q}{2r^2}\delta (v)r_{min}G(\frac{r}{r_{min}})\\&\qquad +\frac{v}{\vert v\vert }\left[ v\partial _rr_{min}+\frac{q}{2r^2}\partial _vr_{min}\right] (G-\frac{r}{r_{min}}G^\prime )(\frac{r}{r_{min}}). \end{aligned} \end{aligned}$$

The first term on the right hand side gives \(\mathcal {A}\), and the second vanishes since when \(v=0\), \(G(r/r_{min})=G(1)=0\), while direct computations show that the bracket in the last term vanishes. In addition, the same computations show that
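Explicitly, with \(r_{min}=q/(v^2+\frac{q}{r})\),

$$\begin{aligned} v\partial _rr_{min}+\frac{q}{2r^2}\partial _vr_{min}=\frac{q^2v}{r^2(v^2+\frac{q}{r})^2}-\frac{q}{2r^2}\cdot \frac{2qv}{(v^2+\frac{q}{r})^2}=0. \end{aligned}$$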

$$\begin{aligned} \begin{aligned} d\Theta \wedge d\mathcal {A}&=\frac{q}{\mathcal {A}r^2}r_{min}G(\frac{r}{r_{min}})\delta (v)\,dr\wedge dv\\&\quad +\frac{v}{\vert v\vert \mathcal {A}}(G-\frac{r}{r_{min}}G^\prime )(\frac{r}{r_{min}})\cdot \left( v\partial _rr_{min}+\frac{q}{2r^2}\partial _vr_{min}\right) dr\wedge dv\\&\quad +\frac{\vert v\vert }{\mathcal {A}}G^\prime (\frac{r}{r_{min}})dr\wedge dv\\&=dr\wedge dv, \end{aligned} \end{aligned}$$

which shows that the transformation preserves the symplectic form and hence has Jacobian 1. \(\quad \square \)
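As a quick numerical sanity check of Lemma 2.1, one can integrate the characteristics (2.3) with a standard ODE solver and compare with the explicit formulas (2.4). The sketch below assumes NumPy/SciPy; the values \(q=1\), \((r_0,v_0)=(2,0.7)\) and the sample times are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

q = 1.0

def G(s):
    # explicit formula (2.5)
    return np.sqrt(s * (s - 1.0)) + np.log(np.sqrt(s) + np.sqrt(s - 1.0))

def Theta(r, v):
    # angle variable from (2.4), with r_min = q/(v^2 + q/r)
    rmin = q / (v**2 + q / r)
    return np.sign(v) * rmin * G(r / rmin)

def A(r, v):
    # action variable from (2.4)
    return np.sqrt(v**2 + q / r)

# integrate the Kepler characteristics (2.3) from an (outgoing) initial point
r0, v0 = 2.0, 0.7
sol = solve_ivp(lambda t, y: [y[1], q / (2.0 * y[0]**2)], (0.0, 5.0), [r0, v0],
                rtol=1e-10, atol=1e-12, dense_output=True)
for t in (1.0, 2.5, 5.0):
    r, v = sol.sol(t)
    # both differences should vanish up to the integration tolerance, cf. (2.7)
    print(t, Theta(r, v) - Theta(r0, v0) - t * A(r0, v0), A(r, v) - A(r0, v0))
```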

2.1.2 Study of the structure functions

In this subsection, we study the geometric functions that arise from the change of variable in Lemma 2.1. These are independent of assumptions on the solutions (Fig. 3).

Fig. 3. Plots of the functions G and H of Lemma 2.2.

Lemma 2.2

The functions G and H are almost linear

$$\begin{aligned} \begin{aligned} x-1\le&G(x)\le x+\ln \left( 2\sqrt{x}\right) \le 2x,\qquad 1\le x<\infty ,\\ \frac{x}{2}\le&H(x)\le x+1,\qquad 0\le x<\infty . \end{aligned} \end{aligned}$$
(2.8)

In addition, we note the asymptotic behavior of G and its inverse

$$\begin{aligned} G(s)= & {} s+\frac{1}{2}\ln s+\ln 2-\frac{1}{2}+O_{s\rightarrow \infty }(\frac{1}{s}),\nonumber \\ G(1+\hbar )= & {} 2\sqrt{\hbar }+O_{\hbar \rightarrow 0}(\hbar ^{3/2}),\nonumber \\ H(x)= & {} x-\frac{1}{2}\ln (x)-\ln 2+\frac{1}{2}+\frac{1}{4}\frac{\ln (x)}{x}+O_{x\rightarrow \infty }(\frac{1}{x}), \nonumber \\ H(h)= & {} 1+\left( \frac{h}{2}\right) ^2 +O_{h\rightarrow 0}\left( \frac{h}{2}\right) ^5. \end{aligned}$$
(2.9)

Proof of Lemma 2.2

Since G can be integrated explicitly, we easily obtain the first line in (2.8) from (2.5), and the second line follows from the fact that \(H=G^{-1}\). Now (2.9) follows by expanding the expression for G. \(\quad \square \)
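The bounds (2.8) and the large-\(x\) expansion in (2.9) are also easy to check numerically by inverting G with a root finder. A minimal sketch, again assuming SciPy is available; the sampled ranges are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq

def G(s):
    # explicit formula (2.5)
    return np.sqrt(s * (s - 1.0)) + np.log(np.sqrt(s) + np.sqrt(s - 1.0))

def H(x):
    # numerical inverse of G, cf. (2.5); the upper bracket uses G(s) >= s - 1
    return brentq(lambda s: G(s) - x, 1.0, 2.0 * x + 10.0)

ss = np.linspace(1.0, 50.0, 200)
xs = np.linspace(0.01, 50.0, 200)
Gs = G(ss)
Hs = np.array([H(x) for x in xs])
# the two lines of (2.8)
print(np.all(ss - 1.0 <= Gs), np.all(Gs <= ss + np.log(2.0 * np.sqrt(ss))))
print(np.all(xs / 2.0 <= Hs), np.all(Hs <= xs + 1.0))
# large-x expansion of H in (2.9): the discrepancy should be of size O(1/x)
x = 40.0
print(H(x) - (x - 0.5 * np.log(x) - np.log(2.0) + 0.5 + 0.25 * np.log(x) / x))
```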

In addition, we will frequently consider first and second order derivatives.

Lemma 2.3

We have explicit formulas for the first order derivatives

$$\begin{aligned} \begin{aligned} \partial _\theta R(\theta ,a)&=\frac{\theta }{\vert \theta \vert }H^\prime (\frac{a^2\vert \theta \vert }{q}),\qquad 1-\frac{q}{a^2R(\theta ,a)}\le \frac{\theta }{\vert \theta \vert }\partial _\theta R(\theta ,a)\le 1,\\ \partial _aR(\theta ,a)&=\frac{q}{a^3}\left( \int _{s=0}^{\frac{a^2\vert \theta \vert }{q}}\frac{s}{H^2(s)}ds-2\right) ,\qquad \vert \partial _aR\vert \lesssim \frac{q\ln \langle \frac{a^2}{q}R(\theta ,a)\rangle }{a^3}, \end{aligned} \end{aligned}$$

and the formulas for the second order derivatives

$$\begin{aligned} \begin{aligned} \partial _\theta \partial _\theta R(\theta ,a)&=\frac{q}{2a^2R^2(\theta ,a)},\qquad \partial _\theta \partial _aR(\theta ,a)=\frac{1}{a}\frac{\theta }{\vert \theta \vert }\frac{\frac{a^2\vert \theta \vert }{q}}{H^2(\frac{a^2\vert \theta \vert }{q})},\\ \partial _a\partial _aR(\theta ,a)&=\frac{q}{a^4}\left( 6-3\int _{s=0}^{\frac{a^2\vert \theta \vert }{q}}\frac{s}{H^2(s)}ds+\frac{2\left( \frac{a^2\vert \theta \vert }{q}\right) ^2}{H^2(\frac{a^2\vert \theta \vert }{q})}\right) , \end{aligned} \end{aligned}$$

so that

$$\begin{aligned} \vert \partial _\theta \partial _\theta R(\theta ,a)\vert\le & {} \frac{q}{2a^2R^2(\theta ,a)},\quad \vert \partial _\theta \partial _aR(\theta ,a)\vert \le \frac{1}{a}\frac{\frac{a^2\vert \theta \vert }{q}}{H^2(\frac{a^2\vert \theta \vert }{q})},\\&\vert \partial _a\partial _aR(\theta ,a)\vert \le \frac{q}{a^4}\ln \langle \frac{a^2}{q}R(\theta ,a)\rangle . \end{aligned}$$

Proof of Lemma 2.3

The formulas follow from direct calculations using that

$$\begin{aligned} H(0)=1,\qquad 1\ge H^\prime (x)=\left[ 1-1/H\right] ^{\frac{1}{2}}\ge 1-H^{-1},\qquad H^{\prime \prime }(x)=1/(2H^2(x)), \end{aligned}$$

together with Lemma 2.2 to control the behavior of H. \(\quad \square \)
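Similarly, the formula of Lemma 2.3 for \(\partial _aR\) can be compared against a finite difference of \(R(\theta ,a)=\frac{q}{a^2}H(\frac{a^2\vert \theta \vert }{q})\). A sketch under the same assumptions; the point \((\theta ,a)=(3,1.2)\) and the step size are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

q = 1.0

def G(s):
    return np.sqrt(s * (s - 1.0)) + np.log(np.sqrt(s) + np.sqrt(s - 1.0))

def H(x):
    # numerical inverse of G, cf. (2.5)
    return brentq(lambda s: G(s) - x, 1.0, 2.0 * x + 10.0)

def R(theta, a):
    # radial coordinate in terms of the action-angle variables, cf. (2.4)
    return q / a**2 * H(a**2 * abs(theta) / q)

theta, a, h = 3.0, 1.2, 1e-5
x = a**2 * abs(theta) / q
# formula of Lemma 2.3 for the a-derivative of R
formula = q / a**3 * (quad(lambda s: s / H(s)**2, 0.0, x)[0] - 2.0)
finite_diff = (R(theta, a + h) - R(theta, a - h)) / (2.0 * h)
print(formula, finite_diff)  # the two values should agree up to finite-difference error
```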

2.1.3 Kinematics of linear trajectories

Here we collect a few estimates on the behavior of the trajectories of (2.3). Using (2.7), we see that the linearized flow is simple in the action-angle variables. For simplicity of notation, given a function \(F(\theta ,a)\) in phase space, we will denote by

$$\begin{aligned} {\widetilde{F}}(\theta ,a):=F(\theta +ta,a) \end{aligned}$$
(2.10)

its evolution under the linear flow. This is a slight abuse of notation since the transformation depends on time; however all our estimates will be instantaneous so this should not lead to confusion. Since we will show that in action-angle variables, the new density is (almost) stable, we expect that the main role will be played by trajectories starting from the “bulk region” \(\mathcal {B}\) defined by

$$\begin{aligned} \begin{aligned} \mathcal {B}&:=\{a\ge t^{-\frac{1}{4}},\,\, \vert \theta \vert \le ta/2\},\qquad \mathcal {B}^c=\{a\le t^{-\frac{1}{4}}\,\,\hbox { or }\,\,\vert \theta \vert \ge ta/2\}. \end{aligned} \end{aligned}$$
(2.11)

We start with a few simple observations. By definition, we have a universal lower bound for R,

$$\begin{aligned} \begin{aligned} a^2R(\theta ,a)\ge q, \end{aligned} \end{aligned}$$
(2.12)

but in the bulk region, one can be more precise.

Lemma 2.4

We have the following control on \({\widetilde{R}}\):

$$\begin{aligned} \begin{aligned} \vert \partial _\theta {\widetilde{R}}(\theta ,a)\vert \le 1,\qquad \vert \partial _a{\widetilde{R}}(\theta ,a)\vert \lesssim t+\frac{q}{a^3}\ln \langle \frac{a^2}{q}{\widetilde{R}}(\theta ,a)\rangle \end{aligned} \end{aligned}$$
(2.13)

and

$$\begin{aligned} \vert \partial _\theta \partial _\theta {\widetilde{R}}(\theta ,a)\vert \lesssim {\widetilde{R}}^{-1}(\theta ,a),\qquad \vert \partial _\theta \partial _a{\widetilde{R}}(\theta ,a)\vert \lesssim \frac{t+a^{-3}}{{\widetilde{R}}(\theta ,a)},\nonumber \\ \vert \partial _a\partial _a{\widetilde{R}}(\theta ,a)\vert \lesssim \frac{t^2}{a^2{\widetilde{R}}^{2}(\theta ,a)}+\frac{t}{a^3{\widetilde{R}}(\theta ,a)}+\frac{q}{a^4}\ln \langle \frac{a^2}{q}{\widetilde{R}}(\theta ,a)\rangle . \end{aligned}$$
(2.14)

In addition, we have a more precise control in the bulk: when \((\theta ,a)\in \mathcal {B}\), we have that

$$\begin{aligned} \begin{aligned}&ta/2\le {\widetilde{R}}(\theta ,a)\le 2ta,\qquad 1-\frac{2q}{ta^3}\le \partial _\theta {\widetilde{R}}(\theta ,a)\le 1,\qquad \partial _a{\widetilde{R}}(\theta ,a)\ge \frac{3}{4}t,\\&\qquad \vert \partial _a\partial _a{\widetilde{R}}\vert \lesssim a^{-4}\ln \langle ta^3\rangle , \end{aligned} \end{aligned}$$

and in particular, the change of variable \(a\mapsto {\widetilde{R}}(\theta ,a)\) is well behaved.

Proof of Lemma 2.4

The bounds (2.13) and (2.14) follow from Lemma 2.3 and the formulas

$$\begin{aligned} \begin{aligned} \partial _\theta {\widetilde{R}}(\theta ,a)=\partial _\theta R(\theta +ta,a),\qquad \partial _a{\widetilde{R}}(\theta ,a)=\left[ t\partial _\theta R+\partial _aR\right] (\theta +ta,a),\\ \partial _\theta \partial _\theta {\widetilde{R}}(\theta ,a)=\partial _\theta \partial _\theta R(\theta +ta,a),\qquad \partial _\theta \partial _a{\widetilde{R}}=\left( t\partial _\theta \partial _\theta R+\partial _a\partial _\theta R\right) (\theta +ta,a),\\ \partial _a\partial _a{\widetilde{R}}(\theta ,a)=\left( t^2\partial _\theta \partial _\theta R+2t\partial _a\partial _\theta R+\partial _a\partial _aR\right) (\theta +ta,a). \end{aligned} \end{aligned}$$

Now, in the bulk, we observe that \(ta/2\le \vert \theta +ta\vert \le 2ta\), and \(ta^3\gg 1\) so that by (2.8) we have \(ta/2\le {\widetilde{R}}(\theta ,a)\le 2ta\). The other bounds follow directly. \(\quad \square \)

Since the interaction involves quantities defined in the physical space, it will be useful to understand how to relate them to phase space variables (Fig. 4). The next lemma is concerned with solutions of the equation

$$\begin{aligned} \begin{aligned} {\widetilde{R}}(\theta ,a)=r \end{aligned} \end{aligned}$$
(2.15)

for fixed t and r.

Fig. 4. The different regions of Lemma 2.5.

Lemma 2.5

Let

$$\begin{aligned} \begin{aligned} A:=\sqrt{\frac{q}{r}}(1+\hbar ),\qquad \hbar :=c\cdot \min \{1,\frac{r^3}{q^5t^2}\}. \end{aligned} \end{aligned}$$

for some fixed small constant \(c>0\) and define the regions

$$\begin{aligned} \begin{aligned} \mathcal {R}_0:=\{0\le a\le A,\,\, \theta \in {\mathbb {R}}\},\qquad \mathcal {R}_1:=\{a\ge A,\,\, \theta \le -ta\},\qquad \mathcal {R}_2:=\{a\ge A,\,\,\theta \ge -ta\}. \end{aligned} \end{aligned}$$

In the region \(\mathcal {R}_0\), we see that for any \(\theta \), there exists at most one \(a=\aleph (\theta ;r,t)\) solution of (2.15). In addition, we have that

$$\begin{aligned} \begin{aligned} \sqrt{\frac{q}{r}}\le \aleph \le 2\sqrt{\frac{q}{r}},\qquad \vert \theta +t\aleph \vert \le 3\hbar r,\qquad \vert \partial _a{\widetilde{R}}(\theta ,\aleph )\vert \gtrsim \frac{r^\frac{3}{2}}{q^\frac{5}{2}}\gtrsim \frac{a^{-3}}{q}. \end{aligned} \end{aligned}$$
(2.16)

In the region \(\mathcal {R}_j\), \(j\in \{1,2\}\), for each choice of a, there exists exactly one \(\theta =\tau _j(a;r,t)\) solution of (2.15). In addition, we see that

$$\begin{aligned} \begin{aligned} \tau _1&\le -ta-r/2,\qquad a^2r\ge q,\qquad \partial _\theta {\widetilde{R}}(\tau _1,a)\gtrsim \hbar ,\\ \end{aligned} \end{aligned}$$
(2.17)

while on \(\mathcal {R}_2\), we see that

$$\begin{aligned} \begin{aligned} \partial _\theta {\widetilde{R}}(\tau _2,a)\gtrsim \hbar \qquad \hbox { and }\qquad \hbox {either }1\le \frac{a^2r}{q}\le C,\quad \hbox { or }\quad \partial _a{\widetilde{R}}(\tau _2,a)\ge ct. \end{aligned} \end{aligned}$$
(2.18)

Proof

We can rewrite (2.15) as

$$\begin{aligned} \begin{aligned} \frac{q}{a^2}H(\frac{a^2\vert \theta +ta\vert }{q})=r\quad \Leftrightarrow \quad \vert \theta +ta\vert =\frac{q}{a^2}G(\frac{a^2 r}{q})\quad \Leftrightarrow \quad \theta =-ta\pm \frac{q}{a^2}G(\frac{a^2 r}{q}). \end{aligned} \end{aligned}$$

We start with \(\mathcal {R}_0\) and denote \(a^*=\sqrt{q/r}\), \(\vartheta =\theta +ta\) and \(x=a^2\vert \vartheta \vert /q\) so that we are considering the equation

$$\begin{aligned} \begin{aligned} r=\frac{q}{(a^*)^2}=\frac{q}{a^2}H(\frac{a^2\vert \vartheta \vert }{q})\quad \Leftrightarrow \quad x=G((\frac{a}{a^*})^2). \end{aligned} \end{aligned}$$
(2.19)

Now let \(a=a^*\sqrt{1+h^2}\) for some \(h\ge 0\). We have that by (2.9) and Lemma 2.3

$$\begin{aligned} E_1=x-G((\frac{a}{a^*})^2)= & {} x-2h+O(h^3),\nonumber \\ \partial _a{\widetilde{R}}(\theta ,a)= & {} t\frac{\vartheta }{\vert \vartheta \vert }H^\prime (x)-\frac{2}{q (a^*)^3}\left[ 1+h^2\right] ^{-\frac{3}{2}}\left( 1-\int _{s=0}^x\frac{s}{2H^2(s)}ds\right) .\nonumber \\ \end{aligned}$$
(2.20)

From this we see that if h is small enough, \(0\le h\le c\), there exists a unique solution x to \(E_1=0\), and this solution satisfies \(h\le x\le 3h\). In addition, if \(h\le c/(q(a^*)^3t)\), there holds that \(\partial _a{\widetilde{R}}\lesssim -1/(q(a^*)^3)\).

Now in region \(\mathcal {R}_j\), \(j\in \{1,2\}\), we see that

$$\begin{aligned} \begin{aligned} x=G(1+h^2)\ge h\ge \hbar \end{aligned} \end{aligned}$$

and in particular, using (2.19), we find that

$$\begin{aligned} \begin{aligned} \tau _1(a;r,t)&:=-ta- \frac{q}{a^2}G(\frac{a^2 r}{q})<-ta< \tau _2(a;r,t):=-ta+\frac{q}{a^2}G(\frac{a^2 r}{q}). \end{aligned} \end{aligned}$$

In addition, we have that

$$\begin{aligned} \begin{aligned} \partial _\theta {\widetilde{R}}(\theta ,a)&=H^\prime (x)\gtrsim \hbar .\end{aligned} \end{aligned}$$

and the other bounds in (2.17) follow directly from the definitions.

Finally, the last statement in (2.18) follows from the formula for \(\partial _a{\widetilde{R}}\) in (2.20): note that \(\vartheta /\left|\vartheta \right|=1\), and let \(x=b\) be the value at which the term in parentheses vanishes. Then when \(0\le x\le 2b\), there holds that \(q\le a^2r\le qH(2b)\), while for \(x\ge 2b\) we see that both terms have the same sign and \(H^{\prime \prime }\ge 0\), so that

$$\begin{aligned} \begin{aligned} \partial _a{\widetilde{R}}(\theta ,a)\ge tH^\prime (2b). \end{aligned} \end{aligned}$$

\(\square \)

2.2 Electric field and potential

Given an instantaneous density distribution \(\mu (r,v,t)\), it is useful to introduce the “physical potential” \(\mathbf{\Psi }\) of the associated electric field as in (1.9), explicitly given as

$$\begin{aligned} \mathbf{\Psi }(r,t)=\iint _{s,v}\frac{1}{\max \{r,s\}}\mu ^2(s,v,t)ds dv=\iint _{\vartheta ,\alpha }\frac{1}{\max \{r,R(\vartheta +t\alpha ,\alpha )\}}\gamma ^2(\vartheta ,\alpha ,t)\, d\vartheta d\alpha ,\nonumber \\ \end{aligned}$$
(2.21)

where \(\gamma (\vartheta ,\alpha ,t)\) is the corresponding density distribution in action-angle coordinates as in (1.15). Then (1.17) can be rewritten as \({\widetilde{\Psi }}(\theta ,a,t)=\mathbf{\Psi }({\widetilde{R}}(\theta ,a),t)\). This allows us to obtain formulas for the derivatives of \({\widetilde{\Psi }}\) in action-angle variables in terms of the electric field \(\mathbf{E}\), the local mass \(\mathbf{m}\) and the density \(\varvec{\varrho }\) (compare also (1.9)):

$$\begin{aligned} \begin{aligned} \mathbf{E}(r,t)\!=\!-\partial _r\mathbf{\Psi }(r,t)&=\!\frac{\mathbf{m}(r,t)}{r^2},\qquad \mathbf{m}(r,t)\!:=\!\iint _{\vartheta ,\alpha }{\mathfrak {1}}_{\{R(\vartheta +t\alpha ,\alpha )\le r\}}\gamma ^2(\vartheta ,\alpha ,t)\, d\vartheta d\alpha ,\\ \varvec{\varrho }(r,t)&:=\partial _r\mathbf{m}(r,t)=\iint \delta (R(\vartheta +t\alpha ,\alpha )-r)\cdot \gamma ^2(\vartheta ,\alpha ,t)d\vartheta d\alpha . \end{aligned} \end{aligned}$$
(2.22)

Then for \(\beta \in \{a,\theta \}\) we have

$$\begin{aligned} \begin{aligned} \partial _\beta {\widetilde{\Psi }}&=-\frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^{2}(\theta ,a)}\cdot \partial _\beta {\widetilde{R}}(\theta ,a),\\ \partial _\alpha \partial _\beta {\widetilde{\Psi }}&=-\frac{\mathbf{m}({\widetilde{R}})}{{\widetilde{R}}^{2}}\left( \partial _\alpha \partial _\beta {\widetilde{R}}-2\frac{\partial _\alpha {\widetilde{R}}\partial _\beta {\widetilde{R}}}{{\widetilde{R}}}\right) -\varvec{\varrho }({\widetilde{R}})\cdot \frac{\partial _\alpha {\widetilde{R}}\partial _\beta {\widetilde{R}}}{{\widetilde{R}}^2}. \end{aligned} \end{aligned}$$
(2.23)

We note that the local mass has a trivial uniform bound

$$\begin{aligned} 0\le \mathbf{m}(r)\le \mathbf{m}(\infty ):=\Vert \gamma \Vert _{L^2_{\theta ,a}}^2 \end{aligned}$$
(2.24)

but this can be made more precise.

Lemma 2.6

We can decompose \(\mathbf{m}\) as

$$\begin{aligned} \begin{aligned} \mathbf{m}(r,t)&=\mathbf{m}_s(r,t)+\mathbf{m}_n(r,t), \end{aligned} \end{aligned}$$

where we have that for any \(\ell ,\kappa >0\)

$$\begin{aligned} \begin{aligned} 0\le \mathbf{m}_s(r,t)&\lesssim \left( \frac{r}{t}\right) ^\ell \Vert a^{-\frac{\ell }{2}}\gamma \Vert _{L^2_{\theta ,a}}^2,\\ 0\le \mathbf{m}_n(r,t)&\lesssim \left( \frac{r}{t}\right) ^\ell t^{-\frac{\kappa -\ell }{2}}\left[ \Vert a^{-\kappa }\gamma \Vert _{L^2_{\theta ,a}}^2+\Vert a^{\frac{\ell -\kappa }{2}}\theta ^{\frac{\ell +\kappa }{2}}\gamma \Vert _{L^2_{\theta ,a}}^2\right] . \end{aligned} \end{aligned}$$
(2.25)

Proof of Lemma 2.6

The decomposition corresponds to localizing in and out of the bulk zone defined in (2.11). Thus

$$\begin{aligned} \begin{aligned} \mathbf{m}_s(r,t)&:= \iint {\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}}{\mathfrak {1}}_{\mathcal {B}}\cdot \gamma ^2(\vartheta ,\alpha )\,d\vartheta d\alpha , \\ \mathbf{m}_n(r,t)&:=\iint {\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}}{\mathfrak {1}}_{\mathcal {B}^c}\cdot \gamma ^2(\vartheta ,\alpha )\,d\vartheta d\alpha . \end{aligned} \end{aligned}$$

Using Lemma 2.4, we see that

$$\begin{aligned} \begin{aligned} \mathbf{m}_s(r,t)&\le r^\kappa \iint {\mathfrak {1}}_{\mathcal {B}}\cdot \gamma ^2(\vartheta ,\alpha )\,\frac{d\vartheta d\alpha }{{\widetilde{R}}^{\kappa }(\vartheta ,\alpha )}\lesssim (r/t)^{\kappa } \iint \alpha ^{-\kappa }\cdot \gamma ^2(\vartheta ,\alpha )\,d\vartheta d\alpha , \end{aligned} \end{aligned}$$

while using (2.12),

$$\begin{aligned} \begin{aligned} \frac{\mathbf{m}_n(r,t)}{r^\kappa }&\le \iint {\mathfrak {1}}_{\mathcal {B}^c}\cdot \gamma ^2(\vartheta ,\alpha )\,\frac{d\vartheta d\alpha }{{\widetilde{R}}^\kappa (\vartheta ,\alpha )}\\&\quad \lesssim \iint \left[ {\mathfrak {1}}_{\{\vert \alpha \vert \le t^{-\frac{1}{4}}\}}+{\mathfrak {1}}_{\{\vert \vartheta \vert \ge t\alpha /2\}}\right] \cdot \alpha ^{2\kappa }\cdot \gamma ^2(\vartheta ,\alpha )\,d\vartheta d\alpha . \end{aligned} \end{aligned}$$

\(\square \)

We will use the following consequences:

Proposition 2.7

There holds that

$$\begin{aligned} \begin{aligned} t^\frac{3}{2}\cdot \sup _{\theta ,a}\frac{1}{a} \vert \partial _\theta {\widetilde{\Psi }}(\theta ,a)\vert&\lesssim \Vert a^{-\frac{3}{4}}\gamma \Vert _{L^2_{\theta ,a}}^2+t^{-\frac{1}{4}}\Vert (a^{-2}+\theta ^2)\gamma \Vert _{L^2_{\theta ,a}}^2 \end{aligned} \end{aligned}$$
(2.26)

and

$$\begin{aligned} \begin{aligned} t\cdot \vert \partial _a{\widetilde{\Psi }}(\vartheta ,\alpha )\vert&\lesssim \Vert a^{-1}\gamma \Vert _{L^2_{\theta ,a}}^2+\Vert \gamma \Vert _{L^2_{\theta ,a}}^2\cdot \left[ \vert \vartheta \vert +\alpha ^{-3}\right] +t^{-\frac{1}{4}}\Vert (a^{-\frac{5}{2}}+\theta +\theta ^{\frac{5}{2}})\gamma \Vert _{L^2_{\theta ,a}}^2. \end{aligned}\nonumber \\ \end{aligned}$$
(2.27)

Proof of Proposition 2.7

Using (2.12) and (2.13), we find that

$$\begin{aligned} \begin{aligned} \frac{1}{a} \vert \partial _\theta {\widetilde{\Psi }}(\theta ,a)\vert&=\frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{a{\widetilde{R}}^2(\theta ,a)}\vert \partial _\theta {\widetilde{R}}(\theta ,a)\vert \le \frac{1}{\sqrt{q}}\frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^\frac{3}{2}(\theta ,a)} \end{aligned} \end{aligned}$$

and we can use (2.25) with \(\ell =3/2\), \(\kappa =2\). For (2.27), we use (2.13) to get that

$$\begin{aligned} \begin{aligned} \vert \partial _a{\widetilde{\Psi }}(\theta ,a)\vert&=\frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^2(\theta ,a)}\vert \partial _a{\widetilde{R}}(\theta ,a)\vert \lesssim t\frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^2(\theta ,a)}+\frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^\frac{1}{2}(\theta ,a)}\frac{\ln \langle \frac{a^2}{q}{\widetilde{R}}(\theta ,a)\rangle }{\sqrt{q}(\frac{a^2}{q}{\widetilde{R}}(\theta ,a))^\frac{3}{2}}. \end{aligned} \end{aligned}$$

From (2.25) with \(\ell =2\), \(\kappa =\frac{5}{2}\) we obtain

$$\begin{aligned} \begin{aligned} t\frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^2(\theta ,a)}\lesssim t^{-1}\Vert a^{-1}\gamma \Vert _{L^2_{\theta ,a}}^2+t^{-\frac{5}{4}}\Vert (a^{-\frac{5}{2}}+\theta ^{\frac{5}{2}})\gamma \Vert _{L^2_{\theta ,a}}^2. \end{aligned} \end{aligned}$$

Similarly, if \(\frac{a^2}{q}{\widetilde{R}}(\theta ,a)\ge t^\frac{1}{2}\), we can use (2.25) (\(\ell =\frac{1}{2}\), \(\kappa =1\)) to get

$$\begin{aligned} \frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^\frac{1}{2}(\theta ,a)}\frac{\ln \langle \frac{a^2}{q}{\widetilde{R}}(\theta ,a)\rangle }{\sqrt{q}(\frac{a^2}{q}{\widetilde{R}}(\theta ,a))^\frac{3}{2}}&\lesssim t^{-\frac{2}{3}}\frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^\frac{1}{2}(\theta ,a)} \\&\lesssim t^{-\frac{7}{6}}\Vert a^{-1}\gamma \Vert _{L^2_{\theta ,a}} +t^{-\frac{7}{6}-\frac{1}{4}}\Vert (a^{-1}+\theta )\gamma \Vert _{L^2_{\theta ,a}}. \end{aligned}$$

On the other hand, it follows from Lemma 2.4 that if \(q\le a^2{\widetilde{R}}(\theta ,a)\le t^\frac{1}{2}\), then \((\theta ,a)\in \mathcal {B}^c\), and we use that

$$\begin{aligned} \begin{aligned} \frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^\frac{1}{2}(\theta ,a)}\frac{\ln \langle \frac{a^2}{q}{\widetilde{R}}(\theta ,a)\rangle }{\sqrt{q}(\frac{a^2}{q}{\widetilde{R}}(\theta ,a))^\frac{3}{2}}{\mathfrak {1}}_{\mathcal {B}^c}\lesssim a\Vert \gamma \Vert _{L^2_{\theta ,a}}^2{\mathfrak {1}}_{\mathcal {B}^c}\lesssim t^{-1}\Vert \gamma \Vert _{L^2_{\theta ,a}}^2\cdot \left[ \vert \theta \vert +a^{-3}\right] , \end{aligned} \end{aligned}$$

which gives (2.27).\(\quad \square \)

2.2.1 Study of the density

Controlling derivatives of \(\gamma \) requires estimates on the density; these are obtained in a similar way to the mass (see Lemma 2.6), but are more involved.

Lemma 2.8

The density can be decomposed into two terms,

$$\begin{aligned} \begin{aligned} \varvec{\varrho }&=\varvec{\varrho }_s+\varvec{\varrho }_n \end{aligned} \end{aligned}$$

where for \(\kappa ,\sigma \ge 0\),

$$\begin{aligned} \begin{aligned} \vert \varvec{\varrho }_s(r,t)/r^\kappa \vert&\lesssim t^{-1-\kappa }\left[ \Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}\Vert a^{-1-\kappa }\gamma \Vert _{L^2_{\theta ,a}}+\Vert a^{-\frac{\kappa +1}{2}}\gamma \Vert _{L^2_{\theta ,a}}^2\right] ,\\ \vert \varvec{\varrho }_n(r,t)/r^\kappa \vert&\lesssim t^{-1-\kappa -\sigma }\left[ \Vert \partial _\theta \gamma \Vert _{L^2_{\theta ,a}}+\Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}\right] \\&\qquad \qquad \qquad \cdot \Vert (1+a^{-2\kappa -4\sigma -6}+a^{\kappa -\sigma +3}\theta ^{\kappa +\sigma +3})\gamma \Vert _{L^2_{\theta ,a}}. \end{aligned} \end{aligned}$$
(2.28)

The key observation is that the estimate for \(\varvec{\varrho }_s\) only involves at most one copy of the large term \(\Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}\).

Proof of Lemma 2.8

We decompose the local mass into two parts, according to the size of \(\partial _a{\widetilde{R}}\):

$$\begin{aligned} \begin{aligned} \mathbf{m}(r,t)&=\mathbf{m}^1(r,t)+\mathbf{m}^2(r,t),\qquad \chi ^{[1]}(\vartheta ,\alpha ,t)=\varphi _{\ge 1}(t^{-1}\partial _a{\widetilde{R}}(\vartheta ,\alpha )),\qquad \chi ^{[2]}=1-\chi ^{[1]},\\ \mathbf{m}^j(r,t)&:=\iint {\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}}\chi ^{[j]}(\vartheta ,\alpha ,t)\gamma ^2(\vartheta ,\alpha )d\vartheta d\alpha , \end{aligned} \end{aligned}$$

where \(\varphi _{\ge 1}(x)\) denotes a smooth function supported on \(\{x\ge 1/10\}\) and equal to 1 for \(x\ge 1/2\).

Study of \(\partial _r\mathbf{m}^1\). This contains the main term. Integrating by parts, we observe that

$$\begin{aligned} \partial _r\mathbf{m}^1(r,t)&=\iint \partial _r\left( {\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}}\right) \chi ^{[1]}(\vartheta ,\alpha ,t)\gamma ^2(\vartheta ,\alpha )d\vartheta d\alpha \\&=\iint -\frac{1}{\partial _\alpha {\widetilde{R}}(\vartheta ,\alpha )}\partial _\alpha \left( {\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}}\right) \chi ^{[1]}(\vartheta ,\alpha ,t)\gamma ^2(\vartheta ,\alpha )d\vartheta d\alpha \\&=\iint {\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}}\partial _\alpha \left( \frac{\chi ^{[1]}(\vartheta ,\alpha ,t)}{\partial _\alpha {\widetilde{R}}(\vartheta ,\alpha )}\gamma ^2(\vartheta ,\alpha )\right) d\vartheta d\alpha \\&= \varvec{\varrho }^1_s(r,t)+\varvec{\varrho }^2_s(r,t)+M^{1,1}+M^{1,2},\\ \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} \varvec{\varrho }_s^1(r,t)&:=\iint {\mathfrak {1}}_{\mathcal {B}}{\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}} \partial _\alpha \left( \frac{\chi ^{[1]}(\vartheta ,\alpha ,t)}{\partial _\alpha {\widetilde{R}}(\vartheta ,\alpha )}\right) \gamma ^2(\vartheta ,\alpha )d\vartheta d\alpha ,\\ M^{1,1}&:=\iint {\mathfrak {1}}_{\mathcal {B}^c}{\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}} \partial _\alpha \left( \frac{\chi ^{[1]}(\vartheta ,\alpha ,t)}{\partial _\alpha {\widetilde{R}}(\vartheta ,\alpha )}\right) \gamma ^2(\vartheta ,\alpha )d\vartheta d\alpha ,\\ \varvec{\varrho }_s^2(r,t)&:=2\iint {\mathfrak {1}}_{\mathcal {B}}{\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}}\frac{\chi ^{[1]}(\vartheta ,\alpha ,t)}{\partial _\alpha {\widetilde{R}}(\vartheta ,\alpha )}\gamma (\vartheta ,\alpha )\cdot \partial _\alpha \gamma (\vartheta ,\alpha )d\vartheta d\alpha \\ M^{1,2}&:=2\iint {\mathfrak {1}}_{\mathcal {B}^c}{\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}}\frac{\chi ^{[1]}(\vartheta ,\alpha ,t)}{\partial _\alpha {\widetilde{R}}(\vartheta ,\alpha )}\gamma (\vartheta ,\alpha )\cdot \partial _\alpha \gamma (\vartheta ,\alpha )d\vartheta d\alpha .\\ \end{aligned} \end{aligned}$$

From Lemma 2.4 we recall that in the bulk region \({\widetilde{R}}\sim at\), and thus

$$\begin{aligned} r^{-\kappa }\vert \varvec{\varrho }^2_s(r,t)\vert&\lesssim \iint {\mathfrak {1}}_{\mathcal {B}}\frac{1}{{\widetilde{R}}^\kappa (\vartheta ,\alpha )} \frac{\chi ^{[1]}(\vartheta ,\alpha ,t)}{\partial _\alpha {\widetilde{R}}(\vartheta ,\alpha )}\vert \gamma (\vartheta ,\alpha )\cdot \partial _\alpha \gamma (\vartheta ,\alpha )\vert d\vartheta d\alpha \\&\lesssim t^{-\kappa -1}\Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}\Vert a^{-1-\kappa }\gamma \Vert _{L^2_{\theta ,a}}, \end{aligned}$$

while on the other hand, using (2.12) and (2.11),

$$\begin{aligned} \begin{aligned} r^{-\kappa }\vert M^{1,2}\vert&\lesssim \iint {\mathfrak {1}}_{\mathcal {B}^c}\alpha ^{2\kappa }\frac{\chi ^{[1]}(\vartheta ,\alpha ,t)}{\partial _\alpha {\widetilde{R}}(\vartheta ,\alpha )}\vert \gamma (\vartheta ,\alpha )\cdot \partial _\alpha \gamma (\vartheta ,\alpha )\vert d\vartheta d\alpha \\&\lesssim t^{-1-\kappa -\sigma }\Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}\left( \Vert a^{-2\kappa -1-4\sigma }\gamma \Vert _{L^2_{\theta ,a}}+\Vert a^{\kappa -1}\theta ^{\kappa +\sigma }\gamma \Vert _{L^2_{\theta ,a}}\right) . \end{aligned} \end{aligned}$$

Direct computations using Lemma 2.4 show that

$$\begin{aligned} \begin{aligned} \left| \partial _\alpha \left( \frac{\chi ^{[1]}(\vartheta ,\alpha ,t)}{\partial _\alpha {\widetilde{R}}(\vartheta ,\alpha )}\right) \right|&\lesssim \left| \frac{\partial _a\partial _a{\widetilde{R}}(\vartheta ,\alpha ,t)}{(\partial _a{\widetilde{R}})^2(\vartheta ,\alpha ,t)}\right| {\underline{\chi }}^{[1]}(\vartheta ,\alpha ,t)\lesssim \frac{1}{{\widetilde{R}}(\vartheta ,\alpha )}+\frac{1}{t\alpha }+\frac{\vert \vartheta \vert }{\alpha ^2t^2}. \end{aligned} \end{aligned}$$

Separating the contribution of the bulk and outside as in the proof of Lemma 2.6, we find that

$$\begin{aligned} r^{-\kappa }\vert \varvec{\varrho }_s^{1}\vert&\lesssim \iint {\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}}{\mathfrak {1}}_{\mathcal {B}} \left( \frac{1}{{\widetilde{R}}(\vartheta ,\alpha )}+\frac{1}{t\alpha }+\frac{\vert \vartheta \vert }{\alpha ^2t^2}\right) \gamma ^2(\vartheta ,\alpha )\frac{d\vartheta d\alpha }{{\widetilde{R}}^\kappa (\vartheta ,\alpha )}\\&\lesssim t^{-1-\kappa }\Vert \alpha ^{-\frac{1+\kappa }{2}}\gamma \Vert _{L^2_{\theta ,a}}^2, \end{aligned}$$

while using (2.12) and (2.11) yields

$$\begin{aligned} \begin{aligned} r^{-\kappa }\vert M^{1,1}\vert&\lesssim \iint {\mathfrak {1}}_{\{{\widetilde{R}}(\vartheta ,\alpha )\le r\}}{\mathfrak {1}}_{\mathcal {B}^c} \left( \frac{1}{{\widetilde{R}}(\vartheta ,\alpha )}+\frac{1}{t\alpha }+\frac{\vert \vartheta \vert }{\alpha ^2t^2}\right) \gamma ^2(\vartheta ,\alpha )\frac{d\vartheta d\alpha }{{\widetilde{R}}^\kappa (\vartheta ,\alpha )}\\&\quad \lesssim t^{-1-\kappa -\sigma }\left\| \left( a^{-\kappa -1-2\sigma }(1+a)+a^{\frac{1}{2}(\kappa -1-\sigma )}\left|\theta \right|^{\frac{1}{2}(\kappa +\sigma )}(1+a\left|\theta \right|^{\frac{1}{2}})\right) \gamma \right\| _{L^2_{\theta ,a}}^2. \end{aligned} \end{aligned}$$

Study of \(\partial _r\mathbf{m}^2\). We now consider

$$\begin{aligned} \begin{aligned} \partial _r\mathbf{m}^2(r,t)&=\iint \delta ({\widetilde{R}}(\vartheta ,\alpha )-r)\chi ^{[2]}(\vartheta ,\alpha ,t)\gamma ^2(\vartheta ,\alpha )d\vartheta d\alpha .\\ \end{aligned} \end{aligned}$$

The main observation is that thanks to Lemma 2.4, we have that

$$\begin{aligned} \chi ^{[2]}{\mathfrak {1}}_{\mathcal {B}}=0. \end{aligned}$$
(2.29)

The Dirac measure restricts to the set studied in Lemma 2.5 and we decompose accordingly

$$\begin{aligned} \begin{aligned} \partial _r\mathbf{m}^2(r,t)&=M^{2,0}+M^{2,1}+M^{2,2},\\ M^{2,j}&:=\iint \delta ({\widetilde{R}}(\vartheta ,\alpha )-r)\chi ^{[2]}(\vartheta ,\alpha ,t){\mathfrak {1}}_{\mathcal {R}_j}\gamma ^2(\vartheta ,\alpha )d\vartheta d\alpha . \end{aligned} \end{aligned}$$

and using (2.16), we see that

$$\begin{aligned} \begin{aligned} 0\le r^{-\kappa }M^{2,0}&= r^{-\kappa }\int \chi ^{[2]}{\mathfrak {1}}_{\mathcal {R}_0}\gamma ^2(\vartheta ,\aleph )\frac{d\vartheta }{\partial _a{\widetilde{R}}(\vartheta ,\aleph )}\lesssim \int {\mathfrak {1}}_{\mathcal {B}^c}\gamma ^2(\vartheta ,\aleph )\aleph ^{3+2\kappa }d\vartheta . \end{aligned} \end{aligned}$$

Integrating \(\gamma ^2(\vartheta ,\aleph )\aleph ^{3+2\kappa }=\int _0^{\aleph }\partial _\alpha (\gamma ^2(\vartheta ,\alpha )\alpha ^{3+2\kappa }) d\alpha \) from \(0\le \alpha \le \aleph \) (note that \(0\le \alpha \le a\) and \(a\in \mathcal {B}^c\) implies that \(\alpha \in \mathcal {B}^c\)), we can estimate

$$\begin{aligned} \begin{aligned} 0\le t^{\kappa +\sigma +1} r^{-\kappa }M^{2,0}&\lesssim t^{\kappa +\sigma +1}\iint {\mathfrak {1}}_{\mathcal {B}^c}\alpha ^{2\kappa }\left( \alpha ^2\gamma ^2(\vartheta ,\alpha )+\vert \gamma (\vartheta ,\alpha )\cdot \alpha ^3\partial _a\gamma (\vartheta ,\alpha )\vert \right) \, d\vartheta d\alpha \\&\lesssim \Vert a^{-\kappa -1-2\sigma }\gamma \Vert _{L^2_{\theta ,a}}^2+\Vert a^{\frac{\kappa +1-\sigma }{2}}\theta ^{\frac{\sigma +\kappa }{2}}\gamma \Vert _{L^2_{\theta ,a}}^2\\&\quad +\Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}\cdot \left( \Vert a^{-2\kappa -2-4\sigma }\gamma \Vert _{L^2_{\theta ,a}}+\Vert a^{\kappa +1+\sigma }\theta ^{\sigma }\gamma \Vert _{L^2_{\theta ,a}}\right) . \end{aligned} \end{aligned}$$

Using (2.17) and integrating over \(\vartheta \ge \tau _1\), we see that

$$\begin{aligned} \begin{aligned} 0\le M^{2,1}&= \int \chi ^{[2]}{\mathfrak {1}}_{\mathcal {R}_1}\gamma ^2(\tau _{1},\alpha )\frac{d\alpha }{\partial _\theta {\widetilde{R}}(\tau _{1},\alpha )}\\&\lesssim (1+r^{-3}t^2)\int {\mathfrak {1}}_{\{\alpha ^2r\ge q,\,\,\vert \tau _{1}\vert \gtrsim \alpha t+r \}}\gamma ^2(\tau _{1},\alpha )d\alpha \\&\lesssim (1+r^{-3}t^2)\iint {\mathfrak {1}}_{\{\alpha ^2r\ge q,\,\,\vert \vartheta \vert \gtrsim \alpha t+r \}}\vert \gamma (\vartheta ,\alpha )\partial _\theta \gamma (\vartheta ,\alpha )\vert d\vartheta d\alpha \\&\lesssim t^{-1-\kappa -\sigma }r^\kappa \Vert \partial _\theta \gamma \Vert _{L^2_{\theta ,a}}\Vert (1+a^4\theta ^2)\theta ^{\kappa +\sigma +1} a^{\kappa -1-\sigma }\gamma \Vert _{L^2_{\theta ,a}}. \end{aligned} \end{aligned}$$

Similarly, using (2.18) we obtain

$$\begin{aligned} \begin{aligned} 0\le M^{2,2}&= \int \chi ^{[2]}{\mathfrak {1}}_{\mathcal {R}_2}\gamma ^2(\tau _2,\alpha )\frac{d\alpha }{\partial _\theta {\widetilde{R}}(\tau _2,\alpha )}\\&\lesssim (1+r^{-3}t^2)\int {\mathfrak {1}}_{\{\alpha ^2 r\ge q\}}{\mathfrak {1}}_{\mathcal {B}^c}\gamma ^2(\tau _2,\alpha )d\alpha \\&\lesssim (1+r^{-3}t^2)\iint {\mathfrak {1}}_{\{\alpha ^2 r\ge q\}}{\mathfrak {1}}_{\{\alpha \le t^{-\frac{1}{4}}\,\,\hbox { or }\vert \vartheta \vert \ge t\alpha /2\}}\vert \gamma (\vartheta ,\alpha )\partial _\theta \gamma (\vartheta ,\alpha )\vert d\vartheta d\alpha \\&\lesssim t^{-1-\kappa -\sigma }r^\kappa \Vert \partial _\theta \gamma \Vert _{L^2_{\theta ,a}}\left[ \left\| a^{-2\kappa -6-4\sigma }(1+a^5)\gamma \right\| _{L^2_{\theta ,a}}+\left\| a^{\kappa -1}\theta ^{\kappa +\sigma }(1+a^{4+\sigma })\gamma \right\| _{L^2_{\theta ,a}}\right] . \end{aligned} \end{aligned}$$

This finishes the proof with \(\varvec{\varrho }_s=\varvec{\varrho }_s^1+\varvec{\varrho }_s^2\) and \(\varvec{\varrho }_n=M^{1,1}+M^{1,2}+M^{2,0}+M^{2,1}+M^{2,2}\). \(\quad \square \)

Remark 2.9

As can be seen from the proof of Lemma 2.8, we only need positive moments in a to control the region outside of the bulk, where \(\left|\theta \right|\ge a t\). Such moments in a could be replaced by moments in \(\theta \), and thus positive weights in a are not necessary for our result.

Proposition 2.10

There holds that

$$\begin{aligned} \begin{aligned} t^\frac{3}{2}\vert \partial _\theta \partial _a{\widetilde{\Psi }}\vert +t^2(1+a^{-2})\vert \partial _\theta \partial _\theta {\widetilde{\Psi }}\vert&\lesssim N_1, \\ \end{aligned} \end{aligned}$$
(2.30)

and

$$\begin{aligned} \begin{aligned} \frac{ta^2}{1+a^2}\vert \partial _a\partial _a{\widetilde{\Psi }}\vert&\lesssim \Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}\Vert a^{-1}\gamma \Vert _{L^2_{\theta ,a}}+\Vert (a^2+a^{-2})\gamma \Vert _{L^2_{\theta ,a}}^2+t^{-\frac{1}{3}}N_1 \end{aligned} \end{aligned}$$
(2.31)

where

$$\begin{aligned} \begin{aligned} N_1&:=\Vert (a^{-20}+a^{20}+\theta ^{20})\gamma \Vert _{L^2_{\theta ,a}}^2+\Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}^2+\Vert \partial _\theta \gamma \Vert _{L^2_{\theta ,a}}^2. \end{aligned} \end{aligned}$$
(2.32)

Proof of Proposition 2.10

The most important term is the one with the mixed derivative (see Sect. 3.1.2). We recall from (2.23) that

$$\begin{aligned} \begin{aligned} \partial _\theta \partial _a{\widetilde{\Psi }}=-\frac{\mathbf{m}({\widetilde{R}})}{{\widetilde{R}}^2}\left( \partial _\theta \partial _a{\widetilde{R}}-2\frac{\partial _\theta {\widetilde{R}}\partial _a{\widetilde{R}}}{{\widetilde{R}}}\right) -\varvec{\varrho }({\widetilde{R}})\frac{\partial _\theta {\widetilde{R}}\partial _a{\widetilde{R}}}{{\widetilde{R}}^2}. \end{aligned} \end{aligned}$$

On the one hand, using Lemma 2.4, we see that

$$\begin{aligned} \begin{aligned} \left| \frac{\mathbf{m}({\widetilde{R}})}{{\widetilde{R}}^2}\left( \partial _\theta \partial _a{\widetilde{R}}-2\frac{\partial _\theta {\widetilde{R}}\partial _a{\widetilde{R}}}{{\widetilde{R}}}\right) \right|&\lesssim t\frac{\mathbf{m}({\widetilde{R}},t)}{{\widetilde{R}}^3}+\frac{1}{(a^2{\widetilde{R}})^\frac{3}{2}}\frac{\mathbf{m}({\widetilde{R}},t)}{{\widetilde{R}}^\frac{3}{2}}+\frac{1}{(a^2{\widetilde{R}})^\frac{1}{2}}\frac{\mathbf{m}({\widetilde{R}},t)}{{\widetilde{R}}^\frac{3}{2}},\\ \left| \frac{\partial _\theta {\widetilde{R}}(\theta ,a)\partial _a{\widetilde{R}}(\theta ,a)}{{\widetilde{R}}^2(\theta ,a)}\right|&\lesssim \frac{1}{(a^2{\widetilde{R}}(\theta ,a))^\frac{1}{2}}\frac{1}{{\widetilde{R}}^\frac{1}{2}(\theta ,a)}+\frac{t}{{\widetilde{R}}^2(\theta ,a)}, \end{aligned} \end{aligned}$$

and this leads to an acceptable contribution using Lemmas 2.6 and 2.8. We now turn to

$$\begin{aligned} \begin{aligned} \partial _\theta \partial _\theta {\widetilde{\Psi }}=-\frac{\mathbf{m}({\widetilde{R}})}{{\widetilde{R}}^2}\left( \partial _\theta \partial _\theta {\widetilde{R}}-2\frac{(\partial _\theta {\widetilde{R}})^2}{{\widetilde{R}}}\right) -\varvec{\varrho }({\widetilde{R}})\frac{(\partial _\theta {\widetilde{R}})^2}{{\widetilde{R}}^2}. \end{aligned} \end{aligned}$$

Using Lemma 2.4 and (2.12), we see that

$$\begin{aligned} \begin{aligned} \vert \partial _\theta \partial _\theta {\widetilde{R}}\vert +\frac{\vert \partial _\theta {\widetilde{R}}\vert ^2}{{\widetilde{R}}}\lesssim {\widetilde{R}}^{-1}(\theta ,a),\qquad a^{-2}\vert \partial _\theta \partial _\theta {\widetilde{R}}\vert +a^{-2}\frac{\vert \partial _\theta {\widetilde{R}}\vert ^2}{{\widetilde{R}}}\lesssim 1, \end{aligned} \end{aligned}$$

and this term can be handled as before using Lemmas 2.6 and 2.8. Finally, we compute that

$$\begin{aligned} \begin{aligned} \partial _a\partial _a{\widetilde{\Psi }}=-\frac{\mathbf{m}({\widetilde{R}})}{{\widetilde{R}}^2}\left( \partial _a\partial _a{\widetilde{R}}-2\frac{(\partial _a{\widetilde{R}})^2}{{\widetilde{R}}}\right) -\varvec{\varrho }({\widetilde{R}})\frac{(\partial _a{\widetilde{R}})^2}{{\widetilde{R}}^2}. \end{aligned} \end{aligned}$$

Using (2.13) and (2.14), we find that

$$\begin{aligned} \begin{aligned} a^2\vert \partial _a\partial _a{\widetilde{R}}\vert +a^2\vert \frac{(\partial _a{\widetilde{R}})^2}{{\widetilde{R}}}\vert&\lesssim \frac{t^2}{{\widetilde{R}}^2}+\frac{t}{a{\widetilde{R}}}+\frac{\ln \langle \frac{a^2}{q}{\widetilde{R}}\rangle }{\frac{a^2}{q}{\widetilde{R}}}{\widetilde{R}}+\left( \frac{\ln \langle \frac{a^2}{q}{\widetilde{R}}\rangle }{\frac{a^2}{q}{\widetilde{R}}}\right) ^2{\widetilde{R}}+a^2\frac{t^2}{{\widetilde{R}}}\\&\lesssim (1+a^2)\left( 1+{\widetilde{R}}(\theta ,a)+t^2/{\widetilde{R}}\right) \end{aligned} \end{aligned}$$

and that

$$\begin{aligned} \begin{aligned} a^2\frac{(\partial _a{\widetilde{R}})^2}{{\widetilde{R}}^2}&\lesssim a^2\frac{t^2}{{\widetilde{R}}^2}+\left( \frac{\ln \langle \frac{a^2}{q}{\widetilde{R}}\rangle }{\frac{a^2}{q}{\widetilde{R}}}\right) ^2\lesssim 1+\frac{a^2t^2}{{\widetilde{R}}^2}. \end{aligned} \end{aligned}$$

Using Lemma 2.6, we find that

$$\begin{aligned} \begin{aligned} a^2\frac{\mathbf{m}({\widetilde{R}})}{{\widetilde{R}}^2}\left| \partial _a\partial _a{\widetilde{R}}-2\frac{(\partial _a{\widetilde{R}})^2}{{\widetilde{R}}}\right|&\lesssim \frac{1+a^2}{t}\left( \Vert (a^{-\frac{1}{2}}+a^{-\frac{3}{2}})\gamma \Vert _{L^2_{\theta ,a}}^2+t^{-\frac{1}{4}}\Vert (a^{-\frac{7}{2}}+\theta ^{\frac{3}{2}}+\theta ^{\frac{7}{2}})\gamma \Vert _{L^2_{\theta ,a}}^2\right) \end{aligned} \end{aligned}$$

while using Lemma 2.8, we find that

$$\begin{aligned} \begin{aligned} a^2\varvec{\varrho }({\widetilde{R}})\frac{(\partial _a{\widetilde{R}})^2}{{\widetilde{R}}^2}&\lesssim \left( 1+\frac{a^2t^2}{{\widetilde{R}}^2}\right) \varvec{\varrho }({\widetilde{R}}(\theta ,a))\\&\lesssim \frac{1+a^2}{t}\left[ \Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}\Vert (a^{-1}+a^{-3})\gamma \Vert _{L^2_{\theta ,a}}+\Vert (a^{-\frac{1}{2}}+a^{-\frac{3}{2}})\gamma \Vert _{L^2_{\theta ,a}}^2\right] +\frac{1+a^2}{t^\frac{4}{3}}N_1. \end{aligned} \end{aligned}$$

This finishes the proof. \(\quad \square \)

3 Nonlinear Analysis

In this section we consider the full nonlinear equation (1.16),

$$\begin{aligned} \partial _t\gamma =\lambda \{{\widetilde{\Psi }},\gamma \}. \end{aligned}$$
(3.1)
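
Here and below the Poisson bracket in the \((\theta ,a)\) variables is understood with the sign convention implicit in the commutation identities (3.2) below, namely

$$\begin{aligned} \{F,G\}=\partial _aF\,\partial _\theta G-\partial _\theta F\,\partial _aG, \end{aligned}$$

so that (3.1) is the transport equation along the characteristics \(\dot{\theta }=-\lambda \partial _a{\widetilde{\Psi }}\), \(\dot{a}=\lambda \partial _\theta {\widetilde{\Psi }}\).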

We first establish the global existence of strong solutions via a bootstrap argument in Sect. 3.1, and then demonstrate the modified scattering asymptotics in Sect. 3.2. This establishes Proposition 1.5 and Theorem 1.6.

3.1 Bootstraps and global existence

We first propagate global bounds using energy estimates. The key property we will use is that the integral of a Poisson bracket vanishes. Commuting with appropriate operators gives the equations

$$\begin{aligned} \begin{aligned} \partial _t(a^p\gamma )-\lambda \left\{ {\widetilde{\Psi }},a^p\gamma \right\}&=-\frac{p\lambda }{a}\{{\widetilde{\Psi }},a\}a^p\gamma =p\lambda \partial _\theta {\widetilde{\Psi }}\cdot a^{p-1}\gamma ,\\ \partial _t(\theta ^p\gamma )-\lambda \left\{ {\widetilde{\Psi }},\theta ^p\gamma \right\}&=-p\lambda \partial _a{\widetilde{\Psi }}\cdot \theta ^{p-1}\gamma ,\\ \partial _t(\partial _\beta \gamma )-\lambda \left\{ {\widetilde{\Psi }},\partial _\beta \gamma \right\}&=\lambda \{\partial _\beta {\widetilde{\Psi }},\gamma \},\qquad \beta \in \{\theta ,a\}. \end{aligned} \end{aligned}$$
(3.2)
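
For later use we record the elementary identity behind these energy estimates: with the convention above, the bracket is in divergence form,

$$\begin{aligned} \{F,G\}=\partial _a\big (F\,\partial _\theta G\big )-\partial _\theta \big (F\,\partial _aG\big ),\qquad \iint \{F,G\}\,d\theta da=0 \end{aligned}$$

for sufficiently decaying functions, and in particular \(\iint f\{{\widetilde{\Psi }},f\}\,d\theta da=\frac{1}{2}\iint \{{\widetilde{\Psi }},f^2\}\,d\theta da=0\). Applying this with \(f\in \{a^p\gamma ,\theta ^p\gamma ,\partial _\beta \gamma \}\) shows that only the right-hand sides of (3.2) contribute to the growth of the corresponding \(L^2_{\theta ,a}\) norms.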

The key point in the bootstrap estimates is that moments in a can be propagated easily and that the terms with the slowest decay involve only these moments (see \(\mathbf{m}_s\) in (2.25) and \(\varvec{\varrho }_s\) in (2.28)). Interestingly, we will see in Sect. 3.1.1 that the moments can be bootstrapped on their own, allowing for global bounds on weak solutions. These moment bounds then allow us to propagate another bootstrap for higher regularity. For simplicity, we only propagate the first order derivatives in Sect. 3.1.2.

3.1.1 Moment Bootstrap

It turns out that control of the moments can be bootstrapped independently of any derivative bound.

Lemma 3.1

Let \(p\ge 2\) and assume that \(\gamma \) solves (1.16) on an interval \(0\le t\le T\) and satisfies

$$\begin{aligned} \begin{aligned} \Vert \left( a^{-3p}+a^p+\theta ^p\right) \gamma (t=0)\Vert _{L^2_{\theta ,a}}&\le \varepsilon _0,\\ \Vert \left( a^{-3p}+a^p+\theta ^p\right) \gamma (t)\Vert _{L^2_{\theta ,a}}&\le \varepsilon _1\langle t\rangle ^\delta \end{aligned} \end{aligned}$$
(3.3)

Then there holds that

$$\begin{aligned} \begin{aligned} \Vert \left( a^{-3p}+a^p\right) \gamma (t)\Vert _{L^2_{\theta ,a}}&\lesssim \varepsilon _0,\qquad \Vert \theta ^p\gamma (t)\Vert _{L^2_{\theta ,a}}\lesssim \varepsilon _0+\varepsilon _1\langle t\rangle ^{\varepsilon _0}. \end{aligned} \end{aligned}$$

Proof of Lemma 3.1

The moments can be readily estimated. By (3.2) we have that, for \(q\in {\mathbb {R}}\)

$$\begin{aligned} \frac{1}{2}\frac{d}{dt}\left\| a^q\gamma \right\| _{L^2_{\theta ,a}}^2\lesssim \Vert a^{-1}\partial _\theta {\widetilde{\Psi }}\Vert _{L^\infty }\left\| a^q\gamma \right\| _{L^2_{\theta ,a}}^2. \end{aligned}$$

Using (2.26), the bootstrap assumptions (3.3), and Gronwall's inequality, we find that

$$\begin{aligned} \begin{aligned} \Vert a^q\gamma (t)\Vert _{L^2_{\theta ,a}}^2&\lesssim \Vert a^q\gamma (0)\Vert _{L^2_{\theta ,a}}^2\lesssim \varepsilon _0^2. \end{aligned} \end{aligned}$$
(3.4)
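
Here and below, the Gronwall step takes the following schematic form: if \(\frac{d}{dt}\Vert a^q\gamma \Vert _{L^2_{\theta ,a}}^2\le h(t)\Vert a^q\gamma \Vert _{L^2_{\theta ,a}}^2\) with \(h\ge 0\) integrable in time (which is what (2.26) together with the bootstrap assumptions (3.3) provides here), then

$$\begin{aligned} \Vert a^q\gamma (t)\Vert _{L^2_{\theta ,a}}^2\le \Vert a^q\gamma (0)\Vert _{L^2_{\theta ,a}}^2\exp \Big (\int _0^th(s)\,ds\Big )\lesssim \Vert a^q\gamma (0)\Vert _{L^2_{\theta ,a}}^2, \end{aligned}$$

which is (3.4).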

Similarly, for \(q\ge 0\), using (3.2) and (2.27), we find that

$$\begin{aligned} \begin{aligned} \frac{1}{2}\frac{d}{dt}\left\| \theta ^q\gamma \right\| _{L^2_{\theta ,a}}^2&\lesssim \iint \vert \partial _a{\widetilde{\Psi }}(\theta ,a)\vert \cdot \vert \theta \vert ^q\gamma \cdot \vert \theta \vert ^{q-1}\gamma \,dad\theta \\&\lesssim t^{-1}\Vert (1+a^{-1})\gamma \Vert _{L^2_{\theta ,a}}^2\cdot \left[ \Vert \theta ^q\gamma \Vert _{L^2_{\theta ,a}}^2+\Vert a^{-3}\theta ^{q-1}\gamma \Vert _{L^2_{\theta ,a}}\Vert \theta ^q\gamma \Vert _{L^2_{\theta ,a}}\right] \\&\quad +t^{-\frac{5}{4}}\Vert \theta ^2\gamma \Vert _{L^2_{\theta ,a}}\Vert \theta ^q\gamma \Vert _{L^2_{\theta ,a}}\Vert \theta ^{q-1}\gamma \Vert _{L^2_{\theta ,a}} \end{aligned} \end{aligned}$$

and we can again apply Gronwall's estimate; here the leading coefficient is of size \(\varepsilon _0^2t^{-1}\) thanks to (3.4), which accounts for the slow growth \(\langle t\rangle ^{\varepsilon _0}\) in the bound for \(\Vert \theta ^p\gamma \Vert _{L^2_{\theta ,a}}\). \(\quad \square \)

3.1.2 Control on the derivatives

We now show that we can obtain strong solutions by bootstrapping control of derivatives. It turns out that we will also need some moments of first derivatives. Given a weight function \(\omega (\theta ,a)\), we define

$$\begin{aligned} \omega ^{(1)}(\theta ,a):=(a+a^{-1})\omega (\theta ,a),\qquad \omega ^{(2)}(\theta ,a):=a\omega (\theta ,a), \end{aligned}$$

and we compute that

$$\begin{aligned} \begin{aligned}&\partial _t(\omega ^{(1)}\gamma _\theta )-\lambda \{{\widetilde{\Psi }},\omega ^{(1)}\gamma _\theta \} -\lambda \partial _\theta \partial _a{\widetilde{\Psi }}\cdot \omega ^{(1)}\gamma _\theta +\lambda \frac{a+a^{-1}}{a}\partial _\theta \partial _\theta {\widetilde{\Psi }}\cdot \omega ^{(2)}\gamma _a\\&\qquad =\frac{\lambda }{a}\partial _\theta {\widetilde{\Psi }}\cdot a\partial _a\omega ^{(1)}\gamma _\theta -\lambda \partial _a{\widetilde{\Psi }}\cdot \partial _\theta \omega ^{(1)}\gamma _\theta ,\\&\partial _t(\omega ^{(2)}\gamma _a)-\lambda \{{\widetilde{\Psi }},\omega ^{(2)}\gamma _a\}+\lambda \partial _\theta \partial _a{\widetilde{\Psi }}\cdot \omega ^{(2)}\gamma _a -\lambda \frac{a^2}{1+a^2}\partial _a\partial _a{\widetilde{\Psi }}\cdot \omega ^{(1)}\gamma _\theta \\&\qquad =\frac{\lambda }{a}\partial _\theta {\widetilde{\Psi }}\cdot a\partial _a\omega ^{(2)}\gamma _a-\lambda \partial _a{\widetilde{\Psi }}\cdot \partial _\theta \omega ^{(2)}\gamma _a.\\ \end{aligned} \end{aligned}$$
(3.5)
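
For instance, the first identity in (3.5) is obtained by differentiating (3.1) in \(\theta \), multiplying by \(\omega ^{(1)}\) and using the Leibniz rule for the bracket:

$$\begin{aligned} \begin{aligned} \partial _t\gamma _\theta -\lambda \{{\widetilde{\Psi }},\gamma _\theta \}&=\lambda \{\partial _\theta {\widetilde{\Psi }},\gamma \}=\lambda \big (\partial _\theta \partial _a{\widetilde{\Psi }}\cdot \gamma _\theta -\partial _\theta \partial _\theta {\widetilde{\Psi }}\cdot \gamma _a\big ),\\ \{{\widetilde{\Psi }},\omega ^{(1)}\gamma _\theta \}&=\omega ^{(1)}\{{\widetilde{\Psi }},\gamma _\theta \}+\big (\partial _a{\widetilde{\Psi }}\,\partial _\theta \omega ^{(1)}-\partial _\theta {\widetilde{\Psi }}\,\partial _a\omega ^{(1)}\big )\gamma _\theta , \end{aligned} \end{aligned}$$

together with \(\omega ^{(1)}\gamma _a=\frac{a+a^{-1}}{a}\omega ^{(2)}\gamma _a\); the second identity follows in the same way after differentiating in a.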

We will need this when

$$\begin{aligned} \omega \in \mathcal {I}_{0}:=\{1,\,a^{-3},\,a^{-6},\,\theta ,\,\theta a^{-3},\,\theta ^2\}. \end{aligned}$$
(3.6)

More generally, one can consider \(\omega _{p,q}:=\theta ^pa^q\) for \(p\in {\mathbb {N}}_{0}\) and \(q\in {\mathbb {Z}}\). Then the properties we need are

$$\begin{aligned} \vert \omega ^{(1)}_{p,q}\vert \ge \vert \omega _{p,q}\vert ,\qquad \vert a\partial _a\omega ^{(j)}_{p,q}\vert \lesssim \vert \omega ^{(j)}_{p,q}\vert ,\qquad \vert \partial _\theta \omega ^{(j)}_{p,q}\vert \lesssim p\omega ^{(j)}_{p-1,q}, \end{aligned}$$
(3.7)

and that the set of weights \(\mathcal {I}=\{\omega _{p,q}\}_{p,q}\) we consider satisfies the induction property

$$\begin{aligned} \begin{aligned} \omega _{p,q}\in \mathcal {I}\,\,\Rightarrow \omega _{p-1,q-3},\omega _{p-1,q}\in \mathcal {I}, \end{aligned} \end{aligned}$$
(3.8)

where we make the notational convention that \(\omega _{p,q}=0\) if \(p<0\). We call such sets of weights compatible.
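
For the model weights these properties can be checked directly: for \(\omega _{p,q}=\theta ^pa^q\) one computes

$$\begin{aligned} a\partial _a\omega ^{(1)}_{p,q}=\theta ^p\big ((q+1)a^{q+1}+(q-1)a^{q-1}\big ),\qquad \partial _\theta \omega ^{(1)}_{p,q}=p\,\omega ^{(1)}_{p-1,q}, \end{aligned}$$

so that \(\vert a\partial _a\omega ^{(1)}_{p,q}\vert \le (\vert q\vert +1)\vert \omega ^{(1)}_{p,q}\vert \), and similarly for \(\omega ^{(2)}_{p,q}\), which gives (3.7). Moreover, the set \(\mathcal {I}_0\) in (3.6) is compatible: for instance \(\theta ^2=\omega _{2,0}\) requires \(\omega _{1,-3}=\theta a^{-3}\) and \(\omega _{1,0}=\theta \), while \(\theta a^{-3}=\omega _{1,-3}\) requires \(\omega _{0,-6}=a^{-6}\) and \(\omega _{0,-3}=a^{-3}\), all of which belong to \(\mathcal {I}_0\).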

Proposition 3.2

Let \(\mathcal {I}\) be a compatible finite set of weights. Assume that \(\gamma \) solves (1.16) for \(0\le t\le T\) and satisfies, for any weight \(\omega _{p,q}\in \mathcal {I}\),

$$\begin{aligned} \begin{aligned}&\Vert \omega ^{(1)}_{p,q}\partial _\theta \gamma (t=0)\Vert _{L^2_{\theta ,a}}{+}\Vert \omega ^{(2)}_{p,q}\partial _a\gamma (t{=}0)\Vert _{L^2_{\theta ,a}}{+}\Vert \left( a^{20}{+}a^{- 20}{+}\theta ^{20}\right) \gamma (t=0)\Vert _{L^2_{\theta ,a}} \le \varepsilon _0\\&\qquad \Vert \left( a^{20}+a^{-20}+\theta ^{20}\right) \gamma (t)\Vert _{L^2_{\theta ,a}}\le \varepsilon _1\langle t\rangle ^\delta ,\\&\qquad \Vert \omega ^{(1)}_{p,q}\partial _\theta \gamma (t)\Vert _{L^2_{\theta ,a}}+\Vert \omega ^{(2)}_{p,q}\partial _a\gamma (t)\Vert _{L^2_{\theta ,a}}\le \varepsilon _1\langle t\rangle ^{(p+1)\delta }. \end{aligned} \end{aligned}$$
(3.9)

Then the following stronger bounds hold for any weight \(\omega _{p,q}\in \mathcal {I}\):

$$\begin{aligned} \begin{aligned} \Vert \omega ^{(1)}_{p,q}\partial _\theta \gamma \Vert _{L^2_{\theta ,a}}&\lesssim \varepsilon _0+\varepsilon _1^\frac{3}{2}\langle t\rangle ^{p\delta },\qquad \Vert \omega ^{(2)}_{p,q}\partial _a\gamma \Vert _{L^2_{\theta ,a}}\lesssim \varepsilon _0+\varepsilon _1^\frac{3}{2}\langle t\rangle ^{(p+1)\delta }. \end{aligned} \end{aligned}$$
(3.10)

In particular, the case of \(\omega =1\) gives the result of Proposition 1.5.

Proof of Proposition 3.2

Writing \(\omega =\omega _{p,q}\) for simplicity of notation and using (3.5), we find that

$$\begin{aligned} \begin{aligned} \frac{d}{dt}\Vert \omega ^{(1)}\gamma _\theta \Vert _{L^2_{\theta ,a}}^2&\lesssim \iint \left[ \vert \partial _\theta \partial _a{\widetilde{\Psi }}\vert +\vert \frac{1}{a}\partial _\theta {\widetilde{\Psi }}\vert \right] \cdot (\omega ^{(1)}\gamma _\theta )^2\, d\theta da\\&\quad +\!\iint (1+a^{-2})\vert \partial _\theta \partial _\theta {\widetilde{\Psi }}\vert \cdot \vert \omega ^{(2)}\gamma _a\cdot \omega ^{(1)}\gamma _\theta \vert \, d\theta da\\&\quad +\iint \vert \partial _a{\widetilde{\Psi }}\vert \cdot \vert \partial _\theta \omega ^{(1)}\gamma _\theta \cdot \omega ^{(1)}\gamma _\theta \vert \, d\theta da, \end{aligned} \end{aligned}$$

where we have used \(\vert a\partial _a\omega ^{(j)}\vert \lesssim \vert \omega ^{(j)}\vert \) from (3.7). The first two terms on each right-hand side lead directly to a Gronwall bootstrap using (2.26) and (2.30). The last term is not present when \(p=0\), since then \(\partial _\theta \omega ^{(1)}=0\). If \(p\ge 1\), we may use (3.7) and the induction property (3.8) together with (2.27) to proceed as follows:

$$\begin{aligned} \begin{aligned}&\iint \vert \partial _a{\widetilde{\Psi }}\vert \cdot \vert \partial _\theta \omega ^{(1)}_{p,q}\gamma _\theta \cdot \omega ^{(1)}_{p,q}\gamma _\theta \vert d\theta da\\&\quad \lesssim t^{-1}\Vert (1+a^{-1})\gamma \Vert _{L^2_{\theta ,a}}^2\Vert \omega ^{(1)}_{p,q}\gamma _\theta \Vert _{L^2_{\theta ,a}}\Vert \omega _{p-1,q}^{(1)}\gamma _\theta \Vert _{L^2_{\theta ,a}}\\&\qquad +t^{-1}\Vert \gamma \Vert _{L^2_{\theta ,a}}^2\left[ \Vert \omega ^{(1)}_{p,q}\gamma _\theta \Vert _{L^2_{\theta ,a}}^2+\Vert \omega ^{(1)}_{p-1,q-3}\gamma _\theta \Vert _{L^2_{\theta ,a}}^2\right] \\&\qquad +t^{-\frac{5}{4}}\Vert \theta ^2\gamma \Vert _{L^2_{\theta ,a}}^2\Vert \omega ^{(1)}_{p,q}\gamma _\theta \Vert _{L^2_{\theta ,a}}\Vert \omega ^{(1)}_{p-1,q}\gamma _\theta \Vert _{L^2_{\theta ,a}}, \end{aligned} \end{aligned}$$

and we see that all terms lead to (3.10). Similarly, we compute that

$$\begin{aligned} \begin{aligned} \frac{d}{dt}\Vert \omega ^{(2)}\gamma _a\Vert _{L^2_{\theta ,a}}^2&\lesssim \iint \left[ \vert \partial _\theta \partial _a{\widetilde{\Psi }}\vert +\vert \frac{1}{a}\partial _\theta {\widetilde{\Psi }}\vert \right] \cdot (\omega ^{(2)}\gamma _a)^2 d\theta da\\&\quad +\iint \frac{a^2}{1+a^2}\vert \partial _a\partial _a{\widetilde{\Psi }}\vert \cdot \vert \omega ^{(2)}\gamma _a\cdot \omega ^{(1)}\gamma _\theta \vert d\theta da\\&\quad +\iint \vert \partial _a{\widetilde{\Psi }}\vert \cdot \vert \partial _\theta \omega ^{(2)}\gamma _a\cdot \omega ^{(2)}\gamma _a\vert d\theta da. \end{aligned} \end{aligned}$$

Here the only new term is the second one on the right hand side. For this term, we use (2.31) to get

$$\begin{aligned} \begin{aligned}&\iint \frac{a^2}{1+a^2}\vert \partial _a\partial _a{\widetilde{\Psi }}\vert \cdot \vert \omega ^{(2)}\gamma _a\cdot \omega ^{(1)}\gamma _\theta \vert d\theta da\\&\quad \lesssim t^{-1}\Vert a\gamma _a\Vert _{L^2_{\theta ,a}}\Vert a^{-1}\gamma \Vert _{L^2_{\theta ,a}}\Vert \omega ^{(2)}\gamma _a\Vert _{L^2_{\theta ,a}} \Vert \omega ^{(1)}\gamma _\theta \Vert _{L^2_{\theta ,a}}\\&\qquad +t^{-1}\Vert (a^2+a^{-2})\gamma \Vert _{L^2_{\theta ,a}}^2\Vert \omega ^{(2)}\gamma _a\Vert _{L^2_{\theta ,a}}\Vert \omega ^{(1)}\gamma _\theta \Vert _{L^2_{\theta ,a}}\\&\qquad +t^{-\frac{5}{4}}N_1\cdot \Vert \omega ^{(2)}\gamma _a \Vert _{L^2_{\theta ,a}}\Vert \omega ^{(1)}\gamma _\theta \Vert _{L^2_{\theta ,a}} \end{aligned} \end{aligned}$$

and since we have already controlled the \(\theta \) derivative, we may use (3.10) to obtain that

$$\begin{aligned} \begin{aligned} \Vert a\gamma _a\Vert _{L^2_{\theta ,a}}\Vert \omega ^{(1)}_{p,q} \gamma _\theta \Vert _{L^2_{\theta ,a}} \lesssim \varepsilon _1^2\langle t\rangle ^{(p+1)\delta }, \end{aligned} \end{aligned}$$

which gives an acceptable contribution with Gronwall’s estimate. \(\quad \square \)

3.2 Asymptotic behavior

The analysis in this section is partially inspired by [21, 31].

3.2.1 Weak-strong limit and asymptotic electric field

Before we obtain strong convergence of the particle distribution, we first need weak convergence of “asymptotic functions” which are defined in terms of averages along linearized trajectories. Given a bounded measurable function \(\tau \), we define

$$\begin{aligned} \begin{aligned} \langle \tau \rangle (t)&:=\iint \tau (\alpha )\gamma ^2(\vartheta ,\alpha ,t)d\vartheta d\alpha . \end{aligned} \end{aligned}$$
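
For instance, taking \(\tau \equiv 1\) recovers a quantity that is exactly conserved by (3.1), since

$$\begin{aligned} \frac{d}{dt}\langle 1\rangle (t)=2\lambda \iint \gamma \{{\widetilde{\Psi }},\gamma \}\,d\vartheta d\alpha =\lambda \iint \{{\widetilde{\Psi }},\gamma ^2\}\,d\vartheta d\alpha =0, \end{aligned}$$

while for a general \(\tau \) the averages are only asymptotically conserved.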

The following lemma states that these averages converge.

Lemma 3.3

Assume that \(\gamma \) solves (1.16) for \(0\le t\le T\) and satisfies the conclusions of Lemma 3.1 for \(p=2\) and the conclusions of Proposition 3.2. Given any bounded function \(\tau (a)\), the limit

$$\begin{aligned} \begin{aligned} \langle \tau \rangle _\infty :=\lim _{t\rightarrow \infty }\iint \tau (\alpha )\gamma ^2(\vartheta ,\alpha ,t)d\vartheta d\alpha \end{aligned} \end{aligned}$$

exists and satisfies

$$\begin{aligned} \vert \langle \tau \rangle (t)-\langle \tau \rangle _\infty \vert \lesssim \varepsilon _1^4t^{-\frac{1}{4}}. \end{aligned}$$
(3.11)

Proof of Lemma 3.3

Using (1.16), we see that

$$\begin{aligned} \begin{aligned} \left| \frac{d}{dt}\langle \tau \rangle (t)\right|&\lesssim \left|\iint \tau (\alpha ) \partial _\theta {\widetilde{\Psi }}\cdot \gamma (\vartheta ,\alpha )\cdot \partial _a \gamma (\vartheta ,\alpha )d\vartheta d\alpha \right|\\&\quad +\left|\iint \tau (\alpha )\partial _\alpha \partial _\theta {\widetilde{\Psi }}\cdot \gamma (\vartheta ,\alpha ) \cdot \partial _\theta \gamma (\vartheta ,\alpha )d\vartheta d\alpha \right|\\&\lesssim \left[ \Vert a^{-1}\partial _\theta {\widetilde{\Psi }}\Vert _{L^\infty _{\theta ,a}}\Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}} +\Vert \partial _\theta \partial _a{\widetilde{\Psi }}\Vert _{L^\infty _{\theta ,a}} \left\| \partial _\theta \gamma \right\| _{L^2_{\theta ,a}}\right] \Vert \tau (\alpha )\gamma (\vartheta ,\alpha ) \Vert _{L^2_{\theta ,a}}. \end{aligned} \end{aligned}$$

Proposition 2.7, Proposition 2.10 and (3.9) then show that the right-hand side is integrable in time, so that \(\langle \tau \rangle (t)\) satisfies the Cauchy criterion as \(t\rightarrow \infty \) and the limit \(\langle \tau \rangle _\infty \) exists. Integrating the time derivative from \(t\) to infinity then gives the bound (3.11).\(\quad \square \)

The convergence of the scattering data allows us to define the asymptotic electric potential and electric field

$$\begin{aligned} \begin{aligned} \Psi _\infty (a)&:=\lim _{t\rightarrow \infty }\iint \frac{1}{\max \{a,\alpha \}}\gamma ^2(\vartheta ,\alpha ,t)d\vartheta d\alpha ,\\ \mathcal {E}_\infty (a)&:=\frac{1}{a^2}\lim _{t\rightarrow \infty }\iint {\mathfrak {1}}_{\{\alpha \le a\}}\gamma ^2(\vartheta ,\alpha ,t)d\vartheta d\alpha . \end{aligned} \end{aligned}$$

Informally, we expect that

$$\begin{aligned} \begin{aligned} {\widetilde{\Psi }}&=\frac{1}{t}\Psi _\infty (a)+o(t^{-1}),\qquad {\widetilde{E}}(\theta ,a,t)=\frac{1}{t}\mathcal {E}_\infty (a)+o(t^{-1}). \end{aligned} \end{aligned}$$
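
Note that, at least formally, differentiating under the limit relates these two quantities:

$$\begin{aligned} \partial _a\Psi _\infty (a)=\lim _{t\rightarrow \infty }\iint \partial _a\Big (\frac{1}{\max \{a,\alpha \}}\Big )\gamma ^2(\vartheta ,\alpha ,t)\,d\vartheta d\alpha =-\frac{1}{a^2}\lim _{t\rightarrow \infty }\iint {\mathfrak {1}}_{\{\alpha \le a\}}\gamma ^2(\vartheta ,\alpha ,t)\,d\vartheta d\alpha =-\mathcal {E}_\infty (a), \end{aligned}$$

which is consistent with the informal expansion above and with Lemma 3.4 below.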

Under our assumptions we can prove the following:

Lemma 3.4

Under the assumptions of Lemma 3.3, there holds that

$$\begin{aligned} \begin{aligned} \vert \mathcal {E}_\infty (a)\vert \lesssim \varepsilon _1^2,\qquad {\mathfrak {1}}_{\mathcal {B}_*}\cdot \left| \partial _a{\widetilde{\Psi }}(\theta ,a,t)+\frac{1}{t}\mathcal {E}_\infty (a)\right|&\lesssim t^{-\frac{6}{5}}\varepsilon _1^2, \end{aligned} \end{aligned}$$

where \(\mathcal {B}_*\) is a smaller version of the bulk

$$\begin{aligned} \begin{aligned} \mathcal {B}_*:=\{(\theta ,a):\,\, \vert \theta \vert \le t^\frac{1}{4},\,\, t^{-\frac{1}{4}}\le a\le t^\frac{1}{4}\}\subset \mathcal {B}. \end{aligned} \end{aligned}$$

Proof of Lemma 3.4

The first estimate follows from the uniform bound

$$\begin{aligned} \begin{aligned} \frac{1}{a^2}\left| \iint {\mathfrak {1}}_{\{\alpha \le a\}}\gamma ^2(\vartheta ,\alpha ,t)d\vartheta d\alpha \right|&\le \iint {\mathfrak {1}}_{\{\alpha \le a\}}\alpha ^{-2}\gamma ^2(\vartheta ,\alpha ,t)d\vartheta d\alpha \le \Vert a^{-1}\gamma \Vert _{L^2_{\theta ,a}}^2\lesssim \varepsilon _1^2. \end{aligned} \end{aligned}$$

We now turn to the second estimate. Recall from the proof of Proposition 2.7 that

$$\begin{aligned} \begin{aligned} \partial _a{\widetilde{\Psi }}(\theta ,a,t)&=-\frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^2(\theta ,a)}\left[ t\partial _\theta R(\theta +ta,a)+\partial _aR(\theta +ta,a)\right] =-t\frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^2(\theta ,a)}+\mathcal {R}_1,\\ {\mathfrak {1}}_{\mathcal {B}}\vert \mathcal {R}_1\vert&\le \frac{\mathbf{m}({\widetilde{R}}(\theta ,a),t)}{{\widetilde{R}}^2(\theta ,a)}\left( \frac{tq}{a^2{\widetilde{R}}(\theta ,a)}+\frac{q}{a^3}\ln \langle \frac{a^2}{q}{\widetilde{R}}(\theta ,a)\rangle \right) \lesssim \varepsilon _1^2t^{-\frac{6}{5}}, \end{aligned} \end{aligned}$$

where we have used Lemma 2.4. Furthermore, with \(\mathcal {E}(a,t):=a^{-2}\iint {\mathfrak {1}}_{\{\alpha \le a\}}\gamma ^2(\vartheta ,\alpha ,t)\,d\vartheta d\alpha \), we have

$$\begin{aligned} \begin{aligned} t\frac{\mathbf{m}(at,t)}{(at)^2}&=\frac{1}{a^2t}\iint {\mathfrak {1}}_{\{R(\vartheta +t\alpha ,\alpha )\le at\}}\gamma ^2(\vartheta ,\alpha ,t)\, d\vartheta d\alpha =\frac{1}{t}\left[ \mathcal {E}(a,t)+\mathcal {R}_2\right] ,\\ \vert \mathcal {R}_2\vert&\le \frac{1}{a^2}\iint {\mathfrak {1}}_{\mathcal {S}_1\cup \mathcal {S}_2}\gamma ^2(\vartheta ,\alpha ,t)\, d\vartheta d\alpha , \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} \mathcal {S}_1\cup \mathcal {S}_2&=\{R(\vartheta +t\alpha ,\alpha )\le at\}\triangle \{\alpha t\le at\},\\ \mathcal {S}_1&:=\{\alpha \le a,\,\, R(\vartheta +t\alpha ,\alpha )\ge at\},\qquad \mathcal {S}_2:=\{\alpha \ge a,\,\, R(\vartheta +t\alpha ,\alpha )\le at\}. \end{aligned} \end{aligned}$$

From (2.9), there holds that

$$\begin{aligned} \begin{aligned} \vert \vartheta +t\alpha \vert \ge {\widetilde{R}}(\vartheta ,\alpha )\ge \vert \vartheta +t\alpha \vert -\frac{q}{a^2}\ln \langle \frac{a^2\vert \vartheta +t\alpha \vert }{q}\rangle , \end{aligned} \end{aligned}$$

so that on \(\mathcal {B}_*\)

$$\begin{aligned} \vert {\widetilde{R}}(\vartheta ,\alpha )-t\alpha \vert \le Ct^{\frac{1}{4}} \end{aligned}$$
(3.12)

for some universal constant \(C>0\). Therefore we have

$$\begin{aligned} \begin{aligned} \mathcal {B}_*\cap \{\mathcal {S}_1\cup \mathcal {S}_2\}\subset \{a-Ct^{-\frac{1}{4}}\le \alpha \le a+Ct^{-\frac{1}{4}}\}, \end{aligned} \end{aligned}$$

so that, using (3.9),

$$\begin{aligned} \begin{aligned} {\mathfrak {1}}_{\mathcal {B}_*}\cdot \vert \mathcal {R}_2\vert&\lesssim \frac{1}{a^2}{\mathfrak {1}}_{\mathcal {B}_*}\cdot \iint {\mathfrak {1}}_{\mathcal {B}^c_*}\gamma ^2(\vartheta ,\alpha )\, d\vartheta d\alpha +\frac{1}{a^2}\iint {\mathfrak {1}}_{\{\vert \alpha -a\vert \le Ct^{-\frac{1}{4}}\}}\gamma ^2(\vartheta ,\alpha )\, d\vartheta d\alpha \\&\lesssim t^{\frac{1}{2}}\iint {\mathfrak {1}}_{\mathcal {B}^c_*}\gamma ^2(\vartheta ,\alpha )\, d\vartheta d\alpha +\iint {\mathfrak {1}}_{\{\vert \alpha -a\vert \le Ct^{-\frac{1}{4}}\}}\left( \frac{\alpha }{a}\right) ^2\alpha ^{-2}\gamma ^2(\vartheta ,\alpha )\, d\vartheta d\alpha \\&\lesssim t^{-\frac{1}{5}}\varepsilon _1^2. \end{aligned} \end{aligned}$$

Finally, using (3.12) with Lemma 2.6 and Lemma 2.8,

$$\begin{aligned} \begin{aligned} {\mathfrak {1}}_{\mathcal {B}_*}\left| \frac{\mathbf{m}({\widetilde{R}}(\theta ,a))}{{\widetilde{R}}^2(\theta ,a)}-\frac{\mathbf{m}(at)}{a^2t^2}\right|&\le {\mathfrak {1}}_{\mathcal {B}_*}\cdot \frac{\vert \mathbf{m}({\widetilde{R}}(\theta ,a))-\mathbf{m}(at)\vert }{a^2t^2}+{\mathfrak {1}}_{\mathcal {B}_*}\cdot \frac{\mathbf{m}({\widetilde{R}}(\theta ,a))}{{\widetilde{R}}^2(\theta ,a)}\left| 1-\frac{{\widetilde{R}}^2(\theta ,a)}{a^2t^2}\right| \\&\lesssim {\mathfrak {1}}_{\mathcal {B}_*}\cdot \frac{\vert {\widetilde{R}}(\theta ,a)-at\vert }{(at)^2}\cdot \sup _{r} \varvec{\varrho }(r,t)+{\mathfrak {1}}_{\mathcal {B}_*}\cdot \varepsilon _1^2t^{-2}\frac{\vert {\widetilde{R}}(\theta ,a)-at\vert }{at}\\&\lesssim \varepsilon _1^2 t^{-\frac{6}{5}}. \end{aligned} \end{aligned}$$

Since by (3.11) we have that \(\left|\mathcal {E}(a,t)-\mathcal {E}_\infty (a)\right|\lesssim \varepsilon _1^4 t^{-\frac{1}{4}}\), this concludes the proof. \(\quad \square \)

3.2.2 Strong limit

We can now correct the trajectories to get a strong limit and prove our main theorem.

Proof of Theorem 1.6

Under the assumptions of Theorem 1.6, we may apply Lemma 3.1 and Proposition 3.2 to propagate global bounds on the moments and on the derivatives with the weights in (3.6). Lemma 3.3 justifies the existence of \(\mathcal {E}_\infty \), and we have the estimates in Lemma 3.4. Let

$$\begin{aligned} \begin{aligned} \sigma (\theta ,a,t)&:=\gamma (\theta +\lambda \ln t\cdot \mathcal {E}_\infty (a),a,t). \end{aligned} \end{aligned}$$

We claim that \(\sigma \) converges to a limit \(\sigma _\infty \) in \(L^2_{\theta ,a}\). Indeed we compute that

$$\begin{aligned} \begin{aligned} \partial _t\sigma (\theta ,a,t)&=\lambda \left( \partial _a{\widetilde{\Psi }}(\theta ^*,a,t)+\frac{1}{t}\mathcal {E}_\infty (a)\right) \partial _\theta \gamma (\theta ^*,a,t)-\lambda \partial _\theta {\widetilde{\Psi }}(\theta ^*,a,t)\partial _a\gamma (\theta ^*,a,t),\\ \theta ^*&=\theta +\lambda \ln t\cdot \mathcal {E}_\infty (a). \end{aligned} \end{aligned}$$
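
This follows from the chain rule, since only the explicit time dependence of the shift \(\lambda \ln t\cdot \mathcal {E}_\infty (a)\) produces an extra term, together with (3.1):

$$\begin{aligned} \partial _t\sigma (\theta ,a,t)=(\partial _t\gamma )(\theta ^*,a,t)+\frac{\lambda }{t}\mathcal {E}_\infty (a)\,(\partial _\theta \gamma )(\theta ^*,a,t),\qquad \partial _t\gamma =\lambda \{{\widetilde{\Psi }},\gamma \}=\lambda \big (\partial _a{\widetilde{\Psi }}\,\partial _\theta \gamma -\partial _\theta {\widetilde{\Psi }}\,\partial _a\gamma \big ). \end{aligned}$$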

We directly obtain that

$$\begin{aligned} \begin{aligned} \Vert \partial _\theta {\widetilde{\Psi }}(\theta ^*,a,t)\partial _a\gamma (\theta ^*,a,t)\Vert _{L^2_{\theta ,a}}&\lesssim \Vert a^{-1}\partial _\theta {\widetilde{\Psi }}\Vert _{L^\infty _{\theta ,a}}\Vert a\partial _a\gamma \Vert _{L^2_{\theta ,a}}\lesssim \varepsilon _1^2t^{-\frac{5}{4}}, \end{aligned} \end{aligned}$$

while using Lemma 3.4, we find that

$$\begin{aligned} \begin{aligned} \Vert {\mathfrak {1}}_{\mathcal {B}_*}\left( \partial _a{\widetilde{\Psi }}(\theta ^*,a,t)+\frac{1}{t}\mathcal {E}_\infty (a)\right) \partial _\theta \gamma (\theta ^*,a,t)\Vert _{L^2_{\theta ,a}}&\lesssim \varepsilon _1^2 t^{-\frac{6}{5}}\Vert \partial _\theta \gamma \Vert _{L^2_{\theta ,a}}\lesssim \varepsilon _1^3 t^{-\frac{9}{8}}. \end{aligned} \end{aligned}$$

Using (3.9) and (2.27) yields

$$\begin{aligned} \begin{aligned} \Vert {\mathfrak {1}}_{\mathcal {B}^c_*}\frac{1}{t}\mathcal {E}_\infty (a)\partial _\theta \gamma (\theta ^*,a,t)\Vert _{L^2_{\theta ,a}}&\lesssim t^{-\frac{6}{5}}\varepsilon _1^2\Vert (\vert \theta \vert +a+a^{-1})\partial _\theta \gamma \Vert _{L^2_{\theta ,a}}\lesssim \varepsilon _1^3 t^{-\frac{9}{8}}, \\ \Vert {\mathfrak {1}}_{\mathcal {B}^c_*}\partial _a{\widetilde{\Psi }}(\theta ^*,a,t)\partial _\theta \gamma (\theta ^*,a,t)\Vert _{L^2_{\theta ,a}}&\lesssim \varepsilon _1^2 t^{-1}\Vert {\mathfrak {1}}_{\mathcal {B}^c_*}(1+\vert \theta \vert +a^{-3})\partial _\theta \gamma \Vert _{L^2_{\theta ,a}}\lesssim \varepsilon _1^3 t^{-\frac{9}{8}}. \end{aligned} \end{aligned}$$

This establishes (1.20). In addition, the bounds from Proposition 3.2 give uniform bounds on \(\partial _\theta \gamma \) in \(L^2_{\theta ,a}\), which carry over to \(\gamma _\infty \). Finally, (1.21) follows from the \(L^2_{\theta ,a}\) convergence.

Finally, the uniqueness of solutions follows from a Gronwall estimate on the \(L^2_{\theta ,a}\)-norm of the difference of two solutions \(\gamma _j\), \(j=1,2\). Starting with (1.16), one computes that

$$\begin{aligned} \begin{aligned} \frac{1}{2}\frac{d}{dt}(\gamma _1-\gamma _2)^2=\lambda (\gamma _1-\gamma _2)\cdot \{{\widetilde{\Psi }}_1-{\widetilde{\Psi }}_2,\gamma _2\}+\lambda (\gamma _1-\gamma _2)\cdot \{{\widetilde{\Psi }}_1,\gamma _1-\gamma _2\} \end{aligned} \end{aligned}$$

and since the second term is a perfect Poisson bracket, \(\lambda (\gamma _1-\gamma _2)\{{\widetilde{\Psi }}_1,\gamma _1-\gamma _2\}=\frac{\lambda }{2}\{{\widetilde{\Psi }}_1,(\gamma _1-\gamma _2)^2\}\), whose integral vanishes, we deduce that

$$\begin{aligned}&\frac{1}{2}\frac{d}{dt}\left\| \gamma _1-\gamma _2\right\| _{L^2_{\theta ,a}}^2\\&\quad \lesssim \left( \Vert \partial _\theta ({\widetilde{\Psi }}_1-{\widetilde{\Psi }}_2)\Vert _{L^\infty _{\theta ,a}}\left\| \partial _a\gamma _2\right\| _{L^2_{\theta ,a}}+\Vert \partial _a({\widetilde{\Psi }}_1-{\widetilde{\Psi }}_2)\Vert _{L^\infty _{\theta ,a}}\left\| \partial _\theta \gamma _2\right\| _{L^2_{\theta ,a}}\right) \left\| \gamma _1-\gamma _2\right\| _{L^2_{\theta ,a}}, \end{aligned}$$

where \({\widetilde{\Psi }}_j\) is the potential field corresponding to \(\gamma _j\), \(j=1,2\). Now using (2.23), the bounds for \({\widetilde{R}}\), \(\partial _\theta {\widetilde{R}}\), \(\partial _a{\widetilde{R}}\) in (2.12)–(2.13), and the fact that \(\mathbf{m}\) is quadratic in \(\gamma \), it follows that

$$\begin{aligned}&\Vert \partial _\theta ({\widetilde{\Psi }}_1-{\widetilde{\Psi }}_2)(t)\Vert _{L^\infty _{\theta ,a}}+\Vert \partial _a({\widetilde{\Psi }}_1-{\widetilde{\Psi }}_2)(t)\Vert _{L^\infty _{\theta ,a}}\\&\qquad \lesssim \langle t\rangle \left[ \left\| \langle a\rangle ^4\gamma _1\right\| _{L^2_{\theta ,a}}+\left\| \langle a\rangle ^4\gamma _2\right\| _{L^2_{\theta ,a}}\right] \cdot \left\| \gamma _1-\gamma _2\right\| _{L^2_{\theta ,a}}. \end{aligned}$$

Combining the last two displays, we obtain a differential inequality of the form \(\frac{d}{dt}\left\| \gamma _1-\gamma _2\right\| _{L^2_{\theta ,a}}^2\lesssim C(t)\left\| \gamma _1-\gamma _2\right\| _{L^2_{\theta ,a}}^2\) with \(C\) locally integrable on \([0,T]\), and Gronwall's estimate shows that two solutions with the same initial data coincide. \(\quad \square \)