1 Introduction and statement of the main result

In this paper we are concerned with the question of finding multiple positive solutions to problem

$$\begin{aligned} \left\{ \begin{array}{l} -\Delta u+a(x) u =u^p \ \ \ \text{ in } \ \ {\mathbb {R}}^N\\ u \in H^1({\mathbb {R}}^N), \end{array}\right. \end{aligned}$$
(1.1)

where \(N\ge 2\) and \(p> 1\), with \(p< \frac{N+2}{N-2}\) when \(N\ge 3\).

Euclidean scalar field equations like (1.1) arise naturally in many physical contexts, such as the study of solitary waves in nonlinear Schrödinger equations or in nonlinear Klein-Gordon equations. Besides their relevance in the applied sciences, the interest of researchers in this kind of problem is also due to the loss of compactness caused by the invariance of \({\mathbb {R}}^N\) under the action of translations and to the related challenging difficulties.

Indeed, (1.1) has a variational structure: its solutions can be found as critical points of the functional

$$\begin{aligned} E(u)={1\over 2}\int _{{\mathbb {R}}^N}(|Du|^2+a(x)u^2)dx-{1\over p+1}\int _{{\mathbb {R}}^N}|u|^{p+1}dx\qquad \forall u\in H^1({\mathbb {R}}^N) \end{aligned}$$
(1.2)

but, since E does not satisfy the Palais-Smale compactness condition, classical variational methods cannot be applied in a standard way. Moreover, the difficulty in dealing with problems of this type is not merely technical: (1.1) may indeed have only the trivial solution, as happens, for instance, when the potential a(x) is increasing along one direction (see [9]).
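It may be useful to recall explicitly (a standard computation, stated only for the reader's convenience) why critical points of E are weak solutions of (1.1): for every \(u,\varphi \in H^1({\mathbb {R}}^N)\)

$$\begin{aligned} E'(u)[\varphi ]=\int _{{\mathbb {R}}^N}\big (D u\, D \varphi +a(x)u\varphi \big )dx-\int _{{\mathbb {R}}^N}|u|^{p-1}u\,\varphi \, dx, \end{aligned}$$

so \(E'(u)=0\) is exactly the weak formulation of \(-\Delta u+a(x)u=|u|^{p-1}u\) in \({\mathbb {R}}^N\), and positive critical points of E are solutions of (1.1).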

In this paper, also in view of their physical meaning, we shall consider only potentials satisfying:

$$\begin{aligned} a\in L^{N/2}_{\mathrm{loc}\,}({\mathbb {R}}^N),\quad a_0{:=}\inf _{{\mathbb {R}}^N}a> 0,\qquad \lim _{|x|\rightarrow \infty }a(x)=a_\infty . \end{aligned}$$
(1.3)

Starting in the Sixties of the last century, many mathematicians have devoted considerable effort and exploited different tools to overcome these difficulties and to prove existence and multiplicity of solutions to (1.1).

The first results were obtained by exploiting the spherical symmetry of \({\mathbb {R}}^N\) and considering radial data. The existence of a ground state radial positive solution and of infinitely many radial sign-changing solutions was obtained first by ordinary differential equation methods (see [26, 28]) and then by variational methods (see [5, 6, 29]), taking advantage of the compactness of the embedding in \(L^q ({\mathbb {R}}^N)\), \(2< q < \frac{2N}{N-2}\), of the subspace of \(H^1({\mathbb {R}}^N)\) consisting of radially symmetric functions. It is also worth observing that, under radial symmetry assumptions, the existence of infinitely many nonradial sign-changing solutions has been shown (see [4]).

Although analogous results could reasonably be expected when the symmetry in (1.1) is broken by nonsymmetric coefficients, in that case even the existence question immediately appeared difficult to handle, and it is affected by a striking topological difference between the cases in which the potential a(x) approaches its limit at infinity from below or from above.

In the first case the existence of a positive ground state solution was obtained by minimizing the functional E on the Nehari natural constraint and applying concentration-compactness arguments [19, 27], while the multiplicity question was answered in [8], where the existence of infinitely many sign-changing solutions was proved assuming that the potential decays more slowly than any exponential and that its directional derivatives enjoy some stability with respect to small perturbations of the direction.

When a(x) goes to \(a_\infty \) from above the minimization argument does not work; in fact, when \(a(x) - a_\infty > 0\) on a set of positive measure, (1.1) has no ground state solution. Nevertheless, the existence of a positive bound state solution has been shown in [2, 3] by subtle topological and variational arguments, assuming that a(x) decays faster than some exponential. The multiplicity question is even more delicate.

During the last decade some progress has been made, mainly in the search for positive multi-bump solutions. Before discussing nonsymmetric cases, we mention that, again, under symmetry assumptions the question can be handled in a better way. Indeed, the existence of infinitely many positive multi-bump solutions to (1.1) has been proved assuming a suitable polynomial decay of a(x) together with radial symmetry in [30], or with planar symmetry in [15, 16].

The multiplicity question for (1.1) with potentials without symmetry was first considered in [12], where the existence of infinitely many positive multi-bump solutions (namely, for any \(k\in {\mathbb {N}}\), the existence of a k-bump solution) was obtained by requiring the potential to decay “slowly” with respect to some exponential and the oscillation \(\sup _{x\in {\mathbb {R}}^N}|a-a_\infty |_{L^{N/2}(B(x,1))}\) to be small. However, while a suitable decay condition on \(a(x)-a_\infty \) appears quite reasonable, the second condition seems essentially due to technical reasons. Hence, in subsequent papers some efforts have been made to drop this condition, but so far with success only in the planar case \(N=2\) and under a polynomial decay of a(x) to \(a_\infty \) ([15, 17]).

On the other hand, it is worth remarking that a careful analysis of the proofs in [12, 15,16,17, 30] shows that the symmetry in [16, 30], the small oscillation assumption in [12] and the dimension restriction \(N=2\) in [15, 17], in spite of the different arguments and methods displayed in those papers, are essentially related to the same basic ingredient of the proofs: working with functions whose bumps are located in regions where \(a(x)- a_\infty \) is small. This observation is, in a way, also confirmed by the results of [11], where the existence of infinitely many positive and infinitely many nodal multi-bump solutions to (1.1) is shown for potentials which have slow decay but neither small oscillation nor symmetry, and which are required to sink in some large regions of \({\mathbb {R}}^N\); the purpose of this requirement is to localize the bumps suitably far apart and, when one looks for sign-changing solutions, to control the mutual attractive effect of the positive and negative bumps.

The result we obtain is, in our opinion, a considerable step forward in proving the existence of infinitely many positive solutions to (1.1) in nonsymmetric situations, without imposing restrictions on the dimension of the space \({\mathbb {R}}^N\) as in [15, 17] and dropping the oscillation condition required in [12]:

Theorem 1.1

Let a(x) satisfy (1.3) and

$$\begin{aligned}&\lim _{|x| \rightarrow \infty } [a(x) - a_\infty ] e^{\eta |x|} = \infty \quad \forall \eta > 0, \end{aligned}$$
(1.4)
$$\begin{aligned}&\lim _{\rho \rightarrow \infty } \sup \left\{ a(\rho \theta _1) - a(\rho \theta _2) \ :\ \theta _1, \theta _2 \in {\mathbb {R}}^N, |\theta _1|= |\theta _2|=1 \right\} e^{\tilde{\eta }\rho } = 0 \quad \text{ for } \text{ some } \ \tilde{\eta }> 0.\nonumber \\ \end{aligned}$$
(1.5)

Moreover, assume that there exists \(\check{\rho }> 0\) such that

$$\begin{aligned} {\partial ^2\, \over \partial \rho ^2}\int _{{\mathbb {R}}^N}a(x)\check{u}^2(x-\rho \theta )\, dx> 0\qquad \forall \rho > \check{\rho },\ \forall \theta \in {\mathbb {R}}^N\ \mathrm{such\ that }\ |\theta |=1, \end{aligned}$$
(1.6)

where \(\check{u}(x)=(1+|x|)^{1-N\over 2}e^{-\sqrt{a_\infty }\, |x|}\).

Then problem (1.1) has infinitely many positive solutions.

In Remark 5.5 we present some examples of potentials a(x) which satisfy all the assumptions required in Theorem 1.1.

The proof method is fully variational; it is a variant of the arguments introduced in [20, 21] and already applied in [10,11,12, 22, 23]. Of course, it is well known that solutions of (1.1) correspond to free critical points of E or, equivalently, to critical points of E on the Nehari natural constraint. However here, as in the quoted papers, critical points are sought by min-max arguments in suitable classes of positive functions having, for every \(k \in {\mathbb {N}}\), exactly k “bumps” and satisfying k local Nehari natural constraints and k local barycenter conditions. Therefore, since E is subject to constraints that are not all natural, the min-max procedure produces functions that solve equations in which the Lagrange multipliers of the related constraints appear, and it is a heavy task to show that these Lagrange multipliers vanish, i.e. that the constrained critical points are actually free critical points of E and hence solutions of (1.1). We also point out that, unlike in the quoted papers, in the present research the k-bump functions belonging to the classes described above must satisfy a further condition, whose purpose is to help localize the bumps close to spheres of large radius when k is large.
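For the reader's convenience, and without claiming any novelty, we also recall the global Nehari set associated with E, of which the local constraints used below are a bump-wise analogue (we denote it by \({\mathcal {N}}\) only in this paragraph):

$$\begin{aligned} {\mathcal {N}}=\left\{ u\in H^1({\mathbb {R}}^N)\setminus \{0\}\ :\ E'(u)[u]=0\right\} =\left\{ u\ne 0\ :\ \int _{{\mathbb {R}}^N}(|Du|^2+a(x)u^2)dx=\int _{{\mathbb {R}}^N}|u|^{p+1}dx\right\} . \end{aligned}$$

Every nontrivial critical point of E belongs to \({\mathcal {N}}\) and, for this kind of nonlinearity, critical points of E constrained to \({\mathcal {N}}\) are free critical points of E; this is the reason why \({\mathcal {N}}\) is called a natural constraint.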

Although our method is variational, it allows us to describe some remarkable asymptotic properties of the solutions we find. We have collected them in a proposition whose statement requires the introduction of the limit problem related to (1.1):

$$\begin{aligned} -\Delta u+a_\infty u = |u|^{p-1}u \ \ \ \text{ in } \ \ {\mathbb {R}}^N \ \ u \in H^1({\mathbb {R}}^N). \end{aligned}$$
(1.7)

It is well known that problem (1.7) has a positive ground state solution, unique up to translations, and that such a solution has a unique maximum point, is radially symmetric with respect to its maximum point and decreases as the radial coordinate increases (see, for instance, [5]). We denote by w the positive ground state solution that achieves its maximum at the origin and, since w is radially symmetric, with a slight abuse of notation we shall write w(R) for the value that w(x) takes at points x such that \(|x|=R\).

Proposition 1.2

Let the assumptions of Theorem 1.1 be satisfied. Then there exists \(\bar{k} \in {\mathbb {N}}\) such that to any \( k \ge \bar{k}\) there corresponds a positive solution \(u_k\) of (1.1) having the following property: to \(u_k\) a k-tuple of points \((x_1^k, \dots ,x_k^k )\) of \({\mathbb {R}}^N\) is associated in such a way that

$$\begin{aligned}&\lim _{k\rightarrow \infty }\sup \{|u_k(x+x_i^k)-w(x)|\ :\ |x|\le R, \ i=1,\ldots ,k\}=0\qquad \forall R> 0, \end{aligned}$$
(1.8)
$$\begin{aligned}&\lim _{k\rightarrow \infty }\sup \{u_k(x)\ :\ |x-x_i^k|\ge R,\ i=1,\ldots ,k\}=w(R)\qquad \forall R> 0, \end{aligned}$$
(1.9)
$$\begin{aligned}&\lim _{k\rightarrow \infty }\min \{|x_i^k|\ :\ i=1,\ldots ,k\}=\infty , \end{aligned}$$
(1.10)
$$\begin{aligned}&\lim _{k\rightarrow \infty }\min \{|x_i^k-x^k_j|\ :\ i\ne j,\ i,j=1,\ldots ,k\}=\infty , \end{aligned}$$
(1.11)
$$\begin{aligned}&\lim _{k\rightarrow \infty } \frac{\max \{|x_i^k|\ :\ i=1,\ldots ,k\}}{\min \{|x_i^k|\ :\ i=1,\ldots ,k\}} =1, \end{aligned}$$
(1.12)
$$\begin{aligned}&\lim _{k\rightarrow \infty }[\max \{|x_i^k|\ :\ i=1,\ldots ,k\}- \min \{|x_i^k|\ :\ i=1,\ldots ,k\}]=0. \end{aligned}$$
(1.13)

Furthermore, \(u_k{\longrightarrow }0\) as \(k\rightarrow \infty \), uniformly on the compact subsets of \({\mathbb {R}}^N\), while \(\lim \limits _{k\rightarrow \infty }\Vert u_k\Vert _{H^1({\mathbb {R}}^N)}=\infty \) and \(\lim \limits _{k\rightarrow \infty } E(u_k)=\infty \).

The above proposition helps to form a suggestive picture of the shape of the solutions, arguing as follows: the points \(x_1^k, \dots ,x_k^k \) are nothing but the barycenters of the bumps which, as k increases, move far from the origin and far away from each other, while at the same time the shape of \(u_k\) in balls centered at \(x^k_i\), for all \(i=1,\dots , k\), approaches the shape of w and, outside these balls, \(u_k\) decays as w decays. So, considering the profile of w, one can “see” the \(u_k\) as functions having an increasing number of well glued “bumps” which become more and more similar to copies of w. Property (1.12) shows that the bumps tend to be distributed around spheres in \({\mathbb {R}}^N\): indeed the points \({x_1^k\over m_k},\ldots ,{x_k^k\over m_k}\), where \(m_k=\min \{|x^k_i|\ :\ i=1,\ldots ,k\}\), get closer and closer, as \(k\rightarrow \infty \), to the sphere of radius 1 centered at the origin \( \partial B(0,1)\subset {\mathbb {R}}^N\). Furthermore, as we shall see in Corollary 4.3, more can be asserted, namely that the distribution of these points tends to be “uniform” on \( \partial B(0,1)\): for all \(x\in \partial B(0,1) \) and for all \(r> 0\) the number of points among \({x_1^k\over m_k},\ldots ,{x_k^k\over m_k}\) lying in B(x, r) tends to infinity, as k goes to infinity, with rate \(k\, r^{N-1}\).
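The rate \(k\, r^{N-1}\) can be read, heuristically, as a uniform distribution statement (a rough count, given only for orientation; the precise statement is Corollary 4.3): if k points are asymptotically uniformly spread over \(\partial B(0,1)\), the number of them falling in a small cap \(B(x,r)\cap \partial B(0,1)\) is comparable to k times the ratio between the measure of the cap and that of the whole sphere, namely

$$\begin{aligned} \sharp \left\{ i\in \{1,\ldots ,k\}\ :\ {x_i^k\over m_k}\in B(x,r)\right\} \approx k\, {|B(x,r)\cap \partial B(0,1)|_{N-1}\over |\partial B(0,1)|_{N-1}}\approx c_N\, k\, r^{N-1}\qquad \text{ for small } r> 0, \end{aligned}$$

where \(\sharp \) denotes cardinality, \(|\cdot |_{N-1}\) the \((N-1)\)-dimensional measure and \(c_N\) a dimensional constant.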

Finally, notice that in [24] we announced (with only a sketch of the proof) a multiplicity result that is proved in the present paper, by Theorem 1.1, in the case where the additional conditions (1.5) and (1.6) hold. In a paper in preparation we show that, when these conditions do not hold, there exist infinitely many positive multi-bump solutions of problem (1.1) which, as the number of bumps tends to infinity, tend to exhibit an asymptotic distribution different from the spherical distribution described by Proposition 1.2 (see [25] and also [18]).

The paper is organized as follows. In Sect. 2 the classes of k-bump functions in which the solutions are sought are introduced and their properties are recalled. In Sect. 3 the min-max arguments used to find the candidate critical points are displayed, and in Sect. 4 the asymptotic behaviour of these functions is described as the number of bumps increases. Finally, in Sect. 5 the k-bump functions found in Sect. 3 are shown to be free critical points of E. A table of the main notation we use in this paper is reported in the Appendix.

2 Variational framework and known facts

Throughout the paper we make use of the following notation:

  • \(H^{1}({\mathbb R}^{N})\) is the usual Sobolev space endowed with the scalar product and norm

    $$\begin{aligned} (u, v)=\int _{\mathbb R^N}[D u\, D v+a_\infty uv]dx;\qquad \Vert u\Vert ^{2}=\int _{{\mathbb R}^N}\left[ |D u|^{2}+a_\infty u^{2}\right] dx; \end{aligned}$$
  • \(L^q(\Omega )\), \(1\le q \le +\infty \), \(\Omega \subseteq \mathbb R^N\), denotes the usual Lebesgue space; its norm is denoted by \(|\cdot |_{q, \Omega }\) when \(\Omega \) is a proper subset of \(\mathbb R^N\) and by \(|\cdot |_q\) when \(\Omega =\mathbb R^N\);

  • for any \(\rho > 0\) and for any \(z\in \mathbb R^N\), \(B(z,\rho )\) denotes the ball of radius \(\rho \) centered at z,  and \(S(z,\rho )=\partial B(z,\rho )\);

  • for any measurable set \( \mathcal {O} \subset {\mathbb {R}}^N, \ |\mathcal {O}|\) denotes its Lebesgue measure;

  • \(c,c', C, C', C_i,\ldots \) denote various positive constants.

In what follows we denote by

$$\begin{aligned} E_\infty (u)={1\over 2}\int _{{\mathbb {R}}^N}(|D u|^2+a_\infty u^2)dx-{1\over p+1}\int _{{\mathbb {R}}^N}|u|^{p+1}dx,\qquad u\in H^1({\mathbb {R}}^N), \end{aligned}$$
(2.1)

the functional related to the limit problem (1.7). The next lemma (see [1, 7] and the references therein) summarizes the main properties of the ground state solution w of (1.7).

Lemma 2.1

The function w is unique up to translations, has radial symmetry, decreases when the radial coordinate increases and satisfies

$$\begin{aligned} \lim _{|x|\rightarrow \infty }|D^i w (x)||x|^{\frac{N-1}{2}}e^{\sqrt{a_\infty }|x|}=d_i> 0\quad \text{ for } i=0,1 \end{aligned}$$
(2.2)

where \(D^0w=w\) and \(D^1w=D w\). If we set

$$\begin{aligned} w_y(x)=w(x-y)\quad \forall x ,y\in {\mathbb {R}}^N\ \text{ and } Z=\{w_y\ :\ y\in {\mathbb {R}}^N\}, \end{aligned}$$
(2.3)

then Z is non degenerate, namely the following properties are true:

  a)

    \((E_{\infty })''(w_y)\) is an index zero Fredholm map for all \(y \in {\mathbb {R}}^N\);

  b)

    Ker \((E_{\infty })''(w_y)\) = span \(\left\{ \frac{\partial w_y}{\partial x_j}: j=1,\ldots ,N \right\} = T_{w_y}(Z)\), where \(T_{w_y}(Z)\) is the tangent space to Z at \(w_y\).

Since \(a_0=\inf _{{\mathbb {R}}^N}a> 0\), we can fix \({\delta }> 0\) such that

$$\begin{aligned} \delta < \min \left\{ 1, \left( a_0\over p\right) ^{1\over p-1}, \left( a_0\over 2\right) ^{1\over p-1}, {a_0^{1\over p-1}\over 2}, {w(0)\over 3}\right\} , \end{aligned}$$
(2.4)

then, thanks to (2.2) with \(i=0\), we can choose

$$\begin{aligned} R_{\delta }> 0 \ \text{ so } \text{ that } \ w(x)< {\delta }\quad \forall x\in {\mathbb {R}}^N\setminus B(0, R_{\delta }/2). \end{aligned}$$
(2.5)

With respect to any fixed number \(\delta > 0\) satisfying (2.4) we introduce the following notions. For every function \(u\in H^1({\mathbb {R}}^N)\), \(u\ge 0\), we denote by

$$\begin{aligned} u_\delta =u\wedge {\delta },\qquad u^{\delta }=u-u_{\delta }\end{aligned}$$
(2.6)

and call \(u_{\delta }\) and \(u^{\delta }\) the submerged and the emerging part of u, respectively. We say that a function \(u\in H^1({\mathbb {R}}^N)\) is emerging around \(x_1,\ldots ,x_k\in {\mathbb {R}}^N\) if \(u^{\delta }=\sum _{i=1}^ku^{{\delta },i}\) where, for all \(i\in \{1,\ldots ,k\}\), \(u^{{\delta },i}(x)=0\) \(\forall x\not \in B(x_i,R_{\delta })\) and \(u^{{\delta },i}\not \equiv 0\). Thus, if \(B(x_i,R_\delta )\cap B(x_j,R_\delta )=\emptyset \) for \(i\ne j\), \(u^{{\delta },i}\) is the projection of \(u^{\delta }\) onto \(H^1_0(B(x_i,R_{\delta }))\) and we have

$$\begin{aligned} {\mathbb {R}}^N\setminus \cup _{i=1}^k B(x_i,R_{\delta })\subset {\mathbb {R}}^N\setminus \mathrm{supp}\,(u^{\delta }). \end{aligned}$$
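A basic example, implicitly used in Lemma 2.8 below, is provided by the translated ground states \(w_y\) defined in (2.3): by (2.5) we have \(w_y< {\delta }\) outside \(B(y,R_{\delta }/2)\), hence

$$\begin{aligned} (w_y)^{\delta }=(w_y-{\delta })^+,\qquad \mathrm{supp}\,(w_y)^{\delta }\subset B(y,R_{\delta }/2)\subset B(y,R_{\delta }), \end{aligned}$$

and \((w_y)^{\delta }\not \equiv 0\) because \(w(0)> 3{\delta }\) by (2.4); thus \(w_y\) is emerging around y.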

On the submerged parts, the functional E has the following features.

Remark 2.2

The functional E is coercive and convex, hence weakly lower semicontinuous, on the convex set

$$\begin{aligned} {\mathcal {C}}=\{u\in H^1({\mathbb {R}}^N)\ :\ |u|\le \delta \ \text{ a.e. } \text{ in } {\mathbb {R}}^N\}. \end{aligned}$$

Indeed, taking into account the choice of \({\delta }\) in (2.4), we have

$$\begin{aligned} E(u)\ge & {} {1\over 2} \int _{{\mathbb {R}}^N} (|D u|^2+a_0 u^2)dx-{1\over p+1}\int _{{\mathbb {R}}^N}|u|^{p+1}dx \\\ge & {} {1\over 2} \int _{{\mathbb {R}}^N} |D u|^2 dx +\left( {a_0\over 2}-{{\delta }^{p-1}\over p+1}\right) \int _{{\mathbb {R}}^N} u^2 dx\\\ge & {} c\, \Vert u\Vert ^2\qquad \forall u\in {\mathcal {C}}\end{aligned}$$

for \(c> 0\) small enough, and

$$\begin{aligned}&{d^2\over dt^2}E(u_1+t(u_2-u_1))\ge \int _{{\mathbb {R}}^N} |D (u_2-u_1)|^2dx +\int _{{\mathbb {R}}^N} (a_0-p{\delta }^{p-1})(u_2-u_1)^2dx\ge 0\\&\quad \forall t\in [0,1],\qquad \forall u_1,u_2\in {\mathcal {C}}. \end{aligned}$$
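Let us also point out, for completeness, which bounds in (2.4) enter the two estimates above: since \(|u|\le {\delta }\) a.e. on \({\mathcal {C}}\), we have \(|u|^{p+1}\le {\delta }^{p-1}u^2\) and \(|u_1+t(u_2-u_1)|^{p-1}\le {\delta }^{p-1}\), so the coercivity uses \({\delta }^{p-1}< {a_0\over 2}\) (coming from \({\delta }< \left( {a_0\over 2}\right) ^{1\over p-1}\)), which gives

$$\begin{aligned} {a_0\over 2}-{{\delta }^{p-1}\over p+1}> {a_0\over 2}\left( 1-{1\over p+1}\right) > 0, \end{aligned}$$

while the convexity uses \(p\,{\delta }^{p-1}< a_0\) (coming from \({\delta }< \left( {a_0\over p}\right) ^{1\over p-1}\)).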

We introduce now some sets depending also on the chosen real number \(R_\delta \) satisfying (2.5). For all \(k\ge 1\), we set

$$\begin{aligned} D_k=\{(x_1,\ldots ,x_k)\in ({\mathbb {R}}^N)^k\ :\ |x_i-x_j|\ge 3R_{\delta }\ \text{ for } i\ne j,\ i,j=1,\ldots ,k\} \end{aligned}$$
(2.7)

(notice that \(D_k={\mathbb {R}}^N\) when \(k=1\)). For all \((x_1,\ldots ,x_k)\in D_k\) we consider the set consisting of functions emerging around \(x_1,\ldots ,x_k\) and satisfying local Nehari and local barycenter constraints:

$$\begin{aligned} S_{x_1,\ldots ,x_k } =\{ u\in H^1({\mathbb {R}}^N)\ :\ {}&u \ge 0, \ u \text{ emerging } \text{ around } x_1,\ldots ,x_k,\ u^\delta =\sum _{i=1}^k u^{\delta ,i}, \nonumber \\&E'(u)[u^{\delta ,i}]=0,\ \beta _i(u)=x_i\quad \forall i\in \{1,\ldots ,k\}\}, \end{aligned}$$
(2.8)

where \(\beta _i\) is the local barycenter defined by

$$\begin{aligned} \beta _i(u)=\left( \int _{B(x_i,R_{\delta })}(u^{\delta }(x))^2\, dx\right) ^{-1} \int _{B(x_i,R_{\delta })}x\, (u^{\delta }(x))^2\,dx\qquad \text{ for } i=1,\ldots ,k. \end{aligned}$$
(2.9)

Analogously, we denote by \(S^\infty _y\), \(y\in {\mathbb {R}}^N\), the set obtained by replacing E and the points \(x_1,\dots ,x_k\) with \(E_\infty \) and y, respectively, in (2.8).

Notice that, if \(u\in S_{x_1,\ldots ,x_k }\), it satisfies the equality

$$\begin{aligned}&\beta '_i(u)[\psi ]= 2\left( \int _{B(x_i,R_{\delta })}(u^{\delta }(x))^2\, dx\right) ^{-1} \int _{B(x_i,R_{\delta })} u^{\delta }(x)\psi (x)\, (x-x_i)\, dx\\&\qquad \forall \psi \in H^1_0(B(x_i,R_{\delta }))\quad \forall i\in \{1,\ldots ,k\}.\nonumber \end{aligned}$$
(2.10)

In fact,

$$\begin{aligned} \beta '_i(u)[\psi ]= & {} 2 \left( \int _{B(x_i,R_\delta )}(u^\delta (x))^2dx\right) ^{-1} \int _{B(x_i,R_\delta )} x\, u^\delta (x)\psi (x)dx\\&-2 \left( \int _{B(x_i,R_\delta )}(u^\delta (x))^2dx\right) ^{-2} \int _{B(x_i,R_\delta )} x\, u^\delta (x)\psi (x)dx \int _{B(x_i,R_\delta )} u^\delta (x)\psi (x)dx\\= & {} 2 \left( \int _{B(x_i,R_\delta )}(u^\delta (x))^2dx\right) ^{-1} \\&\cdot \left[ \int _{B(x_i,R_\delta )} x\, u^\delta (x)\psi (x)dx - x_i \int _{B(x_i,R_\delta )} u^\delta (x)\psi (x)dx \right] \\= & {} 2 \left( \int _{B(x_i,R_\delta )}(u^\delta (x))^2dx\right) ^{-1} \int _{B(x_i,R_\delta )} (x-x_i)\, u^\delta (x)\psi (x)dx \end{aligned}$$

because \(u\in S_{x_1,\ldots ,x_k }\) implies

$$\begin{aligned} \left( \int _{B(x_i,R_\delta )}(u^\delta (x))^2dx\right) ^{-1} \int _{B(x_i,R_\delta )} x\, u^\delta (x)\psi (x)dx = x_i \end{aligned}$$

as follows from (2.8) and (2.9).

In the following statements we collect some facts, whose proofs can be found in [11] (see also [12]), that outline the variational setting in which we work.

Proposition 2.3

(see [11, Lemma 2.10]) Let \(u\in H^1({\mathbb {R}}^N)\) be such that \(u^{\delta }\not \equiv 0\) and \(u^{\delta }\) has compact support. Then there exists a unique \({\bar{t}} \in (0,\infty )\) such that \(E'(u_{\delta }+{\bar{t}} u^{\delta })[u^{\delta }]=0\); moreover \({\bar{t}}\) is the maximum point of the function \(t\mapsto E(u_{\delta }+tu^{\delta })\).

The same statements hold when we consider the functional \(E_\infty \).

Corollary 2.4

Under the same assumptions as in Proposition 2.3, we have \(S_{x_1,\ldots ,x_k}\ne \emptyset \) for every \((x_1,\ldots ,x_k)\in D_k\).

Moreover, for every \(u\in S_{x_1,\ldots ,x_k}\) we have

$$\begin{aligned} E(u)=\max \left\{ E(u_\delta +\sum _{i=1}^k t_iu^{\delta ,i})\ :\ t_1> 0,\ldots ,t_k> 0\right\} \end{aligned}$$
(2.11)

and the maximum is achieved if and only if \(t_1=t_2=\ldots =t_k=1\).

Proof

Let \(\phi \in {\mathcal {C}}^\infty _0(B(0,R_{\delta }))\) be a positive radially symmetric function such that \(\phi ^{\delta }\not \equiv 0\) and let \({\bar{t}}_i\) be the value corresponding to \(\phi (x-x_i)\) provided by Proposition 2.3; then \(\sum _{i=1}^k[(\phi (x-x_i))_{\delta }+{\bar{t}}_i(\phi (x-x_i))^{\delta }]\in S_{x_1,\ldots ,x_k}\).

Property (2.11) is a direct consequence of Proposition 2.3.\(\square \)
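A possible way to make this last step explicit is the following brief sketch (anticipating the functional F defined in (2.12) below): since the balls \(B(x_1,R_{\delta }),\ldots ,B(x_k,R_{\delta })\) are pairwise disjoint and \(u_{\delta }={\delta }\) on \(\mathrm{supp}\,u^{\delta }\), the decomposition of E into submerged and emerging contributions (see the display following (2.12)) yields

$$\begin{aligned} E\left( u_{\delta }+\sum _{i=1}^k t_iu^{{\delta },i}\right) =E(u_{\delta })+\sum _{i=1}^k F(t_iu^{{\delta },i})\qquad \forall t_1> 0,\ldots ,t_k> 0, \end{aligned}$$

so the maximization in (2.11) decouples and, for each i, Proposition 2.3 (applied to the function \(u_{\delta }+u^{{\delta },i}\), which coincides with u on \(B(x_i,R_{\delta })\)) shows that the maximum with respect to \(t_i\) is attained exactly at \(t_i=1\), because \(E'(u)[u^{{\delta },i}]=0\).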

For every function \(v\in H^1({\mathbb {R}}^N)\) with compact support, let us define

$$\begin{aligned} F(v)= & {} \frac{1}{2}\int _{{\mathbb {R}}^N}(|D v|^2 +a(x)v^2)dx + \int _{ {\mathbb {R}}^N} a(x) \delta \, v \, dx \nonumber \\&- \frac{1}{p+1}\int _{\mathrm{supp}\,\, v} (\delta + v)^{p+1} dx + \frac{\delta ^{p+1}}{p+1}\ |\mathrm{supp}\,v|. \end{aligned}$$
(2.12)

By the choice of \({\delta }\) (see (2.4)), for every \((x_1,\ldots ,x_k)\in D_k\) we can write

$$\begin{aligned} E(u)=E(u_{\delta })+F(u^{\delta }) =E(u_{\delta })+\sum _{i=1}^k F(u^{{\delta },i})\qquad \forall u\in S_{x_1,\ldots ,x_k} \end{aligned}$$

and both E and F have positive sign on our set of functions:

Proposition 2.5

(see [11, Proposition 2.15]) Assume \((x_1,\ldots ,x_k)\in D_k\) and \(u\in S_{x_1,\ldots ,x_k}\), then

$$\begin{aligned} E(u)> 0,\qquad F(u^{{\delta },i})> 0\quad \forall i\in \{1,\ldots ,k\}. \end{aligned}$$
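For completeness, we also sketch the direct computation behind the splitting \(E(u)=E(u_{\delta })+F(u^{\delta })\) written before Proposition 2.5, which only uses (2.6) and the fact that \(u_{\delta }={\delta }\) on \(\mathrm{supp}\,u^{\delta }\):

$$\begin{aligned} \int _{{\mathbb {R}}^N}|Du|^2dx&=\int _{{\mathbb {R}}^N}|Du_{\delta }|^2dx+\int _{{\mathbb {R}}^N}|Du^{\delta }|^2dx,\\ {1\over 2}\int _{{\mathbb {R}}^N}a(x)u^2dx&={1\over 2}\int _{{\mathbb {R}}^N}a(x)u_{\delta }^2dx+{1\over 2}\int _{{\mathbb {R}}^N}a(x)(u^{\delta })^2dx+{\delta }\int _{{\mathbb {R}}^N}a(x)u^{\delta }dx,\\ {1\over p+1}\int _{{\mathbb {R}}^N}|u|^{p+1}dx&={1\over p+1}\int _{{\mathbb {R}}^N}u_{\delta }^{p+1}dx+{1\over p+1}\int _{\mathrm{supp}\,u^{\delta }}({\delta }+u^{\delta })^{p+1}dx-{{\delta }^{p+1}\over p+1}\,|\mathrm{supp}\,u^{\delta }|; \end{aligned}$$

summing up with the signs of (1.2) and (2.12) gives \(E(u)=E(u_{\delta })+F(u^{\delta })\), and the further splitting \(F(u^{\delta })=\sum _{i=1}^kF(u^{{\delta },i})\) follows because the supports of the functions \(u^{{\delta },i}\) are pairwise disjoint.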

For every \((x_1,\ldots ,x_k)\in D_k\) let us set

$$\begin{aligned} f_k(x_1,\ldots ,x_k){:=}\inf \{E(u)\ :\ u\in S_{x_1,\ldots ,x_k}\}. \end{aligned}$$
(2.13)

Proposition 2.6

For every \((x_1,\ldots ,x_k)\in D_k\) the infimum in (2.13) is achieved and

$$\begin{aligned} f_k(x_1,\ldots ,x_k)> 0. \end{aligned}$$

Moreover, for every \({\bar{u}}\in S_{x_1,\ldots ,x_k}\) such that \(E({\bar{u}})=f_k(x_1,\ldots ,x_k)\) the following properties hold:

  i)

    \({\bar{u}}(x)> 0\) \(\forall x\in {\mathbb {R}}^N\) and \({\bar{u}}(x)< {\delta }\) for all x such that \(\mathrm{dist}\,(x,\mathrm{supp}\,{\bar{u}}^{\delta })> 0\);

  ii)

    \({\bar{u}}\) satisfies the equation

    $$\begin{aligned} -\Delta {\bar{u}}(x)+a(x){\bar{u}}(x)={\bar{u}}^p(x) \qquad \forall x\in {\mathbb {R}}^N\ \text{ s.t. } \mathrm{dist}\,(x,\mathrm{supp}\,{\bar{u}}^{\delta })> 0; \end{aligned}$$
    (2.14)
  iii)

    there exist two positive constants b and c such that

    $$\begin{aligned} {\bar{u}}(x)\le c\, e^{-b\, d(x)}\qquad \forall x\in {\mathbb {R}}^N\setminus \mathrm{supp}\,{\bar{u}}^{\delta }\nonumber \\ \end{aligned}$$
    (2.15)

    where \(d(x)=\mathrm{dist}\,(x,\mathrm{supp}\,{\bar{u}}^{\delta })\);

  iv)

    there exist Lagrange multipliers \(\lambda _1,\ldots ,\lambda _k\) in \({\mathbb {R}}^N\) such that

    $$\begin{aligned} E'({\bar{u}})[\psi ]=\int _{B(x_i,R_{\delta })} {\bar{u}}^{{\delta }} (x)\psi (x)[\lambda _i\cdot (x-x_i)]dx\quad \forall \psi \in H^1_0(B(x_i,R_{\delta }))\ \forall i\in \{1,\ldots ,k\}.\nonumber \\ \end{aligned}$$
    (2.16)

The existence of a minimizer \({\bar{u}}\in S_{x_1,\ldots ,x_k}\) is proved in [11, Proposition 3.1], (i) – (iii) are contained in [11, Lemma 3.4] (for the property \({\bar{u}}> 0\) see also [12, Lemma 3.4]) and (iv) is in [11, Proposition 3.5].

Remark 2.7

For every \(R\in \left[ R_{\delta },{1\over 2}\min \{|x_i-x_j|\ :\ i\ne j,\ i,j=1,\ldots ,k\}\right] \), the relation

$$\begin{aligned} \sup \{{\bar{u}}(x)\ :\ x\in {\mathbb {R}}^N\setminus \cup _{i=1}^k B(x_i,R)\} =\sup \{{\bar{u}}(x)\ :\ x\in \partial (\cup _{i=1}^k B(x_i,R))\} \end{aligned}$$
(2.17)

holds true. Indeed, since \({\bar{u}}\) is a positive solution of (2.14) in \({\mathbb {R}}^N\setminus \mathrm{supp}\,({\bar{u}}^{\delta })\), taking into account the choice of \({\delta }\) we get \(\Delta {\bar{u}}> 0\) in \({\mathbb {R}}^N\setminus \cup _{i=1}^k B(x_i,R)\). So the maximum principle gives (2.17).
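More precisely, the elementary computation is the following: outside \(\cup _{i=1}^k B(x_i,R)\) the function \({\bar{u}}\) satisfies (2.14) (its emerging part being supported in \(\cup _{i=1}^kB(x_i,R_{\delta })\)) and, by (i) of Proposition 2.6, \({\bar{u}}< {\delta }\) there, so that (2.4) gives

$$\begin{aligned} \Delta {\bar{u}}=a(x){\bar{u}}-{\bar{u}}^p\ge (a_0-{\delta }^{p-1}){\bar{u}}> 0\qquad \text{ in } {\mathbb {R}}^N\setminus \cup _{i=1}^k B(x_i,R). \end{aligned}$$

Hence \({\bar{u}}\) is subharmonic in this region and, since \({\bar{u}}(x)\rightarrow 0\) as \(|x|\rightarrow \infty \) by (2.15), its supremum over \({\mathbb {R}}^N\setminus \cup _{i=1}^k B(x_i,R)\) must be attained on \(\partial (\cup _{i=1}^k B(x_i,R))\), which is (2.17).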

Concerning the limit problem, we have:

Lemma 2.8

(see [11, Lemma 4.1]) For every \(y\in {\mathbb {R}}^N\), \(w_y\in S^\infty _y\) (see (2.3)) and

$$\begin{aligned} \min _{S^\infty _y}E_\infty =E_\infty (w_y). \end{aligned}$$

Now, we describe the asymptotic behaviour of suitable sequences of minimizing functions and of the corresponding Lagrange multipliers.

Proposition 2.9

Let \((k_n)_n\) be a sequence in \({\mathbb {N}}\) and \(((x_{1,n},\ldots ,x_{k_n,n}))_{n}\) a sequence such that

$$\begin{aligned}&\lim _{n\rightarrow \infty }\min \{|x_{i,n}|\ :\ i=1,\ldots ,k_n\}=\infty ,\nonumber \\&\lim _{n\rightarrow \infty }\min \{|x_{i,n}-x_{j,n}|\ :\ i\ne j,\ i,j=1,\ldots ,k_n\}=\infty . \end{aligned}$$
(2.18)

Notice that \((x_{1,n},\ldots ,x_{k_n,n})\in D_{k_n}\) for n large enough as a consequence of (2.18). Thus, for n large enough, let \(u_n\) be a minimizer of E in \(S_{x_{1,n},\ldots ,x_{k_n,n} }\) and for all \(i\in \{1,\dots ,k_n\}\) let \(\lambda _{i,n}\) be the related Lagrange multipliers provided by (iv) of Proposition 2.6, then

$$\begin{aligned}&\lim _{n\rightarrow \infty } \sup \{|u_n(x+x_{i,n})-w(x)|\ :\ |x|\le R,\ i=1,\ldots ,k_n\}=0\qquad \forall R> 0, \end{aligned}$$
(2.19)
$$\begin{aligned}&\text{ if } i_n\in \{1,\ldots ,k_n\} \ \forall n\in {\mathbb {N}},\ u_n(x+x_{i_n,n})\rightarrow w(x)\qquad \text{ in } H^1_{\mathrm{loc}\,}({\mathbb {R}}^N) \end{aligned}$$
(2.20)

and

$$\begin{aligned} \lim _{n\rightarrow \infty }\max \{|\lambda _{i,n}|\ :\ i=1,\ldots ,k_n\}=0. \end{aligned}$$
(2.21)

For the proof we refer the reader to [11, Proposition 5.5]. In fact, (2.20) is (b) in the proof of Proposition 5.5 in [11] while (2.21) here corresponds to (5.32) in the same proof.

Finally, let us prove the following continuity property.

Proposition 2.10

Let a(x) verify assumptions (1.3), then the function \(f_k:D_k\rightarrow {\mathbb {R}}\) defined by (2.13) is a continuous function.

Proof

The upper semicontinuity is proved in [11, Lemma 4.2].

In order to prove the lower semicontinuity, let us consider a sequence \(((x_1^n,\ldots ,x_k^n))_n\) in \(D_k\) such that \((x_1^n,\ldots ,x_k^n)\rightarrow (x_1,\ldots ,x_k)\in D_k\), as \(n\rightarrow \infty \), and let (see Proposition 2.6) \(u_n\in S_{x_1^n,\ldots ,x_k^n} \) be such that \(E(u_n)=f_k(x_1^n,\ldots ,x_k^n)\).

Since \((x^n_1,\dots ,x^n_k)\rightarrow (x_1,\dots ,x_k)\) and \(f_k\) is upper semicontinuous, \((E(u_n))_n\) is bounded and so we can infer that \((F(u_n^{\delta }))_n\) is bounded and \(((u_n)_{\delta })_n\) is bounded in \(H^1_0({\mathbb {R}}^N)\), taking into account the fact that \(E(u_n)=E((u_n)_{\delta })+F(u_n^{\delta })\), Proposition 2.5 and the coercivity of E on the submerged parts (see Remark 2.2).

Let us show that also \((u_n^{\delta })_n\) is bounded in \(H^1({\mathbb {R}}^N)\), that is \((u^{{\delta },i}_n)_n\) is bounded for every \(i\in \{1,\ldots ,k\}\). From \(E'(u_n)[u^{{\delta },i}_n]= 0\), \(\forall n\in {\mathbb {N}}\), we get

$$\begin{aligned} \int _{{\mathbb {R}}^N}(|D u^{{\delta },i}_n|^2+a(x)(u^{{\delta },i}_n)^2)dx+ \delta \int _{{\mathbb {R}}^N}a(x)u^{{\delta },i}_ndx=\int _{{\mathbb {R}}^N}({\delta }+u^{{\delta },i}_n)^pu^{{\delta },i}_n dx. \end{aligned}$$
(2.22)

Hence, by (2.12), we can write

$$\begin{aligned} F(u^{{\delta },i}_n)= & {} {1\over 2} \int _{{\mathbb {R}}^N}({\delta }+u^{{\delta },i}_n)^p u^{{\delta },i}_ndx +{{\delta }\over 2} \int _{{\mathbb {R}}^N}a(x)u^{{\delta },i}_ndx\\&-{1\over p+1} \int _{\mathrm{supp}\,u^{{\delta },i}_n}({\delta }+u^{{\delta },i}_n)^{p+1}dx+{{\delta }^{p+1}\over p+1} \, |\mathrm{supp}\,u^{{\delta },i}_n| \end{aligned}$$

and taking into account that \(|\mathrm{supp}\,u^{{\delta },i}_n|\le c_1\), \(\forall n\in {\mathbb {N}}\), we see that

$$\begin{aligned} F(u^{{\delta },i}_n)\ge & {} {1\over 2} \int _{{\mathbb {R}}^N}({\delta }+u^{{\delta },i}_n)^pu^{{\delta },i}_ndx -{1\over p+1} \int _{\mathrm{supp}\,u^{{\delta },i}_n}({\delta }+u^{{\delta },i}_n)^{p+1}dx\nonumber \\\ge & {} \int _{\{u^{{\delta },i}_n> {4{\delta }\over p-1}\}}({\delta }+u^{{\delta },i}_n)^p u^{{\delta },i}_n\left[ {1\over 2}-{1\over p+1}\left( {{\delta }\over u^{{\delta },i}_n}+1\right) \right] dx\nonumber \\&-{1\over p+1}\int _{\{0< u^{{\delta },i}_n\le {4{\delta }\over p-1}\}}({\delta }+u^{{\delta },i}_n)^{p+1}dx\nonumber \\\ge & {} \int _{\{u^{{\delta },i}_n> {4{\delta }\over p-1}\}}(u^{{\delta },i}_n)^{p+1}\left[ {1\over 4}\, {p-1\over p+1}\right] dx-c_2. \end{aligned}$$
(2.23)

Since \(F(u^{{\delta },i}_n)\le c_3\), from (2.23) it follows that

$$\begin{aligned} \int _{\{u^{{\delta },i}_n> {4{\delta }\over p-1}\}}(u^{{\delta },i}_n)^{p+1}dx\le \mathrm{const}\,, \end{aligned}$$

so that \((u^{{\delta },i}_n)_n\) is bounded in \(L^{p+1}\), in \(L^2\), in \(L^1\) and so it turns out to be bounded also in \(H^1\) by (2.22).

Summarizing, \((u_n)_n\) is bounded in \(H^1\) so, up to a subsequence, it converges to a function \({\bar{u}}\) weakly in \(H^1({\mathbb {R}}^N)\) and we have also that \(u^{{\delta },i}_n\rightarrow {\bar{u}}^{{\delta },i}\) strongly in \(L^{p+1}\) and in \(L^{2}\) .

Observe that \({\bar{u}}^{{\delta },i}\not \equiv 0\), indeed from (2.22) and the choice of \(\delta \) in (2.4) we obtain

$$\begin{aligned} 0\ge & {} c_1|u^{{\delta },i}_n|_{p+1}^2+(a_0{\delta }-2^{p-1}{\delta }^p)\int _{\mathrm{supp}\,u^{{\delta },i}_n}u^{{\delta },i}_n dx-2^{p-1}|u^{{\delta },i}_n|^{p+1}_{p+1} \\> & {} c_1|u^{{\delta },i}_n|_{p+1}^2-c_2|u^{{\delta },i}_n|_{p+1}^{p+1} \end{aligned}$$

that implies \(|u^{{\delta },i}_n|_{p+1}\ge \mathrm{const}\,\) \(\forall n\in {\mathbb {N}}\). Then, \({\bar{u}}\) is a function emerging around \((x_1,\ldots ,x_k)\) and it verifies \(\beta _i({\bar{u}})=x_i\) \(\forall i\in \{1,\ldots ,k\}\), by the \(L^2\)-convergence of the emerging parts.

Now, according to Proposition 2.3, let \(t_i\in (0,\infty )\), \(i\in \{1,\ldots ,k\}\), be such that \({\hat{u}}={\bar{u}}_{\delta }+\sum _{i=1}^kt_i{\bar{u}}^{{\delta },i} \in S_{x_1,\ldots ,x_k}\). Then we have

$$\begin{aligned} f_k(x_1,\ldots ,x_k)\le & {} E({\hat{u}})=E\left( {\bar{u}}_{\delta }+\sum _{i=1}^kt_i{\bar{u}}^{{\delta },i}\right) \\\le & {} \liminf \limits _{n\rightarrow \infty }E\left( (u_n)_{\delta }+ \sum _{i=1}^kt_iu^{{\delta },i}_n\right) \\\le & {} \liminf \limits _{n\rightarrow \infty }E( u_n) = \liminf \limits _{n\rightarrow \infty }f_k(x_1^n,\ldots ,x_k^n), \end{aligned}$$

where the first inequality follows from the definition of \(f_k\), the second one from the weak lower semicontinuity pointed out in Remark 2.2 and the last one holds because \(u_n\in S_{x_1^n,\ldots ,x_k^n}\), see (2.11). Thus, \(f_k\) is also lower semicontinuous and the proof is complete. \(\square \)

3 The min-max argument

In this section we use a min-max argument to obtain suitable k-bump functions that in Sect. 5 will be proved to be solutions.

Having fixed \(\delta > 0\) and \(R_\delta > 0\) as in (2.4) and (2.5) respectively, we introduce the following subsets of the set \(D_k\) defined by (2.7). For all \(\sigma > 0\) and for all \(k\ge 2\), let us set, \(\forall \rho \ge 0\) and \(\forall (\theta _1,\ldots ,\theta _k)\in [S(0,1)]^k\),

$$\begin{aligned} D^{k,\sigma }(\rho ,\theta _1,\ldots ,\theta _k)= & {} \left\{ (x_1,\ldots ,x_k)\in D_k\ :\ \left[ {1\over k}\sum _{i=1}^k|x_i|^2\right] ^{1/2}=\rho ,\right. \nonumber \\&\left. (1+2\sigma )^{-1}\rho \le |x_i|\le (1+2\sigma )\rho ,\ {x_i\over |x_i|}=\theta _i\ \text{ for } i=1,\ldots ,k\right\} .\nonumber \\ \end{aligned}$$
(3.1)

Then, we define on

$$\begin{aligned} D^{k,\sigma }= \{(\rho ,\theta _1,\ldots ,\theta _k)\ :\ \rho \ge 0,\ \theta _i\in S(0,1)\ \text{ for } i=1,\ldots ,k,\ D^{k,\sigma }(\rho ,\theta _1,\ldots ,\theta _k)\ne \emptyset \}\nonumber \\ \end{aligned}$$
(3.2)

the continuous function \(g^{k,\sigma }:D^{k,\sigma }\rightarrow {\mathbb {R}}\) by setting

$$\begin{aligned} g^{k,\sigma }(\rho ,\theta _1,\ldots ,\theta _k)=\min \{f_k(x_1,\ldots ,x_k)\ :\ (x_1,\ldots ,x_k)\in D^{k,\sigma }(\rho ,\theta _1,\ldots ,\theta _k)\},\qquad \end{aligned}$$
(3.3)

where the minimum is achieved because \(f_k\) is a continuous function and \(D^{k,\sigma }(\rho ,\) \(\theta _1,\ldots ,\) \(\theta _k)\) is a compact subset of \(({\mathbb {R}}^N)^k\). The number \(\sigma > 0\), representing the width of the annulus to which the points \(x_1,\dots ,x_k\) belong, will be fixed later.
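As a simple example of these sets, note that for every \(\rho > 0\) and \(\theta _1,\ldots ,\theta _k\in S(0,1)\) with \(\rho \,|\theta _i-\theta _j|\ge 3R_{\delta }\) for \(i\ne j\), the configuration lying exactly on the sphere of radius \(\rho \),

$$\begin{aligned} (x_1,\ldots ,x_k)=(\rho \theta _1,\ldots ,\rho \theta _k), \end{aligned}$$

belongs to \(D^{k,\sigma }(\rho ,\theta _1,\ldots ,\theta _k)\) for every \(\sigma > 0\): indeed \(\left[ {1\over k}\sum _{i=1}^k|\rho \theta _i|^2\right] ^{1/2}=\rho \) and \(|x_i|=\rho \) trivially satisfies the two-sided bound in (3.1). Configurations of this type are used, for instance, in the proof of Proposition 4.1 (in the lines preceding (4.7)).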

Proposition 3.1

Assume that the potential a(x) satisfies conditions (1.3) and (1.4) and let \(k\ge 2\). Then, for every \(\sigma > 0\) we have

$$\begin{aligned} \sup _{D^{k,\sigma }}g^{k,\sigma }> k\, E_\infty (w). \end{aligned}$$
(3.4)

Moreover, the supremum in (3.4) is achieved. In particular, there exists \((x_1^k,\ldots ,x_k^k)\) in \(D_k\) (depending also on \(\sigma \), even if, to shorten notation, we do not indicate this explicitly) fulfilling the following property: setting \(r_k{:=}\left[ {1\over k}\sum _{i=1}^k|x_i^k|^2\right] ^{1/2}\), we have

$$\begin{aligned} (1+2\sigma )^{-1}r_k\le |x_i^k|\le (1+2\sigma ) r_k\quad \text{ for } i=1,\ldots ,k \end{aligned}$$
(3.5)

so that \((x^k_1,\dots ,x^k_k)\in D^{k,\sigma }\left( r_k,\frac{x^k_1}{|x^k_1|},\dots ,\frac{x^k_k}{|x^k_k|}\right) \), and it is such that

$$\begin{aligned} f_k(x_1^k,\ldots , x_k^k)=g^{k,\sigma }\left( r_k, {x_1^k\over |x_1^k |},\ldots , {x_k^k\over | x_k^k|}\right) =\max _{D^{k,\sigma }}g^{k,\sigma }. \end{aligned}$$
(3.6)

Proof

For the proof we proceed as follows. First we prove (3.4) and then we infer that, as a consequence, the supremum \(\sup _{D^{k,\sigma }}g^{k,\sigma }\) is achieved, that is there exists \((r_k,\theta _1,\ldots ,\theta _k)\) in \(D^{k,\sigma }\) such that

$$\begin{aligned} g^{k,\sigma }(r_k,\theta _1,\ldots ,\theta _k)=\max _{D^{k,\sigma }}g^{k,\sigma }. \end{aligned}$$

In particular, we show that there exist \((x_1^k,\ldots ,x_k^k)\) in \(D_k\) and \(u_k\) in \(S_{x_1^k,\ldots ,x_k^k}\) satisfying \((x_1^k,\) \(\ldots ,x_k^k)\in D^{k,\sigma }(r_k,\theta _1,\ldots ,\theta _k)\) with \(r_k=\left[ \frac{1}{k}\sum _{i=1}^k |x^k_i|^2\right] ^{1/2}\) and \(\theta _i=\frac{x^k_i}{|x^k_i|}\), such that

$$\begin{aligned} E(u_k)=f_k(x_1^k,\ldots ,x_k^k)=g^{k,\sigma }(r_k,\theta _1,\ldots ,\theta _k). \end{aligned}$$

In order to prove that (3.4) holds, let us choose \({\tilde{\theta }}_1,\ldots ,{\tilde{\theta }}_k\) in S(0, 1) such that \({\tilde{\theta }}_i\ne {\tilde{\theta }}_j\) for \(i\ne j\). Then, there exists \({\tilde{\rho }}> 0\) such that \((\rho , {\tilde{\theta }}_1,\ldots ,{\tilde{\theta }}_k)\in D^{k,\sigma }\) \(\forall \rho \ge {\tilde{\rho }}\). So, by Proposition 2.6 and (3.3), for all \(\rho \ge {\tilde{\rho }}\), choose \(({\tilde{x}}_{1,\rho },\ldots ,{\tilde{x}}_{k,\rho })\) in \(D^{k,\sigma }(\rho ,{\tilde{\theta }}_1,\ldots ,{\tilde{\theta }}_k)\) and \({\tilde{u}}_\rho \) in \(S_{{\tilde{x}}_{1,\rho },\ldots ,{\tilde{x}}_{k,\rho }}\) such that

$$\begin{aligned} E({\tilde{u}}_\rho )=f_k({\tilde{x}}_{1,\rho },\ldots ,{\tilde{x}}_{k,\rho })=g^{k,\sigma }(\rho ,{\tilde{\theta }}_1,\ldots ,{\tilde{\theta }}_k). \end{aligned}$$
(3.7)

Notice that \(\lim _{\rho \rightarrow \infty }|{\tilde{x}}_{i,\rho }|=\infty \) for \(i=1,\ldots ,k\) and \(\lim _{\rho \rightarrow \infty }|{\tilde{x}}_{i,\rho }-{\tilde{x}}_{j,\rho }|=\infty \) for \(i\ne j\) because \(\tilde{\theta }_i\ne {\tilde{\theta }}_j\). Then, since \(\lim \limits _{|x|\rightarrow \infty }a(x)= a_\infty \), by (2.20), (2.15) and (2.14), we obtain

$$\begin{aligned} \lim \limits _{\rho \rightarrow \infty } E({\tilde{u}}_\rho )= & {} \lim \limits _{\rho \rightarrow \infty } \sum _{i=1}^k E(w(\cdot -{\tilde{x}}_{i,\rho })) \nonumber \\= & {} \lim \limits _{\rho \rightarrow \infty } \sum _{i=1}^k E_\infty (w(\cdot -{\tilde{x}}_{i,\rho }))=k E_\infty (w). \end{aligned}$$
(3.8)

Our next goal is to show that \(E({\tilde{u}}_\rho )\) approaches \(k E_\infty (w)\) from above as \(\rho \rightarrow \infty \). Notice that

$$\begin{aligned} \liminf _{\rho \rightarrow \infty }{|{\tilde{x}}_{i,\rho }-{\tilde{x}}_{j,\rho }|\over \rho } > 0\qquad \text{ for } i\ne j \end{aligned}$$

because \(\tilde{\theta }_i\ne {\tilde{\theta }}_j\) and that, if we set

$$\begin{aligned} r_\rho ={1\over 4}\min \big \{|{\tilde{x}}_{i,\rho }-{\tilde{x}}_{j,\rho }|\ :\ i,j\in \{1,\ldots ,k\},\ i\ne j\big \}, \end{aligned}$$
(3.9)

we have

$$\begin{aligned} \liminf _{\rho \rightarrow \infty }{r_\rho \over \rho }> 0. \end{aligned}$$
(3.10)

Then, let us define

$$\begin{aligned} u_{\rho ,i}(x)=\zeta \left( {|x-{\tilde{x}}_{i,\rho }|\over r_\rho }\right) \, {\tilde{u}}_\rho (x),\qquad x\in {\mathbb {R}}^N, \end{aligned}$$
(3.11)

where \(\zeta \in {\mathcal {C}}^\infty ([0,\infty ),[0,1])\) is a cut-off function such that \(\zeta (t)=1\) if \(t\in [0,1]\), \(\zeta (t)=0\) if \(t\in [2,\infty )\), so that \(\mathrm{supp}\,(u_{\rho ,i})\subset B(\tilde{x}_{i,\rho },2 r_\rho )\). By (2.15), (2.14) and (3.10) there exist constants \({\hat{c}},c_k> 0\) such that

$$\begin{aligned} E({\tilde{u}}_\rho )=\sum _{i=1}^k E(u_{\rho ,i})+O(e^{-{\hat{c}}\, r_\rho })=\sum _{i=1}^kE(u_{\rho ,i})+O(e^{-c_k\rho }), \end{aligned}$$
(3.12)

where the constant \(c_k\) depends only on \(\tilde{\theta }_1,\ldots ,{\tilde{\theta }}_k\) and \(\sigma \), see also Remark 3.2. In order to evaluate \(E(u_{\rho ,i})\), let us consider \(t^\infty _i\in (0,\infty )\) such that \(v_{\rho ,i}{:=}(u_{\rho ,i})_{\delta }+t^\infty _i u^{\delta }_{\rho ,i}\in S^\infty _{{\tilde{x}}_{i,\rho }}\) (see Proposition 2.3). Then, taking also into account Proposition 2.3 and Lemma 2.8, for every \(i\in \{1,\ldots ,k\}\) and large \(\rho \) we have

$$\begin{aligned} E(u_{\rho ,i})\ge & {} E(v_{\rho ,i}) = E_\infty (v_{\rho ,i})+\displaystyle {\int _{B({\tilde{x}}_{i,\rho },2r_\rho )}(a(x)-a_\infty ) v_{\rho ,i}^2dx} \nonumber \\\ge & {} E_\infty (w)+\displaystyle {\int _{B({\tilde{x}}_{i,\rho },R_{\delta })}(a(x)-a_\infty ) (v_{\rho ,i}^\delta )^2dx}. \end{aligned}$$
(3.13)

By (2.20), \(|v^{\delta }_{\rho ,i}-w^{\delta }_{{\tilde{x}}_{i,\rho }}|_{2, B({\tilde{x}}_{i,\rho },R_{\delta })}\rightarrow 0\) as \(\rho \rightarrow \infty \); then (1.4) implies

$$\begin{aligned} \lim _{\rho \rightarrow \infty }\left[ \int _{B({\tilde{x}}_{i,\rho },R_{\delta })}(a(x)-a_\infty )(v^{\delta }_{\rho ,i})^2dx\right] e^{c_k\rho }=\infty . \end{aligned}$$
(3.14)

Setting \(\alpha (\rho )=\sum _{i=1}^k\int _{B({\tilde{x}}_{i,\rho },R_{\delta })}(a(x)-a_\infty )(v^{\delta }_{\rho ,i})^2dx\), from (3.12) and (3.13) it follows that

$$\begin{aligned} E({\tilde{u}}_\rho )\ge k\, E_\infty (w)+\alpha (\rho )+O(e^{-c_k\rho }), \end{aligned}$$

so we have the desired asymptotic behaviour and (3.4) follows from (3.14).

Now, we are going to prove that \(\sup _{D^{k,\sigma }} g^{k,\sigma }\) is achieved. Consider a sequence \(((\rho _n,\theta _{1,n},\) \(\ldots ,\) \(\theta _{k,n}))_n\) in \(D^{k,\sigma }\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty } g^{k,\sigma }( \rho _n,\theta _{1,n},\ldots ,\theta _{k,n})=\sup _{D^{k,\sigma }} g^{k,\sigma } \end{aligned}$$

and, for all \(n\in {\mathbb {N}}\), choose, see (3.3),

$$\begin{aligned} (x_{1,n},\ldots , x_{k,n})\in D^{k,\sigma } (\rho _n,\theta _{1,n},\ldots ,\theta _{k,n}) \end{aligned}$$
(3.15)

such that \(f_k(x_{1,n},\ldots , x_{k,n} )=g^{k,\sigma }( \rho _n,\theta _{1,n},\ldots ,\theta _{k,n})\).

Let us prove that the sequence \({(\rho _{n})}_{n}\) is bounded. Arguing by contradiction, assume that, up to a subsequence, \(\lim _{n\rightarrow \infty }\rho _{n}=\infty \).

For every \(i\in \{1,\ldots ,k\}\) and \(n\in {\mathbb {N}}\), let \(t_{i,n}\in (0,\infty )\) be such that \({\tilde{w}}_{x_{i,n}}{:=}(w_{{x_{i,n}}})_{{\delta }}+t_{i,n} w_{x_{i,n}}^{\delta }\in S_{{x_{i,n}}}\). Notice that since \(\rho _n\rightarrow \infty \), as \(n\rightarrow \infty \), and \(a(x){\longrightarrow }a_\infty \), as \(|x|\rightarrow \infty \), then \(t_{i,n}\rightarrow 1\), because w is the ground state of \(E_\infty \), so that \(\Vert {\tilde{w}}_{x_{i,n}}-w_{x_{i,n}}\Vert \rightarrow 0\). Hence

$$\begin{aligned} \lim _{n\rightarrow \infty }E({\tilde{w}}_{x_{i,n}} )=E_\infty (w). \end{aligned}$$
(3.16)

We observe that

$$\begin{aligned} {\tilde{w}}_{x_{1,n}}\vee \ldots \vee {\tilde{w}}_{x_{k,n}}\in S_{{x_{1,n}},\ldots ,x_{{k,n}}}, \end{aligned}$$

and so, by the coercivity of E on the submerged parts, we obtain

$$\begin{aligned} f_k(x_{1,n},\ldots ,x_{k,n} )\le & {} E({\tilde{w}}_{{x_{1,n}}}\vee \ldots \vee {\tilde{w}}_{{x_{k,n}}} )\nonumber \\= & {} E({\tilde{w}}_{x_{1,n}})+E ({\tilde{w}}_{{x_{2,n}}}\vee \ldots \vee {\tilde{w}}_{{x_{k,n}}} )\nonumber \\&-E({\tilde{w}}_{x_{1,n}}\wedge ({\tilde{w}}_{{x_{2,n}}}\vee \ldots \vee {\tilde{w}}_{{x_{k,n}}}) )\nonumber \\\le & {} E({\tilde{w}}_{x_{1,n}})+E ({\tilde{w}}_{{x_{2,n}}}\vee \ldots \vee {\tilde{w}}_{{x_{k,n}}} )\nonumber \\&\vdots&\vdots \nonumber \\\le & {} \sum _{i=1}^k E({\tilde{w}}_{{x_{i,n}}} ). \end{aligned}$$
(3.17)

By (3.16) and (3.17) we infer that

$$\begin{aligned} \limsup _{n\rightarrow \infty } f_k(x_{1,n},\ldots , x_{k,n} )\le k\, E_\infty (w), \end{aligned}$$

which is in contradiction with (3.4). Therefore, the sequence \((\rho _n)_n\) must be bounded and (up to a subsequence)

$$\begin{aligned} \lim _{n\rightarrow \infty }\rho _n=r_k,\quad \lim _{n\rightarrow \infty }(x_{1,n},\ldots ,x_{k,n})=(x^k_1,\ldots ,x^k_k) \end{aligned}$$

for suitable \(r_k> 0\) and \((x^k_1,\ldots ,x^k_k)\) in \(D_k\). Thus, all the assertions of Proposition 3.1 hold for \(r_k\) (which turns out to be equal to \(\left[ \frac{1}{k}\sum _{i=1}^k |x^k_i|^2\right] ^{1/2}\)) and \((x^k_1,\ldots ,x^k_k)\), by the continuity of \(f_k\) (see Proposition 2.10 and the relation after (3.15)). \(\square \)

Our aim will be to prove that every function \(u_k\in S_{x^k_1,\ldots ,x^k_k}\), such that \(E(u_k)=f_k(x^k_1,\ldots ,\) \(x^k_k)=\max _{D^{k,\sigma }}g^{k,\sigma }\), is a solution of problem (1.1) for k large enough and \(\sigma > 0\) suitably chosen.

Remark 3.2

Notice that in the proof of Proposition 3.1 the existence of the positive constant \(c_k\) (which only depends on \(\tilde{\theta }_1, \dots , \tilde{\theta }_k\) and \(\sigma \)) in (3.12) is strictly related to the fact that, since \(\tilde{\theta }_i\ne \tilde{\theta }_j\) for \(i\ne j\), the minimum

$$\begin{aligned} {\tilde{\mu }}_k({\tilde{\theta }}_1,\ldots ,{\tilde{\theta }}_k) =\min \{|{\tilde{\theta }}_i-{\tilde{\theta }}_j|\ :\ i,j\in \{1,\ldots ,k\}, i\ne j\} \end{aligned}$$

is positive. Moreover, \( c_{k}\rightarrow 0\) as \({\tilde{\mu }}_k({\tilde{\theta }}_1,\ldots ,{\tilde{\theta }}_k)\rightarrow 0\). In fact, since the term \(O(e^{-{\hat{c}} r_\rho })\) in (3.12) (due to (3.11)) takes into account the values of \(\tilde{u}_\rho \) out of \(\bigcup _{i=1}^k B (\tilde{x}_{i,\rho },{r_\rho })\) and since \((\tilde{x}_{1,\rho }, \dots , \tilde{x}_{k,\rho })\in D^{k,\sigma }(\rho , \tilde{\theta }_1,\dots ,\tilde{\theta }_k)\), we have

$$\begin{aligned} r_\rho \ge {1\over 4}{\tilde{\mu }}_k({\tilde{\theta }}_1,\ldots ,{\tilde{\theta }}_k)\, (1+2\sigma )^{-1}\rho . \end{aligned}$$

Then, \( {\hat{c}} r_\rho \ge {{\hat{c}}\over 4} \tilde{\mu }_k(\tilde{\theta }_1,\dots ,\tilde{\theta }_k) (1+2 \sigma )^{-1} \rho \), so that the constant \(c_k\) has to be chosen in the interval \(] 0,{{\hat{c}}\over 4} {\tilde{\mu }}_k({\tilde{\theta }}_1,\ldots ,{\tilde{\theta }}_k)\, (1+2\sigma )^{-1}]\) where \({\hat{c}}\) is the constant in (3.12).

Moreover, it is clear that the maximum

$$\begin{aligned} {\tilde{\mu }}_k=\max \{{\tilde{\mu }}_k(\theta _1,\ldots ,\theta _k)\ :\ \theta _i\in S(0,1)\ \text{ for } i=1,\ldots ,k\} \end{aligned}$$

tends to 0 as \(k\rightarrow \infty \). Therefore, \( c_k\) must tend to 0 as \(k\rightarrow \infty \). This fact explains why in this paper we need condition (1.4), while the decay condition

$$\begin{aligned} {\exists } {\bar{\eta }} \in (0, {\sqrt{a_\infty }}) \quad \text{ such } \text{ that } \quad \lim _{|x|\rightarrow \infty } [a(x)-a_\infty ] e^{{\bar{\eta }} |x|} = \infty , \end{aligned}$$

used in [10,11,12,13] would not be sufficient.
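The fact that \({\tilde{\mu }}_k\rightarrow 0\) can also be quantified by an elementary packing argument, which we state only for orientation since only the limit is used: for any \(\theta _1,\ldots ,\theta _k\in S(0,1)\) the sets \(B\left( \theta _i,{1\over 2}{\tilde{\mu }}_k(\theta _1,\ldots ,\theta _k)\right) \cap S(0,1)\), \(i=1,\ldots ,k\), are pairwise disjoint and, for \({\tilde{\mu }}_k(\theta _1,\ldots ,\theta _k)\) small, each of them has \((N-1)\)-dimensional measure at least \(c\,[{\tilde{\mu }}_k(\theta _1,\ldots ,\theta _k)]^{N-1}\), so that

$$\begin{aligned} c\, k\,{\tilde{\mu }}_k^{\, N-1}\le |S(0,1)|_{N-1},\qquad \text{ hence }\quad {\tilde{\mu }}_k\le C\, k^{-{1\over N-1}}\longrightarrow 0\quad \text{ as } k\rightarrow \infty , \end{aligned}$$

where \(|\cdot |_{N-1}\) denotes the \((N-1)\)-dimensional measure of subsets of S(0, 1) and c, C are dimensional constants; the same kind of argument is used for \(\Gamma _k\) at the beginning of the proof of Lemma 4.2.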

The next remark roughly describes the properties on which we base the idea of the proof of Theorem 1.1 and Proposition 1.2, which will be developed in Sects. 4 and 5.

Remark 3.3

Arguing as in the proof of Proposition 3.1, one can prove also the following assertion.

Let \(x_{1,n},\ldots , x_{k,n}\) in \({\mathbb {R}}^N\) and \(\rho _n=\left[ {1\over k}\sum _{i=1}^k |x_{i,n}|^2\right] ^{1/2}> 0\) be such that \(\lim \limits _{n\rightarrow \infty }\rho _n=\infty \),

$$\begin{aligned} (1+2\sigma )^{-1}\rho _n\le |x_{i,n}|\le (1+2\sigma )\rho _n \text{ for } i=1,\ldots ,k, \end{aligned}$$

and

$$\begin{aligned} f_k(x_{1,n},\ldots ,x_{k,n})= & {} g^{k,\sigma }\left( \rho _n,{x_{1,n}\over |x_{1,n} |},\ldots ,{x_{k,n}\over |x_{k,n} |}\right) \\= & {} \max \big \{g^{k,\sigma }(\rho _n,\theta _1,\ldots ,\theta _k)\ :\ (\rho _n,\theta _1,\ldots ,\theta _k)\in D^{k,\sigma }\big \}\quad \forall n\in {\mathbb {N}}. \end{aligned}$$

Then, for all \(k\ge 2\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \min \big \{|x_{i,n}-x_{j,n}|\ :\ i,j\in \{1,\ldots ,k\},\ i\ne j\big \}=\infty . \end{aligned}$$
(3.18)

In fact,

$$\begin{aligned} \liminf _{n\rightarrow \infty } \min \big \{|x_{i,n}-x_{j,n}|\ :\ i,j\in \{1,\ldots ,k\},\ i\ne j\big \}< \infty \end{aligned}$$

would imply

$$\begin{aligned} \liminf _{n\rightarrow \infty } f_k(x_{1,n},\ldots ,x_{k,n})< k \, E_\infty (w), \end{aligned}$$

in contradiction with the fact that

$$\begin{aligned} \liminf _{n\rightarrow \infty }\max \{g^{k,\sigma }(\rho _n,\theta _1,\ldots ,\theta _k)\ :\ (\rho _n,\theta _1,\ldots ,\theta _k)\in D^{k,\sigma }\big \}\ge k \, E_\infty (w), \end{aligned}$$

that follows arguing exactly as in the proof of Proposition 3.1 (see (3.7) and (3.8)).

Moreover, since (3.18) holds, we have

$$\begin{aligned} \lim _{n\rightarrow \infty } f_k(x_{1,n},\ldots ,x_{k,n})=k\, E_\infty (w) \end{aligned}$$

as one can verify by direct computation (arguing as in (3.8)).

Thus we infer that, as a consequence of the assumptions (1.3) and (1.4), the number \(r_k\) in Proposition 3.1 must be large enough so that the distances between the points \(x_1^k,\ldots ,x_k^k\) may be large, but it cannot be too large, otherwise we would have \(f_k(x_1^k,\ldots ,x_k^k)< \max _{D^{k,\sigma }}g^{k,\sigma }\) in contradiction with (3.6).

As \(k\rightarrow \infty \), \(r_k\) and \(\min \{|x^k_i|\ :\ i\in \{1,\ldots , k\}\}\) tend to infinity because of the definition of \(D_k\) and \(D^{k,\sigma }\). Since \(a(x)\rightarrow a_\infty \) from above as \(|x|\rightarrow \infty \) and

$$\begin{aligned} E(u)=E_\infty (u)+{1\over 2}\int _{{\mathbb {R}}^N}(a(x)-a_\infty )u^2dx, \end{aligned}$$
(3.19)

the integral term in (3.19) pushes the points \(x_1^k,\ldots ,x^k_k\) toward infinity, so an equilibrium may be reached only thanks to the attractive interaction between the points \(x^k_1,\ldots ,x^k_k\) due to the term \(E_\infty (u)\).

In order to find an equilibrium configuration of the points \(x^k_1,\ldots , x^k_k\), we need to study their asymptotic behaviour as \(k\rightarrow \infty \), under suitable assumptions on the behaviour of a(x) as \(|x|\rightarrow \infty \).

Therefore, in the next section we show that conditions (1.5) and (1.6), combined with (3.6), imply that the points \({x^k_1\over r_k},\ldots , {x^k_k\over r_k}\) tend, as \(k\rightarrow \infty \), to be uniformly distributed over the whole sphere S(0, 1) and that the points \({x^k_1},\ldots , {x^k_k}\) tend to be uniformly close to the sphere \(S(0,r_k)\), namely \(\lim \limits _{k\rightarrow \infty }\max \{|\, |x^k_i|-r_k|\ :\ i\in \{1,\ldots ,k\}\}=0\).

4 Asymptotic estimates

In this section we describe the asymptotic behaviour, as \(k\rightarrow \infty \), of the k-tuples \((x^k_1,\dots ,x^k_k)\) provided by Proposition 3.1 and of the functions \(u_k\) minimizing the energy functional E in the set \(S_{x_1^k,\ldots , x_k^k }\), that is, such that \(E(u_k)=f_k(x^k_1,\dots ,x^k_k)\).

Proposition 4.1

Assume that the potential a(x) satisfies conditions (1.3) and (1.4). Let \(((x_1^k,\ldots , x_k^k))_k\) be a sequence provided by Proposition 3.1 and, for all \(k\in {\mathbb {N}}\), let \(u_k\) be a function in \(S_{x_1^k,\ldots , x_k^k}\) such that \(E(u_k)=f_k( x_1^k,\ldots , x_k^k )=\max _{D^{k,\sigma }}g^{k,\sigma }\) (see (3.6)).

Then, the following properties hold:

$$\begin{aligned}&\lim _{k\rightarrow \infty } \min \{|x_i^k|\ :\ i=1,\ldots ,k\}=\infty \end{aligned}$$
(4.1)
$$\begin{aligned}&\lim _{k\rightarrow \infty } \min \{|x_i^k-x_j^k|\ :\ i\ne j,\ i,j=1,\ldots ,k\}=\infty \end{aligned}$$
(4.2)
$$\begin{aligned}&\lim _{k\rightarrow \infty }\sup \{|u_k(x+x_i^k)-w(x)|\ :\ |x|\le R,\ i=1,\ldots ,k\}=0\quad \forall R> 0. \end{aligned}$$
(4.3)

Moreover, there exists \({\bar{k}}> 0\) such that, for all \(k\ge {\bar{k}}\),

$$\begin{aligned} -\Delta u_k(x)+a(x)u_k(x)=u^p_k(x)+\sum _{i=1}^k u_k^{{\delta },i}(x)[\lambda _i^k\cdot (x-x_i^k)]\quad \forall x\in {\mathbb {R}}^N \end{aligned}$$
(4.4)

where \(u_k^{{\delta },i}\) is the projection of \(u^{\delta }_k=u_k-u_k\wedge {\delta }\) on \(H^1_0(B(x_i^k,R_{\delta }))\), \(\lambda _1^k,\ldots ,\lambda _k^k\) are the Lagrange multipliers of \(u_k\) related to \(S_{x^k_1,\dots ,x^k_k}\) and, finally, \(u_k\rightarrow 0\) uniformly on the compact subsets of \({\mathbb {R}}^N\), as \(k\rightarrow \infty \).

Proof

Property (4.1) is a direct consequence of the definitions of \(D_k\) and \(D^{k,\sigma }\).

In order to prove (4.2) we argue by contradiction and assume that

$$\begin{aligned} \liminf _{k\rightarrow \infty } \min \{|x_i^k-x_j^k|\ :\ i\ne j,\ i,j=1,\ldots ,k\}<\infty . \end{aligned}$$

Without any loss of generality, we can assume that

$$\begin{aligned} \min \{|x_i^k-x_j^k|\ :\ i\ne j,\ i,j=1,\ldots ,k\}=|x^k_1-x^k_2|\qquad \forall k\ge 2. \end{aligned}$$

So (up to a subsequence) we have \(\lim _{k\rightarrow \infty }|x^k_1-x^k_2|<\infty \).

Let us set

$$\begin{aligned} {\mathcal {L}}_k=\max _{\theta \in S(0,1)} \min \left\{ \left| \theta -{x_i^k\over |x_i^k|}\right| \ :\ i=3,\ldots ,k\right\} \qquad \forall k\ge 3. \end{aligned}$$

We claim that \(\lim _{k\rightarrow \infty }r_k\cdot {\mathcal {L}}_k=\infty \). In fact, arguing by contradiction, assume that (up to a subsequence)

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }r_k\cdot {\mathcal {L}}_k<\infty . \end{aligned}$$
(4.5)

In this case, if we set

$$\begin{aligned} {\delta }^k_i=\min \left\{ \left| {x_i^k\over |x_i^k|}-{x_j^k\over |x_j^k|}\right| \ :\ j\in \{1,\ldots ,k\},\ j\ne i\right\} \qquad \forall i\in \{1,\ldots ,k\},\ \forall k\ge 2, \end{aligned}$$

we must have also

$$\begin{aligned} \lim _{k\rightarrow \infty }r_k\max \{{\delta }^k_i\ :\ i=1,\ldots ,k\}<\infty \end{aligned}$$
(4.6)

otherwise, for all \(k\ge 3\) we could choose \({\bar{\theta }}_k\in S(0,1)\) such that (up to a subsequence)

$$\begin{aligned} \lim _{k\rightarrow \infty } r_k\cdot \min \left\{ \left| \bar{\theta }_k -{x_i^k\over |x_i^k|}\right| \ :\ i=3,\ldots ,k\right\} =\infty \end{aligned}$$

in contradiction with (4.5).

As a consequence of (4.6), by using (4.1) and \(a(x)\rightarrow a_\infty \), and arguing as in Proposition 5.3 in [11], since \(\left( r_k \frac{x^k_1}{|x^k_1|}, \dots , r_k \frac{x^k_k}{|x^k_k|}\right) \in D^{k,\sigma }\left( r_k, \frac{x^k_1}{|x^k_1|}, \dots , \frac{x^k_k}{|x^k_k|}\right) \), we obtain by (3.3)

$$\begin{aligned} \liminf _{k\rightarrow \infty } {1\over k}g^{k,\sigma }\left( r_k, {x_1^k\over |x_1^k|},\ldots ,{x_k^k\over |x_k^k|}\right) \le \liminf _{k\rightarrow \infty } {1\over k}f_k\left( r_k\cdot {x_1^k\over |x_1^k|},\ldots ,r_k\cdot {x_k^k\over |x_k^k|}\right) < E_\infty (w)\nonumber \\ \end{aligned}$$
(4.7)

in contradiction with (3.4) (see Proposition 3.1), which implies

$$\begin{aligned} \liminf _{k\rightarrow \infty } {1\over k}g^{k,\sigma }\left( r_k, {x_1^k\over |x_1^k|},\ldots ,{x_k^k\over |x_k^k|}\right) \ge E_\infty (w). \end{aligned}$$

Thus, we have proved that \(\lim _{k\rightarrow \infty }r_k\cdot {\mathcal {L}}_k=\infty \). As a consequence, for all \(k\ge 3\) we can choose \(\theta ^k_1,\theta ^k_2\) in S(0, 1) such that

$$\begin{aligned}&\lim \limits _{k\rightarrow \infty }r_k\, |\theta ^k_1-\theta ^k_2|=\infty \quad \text{ and } \nonumber \\&\lim \limits _{k\rightarrow \infty }r_k\, \min \left\{ \left| \theta ^k_i-{x^k_j\over |x^k_j|}\right| \ :\ j=3,\ldots ,k\right\} =\infty \quad \text{ for } i=1,2. \end{aligned}$$
(4.8)

Then, consider \((y^k_1,\ldots ,y^k_k)\) in \(D^{k,\sigma }\left( r_k,\theta ^k_1,\theta ^k_2, {x^k_3\over |x^k_3|},\ldots ,{x^k_k\over |x^k_k|}\right) \) such that, see (3.3),

$$\begin{aligned} f_k(y^k_1,\ldots ,y^k_k)=g^{k,\sigma }\left( r_k,\theta ^k_1,\theta ^k_2, {x^k_3\over |x^k_3|},\ldots ,{x^k_k\over |x^k_k|}\right) \end{aligned}$$
(4.9)

and two points \(z^k_1,z^k_2\) in \({\mathbb {R}}^N\) such that \((z_1^k,z_2^k,y_3^k, \ldots ,y_k^k)\in D_k\),

$$\begin{aligned} {z^k_i\over |z^k_i |}= {x^k_i\over |x^k_i |}\ \text{ for } i=1,2 \end{aligned}$$

and

$$\begin{aligned} |z_1^k|^2+|z_2^k|^2=|y_1^k|^2+|y_2^k|^2,\qquad |\, |z_1^k|-|z_2^k|\,|\le 4\, R_{\delta }, \end{aligned}$$

so that \((z^k_1,z^k_2,y^k_3,\dots ,y^k_k)\in D^{k,\sigma }\left( r_k,\frac{x^k_1}{|x^k_1|},\dots , \frac{x^k_k}{|x^k_k|}\right) \). Notice that

$$\begin{aligned} \limsup _{k\rightarrow \infty }|z^k_1-z^k_2|<\infty \end{aligned}$$
(4.10)

because \(\lim _{k\rightarrow \infty }|x^k_1-x^k_2|<\infty \). Moreover, from (4.8) we obtain

$$\begin{aligned} \lim _{k\rightarrow \infty }|y^k_1-y^k_2|=\infty \quad \text{ and } \lim _{k\rightarrow \infty }\min \{|y^k_i-y^k_j|\ :\ j=3,\ldots ,k\}=\infty \quad \text{ for } i=1,2.\nonumber \\ \end{aligned}$$
(4.11)

From (4.10) and (4.11), by the arguments used for (4.7), we infer that

$$\begin{aligned} \liminf _{k\rightarrow \infty }[f_k(y^k_1,\ldots ,y^k_k) -f_k(z^k_1,z^k_2,y^k_3,\ldots ,y^k_k)]> 0, \end{aligned}$$
(4.12)

which implies (by (4.9) and the definition of \(g^{k,\sigma }\) in (3.3)) that

$$\begin{aligned} \liminf _{k\rightarrow \infty }\left[ g^{k,\sigma }\left( r_k,\theta ^k_1,\theta ^k_2,{x^k_3\over |x^k_3|},\ldots ,{x^k_k\over |x^k_k|}\right) - g^{k,\sigma }\left( r_k,{x^k_1\over |x^k_1|},\ldots ,{x^k_k\over |x^k_k|}\right) \right] > 0. \end{aligned}$$

Therefore, we have

$$\begin{aligned} g^{k,\sigma }\left( r_k,\theta _1^k,\theta _2^k,{x_3^k\over |x_3^k|},\ldots , {x_k^k\over |x_k^k|}\right) > g^{k,\sigma }\left( r_k,{x_1^k\over |x_1^k|},\ldots , {x_k^k\over |x_k^k|}\right) \end{aligned}$$

for k large enough, in contradiction with the fact that \(g^{k,\sigma }\left( r_k,{x_1^k\over |x_1^k|},\ldots , {x_k^k\over |x_k^k|}\right) =\max \limits _{D^{k,\sigma }}g^{k,\sigma }\). Thus, (4.2) is proved.

Property (4.3) follows from (2.19) taking into account (4.1) and (4.2). From (4.3), (2.14) and (2.16) we deduce that, for k large enough, the function \(u_k\) solves the equation (4.4). Finally, taking into account (2.17) and (4.1), since \(w(x)\rightarrow 0\) as \(|x|\rightarrow \infty \), from (4.3) we deduce that \(u_k\rightarrow 0\) as \(k\rightarrow \infty \) and the convergence is uniform on the compact subsets of \({\mathbb {R}}^N\). \(\square \)

Having chosen \((x^k_1,\dots ,x^k_k)\in D_k\) as in Proposition 3.1, we set

$$\begin{aligned} \Gamma _k=\min \left\{ \left| {x_i^k\over |x_i^k|}-{x_j^k\over |x_j^k|}\right| \ : \ i,j\in \{1,\ldots ,k\},\ i\ne j\right\} \qquad \forall k\ge 2 \end{aligned}$$
(4.13)

and

$$\begin{aligned} \Lambda _k=\max _{\theta \in S(0,1)}\min \left\{ \left| \theta -{x_i^k\over |x_i^k|}\right| \ :\ i=1,\ldots ,k\right\} \qquad \forall k\ge 1. \end{aligned}$$
(4.14)

Then, the following lemma holds.

Lemma 4.2

Assume that the potential a(x) satisfies the conditions (1.3) and (1.4). Then, for any \(\sigma > 0\) and for any choice of \((x^k_1,\ldots ,x^k_k)\) and \(r_k\) (\(k\in {\mathbb {N}}\)) as in Proposition 3.1,

$$\begin{aligned} \lim _{k\rightarrow \infty }\Gamma _k=0\quad \text{ and } \ \lim _{k\rightarrow \infty }\Gamma _k\cdot r_k=\infty \end{aligned}$$
(4.15)

where \(\Gamma _k\) and \(\Lambda _k\) are the positive numbers defined in (4.13) and (4.14). If we assume in addition that condition (1.5) holds, then there exists \({\tilde{\sigma }}> 0\) such that for all \(\sigma \in ]0,{\tilde{\sigma }}[\) we have

$$\begin{aligned} \limsup _{k\rightarrow \infty }{\Lambda _k\over \Gamma _k}<\infty \end{aligned}$$
(4.16)

(notice that, as \(x_1^k,\ldots ,x_k^k\), also \(\Lambda _k\) and \(\Gamma _k\) depend on the parameter \(\sigma \) introduced to define \(D^{k,\sigma }\)).

Proof

Notice that the balls \(B\left( {x_1^k\over |x_1^k|},{\Gamma _k\over 2}\right) ,\)..., \(B\left( {x_k^k\over |x_k^k|},{\Gamma _k\over 2}\right) \) are pairwise disjoint, so we must have \(\lim _{k\rightarrow \infty }\Gamma _k=0\) because S(0, 1) is a bounded set.
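
Quantitatively, this can be made precise as follows: the corresponding spherical caps \(B\left( {x_i^k\over |x_i^k|},{\Gamma _k\over 2}\right) \cap S(0,1)\), \(i=1,\ldots ,k\), are pairwise disjoint and each of them has \((N-1)\)-dimensional measure at least \(c_N\, \Gamma _k^{N-1}\) for a suitable constant \(c_N> 0\) depending only on N, so that

$$\begin{aligned} k\, c_N\, \Gamma _k^{N-1}\le H_{N-1}(S(0,1)),\qquad \text{ hence }\quad \Gamma _k\le \left( {H_{N-1}(S(0,1))\over c_N\, k}\right) ^{1\over N-1}\rightarrow 0\ \text{ as } k\rightarrow \infty \end{aligned}$$

(this also gives the bound \(\sup _{k\ge 2}k\, \Gamma _k^{N-1}<\infty \) invoked in the proof of Corollary 4.3 below).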

In order to prove that \(\lim _{k\rightarrow \infty }\Gamma _k\cdot r_k=\infty \), we argue by contradiction and assume that (up to a subsequence) \(\lim _{k\rightarrow \infty }\Gamma _k\cdot r_k<\infty \). Without any loss of generality, in the following we assume also that

$$\begin{aligned} \Gamma _k=\left| {x^k_1\over |x^k_1 |}- {x^k_2\over |x^k_2 |}\right| \qquad \forall k\ge 2. \end{aligned}$$
(4.17)

Then, consider two points \(z_1^k\) and \(z_2^k\) in \({\mathbb {R}}^N\) such that \((z_1^k,z_2^k,x_3^k,\ldots ,x_k^k)\in D_k\),

$$\begin{aligned} {z^k_i\over |z^k_i|}={x^k_i\over |x_i^k|}\ \text{ for } i=1,2 \end{aligned}$$

and

$$\begin{aligned} |z^k_1|^2+|z^k_2|^2=|x^k_1|^2+|x_2^k|^2,\qquad |\, |z_1^k|-|z^k_2|\,|< 4\, R_{\delta }, \end{aligned}$$

so that \((z^k_1,z^k_2,x^k_3,\dots ,x^k_k)\in D^{k,\sigma }\left( r_k,\frac{x^k_1}{|x^k_1|},\dots , \frac{x^k_k}{|x^k_k|}\right) \). Notice that \(\lim _{k\rightarrow \infty } \Gamma _k\cdot r_k< \infty \) implies \(\limsup _{k\rightarrow \infty }|z^k_1-z^k_2|< \infty \). Therefore, taking into account (4.1) and (4.2) and arguing as for (4.12), we obtain

$$\begin{aligned} \liminf _{k\rightarrow \infty }[f_k(x_1^k,\ldots ,x_k^k)-f_k(z_1^k,z_2^k,x_3^k, \ldots ,x_k^k)]> 0 \end{aligned}$$

which is a contradiction because

$$\begin{aligned} f_k(x_1^k,\ldots ,x_k^k)=g^{k,\sigma }\left( r_k,{x_1^k\over |x_1^k|},\ldots ,{x^k_k\over |x_k^k|}\right) =\min _{D^{k,\sigma }\left( r_k,\frac{x^k_1}{|x^k_1|},\dots , \frac{x^k_k}{|x^k_k|}\right) }f_k. \end{aligned}$$

Thus (4.15) is completely proved.

For the proof of (4.16) we argue again by contradiction and assume that (up to a subsequence)

$$\begin{aligned} \lim _{k\rightarrow \infty }{\Lambda _k\over \Gamma _k}=\infty . \end{aligned}$$

Therefore, due to (4.15), we can choose \(\theta _1^k\), \(\theta _2^k\) in S(0, 1) such that

$$\begin{aligned}&\lim \limits _{k\rightarrow \infty }{|\theta _1^k-\theta _2^k|\over \Gamma _k}=\infty \quad \text{ and } \nonumber \\&\lim \limits _{k\rightarrow \infty }{1\over |\theta _1^k-\theta _2^k|}\cdot \min \left\{ \left| \theta _i^k-{x_j^k\over |x^k_j|}\right| \ :\ j=1,\ldots ,k\right\} =\infty \ \text{ for } \ i=1,2, \end{aligned}$$
(4.18)

so that \(\left( r_k,\theta ^k_1,\theta ^k_2, \frac{x^k_3}{|x^k_3|},\dots , \frac{x^k_k}{|x^k_k|}\right) \in D^{k,\sigma }\).

Now, consider \((y^k_1,\ldots ,y^k_k)\) in \(D^{k,\sigma }\left( r_k,\theta ^k_1,\theta _2^k,{x^k_3\over |x^k_3|},\ldots , {x^k_k\over |x^k_k|}\right) \) such that

$$\begin{aligned} f_k(y^k_1,\ldots ,y^k_k)=g^{k,\sigma }\left( r_k,\theta ^k_1,\theta ^k_2,{x^k_3\over |x^k_3|},\ldots , {x^k_k\over |x^k_k|}\right) \end{aligned}$$
(4.19)

and the points \(\xi ^k_1\), \(\xi _2^k\) in \({\mathbb {R}}^N\) such that \((\xi ^k_1,\xi ^k_2, y^k_3,\ldots ,y^k_k)\in D_k\), and

$$\begin{aligned} |\xi ^k_i|=|y^k_i|,\qquad {\xi ^k_i\over |\xi ^k_i|}={x^k_i\over |x^k_i|}\qquad \text{ for } \ i=1,2. \end{aligned}$$
(4.20)

We claim that

$$\begin{aligned} f_k(y^k_1,\ldots ,y^k_k)> f_k(\xi ^k_1,\xi ^k_2,y^k_3,\ldots ,y^k_k) \end{aligned}$$
(4.21)

for k large enough. Once (4.21) is proved, we are done because it produces a contradiction. Indeed, since \((\xi ^k_1,\xi ^k_2,y^k_3,\dots ,y^k_k)\in D^{k,\sigma }\left( r_k, \frac{x^k_1}{|x^k_1|},\dots , \frac{x^k_k}{|x^k_k|}\right) \), by (3.3) and (3.6) we get

$$\begin{aligned} f_k(\xi _1^k,\xi _2^k,y^k_3,\ldots ,y^k_k)\ge & {} g^{k,\sigma }\left( r_k,{x^k_1\over |x^k_1|},\ldots ,{x^k_k\over |x^k_k|}\right) =\max _{D^{k,\sigma }} g^{k,\sigma }\\\ge & {} g^{k,\sigma }\left( r_k,\theta ^k_1,\theta ^k_2,\frac{x^k_3}{|x^k_3|},\dots , \frac{x^k_k}{|x^k_k|}\right) =f_k(y^k_1,\dots ,y^k_k) \end{aligned}$$

where the last equality follows from (4.19), in contradiction with (4.21).

In order to prove (4.21), let us set

$$\begin{aligned} {\varepsilon }(x_1,x_2)=f_1(x_1)+f_1(x_2)-f_2(x_1,x_2)\qquad \forall (x_1,x_2)\in D_2 \end{aligned}$$

and

$$\begin{aligned} {\varepsilon }(x_1,\ldots ,x_k)=f_2(x_1,x_2)+f_{k-2}(x_3,\ldots ,x_k) -f_k(x_1,\ldots ,x_k)\qquad \forall (x_1,\ldots ,x_k)\in D_k. \end{aligned}$$

Then we obtain

$$\begin{aligned} f_k(\xi ^k_1,\xi ^k_2,y^k_3,\ldots ,y^k_k)= & {} f_k(y^k_1,y_2^k,y_3^k,\ldots ,y^k_k) -{\varepsilon }(\xi ^k_1,\xi ^k_2)+{\varepsilon }(y^k_1,y_2^k )\nonumber \\&+{\varepsilon }(y^k_1,y_2^k,y_3^k,\ldots ,y^k_k)-{\varepsilon }(\xi ^k_1,\xi ^k_2,y^k_3,\ldots ,y^k_k) \nonumber \\&+f_1(\xi _1^k)+f_1(\xi _2^k)-f_1(y_1^k)-f_1(y_2^k). \end{aligned}$$
(4.22)

Arguing as in the proof of Proposition 4.5 in [11], one can verify that \({\varepsilon }(\xi ^k_1,\xi ^k_2)\), \({\varepsilon }(y^k_1,y_2^k )\), \({\varepsilon }(\xi ^k_1,\xi ^k_2,y^k_3,\ldots ,y^k_k)\) and \({\varepsilon }(y^k_1,y_2^k,y_3^k,\ldots ,y^k_k)\) are positive numbers for all \(k> 2\), that they tend to zero as \(k\rightarrow \infty \), and that there exists a suitable positive constant \({\bar{c}}\) such that

$$\begin{aligned} \liminf _{k\rightarrow \infty }{\varepsilon }(\xi ^k_1,\xi ^k_2)\, e^{{\bar{c}}\, |\xi ^k_1-\xi ^k_2|}> 0. \end{aligned}$$
(4.23)

Moreover, from (4.20) and assumption (1.5) we infer that

$$\begin{aligned} |f_1(\xi _1^k)+f_1(\xi _2^k)-f_1(y_1^k)-f_1(y_2^k)|\le {\tilde{c}} e^{-{\tilde{\eta }}\, {r_k\over 1+2\sigma }} \end{aligned}$$
(4.24)

for a suitable constant \({\tilde{c}}> 0\).

Now, let us prove that (4.18) implies

$$\begin{aligned} \lim _{k\rightarrow \infty }(|y^k_1-y^k_2|-|\xi ^k_1-\xi ^k_2|)=\infty . \end{aligned}$$
(4.25)

First notice that, since \(|\xi _i^k|=|y^k_i|\) for \(i=1,2\) by (4.20), we get, as \((y^k_1,\dots ,y^k_k)\in D^{k,\sigma }\left( r_k,\theta ^k_1,\theta ^k_2,\frac{x^k_3}{|x^k_3|},\dots , \frac{x^k_k}{|x^k_k|}\right) \), that

$$\begin{aligned} |y^k_1-y^k_2|-|\xi ^k_1-\xi ^k_2|= & {} {|y^k_1-y^k_2|^2-|\xi ^k_1-\xi ^k_2|^2\over |y^k_1-y^k_2|+|\xi ^k_1-\xi ^k_2|} \nonumber \\= & {} {2\big (\xi ^k_1\cdot \xi ^k_2-y^k_1\cdot y^k_2\big )\over |y^k_1-y^k_2|+|\xi ^k_1-\xi ^k_2|} \nonumber \\= & {} {2|\xi _1^k|\, |\xi ^k_2|\cdot \left[ {x^k_1\over |x^k_1|}\cdot {x^k_2\over |x^k_2|}-\theta ^k_1\cdot \theta ^k_2\right] \over |y^k_1-y^k_2|+|\xi ^k_1-\xi ^k_2|}\qquad \forall k\ge 2. \end{aligned}$$
(4.26)

Moreover, due to (4.17), we can write

$$\begin{aligned} {x^k_1\over |x^k_1|}\cdot {x^k_2\over |x^k_2|}-\theta ^k_1\cdot \theta ^k_2 = {1\over 2}(|\theta ^k_1-\theta ^k_2|^2-\Gamma _k^2)\ge 0 \end{aligned}$$
(4.27)

by (4.18) for large enough k.
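
For the reader's convenience, (4.27) follows from the elementary identity \(a\cdot b=1-{1\over 2}\, |a-b|^2\), valid for all \(a,b\in S(0,1)\): applying it to both scalar products and using (4.17), we get

$$\begin{aligned} {x^k_1\over |x^k_1|}\cdot {x^k_2\over |x^k_2|}-\theta ^k_1\cdot \theta ^k_2 =\left( 1-{1\over 2}\left| {x^k_1\over |x^k_1|}-{x^k_2\over |x^k_2|}\right| ^2\right) -\left( 1-{1\over 2}\, |\theta ^k_1-\theta ^k_2|^2\right) ={1\over 2}(|\theta ^k_1-\theta ^k_2|^2-\Gamma _k^2). \end{aligned}$$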

Then, by combining (4.26) with (4.27) we get

$$\begin{aligned} |y^k_1-y^k_2|-|\xi ^k_1-\xi ^k_2|=\frac{|y^k_1||y^k_2|(|\theta ^k_1-\theta ^k_2|^2-\Gamma ^2_k)}{|y^k_1-y^k_2|+|\xi ^k_1-\xi ^k_2|}, \end{aligned}$$
(4.28)

because \((y^k_1,\dots ,y^k_k)\in D^{k,\sigma }\left( r_k,\theta ^k_1,\theta ^k_2,\right. \) \(\left. \frac{x^k_3}{|x^k_3|},\dots , \frac{x^k_k}{|x^k_k|}\right) \).

Since \(\theta ^k_1\) and \(\theta ^k_2\) satisfy (4.18), we have \(\lim \limits _{k\rightarrow \infty }|\theta ^k_1-\theta ^k_2|=0\). Moreover, we claim that

$$\begin{aligned} \limsup _{k\rightarrow \infty } {|y^k_1-y^k_2|\over r_k\, |\theta ^k_1-\theta ^k_2|}<\infty . \end{aligned}$$
(4.29)

In fact, arguing by contradiction, assume that (up to a subsequence)

$$\begin{aligned} \lim _{k\rightarrow \infty } {|y^k_1-y^k_2|\over r_k\, |\theta ^k_1-\theta ^k_2|}=\infty \end{aligned}$$
(4.30)

and consider two points \(\psi ^k_1\), \(\psi ^k_2\) in \({\mathbb {R}}^N\) such that \((\psi ^k_1,\psi ^k_2, y^k_3,\ldots ,y^k_k)\in D_k\),

$$\begin{aligned} {\psi ^k_i\over |\psi ^k_i|}={y^k_i\over |y^k_i|}=\theta ^k_i\qquad \text{ for } \ i=1,2 \end{aligned}$$

and

$$\begin{aligned} |\psi ^k_1|^2+|\psi ^k_2|^2=|y^k_1|^2+|y^k_2|^2,\qquad |\,|\psi ^k_1|-|\psi ^k_2|\, |\, \le 4R_{\delta }, \end{aligned}$$

so that \((\psi ^k_1,\psi ^k_2,y^k_3,\dots ,y^k_k)\in D^{k,\sigma }\left( r_k,\theta ^k_1,\theta ^k_2,\frac{x^k_3}{|x^k_3|},\dots , \frac{x^k_k}{|x^k_k|}\right) \).

Then, we can argue as in the proof of (4.12) and we infer from (4.18) and (4.30) that, for k large enough,

$$\begin{aligned} f_k(y^k_1,\ldots ,y^k_k)> f_k(\psi ^k_1,\psi ^k_2,y^k_3,\ldots ,y^k_k) \end{aligned}$$

in contradiction with (4.19). Thus (4.29) holds and, as a consequence, we have also

$$\begin{aligned} \limsup _{k\rightarrow \infty } {|\xi ^k_1-\xi ^k_2|\over r_k\, |\theta ^k_1-\theta ^k_2|}<\infty . \end{aligned}$$
(4.31)

Hence, since (4.28) gives

$$\begin{aligned} \liminf _{k\rightarrow \infty } (|y^k_1-y^k_2|-|\xi ^k_1-\xi ^k_2|)\ge \liminf _{k\rightarrow \infty }{(1+2\sigma )^{-2}\cdot r^2_k|\theta ^k_1-\theta ^k_2|^2\over |y^k_1-y^k_2|+|\xi ^k_1-\xi ^k_2|}, \end{aligned}$$

we obtain (4.25), taking into account (4.29), (4.31) and the fact that (see (4.18))

$$\begin{aligned} \lim _{k\rightarrow \infty }r_k|\theta ^k_1-\theta ^k_2|\ge \lim _{k\rightarrow \infty }r_k\cdot \Gamma _k=\infty . \end{aligned}$$

From (4.25) it follows that

$$\begin{aligned} \lim _{k\rightarrow \infty }{{\varepsilon }(y_1^k,y_2^k)\over {\varepsilon }(\xi ^k_1,\xi ^k_2)}=0. \end{aligned}$$
(4.32)

In an analogous way, from the choice of \(\theta _1^k,\theta _2^k\) it follows that

$$\begin{aligned} \lim _{k\rightarrow \infty } {{\varepsilon }(\xi ^k_1,\xi ^k_2,y_3^k,\ldots ,y_k^k)\over {\varepsilon }(\xi ^k_1,\xi ^k_2)}= \lim _{k\rightarrow \infty } {{\varepsilon }(y_1^k,y_2^k,\ldots ,y_k^k)\over {\varepsilon }(\xi ^k_1,\xi ^k_2)}=0. \end{aligned}$$
(4.33)

Let us recall that \(\xi ^k_1,\xi ^k_2\) depend on the choice of \(\sigma \). Now, we prove that there exists \(\tilde{\sigma }>0\) such that, for all \(\sigma \in ]0,{\tilde{\sigma }}[\),

$$\begin{aligned} \lim _{k\rightarrow \infty }e^{-{\tilde{\eta }}\, {r_k\over 1+2\sigma }}e^{{\bar{c}}\, |\xi ^k_1-\xi ^k_2|}=0. \end{aligned}$$
(4.34)

Let us choose \(\tilde{\sigma }> 0\) small enough so that \({\bar{c}}[(1+2{\tilde{\sigma }})^2-1]< {\tilde{\eta }}\). Then, since \(\Gamma _k\rightarrow 0\), for all \(\sigma \in ]0,{\tilde{\sigma }}[\) we have by (4.17) and (4.20) that

$$\begin{aligned} \limsup _{k\rightarrow \infty }{\bar{c}}\, {1+2\sigma \over r_k} \cdot |\xi ^k_1-\xi ^k_2|\le & {} \bar{c}(1+2\sigma ) \left[ \limsup _{k\rightarrow \infty } \frac{|\, |\xi ^k_1|-|\xi ^k_2|\, |}{r_k}\right] \\\le & {} \bar{c}(1+2\sigma ) [(1+2\sigma )-(1+2\sigma )^{-1}] \\= & {} {\bar{c}}[(1+2\sigma )^2-1]\le {\bar{c}}[(1+2\tilde{\sigma })^2-1]< {\tilde{\eta }} \end{aligned}$$

which implies (4.34). Hence, from (4.23), (4.32), (4.33), (4.24), (4.34) and (4.22) we obtain our claim (4.21), so the proof is complete. \(\square \)

As specified in the next corollary, property (4.16) implies that the points \({x^k_1\over |x^k_1|},\ldots ,{x^k_k\over |x^k_k|}\) tend, as \(k\rightarrow \infty \), to spread over all of S(0, 1) and that the limiting density of their distribution is everywhere positive on S(0, 1).

Corollary 4.3

Assume that all the conditions of Lemma 4.2 are satisfied. Then \(\lim \limits _{k\rightarrow \infty }\Lambda _k\) \(=0\).

Moreover, if \((x^k_1,\dots ,x^k_k)\) is as in Proposition 3.1 and for all \(x\in S(0,1)\) and \(r> 0\) we denote by \(N_k(x,r)\) the number of elements of the set \(\Bigl \{x^k_i\) : \(i\in \{1,\ldots ,k\}\), \({x^k_i\over |x^k_i|}\in B(x,r)\Bigr \}\), then there exists \(c> 0\) such that

$$\begin{aligned} \liminf _{{\varepsilon }\rightarrow 0}\liminf _{k\rightarrow \infty } {N_k(x,{\varepsilon })\over k\cdot {\varepsilon }^{N-1}}\ge c\qquad \forall x\in S(0,1). \end{aligned}$$
(4.35)

Proof

Since \(\lim _{k\rightarrow \infty }\Gamma _k=0\), \(\lim _{k\rightarrow \infty }\Lambda _k=0\) follows directly from (4.16). In order to prove (4.35) we argue by contradiction and assume that there exists a sequence \((x_n)_n\) in S(0, 1) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\liminf _{{\varepsilon }\rightarrow 0}\liminf _{k\rightarrow \infty } {N_{k}(x_n,{\varepsilon })\over k\cdot {\varepsilon }^{N-1}} =0. \end{aligned}$$

Taking into account the definitions of \(\Lambda _k\) and \(\Gamma _k\), we infer, respectively, that

$$\begin{aligned} \inf \left\{ N_{k}(x_n,{\varepsilon })\cdot \left( {\Lambda _{k}\over {\varepsilon }}\right) ^{N-1}\ :\ {\varepsilon }> 0,\ \ k, n\in {\mathbb {N}}\right\} > 0 \end{aligned}$$

and

$$\begin{aligned} \sup \left\{ k\, \Gamma _k^{N-1}\ :\ k\in {\mathbb {N}}\right\} <\infty . \end{aligned}$$

It follows that, for a suitable positive constant c,

$$\begin{aligned} \limsup \limits _{k\rightarrow \infty }{\Lambda _{k}\over \Gamma _{k}}&\ge c\,\lim \limits _{n\rightarrow \infty }\limsup \limits _{{\varepsilon }\rightarrow 0}\limsup \limits _{k\rightarrow \infty } \left[ {k\,{\varepsilon }^{N-1}\over N_{k}(x_n,{\varepsilon })}\right] ^{1\over N-1}\\&=c\,\lim \limits _{n\rightarrow \infty }\left[ \liminf \limits _{{\varepsilon }\rightarrow 0}\liminf \limits _{k\rightarrow \infty } {{N_{k}(x_n,{\varepsilon })}\over k\,{\varepsilon }^{N-1}} \right] ^{-{1\over N-1}}=\infty \end{aligned}$$

in contradiction with (4.16). Thus (4.35) is proved. \(\square \)

Lemma 4.4

Assume that the potential a(x) satisfies conditions (1.3), (1.4), (1.5) and (1.6). Let \({\tilde{\sigma }}\) be as in Lemma 4.2. Then, there exists \({\bar{\sigma }}\in ]0,{\tilde{\sigma }}[\) such that, for all \(k\in {\mathbb {N}}\), \(k\ge 2\) and for any k–tuple \((x^k_1,\dots ,x^k_k)\) and any \(r_k> 0\) as in Proposition 3.1 with \(\sigma ={\bar{\sigma }}\), the following relations hold true

$$\begin{aligned} \lim _{k\rightarrow \infty }{1\over r_k}\min \{|x_i^k|\ :\ i=1,\ldots ,k\} = \lim _{k\rightarrow \infty }{1\over r_k}\max \{|x_i^k|\ :\ i=1,\ldots ,k\}= 1. \end{aligned}$$

Proof

Let us consider a sequence \(({\bar{\sigma }}_n)_n\) in \(]0,{\tilde{\sigma }}[\) such that \(\lim _{n\rightarrow \infty }{\bar{\sigma }}_n=0\). We shall prove that there exists \({\bar{n}}\in {\mathbb {N}}\) such that the assertion of the lemma holds for \({\bar{\sigma }}={\bar{\sigma }}_{{\bar{n}}}\). Let us recall that \((x_1^k,\ldots ,x_k^k)\) and \(r_k\) depend also on \(\sigma \). Therefore, if \(\sigma ={\bar{\sigma }}_n\), in this proof we write, more explicitly, \((x_1^{k,{\bar{\sigma }}_n},\ldots ,x_k^{k,{\bar{\sigma }}_n})\) and \(r_{k,{\bar{\sigma }}_n}\) \(\forall n\in {\mathbb {N}}\).

Since

$$\begin{aligned} \min \{|x_i^{k,{\bar{\sigma }}_n}|\ :\ i=1,\ldots ,k\}\le r_{k,{\bar{\sigma }}_n}\le \max \{|x_i^{k,{\bar{\sigma }}_n}|\ :\ i=1,\ldots ,k\},\quad \forall k\in {\mathbb {N}},\ \forall n\in {\mathbb {N}},\nonumber \\ \end{aligned}$$
(4.36)

we have to prove that

$$\begin{aligned} \limsup _{k\rightarrow \infty }{1\over r_{k,{\bar{\sigma }}_n}}\max \{|x_i^{k,{\bar{\sigma }}_n}|\ :\ i=1,\ldots ,k\}\le 1 \le \liminf _{k\rightarrow \infty }{1\over r_{k,{\bar{\sigma }}_n}}\min \{|x_i^{k,{\bar{\sigma }}_n}|\ :\ i=1,\ldots ,k\} \end{aligned}$$

for some \(n\in {\mathbb {N}}\). Arguing by contradiction, assume that, for all \(n\in {\mathbb {N}}\),

$$\begin{aligned} \liminf _{k\rightarrow \infty }{1\over r_{k,{\bar{\sigma }}_n}}\min \{|x_i^{k,{\bar{\sigma }}_n}|\ :\ i=1,\ldots ,k\} < 1 \end{aligned}$$
(4.37)

or

$$\begin{aligned} \limsup _{k\rightarrow \infty }{1\over r_{k,{\bar{\sigma }}_n}}\max \{|x_i^{k,{\bar{\sigma }}_n}|\ :\ i=1,\ldots ,k\}> 1. \end{aligned}$$
(4.38)

Let us consider, for example, the case in which (4.38) is true (similar arguments work when (4.37) holds).

Then, consider the sequence of positive numbers \((\sigma _n)_n\) such that

$$\begin{aligned} 1+2\sigma _n=\limsup _{k\rightarrow \infty }{1\over r_{k,{\bar{\sigma }}_n}}\max \{|x_i^{k,{\bar{\sigma }}_n}|\ :\ i=1,\ldots ,k\} \qquad \forall n\in {\mathbb {N}}. \end{aligned}$$
(4.39)

Notice that (4.39) implies \(\sigma _n\le \bar{\sigma }_n\) for all \(n\in {\mathbb {N}}\) and therefore \(\lim _{n\rightarrow \infty }\sigma _n=0\). Now, for all \(n\in {\mathbb {N}}\), consider the set

$$\begin{aligned} V_n=\{\tau \in {\mathbb {Z}}^N\ :\ (\tau +[-1,1]^N)\cap S(0,1/\sigma _n)\ne \emptyset \} \end{aligned}$$
(4.40)

and denote by \(\nu _n\) the number of elements of \(V_n\). It is clear that \(S(0,1/\sigma _n)\subseteq \cup _{\tau \in V_n}(\tau +[-1,1]^N)\), that \(\nu _n< \infty \) \(\forall n\in {\mathbb {N}}\) and that \(\lim _{n\rightarrow \infty }\nu _n=\infty \).

Now, consider a sequence \((\gamma _n)_n\) such that \(\gamma _n> 0\) \(\forall n\in {\mathbb {N}}\) and

$$\begin{aligned} \lim _{n\rightarrow \infty }{\nu _n\over \gamma _n}=0. \end{aligned}$$
(4.41)

From (4.39) it follows that there exists a sequence \((k_n)_n\) in \({\mathbb {N}}\) such that

$$\begin{aligned} {1\over r_{k_n,{\bar{\sigma }}_n}}\max \{|x_i^{k_n,{\bar{\sigma }}_n}|\ :\ i=1,\ldots ,k_n\}> { 1+\sigma _n},\quad \forall n\in {\mathbb {N}}\end{aligned}$$
(4.42)

and in addition

$$\begin{aligned} \lim _{n\rightarrow \infty }{1\over k_n}\, \left( {\gamma _n\over \sigma _n}\right) ^{N-1}=0. \end{aligned}$$
(4.43)

Taking into account Corollary 4.3, up to a subsequence we have

$$\begin{aligned} \lim _{{\varepsilon }\rightarrow 0}\lim _{n\rightarrow \infty }{N_{k_n}(x,{\varepsilon })\over k_n{\varepsilon }^{N-1}}\ge c> 0\qquad \forall x\in S(0,1) \end{aligned}$$

(where c is a positive constant independent of x). Moreover, taking also into account (4.43), we infer that

$$\begin{aligned} \liminf _{n\rightarrow \infty }\inf \left\{ N_{k_n}\left( x,r\, {\sigma _n\over \gamma _n}\right) \ :\ x\in S(0,1)\right\} = \infty \qquad \forall r> 0. \end{aligned}$$

Now, let us set

$$\begin{aligned} M_n={\gamma _n\over \sigma _n r_{k_n,{\bar{\sigma }}_n}}\, \max _{\tau \in V_n}\max \bigg \{ |x_i^{k_n,{\bar{\sigma }}_n}|-|x_j^{k_n,{\bar{\sigma }}_n}|\ :\ &i,j=1,\ldots ,k_n, \\ &{x_i^{k_n,{\bar{\sigma }}_n}\over |x_i^{k_n,{\bar{\sigma }}_n} |} \text{ and } {x_j^{k_n,{\bar{\sigma }}_n}\over |x_j^{k_n,{\bar{\sigma }}_n} |}\ \text{ in } \sigma _n(\tau +[-1,1]^N)\bigg \} . \end{aligned}$$

From the definition of the function \(g^{k,\sigma }\) and the properties of the points \(x_1^{k_n,{\bar{\sigma }}_n}, \ldots ,\) \(x_{k_n}^{k_n,{\bar{\sigma }}_n}\) described in Remark 3.3 and in Corollary 4.3, since the curvature of the sphere \(S(0,1/\sigma _n)\) tends to zero as \(n\rightarrow \infty \), we infer that \(\lim _{n\rightarrow \infty }M_n=0\); otherwise, arguing as in the proofs of Proposition 4.1 and Lemma 4.2, we would obtain

$$\begin{aligned} f_{k_n}(x^{k_n,{\bar{\sigma }}_n}_1,\ldots ,x^{k_n,{\bar{\sigma }}_n}_{k_n}) > f_{k_n}\left( r_{k_n}\, {x^{k_n,{\bar{\sigma }}_n}_1\over |x^{k_n,{\bar{\sigma }}_n}_1 |},\ldots ,r_{k_n}\, {x^{k_n,{\bar{\sigma }}_n}_{k_n}\over |x^{k_n,{\bar{\sigma }}_n}_{k_n}|}\right) \end{aligned}$$

for n large enough, in contradiction with the fact that

$$\begin{aligned} f_{k_n}(x^{k_n,{\bar{\sigma }}_n}_1,\ldots ,x^{k_n,{\bar{\sigma }}_n}_{k_n}) =g^{k_n,{\bar{\sigma }}_n}\left( r_{k_n}, {x^{k_n,{\bar{\sigma }}_n}_1\over |x^{k_n,{\bar{\sigma }}_n}_1 |},\ldots , {x^{k_n,{\bar{\sigma }}_n}_{k_n}\over |x^{k_n,{\bar{\sigma }}_n}_{k_n}|}\right) . \end{aligned}$$

It follows that

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }{1\over \sigma _n r_{k_n,{\bar{\sigma }}_n}}\max \{||x_i^{k_n,{\bar{\sigma }}_n}|-r_{k_n,{\bar{\sigma }}_n}|\ :\ i=1,\ldots ,k_n\}&\le \limsup \limits _{n\rightarrow \infty }{1\over \sigma _n r_{k_n,{\bar{\sigma }}_n}}\max \{|x_i^{k_n,{\bar{\sigma }}_n}|-|x_j^{k_n,{\bar{\sigma }}_n}|\ :\ i,j=1,\ldots ,k_n\}\\ &\le \lim \limits _{n\rightarrow \infty }{\nu _n M_n\over \gamma _n} = 0 \end{aligned}$$

(where the last inequality holds because \(S(0,1/\sigma _n)\) is a connected set).

It is clear that we have a contradiction because from (4.42) we obtain

$$\begin{aligned} {1\over \sigma _n r_{k_n,{\bar{\sigma }}_n}}\,\max \left\{ \left| |x_i^{k_n,{\bar{\sigma }}_n}|-r_{k_n,{\bar{\sigma }}_n}\right| \ :\ i=1,\ldots ,k_n\right\} > 1\quad \forall n\in {\mathbb {N}}, \end{aligned}$$

which implies

$$\begin{aligned} \liminf _{n\rightarrow \infty }{1\over \sigma _n r_{k_n,{\bar{\sigma }}_n}}\max \{||x_i^{k_n,{\bar{\sigma }}_n}|-r_{k_n,{\bar{\sigma }}_n}|\ :\ i=1,\ldots ,k_n\}\ge 1. \end{aligned}$$

Thus, the proof is complete. \(\square \)

Remark 4.5

Lemma 4.4 allows us to say that, for k large enough, the unilateral constraints

$$\begin{aligned} (1+2{\bar{\sigma }})^{-1}r_k\le |x_i|\le (1+2{\bar{\sigma }})r_k\qquad \forall i\in \{1,\ldots ,k\}, \end{aligned}$$

which we used to define \(D^{k,{\bar{\sigma }}}\), do not give rise to any variational inequality.

Moreover, exploiting condition (1.6), we can also prove the stronger result presented in Lemma 4.6 and Corollary 4.7.

Lemma 4.6

For all \(\rho > 0\), fix \(k_\rho \in {\mathbb {N}}\), \(k_\rho \ge 2\), and \(\theta _{1,\rho },\ldots ,\theta _{k_\rho ,\rho }\) in S(0, 1) satisfying

$$\begin{aligned} \lim _{\rho \rightarrow \infty } \rho \min \{|\theta _{i,\rho }-\theta _{j,\rho }|\ :\ i,j\in \{1,\ldots ,k_\rho \},\ i\ne j\}=\infty \end{aligned}$$
(4.44)

and, for \(\rho > 0\) large enough, choose \((x_{1,\rho },\ldots ,x_{k_\rho ,\rho })\in D^{k_\rho ,\sigma }(\rho ,\theta _{1,\rho },\ldots ,\theta _{k_\rho ,\rho })\) such that

$$\begin{aligned} f_{k_\rho }(x_{1,\rho },\ldots ,x_{k_\rho ,\rho })= g^{k_\rho ,\sigma }(\rho ,\theta _{1,\rho },\ldots ,\theta _{k_\rho ,\rho }). \end{aligned}$$
(4.45)

Then, if the potential a(x) satisfies conditions (1.3), (1.4), (1.5) and (1.6), we have

$$\begin{aligned} \lim _{\rho \rightarrow \infty } [\max \{|x_{i,\rho }|\ :\ i=1,\ldots ,k_\rho \}-\min \{ |x_{i,\rho }|\ :\ i=1,\ldots ,k_\rho \}]=0. \end{aligned}$$
(4.46)

Proof

Let us consider the function \(M_a:{\mathbb {R}}^+\rightarrow {\mathbb {R}}\) defined (using the function \(\check{u}\) in (1.6)) by setting

$$\begin{aligned} M_a(\rho )=\int _{S(0,1)}d\sigma _\theta \int _{{\mathbb {R}}^N}a(x)\check{u}^2(x-\rho \theta )\, dx\qquad \forall \rho \ge 0. \end{aligned}$$

Then, condition (1.6) implies that \({d^2\, \over d\rho ^2}M_a(\rho )> 0\) for \(\rho > 0\) large enough. Moreover, we have

$$\begin{aligned}&\lim _{\rho \rightarrow \infty }M_a(\rho )=a_\infty \int _{{\mathbb {R}}^N}\check{u}^2(x)\, dx\, H_{N-1}(S(0,1)),\\&\lim _{\rho \rightarrow \infty }{d\,\over d\rho }M_a(\rho )=0 \end{aligned}$$

and \({d\, \over d\rho }M_a(\rho )< 0\) for \(\rho > 0\) large enough.
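
A minimal sketch of the first limit, assuming that one may pass to the limit under the integral sign (for instance by dominated convergence, using (1.3) and the decay of \(\check{u}\)): after the change of variable \(y=x-\rho \theta \) we can write

$$\begin{aligned} \lim _{\rho \rightarrow \infty }M_a(\rho )=\lim _{\rho \rightarrow \infty }\int _{S(0,1)}d\sigma _\theta \int _{{\mathbb {R}}^N}a(y+\rho \theta )\,\check{u}^2(y)\, dy =a_\infty \, H_{N-1}(S(0,1))\int _{{\mathbb {R}}^N}\check{u}^2(y)\, dy, \end{aligned}$$

since \(a(y+\rho \theta )\rightarrow a_\infty \) as \(\rho \rightarrow \infty \) for every fixed \(y\in {\mathbb {R}}^N\) and \(\theta \in S(0,1)\), by (1.3).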

Taking into account that

$$\begin{aligned} {1\over k}\,\sum _{i=1}^k\rho _i\le \left[ {1\over k}\,\sum _{i=1}^k\rho _i^2\right] ^{1/2}\qquad \forall k\in {\mathbb {N}},\ \forall (\rho _1,\ldots ,\rho _k)\in {\mathbb {R}}^k \end{aligned}$$

and that the equality holds if and only if \(\rho _1=\rho _2=\ldots =\rho _k\ge 0\), it follows that, for \(\rho > 0\) large enough, since \((x_{1,\rho },\dots ,x_{k_\rho ,\rho })\in D^{k_\rho ,\sigma }(\rho ,\theta _{1,\rho },\dots ,\theta _{k_\rho ,\rho })\) and so \(\rho =\left( \frac{1}{k_\rho }\sum _{i=1}^{k_\rho }|x_{i,\rho }|^2\right) ^{1/2}\), by applying the above inequality with \(\rho _i=|x_{i,\rho }|\) we get, by convexity,

$$\begin{aligned} M_a(\rho )\le M_a\left( {1\over k_\rho }\,\sum _{i=1}^{k_\rho }|x_{i,\rho }|\right) \le {1\over k_\rho }\, \sum _{i=1}^{k_\rho }M_a(|x_{i,\rho }|), \end{aligned}$$
(4.47)

where the equalities hold if and only if \(|x_{i,\rho }|=\rho \) for \(i=1,\ldots ,k_\rho \).
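
For completeness, the inequality between the arithmetic and the quadratic mean stated before (4.47) is a direct consequence of the Cauchy–Schwarz inequality:

$$\begin{aligned} {1\over k}\,\sum _{i=1}^k\rho _i\le {1\over k}\,\sum _{i=1}^k|\rho _i|\le {1\over k}\left( \sum _{i=1}^k\rho _i^2\right) ^{1/2}\left( \sum _{i=1}^k 1\right) ^{1/2}=\left[ {1\over k}\,\sum _{i=1}^k\rho _i^2\right] ^{1/2}, \end{aligned}$$

with equality throughout if and only if the \(\rho _i\) are all equal and nonnegative.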

In order to prove (4.46), we argue by contradiction and assume that (up to a subsequence)

$$\begin{aligned} \lim _{\rho \rightarrow \infty }[\max \{|x_{i,\rho }|\ :\ i=1,\ldots ,k_\rho \}-\min \{ |x_{i,\rho }|\ :\ i=1,\ldots ,k_\rho \}]> 0. \end{aligned}$$

Then, for \(\rho > 0\) large enough, all the inequalities in (4.47) are strict inequalities and, as a consequence, we have

$$\begin{aligned} \sum _{i=1}^{k_\rho }M_a(|x_{i,\rho }|)-k_\rho M_a(\rho )> 0. \end{aligned}$$

Notice that (4.44) implies \((\rho \theta _{1,\rho },\rho \theta _{2,\rho },\ldots ,\rho \theta _{k_\rho ,\rho })\in D_{k_\rho }\) for \(\rho > 0\) large enough. Moreover, for every \(u_\rho \in S_{x_{1,\rho },\ldots ,x_{k_\rho ,\rho }}\) and \({\bar{u}}_\rho \in S_{\rho \theta _{1,\rho },\ldots ,\rho \theta _{k_\rho ,\rho }}\) such that \(E(u_\rho )=f_{k_\rho }(x_{1,\rho },\ldots ,\) \(x_{k_\rho ,\rho })\) and \(E({\bar{u}}_\rho )=f_{k_\rho }(\rho \theta _{1,\rho } ,\ldots ,\rho \theta _{k_\rho ,\rho })\), taking into account conditions (1.4), (1.5) and (1.6), we obtain

$$\begin{aligned} \liminf _{\rho \rightarrow \infty } \frac{E(u_\rho )-E({\bar{u}}_\rho )}{\sum _{i=1}^{k_\rho }M_a(|x_{i,\rho }|)-k_\rho M_a(\rho )} \ge c_\infty , \end{aligned}$$
(4.48)

where \(c_\infty \) is a positive constant. In fact, taking into account (4.44), for \(\rho > 0\) large enough we can set

$$\begin{aligned} d_\rho ={1\over 4(1+2\sigma )}\, \rho \min \{|\theta _{i,\rho }-\theta _{j,\rho }|\ :\ i,j\in \{1,\ldots , k_\rho \},\ i\ne j\}> 0 \end{aligned}$$

and we can consider \(k_\rho \) pairwise disjoint subsets of \({\mathbb {R}}^N\), \(\Omega _{1,\rho },\ldots ,\Omega _{k_\rho ,\rho }\), such that

$$\begin{aligned} \bigcup _{i=1}^{k_\rho }\Omega _{i,\rho }={\mathbb {R}}^N \ \text{ and } \ \Omega _{i,\rho }\supseteq B(x_{i,\rho },d_\rho )\cup B(\rho \theta _{i,\rho },d_\rho )\qquad \forall i\in \{1,\ldots ,k_\rho \}. \end{aligned}$$

Notice that \(\lim \limits _{\rho \rightarrow \infty }d_\rho =\infty \) because of (4.44). Taking into account the asymptotic behaviour as \(\rho \rightarrow \infty \) of the functions \(u_\rho \) and \({\bar{u}}_\rho \) (see Proposition 2.9) and of the solution w to the limit problem (1.7), from conditions (1.4), (1.5) and (1.6) we infer that there exist \(\rho _\infty > 0\) and \(c_\infty > 0\) such that

$$\begin{aligned} E_{\Omega _{i,\rho }}(u_\rho )- E_{\Omega _{i,\rho }}({\bar{u}}_\rho ) \ge c_\infty [M_a(|x_{i,\rho }|)- M_a(\rho ) ] \qquad \forall \rho > \rho _\infty ,\ \forall i\in \{1,\ldots ,k_\rho \}, \end{aligned}$$

where \(E_{\Omega _{i,\rho }}(u_\rho )\) is defined by

$$\begin{aligned} E_{\Omega _{i,\rho }}(u_\rho )={1\over 2}\int _{\Omega _{i,\rho }}(|Du_\rho |^2+a(x)u^2_\rho )\, dx-{1\over p+1}\, \int _{\Omega _{i,\rho }} |u_\rho |^{p+1}dx \end{aligned}$$

and \(E_{\Omega _{i,\rho }}({\bar{u}}_\rho )\) is defined in an analogous way. In fact, for all sequences \((\rho _n)_n\) in \((0,+\infty )\) and \((i_n)_n\) in \({\mathbb {N}}\) such that \(\lim \limits _{n\rightarrow \infty }\rho _n=\infty \), \(1\le i_n\le k_{\rho _n}\), \(|x_{i_n,\rho _n}|\ne \rho _n\) \(\forall n\in {\mathbb {N}}\) (so that \(M_a(|x_{i_n,\rho _n}|)\ne M_a(\rho _n)\) for all n large enough), we have

$$\begin{aligned}&\lim _{n\rightarrow \infty } \frac{E_{\Omega _{i_n,\rho _n}}(u_{\rho _n})- E_{\Omega _{i_n,\rho _n}}({\bar{u}}_{\rho _n})}{M_a(|x_{i_n,\rho _n}|)-M_a(\rho _n)}\\&=\lim _{n\rightarrow \infty } \frac{\int _{\Omega _{i_n,\rho _n}}a(x)w^2(x-x_{i_n,\rho _n})\, dx - \int _{\Omega _{i_n,\rho _n}}a(x)w^2\left( x-\rho _n\frac{x_{i_n,\rho _n}}{|x_{i_n,\rho _n}|}\right) \, dx}{M_a(|x_{i_n,\rho _n}|)-M_a(\rho _n)}\\&= \left[ H_{N-1}(S(0,1))\int _{{\mathbb {R}}^N}\check{u}^2(x)\, dx\right] ^{-1} \int _{{\mathbb {R}}^N}w^2(x)\,dx. \end{aligned}$$

As a consequence, we obtain

$$\begin{aligned} \liminf _{\rho \rightarrow \infty } \frac{\sum _{i=1}^{k_\rho }[E_{\Omega _{i,\rho }}(u_\rho )- E_{\Omega _{i,\rho }}({\bar{u}}_\rho )]}{\sum _{i=1}^{k_\rho }[M_a(|x_{i,\rho }|)-M_a(\rho )]} \ge c_\infty , \end{aligned}$$

that is (4.48). It follows that \(E(u_\rho )> E({\bar{u}}_\rho )\) for \(\rho > 0\) large enough, in contradiction with the assumption (4.45). So (4.46) holds and the proof is complete. \(\square \)

Corollary 4.7

Let the potential a(x) satisfy all the assumptions of Theorem 1.1. For all \(k\ge 2\), let \((x^k_1,\ldots ,x^k_k)\) and \(r_k\) satisfy the properties described in Proposition 3.1. Then,

$$\begin{aligned} \lim _{k\rightarrow \infty } [\max \{|x_{i}^k|\ :\ i=1,\ldots ,k \}-\min \{ |x_{i}^k|\ :\ i=1,\ldots ,k \}]=0. \end{aligned}$$

The proof follows directly from Lemma 4.6 taking into account that \(r_k\cdot \Gamma _k\rightarrow \infty \) as \(k\rightarrow \infty \) (see (4.13) and (4.15)).

5 Proof of the main result and final remarks

Having fixed \(\sigma =\bar{\sigma }\) as in Lemma 4.4, let \((x^k_1,\dots ,x^k_k)\) and \(r_k\) be as in Proposition 3.1. Let \(\lambda _1^k\), ..., \(\lambda _k^k\) be the Lagrange multipliers corresponding to a minimizing function \(u_k\) for the energy functional E in \(S_{x_1^k,\ldots ,x_k^k}\) (see (2.16)). In order to prove that \(u_k\) is a positive solution of problem (1.1), it remains to show that, for k large enough, \(\lambda _i^k=0\) \(\forall i\in \{1,\ldots ,k\}\).

Proposition 5.1

Assume that \(\sigma ={\bar{\sigma }}\) (see Lemma 4.4). Let \((x^k_1,\ldots ,x^k_k)\) and \(r_k\) be as in Proposition 3.1. Let \(u_k\) be a minimizing function for the energy functional E in \(S_{x^k_1,\ldots ,x^k_k}\) and \(\lambda ^k_1,\ldots ,\lambda ^k_k\) the corresponding Lagrange multipliers (see Proposition 2.6). Then, there exists \({\bar{k}}'\in {\mathbb {N}}\) such that, for all \(k\ge {\bar{k}}'\),

$$\begin{aligned} {{\lambda }^k_i\cdot x^k_i\over |x^k_i|^2}\int _{B(x^k_i,R_{\delta })}[u^{\delta }_k(x)]^2dx=\mu _k\qquad \forall i\in \{1,\ldots ,k\} \end{aligned}$$
(5.1)

for a suitable \(\mu _k\in {\mathbb {R}}\).

Proof

Notice that every function \(u_k\in S_{x^k_1,\ldots ,x^k_k}\), such that \(E(u_k)=f_k(x^k_1,\ldots ,x^k_k)\), satisfies

$$\begin{aligned} E(u_k)=\min \Big \{ E(u)\ :\ & u^{\delta }\in H^1_0\big (\cup _{i=1}^k B(x^k_i,R_{\delta })\big ),\ u\in S_{y_1,\ldots ,y_k},\nonumber \\ &\left[ {1\over k}\sum _{i=1}^k|y_i|^2\right] ^{1/2}=r_k,\ y_i\in B(x^k_i,R_{\delta }),\nonumber \\ &(1+2{\bar{\sigma }})^{-1}r_k\le |y_i|\le (1+2{\bar{\sigma }})r_k,\ {y_i\over |y_i|}={x^k_i\over |x^k_i|}\ \text{ for } i=1,\ldots ,k\Big \}\nonumber \\ \end{aligned}$$
(5.2)

because, since \(f_k(x_1^k,\ldots ,x_k^k)=g^{k,\sigma }\left( r_k, {x_1^k\over |x_1^k|},\ldots ,{x_k^k\over |x_k^k|}\right) \), we have \(E(u_k)=f_k(x_1^k,\ldots ,x_k^k )\) \(\le f_k(y_1,\ldots ,y_k)\le E(u)\), \(\forall u\in S_{y_1,\ldots ,y_k}\) with \(y_1,\ldots ,y_k\) as in (5.2).

Moreover, Lemma 4.4 implies that there exists \({\bar{k}}'\in {\mathbb {N}}\) such that, for all \(k\ge {\bar{k}}'\),

$$\begin{aligned} (1+2{\bar{\sigma }})^{-1}r_k< |x_i^k| < (1+2{\bar{\sigma }})r_k \qquad \forall i\in \{1,\ldots ,k\}, \end{aligned}$$
(5.3)

while, by (2.19), for \(i=1,\ldots ,k\), there exists \(w_{k,i}\in H^1_0(B(x^k_i, R_{\delta }))\) such that (see (2.10)) \(\beta _i'(u_k)[w_{k,i}]=x^k_i\) (as one can verify by direct computation). This means that the unilateral constraints used to define \(S_{x^k_1,\ldots ,x^k_k}\) and \(D^{k,\sigma }\) do not give rise to any variational inequality.

By (5.3), the only constraint that is active in the minimization problem (5.2) is \(\sum _{i=1}^k|\beta _i(u)|^2=kr_k^2\). Hence we get the existence of a Lagrange multiplier \(\mu _k\in {\mathbb {R}}\), \(k\ge {\bar{k}}'\), such that

$$\begin{aligned} E'(u_k)[w_{k,i}] ={\mu _k\over 2}\, \beta _i(u_k)\cdot \beta '_i(u_k)[w_{k,i}]\ ={\mu _k\over 2}\, x^k_i\cdot \beta '_i(u_k)[w_{k,i}]\qquad \forall i\in \{1,\ldots ,k\}.\nonumber \\ \end{aligned}$$
(5.4)

On the other hand, from (2.10) and (2.16) we obtain

$$\begin{aligned} E'(u_k)[w_{k,i}]={1\over 2}\,\lambda ^k_i\cdot \beta '_i(u_k)[w_{k,i}] \int _{B(x^k_i,R_{\delta })}[u^{\delta }_k(x)]^2dx \end{aligned}$$

which, combined with (5.4), implies

$$\begin{aligned} \left[ {\lambda ^k_i\over 2} \int _{B(x^k_i,R_{\delta })}[u^{\delta }_k(x)]^2dx-{\mu _k\over 2}\, x^k_i\right] \cdot \beta '_i(u_k)[w_{k,i}]=0, \end{aligned}$$

that is (5.1). \(\square \)

Remark 5.2

In the proofs of the next propositions, when we apply Lemma 2.1, we obtain some integrals of the form

$$\begin{aligned} \int _{{\mathbb {R}}^N}w^{\delta }(x)\, (D w(x)\cdot \tau )\, x\, dx \end{aligned}$$

where w is the ground state solution to the limit problem (1.7) and \(\tau \in S(0,1)\). Let us remark that

$$\begin{aligned} \int _{{\mathbb {R}}^N} w^{\delta }(x)\, (D w(x)\cdot \tau )\, x\, dx= & {} {1\over 2}\int _{{\mathbb {R}}^N} (D[w^{\delta }(x)]^2\cdot \tau )\, x\, dx\nonumber \\= & {} -{\tau \over 2}\int _{{\mathbb {R}}^N}[w^{\delta }(x)]^2 dx, \end{aligned}$$
(5.5)

as one can verify by direct computation.
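
A sketch of the second equality, assuming (as is the case here, since w decays at infinity) that \([w^{\delta }]^2\) decays fast enough for the boundary terms in the integration by parts to vanish: for every \(i,j\in \{1,\ldots ,N\}\),

$$\begin{aligned} \int _{{\mathbb {R}}^N}{\partial \over \partial x_j}\big ([w^{\delta }(x)]^2\big )\, x_i\, dx =-\int _{{\mathbb {R}}^N}[w^{\delta }(x)]^2\, {\partial x_i\over \partial x_j}\, dx =\left\{ \begin{array}{ll} -\int _{{\mathbb {R}}^N}[w^{\delta }(x)]^2 dx &\text{ if } i=j,\\ 0 &\text{ if } i\ne j, \end{array}\right. \end{aligned}$$

so that the i-th component of \(\int _{{\mathbb {R}}^N} (D[w^{\delta }(x)]^2\cdot \tau )\, x\, dx\) equals \(-\tau _i\int _{{\mathbb {R}}^N}[w^{\delta }(x)]^2 dx\).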

Proposition 5.3

Assume that \(\sigma ={\bar{\sigma }}\) (see Lemma 4.4). For all \(k\ge 2\) let \((x^k_1,\ldots ,x^k_k)\) and \(r_k\) be as in Proposition 3.1. Let \(u_k\) be a minimizing function for the energy functional E in \(S_{x^k_1,\ldots ,x^k_k}\) and \(\lambda ^k_1,\ldots ,\lambda ^k_k\) the corresponding Lagrange multipliers (see Proposition 2.6). Then, there exists \({\bar{k}}''\in {\mathbb {N}}\) such that

$$\begin{aligned} {\lambda }^k_i\cdot x^k_i=0\qquad \forall k\ge {\bar{k}}'',\ \forall i\in \{1,\ldots ,k\}. \end{aligned}$$

Proof

From Proposition 5.1 it follows that there exists \({\bar{k}}'\in {\mathbb {N}}\) such that (5.1) holds for all \(k\ge {\bar{k}}'\). Thus, it remains to show that there exists \({\bar{k}}''\ge {\bar{k}}'\) such that \(\mu _k=0\) \(\forall k\ge {\bar{k}}''\). Arguing by contradiction, assume that there exists a sequence \((k_n)_n\) in \({\mathbb {N}}\) such that \(\lim _{n\rightarrow \infty }k_n=\infty \) and \(\mu _{k_n}\ne 0\) \(\forall n\in {\mathbb {N}}\). Up to a subsequence, \(|\mu _{k_n}|^{-1}\mu _{k_n}\rightarrow {\bar{\mu }}\) as \(n\rightarrow \infty \), for a suitable \({\bar{\mu }}\in \{-1,1\}\). Now, choose a sequence \(({\bar{{\varepsilon }}}_n)_n\) of positive numbers such that \(\lim _{n\rightarrow \infty }{\bar{{\varepsilon }}}_n/\mu _{k_n}=0\) (notice that, as a consequence, \(\lim _{n\rightarrow \infty }{\bar{{\varepsilon }}}_n=0\) because \(\lim _{n\rightarrow \infty }\mu _{k_n}=0\) as follows from (2.21), (4.1), (4.2) and (5.1)). Then, setting \(\rho _n=(r^2_{k_n}+{\bar{\mu }}\, {\bar{{\varepsilon }}}_n)^{1/2}\) \(\forall n\in {\mathbb {N}}\), we notice that \(\left( \rho _n,{x^{k_n}_1\over |x^{k_n}_1 |},\ldots , {x^{k_n}_{k_n}\over |x^{k_n}_{k_n} |}\right) \in D^{k_n,{\bar{\sigma }}}\) for n large enough and, due to (3.6),

$$\begin{aligned} g^{k_n,{\bar{\sigma }}} \left( \rho _n,{x^{k_n}_1\over |x^{k_n}_1 |},\ldots , {x^{k_n}_{k_n}\over |x^{k_n}_{k_n} |}\right) \le g^{k_n,{\bar{\sigma }}} \left( r_{k_n},{x^{k_n}_1\over |x^{k_n}_1 |},\ldots , {x^{k_n}_{k_n}\over |x^{k_n}_{k_n} |}\right) . \end{aligned}$$

Let us choose \(({\bar{y}}^{k_n}_1,\ldots ,{\bar{y}}^{k_n}_{k_n})\) in \(D^{k_n,\bar{\sigma }}\left( \rho _n,\frac{x^{k_n}_1}{|x^{k_n}_1|},\dots ,\frac{x^{k_n}_{k_n}}{|x^{k_n}_{k_n}|}\right) \) such that

$$\begin{aligned} f_{k_n}({\bar{y}}^{k_n}_1,\ldots ,{\bar{y}}^{k_n}_{k_n} )= g^{k_n,{\bar{\sigma }}}\left( \rho _n,{ x^{k_n}_1\over | x^{k_n}_1|}, \ldots , { x^{k_n}_{k_n}\over | x^{k_n}_{k_n}|}\right) . \end{aligned}$$
(5.6)

Then, we have

$$\begin{aligned} f_{k_n}({\bar{y}}^{k_n}_1,\ldots ,{\bar{y}}^{k_n}_{k_n} )\le f_{k_n}( x^{k_n}_1,\ldots , x^{k_n}_{k_n} ) \end{aligned}$$

which implies

$$\begin{aligned} E({\bar{v}}_n)\le E(u_{k_n}) \end{aligned}$$
(5.7)

for every \({\bar{v}}_n\in S_{{\bar{y}}^{k_n}_1,\ldots ,{\bar{y}}^{k_n}_{k_n} }\) such that \(E({\bar{v}}_n)=f_{k_n}({\bar{y}}^{k_n}_1,\ldots ,\) \({\bar{y}}^{k_n}_{k_n} )\). Taking into account Lemma 4.6 and Corollary 4.7, arguing as in the proof of Theorem 1.1 in [12], from (5.6) we infer that

$$\begin{aligned} \liminf _{n\rightarrow \infty }{E({\bar{v}}_n)-E(u_{k_n})\over {\bar{{\varepsilon }}}_n|\mu _{k_n}|\, k_n}\ge \liminf _{n\rightarrow \infty }{1\over {\bar{{\varepsilon }}}_n|\mu _{k_n}|\, k_n}\cdot \sum _{i=1}^{k_n}C_{n,i} \end{aligned}$$

where, since \(E(u_{k_n})=f_{k_n}(x^{k_n}_1,\dots ,x^{k_n}_{k_n})\),

$$\begin{aligned} C_{n,i}=\int _{B(x_i^{k_n},R_\delta )}u^{\delta }_{k_n}(x)\, [{\bar{v}}_n(x)-u_{k_n}(x)]\, [\lambda _i^{k_n}\cdot (x-x_i^{k_n})]\, dx. \end{aligned}$$
(5.8)

For every \(n\in {\mathbb {N}}\), let us choose \(i_n\in \{1,\ldots ,k_n\}\) and assume that (up to a subsequence) \({x_{i_n}^{k_n}\over | x_{i_n}^{k_n}|}\rightarrow {\bar{x}}\in S(0,1)\) and \({1\over {\bar{{\varepsilon }}}_n}({\bar{y}}_{i_n}^{k_n}-x_{i_n}^{k_n})\cdot x_{i_n}^{k_n}\rightarrow {\bar{c}}\in {\mathbb {R}}\) as \(n\rightarrow \infty \). Taking into account Lemma 2.1, we obtain

$$\begin{aligned} \liminf _{n\rightarrow \infty } {C_{n,i_n}\over |\mu _{k_n}|{\bar{{\varepsilon }}}_n}\ge - {\bar{c}} \left[ \int _{B(0,R_{\delta })}[w^{\delta }(x)]^2dx\right] ^{-1} \int _{B(0,R_\delta )}w^{\delta }(x) \, (D w(x)\cdot {\bar{x}})\, (x\cdot {\bar{\mu }}{\bar{x}})\, dx.\nonumber \\ \end{aligned}$$
(5.9)

Notice that in (5.9) the integral does not depend on \({\bar{x}}\), in the sense that its value remains unchanged if we replace \({\bar{x}}\) by any other \({\bar{x}}'\in S(0,1)\) (see (5.5)). Therefore, it follows that

$$\begin{aligned}&\liminf \limits _{n\rightarrow \infty } {1\over {\bar{{\varepsilon }}}_n|\mu _{k_n}|\, k_n} \sum \limits _{i=1}^{k_n} C_{n,i}\ge -\left[ \int _{B(0,R_{\delta })}[w^{\delta }(x)]^2dx\right] ^{-1}\cdot \\&\quad \cdot \int _{B(0,R_{\delta })}w^{\delta }(x) \, (D w(x)\cdot {\bar{x}})\, (x\cdot {\bar{\mu }}{\bar{x}})\, dx\cdot \liminf \limits _{n\rightarrow \infty }{1\over {\bar{{\varepsilon }}}_n k_n} \sum \limits _{i=1}^{k_n}({\bar{y}}_i^{k_n}-x_i^{k_n})\cdot x_i^{k_n}. \end{aligned}$$

On the other hand, we have

$$\begin{aligned} \lim _{n\rightarrow \infty }{1\over {\bar{{\varepsilon }}}_n k_n} \sum \limits _{i=1}^{k_n}({\bar{y}}_i^{k_n}-x_i^{k_n})\cdot x_i^{k_n}={{\bar{\mu }}\over 2} \end{aligned}$$

because of the definition of \(\rho _n\), taking into account that \((\bar{y}^{k_n}_1,\dots ,\bar{y}^{k_n}_{k_n})\in D^{k_n,\bar{\sigma }} \left( \rho _n,\right. \) \( \frac{x^{k_n}_1}{|x^{k_n}_1|}, \left. \dots , \frac{x^{k_n}_{k_n}}{|x^{k_n}_{k_n}|}\right) \). Therefore, since \(\bar{\mu }^2=1\), we obtain

$$\begin{aligned}&\liminf _{n\rightarrow \infty } {E({\bar{v}}_n)-E(u_{k_n})\over {\bar{{\varepsilon }}}_n|\mu _{k_n} |\, k_n }\ge - {1\over 2} \left[ \int _{B(0,R_{\delta })}[w^{\delta }(x)]^2dx\right] ^{-1} \cdot \\&\quad \cdot \int _{B(0,R_{\delta })}w^{\delta }(x)\, (Dw(x)\cdot {\bar{x}})(x\cdot {\bar{x}})\, dx> 0, \end{aligned}$$

which is in contradiction with (5.7). So the proof is complete. \(\square \)

Proposition 5.4

Assume that \(\sigma ={\bar{\sigma }}\) (see Lemma 4.4). Let \((x^k_1,\ldots ,x^k_k)\) and \(r_k\) be as in Proposition 3.1. Let \(u_k\) be a minimizing function for the energy functional E in \(S_{x^k_1,\ldots ,x^k_k}\) and \(\lambda ^k_1,\ldots ,\lambda ^k_k\) the corresponding Lagrange multipliers (see Proposition 2.6). Then, there exists \({\bar{k}}\in {\mathbb {N}}\) such that

$$\begin{aligned} \lambda _i^k=0 \qquad \forall k\ge {\bar{k}},\ \forall i\in \{1,\ldots ,k\}. \end{aligned}$$

Proof

Arguing by contradiction, assume that there exists a sequence \((k_n)_n\) in \({\mathbb {N}}\) such that \(\lim _{n\rightarrow \infty }k_n=\infty \) and, for all \(n\in {\mathbb {N}}\), \(\lambda ^{k_n}_i\ne 0\) for some \(i\in \{1,\ldots ,k_n\}\).

For all \(n\in {\mathbb {N}}\), choose \(i_n\in \{1,\ldots ,k_n\}\) such that

$$\begin{aligned} |{\lambda }^{k_n}_{i_n}|=\max \{|{\lambda }^{k_n}_i|\ :\ i=1,\ldots ,k_n\}. \end{aligned}$$

Thus, we have \({\lambda }^{k_n}_{i_n}\ne 0\) \(\forall n\in {\mathbb {N}}\), so we can choose a sequence \(({\hat{{\varepsilon }}}_n)_n\) of positive numbers such that

$$\begin{aligned} \lim _{n\rightarrow \infty } {{\hat{{\varepsilon }}}_n \, k_n\over |{\lambda }^{k_n}_{i_n}|}=0. \end{aligned}$$
(5.10)

Notice that \(\hat{\varepsilon }_n\rightarrow 0\) as \(n\rightarrow \infty \) because \({\lambda }^{k_n}_{i_n}\rightarrow 0\), as it follows from (2.21) taking into account (4.1) and (4.2). Now, set \(\hat{\lambda }_n={{\lambda }^{k_n}_{i_n}\over |{\lambda }^{k_n}_{i_n} |}\) and consider the points \(\hat{y}^{k_n}_1,\ldots ,\hat{y}^{k_n}_{k_n}\) in \({\mathbb {R}}^N\) such that

$$\begin{aligned} \hat{y}^{k_n}_{i_n}=|x^{k_n}_{i_n}|\, {x^{k_n}_{i_n}+{\hat{{\varepsilon }}}_n\hat{\lambda }_n\over |x^{k_n}_{i_n}+{\hat{{\varepsilon }}}_n\hat{\lambda }_n |}\ \text{ and } \hat{y}^{k_n}_i=x^{k_n}_i\ \text{ for } i\ne i_n, \ i\in \{1,\ldots ,k_n\}, \end{aligned}$$

so that \(\left( r_{k_n},\frac{\hat{y}^{k_n}_1}{|\hat{y}^{k_n}_1|},\dots , \frac{\hat{y}^{k_n}_{k_n}}{|\hat{y}^{k_n}_{k_n}|}\right) \in D^{k_n,{\bar{\sigma }}}\) for large enough n.

Then, by Proposition 3.1 we get

$$\begin{aligned} g^{k_n,{\bar{\sigma }}}\left( r_{k_n},{\hat{y}^{k_n}_1\over |\hat{y}^{k_n}_1 |},\ldots , {\hat{y}^{k_n}_{k_n}\over |\hat{y}^{k_n}_{k_n} |}\right) \le g^{k_n,{\bar{\sigma }}}\left( r_{k_n},{ x^{k_n}_1\over |x^{k_n}_1 |},\ldots , { x^{k_n}_{k_n}\over |x^{k_n}_{k_n} |}\right) =f_{k_n}(x^{k_n}_1,\dots ,x^{k_n}_{k_n}).\nonumber \\ \end{aligned}$$
(5.11)

Let us choose \((y^{k_n}_1,\ldots ,y^{k_n}_{k_n})\) in \(D^{k_n,\bar{\sigma }}\left( r_{k_n},\frac{\hat{y}^{k_n}_1}{|\hat{y}^{k_n}_1|},\dots , \frac{\hat{y}^{k_n}_{k_n}}{|\hat{y}^{k_n}_{k_n}|}\right) \) such that

$$\begin{aligned} f_{k_n}( y^{k_n}_1,\ldots , y^{k_n}_{k_n} )= g^{k_n,{\bar{\sigma }}}\left( r_{k_n},{\hat{y}^{k_n}_1\over |\hat{y}^{k_n}_1|}, \ldots , {\hat{y}^{k_n}_{k_n}\over |\hat{y}^{k_n}_{k_n}|}\right) . \end{aligned}$$
(5.12)

Then, by (5.11), we have

$$\begin{aligned} f_{k_n}(y^{k_n}_1,\ldots , y^{k_n}_{k_n})\le f_{k_n}(x^{k_n}_1,\ldots ,x^{k_n}_{k_n}), \end{aligned}$$

which implies

$$\begin{aligned} E(v_n)\le E(u_{k_n}) \end{aligned}$$
(5.13)

for every \(v_n\in S_{y^{k_n}_1,\ldots ,y^{k_n}_{k_n}}\) such that \(E(v_n)=f_{k_n}( y^{k_n}_1,\ldots ,\) \(y^{k_n}_{k_n})\). Notice that \(\lim \limits _{n\rightarrow \infty }(\hat{y}^{k_n}_{i_n}-x^{k_n}_{i_n})=0\) implies \(\lim \limits _{n\rightarrow \infty }( y^{k_n}_{i_n}-x^{k_n}_{i_n})=0\). In fact, like the points \(x^{k_n}_1,\ldots ,x^{k_n}_{k_n}\), the points \(y^{k_n}_1,\ldots ,y^{k_n}_{k_n}\) also tend, as \(n\rightarrow \infty \), to lie close to spheres of \({\mathbb {R}}^N\) centred at the origin (because of (5.12), Lemma 4.6 and Corollary 4.7). Therefore, since \(\hat{y}^{k_n}_{i_n}-x^{k_n}_{i_n}\rightarrow 0\) and the distances of both \(x^{k_n}_{i_n}\) and \(y^{k_n}_{i_n}\) from the sphere \(S(0,r_{k_n})\) tend to 0, also \(y^{k_n}_{i_n}-x^{k_n}_{i_n}\rightarrow 0\) as \(n\rightarrow \infty \). Thus, arguing as in [11,12,13] (in particular as in the proof of Theorem 2.4 in [11]), taking into account the assumption (5.10), Lemma 2.1 and (5.5), we have

$$\begin{aligned} \liminf _{n\rightarrow \infty }{E(v_n)-E(u_{k_n})\over {\hat{{\varepsilon }}}_n\, |{\lambda }^{k_n}_{i_n}|}\ge {1\over 2}\, \int _{{\mathbb {R}}^N}[w^{\delta }(x)]^2dx\cdot \liminf _{n\rightarrow \infty }{1\over \hat{\varepsilon }_n} (y_{i_n}^{k_n}-x_{i_n}^{k_n})\cdot \hat{\lambda }_n. \end{aligned}$$
(5.14)

Moreover, since

$$\begin{aligned} y^{k_n}_{i_n}=|y^{k_n}_{i_n}|\cdot {x^{k_n}_{i_n}+\hat{\varepsilon }_n\hat{\lambda }_n\over |x^{k_n}_{i_n}+\hat{\varepsilon }_n\hat{\lambda }_n |},\qquad |y^{k_n}_{i_n}|\ge (1+2{\bar{\sigma }})^{-1}\cdot r_{k_n}\quad \forall n\in {\mathbb {N}}, \end{aligned}$$

taking into account that \(\lim _{n\rightarrow \infty } {|x^{k_n}_{i_n}|\over r_{k_n}}=1\) because of Lemma 4.4 and that \(x^{k_n}_{i_n}\cdot \hat{\lambda }_n=0\) \(\forall n\in {\mathbb {N}}\) because of Proposition 5.3, we obtain by direct computation

$$\begin{aligned} \liminf _{n\rightarrow \infty }{1\over \hat{\varepsilon }_n}(y^{k_n}_{i_n}-x^{k_n}_{i_n})\cdot \hat{\lambda }_n =\liminf _{n\rightarrow \infty } {|y^{k_n}_{i_n}|\over |x^{k_n}_{i_n}+{\hat{{\varepsilon }}}_n\,\hat{\lambda }_n|} =\liminf _{n\rightarrow \infty } {|y^{k_n}_{i_n}|\over r_{k_n}} \ge (1+2{\bar{\sigma }})^{-1} \end{aligned}$$

which, combined with (5.14), implies

$$\begin{aligned} \liminf _{n\rightarrow \infty }{E(v_n)-E(u_{k_n})\over {\hat{{\varepsilon }}}_n\, |{\lambda }^{k_n}_{i_n}|}\ge 0. \end{aligned}$$
(5.15)

It is clear that (5.15) is in contradiction with (5.13), so the proof is complete. \(\square \)

Proof of Proposition 1.2

Proposition 5.4 guarantees the existence of \({\bar{k}}\in {\mathbb {N}}\) and of a solution \(u_k\), \(\forall k\ge {\bar{k}}\), having all the properties reported in Proposition 1.2. In fact, (1.10), (1.11) and (1.8) follow from Proposition 4.1, while (1.12) and (1.13) follow from Lemma 4.4 and Corollary 4.7, respectively. Moreover, from (1.10), (1.11), (1.8) and (2.17) we infer that (1.9) holds and \(u_k\rightarrow 0\) as \(k\rightarrow \infty \), uniformly on the compact subsets of \({\mathbb {R}}^N\) (because \(\lim _{|x|\rightarrow \infty }w(x)=0\)).

Moreover (2.19) implies \(\lim _{k\rightarrow \infty }\Vert u_k\Vert _{H^1({\mathbb {R}}^N)}=\infty \).

Finally, notice that (1.10), (1.11) and (1.8) imply that \(\liminf _{k\rightarrow \infty }E(u_k)\ge k'E_\infty (w)\) \(\forall k'\in {\mathbb {N}}\), so, letting \(k'\rightarrow \infty \), we obtain \(\lim _{k\rightarrow \infty }E(u_k)=\infty \). \(\square \)

Theorem 1.1 is a direct consequence of Proposition 1.2.

Remark 5.5

In order to give examples of potentials that satisfy all the assumptions required in Theorem 1.1, consider a potential a(x), satisfying condition (1.3), such that \(a(x)-a_\infty \) is the sum of a positive radial function with polynomial decay as \(|x|\rightarrow \infty \) and of a function whose second derivatives have exponential decay. Then, the potential a(x) satisfies also conditions (1.4), (1.5) and (1.6). Thus, for example, the potential \(a(x)=a_\infty +(1+|x|^n)^{-1}+b(x)e^{-|x|}\) satisfies all the conditions of Theorem 1.1 for all \(n\in {\mathbb {N}}\) and \(a_\infty > 0\) when the function \(b(x)\in {\mathcal {C}}^2({\mathbb {R}}^N)\) satisfies \(b(x)> -a_\infty \) \(\forall x\in {\mathbb {R}}^N\) and has bounded second derivatives, as one can verify by direct computation.

Furthermore, notice that our result is new even if a(x) has radial symmetry because, unlike all the previous results obtained under radial symmetry assumptions, we obtain solutions with bumps uniformly distributed near spheres of dimension \(N-1\), as we describe in Proposition 1.2.

Moreover, if we assume that the potential a(x) satisfies in addition suitable symmetry assumptions, then we can easily construct infinitely many positive solutions with bumps distributed near a d-dimensional sphere for every integer d such that \(1\le d\le N-1\). For example, if the potential a(x) satisfies also the symmetry condition

$$\begin{aligned} a(x_1,\ldots ,x_{d+1},x_{d+2},\ldots ,x_N)= a(x_1,\ldots ,x_{d+1},|x_{d+2}|,\ldots ,|x_N|)\quad \forall (x_1,\ldots ,x_N)\in {\mathbb {R}}^N, \end{aligned}$$

then there also exist solutions with the bumps asymptotically distributed near spheres of the subspace \(\{(x_1,\ldots ,x_N)\in {\mathbb {R}}^N\) : \(x_i=0\) for \(i\ge d+2\}\). In fact, for all \(k\in {\mathbb {N}}\), there exist \(\theta _1^k,\ldots ,\theta _k^k\) in \(S^d=\{x=(x_1,\ldots ,x_N)\in {\mathbb {R}}^N\) : \(|x|=1\), \(x_i=0\) \(\forall i\ge d+2\}\) and \(\rho _1^k,\ldots ,\rho _k^k\) in \({\mathbb {R}}^+\) such that, for k large enough, every minimizing function for E in \(S_{\rho _1^k\theta _1^k,\ldots , \rho _k^k\theta _k^k}\) is a solution of problem (1.1) and satisfies properties similar to those in Proposition 1.2 and Lemma 4.4, in particular the properties

$$\begin{aligned}&\lim \limits _{k\rightarrow \infty }\min \{\rho _i^k\ :\ i=1,\ldots ,k\}=\infty ,\\&\lim \limits _{k\rightarrow \infty } \min \{|\rho _i^k\theta ^k_i-\rho _j^k\theta ^k_j|\ :\ i,j=1,\ldots ,k,\ i\ne j\}=\infty ,\\&\lim \limits _{k\rightarrow \infty } \frac{\max \{\rho _i^k\ :\ i=1,\ldots ,k\}}{\min \{\rho _i^k\ :\ i=1,\ldots ,k\}} =1, \qquad \qquad \lim \limits _{{\varepsilon }\rightarrow 0}\lim \limits _{k\rightarrow \infty }{N_k'(\theta ,{\varepsilon })\over k\,{\varepsilon }^{d}}\ge c'\quad \forall \theta \in S^d \end{aligned}$$

where \(c'\) is a positive constant independent of \(\theta \) and \(N'_k(\theta ,{\varepsilon })\) denotes the number of elements of the set \(\{\theta ^k_i\in S^d\ :\ i=1,\ldots ,k,\ \theta ^k_i\in B(\theta ,{\varepsilon })\}\).

When \(d=1\) and, in addition, a(x) has radial symmetry in the variables \(x_1,x_2\), there also exist other solutions (corresponding to higher critical levels of the energy functional E). In fact, for all \(k\in {\mathbb {N}}\) and \(\rho > 0\), consider the k points in \({\mathbb {R}}^N\)

$$\begin{aligned} x_i^{k,\rho }=\left( \rho \,\cos {2\pi i\over k},\rho \,\sin {2\pi i\over k},0,\ldots ,0\right) \quad \text{ for } i=1,\ldots ,k. \end{aligned}$$
(5.16)
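
For these points the mutual distances can be computed explicitly: for \(i\ne j\),

$$\begin{aligned} |x_i^{k,\rho }-x_j^{k,\rho }|=2\rho \left| \sin {\pi (i-j)\over k}\right| \ge 2\rho \,\sin {\pi \over k}, \end{aligned}$$

so that, for each fixed k, the minimal mutual distance equals \(2\rho \,\sin {\pi \over k}\) and becomes arbitrarily large as \(\rho \rightarrow \infty \).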

If \(( x_1^{k,\rho },\ldots , x_k^{k,\rho })\in D_k\), set \({\varphi }_k(\rho )=f_k(x_1^{k,\rho },\ldots , x_k^{k,\rho })\). Then, as in the proof of Theorem 1.1, one can show that there exists \({\tilde{k}}\in {\mathbb {N}}\) such that, for all \(k\ge {\tilde{k}}\), \((x_1^{k,\rho },\ldots , x_k^{k,\rho })\) is in the interior of \(D_k\) and there exists \({\tilde{r}}_k> 0\) such that

$$\begin{aligned} {\varphi }_k({\tilde{r}}_k)\ge {\varphi }_k(\rho )\qquad \forall \rho > 0\ \text{ such } \text{ that } ( x_1^{k,\rho },\ldots , x_k^{k,\rho })\in D_k. \end{aligned}$$

Let \({\tilde{u}}^{k,{\tilde{r}}_k}\) be any function in \(S_{x_1^{k,{\tilde{r}}_k},\ldots , x_k^{k,{\tilde{r}}_k} }\) such that \(E({\tilde{u}}^{k,{\tilde{r}}_k})=f_k( x_1^{k,{\tilde{r}}_k},\ldots , x_k^{k,{\tilde{r}}_k} )\). Because of the radial symmetry, it follows that there exists a Lagrange multiplier \(\tilde{\mu }_k\) such that \(\lambda _i^k={\tilde{\mu }}_k\cdot x_i^{k,{\tilde{r}}_k}\) for \(i=1,\ldots ,k\). Thus, arguing by contradiction as in the proof of Proposition 5.4, one can prove that \(\tilde{\mu }_k=0\) for k large enough, namely \({\tilde{u}}^{k,{\tilde{r}}_k} \) is a solution of problem (1.1).

Remark 5.6

Unlike the results proved in [12, 13], Theorem 1.1 does not require \(\sup _{x\in {\mathbb {R}}^N}\) \(|a(x)-a_\infty |_{L^{N/2}(B(x,1))}\) to be small and, indeed, this quantity may be arbitrarily large. For example, let \(\Omega \) be a bounded domain of \({\mathbb {R}}^N\) and, for all \(s\ge 0\), set \(a_s(x)=s{\bar{a}}(x)+a(x)\) \(\forall x\in {\mathbb {R}}^N\), where a(x) is as in Theorem 1.1 and \({\bar{a}}(x)\) is a nonnegative function which is positive only in \(\Omega \). Then, for k large enough and for all \(s\ge 0\), there exists a k-bump solution \(u_{k,s}\) to problem (1.1) with a(x) replaced by \(a_s(x)\); moreover, as \(s\rightarrow \infty \), \(u_{k,s}\) converges to a k-bump solution \({\tilde{u}}_k\) in the exterior domain \(\widetilde{\Omega }={\mathbb {R}}^N\setminus \overline{\Omega }\), with zero Dirichlet boundary condition (on the other hand, the solution \({\tilde{u}}_k\) may also be obtained directly, since our method can be adapted to deal with Dirichlet problems in exterior domains).

Remark 5.7

The method developed in this paper may also be used to construct sequences \(({\hat{u}}_n)_n\) of positive solutions of problem (1.1) which converge in \(H^1_{\mathrm{loc}\,} ({\mathbb {R}}^N)\) to a positive solution \({\hat{u}}\) having infinitely many bumps (while the sequence \((u_k)_{k\ge {\bar{k}}}\) given by Theorem 1.1 converges to the trivial solution \(u\equiv 0\)). The bumps are distributed near infinitely many spheres centred at the origin. Since the radii of these spheres may be chosen in infinitely many ways, we obtain infinitely many positive solutions having infinitely many positive bumps (while the result presented in [13] guarantees only the existence of one solution having this property, under the additional assumption that \(\sup _{x\in {\mathbb {R}}^N}|a(x)-a_\infty |_{L^{N/2}(B(x,1))}\) is small enough).