1 Introduction and Main Result

This article is devoted to the regularity properties of extremal solutions of the nonlinear Dirichlet elliptic equation with quadratic convection:

$$\begin{aligned} \left\{ \begin{array}{cc} -\Delta u + g(u) |\nabla u|^{2}=\lambda f(u), &{} \text{ in } \ \Omega , \\ u > 0, &{} \text{ in } \ \Omega , \\ u=0, &{} \text{ on } \ \partial \Omega , \end{array} \right. \end{aligned}$$
(1)

where \( \Omega \subset {\mathbb {R}}^N\) \((N \ge 3) \) is a smooth bounded domain, \( \lambda \) is a positive real parameter, f is a \(C^1\), strictly increasing function on \( [0, \infty ) \) with \( f(0) > 0 \), and g is a positive, decreasing function, continuous either on \( (0, \infty ) \) or on \( [0, \infty ) \), and integrable in a neighborhood of zero.

The typical examples of nonlinearities f with the above properties are \( (1+u)^{p} \) with \( p > 1 \) and the exponential \( e^{u} \); one can also include functions with linear growth at infinity, see Mironescu and Rădulescu [33, 34]. As for the positive decreasing function g in (1), a typical example is \( g(s)=s^{-\gamma } \) with \( \gamma \in {(0, 1)} \).

A positive function \( u \in {W^{1,2}_{0}(\Omega )} \) is a weak solution of (1) if both \( g(u)|\nabla u|^{2} \) and f(u) belong to \({L^{1}(\Omega )} \) and for all \( \phi \in {W^{1,2}_{0}(\Omega ) \cap L^{\infty }(\Omega )} \):

$$\begin{aligned} \int _{\Omega } \nabla u \cdot \nabla \phi dx + \int _{\Omega } g(u) { |\nabla u|}^{2} \phi dx= \int _{\Omega } \lambda f(u) \phi dx. \end{aligned}$$

A solution u of problem (1) is said to be stable if \( \big ( f'(u)-g(u)f(u) \big ) \in {L^1_{loc}(\Omega )} \) and for every \( \phi \in {W^{1,2}_{0}(\Omega )} \):

$$\begin{aligned} \int _{\Omega } |\nabla \phi |^{2}dx \ge \lambda \int _{\Omega } \left( f'(u)-g(u)f(u) \right) \phi ^{2}dx. \end{aligned}$$
(2)

This condition was introduced by Arcoya et al. [8]. Moreover, a solution u of (1) is said to be regular if \( u \in { L^{\infty }(\Omega ) } \), and minimal if \( u \le v \) a.e. in \( \Omega \) for any other solution v, see Molino [35].

Quasilinear problems having lower order terms with quadratic growth with respect to the gradient play a crucial role in the study of nonlinear differential equations: they arise naturally in the calculus of variations and in stochastic control [11, 31], and they are motivated by a wide range of applications, such as thermal self-ignition in combustion theory and the temperature distribution in an object heated by a uniform electric current, see [29, 30, 32].

Quasilinear Dirichlet problems of the type:

$$\begin{aligned} \left\{ \begin{array}{cc} -\Delta u + g(u) |\nabla u|^{2} = f(x, u), &{} \text{ in } \ \Omega , \\ u > 0, &{} \text{ in } \ \Omega , \\ u=0, &{} \text{ on } \ \partial \Omega , \end{array} \right. \end{aligned}$$
(3)

in the case when the right-hand side does not depend on u have been extensively studied in the pioneering works of Boccardo et al. [9, 10, 12], while the case when f is nonlinear has been studied less. Arcoya et al. [3,4,5, 7] considered Problem (3) in the case when g has a singularity and described some applications. Moreover, some results about uniqueness, comparison, and maximum principles for the general form of quasilinear elliptic equations with quadratic growth conditions have been proved in [15]. Orsina and Puel [37] considered Problem (3), where g is a non-negative continuous function, and proved several existence results for positive solutions of (3) with a power-like right-hand side. This case has also been studied recently by Boccardo et al. [13, 14]. Furthermore, it is shown that if \( g(u)= (1-u)^{-\gamma } \) in (3), where \( \gamma >0 \), then the existence and non-existence of solutions depend on the nonlinearity f and on the value of \( \gamma \). It is worth mentioning here that some related problems are considered in [23,24,25,26,27,28], where the authors established several results concerning existence, non-existence, or bifurcation of positive solutions for the boundary value problem \(-\Delta u + K(x)g(u)+ |\nabla u|^{a} = \lambda f(x, u)\) in \(\Omega \), \(u = 0\) on \(\partial \Omega \), where \(\Omega \) is a smooth bounded domain, \(0< a \le 2\), \(\lambda \) is a positive parameter, and f is smooth and has sublinear growth.

Arcoya et al. [8] proved that if, in Problem (1), \( f'(s)-g(s)f(s) \) is an increasing function, \( 1/f \in {L^1(0,\infty )} \), and there is a positive constant c such that \( \vert f'(s)/f^2(s) \vert \le c (1+ \sqrt{g(s)}) \), then there exists a parameter \( \lambda ^*\in {(0, +\infty )}\) such that Problem (1) has a bounded minimal solution for \( \lambda < \lambda ^* \) and no solution for \( \lambda > \lambda ^* \). Furthermore, they proved that, under suitable conditions, the family of bounded minimal solutions for \( \lambda < \lambda ^* \) converges to a weak solution of Problem (1) for \( \lambda = \lambda ^* \), which is also stable and minimal. Molino [35] proved that if:

  1. (H1)

    \( \limsup _{s \longrightarrow \infty } g(s) < \infty \),

  2. (H2)

    \( f'(s)-g(s)f(s) > 0 \) and non-singular (\( s\ge 0 \)),

  3. (H3)

    \( e^{-G(s)} \in {L^{1}(1,\infty )} \), where \( G(s) := \int _{0}^{s} g(t) dt \),

  4. (H4)

    \( \forall C> 0\), \(\exists {\tilde{C}} > 0 \ : \ g(Cs) \le {\tilde{C}} g(s),\ \forall s < 1 \),

then there exists \( \lambda ^* \in {(0, +\infty ]} \), such that for every \( \lambda < \lambda ^* \), there is a bounded minimal solution \( u_{\lambda } \) of (1) and no solution for \( \lambda > \lambda ^* \). Also, the family of functions \( \lbrace {u_{\lambda }} \rbrace _{0<\lambda <\lambda ^*} \) is increasing and bounded in \( W_{0}^{1,2}(\Omega ) \) when the functions f and g satisfy the following extra condition:

  1. (H5)

                            \(\lim _{s \longrightarrow \infty } \dfrac{s \left( f'(s) - g(s) f(s)\right) }{f(s)} = \tau \in {(1, \infty ]}.\)

Moreover, it is proved that the increasing pointwise limit \( u^*(x) = \lim _{\lambda \longrightarrow \lambda ^*} u_{\lambda } (x) \) is a weak solution of (1) for \( \lambda = \lambda ^* \), which is called the extremal solution. Furthermore, under conditions (H1)-(H5), if \( f'(s) - g(s) f(s) \) is a strictly increasing function, then every stable solution of Problem (1) is minimal. In particular, the extremal solution \( u^* \) is stable and minimal, Molino [35].

We raise the following natural question: when is the extremal solution regular? Arcoya, Carmona, and Martínez-Aparicio [8] proved that if the limits:

$$\begin{aligned} \alpha :=\lim _{s \rightarrow \infty } \frac{g(s)f(s)}{f'(s)} \quad \quad \text {and} \quad \quad \mu :=\lim _{s \rightarrow \infty } \frac{f(s) [f'(s)-g(s)f(s)]'}{f'(s) [f'(s)-g(s)f(s)]}, \end{aligned}$$

exist, then the extremal solution of Problem (1) is bounded whenever:

$$\begin{aligned} N < 4(1-\alpha ) +2 \mu + 4 \sqrt{ \mu (1- \alpha )}. \end{aligned}$$

Also (see [8, Remark 4.8]), if \( g \ge 0 \) and for some \( p,k > 1, f(s) \sim ks^p \) for \( s \gg 1 \), then the above result can be improved to:

$$\begin{aligned} N < \frac{p}{p-1} \left( 4(1-\alpha ) +2 \mu + 4 \sqrt{\mu (1- \alpha )} \right) . \end{aligned}$$

As a particular case, if \( f(u)=(1+u)^p \) and \( g(u)= \frac{m}{1+u} \) in (1), where m is a positive constant and \( p >m+ \frac{1}{m+1} \), then \( u^* \) is regular whenever:

$$\begin{aligned} 3 \le N < 4\Big (\frac{p-m}{p-1}\Big ) +2 + 4 \sqrt{\frac{p-m}{p-1}}. \end{aligned}$$

Molino [35] considered Problem (1) with \( f(s)=e^{G(s)}h(s) \), where h(s) is a differentiable function in \( [0, \infty ) \) and \( h(0) > 0 \), and improved the results under assumptions (H1)–(H5). He proved that the extremal solution of Problem (1) (if h is convex) is regular whenever:

$$\begin{aligned} N < \frac{4+2({\tilde{\mu }} + {\tilde{\alpha }}) + 4 \sqrt{{\tilde{\mu }} + {\tilde{\alpha }}}}{1+ {\tilde{\alpha }}}, \end{aligned}$$

where

$$\begin{aligned} {\tilde{\alpha }}:= \lim _{s \longrightarrow \infty } \frac{g(s)h(s)}{h'(s)} \quad \text {and} \quad {\tilde{\mu }}:= \lim _{s \longrightarrow \infty } \frac{h''(s) h(s)}{{h'(s)}^2}. \end{aligned}$$
(4)

Remark 1

Notice that, in (4), we always have \( {\tilde{\mu }} \ge 1-\frac{1}{\tau } > 0 \), where \(\tau \in {(1, \infty ]}\) is defined in (H5); this also implies that the function h must be eventually strictly convex and that \( f'(s)-f(s)g(s) \) is eventually increasing. To see this, take an arbitrary \(\mu \) with \( {\tilde{\mu }}< \mu < 1 \) (if \( {\tilde{\mu }} \ge 1 \), there is nothing to prove). By the definition of \({\tilde{\mu }}\), there exists \( s_{\mu } >0 \) such that \( \frac{h''(s)}{h'(s)} < \mu \frac{ h'(s)}{h(s)} \) for all \( s > s_{\mu } \). Integrating this inequality twice, we get \( h(s) < C_1 s^{\frac{1}{1-\mu }} \) for s large, where \( C_1 \) is a positive constant. On the other hand, from (H5), for an arbitrary \( \tau ' \in {(1, \tau )} \), there exists \( s_{\tau '} >0 \) such that \( \frac{h'(s)}{h(s)} > \frac{\tau '}{s} \) for all \( s > s_{\tau '} \), which implies that \( h(s) > C_2 s^{\tau '} \) for all \( s > s_{\tau '} \), where \( C_2 \) is a positive constant. Comparing the two bounds forces \( \tau ' \le \frac{1}{1-\mu } \), that is, \( \mu \ge 1-\frac{1}{\tau '} \); letting \( \tau ' \rightarrow \tau \) and \( \mu \rightarrow {\tilde{\mu }} \) proves the claim. In particular, since \( {\tilde{\mu }} > 0 \), the second derivative \( h'' \) is eventually positive, so h is eventually strictly convex and \( f'(s)-f(s)g(s)=e^{G(s)}h'(s) \) is eventually increasing. Moreover, notice that, by hypothesis (H2) on f and g, the function h is increasing, and, as we have seen above, h is also superlinear (that is, \( \lim _{s \rightarrow \infty } \frac{h(s)}{s} = \infty \)).

It is worth mentioning here that there is a large literature devoted to the semi-linear analogue of (1), namely the Gelfand problem:

$$\begin{aligned} \left\{ \begin{array}{cc} -\Delta u =\lambda f(u), &{} \text{ in } \ \Omega , \\ u \ge 0, &{} \text{ in } \ \Omega , \\ u=0, &{} \text{ on } \ \partial \Omega , \end{array} \right. \end{aligned}$$
(5)

where \( \Omega \subset {\mathbb {R}}^{N} \) \((N \ge 1)\) is a smooth bounded domain, \( \lambda \) is a positive parameter, and \( f: [0, \infty ) \longrightarrow \mathbb {R} \) is \( C^{1}\), non-decreasing, superlinear, and satisfies \( f(0) > 0 \). Regularity of the extremal solutions of (5) has been extensively studied in the literature, and it is known to depend strongly on the dimension N, the domain \( \Omega \), and the nonlinearity f; see, for example, [1, 2, 17, 19,20,21,22, 36]. It is proved that when \( f(s)=e^s \), the extremal solution \(u^*\) of (5) is regular for \( N < 10 \); also, if \( f(s)= (1+s)^p \) and \( p > 1 \), then \( u^* \) is regular for \( N < 4+2(1-1/p) +4\sqrt{1-1/p} \). It was then conjectured (in connection with two open problems stated by Brezis [16] in the context of “extremal solutions”) that \(u^*\) is bounded in dimension \(N\le 9\), and also belongs to the natural energy space \(W^{1,2}_0(\Omega )\) in every dimension. Very recently, Cabré et al. [18] completely solved these two open problems and proved that stable solutions to semi-linear elliptic equations are bounded (and thus smooth) in dimension \(N\le 9\).

In this work, we consider Problem (1) with f belonging to a general class of functions. First, for the remainder of this paper, we set:

$$\begin{aligned} h(s):=f(s)e^{-G(s)}. \end{aligned}$$
(6)

Then, we see that Problem (1) can be rewritten as:

$$\begin{aligned} \left\{ \begin{array}{cc} -\Delta u + g(u) |\nabla u|^{2}=\lambda e^{G(u)} h(u), &{} \text{ in } \ \Omega , \\ u > 0, &{} \text{ in } \ \Omega , \\ u=0, &{} \text{ on } \ \partial \Omega , \end{array} \right. \end{aligned}$$
(7)

and a solution u of (7) is stable if \( e^{G(u)} h'(u) \in {L^1_{loc}(\Omega )} \) and satisfies:

$$\begin{aligned} \int _{\Omega } |\nabla \phi |^{2} \ge \lambda \int _{\Omega } e^{G(u)} h'(u) \phi ^{2}, \end{aligned}$$
(8)

for all \( \phi \in {W^{1,2}_{0}(\Omega )} \).

In this case, hypotheses (H2) and (H5) take the following simple forms, respectively:

$$\begin{aligned} h'(s)>0 \ \text {and is non-singular for } s\ge 0,~~\text {and}~~\lim _{s\rightarrow \infty }\frac{sh'(s)}{h(s)}=\tau \in (1,\infty ]. \end{aligned}$$

As we mentioned in Remark 1, all previous works assume that h is a convex function (or eventually convex). In this paper, however, we remove this extra restriction and allow the function h to be nonconvex. Thus, instead of the parameters in (4) used in previous works, we define the following new ones:

$$\begin{aligned} \alpha _{-}:= & {} \liminf _{t \longrightarrow \infty } \frac{h'(t)H(t)}{h(t)^2} \le \alpha _{+}:=\limsup _{t \longrightarrow \infty } \frac{h'(t)H(t)}{h(t)^2},\\ \beta _{-}:= & {} \liminf _{t \longrightarrow \infty } \frac{g(t)H(t)}{h(t)} \le \beta _{+}:= \limsup _{t \longrightarrow \infty } \frac{g(t)H(t)}{h(t)}, \end{aligned}$$

where \( H(t):=\int _{0}^{t} h(s)ds \). Now, we state our main regularity result.

Theorem 1

Let \( u^* \) be the extremal solution of Problem (7) and \( \Omega \) be an arbitrary bounded smooth domain. If \( 0<\alpha _{-} \le \alpha _{+}< \infty \), \(\beta _{+} < \infty \) and \( 2 \alpha _{-} + \beta _{-} \ge 1 \), then \( u^* \in {L^{\infty }(\Omega )} \) whenever:

$$\begin{aligned} N < \dfrac{4 \alpha _{-}}{\alpha _{-} + \beta _{+}} \left( 1+ \dfrac{\sqrt{\alpha _- (2\alpha _{-}+\beta _{-}-1)}}{\alpha _{+}} + \dfrac{2\alpha _{-}+\beta _{-}-1}{ 2\alpha _{+}} \right) . \end{aligned}$$
(9)

If, in addition to the above assumptions, we have \( \alpha _-+\beta _- \le 1 \), then \( u^* \in {L^{\infty }(\Omega )} \) whenever:

$$\begin{aligned} N < \dfrac{4 \alpha _{-}}{\alpha _{-} + \beta _{+}} \left( 1+\sqrt{\frac{2 \alpha _{-} + \beta _{-} -1}{\alpha _{+}}} + \dfrac{2\alpha _{-}+\beta _{-}-1}{2 \alpha _{+}} \right) . \end{aligned}$$
(10)
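Before turning to examples, it may help to note that \( \alpha _{\pm } \), \( \beta _{\pm } \) and the bounds (9)–(10) are straightforward to evaluate numerically. The following is a minimal sketch (not part of the paper; the function names, the window-based stand-in for \( \liminf /\limsup \), and the test nonlinearity are illustrative assumptions):

```python
import math
import numpy as np
from scipy.integrate import cumulative_trapezoid

def estimate_alpha_beta(h, hprime, g, t_max=60.0, n=600_000, window=0.2):
    """Approximate alpha_-, alpha_+, beta_-, beta_+ by min/max of the ratios over large t."""
    t = np.linspace(1e-6, t_max, n)
    H = cumulative_trapezoid(h(t), t, initial=0.0)       # H(t) = int_0^t h(s) ds
    alpha = hprime(t) * H / h(t) ** 2
    beta = g(t) * H / h(t)
    tail = t > (1.0 - window) * t_max                    # look only at large t
    return alpha[tail].min(), alpha[tail].max(), beta[tail].min(), beta[tail].max()

def dimension_bounds(a_lo, a_hi, b_lo, b_hi):
    """Right-hand sides of (9) and (10); (10) is meaningful only if alpha_- + beta_- <= 1."""
    s = 2 * a_lo + b_lo - 1                              # requires 2*alpha_- + beta_- >= 1
    pre = 4 * a_lo / (a_lo + b_hi)
    bound9 = pre * (1 + math.sqrt(a_lo * s) / a_hi + s / (2 * a_hi))
    bound10 = pre * (1 + math.sqrt(s / a_hi) + s / (2 * a_hi))
    return bound9, bound10

# Illustrative test: h(t) = e^t with g = 0 (the Gelfand limit with f(u) = e^u)
# gives alpha_± ~ 1, beta_± = 0, and both bounds equal 10, i.e. regularity for N <= 9.
print(estimate_alpha_beta(h=np.exp, hprime=np.exp, g=np.zeros_like))
print(dimension_bounds(1.0, 1.0, 0.0, 0.0))              # -> (10.0, 10.0)
```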

1.1 Examples

We provide several examples of functions that fulfill the above hypotheses.

Example 1

Consider Problem (1) with \( g(t)\equiv C \) and \( f(t)= e^{\gamma t}\) ( \(\gamma> C >0\) ), which is equivalent to Problem (7) with \( h(t)= e^{(\gamma -C)t} \). Here, it is easy to see that we have \( \alpha _{+}=\alpha _{-}=1 \) and \( \beta _{+}=\beta _{-}= \frac{C}{\gamma - C} \). Then, by (9) in Theorem 1, \( u^{*} \in {L^{\infty }(\Omega )} \) whenever:

$$\begin{aligned} N<6-4\frac{C}{\gamma }+4\sqrt{1-\frac{C}{\gamma }}. \end{aligned}$$
(11)

We remark that for a fixed \(\gamma >0\), by letting \( C \longrightarrow 0 \), the right-hand side of (11) goes to 10 which gives the optimal regularity dimension \(N\le 9 \) for the extremal solution of the limit equation [Eq. (5)], with the exponential nonlinearity. Furthermore, the above result coincides with Proposition 3.2 in [39].
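As a quick numerical consistency check (an illustrative sketch, not from the paper), one can verify that (9), evaluated with \( \alpha _{\pm }=1 \) and \( \beta _{\pm }=C/(\gamma -C) \), agrees with the closed form (11):

```python
import math

def bound9(a, b):                 # right-hand side of (9) with alpha_- = alpha_+ = a, beta_- = beta_+ = b
    s = 2 * a + b - 1
    return 4 * a / (a + b) * (1 + math.sqrt(a * s) / a + s / (2 * a))

for C, gamma in [(0.5, 2.0), (0.1, 1.0), (1e-8, 1.0)]:
    lhs = bound9(1.0, C / (gamma - C))
    rhs = 6 - 4 * C / gamma + 4 * math.sqrt(1 - C / gamma)   # formula (11)
    print(C, gamma, lhs, rhs)     # lhs and rhs agree; both approach 10 as C -> 0
```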

Also, with the above function \(g\equiv C\), where C is a positive constant, if we set \(h(t):= e^t(1+0.1 \sin t)\) in Problem (7), then we observe that h satisfies the needed assumptions and, as \( t \longrightarrow \infty \):

$$\begin{aligned} \dfrac{h'(t) H(t)}{{h(t)}^{2}} \sim \dfrac{(1+0.1 \cos t + 0.1 \sin t)(1+ 0.05 \sin t -0.05 \cos t)}{(1+0.1 \sin t)^{2}}, \end{aligned}$$
$$\begin{aligned} \dfrac{g(t)H(t)}{h(t)} \sim \dfrac{C (1+0.05 \sin t -0.05 \cos t)}{1+0.1\sin t}, \end{aligned}$$

where the right-hand sides, denoted by \( \alpha (t) \) and \( \beta (t) \), are periodic functions with period \( 2 \pi \). By Mathematica, we compute:

$$\begin{aligned} \alpha _{-}=\min _{[0, 2\pi ]} \alpha (t) \approx 0.933238, \qquad \qquad \alpha _{+}=\max _{[0, 2\pi ]} \alpha (t) \approx 1.07681, \end{aligned}$$
$$\begin{aligned} \beta _{-}=\min _{[0, 2\pi ]} \beta (t) \approx 0.9338C \qquad \text {and} \qquad \beta _{+}=\max _{[0, 2\pi ]} \beta (t) \approx 1.0761C. \end{aligned}$$

Then, by (9) in Theorem 1, we see that the extremal solution \( u^{*} \in {L^{\infty }(\Omega )} \) whenever:

$$\begin{aligned} N < \dfrac{3.7329}{0.9332+1.0761C} \left( 1+ \dfrac{\sqrt{0.9332(0.8664+0.9338C)}}{1.0768} + \dfrac{0.8664+0.9338C}{2.1536} \right) . \end{aligned}$$

However, if C is small enough that \( \alpha _-+\beta _- \le 1 \) (with the values above, this amounts to roughly \( C \le 0.07 \)), then by (10) in Theorem 1, we can get a better upper bound on the regularity dimension for \( u^{*} \), that is:

$$\begin{aligned} N < \dfrac{3.7329}{0.9332+1.0761C} \left( 1+ \sqrt{\dfrac{0.8664+0.9338C}{1.0768}} + \dfrac{0.8664+0.9338C}{2.1536} \right) . \end{aligned}$$

Again note that by the above result, if C is sufficiently small, then \(u^*\) is regular for \(N \le 9\).
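The approximate values of \( \alpha _{\pm } \) and \( \beta _{\pm } \) quoted above can also be reproduced by a plain grid search over one period of the limiting ratios; a minimal sketch (an assumption standing in for the Mathematica computation):

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 2_000_001)
# periodic limits of h'(t)H(t)/h(t)^2 and g(t)H(t)/h(t) for h(t) = e^t (1 + 0.1 sin t), g = C
alpha = ((1 + 0.1 * np.cos(t) + 0.1 * np.sin(t))
         * (1 + 0.05 * np.sin(t) - 0.05 * np.cos(t))
         / (1 + 0.1 * np.sin(t)) ** 2)
beta_over_C = (1 + 0.05 * np.sin(t) - 0.05 * np.cos(t)) / (1 + 0.1 * np.sin(t))

print(alpha.min(), alpha.max())              # approx 0.933238 and 1.07681
print(beta_over_C.min(), beta_over_C.max())  # approx 0.9338 and 1.0761 (to be multiplied by C)
```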

Example 2

Consider Problem (7) with \( g(t)=\frac{\delta }{t+1} \), where \( \delta \) is a positive constant, and \( h(t)= t^2+3t+3\cos t +4 \). It is not hard to see that h satisfies all the needed assumptions but is not convex, even at infinity (and, hence, none of the previous results apply). However, since \( h(t) \sim t^2 \), \( h'(t) \sim 2t \), and \( H(t) \sim t^3/3 \) as \( t \longrightarrow \infty \), we easily see that \( \beta _{+}= \beta _{-}=\frac{\delta }{3} \) and \( \alpha _{+}=\alpha _{-}=\frac{2}{3} \), and then, by Theorem 1, the extremal solution \(u^*\) of Problem (7) is bounded when:

$$\begin{aligned} N < \frac{8}{2+ \delta } \left( 1+ \sqrt{\frac{\delta +1}{2}}+ \frac{\delta +1}{4} \right) . \end{aligned}$$

Example 3

Consider Problem (7) with \( g(t)=\frac{1}{t^\gamma }\), where \( \gamma \in (0,1) \), and \( h(t)=e^t(15+8\sin t)\). Here, h is increasing but not convex. Indeed, we have \(h''(t)=e^t(15+16 \cos t)\), and hence, \(\liminf _{t\rightarrow \infty } h''(t)=-\infty \). However, as \( t \longrightarrow \infty \):

$$\begin{aligned} \dfrac{h'(t) H(t)}{{h(t)}^{2}} \sim \dfrac{(15+4 \sin t - 4 \cos t)(15+ 8 \sin t + 8 \cos t)}{(15+8 \sin t)^{2}}, \end{aligned}$$

and the right-hand side, denoted by \( \alpha (t) \), is a periodic function with period \( 2 \pi \). By Mathematica, we can compute that:

$$\begin{aligned} \alpha _{-}=\min _{[0, 2\pi ]} \alpha (t) \approx 0.547593 \qquad \text {and} \qquad \alpha _{+}=\max _{[0, 2\pi ]} \alpha (t) \approx 1.80247. \end{aligned}$$

Also, it is not hard to see that \( \beta _{+}= \beta _{-}=0 \), since \( H(t)/h(t) \) remains bounded while \( g(t) \longrightarrow 0 \). Then, by (10) in Theorem 1 (whose right-hand side here evaluates to approximately 5.02), \( u^* \) is bounded for every integer dimension \( N \le 5 \), i.e., when \( N < 6 \).
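The same kind of grid search (again an illustrative sketch, not from the paper) reproduces the two values of \( \alpha _{\pm } \) used in this example:

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 2_000_001)
# periodic limit of h'(t)H(t)/h(t)^2 for h(t) = e^t (15 + 8 sin t)
alpha = ((15 + 4 * np.sin(t) - 4 * np.cos(t))
         * (15 + 8 * np.sin(t) + 8 * np.cos(t))
         / (15 + 8 * np.sin(t)) ** 2)
print(alpha.min(), alpha.max())   # approx 0.547593 and 1.80247
```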

2 Proof of the Main Result

The next simple technical lemma and its companion Proposition  1 are key ingredients in the proof of our main result.

Lemma 1

Let \( u_{\lambda } \) be the stable solution of (7), and let \( m: [0, \infty ) \longrightarrow [0, \infty ) \) be a \( C^{1} \) function which vanishes in a neighborhood of zero and satisfies:

$$\begin{aligned} K(t):= e^{G(t)} h'(t) m^2(t) -h(t) \int _{0}^{t} e^{G(s)} m'(s)^2 ds \ge 0, \quad \text {for } t \text { sufficiently large},\nonumber \\ \end{aligned}$$
(12)

where \( G(t)= \int _{0}^{t} g(s) ds \). Then, \( \Vert K(u_{\lambda })\Vert _{L^{1}(\Omega )}\le C \), where C is a constant independent of \(\lambda \).

Proof

Let \( u:=u_{\lambda }> 0 \) be the stable minimal solution of Problem (7). Taking \( \phi =m(u) \) in the stability inequality (8), and noting that \( |\nabla m(u)|^2=m'(u)^2|\nabla u|^2=\nabla M(u) \cdot \nabla u \), we obtain:

$$\begin{aligned} \int _{\Omega } \nabla M(u) \cdot \nabla u ~ dx \ge \lambda \int _{\Omega } e^{G(u)} h'(u) m(u)^2 ~ dx \quad \Longrightarrow \quad \int _{\Omega } (- \Delta u) M(u) ~ dx \ge \lambda \int _{\Omega } e^{G(u)} h'(u) m(u)^2 ~ dx, \end{aligned}$$

where \( M(s):=\int _{0}^{s} m'(t)^2 dt \). Then, according to Eq  (7), we obtain:

$$\begin{aligned} \int _{\Omega } (\lambda e^{G(u)} h(u) - g(u) |\nabla u |^2) M(u) ~ dx \ge \lambda \int _{\Omega } e^{G(u)} h'(u) m^2(u) ~ dx. \end{aligned}$$
(13)

On the other hand, u is a weak solution of (7), and hence:

$$\begin{aligned} \int _{\Omega } \nabla u \cdot \nabla \psi (x) ~ dx + \int _{\Omega } g(u) |\nabla u |^2 \psi (x) ~ dx = \lambda \int _{\Omega } e^{G(u)} h(u) \psi (x) ~ dx, \end{aligned}$$

for every \( \psi \in {W_{0}^{1,2}(\Omega )} \). Set \( \psi (x) :=M(u(x)) - e^{-G(u(x))}\int _0^{u(x)} e^{G(t)} m'(t)^2 dt \), \(x\in \Omega \). Then, notice that we have \(\psi (x)=0\) when u(x) is near zero (by the assumption on the function m) and:

$$\begin{aligned} \nabla \psi =g(u)e^{-G(u)}\Big (\int _0^u e^{G(t)} m'(t)^2 dt \Big )\nabla u. \end{aligned}$$

Therefore, since \(\nabla \psi (x)=0\) when u(x) is near zero and g is continuous in \((0,\infty )\), we get \(\psi \in W^{1,2}_0(\Omega )\). Now, we substitute \(\psi \) in the above equality as a test function to get:

$$\begin{aligned}&\int _\Omega g(u)e^{-G(u)}\Big (\int _0^u e^{G(t)} m'(t)^2 dt \Big )|\nabla u|^2 dx+\int _\Omega g(u)M(u)|\nabla u|^2 dx\\&\qquad -\int _\Omega g(u)e^{-G(u)}\Big (\int _0^u e^{G(t)} m'(t)^2 dt\Big )|\nabla u|^2 dx\\&\quad = \lambda \int _{\Omega } e^{G(u)} h(u)M(u)~dx -\lambda \int _\Omega h(u) \Big (\int _0^u e^{G(t)} m'(t)^2dt\Big ) dx. \end{aligned}$$

Canceling the first and the third terms on the left of the equality above, we obtain:

$$\begin{aligned} \int _{\Omega } \Big (\lambda e^{G(u)} h(u) - g(u) |\nabla u |^2\Big ) M(u) ~ dx =\lambda \int _\Omega h(u) \Big (\int _0^u e^{G(t)} m'(t)^2dt\Big ) dx.\nonumber \\ \end{aligned}$$
(14)

Using (14) in (13) and dividing by \( \lambda > 0 \), we then arrive at:

$$\begin{aligned} \int _{\Omega } e^{G(u)} h'(u) m^2(u) ~ dx- \int _{\Omega } h(u)\Big (\int _0^u e^{G(t)} m'(t)^2 dt\Big ) dx \le 0. \end{aligned}$$
(15)

Now, by the definition of the function K given in (12), the inequality (15) can be read as:

$$\begin{aligned} \int _{\Omega } K(u) ~ dx \le 0. \end{aligned}$$
(16)

Now, by the assumption (12), there is an \( s_{0} > 0 \) such that \( K(s) \ge 0 \) for \( s \ge s_{0} \), and then, by (16), we can write:

$$\begin{aligned} \int _{\Omega } | K(u) | dx= & {} \int _{u \le s_{0}} | K(u) | ~ dx + \int _{u \ge s_{0} } K(u) ~ dx \\\le & {} \int _{u \le s_{0}} \Big ( |K(u)| - K(u) \Big ) ~ dx \le C_{0} |\Omega |, \end{aligned}$$

where \( | \Omega | \) is the Lebesgue measure of \( \Omega \) and \( C_{0} := \sup _{s\in {[0,s_{0}]}} \Big (| K(s) | - K(s)\Big ) \), which is independent of u. This proves the desired result. \(\square \)

Proposition 1

Let \( u_{\lambda } \) be the stable solution of (7) and \( w: [0, \infty ) \longrightarrow [0, \infty ) \) be a \( C^1 \) function such that, for some \( t_{0} > 0 \), we have \( w(t) \le \frac{h'(t)}{h(t)} \) and \( w^2 (t) +w'(t)+ g(t) w(t) \ge 0 \) for \( t \ge t_{0} \). Define:

$$\begin{aligned} E(t):= h(t) \left( \frac{h'(t)}{h(t)} -w(t) \right) e^{G(t)} e^{2\int _{t_{0}}^{t} w(s)+\sqrt{w^2 (s) +w'(s)+ g(s) w(s) } ds}, \end{aligned}$$
(17)

and assume that \( \dfrac{E(t)}{h(t)} \longrightarrow \infty \) as \( t \longrightarrow \infty \). Then, \( \Vert E(u_{\lambda })\Vert _{L^{1}(\Omega )} \le C\), where C is a constant independent of \(\lambda \).

Proof

Let \( m: [0, \infty ) \longrightarrow [0, \infty ) \) be a \( C^1 \) function which vanishes in a neighborhood of zero and satisfies:

$$\begin{aligned} m(t)= e^{\int _{t_{0}}^t w(s)+\sqrt{w^2(s) +w'(s)+ g(s) w(s)} ds}, \quad \text {for } t \ge t_{0}, \end{aligned}$$

where \(t_0\) and w are as given in the statement of the proposition. Then, using the equality:

$$\begin{aligned} m'(t)= m(t)\Big [w(t)+\sqrt{w^2(t) +w'(t)+ g(t) w(t) }\Big ], \quad \text {for } t \ge t_{0}, \end{aligned}$$

we obtain:

$$\begin{aligned} \begin{array}{ll} &{}\displaystyle \frac{d}{dt} \left( w(t) m^2(t) e^{G(t)} - \int _{t_{0}}^{t} e^{G(s)} {m'(s)}^{2} ds \right) =\\ {} &{}\displaystyle m^2(t) e^{G(t)} \left( w'(t)+2 \frac{m'(t)}{m(t)} w(t) + w(t) g(t) - (\frac{m'(t)}{m(t)})^2 \right) =0,\end{array} \end{aligned}$$

for all \( t \ge t_{0} \). It follows that:

$$\begin{aligned} \int _{t_{0}}^{t} e^{G(s)} {m'(s)}^{2} ds = w(t) m(t)^2 e^{G(t)} + C_{0}, \end{aligned}$$
(18)

where \( C_{0} \) is a constant. Now, by (18), for \( t \ge t_{0} \), we have:

$$\begin{aligned} K(t):= e^{G(t)} m(t)^2 h'(t) - h(t) \int _{0}^{t} e^{G(s)} m'(s)^2 ds= E(t)- C_{0} h(t), \end{aligned}$$
(19)

which is positive for t sufficiently large (by the assumption that \( E(t)/h(t) \longrightarrow \infty \)), and hence, by Lemma 1, we get \( \Vert K(u_{\lambda })\Vert _{L^{1}(\Omega )} \le C_1\), where \(C_1\) is a constant independent of \(\lambda \). Since, by the assumption and (19), we have:

$$\begin{aligned} K(t)=E(t)[1-C_0 \frac{h(t)}{E(t)}]\ge \frac{E(t)}{2} ~~\text {for }t\text { sufficiently large,} \end{aligned}$$

we get also that \(\Vert E(u_{\lambda })\Vert _{L^{1}(\Omega )} \le C_2\), where \(C_2\) is a constant independent of \(\lambda \), which is the desired result. \(\square \)
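As an aside (not needed for the proof), the vanishing of the derivative computed above can be checked symbolically. A minimal SymPy sketch, in which the substitutions \( m'(t)= m(t)\big [w(t)+\sqrt{w^2(t) +w'(t)+ g(t) w(t)}\big ] \) and \( G'(t)=g(t) \) encode the definitions of m and G:

```python
import sympy as sp

t = sp.symbols('t')
w, g, G, m = (sp.Function(name)(t) for name in ('w', 'g', 'G', 'm'))

R = sp.sqrt(w**2 + w.diff(t) + g * w)     # the square root appearing in the exponent
m_prime = m * (w + R)                     # m'(t), from the definition of m

# d/dt [ w m^2 e^G ] minus the integrand e^G m'^2 of the integral term
expr = sp.diff(w * m**2 * sp.exp(G), t) - sp.exp(G) * m_prime**2
expr = expr.subs({m.diff(t): m_prime, G.diff(t): g})
print(sp.simplify(expr))                  # expected output: 0
```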

2.1 Proof of Theorem 1

We now give the proof of our main result. The strategy is to apply Proposition 1 with a suitable function w, chosen in terms of the nonlinearity h, so that the corresponding function E defined by (17) is comparable to some power of h. This yields \(L^p\) estimates for \(e^{G(u_\lambda )} h(u_\lambda )\) (the right-hand side of Eq. (7)) which are independent of \(\lambda \), and the conclusion then follows from the standard regularity result based on Stampacchia's lemma [38].

Assume that \( \alpha _{+}, \beta _{+} < \infty \) and \( 2 \alpha _{-} + \beta _{-} \ge 1 \). Take arbitrary \( \alpha _{1}, \alpha _{2}, \alpha _3, \beta _{1}, \beta _{2}\), such that \( \alpha _{1}< \alpha _{2}< \alpha _{-} \le \alpha _{+} < \alpha _{3} \) and \( \beta _{1}< \beta _{-} \le \beta _{+} < \beta _{2} \). Then, by the definition of \( \alpha _{\pm }\) and \(\beta _{\pm }\), we can find a \( t_{0} > 0 \), so that for \( t \ge t_{0} \):

$$\begin{aligned} \alpha _{1}< \alpha _{2}< \frac{h'(t) H(t)}{h^2(t)}< \alpha _{3} \quad \text {and} \quad \beta _{1}< \frac{g(t) H(t)}{h(t)} < \beta _{2}. \end{aligned}$$
(20)

Let \( w : [0, \infty ) \longrightarrow [0, \infty ) \) be a \( C^1 \) function, such that \( w(t)= \alpha _{1} \frac{h(t)}{H(t)} \) for \( t \ge t_{0} \), where \( H(t)= \int _{0}^{t} h(s) ds \) as before. From (20), we have:

$$\begin{aligned} g(t)> \frac{\beta _{1}}{\alpha _{1}} w(t) \quad \text{ and }\quad \frac{h'(t)}{H(t)}> \alpha _1 \frac{{h(t)}^2}{{H(t)}^2} . \end{aligned}$$

Thus, using these inequalities and the definition of w, we obtain:

$$\begin{aligned}&w^{2}(t) +w'(t)+ g(t) w(t) \ge \alpha _{1} \left[ \frac{h'(t)}{H(t)} + (\alpha _{1}+\beta _{1}-1) \frac{{h(t)}^2}{{H(t)}^2}\right] \nonumber \\&\quad \ge \alpha _1 (2 \alpha _{1} + \beta _{1} -1) \frac{{h(t)}^2}{{H(t)}^2} \end{aligned}$$
(21)

for \( t \ge t_0 \).

The inequalities in (20) imply that:

$$\begin{aligned} g(t)\ge \frac{\beta _1}{\alpha _3} \frac{h'(t)}{h(t)}\quad \text{ and }\quad \frac{h'(t)}{h(t)}\ge \alpha _2 \frac{h(t)}{H(t)}\quad \text{ for } t\ge t_0; \end{aligned}$$

hence:

$$\begin{aligned} e^{G(t)} \ge C {h(t)}^{\frac{\beta _{1}}{\alpha _{3}}} \quad \text {and} \quad \frac{h'(t)}{h(t)} -w(t) \ge (\alpha _{2} - \alpha _{1}) \frac{h(t)}{H(t)} \end{aligned}$$
(22)

for \( t \ge t_{0} \).

Let the function E(t) be given as in (17) in Proposition 1. By the inequalities (20), (21), (22) and the fact that \( \int _{t_{0}}^{t} w(s) ds= \alpha _{1} (\ln H(t) - \ln H(t_{0})) \), we obtain:

$$\begin{aligned} \begin{array}{ll} E(t)&{}\displaystyle = h(t) \left( \frac{h'(t)}{h(t)} -w(t) \right) e^{G(t)} e^{2\int _{t_{0}}^{t} w(s)+\sqrt{w^2(s) +w'(s)+ g(s) w(s) } ds}\\ &{}\displaystyle \ge C h^{2+\frac{\beta _{1}}{\alpha _{3}}} H^{2\alpha _{1}+2\sqrt{\alpha _1 (2\alpha _1 + \beta _1 -1)}-1},\end{array} \end{aligned}$$
(23)

where C is a positive constant which depends only on h.

Now, writing the first inequality in (20) as \( \frac{h'(t)}{h(t)} < \alpha _{3} \frac{h(t)}{H(t)} \) for \( t \ge t_{0} \) and integrating from \( t_{0} \) to t, we obtain:

$$\begin{aligned} H(t) > C {h(t)}^{\frac{1}{\alpha _{3}}} \quad \text {for } t \ge t_0 . \end{aligned}$$
(24)

Using the above inequality in (23), we arrive at:

$$\begin{aligned} E(t) \ge C {h(t)}^{\gamma }, \quad \text {where} \quad \gamma :=2 \left( 1+\dfrac{\sqrt{\alpha _1(2\alpha _{1}+\beta _{1}-1)}}{\alpha _{3}} + \frac{2\alpha _{1}+\beta _{1}-1}{2\alpha _{3}} \right) ,\nonumber \\ \end{aligned}$$
(25)

for all \( t \ge t_0 \).
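As a quick check of the exponent bookkeeping (an aside, not part of the proof), substituting the lower bound \( H > C {h}^{1/\alpha _3} \) from (24) into (23) indeed produces the exponent \( \gamma \) of (25); a short SymPy sketch:

```python
import sympy as sp

a1, a3, b1 = sp.symbols('alpha1 alpha3 beta1', positive=True)

exp_h = 2 + b1 / a3                                        # exponent of h in (23)
exp_H = 2 * a1 + 2 * sp.sqrt(a1 * (2 * a1 + b1 - 1)) - 1   # exponent of H in (23)
gamma = 2 * (1 + sp.sqrt(a1 * (2 * a1 + b1 - 1)) / a3 + (2 * a1 + b1 - 1) / (2 * a3))

print(sp.simplify(exp_h + exp_H / a3 - gamma))             # expected output: 0
```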

Since \( \dfrac{E(t)}{h(t)} \longrightarrow \infty \) as \( t \longrightarrow \infty \), Proposition 1 gives \( \Vert E(u_{\lambda })\Vert _{L^{1}(\Omega )} \le C\). Next, by (25), we deduce that \( \Vert h(u_{\lambda })\Vert _{L^{\gamma }(\Omega )} \le C \), where C is a constant independent of \(\lambda \). On the other hand, by (20), we have \(g(t)\le \frac{\beta _2}{\alpha _1}\frac{h'(t)}{h(t)}\) for \(t\ge t_0\). Thus, by integration over \([t_0,t]\), we obtain:

$$\begin{aligned} e^{G(t)}h(t) \le C {h(t)}^{\frac{\alpha _1+\beta _{2}}{\alpha _{1}} }~~\text {for}~t\ge t_0. \end{aligned}$$
(26)

Therefore:

$$\begin{aligned} \Vert e^{G(u_\lambda )}h(u_\lambda )\Vert _{L^{\mu }(\Omega )} \le C \end{aligned}$$

for

$$\begin{aligned} \mu :=\frac{\alpha _1}{\alpha _1+\beta _2}\gamma =\frac{ 2 \alpha _{1}}{\alpha _{1} + \beta _{2}} \left( 1+\dfrac{\sqrt{\alpha _1 (2\alpha _{1}+\beta _{1}-1)}}{\alpha _{3}} + \dfrac{2\alpha _{1}+\beta _{1}-1}{2 \alpha _{3}} \right) , \end{aligned}$$

where C is a constant independent of \(\lambda \).

Note that \(e^{G(u_\lambda )}h(u_\lambda )\) is the right-hand side of Eq. (7), and therefore, by Stampacchia's lemma [38], we obtain \( u^* \in {L^{\infty }(\Omega )} \) for \( N < 2 \mu \) (that is, whenever \( \mu > N/2 \)). Since \( \alpha _{1}, \alpha _2, \alpha _{3}, \beta _{1}, \beta _{2}\) were arbitrary (in the given ranges), we conclude that \( u^* \in {L^{\infty }(\Omega )} \) for:

$$\begin{aligned} N < \dfrac{4 \alpha _{-}}{\alpha _{-} + \beta _{+}} \left( 1+ \dfrac{\sqrt{\alpha _- (2\alpha _{-}+\beta _{-}-1)}}{\alpha _{+}} + \dfrac{2\alpha _{-}+\beta _{-}-1}{ 2\alpha _{+}} \right) , \end{aligned}$$

which is the desired result in the first part of the theorem.

To complete the proof, we assume that \( \alpha _{-} + \beta _{-} \le 1 \). Thus, by the above notation and by the first inequality in (20), we obtain:

$$\begin{aligned} \begin{array}{ll} \displaystyle w^{2}(t) +w'(t)+ g(t) w(t) &{}\displaystyle \ge \alpha _{1} \left[ \frac{h'(t)}{H(t)} - (1-\alpha _{1}-\beta _{1}) \frac{h^{2}(t)}{H^2(t)}\right] \\ &{}\displaystyle \ge (2 \alpha _{1} + \beta _{1} -1) \frac{h'(t)}{H(t)} \ge \left( \frac{2 \alpha _{1} + \beta _{1} -1}{\alpha _{3}}\right) \frac{{h'(t)}^{2}}{{h(t)}^{2}}\end{array} \end{aligned}$$
(27)

for all \( t \ge t_0 \).

With the function E(t) given as in (17) in Proposition 1, relation (27) together with estimates (20) and (22) yields:

$$\begin{aligned} \begin{array}{ll} E(t)&{}\displaystyle = h(t) \left( \frac{h'(t)}{h(t)} -w(t) \right) e^{G(t)} e^{2\int _{t_{0}}^{t} w(s)+\sqrt{w^2(s) +w'(s)+ g(s) w(s) } ds}\\ &{}\displaystyle \ge C h^{2+\frac{\beta _1}{\alpha _3}+2\sqrt{\frac{2\alpha _{1}+\beta _{1}-1}{\alpha _{3}}}} H^{2\alpha _{1}-1}. \end{array} \end{aligned}$$

Using now (24), we deduce that:

$$\begin{aligned} E(t)\ge C h^{2+\frac{2\alpha _1+\beta _1-1}{\alpha _3}+2\sqrt{\frac{2\alpha _{1}+\beta _{1}-1}{\alpha _{3}}}}, \end{aligned}$$

for \(t\ge t_0\), where C is a positive constant depending only on h.

Next, arguing as in the first part and using the above inequality together with relations (24), (26), and Proposition 1, we get:

$$\begin{aligned}&\Vert e^{G(u_\lambda )}h(u_\lambda )\Vert _{L^{\theta }(\Omega )} \le C \quad \text {for} \quad \theta \\&\quad :=\frac{ 2 \alpha _{1}}{\alpha _{1} + \beta _{2}} \left( 1+\sqrt{\frac{2 \alpha _{1} + \beta _{1} -1}{\alpha _{3}}} + \dfrac{2\alpha _{1}+\beta _{1}-1}{2 \alpha _{3}} \right) , \end{aligned}$$

where C is a constant independent of \(\lambda \), which implies that \( u^* \in {L^{\infty }(\Omega )} \) for \( N < 2 \theta \). Again, since \( \alpha _{1}, \alpha _2, \alpha _{3}, \beta _{1}, \beta _{2}\) were arbitrary (in the given ranges), we get \( u^* \in {L^{\infty }(\Omega )} \) for:

$$\begin{aligned} N < \dfrac{4 \alpha _{-}}{\alpha _{-} + \beta _{+}} \left( 1+\sqrt{\frac{2 \alpha _{-} + \beta _{-} -1}{\alpha _{+}}} + \dfrac{2\alpha _{-}+\beta _{-}-1}{2 \alpha _{+}} \right) , \end{aligned}$$

which completes the proof of the theorem. \(\square \)