1 Introduction and Statement of the Results

In Euclidean space \({\mathbb {R}}^3\), the theory of self-shrinkers, and to a lesser extent also that of self-expanders, has attracted great interest in the last decades. Self-shrinkers are surfaces M characterized by the equation

$$\begin{aligned} H({\mathbf {x}})=-\frac{1}{2}\langle N({\mathbf {x}}),{\mathbf {x}}\rangle ,\quad {\mathbf {x}}\in M, \end{aligned}$$
(1.1)

where N is the Gauss map of M and \(\langle ,\rangle \) is the Euclidean metric of \({\mathbb {R}}^3\). Here, H is the trace of the second fundamental form, so the mean curvature of a sphere of radius \(r>0\) is 2/r with respect to the inward normal. Analogously, self-expanders satisfy (1.1) with the factor \(-1/2\) replaced by 1/2. Self-shrinkers play an important role in the study of the mean curvature flow because they correspond to solutions of the flow that move by rescalings (homotheties) of an initial time slice. Moreover, self-shrinkers provide information about the behaviour of the singularities of the flow. The literature on self-shrinkers is too extensive to summarize here; we refer the reader to [8, 10, 15] and the references therein as a first approach.

There are very few explicit examples of self-shrinkers. The first examples are planes through the origin, the sphere of radius 2 centered at the origin, and the round cylinder of radius \(\sqrt{2}\) whose axis passes through the origin. Other examples appear when one assumes some type of invariance of the surface under isometries of the ambient space. A first family consists of the surfaces that are invariant under a uniparametric group of translations. In such a case, Eq. (1.1) reduces to an ordinary differential equation that describes the curvature of the generating planar curve [1, 2, 13, 16]. A second type of surfaces are the helicoidal surfaces, including rotational surfaces. Rotational and helicoidal self-shrinkers were studied in [14, 16].

Self-shrinkers can also be seen as weighted minimal surfaces in the context of manifolds with density: see [11, 18]. Let \(e^\varphi \) be a positive density in \({\mathbb {R}}^3\), where \(\varphi \) is a smooth function in \({\mathbb {R}}^3\). We use the density \(e^\varphi \) as a weight for the surface area and the enclosed volume. Let M be a surface and let \(\Phi :(-\epsilon ,\epsilon )\times M\rightarrow {\mathbb {R}}^3\) be a compactly supported variation of M with \(\Phi (0,\cdot )=M\). Denote by \(A_\varphi (t)\) and \(V_\varphi (t)\) the weighted area and the enclosed weighted volume of \(\Phi (\{t\}\times M)\), respectively. The first variation formulae of \(A_\varphi (t)\) and \(V_\varphi (t)\) are

$$\begin{aligned} A'_\varphi (0)=-\int _M H_\varphi \langle N,\xi \rangle \ \mathrm{d}A_\varphi ,\quad V_\varphi '(0)=\int _M \langle N,\xi \rangle \ \mathrm{d}A_\varphi , \end{aligned}$$

where \(\xi \) is the variational vector field of \(\Phi \) and

$$\begin{aligned} H_\varphi =H-\langle N,\nabla \varphi \rangle \end{aligned}$$

is called the weighted mean curvature. Consequently, M is a critical point of the weighted area functional \(A_\varphi \) if and only if \(H_\varphi =0\). If we choose the function \(\varphi \) as

$$\begin{aligned} \varphi ({\mathbf {x}})=\alpha \frac{|{\mathbf {x}}|^2}{2},\quad {\mathbf {x}}\in {\mathbb {R}}^3, \end{aligned}$$
(1.2)

the expression of \(H_\varphi \) is \(H_\varphi =H({\mathbf {x}})-\alpha \langle N,{\mathbf {x}}\rangle \). In particular, self-shrinkers are critical points of the weighted area functional \(A_\varphi \) for \(\alpha =-1/2\). If we seek critical points of \(A_\varphi \) for arbitrary variations preserving the weighted volume, we deduce that the function \(H_\varphi \) is constant. After this motivation, and for the function \(\varphi \) given in (1.2), we generalize the notion of self-shrinker.

Definition 1.1

Let \(\alpha ,\lambda \in {\mathbb {R}}\). A surface M in \({\mathbb {R}}^3\) is said to be an \(\alpha \)-self-similar solution of constant \(\lambda \) if

$$\begin{aligned} H({\mathbf {x}})=\alpha \langle N({\mathbf {x}}),{\mathbf {x}}\rangle +\lambda ,\quad {\mathbf {x}}\in M. \end{aligned}$$
(1.3)

The case \(\alpha =0\) corresponds to the surfaces of constant mean curvature. This situation is discarded in this paper and we will assume \(\alpha \not =0\). Examples of solutions of Eq. (1.3) are again spheres centered at the origin and round cylinders whose axis passes through the origin, but now, in both cases, the radius is arbitrary. Affine planes are also solutions of (1.3).

When \(\alpha =-1/2\) in Eq. (1.3), self-shrinkers of constant \(\lambda \) were studied independently by Cheng and Wei [7] and by McGonagle and Ross [17]. Since then, and for \(\alpha =-1/2\), these surfaces have attracted the interest of geometers: see [4,5,6, 12, 19, 20].

Let us point out that Eq. (1.3) is invariant under linear isometries of \({\mathbb {R}}^3\). Therefore, if \(A:{\mathbb {R}}^3\rightarrow {\mathbb {R}}^3\) is a linear isometry and M is an \(\alpha \)-self-similar solution of constant \(\lambda \), then A(M) satisfies (1.3) with the same constants \(\alpha \) and \(\lambda \). We also notice that a surface can be a solution of (1.3) for different values of \(\alpha \) and \(\lambda \). For example, the sphere of radius 2 centered at the origin satisfies (1.3) for \((\alpha ,\lambda )=(-1/2,0)\) and for \((\alpha ,\lambda )=(1/2,2)\).
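
These examples can be checked directly from the conventions fixed above (H is computed with respect to the inward normal, so that \(\langle N,{\mathbf {x}}\rangle =-r\) on a sphere or cylinder of radius r centered at, or with axis through, the origin). The following snippet is only a numerical sanity check of Eq. (1.3) for these examples; the helper functions are ours, not part of any library.

```python
# Numerical sanity check of Eq. (1.3) for spheres and cylinders centered at the origin,
# under the conventions of the paper: H is the trace of the second fundamental form with
# respect to the inward normal, so H = 2/r (sphere) and H = 1/r (cylinder), and <N, x> = -r.

def is_solution_sphere(r, alpha, lam, tol=1e-12):
    H = 2.0 / r
    N_dot_x = -r
    return abs(H - (alpha * N_dot_x + lam)) < tol

def is_solution_cylinder(r, alpha, lam, tol=1e-12):
    H = 1.0 / r
    N_dot_x = -r
    return abs(H - (alpha * N_dot_x + lam)) < tol

print(is_solution_sphere(2.0, -0.5, 0.0))           # True: the self-shrinker sphere
print(is_solution_sphere(2.0, 0.5, 2.0))            # True: same sphere with (alpha, lambda) = (1/2, 2)
print(is_solution_cylinder(2.0 ** 0.5, -0.5, 0.0))  # True: the self-shrinker cylinder
print(is_solution_sphere(3.0, -0.5, 2.0 / 3.0 - 1.5))  # True: arbitrary radius, lambda = 2/r + alpha*r
```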

In this paper, we investigate \(\alpha \)-self-similar solutions of constant \(\lambda \) under the geometric assumption that M is a ruled surface. A ruled surface is a surface that is the union of a one-parameter family of straight lines. A ruled surface can be parametrized locally by

$$\begin{aligned} X(s,t)=\gamma (s)+t\beta (s), \end{aligned}$$
(1.4)

where \(t\in {\mathbb {R}}\) and \(\gamma ,\beta :I\subset {\mathbb {R}}\rightarrow {\mathbb {R}}^3\) are smooth curves with \(|\beta (s)|=1\) for all \(s\in I\). The curve \(\gamma (s)\) is called the directrix of the surface, and a line having \(\beta (s)\) as direction vector is called a ruling of the surface. In case \(\gamma \) reduces to a point, the surface is called conical. On the other hand, if the rulings are all parallel to a fixed direction (\(\beta (s)\) is constant), the surface is called cylindrical. It is clear that a ruled surface is cylindrical if and only if it is invariant under a uniparametric group of translations whose direction is \(\beta \).

In this paper, we classify all ruled surfaces that are solutions of the \(\alpha \)-self-similar equation (1.3).

Theorem 1.2

Let M be an \(\alpha \)-self-similar solution of constant \(\lambda \). If M is a ruled surface, then M is a cylindrical surface.

This result was proved in [3] for self-shrinkers. Cylindrical surfaces with \(\alpha =-1/2\) and \(\lambda \not =0\) were classified in [4]. The key point in the proof of Theorem 1.2 is that, by means of the parametrization (1.4), Eq. (1.3) becomes a polynomial equation in the variable t whose coefficients are functions of the variable s. Thus, all these coefficients must vanish, and this allows us to prove the result. The proof of Theorem 1.2 will be carried out in Sect. 2.

Our second result refers to the study of the solutions of (1.3) by the method of separation of variables. We denote by (x, y, z) the canonical coordinates of \({\mathbb {R}}^3\). Let M be a graph \(z=u(x,y)\), where u is a function defined in some domain of \({\mathbb {R}}^2\). If M is an \(\alpha \)-self-similar solution of constant \(\lambda \), then u is a solution of

$$\begin{aligned} \text{ div }\frac{Du}{\sqrt{1+|Du|^2}}=\alpha \,\frac{u-\langle (x,y),Du\rangle }{\sqrt{1+|Du|^2}}+\lambda . \end{aligned}$$
(1.5)
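
The reduction of (1.3) to (1.5) for graphs can be verified symbolically. The following sketch, using SymPy (an assumption of ours, not part of the paper), computes the mean curvature of the graph \(z=u(x,y)\) from its fundamental forms with the upward unit normal and compares it with the terms of (1.5).

```python
# Symbolic sketch: for a graph z = u(x,y), the mean curvature (trace of the second
# fundamental form, upward normal) is div(Du/sqrt(1+|Du|^2)), and <N, X> gives the second
# quotient in (1.5). Both identities are checked with SymPy.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.Function('u')(x, y)

X = sp.Matrix([x, y, u])
Xx, Xy = X.diff(x), X.diff(y)
E, F, G = Xx.dot(Xx), Xx.dot(Xy), Xy.dot(Xy)
W = sp.simplify(E*G - F**2)                    # 1 + u_x^2 + u_y^2

N = Xx.cross(Xy) / sp.sqrt(W)                  # upward unit normal (-u_x, -u_y, 1)/sqrt(W)
e, f, g = N.dot(Xx.diff(x)), N.dot(Xx.diff(y)), N.dot(Xy.diff(y))
H = (e*G - 2*f*F + g*E) / W                    # mean curvature of the graph

div_term = sp.diff(u.diff(x)/sp.sqrt(W), x) + sp.diff(u.diff(y)/sp.sqrt(W), y)

# 0: H coincides with the divergence term of (1.5) (difference cleared of denominators)
print(sp.simplify(sp.expand((H - div_term) * W**sp.Rational(3, 2))))
# 0: <N, X> equals (u - <(x,y), Du>)/sqrt(1+|Du|^2)
print(sp.simplify(N.dot(X) - (u - x*u.diff(x) - y*u.diff(y))/sp.sqrt(W)))
```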

Equation (1.5) is a quasilinear elliptic equation and, as one can expect from the theory of minimal surfaces, it is hard to find explicit solutions of (1.5). A first approach to solving this equation is the method of separation of variables. The idea is to replace the function u(x, y) by a sum of two functions, each depending on one variable. Thus, we consider \(u(x,y)= f(x)+g(y)\), where \(f:I\subset {\mathbb {R}}\rightarrow {\mathbb {R}}\) and \(g:J\subset {\mathbb {R}}\rightarrow {\mathbb {R}}\) are smooth functions. In such a case, we prove the following result.

Theorem 1.3

If \(z=f(x)+g(y)\) is an \(\alpha \)-self-similar solution of constant \(\lambda \), then f or g is a linear function. In particular, the surface is cylindrical. Moreover, after a linear isometry of \({\mathbb {R}}^3\), we have \(g(y)=0\) and f(x) satisfies the equation

$$\begin{aligned} \frac{f''(x)}{(1+f'(x)^2)^{3/2}}=\alpha \frac{-xf'(x)+f(x)}{\sqrt{1+f'(x)^2}}+\lambda . \end{aligned}$$
(1.6)

The proof of this result will be done in Sect. 3. Since the function u(x, y) is the sum of two functions of one variable, Eq. (1.5) ceases to be a partial differential equation and becomes an ordinary differential equation involving the derivatives of the functions f and g. This will permit us, after successive differentiations, to deduce that one of the functions is linear.

2 Classification of Ruled Surfaces

In this section, we prove Theorem 1.2. The proof consists in assuming that the ruled surface is parametrized by (1.4) and that the rulings are not all parallel. In such a case, we shall prove that an \(\alpha \)-self-similar solution of constant \(\lambda \) must be a plane, which is a cylindrical surface. Let us observe that a plane is a ruled surface that can be parametrized by (1.4) with \(\beta \) a non-constant curve.

On the other hand, the cylindrical surfaces that satisfy (1.3) are the one-dimensional version of the \(\alpha \)-self-similar solutions. Indeed, after a linear isometry of the ambient space, we may assume that the rulings are parallel to the y-axis. We parametrize the surface as \(X(s,t)=\gamma (s)+t(0,-1,0)\), where \(\gamma \) is a curve contained in the xz-plane \(\Pi \) and parametrized by arc-length. Then, (1.3) is

$$\begin{aligned} \kappa _\gamma (s)=\alpha \langle {\mathbf {n}}(s),\gamma (s)\rangle +\lambda , \end{aligned}$$
(2.1)

where \(\kappa _\gamma \) is the curvature of \(\gamma \) as a planar curve in \(\Pi \) and \(\{\gamma '(s),{\mathbf {n}}(s)\}\) is a positively oriented orthonormal frame in \(\Pi \) for all \(s\in I\). Therefore, finding \(\alpha \)-self-similar solutions becomes a problem of prescribed curvature for planar curves.
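
As a quick illustration (not part of the original argument), the following SymPy sketch checks (2.1) on a circle of radius r centered at the origin, run counterclockwise with the positively oriented frame \(\{\gamma ',{\mathbf {n}}\}\): the equation forces \(\lambda =1/r+\alpha r\), and for \(\alpha =-1/2\) the radius \(r=\sqrt{2}\) gives \(\lambda =0\), the generating circle of the self-shrinking cylinder.

```python
# Symbolic check of (2.1) for an arc-length parametrized circle of radius r centered at
# the origin. With the positively oriented frame {gamma', n}, n is the inward normal,
# kappa = 1/r, and Eq. (2.1) forces lambda = 1/r + alpha*r.
import sympy as sp

s, r = sp.symbols('s r', positive=True)
alpha = sp.symbols('alpha', real=True)
gamma = r * sp.Matrix([sp.cos(s/r), sp.sin(s/r)])      # arc-length parametrization
T = gamma.diff(s)
n = sp.Matrix([-T[1], T[0]])                           # rotate T by +90 degrees: positive frame
kappa = n.dot(gamma.diff(s, 2))                        # gamma'' = kappa * n

lam = sp.simplify(kappa - alpha * n.dot(gamma))        # the lambda imposed by (2.1)
print(lam)                                             # alpha*r + 1/r
print(sp.simplify(lam.subs({r: sp.sqrt(2), alpha: -sp.Rational(1, 2)})))  # 0
```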

Consider a ruled surface parametrized by \(X(s,t)=\gamma (s)+t\beta (s)\) as in (1.4), with \(|\beta (s)|=1\), and suppose that \(\beta \) is not a constant curve. Since \(|\beta (s)|=1\), \(\beta \) is a curve in the unit sphere \({\mathbb {S}}^2=\{{\mathbf {x}}:|{\mathbf {x}}|=1\}\). Without loss of generality, we assume that \(\beta \) is parametrized by arc-length, \(|\beta '(s)|=1\) for all \(s\in I\). From now on, we drop the dependence on the variable s of the functions. Let us take the so-called Sabban frame for spherical curves, namely, \({\mathcal {B}}=\{\beta ,\beta ',e_3:=\beta \times \beta '\}\). Furthermore

$$\begin{aligned} \begin{aligned} \beta ''&=-\beta +\Theta \, e_3 ,\quad \Theta =(\beta ,\beta ',\beta '').\\ e_3'&=-\Theta \beta '. \end{aligned} \end{aligned}$$
(2.2)

Here, (u, v, w) stands for the determinant of the vectors \(u,v,w\in {\mathbb {R}}^3\).
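
The relations (2.2) can be tested symbolically on any concrete spherical curve. The sketch below (SymPy; the test curve, a circle of latitude, is our choice and not from the paper) verifies both identities.

```python
# Symbolic check of the Sabban frame relations (2.2) on a concrete test curve: the
# arc-length parametrized circle of latitude beta(s) = (c cos(s/c), c sin(s/c), sqrt(1-c^2)),
# 0 < c < 1. Any arc-length spherical curve would do.
import sympy as sp

s = sp.symbols('s', real=True)
c = sp.Rational(3, 5)
beta = sp.Matrix([c*sp.cos(s/c), c*sp.sin(s/c), sp.sqrt(1 - c**2)])
b1, b2 = beta.diff(s), beta.diff(s, 2)
e3 = beta.cross(b1)
Theta = sp.simplify(sp.Matrix.hstack(beta, b1, b2).det())   # Theta = (beta, beta', beta'')

print(sp.simplify(b1.dot(b1)))                 # 1: arc-length parametrization
print(sp.simplify(b2 - (-beta + Theta*e3)))    # zero vector: beta'' = -beta + Theta*e_3
print(sp.simplify(e3.diff(s) + Theta*b1))      # zero vector: e_3' = -Theta*beta'
```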

First, we need to obtain an expression of Eq. (1.3) for the parametrization X(s, t). We denote by the subscripts s and t the derivatives of a function with respect to the variables s and t. Let us notice that \(X_t=\beta \) and \(X_{tt}=0\). The coefficients of the first fundamental form with respect to X are \(E=|X_s|^2\), \(F=\langle X_s,X_t\rangle \) and \(G=|X_t|^2=1\). Set \(W=EG-F^2\). Consider the unit normal vector field \(N=(X_s\times X_t)/\sqrt{W}\). Then, Eq. (1.3) is

$$\begin{aligned} (X_s,X_t,X_{ss})-2F(X_s,X_t,X_{st})=\alpha W ( X,X_{s}, X_t) +\lambda \, W^{3/2}. \end{aligned}$$
(2.3)

A first case to discuss is when X(s, t) is a conical surface.

Proposition 2.1

Planes are the only conical surfaces that are \(\alpha \)-self-similar solutions of constant \(\lambda \).

Proof

Suppose that M is a conical surface parametrized by \(X(s,t)=p_0+t\beta (s)\), where \(p_0\in {\mathbb {R}}^3\) is a fixed point. Then, \(F=0\), \(W=t^2\), and Eq. (2.3) is

$$\begin{aligned} t^2(\beta ',\beta ,\beta '')-\alpha t^3(p_0,\beta ',\beta )-\lambda t^3=0. \end{aligned}$$

This is a polynomial equation in the variable t, whose coefficients depend only on the variable s. Thus, we deduce \((\beta ,\beta ',\beta '')=0\) and \(\alpha (p_0,\beta ,\beta ')-\lambda =0\). Since \(\beta \) is a curve in the unit sphere \({\mathbb {S}}^2\) parametrized by arc-length, from \((\beta ,\beta ',\beta '')=0\) and (2.2) we obtain \(\beta ''=-\beta \), hence \(\beta \) is a great circle of \({\mathbb {S}}^2\). This proves that the surface is a plane containing the point \(p_0\), proving the result. \(\square \)
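
The reduction of (2.3) to this polynomial can also be checked symbolically. The sketch below (SymPy) uses the circle of latitude from the previous sketch as the spherical curve \(\beta \), a symbolic point \(p_0\), and \(t>0\) so that \(W^{3/2}=t^3\); the choice of \(\beta \) is only a test case.

```python
# Symbolic check of the conical case: for X = p0 + t*beta(s) with |beta| = |beta'| = 1,
# one has F = 0, W = t^2, and Eq. (2.3) becomes the polynomial displayed above (taking
# t > 0 so that W^(3/2) = t^3). beta is the test circle of latitude from the sketch above.
import sympy as sp

s, alpha, lam = sp.symbols('s alpha lambda', real=True)
t = sp.symbols('t', positive=True)
p0 = sp.Matrix(sp.symbols('p1 p2 p3', real=True))
c = sp.Rational(3, 5)
beta = sp.Matrix([c*sp.cos(s/c), c*sp.sin(s/c), sp.sqrt(1 - c**2)])
bp, bpp = beta.diff(s), beta.diff(s, 2)
det = lambda a, b, d: sp.Matrix.hstack(a, b, d).det()

X = p0 + t*beta
Xs, Xt = X.diff(s), X.diff(t)
Xss, Xst = Xs.diff(s), Xs.diff(t)
E, F, G = Xs.dot(Xs), Xs.dot(Xt), Xt.dot(Xt)
W = E*G - F**2

print(sp.simplify(F), sp.simplify(W))          # 0 and t**2
lhs = det(Xs, Xt, Xss) - 2*F*det(Xs, Xt, Xst) - alpha*W*det(X, Xs, Xt) - lam*t**3
poly = t**2*det(bp, beta, bpp) - alpha*t**3*det(p0, bp, beta) - lam*t**3
print(sp.simplify(lhs - poly))                 # 0: Eq. (2.3) is the displayed polynomial
```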

From now on, we assume that the ruled surface is not conical, that is, \(\gamma \) is not a constant curve. The next step in the proof of Theorem 1.2 is to choose a suitable parametrization of the ruled surface. For a ruled surface, it is always possible to take a (not unique) special parametrization obtained by choosing for \(\gamma \) a curve orthogonal to the rulings, that is, \(\langle \gamma '(s),\beta (s)\rangle =0\) for all \(s\in I\).

The derivatives of X with respect to s and t are

$$\begin{aligned}&X_s=\gamma '(s)+t\beta '(s),\quad X_t=\beta (s) \\&X_{ss}=\gamma ''(s)+t\beta ''(s),\quad X_{st}=\beta '(s),\quad X_{tt}=0. \end{aligned}$$

Then, \(F=\langle X_s,X_t\rangle =\langle \gamma ',\beta \rangle =0\), \(G=1\) and

$$\begin{aligned} E=\langle X_s,X_s\rangle =|\gamma '|^2+2t\langle \gamma ',\beta '\rangle +t^2. \end{aligned}$$
(2.4)

The unit normal vector field is

$$\begin{aligned} N=\frac{\gamma '\times \beta -t e_3}{\sqrt{E}}. \end{aligned}$$

Equation (2.3) is now

$$\begin{aligned} {\mathcal {L}}= \alpha E\left( (\gamma ',\beta ,\gamma )-t\langle e_3,\gamma \rangle \right) +\lambda E^{3/2}, \end{aligned}$$
(2.5)

where

$$\begin{aligned} {\mathcal {L}}=-(\beta ,\beta ',\beta '')t^2+t\left( (\beta ',\beta ,\gamma '')+(\gamma ',\beta ,\beta '')\right) +(\gamma ',\beta ,\gamma ''). \end{aligned}$$

Using (2.2), we write this equation as

$$\begin{aligned} {\mathcal {L}}=-\Theta \, t^2-t\left( \langle e_3,\gamma ''\rangle +\Theta \langle \gamma ',\beta '\rangle \right) +(\gamma ',\beta ,\gamma ''). \end{aligned}$$
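
Both expansions above are purely algebraic and hold for arbitrary curves \(\gamma \) and \(\beta \); the following SymPy sketch verifies them with generic symbolic components (the names g1, g2, g3, b1, b2, b3 are only for the check).

```python
# Symbolic check of the two expansions behind (2.5): with X = gamma + t*beta, the
# determinant (X_s, X_t, X_ss) equals the first expression for L above, and
# (X, X_s, X_t) = (gamma', beta, gamma) - t*<e_3, gamma>. No constraint on gamma or beta is used.
import sympy as sp

s, t = sp.symbols('s t', real=True)
gamma = sp.Matrix([sp.Function('g%d' % i)(s) for i in (1, 2, 3)])
beta  = sp.Matrix([sp.Function('b%d' % i)(s) for i in (1, 2, 3)])
det = lambda a, b, c: sp.Matrix.hstack(a, b, c).det()

X = gamma + t*beta
Xs, Xt, Xss = X.diff(s), X.diff(t), X.diff(s, 2)
gp, gpp = gamma.diff(s), gamma.diff(s, 2)
bp, bpp = beta.diff(s), beta.diff(s, 2)
e3 = beta.cross(bp)

L = -det(beta, bp, bpp)*t**2 + t*(det(bp, beta, gpp) + det(gp, beta, bpp)) + det(gp, beta, gpp)
print(sp.simplify(det(Xs, Xt, Xss) - L))                                        # 0
print(sp.simplify(det(X, Xs, Xt) - (det(gp, beta, gamma) - t*e3.dot(gamma))))   # 0
```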

We distinguish the cases \(\lambda =0\) and \(\lambda \not =0\).

  1. 1.

    Case \(\lambda =0\). We see (2.5) as a polynomial in the variable t, which is of degree 3 by the expression of E in (2.4). From the coefficient of \(t^3\), we have

    $$\begin{aligned} \alpha \langle e_3,\gamma \rangle =0. \end{aligned}$$

    Since \(\alpha \not =0\), we deduce \( \langle e_3(s),\gamma (s)\rangle =0\) for all \(s\in I\). Then, \(\gamma (s)\) belongs to the plane spanned by \(\beta (s)\) and \(\beta '(s)\). Let

    $$\begin{aligned} \gamma (s)=u(s)\beta (s)+v(s)\beta '(s) \end{aligned}$$
    (2.6)

    for some smooth functions \(u=u(s)\) and \(v=v(s)\). Now, Eq. (2.5) is \({\mathcal {L}}= \alpha E (\gamma ',\beta ,\gamma )\). Matching the coefficients in t of degree 2, 1, and 0, we obtain, respectively

    $$\begin{aligned} \Theta&=-\alpha (\gamma ',\beta ,\gamma ),\\ \langle e_3,\gamma ''\rangle +\Theta \langle \gamma ',\beta '\rangle&=-2\alpha \langle \gamma ',\beta '\rangle (\gamma ',\beta ,\gamma ),\\ (\gamma ',\beta ,\gamma '')&=\alpha |\gamma '|^2(\gamma ',\beta ,\gamma ). \end{aligned}$$

    Using the basis \({\mathcal {B}}\) and Eq. (2.6), we calculate the velocity of \(\gamma (s)\), obtaining

    $$\begin{aligned} \gamma '=(u'-v)\beta +(u+v')\beta '+v\Theta \, e_3. \end{aligned}$$
    (2.7)

    Since \(\langle \gamma ',\beta \rangle =0\), we have \(u'-v=0\). From this expression of \(\gamma '\) in combination with (2.2), we obtain \((\gamma ,\gamma ',\beta )=v^2\Theta \) (a symbolic check of this identity and of (2.7) is sketched at the end of this section). Then, the three above identities become

    $$\begin{aligned} \Theta =-\alpha v^2\Theta \end{aligned}$$
    (2.8)
    $$\begin{aligned} \langle e_3,\gamma ''\rangle +\Theta \langle \gamma ',\beta '\rangle =-2\alpha v^2 \langle \gamma ',\beta '\rangle \Theta \end{aligned}$$
    (2.9)
    $$\begin{aligned} (\gamma ',\beta ,\gamma '')=\alpha v^2|\gamma '|^2 \Theta . \end{aligned}$$
    (2.10)

    From (2.8)

    $$\begin{aligned} (1+\alpha v^2)\Theta =0. \end{aligned}$$

    We discuss two cases.

    1. (a)

      Case \(\Theta =0\). As in Proposition 2.1, the curve \(\beta (s)\) describes a great circle of \({\mathbb {S}}^2\). In particular, \(e_3=\beta \times \beta '\) is a unit constant vector orthogonal to the plane P containing \(\beta \). Moreover, from (2.6), \(\langle \gamma (s),e_3\rangle =0\) for all \(s\in I\). Thus

      $$\begin{aligned} \langle X(s,t),e_3\rangle =\langle \gamma (s)+t\beta (s),e_3\rangle =\langle \gamma (s),e_3\rangle =0. \end{aligned}$$

      This proves that the surface is part of the plane P.

    2. (b)

      Case \(\Theta \not =0\). Then

      $$\begin{aligned} 1+\alpha v^2=0. \end{aligned}$$
      (2.11)

      In particular, v is a non-zero constant function and \(v'=0\). Moreover, from (2.2) and (2.7)

      $$\begin{aligned} \begin{aligned} \gamma '&=u\beta '+v\Theta e_3,\\ \gamma ''&=-u\beta +v(1-\Theta ^2)\beta '+(u\Theta +v\Theta ')e_3. \end{aligned} \end{aligned}$$
      (2.12)

      From these expressions, we compute the terms of the identity (2.9), obtaining

      $$\begin{aligned} 2u\Theta +v\Theta '=-2\alpha uv^2\Theta . \end{aligned}$$

      Due to (2.11), the above equation is simply \(v\Theta '=0\). Since \(v\not =0\) from (2.11), we have shown that \(\Theta \) is a constant function. We now compute the terms of the identity (2.10). Because \(\Theta \) is constant, and taking into account (2.11) and (2.12), we find

      $$\begin{aligned} (\gamma ',\beta ,\gamma '')=(v^2-u^2)\Theta -v^2\Theta ^3, \\ \alpha v^2|\gamma '|^2\Theta =-(u^2+v^2\Theta ^2)\Theta . \end{aligned}$$

      Thus, (2.10) reduces to \(v^2\Theta =0\), obtaining a contradiction.

  2. 2.

    Case \(\lambda \not =0\). Squaring Eq. (2.5)

    $$\begin{aligned} \Big ({\mathcal {L}}-\alpha E\big ((\gamma ',\beta ,\gamma )-t\langle e_3, \gamma \rangle \big )\Big )^2-\lambda ^2 E^3=0. \end{aligned}$$
    (2.13)

    Set \(\Gamma =|\gamma '|^2\). Equation (2.13) is a polynomial equation in t of degree 6 whose coefficients are functions of the variable s, and hence, all of them must vanish. The coefficients of \(t^6\) and \(t^0\) are, respectively

    $$\begin{aligned}&\lambda ^2 -\alpha ^2\langle e_3,\gamma \rangle ^2=0, \end{aligned}$$
    (2.14)
    $$\begin{aligned}&\lambda ^2\Gamma ^3-\left( \alpha \Gamma (\gamma ',\beta ,\gamma )-(\gamma ',\beta ,\gamma '')\right) ^2=0. \end{aligned}$$
    (2.15)

    Then, \(\lambda =\pm \alpha \langle e_3,\gamma \rangle \) and \(\lambda \Gamma ^{3/2}=\pm (\alpha \Gamma (\gamma ',\beta ,\gamma )-(\gamma ',\beta ,\gamma ''))\). We may assume the sign \(+\) in both cases, namely

    $$\begin{aligned} \lambda = \alpha \langle e_3,\gamma \rangle ,\quad \lambda \Gamma ^{3/2}= \alpha \Gamma (\gamma ',\beta ,\gamma )-(\gamma ',\beta ,\gamma ''), \end{aligned}$$
    (2.16)

    and the reasoning for the other choices of sign is analogous. We now compute the coefficient of \(t^5\) of (2.13). From (2.14) and after some simplifications, we find

    $$\begin{aligned} \alpha \langle e_3,\gamma \rangle \Big (\Theta +\alpha \langle e_3,\gamma \rangle \langle \gamma ',\beta '\rangle + \alpha (\gamma ',\beta ,\gamma )\Big )=0. \end{aligned}$$

    We use that \(\lambda \not =0\): then \(\langle e_3,\gamma \rangle \not =0\) by (2.14), and hence

    $$\begin{aligned} \Theta +\alpha \langle e_3,\gamma \rangle \langle \gamma ',\beta '\rangle + \alpha (\gamma ',\beta ,\gamma )=0. \end{aligned}$$

    From here, we obtain an expression for \(\Theta \),

    $$\begin{aligned} \Theta = -\alpha \langle e_3,\gamma \rangle \langle \gamma ',\beta '\rangle -\alpha (\gamma ',\beta ,\gamma ). \end{aligned}$$
    (2.17)

    Similarly, for the coefficient of t of (2.13), using (2.2) and (2.15), we get

    $$\begin{aligned}&3\alpha \Gamma ^{1/2}\langle e_3,\gamma \rangle \langle \gamma ',\beta '\rangle -2\alpha \langle \gamma ',\beta '\rangle (\gamma ',\beta ,\gamma )+\Theta \langle \gamma ',e_3\rangle \\&\quad +\alpha \Gamma \langle e_3,\gamma \rangle -\langle e_3,\gamma ''\rangle =0; \end{aligned}$$

    hence

    $$\begin{aligned} (\gamma ',\beta ,\gamma )=\frac{\alpha \Gamma \langle e_3,\gamma \rangle -\langle e_3,\gamma ''\rangle +3\alpha \Gamma ^{1/2}\langle e_3,\gamma \rangle \langle \gamma ',\beta '\rangle +\Theta \langle \gamma ',e_3\rangle }{2\alpha \langle \gamma ',\beta '\rangle }. \end{aligned}$$

    We now take the coefficient of \(t^4\) in (2.13). This is a long expression that can be simplified by replacing the value of \((\gamma ',\beta ,\gamma )\) from the above equation, together with (2.14) and (2.17). Setting this coefficient equal to zero, we arrive at

    $$\begin{aligned} -3\alpha \langle e_3,\gamma \rangle ^2\left( \Gamma ^{1/2}+\langle \gamma ',\beta '\rangle \right) ^2=0. \end{aligned}$$

    Thus

    $$\begin{aligned} \Gamma =\langle \gamma ',\beta '\rangle ^2. \end{aligned}$$
    (2.18)

    On the other hand, since \( \langle \gamma ',\beta \rangle =0\) and from the basis \({\mathcal {B}}\), we have \(\gamma '=\langle \gamma ',\beta '\rangle \beta '+\langle \gamma ',e_3\rangle e_3\). Then

    $$\begin{aligned} \Gamma =|\gamma '|^2=\langle \gamma ',\beta '\rangle ^2+\langle \gamma ',e_3\rangle ^2. \end{aligned}$$

    Combining with (2.18), we deduce \(\langle \gamma ',e_3\rangle =0\), so \(\gamma '=\langle \gamma ',\beta '\rangle \beta '\). Writing \(\gamma \) in the basis \({\mathcal {B}}\), it is immediate that

    $$\begin{aligned} (\gamma ',\beta ,\gamma )=-\langle \gamma ,e_3\rangle \langle \gamma ',\beta '\rangle . \end{aligned}$$

    Replacing in (2.17), we deduce \(\Theta =0\). This proves that \(\beta (s)\) is a great circle of \({\mathbb {S}}^2\). Thus, \(e_3=\beta \times \beta '\) is a unit constant vector orthogonal to the plane P containing \(\beta \). From (2.16), it follows that:

    $$\begin{aligned} \langle e_3,\gamma (s) \rangle =\frac{\lambda }{\alpha } \end{aligned}$$

    for all \(s\in I\). Finally, from the parametrization (1.4), we deduce

    $$\begin{aligned} \langle X(s,t),e_3\rangle =\langle \gamma (s),e_3\rangle +t\langle \beta (s),e_3\rangle =\frac{\lambda }{\alpha }, \end{aligned}$$

    proving that X(s, t) is contained in a plane parallel to P.

After the discussion of the cases \(\lambda =0\) and \(\lambda \not =0\), and together with Proposition 2.1, we conclude that if the rulings are not all parallel, then the surface is a plane of \({\mathbb {R}}^3\), which is a cylindrical surface. This completes the proof of Theorem 1.2.
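
As announced in the case \(\lambda =0\), the following SymPy sketch checks two identities used there: the expansion (2.7) of \(\gamma '\) (once \(u'=v\) is imposed) and \((\gamma ,\gamma ',\beta )=v^2\Theta \). The spherical curve \(\beta \) is again a concrete circle of latitude (a test case of ours), while u and v are generic functions.

```python
# Symbolic check of (2.7) and of (gamma, gamma', beta) = v^2 * Theta for gamma = u*beta + v*beta'
# with <gamma', beta> = 0 (that is, u' = v). The curve beta is the test circle of latitude;
# u(s), v(s) are generic functions of s.
import sympy as sp

s = sp.symbols('s', real=True)
c = sp.Rational(3, 5)
beta = sp.Matrix([c*sp.cos(s/c), c*sp.sin(s/c), sp.sqrt(1 - c**2)])
bp = beta.diff(s)
e3 = beta.cross(bp)
Theta = sp.Matrix.hstack(beta, bp, beta.diff(s, 2)).det()

u, v = sp.Function('u')(s), sp.Function('v')(s)
gamma = u*beta + v*bp
gp = gamma.diff(s).subs(sp.Derivative(u, s), v)        # impose <gamma', beta> = 0, i.e. u' = v

print(sp.simplify(gp - ((u + v.diff(s))*bp + v*Theta*e3)))                  # zero vector: Eq. (2.7)
print(sp.simplify(sp.Matrix.hstack(gamma, gp, beta).det() - v**2*Theta))    # 0
```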

3 Classification of Translation Surfaces

In this section, we study the solutions of (1.3) [or equivalently, of (1.5)] by the method of separation of variables. Let M be the graph of a function \(u(x,y)=f(x)+g(y)\), where \(f:I\subset {\mathbb {R}}\rightarrow {\mathbb {R}}\) and \(g:J\subset {\mathbb {R}}\rightarrow {\mathbb {R}}\) are smooth functions. If we parametrize M by \(X(x,y)=(x,y,f(x)+g(y))\), the set of points of the surface M is the sum of two planar curves, namely

$$\begin{aligned} X(x,y)=(x,0,f(x))+(0,y,g(y)). \end{aligned}$$
(3.1)

In the literature, surfaces of type \(z=f(x)+g(y)\) are called translation surfaces, and they form part of a large family of “surfaces définies par des propriétés cinématiques” (surfaces defined by kinematic properties), following the terminology of Darboux in [9]. In case one of the functions f or g is linear, the surface is a ruled surface. Indeed, if for example \(g(y)=ay +b\) with \(a,b\in {\mathbb {R}}\), then \(\eta (x)=(x,0,f(x)+b)\) is the directrix of the surface and its parametrization is \(X(x,y)=\eta (x)+y(0,1,a)\). This means that M is a ruled surface where all rulings are parallel to the fixed direction (0, 1, a); in particular, the surface is cylindrical.

The proof of Theorem 1.3 is by contradiction. We assume that neither f nor g is linear. In particular, \(f'f''\not =0\) and \(g'g''\not =0\) in some subintervals \({\tilde{I}}\subset I\) and \({\tilde{J}}\subset J\), respectively. Thus, \(f'f''g'g''\not =0\) in \({\tilde{I}}\times {\tilde{J}}\).

We use the parametrization (3.1) to calculate the Gauss map N of M

$$\begin{aligned} N=\frac{X_x\times X_y}{|X_x\times X_y|}=\frac{(-f',-g',1)}{\sqrt{1+f'^2+g'^2}}. \end{aligned}$$

Here, we denote by the prime \('\) the derivative of f or g with respect to its variable. The mean curvature H of M is

$$\begin{aligned} H=\frac{(1+g'^2)f''+(1+f'^2)g''}{(1+f'^2+g'^2)^{3/2}}. \end{aligned}$$

Then, the self-similar equation (1.3) becomes

$$\begin{aligned} \frac{(1+g'^2)f''+(1+f'^2)g''}{(1+f'^2+g'^2)^{3/2}}=\alpha \frac{-xf'-y g'+f+g}{\sqrt{1+f'^2+g'^2}}+\lambda . \end{aligned}$$

The determinant of the first fundamental form is \(W=1+f'^2+g'^2\). Then, the above equation can be expressed as

$$\begin{aligned} (1+g'^2)f''+(1+f'^2)g''=\alpha (-xf'-y g'+f+g)\, W +\lambda \, W^{3/2}. \end{aligned}$$
(3.2)

The differentiation of (3.2) with respect to the variable x gives

$$\begin{aligned}&\left( 1+g'^2\right) f'''+2f'f''g''\\&\quad =-\alpha x f''\, W+2\alpha f'f''(-xf'-yg'+f+g)+3\lambda \, f'f'' W^{1/2}. \end{aligned}$$

A differentiation of this equation with respect to the variable y leads to

$$\begin{aligned} 2g'g''f'''+2f'f''g'''=-2\alpha x g'g''f''-2\alpha y f'f''g''+3\lambda f'f''g'g'' W^{-1/2}, \end{aligned}$$

or equivalently

$$\begin{aligned} 2(f'''+\alpha x f'')g'g''+2(g'''+\alpha y g'')f'f''=3\lambda \, f'f''g'g''W^{-1/2}. \end{aligned}$$
(3.3)
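
The two differentiations leading from (3.2) to (3.3) are mechanical and can be reproduced symbolically; the following SymPy sketch is only a sanity check of this step.

```python
# Symbolic check that differentiating (3.2) with respect to x and then with respect to y
# produces exactly (3.3).
import sympy as sp

x, y, alpha, lam = sp.symbols('x y alpha lambda', real=True)
f, g = sp.Function('f')(x), sp.Function('g')(y)
fp, gp = f.diff(x), g.diff(y)
W = 1 + fp**2 + gp**2

eq32 = (1 + gp**2)*f.diff(x, 2) + (1 + fp**2)*g.diff(y, 2) \
       - alpha*(-x*fp - y*gp + f + g)*W - lam*W**sp.Rational(3, 2)

eq33 = 2*(f.diff(x, 3) + alpha*x*f.diff(x, 2))*gp*g.diff(y, 2) \
       + 2*(g.diff(y, 3) + alpha*y*g.diff(y, 2))*fp*f.diff(x, 2) \
       - 3*lam*fp*f.diff(x, 2)*gp*g.diff(y, 2)/sp.sqrt(W)

print(sp.simplify(eq32.diff(x).diff(y) - eq33))   # 0
```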

We separate the discussion into two cases according to the constant \(\lambda \).

  1. 1.

    Case \(\lambda =0\). We divide (3.3) by \(f'f''g'g''\), obtaining

    $$\begin{aligned} \frac{f'''+\alpha xf''}{f'f''}=-\frac{g'''+\alpha yg''}{g'g''}. \end{aligned}$$

    Since the left-hand side of this equation depends only on the variable x, and the right-hand side only on y, it follows that there is a constant \(a\in {\mathbb {R}}\) such that

    $$\begin{aligned} \frac{f'''}{f'f''}+\alpha \frac{x}{f'}=-\frac{g'''}{g'g''}-\alpha \frac{y}{g'}=2a. \end{aligned}$$
    (3.4)

    From a first integration of both equations, we find \(m,n\in {\mathbb {R}}\), such that

    $$\begin{aligned} \begin{aligned}&f''+\alpha x f'-\alpha f=a f'^2+m,\\&g''+\alpha y g'-\alpha g=-a g'^2+n. \end{aligned} \end{aligned}$$
    (3.5)

    By substituting into (3.2), we obtain

    $$\begin{aligned} (n+a-\alpha f)f'^2+\alpha x f'^3=(a-m+\alpha g)g'^2-\alpha y g'^3-m-n. \end{aligned}$$

    Again, we deduce the existence of a constant \(b\in {\mathbb {R}}\), such that

    $$\begin{aligned} \begin{aligned}&(n+a-\alpha f)f'^2+\alpha x f'^3=b,\\&(a-m+\alpha g)g'^2-\alpha y g'^3-m-n=b. \end{aligned} \end{aligned}$$
    (3.6)

    We now give an argument for the function f; a similar one applies to g. The function f satisfies the first equations of (3.5) and (3.6). Differentiating the first equation of (3.6) with respect to x, it follows that:

    $$\begin{aligned} \left( 2(n+a-\alpha f)+3\alpha x f'\right) f'f''=0. \end{aligned}$$

    Taking into account that \(f'f''\not =0\), we deduce

    $$\begin{aligned} 2(n+a-\alpha f)+3\alpha x f'=0. \end{aligned}$$

    Instead of solving this equation, and to avoid the constants a and n, we differentiate this equation again with respect to x. Simplifying, we arrive at

    $$\begin{aligned} f''=-\frac{1}{3x}f'. \end{aligned}$$

    The solution of this equation is \(f(x)=cx^{2/3}+k\), where \(c,k\in {\mathbb {R}}\). Since f is not a constant function, the constant c is not 0. Once we have the expression of f(x), we come back to the first equation of (3.5), obtaining

    $$\begin{aligned} \frac{4 a c^2}{9} x^{-2/3}+\frac{1}{3} \alpha c x^{2/3}+\frac{2 c}{9} x^{-4/3}+\alpha k+m=0 \end{aligned}$$

    for all \(x\in I\). Multiplying by \(x^{4/3}\), this is a polynomial equation in \(x^{2/3}\). Then, all coefficients vanish; in particular, \(c=0\), which is a contradiction (a symbolic check of this step is sketched after the proof).

  2. 2.

    Case \(\lambda \not =0\). We divide (3.3) by \(f'f''g'g''\), obtaining

    $$\begin{aligned} \frac{2(f'''+\alpha x f'')}{f'f''}+\frac{2(g'''+\alpha y g'')}{g'g''}=3\lambda \frac{1}{\sqrt{1+f'^2+g'^2}}. \end{aligned}$$

    Since the left-hand side of this equation is the sum of a function of x and a function of y, if we differentiate with respect to x and then with respect to y, the left-hand side vanishes. On the other hand, on the right-hand side, the same differentiations give

    $$\begin{aligned} 9\lambda \frac{f'f''g'g''}{(1+f'^2+g'^2)^{5/2}}=0. \end{aligned}$$

    This is a contradiction, because \(\lambda \not =0\) and \(f'f''g'g''\not =0\). This finishes the proof of Theorem 1.3.
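
As mentioned in the case \(\lambda =0\), the final step there can be reproduced symbolically: \(f(x)=cx^{2/3}+k\) solves \(f''=-f'/(3x)\), and its substitution into the first equation of (3.5) forces \(c=0\). The following SymPy sketch is only a sanity check of that computation, taking \(x>0\).

```python
# Symbolic check of the last step of the case lambda = 0: f(x) = c*x^(2/3) + k solves
# f'' = -f'/(3x), and substituting it into the first equation of (3.5) leaves a combination
# of x**(2/3), x**(-2/3), x**(-4/3) and constants whose coefficients must all vanish, so c = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
a, m, alpha, c, k = sp.symbols('a m alpha c k', real=True)

fx = c*x**sp.Rational(2, 3) + k
print(sp.simplify(fx.diff(x, 2) + fx.diff(x)/(3*x)))     # 0: the ODE f'' = -f'/(3x) holds

residual = fx.diff(x, 2) + alpha*x*fx.diff(x) - alpha*fx - a*fx.diff(x)**2 - m
print(sp.expand(residual))   # combination of x**(2/3), x**(-2/3), x**(-4/3) and constants
```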

As a final remark, we point out that the parametrization (3.1) does not coincide with the one used to derive (2.1), because for the translation surface (3.1) the rulings are not necessarily orthogonal to the plane containing the directrix \(\eta (x)=(x,0,f(x)+b)\) (except if \(a=0\)), as occurs in the parametrization leading to (2.1). If \(a=0\) (and \(b=0\)), Eq. (1.6) is Eq. (2.1) for curves \(y=f(x)\). However, the cylindrical solutions given by Theorem 1.3 coincide, up to a linear isometry, with the ones given in Theorem 1.2.