1 Introduction

Throughout the paper let \({\mathbb {R}}[X_1,\ldots , X_n]\) denote the ring of polynomials in n real variables and \(H_{n,k}\) the set of homogeneous polynomials (forms) of degree k in \({\mathbb {R}}[X_1,\ldots , X_n]\). Certifying that a form \(f\in H_{n,2d}\) assumes only non-negative values is one of the fundamental questions of real algebra. One such possible certificate is a decomposition of f as a sum of squares, i.e., one finds forms \(p_1,\ldots ,p_m\in H_{n,d}\) such that \(f=p_1^2+\dots +p_m^2\). In 1888 Hilbert [16] gave a beautiful proof showing that in general not all non-negative forms can be written as a sum of squares. In fact, he showed that the sum of squares property only characterizes non-negativity in the cases of binary forms, of quadratic forms, and of ternary quartics. In all other cases there exist forms that are non-negative but do not allow a decomposition as a sum of squares. Despite its elegance, Hilbert’s proof was not constructive. A constructive approach to Hilbert’s proof appeared in an article by Terpstra [37] in 1939, but the first explicit example was found by Motzkin in 1965 [22] and an explicit example based on Hilbert’s method was constructed by Robinson in 1969 [29]. We refer the interested reader to [24, 33] for more background on this topic.

The sum of squares decomposition of non-negative polynomials has been the cornerstone of recent developments in polynomial optimization. Following ideas of Lasserre and Parrilo, polynomial optimization problems, i.e., the task of finding \(f^{*}=\min f(x)\) for a polynomial f, can be relaxed to semidefinite optimization problems. If \(f-f^{*}\) can be written as a sum of squares, these semidefinite relaxations are in fact exact. Hence a better understanding of the difference between sums of squares and non-negative polynomials is highly desirable.

We study the case of forms in n variables of degree 2d that are symmetric, i.e., invariant under the action of the symmetric group \({\mathcal {S}}_n\) that permutes the variables. Let \({\mathbb {R}}[X_1,\ldots , X_n]^S\) denote the ring of symmetric polynomials and \(H^S_{n,2d}\) denote the real vector space of symmetric forms of degree 2d in n variables. Let \(\Sigma _{n,2d}^S\) be the cone of forms in \(H^S_{n,2d}\) that can be decomposed as sums of squares and \({\mathcal {P}}^S_{n,2d}\) be the cone of non-negative symmetric forms. Choi and Lam [7] showed that the following symmetric form of degree 4 in 4 variables is non-negative but cannot be written as a sum of squares:

$$\begin{aligned} \sum X_i^2X_j^2+\sum X_i^2X_jX_k-4X_1X_2X_3X_4. \end{aligned}$$

Thus one can conclude that \(\Sigma _{4,4}^S\ne {\mathcal {P}}^S_{4,4}\) and therefore even in the case of symmetric polynomials the sum of squares property already fails to characterize non-negativity in the first case covered by Hilbert’s classical result. These results have been recently extended by Goel et al. [13] into a full characterization of equality cases between \(\Sigma _{n,2d}^S\) and \({\mathcal {P}}_{n,2d}^S\). Unfortunately, there are no other interesting cases of equality beyond those covered by Hilbert’s Theorem.
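The non-negativity of the Choi–Lam form can at least be sanity-checked numerically (a sketch, not a proof; we read the sums as running over all tuples of pairwise distinct indices, the normalization under which the form vanishes at \((1,1,-1,-1)\); showing that it is not a sum of squares requires an algebraic argument):

```python
import itertools
import numpy as np

def choi_lam(x):
    # sums over ordered pairs/triples of pairwise distinct indices
    s22 = sum(x[i]**2 * x[j]**2 for i, j in itertools.permutations(range(4), 2))
    s211 = sum(x[i]**2 * x[j] * x[k] for i, j, k in itertools.permutations(range(4), 3))
    return s22 + s211 - 4 * x[0] * x[1] * x[2] * x[3]

rng = np.random.default_rng(0)
samples = rng.standard_normal((5000, 4))
print(min(choi_lam(x) for x in samples))           # >= 0 up to rounding
print(choi_lam(np.array([1.0, 1.0, -1.0, -1.0])))  # 0.0: a zero of the form
```

The zero at \((1,1,-1,-1)\) already shows that the form lies on the boundary of the cone of non-negative forms.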

The case of even symmetric forms has also received some attention. Choi et al. [8] fully described the cones of even symmetric sextics in any number of variables, and showed that under some normalization these cones have the same limit as the number of variables grows. Harris [15] showed that non-negative even symmetric ternary octics are always sums of squares, providing a new interesting case of equality between non-negative polynomials and sums of squares. Goel et al. [14] showed that there are no other interesting cases of equality beyond Harris’ and Hilbert’s results for even symmetric forms.

In addition to the qualitative statement of Hilbert’s characterization, a quantitative understanding of the gap between sums of squares and non-negative forms has been studied by several authors. In particular, in [3] the first author added to the work of Hilbert by showing that the gap between sums of squares and non-negative forms of fixed degree grows infinitely large with the number of variables if the degree is at least 4. This result has been recently refined by Ergur to the multihomogeneous case [10]. In this article we study the relationship between symmetric sums of squares and symmetric non-negative forms. In particular, we are interested in the asymptotic behavior of the cones, which we can realize for example as symmetric mean inequalities naturally associated to a symmetric polynomial. The study of such symmetric inequalities has a long history (see for example [9]) and it is an interesting question to ask when one can use sum of squares certificates to verify such an inequality. For instance, Hurwitz [17] showed that a sum of squares decomposition can be used to verify the arithmetic mean–geometric mean inequality. Recently, Frenkel and Horváth [11] studied the connection of Minkowski’s inequality to sums of squares. Our results imply that a positive fraction of such inequalities comes from symmetric sums of squares. Furthermore, in degree 4 we show that a family of symmetric power mean inequalities is valid for all n if and only if each member can be written as a sum of squares. We conjecture that this holds in all degrees.

2 Overview and Main Results

2.1 Symmetric Sums of Squares

Symmetric polynomials are classical objects in algebra. In order to represent symmetric polynomials, we will make use of the power sum polynomials.

Definition 2.1

For \(i\in {\mathbb {N}}\) define

$$\begin{aligned} P_{i}^{(n)}:=X_1^i+\dots +X_n^i \end{aligned}$$

to be the ith power sum polynomial. We will also work with the power means:

$$\begin{aligned} p_i^{(n)}:=\frac{1}{n}P_i^{(n)}. \end{aligned}$$

It is known (for example [20, 2.11]) that \({\mathbb {R}}[X_1,\ldots , X_n]^S\) is freely generated by the algebraically independent polynomials \(P_1^{(n)},\ldots ,P_n^{(n)}\). Hence it follows that every symmetric polynomial \(f\in {\mathbb {R}}[X_1,\ldots , X_n]^S\) of degree \(2d\le n\) can uniquely be written as

$$\begin{aligned} f=g\bigl (P^{(n)}_1,\ldots ,P^{(n)}_{2d}\bigr ) \end{aligned}$$

for some polynomial \(g\in {\mathbb {R}}[z_1,\ldots ,z_{2d}]\), with \(\deg _w g=\deg f\), where \(\deg _w\) denotes the weighted degree corresponding to the weight \((1,\ldots ,2d)\). Recall that for a natural number k a partition \(\lambda \) of k (written \(\lambda \vdash k\)) is a sequence of weakly decreasing positive integers \(\lambda =(\lambda _1,\lambda _2,\ldots ,\lambda _l)\) with \(\sum _{i=1}^l\lambda _i=k\). For \(n\ge k\) we associate to a partition \(\lambda =(\lambda _1,\ldots ,\lambda _l)\vdash k\) the polynomials

$$\begin{aligned} P_\lambda ^{(n)}:=P_{\lambda _1}^{(n)}\cdot P_{\lambda _2}^{(n)} \cdots P^{(n)}_{\lambda _l} \quad \mathrm{and}\quad p^{(n)}_{\lambda }:=p_{\lambda _1}^{(n)}\cdot p_{\lambda _2}^{(n)}\cdots p^{(n)}_{\lambda _l}. \end{aligned}$$

It now follows that for every \(n\ge k\) the families of polynomials \(\bigl \{P_\lambda ^{(n)}:\lambda \vdash k\bigr \}\) and \(\bigl \{p_\lambda ^{(n)}:\lambda \vdash k\bigr \}\) each form a basis of \(H_{n,k}^S\). In particular, if \(n\ge k\) then the dimension of \(H_{n,k}^S\) is equal to \(\pi (k)\), the number of partitions of k. Thus the dimension of \(H_{n,k}^S\) is constant for fixed k and all sufficiently large n.
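As a quick numerical illustration of the basis claim (a sketch, not part of the argument), one can evaluate the \(\pi (4)=5\) power means \(p_\lambda ^{(4)}\), \(\lambda \vdash 4\), at random points and check that the evaluation matrix has full column rank:

```python
import numpy as np

def p_lambda(x, lam):
    """Power mean p_lambda at the point x: product of p_i = mean(x**i)."""
    out = 1.0
    for i in lam:
        out *= np.mean(x**i)
    return out

partitions_of_4 = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
rng = np.random.default_rng(1)
pts = rng.standard_normal((20, 4))  # n = 4 variables, k = 4
M = np.array([[p_lambda(x, lam) for lam in partitions_of_4] for x in pts])
print(np.linalg.matrix_rank(M))  # 5, the number of partitions of 4
```

Full column rank at random points confirms linear independence of the \(p_\lambda ^{(4)}\) with probability one.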

Using representation theory of the symmetric group, and in particular so-called higher Specht polynomials, we are able to give a uniform representation of the cone of symmetric sums of squares of fixed degree 2d in terms of matrix polynomials, with coefficients that are rational functions in n (see Theorem 4.15), and similarly a uniform representation of the sequence of dual cones in terms of linear matrix polynomials whose coefficients are “symmetrizations” of sums of squares in 2d variables. This gives us in particular a better understanding of the faces of \(\Sigma ^S_{n,2d}\) that are not faces of \({\mathcal {P}}^S_{n,2d}\). We make these findings more concrete in the case of quartic symmetric forms, where we completely characterize the cone \(\Sigma _{n,4}^S\) and its boundary. This in particular allows us to easily compute a family of symmetric sum of squares polynomials that lie on the boundary of \(\Sigma _{n,4}^S\) without having a real zero, thus certifying the difference between symmetric sums of squares and symmetric non-negative forms (see Theorem 5.5).

2.2 Asymptotic Behavior of Sums of Squares and Non-Negative Forms

Our characterization allows us to study the asymptotic relationship between symmetric sums of squares and symmetric non-negative forms of fixed degree in a growing number of variables. Even though the vector spaces \(H^{S}_{n,2d}\) have the same dimension \(\pi (2d)\) for all \(n\ge 2d\), there is no canonical way to identify the vector spaces \(H^{S}_{n,2d}\) for different n. In fact there are several natural ways to define transition maps identifying vector spaces of symmetric forms in different numbers of variables (see for example [2]), and different transition maps will lead to different limits as n goes to infinity. The system of vector spaces \(H^S_{n,2d}\) together with transition maps defines a directed system of vector spaces, and we can form the direct limit \(H^S_{\infty ,2d}\) of the vector spaces \(H_{n,2d}^S\) [30, Sect. 7.6]. One way of defining these transitions is by symmetrization:

Definition 2.2

For \(f\in {\mathbb {R}}[X]\) we define the symmetrization of f as

$$\begin{aligned} {{\,\mathrm{sym}\,}}_nf :=\frac{1}{n!}\sum _{\sigma \in {\mathcal {S}}_n}\sigma (f). \end{aligned}$$

The composition of the natural inclusion \(i_{n,n+1}:H_{n,2d} \rightarrow H_{n+1,2d}\) with \({{\,\mathrm{sym}\,}}_{n+1}\) defines injective maps \(\varphi _{n,n+1}:H_{n,2d}^S\rightarrow H_{n+1,2d}^S\). Therefore, we have the following.
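A minimal sympy sketch of the symmetrization operator of Definition 2.2 (the helper `sym` below is our own and simply averages over all permutations of the variables):

```python
from itertools import permutations
import sympy as sp

X = sp.symbols('x1:4')  # x1, x2, x3

def sym(f, vars_):
    """Average f over all permutations of vars_ (the operator sym_n)."""
    perms = list(permutations(vars_))
    return sp.expand(sum(f.xreplace(dict(zip(vars_, p))) for p in perms)
                     / len(perms))

g = sym(X[0]**2 * X[1], X)
print(g)  # the average of x1**2*x2 over all 6 permutations
# sym_n f is S_n-invariant:
assert g.xreplace({X[0]: X[1], X[1]: X[0]}) == g
```

Here \({{\,\mathrm{sym}\,}}_3(X_1^2X_2)\) comes out as \(\frac{1}{6}\sum _{i\ne j}X_i^2X_j\), a symmetric cubic as expected.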

Proposition 2.3

For \(n,m\in {\mathbb {N}}\) with \(n>m\) consider the maps \(\varphi _{m,n}:H_{m,2d}^S\rightarrow H_{n,2d}^S\) defined by

$$\begin{aligned} \varphi _{m,n}(p)={{\,\mathrm{sym}\,}}_np. \end{aligned}$$

Then the system of vector spaces \(H^S_{n,2d}\) together with the maps \(\varphi _{m,n}\) defines a directed system and for \(m\ge 2d\) the maps \(\varphi _{m,n}\) are isomorphisms.

We consider the direct limit \(H_{\infty ,2d}^{\varphi }\) of the directed system above. Since the maps \(\varphi _{m,n}\) are isomorphisms for \(m \ge 2d\), it follows that \(H_{\infty ,2d}^{\varphi }\) is also a real vector space of dimension \(\pi (2d)\). Therefore we have natural isomorphisms \(\varphi _n:H_{\infty ,2d}^\varphi \rightarrow H_{n,2d}^S\) for \(n \ge 2d\), which allow us to view the cones \(\Sigma _{n,2d}^S\) and \({\mathcal {P}}_{n,2d}^S\) as subsets of \(H_{\infty ,2d}^\varphi \). Note that we have \(\varphi _{m,n}(\Sigma _{m,2d}^S)\subseteq \Sigma _{n,2d}^S\) and \(\varphi _{m,n}({\mathcal {P}}^S_{m,2d})\subseteq {\mathcal {P}}^S_{n,2d}\). It follows that with transition maps \(\varphi _{m,n}\) the cones of sums of squares and the cones of non-negative polynomials form nested increasing sequences in \(H_{\infty ,2d}^\varphi \). We define the following cones of non-negative elements and sums of squares in \(H_{\infty ,2d}^{\varphi }\):

$$\begin{aligned} {\mathcal {P}}_{\infty ,2d}^{\varphi }&:=\bigl \{\text {f}\in H_{\infty ,2d}^{\varphi }\,:\,\varphi _n(\text {f})\in {\mathcal {P}}^S_{n,2d}\ \text {for all}\ n\ge 2d\bigr \},\\ \Sigma _{\infty ,2d}^\varphi&:=\bigl \{\text {f}\in H_{\infty ,2d}^{\varphi }\,:\,\varphi _n(\text {f})\in \Sigma ^S_{n,2d}\ \text {for all}\ n\ge 2d \bigr \}. \end{aligned}$$

The following theorem is immediate from the above discussion.

Theorem 2.4

The cones \({\mathcal {P}}_{\infty ,2d}^{\varphi }\) and \(\Sigma _{\infty ,2d}^{\varphi }\) are full-dimensional convex cones in \(H_{\infty ,2d}^\varphi \simeq {\mathbb {R}}^{\pi (2d)}\).

Sums of squares make up a vanishingly small portion of non-negative forms of fixed degree as the number of variables grows [3]. More precisely, (non-symmetric) non-negative forms and sums of squares in n variables of degree 2d with average 1 on the unit sphere form compact convex sets \({\bar{P}}_{n,2d}\) and \({\bar{\Sigma }}_{n,2d}\) of dimension \(D=\left( {\begin{array}{c}n+d-1\\ d\end{array}}\right) -1\). It was shown in [3] that the ratio of volumes

$$\begin{aligned} \biggl (\frac{{{\,\mathrm{vol}\,}}{\bar{\Sigma }}_{n,2d}}{{{\,\mathrm{vol}\,}}{\bar{P}}_{n,2d}}\biggr )^{{1}/{D}} \end{aligned}$$

converges to 0 for all \(2d\ge 4\) as n goes to infinity. The ratio of volumes is raised to the power 1/D to take into account the effects of large dimension on volumes as the volume of \((1{+}\varepsilon )\Sigma _{n,2d}\) is equal to \((1{+}\varepsilon )^D{{\,\mathrm{vol}\,}}\Sigma _{n,2d}\).

By contrast, the cones of symmetric non-negative forms and sums of squares of fixed degree live in the vector space \(H^S_{n,2d}\) which has fixed dimension \(\pi (2d)\) for a sufficiently large number of variables n. Therefore, to prove that asymptotically symmetric sums of squares make up a non-trivial portion of symmetric non-negative forms (with respect to some transition maps) it suffices to show that both limits are full-dimensional in \(H_{\infty ,2d}^\varphi \simeq {\mathbb {R}}^{\pi (2d)}\), which is done in Theorem 2.4.

Besides the direct limit we also study symmetric power mean inequalities. We can express a symmetric form f in \(H^S_{n,2d}\) in the power mean basis \(p_{\lambda }^{(n)}\) with \(\lambda \vdash 2d\):

$$\begin{aligned} f=\sum _{\lambda \vdash 2d} c_{\lambda }p_{\lambda }^{(n)}. \end{aligned}$$

Using the power mean basis we can define transition maps \(\rho _{m,n}\) by identifying

$$\begin{aligned} \sum _{\lambda \vdash 2d} c_{\lambda }p_{\lambda }^{(m)} \quad \ \text {with}\quad \ \sum _{\lambda \vdash 2d} c_{\lambda }p_{\lambda }^{(n)}. \end{aligned}$$

As before the system of vector spaces \(H^S_{n,2d}\) together with the maps \(\rho _{m,n}\) defines a directed system, and for \(m\ge 2d\) the maps \(\rho _{m,n}\) are isomorphisms. We consider the direct limit \(H_{\infty ,2d}^{\rho }\). Since the maps \(\rho _{m,n}\) are isomorphisms for \(m \ge 2d\), it follows that \(H_{\infty ,2d}^{\rho }\) is again a real vector space of dimension \(\pi (2d)\). The natural isomorphisms \(\rho _n:H_{\infty ,2d}^\rho \rightarrow H_{n,2d}^S\) for \(n \ge 2d\) allow us to view the cones \(\Sigma _{n,2d}^S\) and \({\mathcal {P}}_{n,2d}^S\) as subsets of \(H_{\infty ,2d}^\rho \). We will denote these images by \(\Sigma _{n,2d}^\rho \) and \({\mathcal {P}}_{n,2d}^\rho \) and consider the limit cones:

Definition 2.5

$$\begin{aligned} {\mathfrak {P}}_{2d}&:=\bigl \{{\mathfrak {f}} \in H^\rho _{\infty ,2d}\,:\,\rho _n({\mathfrak {f}})\in {\mathcal {P}}^S_{n,2d}\ \text {for all}\ n\ge 2d \bigr \}\quad \text {and}\\ {\mathfrak {S}}_{2d}&:=\bigl \{{\mathfrak {f}} \in H^\rho _{\infty ,2d}\,:\,\rho _n({\mathfrak {f}})\in \Sigma ^S_{n,2d}\ \text {for all}\ n\ge 2d \bigr \}. \end{aligned}$$

The sequences \({\mathcal {P}}^{\rho }_{n,2d}\) and \(\Sigma _{n,2d}^{\rho }\) are not nested in general. Let \(x=(x_1,\dots ,x_n)\) be a point in \({\mathbb {R}}^n\) and let \({\tilde{x}}\) be the point in \({\mathbb {R}}^{k \cdot n}\) with each coordinate \(x_i\) repeated k times. Then

$$\begin{aligned} p_i^{(k\cdot n)}({\tilde{x}})=\frac{1}{k\cdot n}(k x_1^i+\dots +k x_n^i)=p_i^{(n)}(x). \end{aligned}$$
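This duplication identity is easily checked numerically (a small sketch, with `p` denoting the power mean \(p_i\)):

```python
import numpy as np

def p(x, i):
    return np.mean(np.asarray(x, dtype=float)**i)

rng = np.random.default_rng(2)
x = rng.standard_normal(5)   # a point in R^5
x_tilde = np.repeat(x, 3)    # each coordinate repeated k = 3 times, in R^15

for i in range(1, 5):
    assert np.isclose(p(x_tilde, i), p(x, i))
print("p_i^(kn)(x~) == p_i^(n)(x) for i = 1..4")
```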

It follows that \(f^{(k\cdot n)}\in {\mathcal {P}}^{\rho }_{k\cdot n,2d}\) implies \(f^{(n)}\in {\mathcal {P}}^{\rho }_{n,2d}\) and hence we get the following.

Proposition 2.6

Consider the cones \({\mathcal {P}}^{\rho }_{n,2d}\) as convex subsets of \({\mathbb {R}}^{\pi (2d)}\) using the coefficients \(c_{\lambda }\) of \(p_{\lambda }^{(n)}\). Then for every \(n\ge 2d\) and \(k\in {\mathbb {N}}\) we have

$$\begin{aligned} {\mathcal {P}}^{\rho }_{k \cdot n,2d}\subseteq {\mathcal {P}}^{\rho }_{n,2d}\subset H^{\rho }_{\infty ,2d}\simeq {\mathbb {R}}^{\pi (2d)}. \end{aligned}$$

Remark 2.7

We note that the same proof also yields that \(\Sigma ^{\rho }_{k \cdot n,2d}\subseteq \Sigma ^{\rho }_{n,2d}\).

It is not directly clear from Proposition 2.6 that the sequences \({\mathcal {P}}^{\rho }_{n,2d}\) and \(\Sigma ^{\rho }_{n,2d}\) have limits, which we show separately:

Theorem 2.8

(a) The cones \({\mathfrak {S}}_{2d}\) and \({\mathfrak {P}}_{2d}\) are full-dimensional cones.

(b)

$$\begin{aligned} {\mathfrak {P}}_{2d}=\lim _{n\rightarrow \infty } {\mathcal {P}}^{\rho }_{n,2d} \quad \text {and} \quad {\mathfrak {S}}_{2d}=\lim _{n\rightarrow \infty } \Sigma ^{\rho }_{n,2d}. \end{aligned}$$

Although the cone of symmetric non-negative quartics is strictly bigger than the cone of symmetric quartic sums of squares for any number of variables \(n\ge 4\), we show that in the limit the two cones coincide:

Theorem 2.9

\({\mathfrak {P}}_{4}={\mathfrak {S}}_{4}\).

In particular, this result applies in the situation of power mean inequalities studied in [23], and hence it is possible to verify any such inequality using sums of squares. We conjecture that this happens in arbitrary degree 2d, i.e., we suggest the following.

Conjecture 1

\({\mathfrak {P}}_{2d}={\mathfrak {S}}_{2d}\) for all \(d \in {\mathbb {N}}\).

2.3 Structure of the Article and Guide for the Reader

This article is structured as follows: We provide a characterization of symmetric non-negative forms and the limit cone in Sect. 3. Section 4 provides a detailed study of symmetric sums of squares. To this end we present the general framework of how to use representation theory to study invariant sums of squares in Sect. 4.1. In Sect. 4.2 we outline the basic notions of the representation theory of the symmetric group. These results are then used in Sect. 4.3 to represent the cone of symmetric sums of squares (without restrictions on the degree) in terms of matrix polynomials in Theorems 4.11 and 4.12. The subsequent Sect. 4.4 then discusses how restricting the degree allows for a uniform description of the cones \(\Sigma ^S_{n,2d}\) in terms of the power mean bases \(p_\lambda ^{(n)}\) (Theorem 4.15). The final subsection of Sect. 4 discusses some results on the dual cone which are needed in the sequel. The subsequent Sect. 5 makes these results more concrete, as we give a description of the cone of symmetric quartic sums of squares (Theorem 5.1). Furthermore, we describe the elements of the boundary of \(\Sigma ^S_{n,4}\) that are strictly positive in Theorem 5.3 and give an explicit example of such a polynomial for every \(n\ge 4\) in Example 5.4. From this example it follows in particular that outside the cases where Hilbert showed the equality of sums of squares and non-negative forms there always exist symmetric positive definite forms which are not sums of squares (see Theorem 5.5). In Sect. 6 we explore the two notions of limits and prove Theorem 2.8. We also discuss the connection with the power mean inequalities. These power mean inequalities are then again studied in more detail in the final Sect. 7, where we show in particular that all valid power mean inequalities of degree 4 are sums of squares (Theorem 2.9).

The order of sections was chosen to present the more general statements in Sects. 3, 4, and 6 and then apply them in the quartic case in Sects. 5 and 7. Depending on the reader’s preferences, one can also read Sect. 5 before diving into Sect. 4, and similarly Sect. 7 before Sect. 6, taking the necessary results from previous sections for granted.

3 Symmetric PSD Forms

We begin by characterizing the cone \({\mathfrak {P}}_{2d}\). One key result needed to describe the non-negative symmetric forms is the so-called half degree principle (see [26, 27, 38]): For a natural number \(k\in {\mathbb {N}}\) we define \(A_k\) to be the set of all points in \({\mathbb {R}}^n\) with at most k distinct components, i.e.,

$$\begin{aligned} A_k:=\{x=(x_1,\ldots ,x_n)\in {\mathbb {R}}^n:|\{x_1,\ldots ,x_n\}|\le k\}. \end{aligned}$$

The half degree principle says that a symmetric form of degree \(2d> 2\) is non-negative if and only if it is non-negative on \(A_d\):

Proposition 3.1

(Half degree principle)  Let \(f\in H^S_{n,2d}\) and set \(k:=\max \{2,d\}\). Then f is non-negative if and only if

$$\begin{aligned} f(y)\ge 0\quad \ \text {for all}\ y\in A_{k}. \end{aligned}$$

Remark 3.2

By considering \(f-\epsilon (X_1^2+\dots +X_n^2)^d\) for a sufficiently small \(\epsilon >0\) we see that we can also replace non-negative by positive in the above proposition, thus characterizing strict positivity of symmetric forms.
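To illustrate the half degree principle, consider again the Choi–Lam quartic from the introduction (with the sums read over tuples of pairwise distinct indices), where \(n=4\) and \(d=2\): non-negativity only needs to be tested at points with at most two distinct coordinates, i.e., of the shapes \((t_1,t_1,t_1,t_2)\) and \((t_1,t_1,t_2,t_2)\). A numerical sketch:

```python
import itertools
import numpy as np

def f(x):  # Choi–Lam quartic; sums over tuples of distinct indices
    s22 = sum(x[i]**2 * x[j]**2 for i, j in itertools.permutations(range(4), 2))
    s211 = sum(x[i]**2 * x[j] * x[k] for i, j, k in itertools.permutations(range(4), 3))
    return s22 + s211 - 4 * x[0] * x[1] * x[2] * x[3]

# points with at most 2 distinct coordinates, sampled on the unit circle
angles = np.linspace(0.0, 2 * np.pi, 1000)
patterns = [(3, 1), (2, 2)]  # the 2-partitions of n = 4
worst = min(f(np.array([np.cos(a)] * m1 + [np.sin(a)] * m2))
            for m1, m2 in patterns for a in angles)
print(worst)  # >= 0 up to rounding, as the half degree principle predicts
```

By homogeneity it suffices to scan the unit circle in \((t_1,t_2)\), so this finite scan already probes all of \(A_2\) up to scaling.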

A non-increasing sequence of k natural numbers \(\vartheta :=(\vartheta _1,\ldots , \vartheta _k)\) such that \(\vartheta _1+\dots + \vartheta _k=n\) is called a k-partition of n (written \(\vartheta \vdash _k n\)). Given a symmetric form \(f\in H^S_{n,2d}\) and \(\vartheta \) a k-partition of n we define \(f^{\vartheta }\in {\mathbb {R}}[t_1,\ldots , t_k]\) via

$$\begin{aligned} f^{\vartheta }(t_1,\ldots ,t_k):=f(\underbrace{t_1,\ldots ,t_1}_{\vartheta _1},\underbrace{t_2,\ldots ,t_2}_{\vartheta _2},\ldots ,\underbrace{t_{k},\ldots ,t_{k}}_{\vartheta _{k}}). \end{aligned}$$

From now on assume that \(2d>2\). Then the half-degree principle implies that non-negativity of \(f=\sum _{\lambda \vdash 2d}c_{\lambda }p_{\lambda }\) is equivalent to non-negativity of

$$\begin{aligned} f^{\vartheta }:=\sum _{\lambda \vdash 2d}c_{\lambda }p_{\lambda }^{\vartheta }(t_1,\ldots ,t_k) \end{aligned}$$

for all \(\vartheta \vdash _d n\), since the polynomials \(f^{\vartheta }\) give the values of f at all points with at most d distinct components. We note that for all \(i\in {\mathbb {N}}\) we have

$$\begin{aligned} p_i^{\vartheta }=\frac{1}{n}(\vartheta _1t_1^i+\vartheta _2t_2^i+\cdots +\vartheta _dt_d^i). \end{aligned}$$

For a partition \(\lambda =(\lambda _1,\dots ,\lambda _l) \vdash 2d\) we define a 2d-variate form \(\Phi _{\lambda }\) in the variables \(s_1,\dots ,s_d\) and \(t_1,\dots , t_d\) by

$$\begin{aligned} \Phi _{\lambda }(s_1,\dots ,s_d,t_1,\dots ,t_d)=\prod _{i=1}^l(s_1t_1^{\lambda _i}+s_2t_2^{\lambda _i}+\cdots +s_dt_d^{\lambda _i}) \end{aligned}$$

and use it to associate to any form \(f \in H^S_{n,2d}\), \(f=\sum _{\lambda \vdash 2d}c_{\lambda }p_{\lambda }\), the form

$$\begin{aligned} \Phi _f:=\sum _{\lambda \vdash 2d}c_{\lambda }\Phi _{{\lambda }}. \end{aligned}$$

Note that

$$\begin{aligned} \Phi _{\lambda }\biggl (\frac{\vartheta _1}{n},\dots ,\frac{\vartheta _d}{n},t_1,\dots ,t_d\biggr )=p_{\lambda }^{\vartheta }(t_1,\ldots ,t_d). \end{aligned}$$
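This identity can be confirmed symbolically in a small example, say \(\lambda =(2,2)\), \(n=4\), \(\vartheta =(3,1)\) (a sympy sketch):

```python
import sympy as sp

s1, s2, t1, t2 = sp.symbols('s1 s2 t1 t2')
n, theta, lam = 4, (3, 1), (2, 2)

# Phi_lambda with d = 2
Phi = sp.Mul(*[s1 * t1**li + s2 * t2**li for li in lam])
lhs = Phi.subs({s1: sp.Rational(theta[0], n), s2: sp.Rational(theta[1], n)})

# p_lambda^theta: restrict the power means to (t1, t1, t1, t2)
X = sp.symbols('x1:5')
p_lam = sp.Mul(*[sum(x**li for x in X) / n for li in lam])
rhs = p_lam.xreplace(dict(zip(X, [t1] * theta[0] + [t2] * theta[1])))

assert sp.expand(lhs - rhs) == 0
print("Phi_lambda(theta/n, t) == p_lambda^theta")
```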

We define the set

$$\begin{aligned} W_{n}=\bigl \{w=(w_1,\ldots ,w_d)\in {\mathbb {R}}^d \,:\,n\cdot w_i \in {\mathbb {N}}\cup \{0\}\ \text {and}\ w_1+\cdots +w_d=1\bigr \}. \end{aligned}$$

It follows from the arguments above that \(f \in H^S_{n,2d}\) is non-negative if and only if the d-variate forms \(\Phi _{f}(w,t)\) are non-negative forms in t for all \(w \in W_n\). This is summarized in the following corollary.

Corollary 3.3

Let \(f=\sum _{\lambda \vdash 2d }c_{\lambda }p_{\lambda }\) be a form in \(H^S_{n,2d}\). Then f is non-negative (positive) if and only if for all \(w\in W_n\) the d-variate forms \(\Phi _f(w,t)\) are non-negative (positive).

This result enables us to characterize the elements of \({\mathfrak {P}}_{2d}\). We expand the sets \(W_n\) to the standard simplex \(\Delta \) in \({\mathbb {R}}^d\):

$$\begin{aligned} \Delta :=\bigl \{\alpha =(\alpha _1,\ldots ,\alpha _d)\in [0,1]^d\,:\,\alpha _1+\dots +\alpha _d=1\bigr \}. \end{aligned}$$

Then we have the following theorem characterizing \({\mathfrak {P}}_{2d}\).

Theorem 3.4

Let \({\mathfrak {f}}\in H^{\rho }_{\infty ,2d}\) be the sequence defined by \(f^{(n)}=\sum _{\lambda \vdash 2d}c_\lambda p_\lambda ^{(n)}\). Then \({\mathfrak {f}}\in {\mathfrak {P}}_{2d}\) if and only if the 2d-variate polynomial \(\Phi _{f}(s,t)\) is non-negative on \(\Delta \times {\mathbb {R}}^d\).

Proof

Suppose that \(\Phi _{f}(s,t)\) is non-negative on \(\Delta \times {\mathbb {R}}^d\). Let \(f^{(n)}=\sum c_{\lambda }p^{(n)}_{\lambda }\). Since \(W_{n}\subset \Delta \) for all n, we see from Corollary 3.3 that \(f^{(n)}\) is a non-negative form for all n and thus \({\mathfrak {f}} \in {\mathfrak {P}}_{2d}\).

On the other hand, suppose there exists \(\alpha _0 \in \Delta \) such that \(\Phi _{f}(\alpha _0,t)<0\) for some \(t\in {\mathbb {R}}^d\). Then we can find a rational point \(\alpha \in \Delta \) with all positive coordinates and sufficiently close to \(\alpha _0\) so that \(\Phi _{f}(\alpha ,t) < 0\). Let h be the least common multiple of the denominators of \(\alpha \). Then we have \(\alpha \in W_{ah}\) for all \(a\in {\mathbb {N}}\). Choose a such that \(ah \ge 2d\). Then \(f^{(ah)}\) is negative at the corresponding point and we have \({\mathfrak {f}} \notin {\mathfrak {P}}_{2d}\). \(\square \)

4 Symmetric Sums of Squares

We now consider symmetric sums of squares. It was already observed in [12] that invariance under a group action allows one to impose strong restrictions on the squares appearing in a sum of squares decomposition. First, we explain the general approach, which uses representation theory and can be applied to other groups as well. Our presentation follows the ideas of [12], which we present in a slightly different way; the interested reader is advised to consult [12] for more details.

4.1 Invariant Sums of Squares

Let G be a finite group acting linearly on \({\mathbb {R}}^{n}\). Since G acts linearly on \({\mathbb {R}}^n\), the \({\mathbb {R}}\)-vector space \({\mathbb {R}}[X]\) can also be viewed as a G-module, and by Maschke’s theorem (the reader may consult for example [34] for the basics of linear representation theory) there exists a decomposition of the form

$$\begin{aligned} {\mathbb {R}}[X] = V^{(1)} \oplus V^{(2)} \oplus \cdots \oplus V^{(h)} \end{aligned}$$
(4.1)

with \(V^{(j)} = W^{(j)}_1 \oplus \cdots \oplus W^{(j)}_{\eta _j}\) and \(\nu _j := \dim W^{(j)}_i\). Here, the \(W^{(j)}_i\) are the irreducible components and the \(V^{(j)}\) are the isotypic components, i.e., the direct sums of isomorphic irreducible components. The component with respect to the trivial irreducible representation is the invariant ring \({\mathbb {R}}[X]^G\). The elements of the other isotypic components are called semi-invariants. It is classically known that each isotypic component is a finitely generated \({\mathbb {R}}[X]^{G}\)-module (see [36, Theorem 1.3]). To any element \(f\in H_{n,d}\) we can associate a symmetrization by which we mean its image under the following linear map:

Definition 4.1

For a finite group G the linear map \({\mathcal {R}}_G:H_{n,d}\rightarrow H_{n,d}^{G}\) defined by

$$\begin{aligned} {\mathcal {R}}_G(f):=\frac{1}{|G|}\sum _{\sigma \in G}\sigma (f) \end{aligned}$$

is called the Reynolds operator of G. In the case of \(G={\mathcal {S}}_n\) we say that \({\mathcal {R}}_{{\mathcal {S}}_n}(f)\) is a symmetrization of f and we write \({{\,\mathrm{sym}\,}}f\).

For a set of polynomials \(f_1,\ldots ,f_l\) we will write \(\sum {\mathbb {R}}\{f_1,\ldots ,f_l\}^2\) to refer to the sums of squares of elements in the linear span of the polynomials \(f_1,\ldots ,f_l\). It has already been observed by Gatermann and Parrilo [12] that invariant sums of squares can be written as sums of squares of semi-invariants using Schur’s Lemma. However, a closer inspection of the situation allows in many cases, as for example in the case of \({\mathcal {S}}_n\), a finer analysis of the decomposition into sums of squares. Consider a set of forms \(\{f_{1,1},\ldots ,f_{1,\eta _1},f_{2,1},\ldots ,f_{h,\eta _h}\}\) such that for fixed j the forms \(f_{j,i}\) generate the irreducible components of \(V^{(j)}\). Further assume that they are chosen in such a way that for each j and each pair (l, k) there exists a G-isomorphism \(\rho _{l,k}^{(j)}:V^{(j)}\rightarrow V^{(j)}\) which maps \(f_{j,l}\) to \(f_{j,k}\). Now for every j we consider the set \(\{f_{j,1},\ldots ,f_{j,\eta _j}\}\), which contains exactly one polynomial per irreducible module. Since every irreducible module is generated by the G-orbit of a single element, every such set uniquely describes the chosen decomposition. We call such a set a symmetry basis and show that invariant sums of squares are in fact symmetrizations of sums of squares of a symmetry basis. The following theorem, which we state in a slightly more general setup, highlights the use of a symmetry basis.

Theorem 4.2

Let G be a finite group and assume that all real irreducible representations \(V\subset H_{n,d}\) are also irreducible over their complexification. Let p be a form of degree 2d that is invariant with respect to G. If p is a sum of squares, then p can be written in the form

$$\begin{aligned} p=\sum _{j=1}^{h} q_j,\quad \text {where each}\quad q_j\in \sum {\mathbb {R}}\{f_{j,1},\ldots ,f_{j,\eta _j}\}^2. \end{aligned}$$

The main tool for the proof is Schur’s Lemma, and we remark that a dual version of this theorem can be found in [28, Thm. 3.4] and [25].

Proof

Let \(p\in H_{n,2d}\) be a G-invariant sum of squares. Then there exists a symmetric positive semidefinite bilinear form

$$\begin{aligned} B:H_{n,d}\times H_{n,d}\rightarrow {\mathbb {R}}\end{aligned}$$

which is a Gram matrix for p, i.e., for every \(x\in {\mathbb {R}}^{n}\) we can write \(p(x)=B(x^{d},x^{d})\), where \(x^{d}\) stands for the d-th power of x in the symmetric algebra of \({\mathbb {R}}^{n}\). Since p is G-invariant, we have \(p={\mathcal {R}}_G(p)\) and by linearity we may assume that B is a G-invariant bilinear form. Now decompose \(H_{n,d}\) as in (4.1) and consider the restriction of B to

$$\begin{aligned} B^{ij}:V^{(i)}\times V^{(j)}\rightarrow {\mathbb {R}}\quad \text {with}\quad i\ne j. \end{aligned}$$

For every \(v\in V^{(i)}\) the bilinear form \(B^{ij}\) defines a linear map \(\phi _v:V^{(j)}\rightarrow {\mathbb {R}}\) via \(\phi _v(w):=B^{ij}(v,w)\) and so the form \(B^{ij}\) naturally can be seen as an element of \({{\,\mathrm{Hom}\,}}^{G}\bigl ({V^{(i)}}^{*},V^{(j)}\bigr )\). Since real representations are self-dual we have that \({V^{(i)}}^{*}\) and \(V^{(j)}\) are not isomorphic and thus by Schur’s Lemma we find that \(B^{ij}(v,w)=0\) for all \(v\in V^{(i)}\) and \(w\in V^{(j)}\). So the isotypic components are orthogonal with respect to B and hence it suffices to look at

$$\begin{aligned} B^{jj}:V^{(j)}\times V^{(j)}\rightarrow {\mathbb {R}}\end{aligned}$$

individually. We have \(V^{(j)}=\bigoplus _{k=1}^{l} W^{(j)}_{k}\), where each \(W^{(j)}_k\) is generated by a semi-invariant \(f_{j,k}\), i.e., there is a basis \(f_{j,k,1},\ldots ,f_{j,k,\nu _j}\) for every \(W^{(j)}_k\) such that the basis elements \(f_{j,k,i}\) are taken from the orbit of \(f_{j,k}\) under G. To again use Schur’s Lemma we identify \(B^{jj}\) with its complexification, which is possible since we assumed that all representations are irreducible also over \({\mathbb {C}}\). Consider a pair \(W^{(j)}_{k_1}, W^{(j)}_{k_2}\), where we allow \(k_1=k_2\). To apply Schur’s Lemma we relate the bilinear form \(B^{jj}\) to a linear map \(\psi ^{(j)}_{k_1,k_2}:W^{(j)}_{k_1}\rightarrow W^{(j)}_{k_2}\) defined on the generating set \(f_{j,k_1,1},\ldots ,f_{j,k_1,\nu _j}\) by

$$\begin{aligned} \psi ^{(j)}_{k_1,k_2}(f_{j,k_1,u}):=\sum _{v}B^{jj}(f_{j,k_1,u},f_{j,k_2,v}) f_{j,k_2,v}. \end{aligned}$$

Since we assumed that \(W^{(j)}_{k}\) are absolutely irreducible we have by Schur’s Lemma

$$\begin{aligned} \dim {{{\,\mathrm{Hom}\,}}^G\bigl (W^{(j)}_{k_1},W^{(j)}_{k_2}\bigr )}=1 \end{aligned}$$

and we can conclude that this map is unique up to scalar multiplication. Therefore it can be represented in the form \(\psi ^{(j)}_{k_1,k_2}=c_{k_1,k_2}\rho _{k_1,k_2}\), where \(\rho _{k_1,k_2}\) is the G-isomorphism with \(\rho _{k_1,k_2}(f_{j,k_1})=f_{j,k_2}\) as above. It therefore follows that

$$\begin{aligned} B^{jj}(f_{j,k_1,u},f_{j,k_2,v})=\delta _{u,v}c_{k_1,k_2}, \end{aligned}$$

where \(\delta _{u,v}\) denotes the Kronecker delta. By considering the matrix of B with respect to the basis \(f_{j,k,l}\) of \(H_{n,d}\) we see that p has the desired decomposition. \(\square \)

Remark 4.3

The above statement also holds true in the situation where one looks at sums of squares of elements of an arbitrary G-closed submodule \(T\subset {\mathbb {R}}[X]\).

In some situations it is convenient to formulate the above Theorem 4.2 in terms of matrix polynomials, i.e., matrices with polynomial entries. Given two \(k\times k\) symmetric matrices A and B define their inner product as \(\langle A,B \rangle ={\text {trace}}{(AB)}\). Define a block-diagonal symmetric matrix A with h blocks \(A^{(1)},\dots ,A^{(h)}\) with the entries of each block given by:

$$\begin{aligned} A^{(j)}_{ik}=g_{ik}^{(j)}={\mathcal {R}}_G(f_{j,i}\cdot f_{j,k}). \end{aligned}$$

Then Theorem 4.2 is equivalent to the following statement:

Corollary 4.4

With the conditions as in Theorem 4.2 let \(p\in {\mathbb {R}}[X]^G\). Then p is a sum of squares of polynomials in T if and only if p can be written as \(p=\langle A,B \rangle \), where B is a positive semidefinite matrix with real entries.
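For intuition, the pairing \(p=\langle A,B \rangle \) can be made concrete already for the trivial group \(G=\{e\}\), so that \({\mathcal {R}}_G\) is the identity and \(A_{ik}=f_if_k\). The following minimal numeric sketch (with hypothetical basis polynomials \(f_1=x^2\), \(f_2=xy\) chosen purely for illustration) checks that \(\langle A,B \rangle \) with a positive semidefinite B is indeed a sum of squares, obtained from a Cholesky factorization \(B=L^tL\):

```python
# Sketch of Corollary 4.4 for the trivial group (so R_G is the identity and
# A_ik = f_i * f_k): p = <A, B> with B positive semidefinite is a sum of
# squares via a Cholesky factorization B = L^t L. The basis polynomials
# f1 = x^2, f2 = x*y are hypothetical choices for illustration.

def f1(x, y):
    return x * x

def f2(x, y):
    return x * y

B = [[1.0, 1.0],
     [1.0, 2.0]]                 # positive semidefinite Gram matrix

def inner(A, B):
    """Trace inner product <A, B> = trace(A B) of symmetric matrices."""
    n = len(A)
    return sum(A[i][k] * B[k][i] for i in range(n) for k in range(n))

def p(x, y):
    fs = [f1(x, y), f2(x, y)]
    A = [[fi * fk for fk in fs] for fi in fs]    # A_ik = f_i * f_k
    return inner(A, B)

# B = L^t L with L = [[1, 1], [0, 1]] exhibits p explicitly as
# p = (f1 + f2)^2 + f2^2, a sum of squares.
def p_sos(x, y):
    return (f1(x, y) + f2(x, y)) ** 2 + f2(x, y) ** 2
```

The same mechanism drives the general case; the Reynolds operator \({\mathcal {R}}_G\) merely replaces each product \(f_if_k\) by its G-average.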

We now aim to apply Theorem 4.2 to a symmetric form \(p \in H_{n,2d}^S\). In order to do this we need to identify an explicit representative in every irreducible \({\mathcal {S}}_n\)-submodule of \(H_{n,d}\). We first recall some useful facts from the representation theory of \({\mathcal {S}}_n\). The irreducible representations in this case are the so-called Specht modules, which we will define in the following section. We refer to [18, 31] for more details.

4.2 Specht Modules as Polynomials

Let \(\lambda =(\lambda _1,\lambda _2,\ldots ,\lambda _l)\vdash n\) be a partition of n. A Young tableau of shape \(\lambda \) consists of l rows, with \(\lambda _i\) entries in the i-th row. Each entry is an element in \(\{1, \ldots , n\}\), and each of these numbers occurs exactly once. A standard Young tableau is a Young tableau in which all rows and columns are increasing. An element \(\sigma \in {\mathcal {S}}_n\) acts on a Young tableau by replacing each entry by its image under \(\sigma \). Two Young tableaux \(T_1\) and \(T_2\) are called row-equivalent if the corresponding rows of the two tableaux contain the same numbers. The classes of row-equivalent Young tableaux are called tabloids, and the equivalence class of a tableau T is denoted by \(\{T\}\). The stabilizer of a row-equivalence class is called the row-stabilizer, denoted by \({{\,\mathrm{RStab}\,}}_T\). If \(R_1,\ldots ,R_l\) are the rows of a given Young tableau T this group can be written as

$$\begin{aligned} {{\,\mathrm{RStab}\,}}_T = {\mathcal {S}}_{R_1}\times {\mathcal {S}}_{R_2}\times \cdots \times {\mathcal {S}}_{R_l}, \end{aligned}$$

where \({\mathcal {S}}_{R_i}\) is the symmetric group on the elements of row i. The action of \({\mathcal {S}}_n\) on the equivalence classes of row-equivalent Young tableaux gives rise to the permutation module \(M^{\lambda }\) corresponding to \(\lambda \) which is the \({\mathcal {S}}_n\)-module defined by

$$\begin{aligned} M^\lambda ={\mathbb {R}}\{ \{T_1\}, \ldots ,\{T_s\}\}, \end{aligned}$$

where \(\{T_1\}, \ldots , \{T_s\}\) is a complete list of \(\lambda \)-tabloids and \({\mathbb {R}}\{ \{T_1\}, \ldots ,\{T_s\}\}\) denotes their \({\mathbb {R}}\)-linear span.

Let T be a Young tableau for \(\lambda \vdash n\), and let \(C_i\) be the entries in the i-th column of T. The group

$$\begin{aligned} {{\,\mathrm{CStab}\,}}_T= {\mathcal {S}}_{C_1}\times {\mathcal {S}}_{C_2}\times \cdots \times {\mathcal {S}}_{C_\nu }, \end{aligned}$$

where \({\mathcal {S}}_{C_i}\) is the symmetric group on the elements of column i, is called the column stabilizer of T. The irreducible representations of the symmetric group \({\mathcal {S}}_n\) are in one-to-one correspondence with the partitions of n, and they are given by the Specht modules, as explained below. For \(\lambda \vdash n\), the polytabloid associated with T is defined by

$$\begin{aligned} e_T \,= \sum _{\sigma \in {{\,\mathrm{CStab}\,}}_T}{{\,\mathrm{sgn}\,}}(\sigma )\sigma \{T\}. \end{aligned}$$

Then for a partition \(\lambda \vdash n\), the Specht module \(S^{\lambda }\) is the submodule of the permutation module \(M^\lambda \) spanned by the polytabloids \(e_T\). The dimension of \(S^{\lambda }\) is given by the number of standard Young tableaux for \(\lambda \vdash n\), which we will denote by \(s_\lambda \).

A classical construction of Specht realizes Specht modules as submodules of the polynomial ring (see [35]): For \(\lambda \vdash n\) let \(T_{\lambda }\) be a standard Young tableau of shape \(\lambda \) and \({\mathcal {C}}_1,\ldots ,{\mathcal {C}}_{\nu }\) be the columns of \(T_\lambda \). To \(T_\lambda \) we associate the monomial \(X^{T_{\lambda }}:=\prod _{i=1}^{n}X_i^{m(i)-1}\), where m(i) is the index of the row of \(T_{\lambda }\) containing i. Note that for any \(\lambda \)-tabloid \(\{T_{\lambda }\}\) the monomial \(X^{T_{\lambda }}\) is well defined, and the mapping \(\{T_{\lambda }\} \mapsto X^{T_{\lambda }}\) is an \({\mathcal {S}}_n\)-isomorphism. For any column \({\mathcal {C}}_i\) of \(T_\lambda \) we denote by \({\mathcal {C}}_i(j)\) the element in the j-th row and we associate to it a Vandermonde determinant:

$$\begin{aligned} {{\,\mathrm{Van}\,}}_{{\mathcal {C}}_{i}}:=\,\det {\begin{pmatrix} X_{ {\mathcal {C}}_i(1)}^0&{}\quad \ldots &{}\quad X_{{\mathcal {C}}_i(k)}^0 \\ \vdots &{}\quad \ddots &{}\quad \vdots \\ X_{ {\mathcal {C}}_i(1)}^{k-1}&{}\quad \ldots &{}\quad X_{{\mathcal {C}}_i(k)}^{k-1} \end{pmatrix}}=\,\prod _{j<l}(X_{{\mathcal {C}}_i(j)}-X_{{\mathcal {C}}_i(l)}). \end{aligned}$$

The Specht polynomial \(sp_{T_{\lambda }}\) associated to \(T_\lambda \) is defined as

$$\begin{aligned} sp_{T_{{\lambda }}} := \prod _{i=1}^{\nu } {{\,\mathrm{Van}\,}}_{{\mathcal {C}}_{i}}\,=\sum _{\sigma \in {{\,\mathrm{CStab}\,}}_{T_{\lambda }}}{{\,\mathrm{sgn}\,}}(\sigma )\sigma (X^{T_{\lambda }}), \end{aligned}$$

where \({{\,\mathrm{CStab}\,}}_{T_{\lambda }}\) is the column stabilizer of \(T_\lambda \). By the \({\mathcal {S}}_n\)-isomorphism \(\{T_{\lambda }\} \mapsto X^{T_{\lambda }}\), \({\mathcal {S}}_n\) acts on \(sp_{T_{{\lambda }}}\) in the same way as on the polytabloid \(e_{T_{\lambda }}\). If \(T_{\lambda ,1},\ldots ,T_{\lambda ,k}\) denote all standard Young tableaux associated to \(\lambda \), then the polynomials \(sp_{T_{\lambda ,1}},\ldots ,sp_{T_{\lambda ,k}}\) are called the Specht polynomials associated to \(\lambda \). We then have the following proposition; see [35].
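The construction is easy to carry out for small tableaux. The following sketch (our own helper names; polynomials stored as dictionaries mapping exponent tuples to integer coefficients) expands \(sp_{T_{\lambda }}\) as the product of the column factors \(\prod _{j<l}(X_{{\mathcal {C}}_i(j)}-X_{{\mathcal {C}}_i(l)})\), with the sign convention of the displayed formula:

```python
from itertools import combinations

# Sketch expanding the Specht polynomial sp_T as the product of the column
# factors prod_{j<l} (X_{C(j)} - X_{C(l)}), with the sign convention of the
# displayed formula. Polynomials are dicts mapping exponent tuples to
# integer coefficients; helper names are ours.

def poly_mul(p, q, n):
    """Product of two polynomials in n variables."""
    r = {}
    for ea, ca in p.items():
        for eb, cb in q.items():
            e = tuple(ea[i] + eb[i] for i in range(n))
            r[e] = r.get(e, 0) + ca * cb
    return {e: c for e, c in r.items() if c != 0}

def linear(i, j, n):
    """The linear form X_i - X_j (variables are 1-based)."""
    ei = tuple(1 if k == i - 1 else 0 for k in range(n))
    ej = tuple(1 if k == j - 1 else 0 for k in range(n))
    return {ei: 1, ej: -1}

def specht_polynomial(tableau, n):
    """tableau is a list of rows, e.g. [[1, 2], [3]] for shape (2, 1)."""
    sp = {tuple([0] * n): 1}                     # the constant 1
    for c in range(len(tableau[0])):
        col = [row[c] for row in tableau if c < len(row)]
        for j, l in combinations(range(len(col)), 2):
            sp = poly_mul(sp, linear(col[j], col[l], n), n)
    return sp

# specht_polynomial([[1, 2], [3]], 3) is X_1 - X_3, the Vandermonde
# factor of the single non-trivial column {1, 3}.
```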

Proposition 4.5

The Specht polynomials \(sp_{T_{\lambda ,1}},\ldots ,sp_{T_{\lambda ,k}}\) span an \({\mathcal {S}}_n\)-submodule of \({\mathbb {R}}[X]\) which is isomorphic to the Specht module \(S^{\lambda }\).

The Specht polynomials thus span a submodule of \({\mathbb {R}}[X]\) isomorphic to \(S^{\lambda }\). In order to get a decomposition of the entire ring \({\mathbb {R}}[X]\) we will use a generalization of this construction, which is described in the next section.

4.3 Higher Specht Polynomials and the Decomposition of \({\mathbb {R}}[X]\)

In what follows we will need to understand the decomposition of the polynomial ring \({\mathbb {R}}[X]\) and of the \({\mathcal {S}}_n\)-module \(H_{n,d}\) into \({\mathcal {S}}_n\)-irreducible representations. Notice that such a decomposition is not unique. It is classically known that the ring \({\mathbb {R}}[X]\) is a free module of rank n! over the ring of symmetric polynomials. Similarly, every isotypic component is a free \({\mathbb {R}}[X]^{{\mathcal {S}}_n}\)-module. Therefore, one general strategy for obtaining a symmetry basis of \({\mathbb {R}}[X]\) consists in building a free module basis for \({\mathbb {R}}[X]\) over \({\mathbb {R}}[X]^{{\mathcal {S}}_n}\) which additionally is symmetry adapted, i.e., which respects a decomposition into irreducible \({\mathcal {S}}_n\)-modules. One such construction, which generalizes Specht’s original construction presented above, is due to Ariki et al. [1].

Definition 4.6

Let \(n\in {\mathbb {N}}\).

  1. (i)

    A finite sequence \(w=(w_1,\ldots ,w_n)\) of non-negative integers is called a word of length n. A word w of length n is called a permutation if its entries are exactly the numbers \(1,\ldots ,n\), each occurring once.

  2. (ii)

    Given a word w and a permutation u we define the monomial associated to the pair as \(X_u^{w}:=X_{u_1}^{w_1}\cdots X_{u_n}^{w_n}\).

  3. (iii)

    Given a permutation w we associate to w its index, denoted by i(w), which is the following word of length n. The word i(w) contains 0 at the position where 1 occurs in w; the remaining entries are defined recursively by the following rule: if the entry of i(w) at the position of k is c, then the entry of i(w) at the position of \(k+1\) is also c if \(k+1\) lies to the right of k in w, and it is \(c+1\) if \(k+1\) lies to the left of k.

  4. (iv)

    For \(\lambda \vdash n\) and T being a standard Young tableau of shape \(\lambda \), we define the word of T, denoted by w(T), by collecting the entries of T from the bottom to the top in consecutive columns starting from the left.

  5. (v)

    For a pair (T, V) of standard \(\lambda \)-tableaux we define the monomial associated to this pair as \(X_{w(V)}^{i(w(T))}\).
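The definition above is entirely algorithmic, and a short sketch makes the recursion for the index explicit (helper names are ours; tableaux are lists of rows):

```python
# Sketch of Definition 4.6: the word of a standard tableau (columns read
# bottom to top, from the leftmost column) and the index i(w) of a
# permutation word. Helper names are ours; tableaux are lists of rows.

def word_of_tableau(t):
    w = []
    for c in range(len(t[0])):
        for row in reversed(t):          # bottom to top within column c
            if c < len(row):
                w.append(row[c])
    return w

def index(w):
    """i(w): 0 sits at the position of 1; if the entry at the position of k
    is c, the entry at the position of k+1 is c when k+1 lies to the right
    of k in w, and c+1 when it lies to the left."""
    pos = {v: p for p, v in enumerate(w)}
    i = [0] * len(w)
    for k in range(1, len(w)):
        c = i[pos[k]]
        i[pos[k + 1]] = c if pos[k + 1] > pos[k] else c + 1
    return i

# e.g. word_of_tableau([[1, 2, 4], [3, 5]]) == [3, 1, 5, 2, 4],
# and index([3, 1, 5, 2, 4]) == [1, 0, 2, 0, 1].
```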

Example 4.7

Consider the standard Young tableau T of shape \(\lambda =(3,2)\) with rows (1, 2, 4) and (3, 5). The resulting word is given by \(w(T)=31524\), with \(i(w(T))=10201\). Taking for V the standard tableau of the same shape with rows (1, 3, 5) and (2, 4), so that \(w(V)=21435\), we obtain \(X_{w(V)}^{i(w(T))}=X_1^0X_2^1X_3^0X_4^2X_5^1\).

Definition 4.8

Let \(\lambda \vdash n\) and T be a \(\lambda \)-tableau. Then the Young symmetrizer associated to T is the element in the group algebra \({\mathbb {R}}[{\mathcal {S}}_n]\) defined to be

$$\begin{aligned} \varepsilon _T\,=\sum _{\sigma \in {{\,\mathrm{RStab}\,}}_T}\sum _{\tau \in {{\,\mathrm{CStab}\,}}_T} {{\,\mathrm{sgn}\,}}( \tau ) \tau \sigma . \end{aligned}$$

Now let T be a standard Young tableau, and define the higher Specht polynomial associated with the pair (T, V) to be

$$\begin{aligned} F_V^T(X_1,\ldots ,X_n):=\varepsilon _V\bigl (X_{w(V)}^{i(w(T))}\bigr ). \end{aligned}$$

For \(\lambda \vdash n\) we will denote by

$$\begin{aligned} {\mathcal {F}}_\lambda =\bigl \{F_V^{T}:T,V \text { run over all standard } \lambda \text {-tableaux}\bigr \} \end{aligned}$$

the set of all standard higher Specht polynomials corresponding to \(\lambda \) and by

$$\begin{aligned} {\mathcal {F}}=\bigcup _{\lambda \vdash n} {\mathcal {F}}_\lambda \end{aligned}$$

the set of all standard higher Specht polynomials.
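A small sketch shows the Young symmetrizer of Definition 4.8 in action (helper names are ours): for \(n=3\), V with rows (1, 2) and (3), and \(T=V\), Definition 4.6 yields the monomial \(X_3\), and applying \(\varepsilon _V\) to it produces the corresponding higher Specht polynomial.

```python
from itertools import permutations

# Sketch of Definition 4.8: apply the Young symmetrizer eps_V to a monomial,
# as in the construction of F_V^T. Monomials are exponent tuples; helper
# names are ours. For n = 3, V with rows (1, 2) and (3), and T = V,
# Definition 4.6 yields the monomial X_3, i.e. the tuple (0, 0, 1).

def block_perms(blocks, n):
    """All permutations of {1,...,n} (as dicts) permuting each block internally."""
    perms = [{i: i for i in range(1, n + 1)}]
    for block in blocks:
        new = []
        for g in perms:
            for img in permutations(block):
                h = dict(g)
                h.update(zip(block, img))
                new.append(h)
        perms = new
    return perms

def sign(g, n):
    """Sign of the permutation g via its cycle type."""
    s, seen = 1, set()
    for i in range(1, n + 1):
        if i not in seen:
            j, clen = i, 0
            while j not in seen:
                seen.add(j)
                j = g[j]
                clen += 1
            s *= (-1) ** (clen - 1)
    return s

def young_symmetrize(V, mono, n):
    """eps_V applied to the monomial with exponent tuple mono."""
    rows = [list(r) for r in V]
    cols = [[r[c] for r in V if c < len(r)] for c in range(len(V[0]))]
    F = {}
    for sig in block_perms(rows, n):              # row stabilizer
        for tau in block_perms(cols, n):          # column stabilizer
            g = {i: tau[sig[i]] for i in range(1, n + 1)}   # tau o sigma
            e = [0] * n
            for i in range(1, n + 1):
                e[g[i] - 1] = mono[i - 1]         # the action X_i -> X_{g(i)}
            F[tuple(e)] = F.get(tuple(e), 0) + sign(tau, n)
    return {e: c for e, c in F.items() if c != 0}

# young_symmetrize([[1, 2], [3]], (0, 0, 1), 3) gives 2*(X_3 - X_1),
# a multiple of the Specht polynomial of the column {1, 3}.
```

As expected, the lowest-degree higher Specht polynomial of a shape recovers (a scalar multiple of) the classical Specht polynomial.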

Remark 4.9

Let \(s_{\lambda }\) denote the number of standard Young tableaux of shape \(\lambda \). It follows from the so-called Robinson–Schensted correspondence (see [31]) that

$$\begin{aligned} \sum _{\lambda \vdash n} s_\lambda ^2=n! \end{aligned}$$

Therefore the cardinality of \({\mathcal {F}}\) is exactly \(n!\).
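The identity is easy to verify computationally for small n; the sketch below counts standard Young tableaux by the elementary corner-removal recursion rather than the hook length formula (helper names are ours):

```python
from functools import lru_cache

# Sketch checking the Robinson-Schensted identity of Remark 4.9 for n = 5:
# s_lambda is computed by the recursion "the largest entry sits in a
# removable corner". Helper names are ours.

@lru_cache(maxsize=None)
def syt_count(shape):
    """s_lambda: number of standard Young tableaux of the given shape."""
    if sum(shape) <= 1:
        return 1
    total = 0
    for r in range(len(shape)):
        # row r ends in a removable corner iff the next row is shorter
        if r == len(shape) - 1 or shape[r] > shape[r + 1]:
            smaller = list(shape)
            smaller[r] -= 1
            if smaller[-1] == 0:
                smaller.pop()
            total += syt_count(tuple(smaller))
    return total

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

# Sum of s_lambda^2 over all lambda |- 5 equals 5! = 120.
rs_sum = sum(syt_count(lam) ** 2 for lam in partitions(5))
```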

The importance of the higher Specht polynomials now is summarized in the following theorem which can be found in [1, Thm. 1].

Theorem 4.10

The following holds for the set of higher Specht polynomials.

  1. (i)

    The set \({\mathcal {F}}\) is a free basis of the ring \({\mathbb {R}}[X]\) over the invariant ring \({\mathbb {R}}[X]^{{\mathcal {S}}_n}\).

  2. (ii)

    For any \(\lambda \vdash n\) and standard \(\lambda \)-tableau T, the space spanned by the polynomials in

    $$\begin{aligned} {\mathcal {F}}^T_\lambda :=\bigl \{F^T_V:V \text { runs over all standard } \lambda \text {-tableaux}\bigr \} \end{aligned}$$

    is an irreducible \({\mathcal {S}}_n\)-module isomorphic to the Specht module \(S^{\lambda }\).

For every \(\lambda \vdash n\) we denote by \(V_0^{\lambda }\) the standard \(\lambda \)-tableau with entries \(\{1,\ldots ,\lambda _1\}\) in the first row, \(\{\lambda _1+1,\ldots ,\lambda _1+\lambda _2\}\) in the second row, and so on. Consider the set

$$\begin{aligned} {\mathcal {Q}}_\lambda :=\bigl \{F^T_{V_0^{\lambda }}:T \text { runs over all standard } \lambda \text {-tableaux}\bigr \}, \end{aligned}$$

which is of cardinality \(s_\lambda \). The set \({\mathcal {Q}}_\lambda \) is a symmetry basis of the vector space spanned by \({\mathcal {F}}_\lambda \). Using these polynomials we define an \(s_\lambda \times s_\lambda \) matrix polynomial \(Q^{\lambda }\) by:

$$\begin{aligned} Q^\lambda ({T,T'}):={{\,\mathrm{sym}\,}}{F^T_{V_0^{\lambda }}F^{T'}_{V_0^{\lambda }}}, \end{aligned}$$
(4.2)

where \(T,T'\) run over all standard \(\lambda \)-tableaux. Since by (i) in Theorem 4.10 we know that every polynomial \(h\in {\mathbb {R}}[X]\) can be uniquely written as a linear combination of elements in \({\mathcal {F}}\) with coefficients in \({\mathbb {R}}[X]^{{\mathcal {S}}_n}\), the following theorem can be thought of as a generalization of Corollary 4.4 to sums of squares from an \({\mathcal {S}}_n\)-module with coefficients in an \({\mathcal {S}}_n\)-invariant ring (see also [12, Thm. 6.2]):

Theorem 4.11

Let \(p\in {\mathbb {R}}[X]^{{\mathcal {S}}_n}\) be a symmetric polynomial. Then p is a sum of squares if and only if it can be written in the form

$$\begin{aligned} p=\sum _{\lambda \vdash n}\langle B^\lambda ,Q^\lambda \rangle , \end{aligned}$$

where \(Q^\lambda \) is defined in (4.2) and each \(B^\lambda \in {\mathbb {R}}[X]^{s_\lambda \times s_\lambda }\) is a sum of symmetric squares matrix polynomial, i.e., \(B^\lambda (x)=L^{t}(x)L(x)\) for some matrix polynomial L(x) whose entries are symmetric polynomials.

Each entry of the matrix \(Q^{\lambda }\) is a symmetric polynomial and thus can be represented as a polynomial in any set of generators of the ring of symmetric polynomials. We will use the power means \(p_1,\ldots ,p_n\) to phrase the next theorem. However, any other choice works similarly. With this choice of basis it follows that there exists a matrix polynomial \({\tilde{Q}}^{\lambda }(z_1,\ldots ,z_n)\) in n variables \(z_1,\ldots ,z_n\) such that

$$\begin{aligned} {\tilde{Q}}^{\lambda }(p_1(x),\ldots ,p_n(x))=Q^{\lambda }(x). \end{aligned}$$
(4.3)

With this notation one can restate Theorem 4.11 in the following way:

Theorem 4.12

Let \(f\in {\mathbb {R}}[X]^{{\mathcal {S}}_n}\) be a symmetric polynomial and \(g\in {\mathbb {R}}[z_1,\ldots ,z_n]\) such that \(f=g(p_1,\ldots ,p_n)\). Then f is a sum of squares if and only if g can be written in the form

$$\begin{aligned} g=\sum _{\lambda \vdash n} \langle B^\lambda ,{\tilde{Q}}^\lambda \rangle , \end{aligned}$$

where \({\tilde{Q}}^\lambda \) is defined in (4.3) and each \(B^\lambda \in {\mathbb {R}}[z]^{s_\lambda \times s_\lambda }\) is a sum of squares matrix polynomial, i.e., \(B^\lambda :=L(z)^{t}L(z)\) for some matrix polynomial L.

While Theorems 4.11 and 4.12 give a characterization of symmetric sums of squares in a given number of variables, we need to understand the behavior of the \({\mathcal {S}}_n\)-module \(H_{n,d}\) for polynomials of a fixed degree d in a growing number of variables n. This will be done in the next section.

4.4 The Cone \(\Sigma _{n,2d}^S\)

A symmetric sum of squares \(f \in \Sigma ^S_{n,2d}\) has to be a sum of squares of forms from \(H_{n,d}\). Therefore we now consider restricting the degree of the squares in the underlying sum of squares representation. With a little abuse of notation we denote by \({\mathcal {F}}_{n,d}\) the vector space spanned by the higher Specht polynomials for the group \({\mathcal {S}}_n\) of degree at most d. Further, for a partition \(\lambda \vdash n\) let \({\mathcal {F}}_{\lambda ,d}\) denote the span of the higher Specht polynomials of degree at most d corresponding to the Specht module \({\mathcal {S}}^{\lambda }\), i.e., \({\mathcal {F}}_{\lambda ,d}\) is exactly the isotypic component of \({\mathcal {F}}_{n,d}\) corresponding to \({\mathcal {S}}^{\lambda }\). In order to describe this isotypic component combinatorially, recall that the degree of the higher Specht polynomial \(F_T^S\) is given by the charge c(S) of S, i.e., by the sum of the entries of the index i(w(S)). Thus, it follows from the above construction that

$$\begin{aligned} {\mathcal {F}}_{\lambda ,d}={\text {span}}{\bigl \{F_T^{S}: S,T \text { are standard } \lambda \text {-tableaux and } c(S)\le d \bigr \}}. \end{aligned}$$

We now show that sums of squares of degree 2d in n variables can be constructed by symmetrizing sums of squares in 2d variables. So we first consider the case \(n=2d\). Let

$$\begin{aligned} {\mathcal {F}}_{2d,d}=\bigoplus _{\lambda \vdash 2d} m_{\lambda }S^{\lambda } \end{aligned}$$

be the decomposition of \({\mathcal {F}}_{2d,d}\) as an \({\mathcal {S}}_{2d}\)-module. The following proposition gives the multiplicities of the different \({\mathcal {S}}_n\)-modules appearing in the vector space of homogeneous polynomials of degree d.

Proposition 4.13

The multiplicities \(m_{\lambda }\) of the \({\mathcal {S}}_n\)-modules \({\mathcal {S}}^\lambda \) which appear in the isotypic decomposition of \(H_{n,d}\) coincide with the number of standard \(\lambda \)-tableaux S whose charge is at most d: \(c(S)\le d\).
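This counting rule can be checked directly for \(n=2d=4\) against the multiplicities \(m_\lambda \) in the decomposition of \({\mathcal {F}}_{4,2}\) above. The sketch below enumerates all standard tableaux of each shape and computes the charge as the sum of the entries of the index \(i(w(S))\), following our reading of the constructions of Sect. 4.3 (helper names are ours):

```python
# Sketch checking the counting rule for n = 2d = 4: m_lambda is the number
# of standard lambda-tableaux S with charge c(S) <= d, the charge being the
# sum of the entries of the index i(w(S)). Helper names are ours.

def all_syt(shape):
    """All standard Young tableaux of the given partition shape."""
    n = sum(shape)
    results = []
    def place(k, rows):
        if k > n:
            results.append([list(r) for r in rows])
            return
        for r in range(len(shape)):
            if len(rows[r]) < shape[r] and (r == 0 or len(rows[r - 1]) > len(rows[r])):
                rows[r].append(k)
                place(k + 1, rows)
                rows[r].pop()
    place(1, [[] for _ in shape])
    return results

def charge(t):
    """Sum of the entries of i(w(t)); w(t) reads columns bottom to top."""
    w = [row[c] for c in range(len(t[0])) for row in reversed(t) if c < len(row)]
    pos = {v: p for p, v in enumerate(w)}
    i = [0] * len(w)
    for k in range(1, len(w)):
        c = i[pos[k]]
        i[pos[k + 1]] = c if pos[k + 1] > pos[k] else c + 1
    return sum(i)

d = 2
shapes = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
mult = {lam: sum(1 for t in all_syt(lam) if charge(t) <= d) for lam in shapes}
# mult is {(4,): 1, (3, 1): 2, (2, 2): 1, (2, 1, 1): 0, (1, 1, 1, 1): 0},
# matching the block sizes appearing in the degree-4 analysis of Sect. 5.
```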

For a partition \(\lambda \vdash 2d\) and \(n \ge 2d\) define a new partition \(\lambda ^{(n)} \vdash n\) by simply increasing the first part of \(\lambda \) by \(n-2d\): \(\lambda ^{(n)}_1=\lambda _1+n-2d\) and \(\lambda ^{(n)}_i=\lambda _i\) for \(i\ge 2\). Then the decomposition Theorem 4.10 in combination with [28, Thm. 4.7] yields that

$$\begin{aligned} {\mathcal {F}}_{n,d}=\bigoplus _{\lambda \vdash 2d} m_{\lambda }{\mathcal {S}}^{\lambda ^{(n)}}. \end{aligned}$$

For every \(\lambda \vdash 2d\) we choose \(m_\lambda \) many higher Specht polynomials \(q_1^{\lambda },\ldots ,q_{m_\lambda }^{\lambda }\) that form a symmetry basis of the \(\lambda \)-isotypic component of \({\mathcal {F}}_{2d,d}\). Let \(q_\lambda =(q_1^{\lambda },\ldots ,q_{m_\lambda }^{\lambda })\) be the vector with entries \(q_i^{\lambda }\). As before we construct the matrix \(Q^{\lambda }_{2d}\) by

$$\begin{aligned} Q^{\lambda }_{2d}={{\,\mathrm{sym}\,}}_{2d}q_{\lambda }^t q_\lambda , \qquad Q_{2d}^\lambda ({i,j})={{\,\mathrm{sym}\,}}_{2d}q^{\lambda }_{i}q^{\lambda }_{j}. \end{aligned}$$

Further, we define the matrix \(Q_n^{\lambda }\) by

$$\begin{aligned} Q_{n}^{\lambda }={{\,\mathrm{sym}\,}}_{n}q_{\lambda }^t q_\lambda , \qquad Q_n^{\lambda }(i,j)={{\,\mathrm{sym}\,}}_nq_i^{\lambda }q_j^{\lambda }. \end{aligned}$$

By construction we have the following:

Proposition 4.14

The matrix \(Q_n^{\lambda }\) is the \({\mathcal {S}}_n\)-symmetrization of the matrix \(Q_{2d}^{\lambda }\):

$$\begin{aligned} Q_n^{\lambda }={{\,\mathrm{sym}\,}}_n Q_{2d}^{\lambda }. \end{aligned}$$

We now give a parametric description of the family of cones \(\Sigma ^S_{n,2d}\). Note again that this statement is given in terms of a particular basis, but can be stated similarly with any other set of generators.

Theorem 4.15

Let \(f:=\sum _{\lambda \vdash 2d} c_\lambda p_\lambda ^{(n)}\in H_{n,2d}^S\). Then f is a sum of squares if and only if it can be written in the form

$$\begin{aligned} f=\sum _{\lambda \vdash 2d} \langle B^\lambda ,Q_n^\lambda \rangle , \end{aligned}$$

where each \(B^\lambda \in {\mathbb {R}}\bigl [p_1^{(n)},\ldots ,p_{d}^{(n)}\bigr ]^{m_\lambda \times m_\lambda }\) is a sum of squares matrix in the power mean polynomials, i.e., \(B^\lambda =L_{\lambda }^{t}L_{\lambda }\) for some matrix polynomial \(L_{\lambda }\bigl (p_1^{(n)},\ldots ,p_{d}^{(n)}\bigr )\) whose entries are weighted homogeneous forms. Additionally, we have for every column k of \(L_{\lambda }\),

$$\begin{aligned} \deg _w Q_n^{\lambda }(i,k)+2\deg _w L_{\lambda }(k,i)=2d \end{aligned}$$

or, equivalently, every entry \(B^{\lambda }(i,j)\) of \(B^{\lambda }\) is a weighted homogeneous form such that

$$\begin{aligned} \deg _w Q_n^{\lambda }(i,j)+\deg _w B^{\lambda }(i,j)=2d. \end{aligned}$$

Proof

In order to apply Theorem 4.11 to our fixed degree situation we have to show that the forms \(\{q_1^{\lambda },\ldots ,q_{m_\lambda }^{\lambda }\}\) when viewed as functions in n variables also form a symmetry basis of the \(\lambda ^{(n)}\)-isotypic component of \({\mathcal {F}}_{n,d}\) for all \(n \ge 2d\). Indeed, consider a standard Young tableau \(t_{\lambda }\) of shape \(\lambda \) and construct a standard Young tableau \(t_{\lambda ^{(n)}}\) of shape \(\lambda ^{(n)}\) by adding numbers \(2d+1,\dots ,n\) as rightmost entries of the top row of \(t_{\lambda ^{(n)}}\), while keeping the rest of the filling of \(t_{\lambda ^{(n)}}\) the same as for \(t_{\lambda }\). It follows by construction of the Specht polynomials that

$$\begin{aligned} sp_{t_{\lambda }}=sp_{t_{\lambda ^{(n)}}}. \end{aligned}$$

We may assume that the \(q_k^{\lambda }\) were chosen so that they map to \(sp_{t_{\lambda }}\) by an \({\mathcal {S}}_{2d}\)-isomorphism. We observe that \(sp_{t_{\lambda }}\) (and therefore \(sp_{t_\lambda ^{(n)}}\)) and \(q_k^{\lambda }\) do not involve any of the variables \(X_{j}\), \(j>2d\). Therefore both are stabilized by \({\mathcal {S}}_{n-2d}\) (operating on the last \(n-2d\) variables), and further the action on the first 2d variables is exactly the same. Thus there is an \({\mathcal {S}}_{n}\)-isomorphism mapping \(q_k^{\lambda }\) to \(sp_{t_\lambda ^{(n)}}\) and the \({\mathcal {S}}_{n}\)-modules generated by the two polynomials are isomorphic. Therefore it follows that the \(q_k^{\lambda }\) also form a symmetry basis of the \(\lambda ^{(n)}\)-isotypic component of \({\mathcal {F}}_{n,d}\). \(\square \)

Remark 4.16

We remark that the sum of squares decomposition of \(f=\sum _{\lambda \vdash 2d} \langle B^{\lambda },Q^{\lambda }_n \rangle \), with \(B^{\lambda }=L_\lambda ^tL_{\lambda }\) can be read off as follows:

$$\begin{aligned} f=\sum _{\lambda \vdash 2d} {{\,\mathrm{sym}\,}}_n q_{\lambda }^t B^{\lambda }q_{\lambda } =\sum _{\lambda \vdash 2d} {{\,\mathrm{sym}\,}}_n ( L_{\lambda }q_{\lambda })^tL_{\lambda }q_{\lambda } . \end{aligned}$$
(4.4)

In particular, if for a fixed \(\lambda \vdash 2d\) and for every \(1\le i\le m_\lambda \) we denote \(\delta _i:=d-\deg q_i^{\lambda }\), then the set of polynomials

$$\begin{aligned} \bigcup _{i=1}^{m_\lambda }\bigcup _{\nu \vdash \delta _i}\left\{ q_i^{\lambda }p_\nu \right\} \end{aligned}$$
(4.5)

is a symmetry basis of the isotypic component of \(H_{n,d}\) corresponding to \(\lambda \).

4.5 The Dual Cone of Symmetric Sums of Squares

Recall that for a convex cone \(K\subset {\mathbb {R}}^n\) the dual cone \(K^*\) is defined as

$$\begin{aligned} K^{*}:=\{\ell \in {\text {Hom}}{({\mathbb {R}}^n,{\mathbb {R}})}:\ell (x)\ge 0\ \text {for all}\ x\in K\}. \end{aligned}$$

Our analysis of the dual cone \((\Sigma _{n,2d}^S)^*\) proceeds similarly to the analysis of the dual cone in the non-symmetric situation given in [4, 6].

Let \(S_{n,d}\) be the vector space of real quadratic forms on \(H_{n,d}\). Let \(S_{n,d}^+\) be the cone of positive semidefinite quadratic forms in \(S_{n,d}\). An element \({\mathcal {Q}}\in S_{n,d}\) is said to be \({\mathcal {S}}_n\)-invariant if \({\mathcal {Q}}(f)={\mathcal {Q}}(\sigma (f))\) for all \(\sigma \in {\mathcal {S}}_n\), \(f \in H_{n,d}\). We will denote by \({\bar{S}}_{n,d}\) the space of \({\mathcal {S}}_n\)-invariant quadratic forms on \(H_{n,d}\). Further we can identify a linear functional \(\ell \in (H_{n,2d}^{S})^*\) with a quadratic form \({\mathcal {Q}}_{\ell }\) defined by

$$\begin{aligned} {\mathcal {Q}}_\ell (f) = \ell ({{\,\mathrm{sym}\,}}f^2). \end{aligned}$$

Let \({\bar{S}}_{n,d}^+\) be the cone of positive semidefinite forms in \({\bar{S}}_{n,d}\), i.e.,

$$\begin{aligned} {\bar{S}}_{n,d}^+:=\{{\mathcal {Q}}\in {\bar{S}}_{n,d}:{\mathcal {Q}}(f)\ge 0\ \text {for all}\ f\in H_{n,d}\}. \end{aligned}$$

The following lemma is straightforward, but very important, as it allows us to identify the elements \(\ell \in (\Sigma _{n,2d}^S)^*\) of the dual cone with quadratic forms \({\mathcal {Q}}_{\ell }\) in \({\bar{S}}_{n,d}^+\).

Lemma 4.17

A linear functional \(\ell \in (H_{n,2d}^S)^*\) belongs to the dual cone \((\Sigma _{n,2d}^S)^*\) if and only if the quadratic form \({\mathcal {Q}}_{\ell }\) is positive semidefinite.

Since for \(\ell \in (H_{n,2d}^S)^*\) we have \({\mathcal {Q}}_{\ell }\in {\bar{S}}_{n,d}\), Schur’s Lemma again applies and we can use the symmetry basis constructed above to simplify the condition that \({\mathcal {Q}}_{\ell }\) is positive semidefinite. In order to arrive at a dual statement of Theorem 4.15 we construct the following matrices:

Definition 4.18

For a linear functional \(\ell \in (H_{n,2d}^S)^*\) and a partition \(\lambda \vdash 2d\) consider the block-matrix \(M_{n,\lambda }\) defined by

$$\begin{aligned} M_{n,\lambda }^{(i,j)}(\alpha ,\beta ):=\ell \bigl (p_{\alpha }\cdot p_{\beta }\cdot Q^{\lambda }_n(i,j)\bigr ), \end{aligned}$$

where in each block ij the indices \((\alpha ,\beta )\) run through all pairs of weakly decreasing sequences \(\alpha =(\alpha _1,\ldots ,\alpha _a)\) and \(\beta =(\beta _1,\ldots ,\beta _b)\) such that

$$\begin{aligned} 2d-\deg _w Q^{\lambda }_n(i,j)=\alpha _1+\dots +\alpha _a+\beta _1+\dots +\beta _b. \end{aligned}$$

With this notation the following lemma is just the dual version of Theorem 4.15 and is established by expressing Lemma 4.17 in the basis given in (4.5):

Lemma 4.19

Let \(\ell \in (H_{n,2d}^S)^{*}\) be a linear functional. Then \(\ell \in (\Sigma _{n,2d}^S)^{*}\) if and only if for all \(\lambda \vdash 2d\) the above matrices \(M_{n,\lambda }\) are positive semidefinite.

In order to examine the kernels of quadratic forms we use the following construction. Let \(W\subset H_{n,d}\) be any linear subspace. We define \(W^{<2>}\) to be the symmetrization of the degree 2d part of the ideal generated by W:

$$\begin{aligned} W^{<2>}:=\Bigl \{h\in H_{n,2d}^S:h={{\,\mathrm{sym}\,}}\sum f_ig_i\ \text {with}\ f_i \in W,\, g_i \in H_{n,d}\Bigr \}. \end{aligned}$$

In Lemma 4.17 we identified the dual cone \((\Sigma _{n,2d}^{S})^*\) with the linear section of the cone of positive semidefinite quadratic forms \(S_{n,d}^+\) by the subspace \({\bar{S}}_{n,d}\) of invariant quadratic forms. By a slight abuse of terminology we think of positive semidefinite forms \({\mathcal {Q}}_{\ell }\) as elements of the dual cone \((\Sigma _{n,2d}^S)^*\). The following important proposition is a straightforward adaptation of the corresponding result in the non-symmetric case [6, Proposition 2.1]:

Proposition 4.20

Let \(\ell \in (\Sigma ^{S}_{n,2d})^*\) be a linear functional non-negative on squares and let \(W_{\ell }\subset H_{n,d}\) be the kernel of the quadratic form \({\mathcal {Q}}_{\ell }\). The linear functional \(\ell \) spans an extreme ray of \((\Sigma _{n,2d}^S)^*\) if and only if \(W_{\ell }^{<2>}\) is a hyperplane in \(H_{n,2d}^S\), or, equivalently, if and only if the kernel of \({\mathcal {Q}}_\ell \) is maximal, i.e., if \(\ker {\mathcal {Q}}_\ell \subseteq \ker {\mathcal {Q}}_m\) for some \(m \in (H_{n,2d}^S)^*\) then \(m=\lambda \ell \) for some \(\lambda \in {\mathbb {R}}\).

The dual correspondence yields that any facet F of a cone K, i.e., any maximal face of K, is given by an extreme ray of the dual cone \(K^*\). More precisely, for any maximal face F of K there exists an extreme ray of \(K^*\) spanned by a linear functional \(\ell \in K^*\) such that

$$\begin{aligned} F=\{x\in K:\ell (x)=0\}. \end{aligned}$$

We now aim to characterize the extreme rays of \((\Sigma _{n,2d}^{S})^*\) that are not extreme rays of the cone \(({\mathcal {P}}_{n,2d}^{S})^*\). For \(v \in {\mathbb {R}}^n\) define a linear functional

$$\begin{aligned} \ell _v:H^S_{n,2d} \rightarrow {\mathbb {R}},\quad \ell _v(f)=f(v). \end{aligned}$$

We say that the linear functional \(\ell _v\) corresponds to point evaluation at v. It is easy to show, with the same proof as in the non-symmetric case, that the extreme rays of the cone \(({\mathcal {P}}_{n,2d}^{S})^{*}\) are precisely the point evaluations \(\ell _v\) (see [5, Chap. 4] for the non-symmetric case). Therefore we need to identify extreme rays of \((\Sigma _{n,2d}^{S})^*\) which are not point evaluations. We now examine the case of degree 4 in detail, and give an explicit construction of an element of \((\Sigma _{n,4}^{S})^*\) which does not belong to \(({\mathcal {P}}_{n,4}^{S})^*\).

5 Symmetric Quartic Sums of Squares

We now look at the decomposition of \(H_{n,2}\) as an \({\mathcal {S}}_n\)-module in order to apply Theorem 4.2 and characterize all symmetric sums of squares of degree 4.

Theorem 5.1

Let \(f^{(n)}\in H_{n,4}\) be symmetric and \(n\ge 4\). If \(f^{(n)}\) is a sum of squares then it can be written in the form

$$\begin{aligned} f^{(n)}&=\alpha _{11}p_{(1^4)}+2\alpha _{12}p_{(2,1^2)}+\alpha _{22}p_{(2^2)}\\&\quad +\beta _{11}\bigl (p_{(2,1^2)}-p_{(1^4)}\bigr )+2\beta _{12}\bigl (p_{(3,1)}-p_{(2,1^2)}\bigr )+\beta _{22}\bigl (p_{(4)}-p_{(2^2)}\bigr )\\&\quad +\gamma \biggl (\frac{1}{2}p_{(1^4)}-p_{(2,1^2)}+\frac{n^2-3n+3}{2n^2}p_{(2^2)}+\frac{2n-2}{n^2}p_{(3,1)}+\frac{1-n}{2n^2} p_{(4)}\biggr ), \end{aligned}$$

where \(\gamma \ge 0\) and the matrices \((\alpha _{ij})_{2\times 2}\) and \((\beta _{ij})_{2\times 2}\) are positive semidefinite.

Proof

The statement follows directly from the arguments presented in Sect. 4.4. Following Theorem 4.15 we get that \(f^{(n)}\) has a decomposition in the form

$$\begin{aligned} f^{(n)}=B^{(n)}+\bigl \langle B^{(n-1,1)},Q_n^{(n-1,1)}\bigr \rangle + B^{(n-2,2)} \cdot Q_n^{(n-2,2)}, \end{aligned}$$

where \(B^{(n)}\) is a sum of symmetric squares, \(B^{(n-1,1)}\) is a \(2\times 2\) sum of symmetric squares matrix polynomial and, due to the degree restrictions, \(B^{(n-2,2)}\) is a non-negative scalar. It remains to calculate the matrices \(Q_n^{(n-1,1)}\) and \(Q_n^{(n-2,2)}\) appearing in the decomposition in the statement. These are defined as the symmetrization of the pairwise products of those Specht polynomials which generate the corresponding Specht modules in degree 2. In degree 2 the polynomials \(X_n-X_1\) and \(X_n^2-X_1^2\) generate two distinct irreducible \({\mathcal {S}}_n\)-modules, both isomorphic to \(S^{(n-1,1)}\), and the Specht polynomial \((X_{n-1}-X_1)(X_{n}-X_2)\) generates a module isomorphic to \(S^{(n-2,2)}\). Thus we have:

$$\begin{aligned} Q_n^{(n-1,1)}&=\begin{pmatrix} {{\,\mathrm{sym}\,}}_n(X_n-X_1)^2&{} {{\,\mathrm{sym}\,}}_n(X_n-X_1)(X_n^2-X_1^2)\\ {{\,\mathrm{sym}\,}}_n(X_n-X_1)(X_n^2-X_1^2)&{}{{\,\mathrm{sym}\,}}_n(X_n^2-X_1^2)^2 \end{pmatrix},\\ Q_n^{(n-2,2)}&={{\,\mathrm{sym}\,}}_n(X_{n-1}-X_1)^2(X_n-X_2)^2. \end{aligned}$$

Then the symmetrization can be calculated quite directly, since every product involves at most 4 variables. These calculations then yield

$$\begin{aligned} \begin{aligned} Q_n^{(n-1,1)}&=\frac{2n}{n-1}\begin{pmatrix} p_{(2)}-p_{(1^2)}&{} p_{(3)}-p_{(2,1)}\\ p_{(3)}-p_{(2,1)}&{} p_{(4)}-p_{(2^2)}\end{pmatrix},\\ Q_n^{(n-2,2)}&=\frac{8n^3}{n^3-6n^2+11n-6}\,\\&\quad \times \biggl (\frac{1}{2}p_{(1^4)}-p_{(2,1^2)}+\frac{n^2-3n+3}{2n^2}p_{(2^2)}+\frac{2n-2}{n^2}p_{(3,1)}+\frac{1-n}{2n^2}p_{(4)}\biggr ), \end{aligned} \end{aligned}$$
(5.1)

which gives exactly the statement in the theorem. \(\square \)
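The symmetrization claims in (5.1) can be spot-checked numerically. The sketch below assumes, as our reading of the notation (consistent with the coefficients in (5.1)), that \({{\,\mathrm{sym}\,}}_n\) denotes the average over \({\mathcal {S}}_n\) and that \(p_k\) is the power mean \(\frac{1}{n}\sum _i x_i^k\), so that \(p_{(1^2)}=p_1^2\) and \(p_{(2,1)}=p_2p_1\); under this reading the first column of \(Q_n^{(n-1,1)}\) matches exactly.

```python
from itertools import permutations
import random

# Numeric spot-check of the first column of (5.1) for n = 5, assuming
# (our reading of the notation) that sym_n averages over S_n and that
# p_k is the power mean (1/n) * sum_i x_i^k.

n = 5
random.seed(1)
x = [random.uniform(-1.0, 1.0) for _ in range(n)]

def p(k):
    return sum(xi ** k for xi in x) / n

def sym(f):
    """Average f over all permutations of the coordinates of x."""
    vals = [f([x[i] for i in s]) for s in permutations(range(n))]
    return sum(vals) / len(vals)

# sym_n (X_n - X_1)^2 = (2n/(n-1)) * (p_2 - p_(1^2))
lhs = sym(lambda y: (y[n - 1] - y[0]) ** 2)
rhs = 2 * n / (n - 1) * (p(2) - p(1) ** 2)

# sym_n (X_n - X_1)(X_n^2 - X_1^2) = (2n/(n-1)) * (p_3 - p_(2,1))
lhs12 = sym(lambda y: (y[n - 1] - y[0]) * (y[n - 1] ** 2 - y[0] ** 2))
rhs12 = 2 * n / (n - 1) * (p(3) - p(2) * p(1))

assert abs(lhs - rhs) < 1e-9 and abs(lhs12 - rhs12) < 1e-9
```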

5.1 The Boundary of \(\Sigma _{n,4}^{S}\)

We now apply Proposition 4.20 to the case of degree 4 and examine the possible kernels of an extreme ray of \((\Sigma ^S_{n,4})^*\) which do not come from a point evaluation.

Lemma 5.2

Suppose a linear functional \(\ell \) spans an extreme ray of \((\Sigma _{n,4}^{S})^{*}\) that is not an extreme ray of \(({\mathcal {P}}_{n,4}^S)^*\). Let Q be the quadratic form corresponding to \(\ell \). Then

$$\begin{aligned} {\text {Ker}}{Q}\simeq {\mathcal {S}}^{(n)}\oplus {\mathcal {S}}^{(n-1,1)}\qquad \text {or}\qquad \ell \Biggl (\sum _{\lambda \vdash 4} c_\lambda p_{\lambda }\Biggr )=c_{(4)}+c_{(2^2)} \end{aligned}$$

and n is odd.

Proof

Since Q is an \({\mathcal {S}}_n\)-invariant quadratic form, its kernel \({{\,\mathrm{Ker}\,}}Q\subseteq H_{n,2}\) is an \({\mathcal {S}}_n\)-module. It follows from the arguments in the proof of Theorem 5.1 that \({{\,\mathrm{Ker}\,}}Q\) decomposes as

$$\begin{aligned} {{\,\mathrm{Ker}\,}}Q\simeq \alpha {\mathcal {S}}^{(n)}\oplus \beta {\mathcal {S}}^{(n-1,1)}\oplus \gamma {\mathcal {S}}^{(n-2,2)}, \end{aligned}$$

where \(\alpha ,\beta \in \{0,1,2\}\) and \(\gamma \in \{0,1\}\). We now examine the possible combinations of \(\alpha \), \(\beta \), and \(\gamma \).

As above let W denote the kernel of Q. We first observe that \(\alpha =2\) is not possible: if \(\alpha =2\) then we have \(p_2\in W\) and so \(p_2^2\in W^{<2>}\), which is a contradiction since \(p_2^2\) is not on the boundary of \(\Sigma _{n,4}^{S}\).

By Proposition 4.20 the kernel W of Q must be maximal. Let \(w \in {\mathbb {R}}^n\) be the all-ones vector \(w=(1,\dots ,1)\). We now observe that \(\alpha =0\) is also impossible: if \(\alpha =0\) then all forms in the kernel W of Q vanish at w. Therefore \({{\,\mathrm{Ker}\,}}Q \subseteq {{\,\mathrm{Ker}\,}}Q_{\ell _w}\) and by Proposition 4.20 we have \(Q=\lambda Q_{\ell _w}\), which is a contradiction, since Q does not correspond to a point evaluation. Thus we must have \(\alpha =1\).

Since we have \(\dim H_{n,4}^{{\mathcal {S}}}=5\), from Proposition 4.20 we see that \(\dim W^{<2>}=4\). This excludes the case \(\beta =0\), since even with \(\alpha =1\) and \(\gamma =1\) the dimension of \(W^{<2>}\) is at most 3. Now suppose that \(\beta =2\), i.e., the \({\mathcal {S}}_n\)-module generated by \((X_1-X_2)p_{1}\) and \(X_1^2-X_2^2\) lies in W, together with a polynomial \(q=a p_1^2+b p_2\). We consider the symmetrizations of the five pairwise products and express these in the basis \(\{p_{(4)},p_{(3,1)}, p_{(2^2)},p_{(2,1^2)}, p_{(1^4)}\}\).

Now the condition \(\dim W^{<2>}=4\) implies that these five products cannot be linearly independent and an explicit calculation of the determinant of the corresponding matrix M yields \(\det M=b(a+b)\). We now examine the possible roots of this determinant. In the case when \(a=-b\) all polynomials in W (even if \(\gamma =1\)) will be zero at \((1,\dots ,1)\), which is excluded. Therefore the only possible case is \(b=0\). In that case, by calculating the kernel of M we see that the unique (up to a constant multiple) linear functional \(\ell \) vanishing on \(W^{<2>}\) is given by

$$\begin{aligned} \ell \Biggl (\sum _{\lambda \vdash 4} c_\lambda p_{\lambda }\Biggr )=c_{(4)}+c_{(2^2)}. \end{aligned}$$

We observe using (5.1) that we must have \(\gamma =0\) since \(\ell \bigl ({{\,\mathrm{sym}\,}}_n (X_1-X_2)^2(X_3-X_4)^2\bigr )>0\) for \(n \ge 4\). Now suppose that n is even and let \(w\in {\mathbb {R}}^n\) be given by \(w=(1,\dots ,1,-1,\dots ,-1)\), where 1 and \(-1\) occur n/2 times each. It is easy to verify that \(f(w)=0\) for all \(f \in W\). Therefore it follows that \(W \subseteq {{\,\mathrm{Ker}\,}}Q_{\ell _w}\), which is a contradiction, since W is the kernel of an extreme ray that does not come from a point evaluation.
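The exclusion of even n can also be seen very concretely: at \(w=(1,\dots ,1,-1,\dots ,-1)\) the odd power means vanish and the even ones equal 1, so evaluation at w is exactly the functional \(c_{(4)}+c_{(2^2)}\). A minimal numerical check (assuming, as above, that \(p_\lambda \) denotes a product of power means):

```python
def power_mean(x, k):
    return sum(t**k for t in x) / len(x)

def p(x, lam):
    out = 1.0
    for k in lam:
        out *= power_mean(x, k)
    return out

n = 8                                   # any even n works the same way
w = [1.0]*(n//2) + [-1.0]*(n//2)

# odd power means vanish at w, the even ones equal 1
assert power_mean(w, 1) == 0.0 and power_mean(w, 3) == 0.0
assert power_mean(w, 2) == 1.0 and power_mean(w, 4) == 1.0

# hence evaluating sum_lam c_lam p_lam at w returns exactly c_(4) + c_(2^2)
for lam, expected in [([4], 1.0), ([2, 2], 1.0), ([3, 1], 0.0),
                      ([2, 1, 1], 0.0), ([1, 1, 1, 1], 0.0)]:
    assert p(w, lam) == expected
```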

When n is odd the forms in W have no common zeros and therefore \(\ell \) is not a positive combination of point evaluations. It is not hard to verify that \(\ell \) is non-negative on squares and that the kernel W is maximal. Therefore by Proposition 4.20 we know that \(\ell \) spans an extreme ray of \((\Sigma ^S_{n,4})^*\).

Finally we need to deal with the case \(\alpha =\beta =\gamma =1\). Suppose that the \({\mathcal {S}}_n\)-module W is generated by three polynomials:

$$\begin{aligned} q_1:=a p_{1}^{2}+bp_{2}, \quad q_2:=c(X_1-X_2)p_{1}+d(X_1^2-X_2^2),\quad q_3:=(X_1-X_2)(X_3-X_4). \end{aligned}$$

Again we consider the symmetrizations of the five pairwise products and represent these in a matrix M. Explicit calculations now show that

$$\begin{aligned} \det M=-(a+b)(a d^2n^2-4ad^2n+4ad^2+bd^2n^2+4bcd n+bc^2n-4bcd-bc^2). \end{aligned}$$

As \(\alpha =\beta =\gamma =1\) we must have \({\text {rank}} M=4\) since the rows of M generate \(W^{<2>}\). Again we cannot have \(a=-b\), and thus

$$\begin{aligned} a d^2n^2-4ad^2n+4ad^2+bd^2n^2+4bcd n+bc^2n-4bcd-bc^2=0. \end{aligned}$$
(5.2)

Therefore there exists a unique linear functional \(\ell \), which vanishes on \(W^{<2>}\) and comes from the kernel of M.

Let \(w\in {\mathbb {R}}^n\) be a point with coordinates \(w=(s,\dots ,s,t)\) with \(s,t \in {\mathbb {R}}\) such that

$$\begin{aligned} c((n-1)s+t)+d\,n(s+t)=0. \end{aligned}$$

We see that \(q_3(w)=0\) and from the above equation it also follows that \(f(w)=0\) for all f in the \({\mathcal {S}}_n\)-module generated by \(q_2\). Direct calculation shows that (5.2) also implies \(q_1(w)=0\). Thus we have \(W\subseteq {{\,\mathrm{Ker}\,}}Q_{\ell _w}\), which is a contradiction by Proposition 4.20, since W is the kernel of an extreme ray that does not come from a point evaluation. We remark that it is possible to show that the functional \(\ell \) vanishing on \(W^{<2>}\) and giving rise to W is in fact a multiple of \(\ell _w\), but this is not necessary for us to finish the proof. \(\square \)

The above description allows us to explicitly characterize degree 4 symmetric sums of squares that are positive and on the boundary of \(\Sigma _{n,4}^S\).

Theorem 5.3

Let \(n\ge 4\) and let \(f^{(n)}\in H_{n,4}\) be a symmetric form which is strictly positive and lies on the boundary of \(\Sigma _{n,4}^{S}\). Then

  1. (i)

    either \(f^{(n)}\) can be written as

    $$\begin{aligned} f^{(n)}&=a^2p_{(4)}^{(n)}+2abp_{(3,1)}^{(n)}+(c^2-a^2)p_{(2^2)}^{(n)}\nonumber \\&\quad +\,(2cd+b^2-2ab)p_{(2,1^2)}^{(n)}+(d^2-b^2)p_{(1^4)}^{(n)}, \end{aligned}$$

    with non-zero coefficients \(a,b,c,d\in {\mathbb {R}}\) which additionally satisfy

    $$\begin{aligned} 0&\le \frac{a(c-d)+b(d+c)}{ac}, \quad \ 0\le \frac{a (c + d) (b c - a d)}{ac},\nonumber \\ 0&\le -\frac{c+d}{a^2c^2}\bigl (a^2 (c - d) + b (a c + b c)\bigr ), \quad \\ 0&\le -\frac{c+d}{a^2c^2}\bigl ((a b c + b^2 c - a^2 d) a^2 c - (-a^2 d)^2\bigr ),\nonumber \\ 0&\le ( c+d)\bigl (( c{a}^{2}+cab) {n}^{2}+ ( {b}^{2}c-3cab+3{a}^{2}d ) n-{b}^{2}c+3cab-3{a}^{2}d\bigr ),\nonumber \end{aligned}$$
    (5.3)
  2. (ii)

    or, if n is odd, then \(f^{(n)}\) may have the form

    $$\begin{aligned} f^{(n)}&=a^2p_{(1^4)}+b_{11}\bigl (p_{(2,1^2)}-p_{(1^4)}\bigr )+2b_{12}\bigl (p_{(3,1)}-p_{(2,1^2)}\bigr )\nonumber \\&\quad +\, b_{22}\bigl (p_{(4)}-p_{(2^2)}\bigr ), \end{aligned}$$

    with coefficients \(a,b_{11}, b_{12}, b_{22}\in {\mathbb {R}}\) which additionally satisfy

    $$\begin{aligned} a\ne 0,\quad b_{11}+b_{22}\ge 0,\quad b_{11}b_{22}-b_{12}^2\ge 0. \end{aligned}$$
    (5.4)

Proof

Suppose that \(f^{(n)}\) is a strictly positive form on the boundary of \(\Sigma _{n,4}^{S}\). Then there exists a non-trivial functional \(\ell \) spanning an extreme ray of the dual cone \((\Sigma _{n,4}^{S})^{*}\) such that \(\ell (f^{(n)})=0\). Let \(W_{\ell }\subset H_{n,2}\) denote the kernel of \(Q_\ell \). In view of Lemma 5.2 we see that there are two possible situations that we need to take into consideration.

(i) We first assume that

$$\begin{aligned} W_\ell \simeq {\mathcal {S}}^{(n)}\oplus {\mathcal {S}}^{(n-1,1)}.\end{aligned}$$
(5.5)

In view of (5.5) we may assume that the \({\mathcal {S}}_n\)-module \(W_\ell \) is generated by two polynomials:

$$\begin{aligned} q_1:=c p_{2}+dp_{1}^{2}\quad \text { and }\quad q_2:=\frac{n-1}{2n}\bigl (a (X_1^2-X_2^2)+b(X_1-X_2)p_{1}\bigr ), \end{aligned}$$

where \(a,b,c,d\in {\mathbb {R}}\) are chosen such that \((0,0)\ne (a,b)\) and \((0,0)\ne (c,d)\).

Let \(q\in H_{n,2}\). By Proposition 4.20 we have

$$\begin{aligned} q\in W_{\ell }\quad \text {if and only if}\quad \text {the } {\mathcal {S}}_n \text {-linear map } p\mapsto \ell (pq) \text { is the zero map on } H_{n,2}. \end{aligned}$$

The dimension of the vector space of \({\mathcal {S}}_n\)-invariant quadratic maps from \(H_{n,2}\) to \({\mathbb {R}}\) is 5. However, since \(q\in W_{\ell }\), Schur’s lemma implies \(\ell (qr)=0\) for all r in the isotypic component of the type \((n-2,2)\). Let \(y_{\lambda }=\ell (p_{\lambda })\). Using explicit calculations we find that the coefficients \(y_{\lambda }\) are characterized by the following system of four linear equations:

$$\begin{aligned} 0&=\ell ({{\,\mathrm{sym}\,}}{q_1p_{2}})=c y_{(2^2)}+d y_{(2,1^2)},\\ 0&=\ell ({{\,\mathrm{sym}\,}}{q_1p_{1}^2})=c y_{(2,1^2)} +d y_{(1^4)},\\ 0&=\ell ({{\,\mathrm{sym}\,}}{q_2(X_1^2-X_2^2)})=a y_{(4)}-a y_{(2^2)}+b y_{(3,1)}-b y_{(2,1^2)},\\ 0&=\ell ({{\,\mathrm{sym}\,}}{q_2(X_1-X_2) p_{1}})=a y_{(3,1)}-a y_{(2,1^2)}+b y_{(2,1^2)}- b y_{(1^4)}. \end{aligned}$$

Since in addition we want \(\ell \in (\Sigma _{n,4}^S)^{*}\), we must also take into account that the corresponding quadratic form \(Q_\ell \) has to be positive semidefinite. By Lemma 4.17 this is equivalent to checking that each of the two matrices

$$\begin{aligned} M_{(n)}:=\begin{pmatrix}y_{(2^2)}&{}\quad y_{(2,1^2)}\\ y_{(2,1^2)}&{}\quad y_{(1^4)}\end{pmatrix},\qquad M_{(n-1,1)}:=\begin{pmatrix}y_{(4)}-y_{(2^2)}&{}\quad y_{(3,1)}-y_{(2,1^2)}\\ y_{(3,1)}-y_{(2,1^2)}&{}\quad y_{(2,1^2)}-y_{(1^4)}\end{pmatrix} \end{aligned}$$

is positive semidefinite and

$$\begin{aligned} M_{(n-2,2)}:= & {} \frac{n^2}{2}y_{(1^4)}-n^2y_{(21^2)}+(2n-2)y_{(31)}\\&+\,\frac{n^2-3n+3}{2}y_{(2^2)}+\frac{1-n}{2}y_{(4)}\ge 0. \end{aligned}$$

Now assuming \(a=0\) we find that either \(b=0\), which is excluded, or any solution of the above linear system will have

$$\begin{aligned} y_{(2^2)}=y_{(3,1)}= y_{(2,1^2)}=y_{(1^4)}. \end{aligned}$$

By substituting this into \(M_{(n-2,2)}\) we find that

$$\begin{aligned} \frac{1-n}{2}(y_{(4)}-y_{(2^2)}) \ge 0, \end{aligned}$$

while from \(M_{(n-1,1)}\) we have \(y_{(4)}-y_{(2^2)} \ge 0\). It follows that

$$\begin{aligned} y_{(3,1)}= y_{(2,1^2)} = y_{(1^4)}=y_{(4)}=y_{(2^2)}. \end{aligned}$$

But then we find that \(\ell \) is proportional to the functional that simply evaluates at the point \((1,1,1,\ldots ,1)\), which is a contradiction since \(f^{(n)}\) is strictly positive. Thus \(a\ne 0\).

Now suppose that \(c=0\). Then we find that

$$\begin{aligned} y_{(3,1)}=y_{(1^4)}=y_{(2,1^2)}=0 \quad \text {and} \quad a(y_{(4)}-y_{(2^2)})=0. \end{aligned}$$

Since \(a\ne 0\) we find that \(y_{(4)}=y_{(2^2)}\), so that up to scaling the linear functional \(\ell \) is given by

$$\begin{aligned} \ell \left( \sum _{\lambda \vdash 4} c_{\lambda }p_\lambda \right) =c_{(4)}+c_{(2^2)}. \end{aligned}$$

By Lemma 5.2 we must have n odd in order for \(\ell \) not to be a point evaluation, and this sends us to case (ii) discussed below. Otherwise, with \(a,c \ne 0\), the solution of the linear system (up to a common multiple) is given by

$$\begin{aligned}&\displaystyle y_{(4)}=\frac{-b^2cd-b^2c^2+a^2d^2}{c^2a^2},\ \ y_{(2^2)}=\frac{d^2}{c^2},\ \ \\&\displaystyle y_{(3,1)}=-\frac{da-db-bc}{ca},\ \ y_{(2,1^2)}=-\frac{d}{c},\ \ y_{(1^4)}=1, \end{aligned}$$

which then yields the conditions in (5.3).

(ii) If n is odd we know from Lemma 5.2 that there is one additional case: \(f^{(n)}\) is the sum of the square \((ap_{(1^2)})^2\) and a sum of squares of elements from the isotypic component of \(H_{n,2}\) which corresponds to the representation \(S^{(n-1,1)}\). Since \(f^{(n)}\) is strictly positive, we must have \(a\ne 0\) (otherwise \(f^{(n)}\) has a zero at \((1,\dots ,1)\)) and it also follows that the matrix \((b_{ij})_{2\times 2}\) must be positive definite. Therefore we get the announced decomposition from Theorem 5.1. \(\square \)
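The structure behind case (ii) can be made explicit and tested numerically: under our reading of the normalizations, the \((n{-}1,1)\)-part is the symmetrized quadratic form in \(s=(X_j-X_i)p_1\) and \(t=X_j^2-X_i^2\) with Gram matrix \((b_{ij})\), scaled by \((n-1)/(2n)\) as in (5.1). A hedged sketch (the particular values of a and \((b_{ij})\) are illustrative choices, not from the text):

```python
import itertools
import random

def pm(x, k):
    return sum(t**k for t in x) / len(x)

def f_case_ii(x, a, b11, b12, b22):
    """f = a^2 p_(1^4) + b11 (p_(2,1^2) - p_(1^4))
           + 2 b12 (p_(3,1) - p_(2,1^2)) + b22 (p_(4) - p_(2^2))."""
    p1, p2, p3, p4 = (pm(x, k) for k in (1, 2, 3, 4))
    return (a*a*p1**4 + b11*(p2*p1*p1 - p1**4)
            + 2*b12*(p3*p1 - p2*p1*p1) + b22*(p4 - p2*p2))

def sos_form(x, a, b11, b12, b22):
    """Same value, written as an explicit sum of squares (for PSD (b_ij))."""
    n = len(x)
    p1 = pm(x, 1)
    acc, cnt = 0.0, 0
    for i, j in itertools.permutations(range(n), 2):
        s = (x[j] - x[i])*p1            # (X_j - X_i) p_1
        t = x[j]**2 - x[i]**2           # X_j^2 - X_i^2
        acc += b11*s*s + 2*b12*s*t + b22*t*t
        cnt += 1
    return a*a*p1**4 + (n - 1)/(2.0*n)*acc/cnt

random.seed(2)
a, b11, b12, b22 = 1.0, 2.0, 1.0, 1.0   # (b_ij) = [[2,1],[1,1]] is PSD
for _ in range(50):
    x = [random.uniform(-2, 2) for _ in range(5)]
    v1, v2 = f_case_ii(x, a, b11, b12, b22), sos_form(x, a, b11, b12, b22)
    assert abs(v1 - v2) < 1e-9
    assert v1 > -1e-12                  # non-negative, as an SOS must be
```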

Note that although the first symmetric counterexample of Choi and Lam in four variables gives \(\Sigma _{4,4}^S\subsetneq {\mathcal {P}}_{4,4}^S\), it does not immediately imply that the containment is strict for all n. However, using our methods, one can produce a sequence of strictly positive symmetric quartics which lie on the boundary of \(\Sigma _{n,4}^{S}\) for all n, witnessing the strict inclusion.

Example 5.4

For \(n\ge 4\) consider the family of polynomials

$$\begin{aligned} f^{(n)}:= & {} a^2p_{(4)}^{(n)}+2abp_{(3,1)}^{(n)}+(c^2-a^2)p_{(2^2)}^{(n)}\nonumber \\&+\,(2cd+b^2-2ab)p_{(2,1^2)}^{(n)}+(d^2-b^2)p_{(1^4)}^{(n)}, \end{aligned}$$

where we set \(a=1\), \(b=-{13}/{10}\), \(c=1\), and \(d=-{5}/{4}\). Further consider the linear functional \(\ell \in H_{n,4}^{*}\) with

$$\begin{aligned} \ell \Bigl (p_{(4)}^{(n)}\Bigr )={\frac{397}{200}},\ \ \ell \Bigl (p_{(2^2)}^{(n)}\Bigr )={\frac{63}{40}},\ \ \ell \Bigl (p_{(3,1)}^{(n)}\Bigr )={\frac{25}{16}},\ \ \ell \Bigl (p_{(2,1^2)}^{(n)}\Bigr )=\frac{5}{4},\ \ \ell \Bigl (p_{(1^4)}^{(n)}\Bigr )=1. \end{aligned}$$

Then we have \(\ell (f^{(n)})=0\). In addition the corresponding matrices become

$$\begin{aligned} M_{(n)}:= & {} \begin{pmatrix} {63}/{40}&{}\quad {5}/{4}\\ {5}/{4}&{}\quad 1\end{pmatrix},\quad M_{(n-1,1)}:= \begin{pmatrix}{41}/{100}&{}\quad {5}/{16}\\ {5}/{16}&{}\quad {1}/{4}\end{pmatrix},\nonumber \\ M_{(n-2,2)}:= & {} \frac{3{n}^{2}}{80}+\frac{21n}{80}-\frac{21}{80}. \end{aligned}$$

These matrices are all positive semidefinite for \(n\ge 4\) and therefore we have \(\ell \in (\Sigma _{n,4}^{S})^{*}\). This implies that \(f^{(n)}\in \partial \Sigma _{n,4}^{S}\).

Now we argue that for any \(n\in {\mathbb {N}}\) the forms \(f^{(n)}\) are strictly positive. By Corollary 3.3 it follows that \(f^{(n)}\) has a zero if and only if there exists \(k\in \{{1}/{n},\ldots ,({n{-}1})/{n}\}\) such that the bivariate form

$$\begin{aligned} h_k(x,y)&=\Phi _f(k,1-k,x,y)\nonumber \\&=k{x}^{4}+(1-k)y^4-\frac{13}{5}\bigl ( k{x}^{3}+(1-k)y^3 \bigr )( kx+(1-k)y)\\&\qquad +\frac{179}{100}\bigl ( k{x}^{2}+(1-k)y^2 \bigr )(kx+(1-k)y)^2-\frac{51}{400}( kx+(1-k)y) ^{4} \end{aligned}$$

has a real projective zero \((x{:}y)\). Since \(f^{(n)}\) is a sum of squares and therefore non-negative, we also know that \(h_k(x,y)\) is non-negative for all \(k \in \{{1}/{n},\ldots ,({n{-}1})/{n}\}\). Therefore the real projective roots of \(h_k(x,y)\) must have even multiplicity. This implies that \(h_k(x,y)\) has a real root only if its discriminant \(\delta (h_k)\), viewed as a polynomial in the parameter k, has a root in the admissible range for k. We calculate

$$\begin{aligned} \delta (h_k):=-{10^{-8}}( 10000-37399k+37399{k}^{2})( 149{k}^{2}-149k+25) ^{2}( k-1) ^{3}{k}^{3}. \end{aligned}$$

We see that \(\delta (h_k)\) is zero only for

$$\begin{aligned} k\in \biggl \{0,1,\frac{1}{2}\pm \frac{7\sqrt{149}}{298},\frac{1}{2}\pm \frac{51\sqrt{37399}}{74798}i\biggr \}. \end{aligned}$$

Thus we see that for all natural numbers n there is no \(k\in \{{1}/{n},\ldots ,{(n{-}1)}/{n}\}\) such that \(h_k(x,y)\) has a real projective zero. Therefore we can conclude that for any \(n\in {\mathbb {N}}\) the form \(f^{(n)}\) will be strictly positive.
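The number-theoretic heart of this argument, namely that no rational \(k=j/n\) meets a zero of \(\delta (h_k)\) inside (0, 1), can be double-checked directly: the only candidates in (0, 1) are the roots of \(149k^2-149k+25\), whose discriminant is \(7^2\cdot 149\), not a perfect square.

```python
from fractions import Fraction
from math import isqrt

# roots of 149 k^2 - 149 k + 25 are k = 1/2 +- 7*sqrt(149)/298
disc = 149*149 - 4*149*25
assert disc == 49*149
assert isqrt(disc)**2 != disc     # not a perfect square: the roots are irrational

# 10000 - 37399 k + 37399 k^2 has negative discriminant, hence no real roots
assert 37399*37399 - 4*37399*10000 < 0

# therefore no k = j/n is a zero of delta(h_k) inside (0, 1)
for n in range(2, 60):
    for j in range(1, n):
        k = Fraction(j, n)
        assert 149*k*k - 149*k + 25 != 0
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating point doubt in the last loop.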

From the above example the following characterization, which was recently and independently obtained by Goel et al. [13], is an immediate consequence.

Theorem 5.5

The inclusion \(\Sigma _{n,2d}^S\subset {\mathcal {P}}_{n,2d}^S\) is strict except in the cases of symmetric binary forms, symmetric quadratic forms, and symmetric ternary quartics.

Proof

The well-known Robinson form

$$\begin{aligned} X_1^6 + X_2^6 + X_3^6 - X_1^4X_2^2-X_1^2X_2^4 - X_1^4X_3^2- X_2^4X_3^2 - X_1^2X_3^4 -X_2^2X_3^4 + 3X_1^2X_2^2X_3^2 \end{aligned}$$

is a non-negative form which is not a sum of squares. Furthermore, for the case \(2d=4\) and \(n\ge 4\), Example 5.4 above gives for every n a positive polynomial \(f^{(n)}\) which lies on the boundary of \(\Sigma _{n,4}^S\) and therefore guarantees the existence of \(h\in {\mathcal {P}}_{n,4}^S\) which is positive definite but not a sum of squares. The result now follows by observing that for any positive definite form \(h\in H_{n,2d}\) that is not a sum of squares, the form \((X_1+\cdots +X_n)^2 h\in H_{n,2d+2}\) is also positive definite and not a sum of squares. Indeed, if \((X_1+\cdots +X_n)^2h=f_1^2+\dots +f_m^2\), then the linear form \(X_1+\cdots + X_n\) divides each \(f_i\), which yields that h is a sum of squares. \(\square \)
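For the Robinson form one can at least sanity-check its zeros and its non-negativity numerically (a sample check, not a proof of non-negativity); the large number of shared projective zeros is what drives the classical argument that it cannot be a sum of squares:

```python
import itertools
import random

def robinson(x1, x2, x3):
    return (x1**6 + x2**6 + x3**6
            - x1**4*x2**2 - x1**2*x2**4 - x1**4*x3**2
            - x2**4*x3**2 - x1**2*x3**4 - x2**2*x3**4
            + 3*x1**2*x2**2*x3**2)

# zeros at (+-1, +-1, +-1) ...
for signs in itertools.product((1, -1), repeat=3):
    assert robinson(*signs) == 0
# ... and at (1, 1, 0) together with its permutations
assert robinson(1, 1, 0) == 0 and robinson(1, 0, 1) == 0 and robinson(0, 1, 1) == 0

# non-negativity on a random sample
random.seed(6)
for _ in range(1000):
    pt = [random.uniform(-2, 2) for _ in range(3)]
    assert robinson(*pt) >= -1e-9
```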

6 Asymptotic Behavior

In this section we study the relationship of sums of squares and non-negative forms when the number of variables tends to infinity.

6.1 Full-Dimensionality

We now consider the power mean inequalities and their limits. In order to talk about limits of our sequences of cones we use the following notion of limit of a sequence of sets, which is due to Kuratowski [19], and we refer the reader to [21, 32] for details in the context of sequences of convex sets.

Definition 6.1

Let \(\{K_n\}_{n\in {\mathbb {N}}}\) be a sequence of subsets of \({\mathbb {R}}^k\). Then a set \(K\subset {\mathbb {R}}^k\) is called the limit of the sequence, denoted by \(K=\lim \limits _{n\rightarrow \infty } K_n\), if we have

$$\begin{aligned} \limsup _{n\rightarrow \infty } K_n\subset K\subset \liminf _{n\rightarrow \infty } K_n, \end{aligned}$$

where

$$\begin{aligned} \liminf _{n\rightarrow \infty } K_n&=\bigl \{x\in {\mathbb {R}}^k:x=\lim _{n\rightarrow \infty }x_n,\ x_n\in K_n\bigr \},\\ \limsup _{n\rightarrow \infty } K_n&=\bigl \{x\in {\mathbb {R}}^k:x=\lim _{m\in M}x_m,\ x_m\in K_m\ \text { for some infinite }\ M\subset {\mathbb {N}}\bigr \}. \end{aligned}$$

Remark 6.2

Note that the limit defined above is a closed set.
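For intuition, here is a toy instance of Definition 6.1 (a hypothetical sequence, not one from the paper): the increasing intervals \(K_n=[0,1-1/n]\) have Kuratowski limit [0, 1], which in particular illustrates why the limit is closed.

```python
# K_n = [0, 1 - 1/n] is an increasing sequence of closed intervals;
# its Kuratowski limit is the closed interval [0, 1].

def in_K(n, x):
    return 0.0 <= x <= 1.0 - 1.0/n

# every x < 1 lies in K_n for all large n (so x is in liminf K_n)
for x in (0.0, 0.3, 0.999):
    assert all(in_K(n, x) for n in range(10**4, 10**4 + 100))

# x = 1 is also in liminf K_n: it is the limit of x_n = 1 - 1/n with x_n in K_n
assert all(in_K(n, 1.0 - 1.0/n) for n in range(1, 200))

# while any x > 1 lies in no K_n at all, so the limit is exactly [0, 1]
assert not any(in_K(n, 1.01) for n in range(1, 1000))
```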

It will be convenient for the proof of Theorem 2.8 to relate the power mean inequalities to the sequences formed by the Reynolds operator. Let \(\mu =(\mu _1,\dots ,\mu _r)\) be a partition of 2d. Associate to \(\mu \) the monomial \(X_1^{\mu _1}\cdots X_{r}^{\mu _r}\) and define a symmetric form \(m_\mu ^{(n)}\) by:

$$\begin{aligned} m_\mu ^{(n)}={{\,\mathrm{sym}\,}}_nX_1^{\mu _1}\cdots X_{r}^{\mu _r}. \end{aligned}$$

The forms \(m_\mu ^{(n)}\) with \(\mu \vdash 2d\) constitute the monomial mean basis of \(H_{n,2d}^S\). We observe that with this choice of basis of \(H_{n,2d}^S\) the transition maps \(\varphi _{m,n}\) are given by the identity matrices. Since the stabilizer of the monomial \(X_1^{\mu _1}\cdots X_{r}^{\mu _r}\) is isomorphic to \({\mathcal {S}}_{s_1}\times \dots \times {\mathcal {S}}_{s_t}\times {\mathcal {S}}_{n-r}\), where \(s_1,\dots ,s_t\) denote the multiplicities of the distinct parts of \(\mu \), it follows that

$$\begin{aligned} m_\mu ^{(n)}=\frac{s_1!\cdots s_t!\,(n-r)!}{n!}\,{\bar{m}}_{\mu }^{(n)}=\left( {\begin{array}{c}n\\ s_1,\dots ,s_t,n-r\end{array}}\right) ^{-1}{\bar{m}}_{\mu }^{(n)}, \end{aligned}$$

where \({\bar{m}}_{\mu }^{(n)}\) is the monomial symmetric polynomial associated to \(\mu \).
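The normalization relating \(m_\mu ^{(n)}\) to the monomial symmetric polynomial \({\bar{m}}_{\mu }^{(n)}\) can be checked numerically, e.g., for \(\mu =(2,1,1)\), where \(r=3\) and the exponent multiplicities are \(s_1=1\) (the part 2) and \(s_2=2\) (the two parts 1):

```python
import itertools
import math
import random

random.seed(3)
n = 6
x = [random.uniform(-1, 1) for _ in range(n)]

# m_mu^{(n)} = sym_n X_1^2 X_2 X_3, as an average over ordered index triples
triples = list(itertools.permutations(range(n), 3))
m_mu = sum(x[i]**2 * x[j] * x[k] for i, j, k in triples) / len(triples)

# the monomial symmetric polynomial: each distinct monomial exactly once
mbar = sum(x[i]**2 * x[j] * x[k]
           for i in range(n)
           for j, k in itertools.combinations([t for t in range(n) if t != i], 2))

# stabilizer factor s_1! s_2! (n-r)!/n! with s = (1, 2) and r = 3
factor = math.factorial(1)*math.factorial(2)*math.factorial(n - 3)/math.factorial(n)
assert abs(m_mu - factor*mbar) < 1e-12
```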

Proposition 6.3

Consider the sequences \(\Sigma _{n,2d}^{\varphi }\) and \({\mathcal {P}}^{\varphi }_{n,2d}\) embedded into \({\mathbb {R}}^{\pi (2d)}\) via the monomial mean basis. Then the resulting sequences of convex cones in \({\mathbb {R}}^{\pi (2d)}\) have limits, which we denote by \({\mathfrak {S}}_{2d}^{\varphi }\) and \({\mathfrak {P}}_{2d}^{\varphi }\). Both of these limits are closed and full-dimensional.

Proof

Since we have \(\varphi _{n,n+1}(\Sigma _{n,2d}^{S})\subseteq \Sigma _{n+1,2d}^{S}\) and \(\varphi _{n,n+1}({\mathcal {P}}^{S}_{n,2d})\subseteq {\mathcal {P}}^{S}_{n+1,2d}\), the resulting sequences of cones are increasing. Thus by [32, Prop. 1] the limits exist and are given by

$$\begin{aligned} {\mathfrak {S}}_{2d}^{\varphi }&=\overline{\biggl \{f^{(n)}:=\sum _{\lambda \vdash 2d} c_\lambda m_{\lambda }^{(n)}\ \text {with}\ f^{(m)}\in \Sigma _{m,2d}^{S}\ \text {for some}\ m\in {\mathbb {N}}\biggr \}},\\ {\mathfrak {P}}_{2d}^{\varphi }&=\overline{\biggl \{f^{(n)}:=\sum _{\lambda \vdash 2d} c_\lambda m_{\lambda }^{(n)}\ \text {with}\ f^{(m)}\in {\mathcal {P}}_{m,2d}^{S}\ \text {for some}\ m\in {\mathbb {N}}\biggr \}}. \end{aligned}$$

Clearly, both cones are full-dimensional. \(\square \)

In order to establish the result for the power mean basis, we first have to study the relationship between these two bases:

Proposition 6.4

Let \(M_n\) be the change-of-basis matrix between the monomial mean and the power mean bases of \(H_{n,2d}^S\). Then \(M_n\) converges entry-wise to a full rank matrix \(M^*\) as n grows to infinity.

Proof

The transition matrix between power sum symmetric polynomials and monomial symmetric polynomials is well understood [2]. Converting to our mean bases we have the following: let \(\nu :=(\nu _1,\ldots ,\nu _l)\vdash 2d\), \(\mu =(\mu _1,\dots ,\mu _r) \vdash 2d\), then

$$\begin{aligned} m_\mu ^{(n)}=\sum _{\nu \vdash 2d}(-1)^{r-l} \frac{(n-r)!|{{\mathcal {B}}}{{\mathcal {L}}}(\mu )^{\nu }|}{n!}n^{l}p_\nu ^{(n)}, \end{aligned}$$

where \(|\mathcal {BL}(\mu )^{\nu }|\) is the number of \(\mu \)-brick permutations of shape \(\nu \) [2]. We observe that the highest order of growth in n of the coefficient of \(p_{\nu }^{(n)}\) occurs when the number of parts of \(\nu \) is maximal. The unique \(\nu \) with the largest number of parts and non-zero \(|\mathcal {BL}(\mu )^{\nu }|\) is \(\mu \) itself. Thus we have \(\nu =\mu \), \(r=l\), and

$$\begin{aligned} |\mathcal {BL}(\nu )^{\nu }|=\nu _1!\cdots \nu _l! \quad \text {and} \quad \lim _{n \rightarrow \infty } \frac{n^r(n-r)!}{n!}=1. \end{aligned}$$

Therefore we see that asymptotically

$$\begin{aligned} m_{\mu }^{(n)}=\,p_{\mu }^{(n)}\,+\sum _{\nu \vdash 2d, \,\nu \ne \mu } a_{\mu ,\nu }(n)p_{\nu }^{(n)}, \end{aligned}$$

where the coefficients \(a_{\mu ,\nu }(n)\) tend to 0 as \(n \rightarrow \infty \). The proposition now follows. \(\square \)

Now with these preparations the proof of Theorem 2.8 will be immediate after the following two lemmas.

Lemma 6.5

Let V be a finite-dimensional vector space. Let \(A_i\) be a sequence of subsets of V converging to A. Let \(M_i\) be a sequence of linear maps from V to itself converging to identity. Let \(B_i=M_i(A_i)\). Then

$$\begin{aligned} A \subseteq \liminf _{i \rightarrow \infty } B_i \quad \text {and} \quad \limsup _{i \rightarrow \infty } B_i \subseteq A, \end{aligned}$$

so A is the limit of \(B_i\).

Proof

For the first inclusion, let \(a\in A\). Since \(A\subseteq \liminf _{i\rightarrow \infty } A_i\), there exists a sequence of points \(x_i\in A_i\) with \(x_i\rightarrow a\). Let \(b_i=M_i x_i\in B_i\). Since the linear maps \(M_i\) converge to the identity, \(b_i\) converges to a as well, and therefore \(a\in \liminf _{i \rightarrow \infty } B_i\). For the second inclusion, let \(b\in \limsup _{i \rightarrow \infty } B_i\), i.e., \(b=\lim _{m\in M}b_m\) for some infinite \(M\subset {\mathbb {N}}\) with \(b_m\in B_m\). Writing \(b_m=M_m a_m\) with \(a_m\in A_m\) and noting that \(M_m\) is invertible for all large m, we find \(a_m=M_m^{-1}b_m\rightarrow b\). Hence \(b\in \limsup _{i \rightarrow \infty } A_i\subseteq A\). \(\square \)

From the above lemma we can easily obtain the following generalization, which shows that the conclusions also hold if the limit of the linear maps \(M_i\) is any full-rank map.

Lemma 6.6

Let V be a finite-dimensional vector space. Let \(A_i\) be a sequence of subsets of V converging to A. Let \(M_i\) be a sequence of linear maps from V to itself, converging to a full-rank linear map M. Let \(B_i=M_i(A_i)\). Then

$$\begin{aligned} M(A) \subseteq \liminf _{i \rightarrow \infty } B_i \quad \text {and} \quad \limsup _{i \rightarrow \infty } B_i \subseteq M(A), \end{aligned}$$

so M(A) is the limit of \(B_i\).

Proof

We can apply Lemma 6.5 to the sequence \(C_i=M^{-1}M_i(A_i)\). Since \(B_i=M(C_i)\), and M is a full-rank linear map, the desired conclusions follow for the sequence \(B_i\) as well. \(\square \)

The existence of the limits of the sequences and their full-dimensionality can now be established by translating Proposition 6.3.

Proof of Theorem 2.8

We only give the proof for \({\mathfrak {S}}_{2d}\) since the statement for \({\mathfrak {P}}_{2d}\) follows in an analogous manner. We first observe that the sequence \(\Sigma _{n,2d}^\rho \) is semi-nested; it follows that \(\liminf \Sigma _{n,2d}^\rho =\bigcap _{n\ge 2d} \Sigma _{n,2d}^\rho \). We now apply Lemma 6.5 to the sequence \(\Sigma _{n,2d}^\rho \), with \(A_n=\Sigma ^{\varphi }_{n,2d}\) and \(M_n\) the transition maps between the monomial mean and the power mean bases. From Proposition 6.4 we know that the maps \(M_n\) converge to the identity. Therefore, we see that

$$\begin{aligned} {\mathfrak {S}}_{2d}\subseteq \bigcap _{n\ge 2d} \Sigma _{n,2d}^\rho \quad \text { and }\quad \limsup \Sigma _{n,2d}^\rho \subseteq {\mathfrak {S}}_{2d}. \end{aligned}$$

The theorem now follows, since the full-dimensionality is a direct consequence of Proposition 6.3. \(\square \)

7 Symmetric Mean Inequalities of Degree Four

In this last section we characterize quartic symmetric mean inequalities that are valid for all values of n. Recall from Section 2 that \({\mathfrak {P}}_4\) denotes the cone of all sequences \({\mathfrak {f}}=(f^{(4)},f^{(5)},\ldots )\) of degree 4 power means that are non-negative for all n and \({\mathfrak {S}}_4\) the cone of such sequences that can be written as sums of squares.

In the case of quartic forms the elements of \({\mathfrak {P}}_{4}\) can be characterized by a one-parameter family of bivariate forms, as Theorem 3.4 specializes to the following

Proposition 7.1

Let

$$\begin{aligned} {\mathfrak {f}}:=\sum _{\lambda \vdash 4}c_{\lambda }{\mathfrak {p}}_{\lambda } \end{aligned}$$

be a linear combination of quartic symmetric power means. Then \({\mathfrak {f}}\in {\mathfrak {P}}_4\) if and only if for all \(\alpha \in [0,1]\) the bivariate form

$$\begin{aligned} \Phi ^{\alpha }_{{\mathfrak {f}}}(x,y)=\Phi _{{\mathfrak {f}}}(\alpha ,1{-}\alpha ,x,y):=\sum _{\lambda \vdash 4}c_{\lambda }\Phi _{\lambda }(\alpha ,1{-}\alpha ,x,y) \end{aligned}$$

is non-negative.
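Proposition 7.1 reduces membership in \({\mathfrak {P}}_4\) to a one-parameter family of binary quartics, obtained by replacing each power mean \(p_k\) with \(\alpha x^k+(1-\alpha )y^k\) (the same substitution that produces \(h_k\) in Example 5.4). A small sketch testing the power mean inequality \({\mathfrak {p}}_{(4)}-{\mathfrak {p}}_{(2^2)}\in {\mathfrak {P}}_4\), which holds by Jensen's inequality:

```python
def phi(coeffs, alpha, x, y):
    """Phi_f(alpha, 1-alpha, x, y) for f = sum_lam c_lam p_lam:
    each power mean p_k specializes to alpha*x^k + (1-alpha)*y^k."""
    def p(k):
        return alpha*x**k + (1 - alpha)*y**k
    val = 0.0
    for lam, c in coeffs.items():
        term = c
        for k in lam:
            term *= p(k)
        val += term
    return val

# f = p_(4) - p_(2^2) is non-negative for every n by Jensen's inequality
f = {(4,): 1.0, (2, 2): -1.0}
grid = [i/20 for i in range(21)]                  # alpha in [0, 1]
pts = [t/4 for t in range(-20, 21)]               # x, y in [-5, 5]
assert all(phi(f, a, x, y) >= -1e-8
           for a in grid for x in pts for y in pts)
```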

Now we turn to the characterization of the elements on the boundary of \({\mathfrak {P}}_4\).

Lemma 7.2

Let \(0\ne {\mathfrak {f}}\in {\mathfrak {P}}_4\). Then \({\mathfrak {f}}\) is on the boundary \(\partial {\mathfrak {P}}_4\) if and only if there exists \(\alpha \in (0,1)\) such that the bivariate form \(\Phi ^{\alpha }_{{\mathfrak {f}}}(x,y)\) has a double real root.

Proof

Let \({\mathfrak {f}}\in \partial {\mathfrak {P}}_4\). Suppose that for all \(\alpha \in (0,1)\) the bivariate form \(\Phi _{{\mathfrak {f}}}^\alpha \) has no double real roots. From Proposition 7.1 we know that \(\Phi _{{\mathfrak {f}}}^\alpha \) is a non-negative form for all \(\alpha \in [0,1]\) and thus \(\Phi _{{\mathfrak {f}}}^\alpha \) is strictly positive for all \(\alpha \in (0,1)\). Thus for a sufficiently small perturbation \(\tilde{{\mathfrak {f}}}\) of the coefficients \(c_{\lambda }\) of \({\mathfrak {f}}\) the form \(\Phi _{\tilde{{\mathfrak {f}}}}^\alpha \) will remain positive for all \(\alpha \in (0,1)\). Now we deal with the cases \(\alpha =0,1\).

We observe that for all \({\mathfrak {g}} \in H^\rho _{\infty ,4}\),

$$\begin{aligned} \Phi ^0_{\mathfrak {g}}(x,y)=\Phi ^{1/2}_{\mathfrak {g}}(y,y)=\Phi ^{1/2}_{\mathfrak {g}}(1,1)y^4\quad \text {and}\quad \Phi ^1_{\mathfrak {g}}(x,y)=\Phi ^{1/2}_{\mathfrak {g}}(1,1)x^4. \end{aligned}$$

By the above we must have \(\Phi ^{1/2}_{{\mathfrak {f}}}(1,1)>0\), and the same will be true for a sufficiently small perturbation \(\tilde{{\mathfrak {f}}}\) of \({\mathfrak {f}}\). But then it follows by Proposition 7.1 that a neighborhood of \({\mathfrak {f}}\) is contained in \({\mathfrak {P}}_4\), which contradicts the assumption that \({\mathfrak {f}}\in \partial {\mathfrak {P}}_4\). Therefore there exists \(\alpha \in (0,1)\) such that \(\Phi ^\alpha _{\mathfrak {{f}}}(x,y)\) has a double real root.

Now suppose \({\mathfrak {f}} \in {\mathfrak {P}}_4\) and \(\Phi _{{\mathfrak {f}}}^\alpha (x,y)\) has a double real root for some \(\alpha \in (0,1)\). Let \({\mathfrak {f}}_{\epsilon }={\mathfrak {f}}-\epsilon {\mathfrak {p}}_{(2^2)}\). It follows that for all \(\epsilon >0\) we have \({\mathfrak {f}}_\epsilon \notin {\mathfrak {P}}_4\), since \(\Phi _{{\mathfrak {f}}_\epsilon }^{\alpha }\) is strictly negative at the double zero of \(\Phi _{{\mathfrak {f}}}^{\alpha }\). Thus \({\mathfrak {f}}\) is on the boundary of \({\mathfrak {P}}_4\). \(\square \)

We now deduce a corollary from Theorem 5.1, completely describing the elements of \({\mathfrak {S}}_4\).

Corollary 7.3

We have \({\mathfrak {f}}\in {\mathfrak {S}}_4\) if and only if

$$\begin{aligned} {\mathfrak {f}}&=\alpha _{11}{\mathfrak {p}}_{(1^4)}+2\alpha _{12}{\mathfrak {p}}_{(2,1^2)}+\alpha _{22}{\mathfrak {p}}_{(2^2)}+\beta _{11}\bigl ({\mathfrak {p}}_{(2,1^2)}-{\mathfrak {p}}_{(1^4)}\bigr )\nonumber \\&\quad +\,2\beta _{12}\bigl ({\mathfrak {p}}_{(3,1)}-{\mathfrak {p}}_{(2,1^2)}\bigr )+\beta _{22}\bigl ({\mathfrak {p}}_{(4)}-{\mathfrak {p}}_{(2^2)}\bigr ), \end{aligned}$$

where the matrices \((\alpha _{ij})_{2\times 2}\) and \((\beta _{ij})_{2\times 2}\) are positive semidefinite.

Proof

We observe from Theorem 5.1 that the coefficients of the squares of symmetric polynomials and of \((n{-}1,1)\) semi-invariants do not depend on n. Thus the cone generated by these sums of squares is the same for any n, and it corresponds precisely to the cone given in the statement of the corollary. Now observe that the limit of the square of the \((n{-}2,2)\) component is equal to \({\mathfrak {p}}_{(1^4)}/2-{\mathfrak {p}}_{(2,1^2)}+{\mathfrak {p}}_{(2^2)}/2\), which is a sum of symmetric squares. Thus the squares from the \((n{-}2,2)\) component do not contribute anything in the limit. \(\square \)
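The limit computation in this proof can be verified directly from (5.1): the normalized coefficient vector of \(Q_n^{(n-2,2)}\) in the basis \((p_{(1^4)},p_{(2,1^2)},p_{(2^2)},p_{(3,1)},p_{(4)})\) tends to \((1/2,-1,1/2,0,0)\), and \({\mathfrak {p}}_{(1^4)}/2-{\mathfrak {p}}_{(2,1^2)}+{\mathfrak {p}}_{(2^2)}/2=\frac{1}{2}\bigl ({\mathfrak {p}}_{(1^2)}-{\mathfrak {p}}_{(2)}\bigr )^2\) is visibly a symmetric square. A numerical sketch:

```python
import random

def pmean(x, k):
    return sum(t**k for t in x) / len(x)

def q_coeffs(n):
    """Normalized coefficients of Q_n^{(n-2,2)} from (5.1), in the basis
    (p_(1^4), p_(2,1^2), p_(2^2), p_(3,1), p_(4))."""
    return (0.5, -1.0, (n*n - 3*n + 3)/(2.0*n*n),
            (2.0*n - 2)/(n*n), (1.0 - n)/(2.0*n*n))

# the coefficient vector tends to (1/2, -1, 1/2, 0, 0)
for got, want in zip(q_coeffs(10**6), (0.5, -1.0, 0.5, 0.0, 0.0)):
    assert abs(got - want) < 1e-5

# and the limit equals the symmetric square (p_1^2 - p_2)^2 / 2
random.seed(5)
for _ in range(100):
    x = [random.uniform(-3, 3) for _ in range(7)]
    p1, p2 = pmean(x, 1), pmean(x, 2)
    lhs = 0.5*p1**4 - p2*p1*p1 + 0.5*p2*p2
    assert abs(lhs - 0.5*(p1*p1 - p2)**2) < 1e-9
```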

In order to algebraically characterize the elements on the boundary recall that the discriminant \({\text {disc}}f\) of a bivariate form f is a homogeneous polynomial in the coefficients of f, which vanishes exactly on the set of forms with multiple projective roots. However, note that \({\text {disc}}f=0\) alone does not guarantee that f has a double real root, since the double root may be complex.

Proposition 7.4

Let \({\mathfrak {f}}\in H^\rho _{\infty ,4}\) be of the form

$$\begin{aligned} {\mathfrak {f}}=a^2{\mathfrak {p}}_{(1^4)}+b_{11}\bigl ({\mathfrak {p}}_{(2,1^2)}-{\mathfrak {p}}_{(1^4)}\bigr )+2b_{12}\bigl ({\mathfrak {p}}_{(3,1)}-{\mathfrak {p}}_{(2,1^2)}\bigr )+b_{22}\bigl ({\mathfrak {p}}_{(4)}-{\mathfrak {p}}_{(2^2)}\bigr ), \end{aligned}$$

such that the coefficients meet the conditions in (5.4). Then for \(\alpha =1/2\) the associated form \(\Phi _{{\mathfrak {f}}}^{\alpha }(x,y)\) has a double root at \((x,y)=(1,-1)\).
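Since no proof is given for Proposition 7.4, here is a quick numerical confirmation for one admissible choice of coefficients (the values of a and \((b_{ij})\) are illustrative; at \(\alpha =1/2\) every power mean \(p_k\) specializes to \((x^k+y^k)/2\)):

```python
def phi_half(coeffs, x, y):
    """Phi_f(1/2, 1/2, x, y): every power mean p_k becomes (x^k + y^k)/2."""
    a, b11, b12, b22 = coeffs
    p = lambda k: (x**k + y**k) / 2
    p1, p2, p3, p4 = p(1), p(2), p(3), p(4)
    return (a*a*p1**4 + b11*(p2*p1*p1 - p1**4)
            + 2*b12*(p3*p1 - p2*p1*p1) + b22*(p4 - p2*p2))

coeffs = (1.0, 2.0, 1.0, 1.0)   # a = 1, (b_ij) = [[2,1],[1,1]] satisfies (5.4)

# a zero at (x, y) = (1, -1): the odd means vanish, the even ones equal 1
assert phi_half(coeffs, 1.0, -1.0) == 0.0

# and it is a double root: the first-order term vanishes, the second does not
h = 1e-5
g = lambda t: phi_half(coeffs, 1.0 + t, -1.0)
assert abs((g(h) - g(-h))/(2*h)) < 1e-6
assert (g(h) + g(-h) - 2*g(0.0))/h**2 > 0.1
```

That the first derivative vanishes is automatic here: for a positive semidefinite \((b_{ij})\) the form is a limit of sums of squares, so the zero at \((1,-1)\) is a minimum.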

Lemma 7.5

Let \({\mathfrak {f}}\in H^\rho _{\infty ,4}\) be of the form

$$\begin{aligned} {\mathfrak {f}}=a^2{\mathfrak {p}}_{(4)}+2ab{\mathfrak {p}}_{(3,1)}+(c^2-a^2){\mathfrak {p}}_{(2^2)}+(2cd+b^2-2ab){\mathfrak {p}}_{(2,1^2)}+(d^2-b^2){\mathfrak {p}}_{(1^4)}, \end{aligned}$$

such that the coefficients a, b, c, d meet the conditions in (5.3). Consider the associated form \(\Phi _{{\mathfrak {f}}}^{\alpha }\). Then there is a value \(\alpha \in (0,1)\) such that \(\Phi _{{\mathfrak {f}}}^{\alpha }\) has a real double root.

Proof

We first show that there is a (possibly complex) double root by examining the discriminant. To this end, we compute that the discriminant \(\delta _{{\mathfrak {f}}}(\alpha )\) of \(\Phi _{{\mathfrak {f}}}^{\alpha }\) factors as

$$\begin{aligned} \delta _{{\mathfrak {f}}}(\alpha )=16(\alpha -1)^3(c+d)^2\alpha ^3\delta _1(\alpha )\delta _2(\alpha )^2, \end{aligned}$$

where \(\delta _1\) and \(\delta _2\) are quadratic polynomials in \(\alpha \). We now examine these factors \(\delta _1\) and \(\delta _2\), assuming the conditions on a, b, c, d imposed by (5.3).

One easily checks that \(\delta _1(\alpha )=\delta _1(1{-}\alpha )\) and \(\delta _1(0)=\delta _1(1)=-16{a}^{2}(c{+}d ) ^2<0\). Further,

$$\begin{aligned} \delta _1\biggl (\frac{1}{2}\biggr )=-\frac{1}{4}(4{a}^{2}+4ab+4cd+4{c}^{2}+{b}^{2}) ^{2}<0. \end{aligned}$$

This clearly implies that the quadratic polynomial \(\delta _1\) is strictly negative on (0, 1). Moreover, the conditions in (5.3) yield \(\delta _2(0)=\delta _2(1)=a^2(c{+}d) >0\) and \(\delta _2({1}/{2})=c( 2a{+}b) ^{2}/4 <0\) since c is supposed to be negative.

It follows now that \(\delta _2(\alpha )\) has two real roots \(\alpha _{1},1-\alpha _{1}\in (0,1)\) and hence the polynomial \(\Phi _{{\mathfrak {f}}}(\alpha _{1},1{-}\alpha _{1},x,1)\) has a double root. However, it remains to verify that this double root is indeed real. In order to establish this we examine the polynomial \(\delta _2(\alpha ,a,b,c,d)\) more carefully. We have

$$\begin{aligned} \delta _2(\alpha ,a,b,c,d):= & {} {a}^{2}d+{a}^{2}c+4{\alpha }^{2}{a}^{2}d-4{a}^{2}\alpha d\nonumber \\&-\,4ab{\alpha }^{2}c+4ab\alpha c-{\alpha }^{2}{b}^{2}c+{b}^{2}\alpha c, \end{aligned}$$

and for \(\alpha \ne {1}/{2}\) one can solve \(\delta _2(\alpha ,a,b,c,d)=0\) for d to find

$$\begin{aligned} d=-c ( {a}^{2}-4ab{\alpha }^{2}+4ab\alpha -{\alpha }^{2}{b}^{2}+{b}^{2}\alpha )( 2a\alpha -a)^{-2}. \end{aligned}$$

Substituting this value of d shows that \(\Phi _{{\mathfrak {f}}}(\alpha ^{*},1{-}\alpha ^{*},x,1)\), where \(\alpha ^{*}\ne {1}/{2}\) denotes a root of \(\delta _2\), contains the factor \(( ax+a+b\alpha ^{*} x-b\alpha ^{*} +b) ^{2}\), and hence in this case \(\Phi _{{\mathfrak {f}}}(\alpha ^{*},1{-}\alpha ^{*},x,1)\) has a real double root.
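The elementary identities used above can be double-checked symbolically. The following sketch (using sympy; it is a verification aid, not part of the argument) confirms the boundary and midpoint values of \(\delta _2\), its symmetry under \(\alpha \mapsto 1-\alpha \), and the solution for d:

```python
from sympy import symbols, simplify, solve, Rational

alpha, a, b, c, d = symbols('alpha a b c d')

# delta_2 exactly as displayed above
delta2 = (a**2*d + a**2*c + 4*alpha**2*a**2*d - 4*a**2*alpha*d
          - 4*a*b*alpha**2*c + 4*a*b*alpha*c
          - alpha**2*b**2*c + b**2*alpha*c)

# boundary values: delta_2(0) = delta_2(1) = a^2 (c + d)
assert simplify(delta2.subs(alpha, 0) - a**2*(c + d)) == 0
assert simplify(delta2.subs(alpha, 1) - a**2*(c + d)) == 0

# midpoint value: delta_2(1/2) = c (2a + b)^2 / 4
assert simplify(delta2.subs(alpha, Rational(1, 2)) - c*(2*a + b)**2/4) == 0

# specialization used later: a = -1/2 gives c (b - 1)^2 / 4 at alpha = 1/2
assert simplify(delta2.subs({alpha: Rational(1, 2), a: Rational(-1, 2)})
                - c*(b - 1)**2/4) == 0

# symmetry: delta_2(alpha) = delta_2(1 - alpha)
assert simplify(delta2 - delta2.subs(alpha, 1 - alpha)) == 0

# for alpha != 1/2, delta_2 = 0 is linear in d and can be solved:
d_sol = solve(delta2, d)[0]
target = (-c*(a**2 - 4*a*b*alpha**2 + 4*a*b*alpha
              - alpha**2*b**2 + b**2*alpha)/(2*a*alpha - a)**2)
assert simplify(d_sol - target) == 0
```

All assertions pass, matching the displayed formulas term by term.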

In the case \(\alpha ={1}/{2}\) it follows from the observations made above that at a root of \(\delta _2(\alpha ,a,b,c,d)\) all second partial derivatives have to vanish as well. By explicit calculation one finds that this can happen only if \(a=-{1}/{2}\), in which case the polynomial \(\delta _2\) specializes to \(c(b{-}1) ^{2}/4\). Since \(c<0\), the discriminant can vanish only for \(b=1\). In this situation one gets

$$\begin{aligned} \Phi _{{\mathfrak {f}}}\biggl (\frac{1}{2},\frac{1}{2},x,1\biggr )=\frac{1}{16}( d+2dx+2c+2c{x}^{2}+d{x}^{2}) ^{2} \end{aligned}$$

and it follows that \(X_{\pm }:=\bigl (-d\pm 2\sqrt{-c(c{+}d)}\bigr )(2c{+}d)^{-1}\) are the two double roots in this case. The conditions imposed on c, d by Theorem 5.3 ensure that \(-c(c{+}d)\ge 0\), and hence these roots are also real. Therefore we have shown that in all cases the double roots are indeed real. \(\square \)
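As a sanity check on the last step, the quadratic factor \((2c{+}d)x^2+2dx+(2c{+}d)\) of the expression inside the displayed square has roots \(\bigl (-d\pm 2\sqrt{-c(c{+}d)}\bigr )/(2c{+}d)\). The following sympy sketch (a verification aid, not part of the proof) substitutes both candidates back into the quadratic:

```python
from sympy import symbols, sqrt, simplify

x, c, d = symbols('x c d')

# quadratic factor of Phi_f(1/2, 1/2, x, 1), read off from the display:
# (1/16) * ((2c + d) x^2 + 2 d x + (2c + d))^2
q = (2*c + d)*x**2 + 2*d*x + (2*c + d)

# both candidate double roots annihilate q
for sign in (+1, -1):
    root = (-d + sign*2*sqrt(-c*(c + d)))/(2*c + d)
    assert simplify(q.subs(x, root)) == 0
```

Since the quartic is (1/16) times the square of q, each simple root of q is a double root of \(\Phi _{{\mathfrak {f}}}({1}/{2},{1}/{2},x,1)\).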

We are now in a position to show that \({\mathfrak {P}}_4={\mathfrak {S}}_4\).

Proof of Theorem 2.9

Since \({\mathfrak {S}}_4\subset {\mathfrak {P}}_4\) and both sets are closed convex cones, it suffices to show that every \({\mathfrak {f}}\) on the boundary of \({\mathfrak {S}}_4\) also lies in the boundary of \({\mathfrak {P}}_4\). It follows from Theorem 5.3 that a sequence \({\mathfrak {f}}:=(f^{(4)},f^{(5)},\ldots )\) in the boundary of \({\mathfrak {S}}_4\) that does not lie in the boundary of \({\mathfrak {P}}_4\) would have to be of the form considered in Lemma 7.5. However, combining Lemmas 7.5 and 7.2 shows that \({\mathfrak {f}}\in \partial {\mathfrak {S}}_{4}\) implies \({\mathfrak {f}}\in \partial {\mathfrak {P}}_4\), and we conclude that \({\mathfrak {S}}_4={\mathfrak {P}}_4\). \(\square \)

8 Conclusion and Open Questions

Besides Conjecture 1, there is another important question left open in our work. Corollary 7.3 gave a description of the asymptotic symmetric sums of squares cone in terms of the squares involved. In this description of the limit, not all semi-invariant polynomials were necessary. It is natural to investigate the situation also in arbitrary degree:

Question 1

Let \({\mathfrak {f}}\in {\mathfrak {S}}_{2d}\). Which semi-invariant polynomials are necessary for a description of \({\mathfrak {f}}\) as a sum of squares?

The general setup of our work focused on the case of a fixed degree. Examples like the difference of the arithmetic and the geometric mean show, however, that it would also be very interesting to understand the situation where the degree is not fixed.

Question 2

What can be said about the quantitative relationship between the cones \(\Sigma ^S_{n,2d}\) and \({\mathcal {P}}^S_{n,2d}\) in asymptotic regimes other than fixed degree 2d?