Abstract
Let K be a field and put \({\mathcal {A}}:=\{(i,j,k,m)\in \mathbb {N}^{4}:\;i\le j\;\text{ and }\;m\le k\}\). For any given \(A\in {\mathcal {A}}\) we consider the sequence of polynomials \((r_{A,n}(x))_{n\in \mathbb {N}}\) defined by the recurrence
where the initial polynomials \(r_{A,0}, r_{A,1}\in K[x]\) are of degree i, j respectively and \(f_{n}\in K[x], n\ge 2\), is of degree k with variable coefficients. The aim of the paper is to prove a formula for the resultant \({\text {Res}}(r_{A,n}(x),r_{A,n-1}(x))\). Our result extends the classical Schur formula, which is recovered for \(A=(0,1,1,0)\). As an application we obtain a formula for the resultant \({\text {Res}}(r_{A,n},r_{A,n-2})\), where \((r_{A,n})_{n\in \mathbb {N}}\) is the sequence of orthogonal polynomials corresponding to a symmetric moment functional.
1 Introduction
Let \(\mathbb {N}\) denote the set of non-negative integers and \(\mathbb {N}_{+}\) the set of positive integers; for a given \(k\in \mathbb {N}_{+}\) we write \(\mathbb {N}_{\ge k}\) for the set of integers \(\ge k\).
Let K be a field and consider polynomials \(F, G\in K[x]\). The resultant \({\text {Res}}(F, G)\) of the polynomials F, G is an element of K which encodes information about their possible common roots. More precisely, \({\text {Res}}(F, G)=0\) if and only if the polynomials F, G have a common factor of positive degree. The computation of resultants is, in general, a difficult task. Of special interest is the computation of resultants of pairs of polynomials which are interesting from either a number-theoretic or an analytic point of view. A classical result is the computation of the resultant of two cyclotomic polynomials \(\Phi _{m}, \Phi _{n}\). More precisely, Apostol proved the formula
where \(\varphi \) is the Euler phi function [1].
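Apostol's formula is easy to check numerically; for instance it gives \({\text {Res}}(\Phi _{4},\Phi _{2})=2^{\varphi (2)}=2\) (since \(4/2=2\) is a prime power) and \({\text {Res}}(\Phi _{3},\Phi _{5})=1\). The following sketch (plain Python over exact rationals; an illustration, not taken from the paper) computes cyclotomic polynomials by exact division of \(x^{n}-1\) and resultants by the Euclidean algorithm:

```python
from fractions import Fraction

def polymod(F, G):
    """Remainder of F modulo G; coefficient lists, highest degree first."""
    F = [Fraction(c) for c in F]
    while len(F) >= len(G):
        c = F[0] / G[0]
        for i in range(len(G)):
            F[i] -= c * G[i]
        F.pop(0)
    while len(F) > 1 and F[0] == 0:
        F.pop(0)
    return F or [Fraction(0)]

def resultant(F, G):
    """Res(F, G) via the Euclidean algorithm (same normalization as the
    Sylvester determinant)."""
    dF, dG = len(F) - 1, len(G) - 1
    if dF == 0:
        return Fraction(F[0]) ** dG
    if dG == 0:
        return Fraction(G[0]) ** dF
    if dF < dG:                              # Res(F, G) = (-1)^(dF dG) Res(G, F)
        return (-1) ** (dF * dG) * resultant(G, F)
    r = polymod(F, G)
    if r == [0]:
        return Fraction(0)                   # common factor of positive degree
    return (-1) ** (dF * dG) * Fraction(G[0]) ** (dF - len(r) + 1) * resultant(G, r)

def cyclotomic(n):
    """Coefficients of the n-th cyclotomic polynomial, highest degree first."""
    poly = [Fraction(1)] + [Fraction(0)] * (n - 1) + [Fraction(-1)]  # x^n - 1
    for d in range(1, n):
        if n % d == 0:                       # divide out Phi_d for proper divisors d
            q, den = [], cyclotomic(d)
            while len(poly) >= len(den):
                c = poly[0] / den[0]
                q.append(c)
                for i in range(len(den)):
                    poly[i] -= c * den[i]
                poly.pop(0)
            poly = q
    return poly

assert resultant(cyclotomic(4), cyclotomic(2)) == 2   # 4/2 = 2, phi(2) = 1
assert resultant(cyclotomic(6), cyclotomic(2)) == 3   # 6/2 = 3, phi(2) = 1
assert resultant(cyclotomic(3), cyclotomic(5)) == 1   # 5/3 is not a prime power
```

The recursive `cyclotomic` recomputes small cases repeatedly, which is harmless for the tiny indices used here.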
On the other hand, we have a result of Schur which allows the computation of resultants of consecutive terms of the sequence \((r_{n}(x))_{n\in \mathbb {N}}\) of polynomials defined by a linear recurrence of order two. More precisely, let \(r_{0}(x)=1, r_{1}(x)=a_{1}x+b_{1}\) and define
with \(a_{n}, b_{n}, c_{n}\in \mathbb {C}\) satisfying \(a_{n}c_{n}\ne 0\). Under these assumptions, we have the following compact formula proved by Schur [9] (see also [10, p. 143]):
In fact, Schur obtained a slightly different result, i.e., he obtained an expression for \(\prod _{i=1}^{n}r_{n-1}(x_{i,n})\), where \(x_{i,n}\) is the ith root of the polynomial \(r_{n}\).
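Schur's compact formula can be tested numerically. In the sketch below (an illustration; the coefficient values are arbitrary choices, not from the paper) the resultants \({\text {Res}}(r_{n},r_{n-1})\) are computed from the definition and compared with the closed form \((-1)^{n(n-1)/2}\prod _{k=2}^{n}c_{k}^{k-1}\prod _{k=1}^{n-1}a_{k}^{2(n-k)}\), one standard way of writing Schur's formula (it follows by iterating the Euclidean reduction recalled in Sect. 2):

```python
from fractions import Fraction

def polymod(F, G):
    """Remainder of F modulo G; coefficient lists, highest degree first."""
    F = [Fraction(c) for c in F]
    while len(F) >= len(G):
        c = F[0] / G[0]
        for i in range(len(G)):
            F[i] -= c * G[i]
        F.pop(0)
    while len(F) > 1 and F[0] == 0:
        F.pop(0)
    return F or [Fraction(0)]

def resultant(F, G):
    """Res(F, G) via the Euclidean algorithm."""
    dF, dG = len(F) - 1, len(G) - 1
    if dF == 0:
        return Fraction(F[0]) ** dG
    if dG == 0:
        return Fraction(G[0]) ** dF
    if dF < dG:
        return (-1) ** (dF * dG) * resultant(G, F)
    r = polymod(F, G)
    if r == [0]:
        return Fraction(0)
    return (-1) ** (dF * dG) * Fraction(G[0]) ** (dF - len(r) + 1) * resultant(G, r)

def pmul(F, G):
    out = [Fraction(0)] * (len(F) + len(G) - 1)
    for i, x in enumerate(F):
        for j, y in enumerate(G):
            out[i + j] += x * y
    return out

def psub(F, G):
    size = max(len(F), len(G))
    F = [Fraction(0)] * (size - len(F)) + list(F)
    G = [Fraction(0)] * (size - len(G)) + list(G)
    return [x - y for x, y in zip(F, G)]

# arbitrary sample data: a_1..a_5, b_1..b_5 and c_2..c_5 with a_n, c_n nonzero
a = [Fraction(v) for v in (2, -1, 3, 1, 2)]
b = [Fraction(v) for v in (1, 2, -1, 0, 3)]
c = [Fraction(v) for v in (1, -2, 3, 1)]

# r_0 = 1, r_1 = a_1 x + b_1, r_n = (a_n x + b_n) r_{n-1} - c_n r_{n-2}
r = [[Fraction(1)], [a[0], b[0]]]
for n in range(2, 6):
    r.append(psub(pmul([a[n - 1], b[n - 1]], r[n - 1]), pmul([c[n - 2]], r[n - 2])))

def schur(n):
    """Candidate closed form for Res(r_n, r_{n-1})."""
    s = Fraction((-1) ** (n * (n - 1) // 2))
    for k in range(2, n + 1):
        s *= c[k - 2] ** (k - 1)
    for k in range(1, n):
        s *= a[k - 1] ** (2 * (n - k))
    return s

assert all(resultant(r[n], r[n - 1]) == schur(n) for n in range(2, 6))
```

For \(n=2\) both sides reduce to \(-c_{2}a_{1}^{2}\), which is easy to verify by hand.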
The importance of the Schur method lies in its applications to the computation of discriminants of orthogonal polynomials. Indeed, Favard proved that each family of orthogonal polynomials corresponds to a sequence \((r_{n}(x))_{n\in \mathbb {N}}\) for suitably chosen sequences \((a_{n})_{n\in \mathbb {N}}, (b_{n})_{n\in \mathbb {N}}\) and \((c_{n})_{n\in \mathbb {N}}\) (for the proof of this important theorem see [2, Theorem 4.4]). Computations of discriminants of certain classes of orthogonal polynomials can be found in [10, Theorem 6.71].
The method of Schur was generalized by Gishe and Ismail [4]. As an application, the authors reproved and generalized the result of Dilcher and Stolarsky from [3] concerning the resultant of certain linear combinations of Chebyshev polynomials of the first and the second kind. All these results were recently extended by Sawa and Uchida [8, Theorem 3.1] by a clever application of the Schur method. However, all the mentioned results rest on a strong assumption on the considered sequences of polynomials: the degree of the nth term needs to equal n. Thus, it is natural to ask whether the method of Schur can be generalized to other families of recursively defined polynomials. Of special interest is the situation when the polynomial coefficient of \(r_{n-1}\) in the recurrence defining the sequence \((r_{n}(x))_{n\in \mathbb {N}}\) is of degree \(\ge 2\). Moreover, one can ask whether the initial polynomials \(r_{0}, r_{1}\) can have degrees not necessarily equal to 0 and 1, respectively. The aim of this note is to offer such a generalization and apply it to get some new resultant formulas. For the precise statement of our generalization and the main result, we refer the reader to Sect. 3.
Let us describe the content of the paper in some detail. In Sect. 2 we present a reminder of basic properties of the notion of resultant. In Sect. 3 we prove the main result of the paper, i.e., the expression for the resultant of consecutive terms of the sequence \((r_{A,n})_{n\in \mathbb {N}}\) (Theorem 3.1). Finally, in the last section we present some applications of our main result. In particular, under some mild assumptions on the coefficients of the recurrence defining the sequence \((r_{A,n})_{n\in \mathbb {N}}\) we derive an expression for the resultant of the polynomials \(r_{A,n}, r_{A,n-2}\).
2 Reminder on basic properties of resultants
Let K be a field and consider the polynomials \(F, G\in K[x]\) given by
The resultant of the polynomials F, G is defined as
where \(\alpha _{1},\ldots ,\alpha _{n}\) and \(\beta _{1},\ldots ,\beta _{m}\) are the roots of F and G respectively (viewed in an appropriate field extension of K). There is an alternative formula in terms of a certain determinant. More precisely, \({\text {Res}}(F,G)\) is the element of K given by the determinant of the \((m+n)\times (m+n)\) Sylvester matrix
The expression of a resultant as the determinant of the Sylvester matrix allows one to consider it for polynomials with coefficients in a commutative ring (even one with zero divisors). However, in the sequel we concentrate on the case when the considered polynomials have coefficients in a field K.
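Both definitions above are easy to make concrete. The sketch below (plain Python over exact rationals; an illustration, not taken from the paper) builds the Sylvester matrix and evaluates its determinant; the examples check the common-factor criterion and the product-over-roots formula.

```python
from fractions import Fraction

def sylvester(F, G):
    """(m+n) x (m+n) Sylvester matrix of F (degree n) and G (degree m);
    coefficient lists, highest degree first."""
    n, m = len(F) - 1, len(G) - 1
    M = [[Fraction(0)] * (m + n) for _ in range(m + n)]
    for i in range(m):                       # m shifted rows of F
        for j, cf in enumerate(F):
            M[i][i + j] = Fraction(cf)
    for i in range(n):                       # n shifted rows of G
        for j, cg in enumerate(G):
            M[m + i][i + j] = Fraction(cg)
    return M

def det(M):
    """Determinant by Gaussian elimination over the exact rationals."""
    A = [row[:] for row in M]
    size, sign, d = len(A), 1, Fraction(1)
    for i in range(size):
        piv = next((k for k in range(i, size) if A[k][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            sign = -sign
        d *= A[i][i]
        for k in range(i + 1, size):
            f = A[k][i] / A[i][i]
            for col in range(i, size):
                A[k][col] -= f * A[i][col]
    return sign * d

def resultant(F, G):
    return det(sylvester(F, G))

# (x - 1)(x - 2) and (x - 1)(x + 3) share the factor x - 1, so Res = 0;
# Res(x^2 + 1, x - 3) = (i - 3)(-i - 3) = 10 by the product formula.
assert resultant([1, -3, 2], [1, 2, -3]) == 0
assert resultant([1, 0, 1], [1, -3]) == 10
```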
We collect basic properties of the resultant of the polynomials F, G:
Moreover, if \(F(x)=a_{0}\) is a constant polynomial then, unless \(F=G=0\), we have
The proofs of the above properties can be found in [6, Chapter 3]. Finally, we recall an important result: a formula for the resultant of the polynomials G and F, provided that \(F(x)=q(x)G(x)+r(x)\). More precisely, we have the following.
Lemma 2.1
Let \(F, G\in K[x]\) be given by (2.1) and suppose that \(F(x)=q(x)G(x)+r(x)\) for some \(q, r\in K[x]\). Then we have the formula
The proof of the above lemma can be found in [7] (see also [3]).
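In one common normalization the lemma reads \({\text {Res}}(G,F)={\text {lc}}(G)^{\deg F-\deg r}\,{\text {Res}}(G,r)\). The sketch below (an illustration; F and G are arbitrary sample polynomials, not from the paper) checks this identity against the Sylvester-determinant definition, using a naive Laplace-expansion determinant that is adequate for the small matrices involved.

```python
from fractions import Fraction

def det(M):
    """Determinant by Laplace expansion (fine for the small matrices used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def resultant(F, G):
    """Res(F, G) as the Sylvester determinant; coefficient lists, highest first."""
    n, m = len(F) - 1, len(G) - 1
    M = [[Fraction(0)] * (m + n) for _ in range(m + n)]
    for i in range(m):
        for j, cf in enumerate(F):
            M[i][i + j] = Fraction(cf)
    for i in range(n):
        for j, cg in enumerate(G):
            M[m + i][i + j] = Fraction(cg)
    return det(M)

def polymod(F, G):
    """Remainder r in F = qG + r."""
    F = [Fraction(c) for c in F]
    while len(F) >= len(G):
        c = F[0] / G[0]
        for i in range(len(G)):
            F[i] -= c * G[i]
        F.pop(0)
    while len(F) > 1 and F[0] == 0:
        F.pop(0)
    return F

F = [1, 1, 0, -2, 5]   # x^4 + x^3 - 2x + 5 (arbitrary sample)
G = [2, 0, 1]          # 2x^2 + 1
r = polymod(F, G)      # here r = -5/2 x + 21/4
lhs = resultant(G, F)
rhs = Fraction(G[0]) ** (len(F) - len(r)) * resultant(G, r)
assert lhs == rhs      # Res(G, F) = lc(G)^(deg F - deg r) * Res(G, r)
```

The identity follows directly from \({\text {Res}}(G,H)={\text {lc}}(G)^{\deg H}\prod _{G(\beta )=0}H(\beta )\), since \(F(\beta )=r(\beta )\) at every root \(\beta \) of G.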
For possible generalization of the notion of resultant for polynomials with many variables we refer the reader to [5].
3 Generalization of Schur theorem
In this section we state and prove the main result of this paper: a generalization of the Schur theorem. Let K be a field. We define the set
and for given \(A\in {\mathcal {A}}\) we consider the sequence of polynomials \((r_{A,n}(x))_{n\in \mathbb {N}}\) defined in the following way:
where
We assume that \(p_{s}, q_{s}, v_{n}, a_{n,s}\in K\) (in the appropriate range of parameters s, n) and \(p_{i}q_{j}a_{n,k}\ne 0\) for each \(n\in \mathbb {N}_{\ge 2}\). Moreover, we assume that \(a_{2,k}q_{i}-v_{2}p_{i}\ne 0\) for given i, k. In other words \({\text {deg}}r_{A,0}=i, {\text {deg}}r_{A,1}=j\) and \({\text {deg}}f_{n}=k\) for each \(n\in \mathbb {N}_{\ge 2}\).
Theorem 3.1
Under the above assumptions on \(A, r_{A,0}, r_{A,1}\) and \(f_{n}\) for \(n\in \mathbb {N}_{\ge 2}\) we have the following formula
where \(e_{A}(u)=((u-2)k+j)((u-1)k+j+1)\) and
Proof
For \(n\in \mathbb {N}_{\ge 2}\) we write \(R_{n}={\text {Res}}(r_{A,n},r_{A,n-1})\). First of all, from the assumptions on i, j, k, m, the assumption \(a_{2,k}q_{i}-v_{2}p_{i}\ne 0\) and a simple use of the recurrence relation defining the sequence \((r_{A,n})_{n\in \mathbb {N}}\), we immediately see that the leading term \(L_{n}\) of the polynomial \(r_{A,n}\) is given by
and it is non-zero. In consequence, we see that
In order to give the value of the constant term, say \(C_{n}\), of \(r_{A,n}\), i.e., the value \(r_{A,n}(0)\), we consider two cases: \(m>0\) and \(m=0\). If \(m>0\), then by simple induction one can prove that
If \(m=0\) then the value \(C_{n}=r_{A,n}(0)\) satisfies the recurrence relation \(C_{n}=a_{n,0}C_{n-1}-v_{n}C_{n-2}\). At the level of generality we are working in, we cannot give an exact form of \(C_{n}\); in fact, we will not need it.
We are ready to prove our theorem. However, in order to simplify the proof a bit, we first compute the resultant of the polynomials \(r_{A,2}(x), r_{A,1}(x)\). We have the following chain of equalities
where in the last equality we used the identity \({\text {Res}}(r_{A,1},x)=r_{A,1}(0)=q_{0}\).
Now let us assume that \(n\ge 3\) and consider the polynomials \(r_{A,n}(x), r_{A,n-1}(x)\). We have the following chain of equalities:
Note that the first five equalities are true for all \(m\in \mathbb {N}\), not only for \(m>0\). We will need this observation later.
If \(m>0\), then from the above computations we obtain a recurrence relation for the value \(R_{n}={\text {Res}}(r_{A,n},r_{A,n-1})\). More precisely, we have
We consider the case \(i<j \vee (i=j \wedge m<k)\) first. By a simple iteration of the above recurrence, together with the expression for \(R_{2}\), we obtain the formula
We note the identity
and after simplification of the resulting expression we get the first formula from the statement of our theorem with \(T_{A}=q_{j}\).
Performing exactly the same reasoning as above we get the formula from the statement in the case when \(i=j\) and \(m=k\), with \(T_{A}=(a_{2,k}q_{i}-v_{2}p_{i})/a_{2,k}\).
Let us come back to the case \(m=0\). We put \(R_{n}'={\text {Res}}(r_{A,n}(x),r_{A,n-1}(x))\). First of all, performing exactly the same reasoning as in the computation of \(R_{2}\) in the case \(m>0\), we easily get the equality
Note that \(R_{2}'\) is equal to \(R_{2}\) with m replaced by 0.
Let \(n\ge 3\). In order to find a recurrence relation for \(R_{n}'\) we follow exactly the same approach as in the case of \(R_{n}\). In particular, we have
Again, from our reasoning, we see that \(R_{n}'\) is equal to \(R_{n}\) with m replaced by 0, where we take into account the convention that \(r_{A,n-1}(0)^{0}=1\) for any value of \(r_{A,n-1}(0)\). In particular, we allow \(r_{A,n-1}(0)\) to be 0.
Summing up, our formula for \({\text {Res}}(r_{A,n},r_{A,n-1})\) from the statement of our theorem holds for each \(m\in \mathbb {N}\). \(\square \)
Remark 3.2
The formula for \({\text {Res}}(r_{A,n},r_{A,n-1})\) presented in Theorem 3.1 is not the most general one. Indeed, one can consider a slightly more general recurrence and obtain a similar result. More precisely, for given \(A\in {\mathcal {A}}\) one can consider the sequence \((g_{A,n}(x))_{n\in \mathbb {N}}\) defined in the following way:
where \(p_{i}q_{j}\ne 0\) and
where \(a_{n,s}b_{m}\ne 0\) for each \(n\in \mathbb {N}_{\ge 2}\). In particular, h is fixed and does not depend on n. Moreover, in order to guarantee the good behavior of the degree of the polynomial \(g_{A,n}\), we need to assume \(a_{2,k}q_{i}-v_{2}b_{m}p_{i}\ne 0\) for the given k, i, m. With the above definitions and assumptions, we get the equalities \({\text {deg}}g_{A,0}=i, {\text {deg}}g_{A,1}=j\) and for \(n\ge 2\) we have \({\text {deg}}g_{A,n}=(n-1)k+j\). Thus we see that the leading term \(L_{A,n}\) of the polynomial \(g_{A,n}\) has the form:
where
Now, if we put \(G_{n}={\text {Res}}(g_{A,n},g_{A,n-1})\) then, using essentially the same reasoning as in the proof of Theorem 3.1, we get the recurrence relation for the sequence \((G_{n})_{n\in \mathbb {N}_{+}}\) in the form:
By independent computation we get the equality
and the explicit formula
However, in order to compute \({\text {Res}}(g_{A,n},g_{A,n-1})\) with the help of the above formula we need to know the value of \({\text {Res}}(g_{A,s},h)\) for each \(s=1,\ldots ,n-1\), which in general is a difficult task (due to the complicated and essentially unknown form of the coefficients of \(g_{A,s}\)). We have a simple expression for \({\text {Res}}(g_{A,s},h)\) only in the case \(h(x)=x^{m}\). This is exactly the case presented in Theorem 3.1.
4 Applications
In this section we offer some applications of Theorem 3.1. We consider the sequence \((r_{n})_{n\in \mathbb {N}}\) governed by the recurrence: \(r_{0}(x)=1, r_{1}(x)=a_{1}x+b_{1}\) and
where \(a_{n}, b_{n}, c_{n}\in K\) and \(a_{n}c_{n}\ne 0\) for \(n\in \mathbb {N}_{+}\) and \(m\in \{0,1\}\). For \(m=0\) we get the recurrence considered by Schur. In this case the result of Schur gives the expression for the resultant of the polynomials \(r_{n}\) and \(r_{n-1}\). Now, we show that under some assumptions on the sequences \((a_{n})_{n\in \mathbb {N}_{+}}, (b_{n})_{n\in \mathbb {N}_{+}}\) one can get a nice expression for the resultant of the polynomials \(r_{n}, r_{n-2}\). More precisely, we prove the following.
Theorem 4.1
Let \(m\in \{0,1\}\). Let \(a_{n}, b_{n}, c_{n}\in K\) for \(n\in \mathbb {N}_{+}\) and suppose that \(a_{n}c_{n}\ne 0\). Let us consider the sequence of polynomials \((r_{n}(x))_{n\in \mathbb {N}}\) defined by (4.1) and suppose that for each \(n\ge 2\) we have \(a_{n-2}b_{n}=a_{n}b_{n-2}\). Moreover, let us put \(d_{n}=\frac{a_{n}}{a_{n-2}}\). Then, if \(m=0\) the following formulas hold:
If \(m=1\) we have the following formulas:
Proof
In order to apply Theorem 3.1 for computation of \({\text {Res}}(r_{2n},r_{2(n-1)})\) and \({\text {Res}}(r_{2n+1},r_{2n-1})\) we need to express \(r_{n}\) in terms of \(r_{n-2}\) and \(r_{n-4}\). First, solving (4.1) with respect to \(r_{n-1}\) we get
Next, from the relation (4.1) with n replaced by \(n-1\) and the above expressions we get
Observe now that the condition \(a_{n}b_{n-2}=a_{n-2}b_{n}\) implies that the expression
does not depend on x. Thus, the relation (4.2) can be rewritten in the following equivalent form
where
First we consider the case \(m=0\). Having the above recurrence relation (4.3) it is an easy task to get the expression for \({\text {Res}}(r_{2n},r_{2(n-1)})\). Indeed, we replace n by 2n and apply Theorem 3.1 to the polynomial \(r_{A,n}(x):=r_{2n}(x), n\in \mathbb {N}\), with
After necessary simplifications we get the expression from the statement of the theorem.
Next, we note that \(r_{1}(x)=a_{1}x+b_{1}\) and from the identity \(a_{1}b_{3}=a_{3}b_{1}\) we get \(r_{3}(x)\equiv 0\pmod {a_{1}x+b_{1}}\). In consequence, from the relation (4.3) we immediately get that \(r_{2n+1}\equiv 0\pmod {a_{1}x+b_{1}}\) for each \(n\in \mathbb {N}\). Thus, in order to apply Theorem 3.1 we write \(r_{A,n}(x):=\frac{r_{2n+1}(x)}{a_{1}x+b_{1}}\) for \(n\in \mathbb {N}\) with
After necessary simplifications we get the first part of our theorem.
In case \(m=1\) we perform exactly the same reasoning. We replace n by 2n and apply Theorem 3.1 to the polynomial \(r_{A,n}(x):=r_{2n}(x), n\in \mathbb {N}\), with
Finally, in order to consider the last formula from the statement of our theorem, we note that \(r_{1}(x)=a_{1}x+b_{1}\) and from the identity \(a_{1}b_{3}=a_{3}b_{1}\) we get \(r_{3}(x)\equiv 0\pmod {a_{1}x+b_{1}}\). In consequence, from the relation (4.3) we immediately get that \(r_{2n+1}\equiv 0\pmod {a_{1}x+b_{1}}\) for each \(n\in \mathbb {N}\). Thus, in order to apply Theorem 3.1 we write \(r_{A,n}(x):=\frac{r_{2n+1}(x)}{a_{1}x+b_{1}}\) for \(n\in \mathbb {N}\) with
After necessary simplifications we get our last formula. \(\square \)
Remark 4.2
The condition saying that \(a_{n}b_{n-2}=a_{n-2}b_{n}\) for \(n\in \mathbb {N}_{\ge 3}\) seems to be quite strong. However, it is clear that for \(b_{n}=0\) this condition is satisfied. Notice that in this case we deal with an important class of orthogonal polynomials, the one corresponding to symmetric moment functionals. We recall the necessary definitions. Let \((\mu _{n})_{n\in \mathbb {N}}\) be a sequence of complex numbers and let \({\mathcal {L}}\) be a complex-valued function defined on \(\mathbb {C}[x]\) satisfying the conditions
for each \(n\in \mathbb {N}\) and \(\alpha _{1}, \alpha _{2}\in \mathbb {C}\).
The moment functional is used in the definition of orthogonal polynomials. Indeed, the sequence \((Q_{n}(x))_{n\in \mathbb {N}}\) is an orthogonal sequence if:
(1) \({\text {deg}}Q_{n}=n\),
(2) \({\mathcal {L}}[Q_{m}(x)Q_{n}(x)]=0\) for \(m\ne n\) and \({\mathcal {L}}[Q_{n}(x)^2]\ne 0\).
The moment functional is called symmetric if all of its moments of odd order are 0, i.e., \({\mathcal {L}}[x^{2n+1}]=0\) for \(n\in \mathbb {N}\). This is equivalent to the condition \(b_{n}=0\) for \(n\ge 1\) (see [2, Theorem 4.3]) and guarantees the existence of our compact formula given in Theorem 4.1.
This condition is satisfied by the Legendre, Hermite, Chebyshev, Bessel, Lommel \(\ldots \) and many other sequences of orthogonal polynomials (see [2, Chapter V]). We present three illustrative examples.
Example 4.3
The sequence \((P_{n}(x))_{n\in \mathbb {N}}\) of Legendre polynomials is given by \(P_{0}(x)=1\), \(P_{1}(x)=x\) and the recurrence relation
In particular, we have
It is clear that \(a_{n-2}b_{n}=a_{n}b_{n-2} (=0)\) for \(n\ge 2\) and we get
After necessary simplifications, we get the following formulas:
with the convention that \(0^{0}=1\). As a simple consequence of our computations, we get that the polynomials \(P_{n}, P_{n-2}\) are co-prime or their only common root is \(x=0\) for each \(n\in \mathbb {N}_{\ge 2}\).
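The coprimality claim is easy to test numerically. The sketch below (an illustration, not from the paper) builds \(P_{n}\) over exact rationals from the Bonnet recurrence \(nP_{n}=(2n-1)xP_{n-1}-(n-1)P_{n-2}\) and computes polynomial gcds: for even n the gcd of \(P_{n}\) and \(P_{n-2}\) is constant, while for odd n it is (a scalar multiple of) x.

```python
from fractions import Fraction

def pmod(F, G):
    """Remainder of F modulo G; coefficient lists, highest degree first."""
    F = F[:]
    while len(F) >= len(G):
        c = F[0] / G[0]
        for i in range(len(G)):
            F[i] -= c * G[i]
        F.pop(0)
    while len(F) > 1 and F[0] == 0:
        F.pop(0)
    return F or [Fraction(0)]

def pgcd(F, G):
    """Monic polynomial gcd via the Euclidean algorithm."""
    F = [Fraction(c) for c in F]
    G = [Fraction(c) for c in G]
    while G != [0]:
        F, G = G, pmod(F, G)
    return [c / F[0] for c in F]

# Bonnet recurrence: n P_n = (2n - 1) x P_{n-1} - (n - 1) P_{n-2}
P = [[Fraction(1)], [Fraction(1), Fraction(0)]]          # P_0 = 1, P_1 = x
for n in range(2, 9):
    t = [Fraction(2 * n - 1) * cf / n for cf in P[n - 1]] + [Fraction(0)]
    s = [Fraction(n - 1) * cf / n for cf in P[n - 2]]
    for i, cf in enumerate(s):                           # subtract, aligned by degree
        t[len(t) - len(s) + i] -= cf
    P.append(t)

# even n: P_n and P_(n-2) are coprime; odd n: the only common root is x = 0
assert pgcd(P[4], P[2]) == [1] and pgcd(P[6], P[4]) == [1]
assert pgcd(P[5], P[3]) == [1, 0] and pgcd(P[7], P[5]) == [1, 0]
```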
Example 4.4
In the case of the Hermite polynomials \((H_{n})_{n\in \mathbb {N}}\) we have \(H_{0}(x)=1\), \( H_{1}(x)=x\) and, for \(n\ge 2\), we have the recurrence relation
In particular, we have
It is clear that \(a_{n-2}b_{n}=a_{n}b_{n-2} (=0)\) for \(n\ge 2\) and we get that \(d_{n}=1\). In consequence, after necessary simplifications, we get the following formulas:
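With \(H_{0}(x)=1\) and \(H_{1}(x)=x\), the recurrence is the one satisfied by the probabilists' Hermite polynomials, \(H_{n}=xH_{n-1}-(n-1)H_{n-2}\) (so \(a_{n}=1\), \(b_{n}=0\), \(c_{n}=n-1\), consistent with \(d_{n}=1\)). The sketch below (an illustration, not from the paper) generates them and evaluates a few resultants \({\text {Res}}(H_{n},H_{n-2})\) exactly; for odd n the shared root \(x=0\) forces the resultant to vanish.

```python
from fractions import Fraction

def polymod(F, G):
    """Remainder of F modulo G; coefficient lists, highest degree first."""
    F = [Fraction(c) for c in F]
    while len(F) >= len(G):
        c = F[0] / G[0]
        for i in range(len(G)):
            F[i] -= c * G[i]
        F.pop(0)
    while len(F) > 1 and F[0] == 0:
        F.pop(0)
    return F or [Fraction(0)]

def resultant(F, G):
    """Res(F, G) via the Euclidean algorithm."""
    dF, dG = len(F) - 1, len(G) - 1
    if dF == 0:
        return Fraction(F[0]) ** dG
    if dG == 0:
        return Fraction(G[0]) ** dF
    if dF < dG:
        return (-1) ** (dF * dG) * resultant(G, F)
    r = polymod(F, G)
    if r == [0]:
        return Fraction(0)
    return (-1) ** (dF * dG) * Fraction(G[0]) ** (dF - len(r) + 1) * resultant(G, r)

# H_0 = 1, H_1 = x, H_n = x H_{n-1} - (n - 1) H_{n-2}
H = [[Fraction(1)], [Fraction(1), Fraction(0)]]
for n in range(2, 7):
    t = H[n - 1] + [Fraction(0)]                  # x * H_{n-1}
    s = [(n - 1) * cf for cf in H[n - 2]]
    for i, cf in enumerate(s):
        t[len(t) - len(s) + i] -= cf
    H.append(t)

assert H[4] == [1, 0, -6, 0, 3]                   # H_4 = x^4 - 6x^2 + 3
assert resultant(H[4], H[2]) == 4                 # H_2 = x^2 - 1, no common root
assert resultant(H[3], H[1]) == 0                 # common root x = 0
assert resultant(H[5], H[3]) == 0
```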
Example 4.5
Using Theorem 3.1 we prove a resultant formula for the sequence of polynomials with combinatorial coefficients. For given \(a\in \mathbb {N}\) and \(n\in \mathbb {N}\) we consider the polynomial
The sequence \((V_{n}(x))_{n\in \mathbb {N}}\) is not a sequence of orthogonal polynomials. It is clear that \(V_{0}(x)=1\), \(V_{1}(x)=2(x+1)\). For \(n\ge 2\), the recurrence relation
holds. This relation can be proved easily by induction on n with the help of the recurrence satisfied by the sequence of central binomial coefficients \((\left( {\begin{array}{c}2n\\ n\end{array}}\right) )_{n\in \mathbb {N}}\). We omit the details. Now in order to get the formula for \({\text {Res}}(V_{n}(x),V_{n-1}(x))\) it is enough to apply Theorem 3.1 with
After necessary simplifications we get the formula
Note that the sequence of polynomials \((V_{n}(x))_{n\in \mathbb {N}}\) (or to be more precise: the recurrence relation defining the sequence) satisfies also the assumption of Theorem 4.1. Thus, one can also compute the value of the resultant \({\text {Res}}(V_{n}(x),V_{n-2}(x))\).
References
T.M. Apostol, Resultants of cyclotomic polynomials. Proc. Am. Math. Soc. 24, 457–462 (1970)
T.S. Chihara, An Introduction to Orthogonal Polynomials (Dover Books on Mathematics, Mineola, 1978)
K. Dilcher, K.B. Stolarsky, Resultants and discriminants of Chebyshev and related polynomials. Trans. Am. Math. Soc. 357, 965–981 (2005)
J.E. Gishe, M.E.H. Ismail, Resultants of Chebyshev polynomials. Z. Anal. Anwend. 27, 499–508 (2008)
I.M. Gelfand, M.M. Kapranov, A.V. Zelevinsky, Discriminants, Resultants, and Multidimensional Determinants, Mathematics: Theory and Applications (Birkhäuser, Boston, 1994)
M. Mignotte, Mathematics for Computer Algebra (Springer, New York, 1992)
M. Pohst, H. Zassenhaus, Algorithmic Algebraic Number Theory (Cambridge University Press, Cambridge, 1989)
M. Sawa, Y. Uchida, Discriminants of classical quasi-orthogonal polynomials with application to Diophantine equations. J. Math. Soc. Jpn. 71(3), 831–860 (2019)
I. Schur, Affektlose Gleichungen in der Theorie der Laguerreschen und Hermiteschen Polynome. J. Reine Angew. Math. 165, 52–58 (1931)
G. Szegő, Orthogonal Polynomials (4th ed.), American Mathematical Society Colloquium Publications, vol. 23 (American Mathematical Society, Providence, 1975)
Acknowledgements
The author expresses his gratitude to the anonymous referee for constructive suggestions which improved the quality of the paper.
Cite this article
Ulas, M. On a generalization of Schur theorem concerning resultants. Period Math Hung 83, 1–11 (2021). https://doi.org/10.1007/s10998-020-00369-4