1 Introduction

Let \(\mathbb {N}\) denote the set of non-negative integers, \(\mathbb {N}_{+}\) the set of positive integers and, for given \(k\in \mathbb {N}_{+}\), let us write \(\mathbb {N}_{\ge k}\) for the set of positive integers \(\ge k\).

Let K be a field and consider the polynomials \(F, G\in K[x]\). The resultant \({\text {Res}}(F, G)\) of the polynomials F, G is an element of K which carries information about possible common roots. More precisely, \({\text {Res}}(F, G)=0\) if and only if the polynomials F, G have a common factor of positive degree. The computation of resultants is, in general, a difficult task. Of special interest is the computation of resultants of pairs of polynomials which are interesting from either a number theoretic or an analytic point of view. The classical result is the computation of the resultant of two cyclotomic polynomials \(\Phi _{m}, \Phi _{n}\). More precisely, Apostol proved the formula

$$\begin{aligned} {\text {Res}}(\Phi _{m}, \Phi _{n})={\left\{ \begin{array}{ll}\begin{array}{ll} p^{\varphi (n)} &{} \text{ if }\;\frac{m}{n}\;\text{ is } \text{ a } \text{ power } \text{ of } \text{ a } \text{ prime }\;p, \\ 1 &{} \text{ otherwise }, \end{array} \end{array}\right. } \end{aligned}$$

where \(\varphi \) is the Euler phi function [1].
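Apostol's formula is easy to check on small cases. The following sketch (in Python with SymPy, our choice of tool for illustration) verifies two instances:

```python
from sympy import cyclotomic_poly, resultant, symbols, totient

x = symbols('x')

# m/n = 4/2 = 2 is a power of the prime p = 2, so Res(Phi_4, Phi_2) = 2^phi(2) = 2
assert resultant(cyclotomic_poly(4, x), cyclotomic_poly(2, x), x) == 2**totient(2)

# m/n = 3/4 is not a power of a prime, so Res(Phi_3, Phi_4) = 1
assert resultant(cyclotomic_poly(3, x), cyclotomic_poly(4, x), x) == 1
```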

On the other hand, we have a result of Schur which allows the computation of resultants of consecutive terms in the sequence \((r_{n}(x))_{n\in \mathbb {N}}\) of polynomials defined by a linear recurrence of degree two. More precisely, let \(r_{0}(x)=1, r_{1}(x)=a_{1}x+b_{1}\) and define

$$\begin{aligned} r_{n}(x)=(a_{n}x+b_{n})r_{n-1}(x)-c_{n}r_{n-2}(x),\quad n\ge 2, \end{aligned}$$

where \(a_{n}, b_{n}, c_{n}\in \mathbb {C}\) satisfy \(a_{n}c_{n}\ne 0\). Under these assumptions, we have the following compact formula proved by Schur [9] (see also [10, p. 143]):

$$\begin{aligned} {\text {Res}}(r_{n}, r_{n-1})=(-1)^{\frac{n(n-1)}{2}}\prod _{i=1}^{n-1}a_i^{2(n-i)}c_{i+1}^{i}. \end{aligned}$$

In fact, Schur obtained a slightly different result, i.e., he obtained the expression for \(\prod _{i=1}^{n}r_{n-1}(x_{i,n})\), where \(x_{i,n}\) is the ith root of the polynomial \(r_{n}\).
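Schur's formula can also be tested numerically. The sketch below (Python/SymPy; the coefficient values are hypothetical, chosen only for illustration) compares both sides for \(n=3\):

```python
from sympy import expand, resultant, symbols

x = symbols('x')

# Hypothetical data: r_0 = 1, r_1 = 2x + 1, r_n = (a_n x + b_n) r_{n-1} - c_n r_{n-2}
a = {1: 2, 2: 1, 3: 1}
b = {1: 1, 2: 0, 3: 1}
c = {2: 3, 3: 2}

r = [1, a[1]*x + b[1]]
for n in (2, 3):
    r.append(expand((a[n]*x + b[n])*r[n - 1] - c[n]*r[n - 2]))

n = 3
schur = (-1)**(n*(n - 1)//2)                    # Schur's closed form
for i in range(1, n):
    schur *= a[i]**(2*(n - i)) * c[i + 1]**i

assert resultant(r[n], r[n - 1], x) == schur == -192
```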

The importance of the Schur method lies in its applications to the computation of discriminants of orthogonal polynomials. Indeed, Favard proved that each family of orthogonal polynomials corresponds to the sequence \((r_{n}(x))_{n\in \mathbb {N}}\) for suitably chosen sequences \((a_{n})_{n\in \mathbb {N}}, (b_{n})_{n\in \mathbb {N}}\) and \((c_{n})_{n\in \mathbb {N}}\) (for the proof of this important theorem see [2, Theorem 4.4]). Computations of discriminants of certain classes of orthogonal polynomials can be found in [10, Theorem 6.71].

The method of Schur was generalized by Gishe and Ismail [4]. As an application, the authors reproved and generalized the result of Dilcher and Stolarsky from [3] concerning the resultant of certain linear combinations of Chebyshev polynomials of the first and the second kind. All these results were recently extended by Sawa and Uchida [8, Theorem 3.1] by a clever application of the Schur method. However, all the mentioned results rely on a strong assumption on the considered sequences of polynomials, namely that the degree of the nth term equals n. Thus, it is natural to ask whether the method of Schur can be generalized to other families of recursively defined polynomials. Of special interest is the situation when the polynomial multiplying \(r_{n-1}\) in the recurrence defining the sequence \((r_{n}(x))_{n\in \mathbb {N}}\) is of degree \(\ge 2\). Moreover, one can ask whether the initial polynomials \(r_{0}, r_{1}\) can have degrees not necessarily equal to 0 and 1, respectively. The aim of this note is to offer such a generalization and apply it to get some new resultant formulas. For the precise statement of our generalization and the main result, we refer the reader to Sect. 3.

Let us describe the content of the paper in some detail. In Sect. 2 we present a reminder of basic properties of the notion of resultant. In Sect. 3 we prove the main result of the paper, i.e., the expression of the resultant of consecutive terms of the sequence \((r_{A,n})_{n\in \mathbb {N}}\) (Theorem 3.1). Finally, in the last section, we apply our main result to present some applications. In particular, under some mild assumptions on the coefficients of the recurrence defining the sequence \((r_{A,n})_{n\in \mathbb {N}}\), we present the expression for the resultant of the polynomials \(r_{A,n}, r_{A,n-2}\).

2 Reminder of basic properties of resultants

Let K be a field and consider the polynomials \(F, G\in K[x]\) given by

$$\begin{aligned} \begin{aligned} F(x)&=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0},\\ G(x)&=b_{m}x^{m}+b_{m-1}x^{m-1}+\cdots +b_{1}x+b_{0}. \end{aligned} \end{aligned}$$
(2.1)

The resultant of the polynomials F, G is defined as

$$\begin{aligned} {\text {Res}}(F,G)=a_{n}^{m}b_{m}^{n}\prod _{i=1}^{n}\prod _{j=1}^{m}(\alpha _{i}-\beta _{j}), \end{aligned}$$

where \(\alpha _{1},\ldots ,\alpha _{n}\) and \(\beta _{1},\ldots ,\beta _{m}\) are the roots of F and G respectively (viewed in an appropriate field extension of K). There is an alternative formula in terms of a certain determinant. More precisely, \({\text {Res}}(F,G)\) is the element of K given by the determinant of the \((m+n)\times (m+n)\) Sylvester matrix

$$\begin{aligned} \left( \begin{array}{ccccccc} a_{n} &{} a_{n-1} &{} a_{n-2} &{} \ldots &{} 0 &{} 0 &{} 0 \\ 0 &{} a_{n} &{} a_{n-1} &{} \ldots &{} 0 &{} 0 &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} &{} \vdots &{} \vdots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} a_{1} &{} a_{0} &{} 0 \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 &{} a_{1} &{} a_{0} \\ b_{m} &{} b_{m-1} &{} b_{m-2} &{} \ldots &{} 0 &{} 0 &{} 0 \\ 0 &{} b_{m} &{} b_{m-1} &{} \ldots &{} 0 &{} 0 &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} &{} \vdots &{} \vdots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} b_{1} &{} b_{0} &{} 0 \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 &{} b_{1} &{} b_{0} \\ \end{array} \right) . \end{aligned}$$

The expression of a resultant as the determinant of the Sylvester matrix allows one to consider it for polynomials with coefficients in commutative rings (even with zero divisors). However, in the sequel we concentrate on the case when the considered polynomials have coefficients in a field K.
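As an illustration, the following Python/SymPy sketch (with two hypothetical test polynomials) builds the Sylvester matrix directly and compares its determinant with the built-in resultant:

```python
from sympy import Matrix, Poly, resultant, symbols

x = symbols('x')
f, g = 2*x**3 + 3*x - 1, x**2 - 4

def sylvester(F, G):
    # m shifted rows of F's coefficients followed by n shifted rows of G's,
    # where n = deg F and m = deg G, giving an (m + n) x (m + n) matrix.
    n, m = F.degree(), G.degree()
    fc, gc = F.all_coeffs(), G.all_coeffs()
    rows = [[0]*i + fc + [0]*(m - 1 - i) for i in range(m)]
    rows += [[0]*i + gc + [0]*(n - 1 - i) for i in range(n)]
    return Matrix(rows)

det = sylvester(Poly(f, x), Poly(g, x)).det()
assert det == resultant(f, g, x) == -483
```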

We collect basic properties of the resultant of the polynomials F, G:

$$\begin{aligned} {\text {Res}}(F,G)&=a_{n}^{m}\prod _{i=1}^{n}G(\alpha _{i})=b_{m}^{n}\prod _{i=1}^{m}F(\beta _{i}), \end{aligned}$$
(2.2)
$$\begin{aligned} {\text {Res}}(F,G)&=(-1)^{nm}{\text {Res}}(G,F), \end{aligned}$$
(2.3)
$$\begin{aligned} {\text {Res}}(F,G_{1}G_{2})&={\text {Res}}(F,G_{1}){\text {Res}}(F,G_{2}). \end{aligned}$$
(2.4)

Moreover, if \(F(x)=a_{0}\) is a constant polynomial then, unless \(F=G=0\), we have

$$\begin{aligned} {\text {Res}}(F,G)={\text {Res}}(a_{0},G)={\text {Res}}(G,a_{0})=a_{0}^{m}. \end{aligned}$$
(2.5)

The proofs of the above properties can be found in [6, Chapter 3]. Finally, we recall an important result concerning the formula for the resultant of the polynomials G and F, provided that \(F(x)=q(x)G(x)+r(x)\). More precisely, we have the following.

Lemma 2.1

Let \(F, G\in K[x]\) be given by (2.1) and suppose that \(F(x)=q(x)G(x)+r(x)\) for some \(q, r\in K[x]\). Then we have the formula

$$\begin{aligned} {\text {Res}}(G,F)=b_{m}^{{\text {deg}}F-{\text {deg}}r}{\text {Res}}(G,r). \end{aligned}$$

The proof of the above lemma can be found in [7] (see also [3]).
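The lemma is easy to test on a random instance. In the Python/SymPy sketch below (with hypothetical polynomials) we have \(b_{m}=2\), \({\text {deg}}F=3\) and \({\text {deg}}r=1\):

```python
from sympy import LC, degree, div, expand, resultant, symbols

x = symbols('x')
G = 2*x**2 + x + 3
F = expand((x - 1)*G + (4*x - 7))          # F = q*G + r by construction

q, r = div(F, G, x)                        # division recovers q = x - 1, r = 4x - 7
assert q == x - 1
lhs = resultant(G, F, x)
rhs = LC(G, x)**(degree(F, x) - degree(r, x)) * resultant(G, r, x)
assert lhs == rhs == 696                   # both sides equal 2^2 * Res(G, 4x - 7)
```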

For possible generalizations of the notion of resultant to polynomials in several variables, we refer the reader to [5].

3 Generalization of Schur theorem

In this section we state and prove the main result of this paper: a generalization of the Schur theorem. Let K be a field. We define the set

$$\begin{aligned} {\mathcal {A}}:=\{(i,j,k,m)\in \mathbb {N}^{4}:\;i\le j\;\text{ and }\;m\le k\} \end{aligned}$$

and for given \(A\in {\mathcal {A}}\) we consider the sequence of polynomials \((r_{A,n}(x))_{n\in \mathbb {N}}\) defined in the following way:

$$\begin{aligned} r_{A,0}(x)&=\sum _{s=0}^{i}p_{s}x^{s},\quad r_{A,1}(x)=\sum _{s=0}^{j}q_{s}x^{s},\\ r_{A,n}(x)&=f_{n}(x)r_{A,n-1}(x)-v_{n}x^{m}r_{A,n-2}(x)\quad \text{ for }\quad n\ge 2, \end{aligned}$$

where

$$\begin{aligned} f_{n}(x)=\sum _{s=0}^{k}a_{n,s}x^{s}. \end{aligned}$$

We assume that \(p_{s}, q_{s}, v_{n}, a_{n,s}\in K\) (in the appropriate range of the parameters s, n) and \(p_{i}q_{j}a_{n,k}\ne 0\) for each \(n\in \mathbb {N}_{\ge 2}\). Moreover, we assume that \(a_{2,k}q_{i}-v_{2}p_{i}\ne 0\) for the given i, k. In other words, \({\text {deg}}r_{A,0}=i, {\text {deg}}r_{A,1}=j\) and \({\text {deg}}f_{n}=k\) for each \(n\in \mathbb {N}_{\ge 2}\).

Theorem 3.1

Under the above assumptions on \(A, r_{A,0}, r_{A,1}\) and \(f_{n}\) for \(n\in \mathbb {N}_{\ge 2}\) we have the following formula

$$\begin{aligned} {\text {Res}}&(r_{A,n}(x),r_{A,n-1}(x))\\&=(-1)^{\sum _{u=2}^{n}e_{A}(u)}T_{A}^{(2k-m)(n-2)}q_{0}^{m(n-1)}q_{j}^{k+j-m-i}\left( \prod _{u=0}^{n-2}v_{u+2}^{uk+j}\right) \times \\&\quad \left( \prod _{s=1}^{n-1}a_{s+1,0}^{m(n-s-1)}a_{s+1,k}^{(2k-m)(n-s-1)}\right) {\text {Res}}(r_{A,1}(x),r_{A,0}(x)), \end{aligned}$$

where \(e_{A}(u)=((u-2)k+j)((u-1)k+j+1)\) and

$$\begin{aligned} T_{A}={\left\{ \begin{array}{ll}\begin{array}{lll} q_{j}, &{} &{}\text{ if }\;i<j \vee (i=j \wedge m<k), \\ \frac{a_{2,k}q_{i}-v_{2}p_{i}}{a_{2,k}}, &{} &{}\text{ if }\;i=j \wedge m=k. \end{array}\end{array}\right. } \end{aligned}$$

Proof

For \(n\in \mathbb {N}_{\ge 2}\) we write \(R_{n}={\text {Res}}(r_{A,n},r_{A,n-1})\). First of all, note that from the assumptions on i, j, k, m, the assumption \(a_{2,k}q_{i}-v_{2}p_{i}\ne 0\) and a simple use of the recurrence relation defining the sequence \((r_{A,n})_{n\in \mathbb {N}}\), we immediately see that the leading term \(L_{n}\) of the polynomial \(r_{A,n}\) is given by

$$\begin{aligned} L_{n}={\left\{ \begin{array}{ll}\begin{array}{lll} q_{j}\prod _{s=1}^{n-1}a_{s+1,k}, &{} &{}\text{ if }\;i<j \vee (i=j \wedge m<k), \\ T_{A}\prod _{s=1}^{n-1}a_{s+1,k}, &{} &{}\text{ if }\;i=j \wedge m=k, \end{array} \end{array}\right. } \end{aligned}$$

and it is non-zero. In consequence, we see that

$$\begin{aligned} {\text {deg}}r_{A,n}=(n-1)k+j. \end{aligned}$$

In order to give the value of the constant term, say \(C_{n}\), of \(r_{A,n}\), i.e., the value \(r_{A,n}(0)\), we consider two cases: \(m>0\) and \(m=0\). If \(m>0\), then by a simple induction one can prove that

$$\begin{aligned} C_{n}=r_{A,n}(0)=q_{0}\prod _{s=1}^{n-1}a_{s+1,0}. \end{aligned}$$

If \(m=0\), then the value \(C_{n}=r_{A,n}(0)\) satisfies the recurrence relation \(C_{n}=a_{n,0}C_{n-1}-v_{n}C_{n-2}\). In the generality we are dealing with here, we cannot give an exact form of \(C_{n}\) and in fact we will not need it.
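Both observations are easy to confirm symbolically. The sketch below (Python/SymPy; the concrete coefficients are hypothetical) iterates the recurrence for \(A=(0,1,2,1)\) and checks \({\text {deg}}r_{A,n}=(n-1)k+j\) together with the product formula for \(C_{n}\):

```python
from sympy import degree, expand, symbols

x = symbols('x')

# Hypothetical instance with A = (i, j, k, m) = (0, 1, 2, 1)
j, k = 1, 2
r = [1, 2*x + 3]                                     # r_{A,0} = 1, r_{A,1} = 2x + 3
f = {2: x**2 + 2*x + 5, 3: 4*x**2 + 1, 4: x**2 + x + 2}
v = {2: 7, 3: 2, 4: 11}
for n in (2, 3, 4):
    r.append(expand(f[n]*r[n - 1] - v[n]*x*r[n - 2]))

const = 3                                            # q_0
for n in (2, 3, 4):
    assert degree(r[n], x) == (n - 1)*k + j          # deg r_{A,n} = (n-1)k + j
    const *= f[n].subs(x, 0)                         # C_n = q_0 * a_{2,0} * ... * a_{n,0}
    assert r[n].subs(x, 0) == const
```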

We are ready to prove our theorem. However, in order to simplify the proof a bit, we first compute the resultant of the polynomials \(r_{A,2}(x), r_{A,1}(x)\). We have the following chain of equalities

$$\begin{aligned} \begin{array}{lll} R_{2}&{}={\text {Res}}(r_{A,2},r_{A,1})={\text {Res}}(f_{2}r_{A,1}-v_{2}x^{m}r_{A,0},r_{A,1}) &{} \\ &{}=(-1)^{j(k+j)}{\text {Res}}(r_{A,1},f_{2}r_{A,1}-v_{2}x^{m}r_{A,0}) &{} \text{ by }\;(2.3) \\ &{}=(-1)^{j(k+j)}q_{j}^{k+j-(m+i)}{\text {Res}}(r_{A,1},-v_{2}x^{m}r_{A,0}) &{} \text{ by } \text{ Lemma }\; 2.1 \\ &{}=(-1)^{j(k+j)}q_{j}^{k+j-(m+i)}(-v_{2})^{j}{\text {Res}}(r_{A,1},x)^{m}{\text {Res}}(r_{A,1},r_{A,0}) &{} \text{ by }\;(2.4) \\ &{}=(-1)^{j(k+j+1)}v_{2}^{j}q_{0}^{m}q_{j}^{k+j-m-i}{\text {Res}}(r_{A,1},r_{A,0}), &{} \text{ by }\;(2.2), (2.5) \end{array} \end{aligned}$$

where in the last equality we used the identity \({\text {Res}}(r_{A,1},x)=r_{A,1}(0)=q_{0}\).

Now let us assume that \(n\ge 3\) and consider the polynomials \(r_{A,n}(x), r_{A,n-1}(x)\). We have the following chain of equalities:

$$\begin{aligned} \begin{array}{lll} R_{n}&{}={\text {Res}}(r_{A,n},r_{A,n-1})={\text {Res}}(f_{n}r_{A,n-1}-v_{n}x^{m}r_{A,n-2},r_{A,n-1}) &{} \\ &{}=(-1)^{((n-1)k+j)((n-2)k+j)}{\text {Res}}(r_{A,n-1},f_{n}r_{A,n-1}-v_{n}x^{m}r_{A,n-2}) &{} \text{ by }\;(2.3)\\ &{}=(-1)^{((n-1)k+j)((n-2)k+j)}L_{n-1}^{2k-m}{\text {Res}}(r_{A,n-1},-v_{n}x^{m}r_{A,n-2}) &{} \text{ by } \text{ Lemma }\; 2.1 \\ &{}=(-1)^{((n-1)k+j)((n-2)k+j)}L_{n-1}^{2k-m}{\text {Res}}(r_{A,n-1},-v_{n}x^{m}){\text {Res}}(r_{A,n-1},r_{A,n-2})&{} \text{ by }\;(2.4)\\ &{}=(-1)^{((n-1)k+j)((n-2)k+j)}L_{n-1}^{2k-m}(-v_{n})^{(n-2)k+j}r_{A,n-1}(0)^{m}R_{n-1} &{} \text{ by }\;(2.2), (2.5)\\ &{}=(-1)^{e_{A}(n)}v_{n}^{(n-2)k+j}L_{n-1}^{2k-m}C_{n-1}^{m}R_{n-1}. \end{array} \end{aligned}$$

Note that the first five equalities are true for all \(m\in \mathbb {N}\), not only for \(m>0\). We will need this observation later.

If \(m>0\), then from the above computations we obtain a recurrence relation for the value of \(R_{n}={\text {Res}}(r_{A,n},r_{A,n-1})\). More precisely, we have

$$\begin{aligned} R_{n}=(-1)^{e_{A}(n)}v_{n}^{(n-2)k+j}L_{n-1}^{2k-m}C_{n-1}^{m}R_{n-1}. \end{aligned}$$

We consider the case \(i<j \vee (i=j \wedge m<k)\) first. By a simple iteration of the above recurrence, together with the expression for \(R_{2}\), we obtain the formula

$$\begin{aligned} R_{n}&=(-1)^{\sum _{u=3}^{n}e_{A}(u)}\left( \prod _{u=3}^{n}v_{u}^{(u-2)k+j}\right) \left( \prod _{u=2}^{n-1}L_{u}^{2k-m}C_{u}^{m}\right) R_{2}\\&=(-1)^{\sum _{u=2}^{n}e_{A}(u)}v_{2}^{j}q_{0}^{m}q_{j}^{k+j-m-i}\left( \prod _{u=3}^{n}v_{u}^{(u-2)k+j}\right) \\&\quad \times \left[ \prod _{u=2}^{n-1}\left( q_{0}^{m}q_{j}^{2k-m}\prod _{s=1}^{u-1}a_{s+1,0}^{m}a_{s+1,k}^{2k-m}\right) \right] \times R_{1}. \end{aligned}$$

We note the identity

$$\begin{aligned} \prod _{u=2}^{n-1}\prod _{s=1}^{u-1}a_{s+1,0}^{m}a_{s+1,k}^{2k-m}=\prod _{s=1}^{n-1}a_{s+1,0}^{m(n-s-1)}a_{s+1,k}^{(2k-m)(n-s-1)}, \end{aligned}$$

and after simplification of the resulting expression we get the first formula from the statement of our theorem with \(T_{A}=q_{j}\).

Performing exactly the same reasoning as above, we get the formula from the statement in the case when \(i=j\) and \(m=k\) with \(T_{A}=(a_{2,k}q_{i}-v_{2}p_{i})/a_{2,k}\).

Let us go back to the case \(m=0\). We put \(R_{n}'={\text {Res}}(r_{A,n}(x),r_{A,n-1}(x))\). First of all, let us note that performing exactly the same reasoning as in the computation of \(R_{2}\) in the case \(m>0\), we easily get the equality

$$\begin{aligned} R_{2}'=(-1)^{j(k+j+1)}v_{2}^{j}q_{j}^{k+j-i}{\text {Res}}(r_{A,1},r_{A,0}). \end{aligned}$$

Note that \(R_{2}'\) is equal to \(R_{2}\) with m replaced by 0.

Let \(n\ge 3\). In order to find a recurrence relation for \(R_{n}'\), we follow exactly the same approach as in the case of \(R_{n}\). In particular, we have

$$\begin{aligned} \begin{array}{lll} R_{n}'&{}=(-1)^{((n-1)k+j)((n-2)k+j)}L_{n-1}^{2k}{\text {Res}}(r_{A,n-1},-v_{n}){\text {Res}}(r_{A,n-1},r_{A,n-2}) &{}\\ &{}=(-1)^{((n-1)k+j)((n-2)k+j)}L_{n-1}^{2k}(-v_{n})^{(n-2)k+j}R_{n-1}' &{} \text{ by }\;(2.5)\\ &{}=(-1)^{e_{A}(n)}v_{n}^{(n-2)k+j}L_{n-1}^{2k}R_{n-1}'. \end{array} \end{aligned}$$

Again, from our reasoning, we see that \(R_{n}'\) is equal to \(R_{n}\) with m replaced by 0, where we take into account the convention that \(r_{A,n-1}(0)^{0}=1\) for any value of \(r_{A,n-1}(0)\). In particular, we allow \(r_{A,n-1}(0)\) to be 0.

Summing up, our formula for \({\text {Res}}(r_{A,n},r_{A,n-1})\) from the statement of our theorem holds for each \(m\in \mathbb {N}\). \(\square \)
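As a sanity check of Theorem 3.1, the following Python/SymPy sketch evaluates a hypothetical instance with \(A=(0,1,2,0)\) and \(n=3\); here every \(e_{A}(u)\) is even, \(T_{A}=q_{j}=1\), and all the q- and a-factors equal 1:

```python
from sympy import expand, resultant, symbols

x = symbols('x')

# Hypothetical instance: r_0 = 1, r_1 = x + 1, f_2 = x^2 + 1, f_3 = x^2 + x,
# v_2 = 2, v_3 = 3, so A = (i, j, k, m) = (0, 1, 2, 0).
r0, r1 = 1, x + 1
r2 = expand((x**2 + 1)*r1 - 2*r0)        # x^3 + x^2 + x - 1
r3 = expand((x**2 + x)*r2 - 3*r1)        # x^5 + 2x^4 + 2x^3 - 4x - 3

# The formula predicts Res(r_3, r_2) = v_2 * v_3^3 * Res(r_1, r_0) = 2 * 27 * 1 = 54.
assert resultant(r3, r2, x) == 54
```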

Remark 3.2

The formula for \({\text {Res}}(r_{A,n},r_{A,n-1})\) presented in Theorem 3.1 is not the most general one. Indeed, one can consider a slightly more general recurrence and obtain a similar result. More precisely, for given \(A\in {\mathcal {A}}\) one can consider the sequence \((g_{A,n}(x))_{n\in \mathbb {N}}\) defined in the following way:

$$\begin{aligned} g_{A,0}(x)&=\sum _{s=0}^{i}p_{s}x^{s},\quad g_{A,1}(x)=\sum _{s=0}^{j}q_{s}x^{s},\\ g_{A,n}(x)&=f_{n}(x)g_{A,n-1}(x)-v_{n}h(x)g_{A,n-2}(x)\quad \text{ for }\quad n\ge 2, \end{aligned}$$

where \(p_{i}q_{j}\ne 0\) and

$$\begin{aligned} f_{n}(x)=\sum _{s=0}^{k}a_{n,s}x^{s},\quad h(x)=\sum _{s=0}^{m}b_{s}x^{s}, \end{aligned}$$

where \(a_{n,k}b_{m}\ne 0\) for each \(n\in \mathbb {N}_{\ge 2}\). In particular, h is fixed and does not depend on n. Moreover, in order to guarantee the good behavior of the degree of the polynomial \(g_{A,n}\), we need to assume \(a_{2,k}q_{i}-v_{2}b_{m}p_{i}\ne 0\) for the given k, i, m. With the above definitions and assumptions, we get the equalities \({\text {deg}}g_{A,0}=i, {\text {deg}}g_{A,1}=j\), and for \(n\ge 2\) we have \({\text {deg}}g_{A,n}=(n-1)k+j\). Thus we see that the leading term \(L_{A,n}\) of the polynomial \(g_{A,n}\) has the form:

$$\begin{aligned} L_{A,n}=T_{A}\prod _{s=1}^{n-1}a_{s+1,k}, \end{aligned}$$

where

$$\begin{aligned} T_{A}={\left\{ \begin{array}{ll}\begin{array}{lll} q_{j}, &{} &{}\text{ if }\;i<j \vee (i=j \wedge m<k), \\ \frac{a_{2,k}q_{i}-v_{2}b_{m}p_{i}}{a_{2,k}}, &{} &{}\text{ if }\;i=j \wedge m=k. \end{array}\end{array}\right. } \end{aligned}$$

Now, if we put \(G_{n}={\text {Res}}(g_{A,n},g_{A,n-1})\) then, using essentially the same reasoning as in the proof of Theorem 3.1, we get the recurrence relation for the sequence \((G_{n})_{n\in \mathbb {N}_{+}}\) in the form:

$$\begin{aligned} G_{n}=(-1)^{e_{A}(n)}v_{n}^{(n-2)k+j}L_{A,n-1}^{2k-m}{\text {Res}}(g_{A,n-1},h)G_{n-1}. \end{aligned}$$

By independent computation we get the equality

$$\begin{aligned} G_{2}=(-1)^{e_{A}(2)}v_{2}^{j}q_{j}^{k+j-m-i}{\text {Res}}(g_{A,1},h)G_{1}, \end{aligned}$$

and the explicit formula

$$\begin{aligned} G_{n}&=(-1)^{\sum _{u=2}^{n}e_{A}(u)}q_{j}^{k+j-m-i}T_{A}^{(n-2)(2k-m)}\\&\quad \times \left( \prod _{u=0}^{n-2}v_{u+2}^{uk+j}\right) \left( \prod _{s=1}^{n-1}a_{s+1,k}^{(2k-m)(n-s-1)}{\text {Res}}(g_{A,s},h)\right) G_{1}. \end{aligned}$$

However, in order to compute \({\text {Res}}(g_{A,n},g_{A,n-1})\) with the help of the above formula, we need to know the value of \({\text {Res}}(g_{A,s},h)\) for each \(s=1,\ldots ,n-1\), which in general is a difficult task (due to the complicated and essentially unknown form of the coefficients of \(g_{A,s}\)). We have a simple expression for \({\text {Res}}(g_{A,s},h)\) only in the case when \(h(x)=x^{m}\). This is exactly the case presented in Theorem 3.1.
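For a concrete illustration of this recurrence, the sketch below (Python/SymPy, hypothetical data with \(A=(0,1,1,1)\) and \(h(x)=x+2\)) checks the displayed formula for \(G_{2}\):

```python
from sympy import expand, resultant, symbols

x = symbols('x')

# Hypothetical instance: g_0 = 1, g_1 = 2x + 1, f_2 = x + 3, h = x + 2, v_2 = 5.
g0, g1, h, v2 = 1, 2*x + 1, x + 2, 5
g2 = expand((x + 3)*g1 - v2*h*g0)        # 2x^2 + 2x - 7

# Here e_A(2) = j(k + j + 1) = 3, so the formula for G_2 reads
# G_2 = -v_2^j * q_j^{k+j-m-i} * Res(g_1, h) * G_1 = -5 * 2 * 3 * 1 = -30.
assert resultant(g1, h, x) == 3
assert resultant(g2, g1, x) == -30
```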

4 Applications

In this section we offer some applications of Theorem 3.1. We consider the sequence \((r_{n})_{n\in \mathbb {N}}\) governed by the recurrence \(r_{0}(x)=1, r_{1}(x)=a_{1}x+b_{1}\) and

$$\begin{aligned} r_{n}(x)=(a_{n}x+b_{n})r_{n-1}(x)-c_{n}x^{m}r_{n-2}(x), n\ge 2, \end{aligned}$$
(4.1)

where \(a_{n}, b_{n}, c_{n}\in K\) and \(a_{n}c_{n}\ne 0\) for \(n\in \mathbb {N}_{+}\) and \(m\in \{0,1\}\). For \(m=0\) we get the recurrence considered by Schur. In this case the result of Schur gives the expression for the resultant of the polynomials \(r_{n}\) and \(r_{n-1}\). Now, we show that under some assumptions on the sequences \((a_{n})_{n\in \mathbb {N}_{+}}, (b_{n})_{n\in \mathbb {N}_{+}}\) one can get a nice expression for the resultant of the polynomials \(r_{n}, r_{n-2}\). More precisely, we prove the following.

Theorem 4.1

Let \(m\in \{0,1\}\). Let \(a_{n}, b_{n}, c_{n}\in K\) for \(n\in \mathbb {N}_{+}\) and suppose that \(a_{n}c_{n}\ne 0\). Let us consider the sequence of polynomials \((r_{n}(x))_{n\in \mathbb {N}}\) defined by (4.1) and suppose that for each \(n\ge 2\) we have \(a_{n-2}b_{n}=a_{n}b_{n-2}\). Moreover, let us put \(d_{n}=\frac{a_{n}}{a_{n-2}}\). Then, if \(m=0\) the following formulas hold:

$$\begin{aligned} {\text {Res}}(r_{2n},r_{2(n-1)})&=\prod _{i=1}^{n-1}(a_{2i-1}a_{2i})^{4(n-i)}(c_{2i}c_{2i+1}d_{2i+2})^{2i},\\ {\text {Res}}\left( \frac{r_{2n+1}}{a_{1}x+b_{1}},\frac{r_{2n-1}}{a_{1}x+b_{1}}\right)&=\prod _{i=1}^{n-1}(a_{2i}a_{2i+1})^{4(n-i)}(c_{2i+1}c_{2i+2}d_{2i+3})^{2i}. \end{aligned}$$

If \(m=1\) we have the following formulas:

$$\begin{aligned} {\text {Res}}(r_{2n},r_{2(n-1)})&=\prod _{i=1}^{n-1}(a_{2i-1}a_{2i}b_{2i-1}b_{2i})^{2(n-i)}(c_{2i}c_{2i+1}d_{2i+2})^{2i},\\ {\text {Res}}\left( \frac{r_{2n+1}}{a_{1}x+b_{1}},\frac{r_{2n-1}}{a_{1}x+b_{1}}\right)&=\prod _{i=1}^{n-1}(a_{2i}a_{2i+1}b_{2i}b_{2i+1})^{2(n-i)}(c_{2i+1}c_{2i+2}d_{2i+3})^{2i}. \end{aligned}$$

Proof

In order to apply Theorem 3.1 for computation of \({\text {Res}}(r_{2n},r_{2(n-1)})\) and \({\text {Res}}(r_{2n+1},r_{2n-1})\) we need to express \(r_{n}\) in terms of \(r_{n-2}\) and \(r_{n-4}\). First, solving (4.1) with respect to \(r_{n-1}\) we get

$$\begin{aligned} r_{n-1}&=\frac{1}{a_{n}x+b_{n}}(r_{n}+c_{n}x^{m}r_{n-2}),\\ r_{n-3}&=\frac{1}{a_{n-2}x+b_{n-2}}(r_{n-2}+c_{n-2}x^{m}r_{n-4}). \end{aligned}$$

Next, from the relation (4.1) with n replaced by \(n-1\) and the above expressions we get

$$\begin{aligned} \begin{aligned} \frac{1}{a_{n}x+b_{n}}(&r_{n}+c_{n}x^{m}r_{n-2})\\&=(a_{n-1}x+b_{n-1})r_{n-2}-\frac{c_{n-1}x^{m}}{a_{n-2}x+b_{n-2}}(r_{n-2}+c_{n-2}x^{m}r_{n-4}). \end{aligned} \end{aligned}$$
(4.2)

Observe now that the condition \(a_{n}b_{n-2}=a_{n-2}b_{n}\) implies that the expression

$$\begin{aligned} \frac{a_{n}x+b_{n}}{a_{n-2}x+b_{n-2}}=\frac{a_{n}(a_{n}x+b_{n})}{a_{n}a_{n-2}x+a_{n}b_{n-2}}=\frac{a_{n}(a_{n}x+b_{n})}{a_{n-2}(a_{n}x+b_{n})}=\frac{a_{n}}{a_{n-2}}=d_{n} \end{aligned}$$

does not depend on x. Thus, the relation (4.2) can be rewritten in the following equivalent form

$$\begin{aligned} r_{n}=h_{n}(x)r_{n-2}-c_{n-1}c_{n-2}d_{n}x^{2m}r_{n-4}, \end{aligned}$$
(4.3)

where

$$\begin{aligned} h_{n}(x)=a_{n-1}a_{n}x^2+(a_{n}b_{n-1}+a_{n-1}b_{n})x-(c_{n}+c_{n-1}d_{n})x^{m}+b_{n-1}b_{n}. \end{aligned}$$

First we consider the case \(m=0\). Having the above recurrence relation (4.3) it is an easy task to get the expression for \({\text {Res}}(r_{2n},r_{2(n-1)})\). Indeed, we replace n by 2n and apply Theorem 3.1 to the polynomial \(r_{A,n}(x):=r_{2n}(x), n\in \mathbb {N}\), with

$$\begin{aligned} A=(i,j,k,m)=(0,2,2,0),\; f_{n}(x)=h_{2n}(x),\; v_{n}=c_{2n-1}c_{2n-2}d_{2n},\; T_{A}=a_{1}a_{2}. \end{aligned}$$

After necessary simplifications we get the expression from the statement of the theorem.

Next, we note that \(r_{1}(x)=a_{1}x+b_{1}\) and from the identity \(a_{1}b_{3}=a_{3}b_{1}\) we get \(r_{3}(x)\equiv 0\pmod {a_{1}x+b_{1}}\). In consequence, from the relation (4.3) we immediately get that \(r_{2n+1}\equiv 0\pmod {a_{1}x+b_{1}}\) for each \(n\in \mathbb {N}\). Thus, in order to apply Theorem 3.1 we write \(r_{A,n}(x):=\frac{r_{2n+1}(x)}{a_{1}x+b_{1}}\) for \(n\in \mathbb {N}\) with

$$\begin{aligned} A=(i,j,k,m)=(0,2,2,0),\; f_{n}(x)=h_{2n+1}(x),\; v_{n}=c_{2n-1}c_{2n}d_{2n+1}, \; T_{A}=a_{2}a_{3}. \end{aligned}$$

After necessary simplifications we get the first part of our theorem.

In the case \(m=1\) we perform exactly the same reasoning. We replace n by 2n and apply Theorem 3.1 to the polynomial \(r_{A,n}(x):=r_{2n}(x), n\in \mathbb {N}\), with

$$\begin{aligned} A=(i,j,k,m)=(0,2,2,2),\; f_{n}(x)=h_{2n}(x),\; v_{n}=c_{2n-1}c_{2n-2}d_{2n},\; T_{A}=a_{1}a_{2}. \end{aligned}$$

Finally, in order to consider the last formula from the statement of our theorem, we note that \(r_{1}(x)=a_{1}x+b_{1}\) and from the identity \(a_{1}b_{3}=a_{3}b_{1}\) we get \(r_{3}(x)\equiv 0\pmod {a_{1}x+b_{1}}\). In consequence, from the relation (4.3) we immediately get that \(r_{2n+1}\equiv 0\pmod {a_{1}x+b_{1}}\) for each \(n\in \mathbb {N}\). Thus, in order to apply Theorem 3.1 we write \(r_{A,n}(x):=\frac{r_{2n+1}(x)}{a_{1}x+b_{1}}\) for \(n\in \mathbb {N}\) with

$$\begin{aligned} A=(i,j,k,m)=(0,2,2,2),\; f_{n}(x)=h_{2n+1}(x),\; v_{n}=c_{2n-1}c_{2n}d_{2n+1}, \; T_{A}=a_{2}a_{3}. \end{aligned}$$

After necessary simplifications we get our last formula. \(\square \)
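A quick numerical test of the first \(m=0\) formula (Python/SymPy sketch; the coefficients are hypothetical, chosen with \(b_{n}=a_{n}\) so that \(a_{n-2}b_{n}=a_{n}b_{n-2}\) holds automatically):

```python
from sympy import Rational, expand, resultant, symbols

x = symbols('x')

a = {1: 1, 2: 2, 3: 1, 4: 3}             # hypothetical values; we take b_n = a_n
c = {2: 1, 3: 2, 4: 1}
r = [1, a[1]*(x + 1)]
for n in (2, 3, 4):
    r.append(expand(a[n]*(x + 1)*r[n - 1] - c[n]*r[n - 2]))

# First m = 0 formula with n = 2: Res(r_4, r_2) = (a_1 a_2)^4 (c_2 c_3 d_4)^2 = 16 * 9
d4 = Rational(a[4], a[2])
assert resultant(r[4], r[2], x) == (a[1]*a[2])**4 * (c[2]*c[3]*d4)**2 == 144
```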

Remark 4.2

The condition saying that \(a_{n}b_{n-2}=a_{n-2}b_{n}\) for \(n\in \mathbb {N}_{\ge 3}\) seems to be quite strong. However, it is clear that for \(b_{n}=0\) this condition is satisfied. Notice that in this case we deal with an important class of orthogonal polynomials: those corresponding to symmetric moment functionals. We recall the necessary definitions. Let \((\mu _{n})_{n\in \mathbb {N}}\) be a sequence of complex numbers and let \({\mathcal {L}}\) be a complex-valued function defined on \(\mathbb {C}[x]\) satisfying the conditions

$$\begin{aligned} {\mathcal {L}}[x^{n}]=\mu _{n},\quad {\mathcal {L}}[\alpha _{1}F_{1}(x)+\alpha _{2}F_{2}(x)]=\alpha _{1}{\mathcal {L}}[F_{1}(x)]+\alpha _{2}{\mathcal {L}}[F_{2}(x)], \end{aligned}$$

for each \(n\in \mathbb {N}\) and \(\alpha _{1}, \alpha _{2}\in \mathbb {C}\).

The moment functional \({\mathcal {L}}\) is used in the definition of orthogonal polynomials. Indeed, the sequence \((Q_{n}(x))_{n\in \mathbb {N}}\) is an orthogonal sequence if:

  1. \({\text {deg}}Q_{n}=n\),

  2. \({\mathcal {L}}[Q_{m}(x)Q_{n}(x)]=0\) for \(m\ne n\) and \({\mathcal {L}}[Q_{n}(x)^2]\ne 0\).

The moment functional is called symmetric if all of its moments of odd order are 0, i.e., \({\mathcal {L}}[x^{2n+1}]=0\) for \(n\in \mathbb {N}\). Moreover, this is equivalent to the condition \(b_{n}=0\) for \(n\ge 1\) (see [2, Theorem 4.3]) and guarantees the existence of our compact formula given in Theorem 4.1.

This condition is satisfied by the Legendre, Hermite, Chebyshev, Bessel, Lommel \(\ldots \) and many other sequences of orthogonal polynomials (see [2, Chapter V]). We present three illustrative examples.

Example 4.3

The sequence \((P_{n}(x))_{n\in \mathbb {N}}\) of Legendre polynomials is given by \(P_{0}(x)=1\), \(P_{1}(x)=x\) and the recurrence relation

$$\begin{aligned} P_{n}(x)=\frac{2n-1}{n}xP_{n-1}(x)-\frac{n-1}{n}P_{n-2}(x) \quad \text {for } n\ge 2. \end{aligned}$$

In particular, we have

$$\begin{aligned} a_{n}=\frac{2n-1}{n},\quad b_{n}=0,\quad c_{n}=\frac{n-1}{n}. \end{aligned}$$

It is clear that \(a_{n-2}b_{n}=a_{n}b_{n-2} (=0)\) for \(n\ge 2\) and we get

$$\begin{aligned} d_{n}=\frac{(n-2)(2n-1)}{n(2n-5)}. \end{aligned}$$

After necessary simplifications, we get the following formulas:

$$\begin{aligned} {\text {Res}}&(P_{2n}(x),P_{2(n-1)}(x))\\&=\prod _{s=0}^{n-1}\left( \frac{s(2s-1)(4s+3)}{(s+1)(2s+1)(4s-1)}\right) ^{2s}\left( \frac{(4s+1)(4s+3)}{2(s+1)(2s+1)}\right) ^{4(n-s-1)},\\ \\ {\text {Res}}&\left( \frac{P_{2n+1}(x)}{x},\frac{P_{2n-1}(x)}{x}\right) \\&=\prod _{s=0}^{n-1}\left( \frac{s(2s+1)(4s+5)}{(s+1)(2s+3)(4s+1)}\right) ^{2s}\left( \frac{(4s+3)(4s+5)}{2(s+1)(2s+3)}\right) ^{4(n-s-1)}, \end{aligned}$$

with the convention that \(0^{0}=1\). As a simple consequence of our computations, we get that for each \(n\in \mathbb {N}_{\ge 2}\) the polynomials \(P_{n}, P_{n-2}\) are either co-prime or their only common root is \(x=0\).
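For instance, for \(n=2\) the first formula reduces to \((3/2)^{4}(7/18)^{2}=49/64\), which the following Python/SymPy sketch confirms:

```python
from sympy import Rational, legendre, resultant, symbols

x = symbols('x')

# n = 2 instance of the first displayed formula: the s = 0 factor contributes
# (3/2)^4 and the s = 1 factor contributes (7/18)^2, giving 49/64 in total.
assert resultant(legendre(4, x), legendre(2, x), x) == Rational(49, 64)
```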

Example 4.4

In the case of the Hermite polynomials \((H_{n})_{n\in \mathbb {N}}\) we have \(H_{0}(x)=1\), \(H_{1}(x)=2x\) and, for \(n\ge 2\), we have the recurrence relation

$$\begin{aligned} H_{n}(x)=2xH_{n-1}(x)-2(n-1)H_{n-2}(x). \end{aligned}$$

In particular, we have

$$\begin{aligned} a_{n}=2,\quad b_{n}=0,\quad c_{n}=2(n-1). \end{aligned}$$

It is clear that \(a_{n-2}b_{n}=a_{n}b_{n-2} (=0)\) for \(n\ge 2\) and we get that \(d_{n}=1\). In consequence, after necessary simplifications, we get the following formulas:

$$\begin{aligned} {\text {Res}}(H_{2n}(x),H_{2(n-1)}(x))&=2^{7n(n-1)}\prod _{s=1}^{n-1}((2s-1)s)^{2s},\\ {\text {Res}}\left( \frac{H_{2n+1}(x)}{x},\frac{H_{2n-1}(x)}{x}\right)&=2^{7n^2-3n-2}\prod _{s=1}^{n-1}((2s+1)s)^{2s}. \end{aligned}$$
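Both formulas are easy to check for \(n=2\), where they give \(2^{14}\) and \(2^{20}\cdot 9\) respectively (Python/SymPy sketch; SymPy's built-in `hermite` produces the physicists' Hermite polynomials, which satisfy the displayed recurrence):

```python
from sympy import cancel, hermite, resultant, symbols

x = symbols('x')
H = [hermite(n, x) for n in range(6)]

# n = 2 instances of the two displayed formulas:
assert resultant(H[4], H[2], x) == 2**14                         # 2^{7n(n-1)}
assert resultant(cancel(H[5]/x), cancel(H[3]/x), x) == 2**20 * 9
```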

Example 4.5

Using Theorem 3.1, we prove a resultant formula for a sequence of polynomials with combinatorial coefficients. For given \(n\in \mathbb {N}\) we consider the polynomial

$$\begin{aligned} V_{n}(x)=\sum _{i=0}^{n}\left( {\begin{array}{c}2i\\ i\end{array}}\right) \left( {\begin{array}{c}2(n-i)\\ n-i\end{array}}\right) x^{i}. \end{aligned}$$

The sequence \((V_{n}(x))_{n\in \mathbb {N}}\) is not a sequence of orthogonal polynomials. It is clear that \(V_{0}(x)=1\), \(V_{1}(x)=2(x+1)\). For \(n\ge 2\), the recurrence relation

$$\begin{aligned} V_{n}(x)=\frac{2(2n-1)}{n}(x+1)V_{n-1}(x)-\frac{16(n-1)}{n}xV_{n-2}(x) \end{aligned}$$

holds. This relation can be proved easily by induction on n with the help of the recurrence satisfied by the sequence of central binomial coefficients \((\left( {\begin{array}{c}2n\\ n\end{array}}\right) )_{n\in \mathbb {N}}\). We omit the details. Now, in order to get the formula for \({\text {Res}}(V_{n}(x),V_{n-1}(x))\), it is enough to apply Theorem 3.1 with

$$\begin{aligned} A=(0,1,1,1),\; q_{0}=q_{1}=2,\; a_{n,0}=a_{n,1}=\frac{2(2n-1)}{n},\;v_{n}=\frac{16(n-1)}{n}. \end{aligned}$$

After necessary simplifications we get the formula

$$\begin{aligned} {\text {Res}}(V_{n}(x),V_{n-1}(x))=2^{3n(n-1)}\prod _{s=1}^{n-1}\left( \frac{s}{s+1}\right) ^{s}\left( \frac{2s+1}{s+1}\right) ^{2(n-s-1)}. \end{aligned}$$

Note that the sequence of polynomials \((V_{n}(x))_{n\in \mathbb {N}}\) (or, to be more precise, the recurrence relation defining the sequence) also satisfies the assumptions of Theorem 4.1. Thus, one can also compute the value of the resultant \({\text {Res}}(V_{n}(x),V_{n-2}(x))\).
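The following Python/SymPy sketch double-checks the stated recurrence for \((V_{n}(x))_{n\in \mathbb {N}}\) directly from the binomial definition and evaluates two small resultants:

```python
from sympy import Rational, binomial, expand, resultant, symbols

x = symbols('x')

def V(n):
    return sum(binomial(2*i, i)*binomial(2*(n - i), n - i)*x**i for i in range(n + 1))

# The stated recurrence holds for small n ...
for n in range(2, 7):
    rec = Rational(2*(2*n - 1), n)*(x + 1)*V(n - 1) - Rational(16*(n - 1), n)*x*V(n - 2)
    assert expand(rec - V(n)) == 0

# ... and two small resultant values, computed directly:
assert resultant(V(2), V(1), x) == 32
assert resultant(V(3), V(2), x) == 131072
```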