Article

Rational Interpolation: Jacobi’s Approach Reminiscence

Faculty of Applied Mathematics and Control Processes, St. Petersburg State University, 7-9 Universitetskaya nab., 199034 St. Petersburg, Russia
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(8), 1401; https://doi.org/10.3390/sym13081401
Submission received: 18 June 2021 / Revised: 22 July 2021 / Accepted: 28 July 2021 / Published: 1 August 2021
(This article belongs to the Special Issue Current Trends in Symmetric Polynomials with Their Applications Ⅲ)

Abstract
We treat the interpolation problem $\{f(x_j)=y_j\}_{j=1}^N$ for polynomial and rational functions. Developing the approach originated by C. Jacobi, we represent the interpolants by virtue of the Hankel polynomials generated by the sequences of special symmetric functions of the data set like $\{\sum_{j=1}^N x_j^k y_j/W'(x_j)\}_{k\in\mathbb{N}}$ and $\{\sum_{j=1}^N x_j^k/(y_j W'(x_j))\}_{k\in\mathbb{N}}$; here, $W(x)=\prod_{j=1}^N(x-x_j)$. We also review the results by Jacobi, Joachimsthal, Kronecker and Frobenius on the recursive procedure for the computation of the sequence of Hankel polynomials. The problem of evaluation of the resultant of polynomials $p(x)$ and $q(x)$ given a set of values $\{p(x_j)/q(x_j)\}_{j=1}^N$ is also tackled within the framework of this approach. An effective procedure is suggested for the recomputation of rational interpolants in case of extension of the data set by an extra point.

1. Introduction

Given the data set for the variables $x$ and $y$

$$\begin{array}{c|cccc} x & x_1 & x_2 & \cdots & x_N \\ \hline y & y_1 & y_2 & \cdots & y_N \end{array}\tag{1}$$

with distinct nodes $\{x_1,\dots,x_N\}\subset\mathbb{R}$, we treat the interpolation problem in the sense of finding a function $f(x)$ such that $f(x_j)=y_j\in\mathbb{R}$ for $j\in\{1,\dots,N\}$. Systematic exploration of the problem started in the 17th century; for a historical overview and a review of numerous applications of the relevant theory, we refer to [1] and the references cited therein.
Depending on the concrete objectives, the solution to the problem can be looked for in various classes of interpolants like algebraic polynomial, trigonometric polynomial, sum of exponentials, rational functions, etc. The two interpolation problems dealt with in the present paper are polynomial and rational ones.
In comparison with the polynomial interpolation problem, the rational interpolation one appeared a century and a half later, with the first explicit formula due to Cauchy [2]. It was further developed by Kronecker [3] and Netto [4] (we briefly discuss this in Section 5). Interest in the problem revived in the second half of the 20th century in connection with its applications in Approximation Theory, Control Theory (recovering the transfer function from frequency responses) and Error Correcting Codes; in the latter case, the problem is treated in finite fields. We refer to [5,6,7,8] for recent developments and further references on the rational interpolation problem, as well as on its generalization, known as the rational Hermite or osculatory rational interpolation problem, where the values of some derivatives of $f(x)$ are assumed to be specified at some nodes.
The present paper is devoted to an approach to the rational interpolation problem originated in 1846 by Carl Jacobi [9]. Within the framework of this approach, the numerator and the denominator of the rational interpolant

$$f(x)=p(x)/q(x),\qquad \deg p(x)+\deg q(x)=N-1\tag{2}$$

are constructed in the form of so-called Hankel polynomials, i.e., polynomials represented in the determinantal form

$$H_k(x)=\begin{vmatrix} c_0 & c_1 & c_2 & \cdots & c_k\\ c_1 & c_2 & c_3 & \cdots & c_{k+1}\\ \vdots & & & & \vdots\\ c_{k-1} & c_k & c_{k+1} & \cdots & c_{2k-1}\\ 1 & x & x^2 & \cdots & x^k \end{vmatrix}$$

with the entries $\{c_0,c_1,\dots,c_{2k-1}\}\subset\mathbb{C}$ called its generators. The numerator and the denominator of the rational interpolant can be expressed as Hankel polynomials of appropriate orders with the generating sequences chosen in the form of suitable symmetric functions of the data set (1), namely

$$\widetilde{\tau}_k=\sum_{j=1}^N\frac{x_j^k}{y_j\,W'(x_j)}\quad\text{and}\quad \tau_k=\sum_{j=1}^N\frac{y_j\,x_j^k}{W'(x_j)}\quad\text{for } k\in\{0,1,\dots\},\tag{3}$$

correspondingly; here, $W(x)=\prod_{j=1}^N(x-x_j)$. Jacobi's approach has not received much attention or further development in modern computational mathematics; perhaps the only exception is the paper [10]. This disregard is probably caused by an evident computational bottleneck, namely the computation of a parameter-dependent determinant, which is not a trivial task when its order is large.
Fortunately, the specific structure of the involved determinants helps a lot: there exists a recursive procedure for the Hankel polynomial computation. It is based on an identity linking the Hankel polynomials of three consecutive orders:

$$A_kH_{k-2}(x)+(B_k-x)H_{k-1}(x)+\frac{1}{A_k}H_k(x)\equiv 0;\tag{4}$$

here, $A_k$ and $B_k$ are constants that can be expressed via the coefficients of $H_{k-2}(x)$ and $H_{k-1}(x)$. Formula (4) looks similar to the recurrence relation connecting orthogonal polynomials. This is indeed the case for a positive definite Hankel matrix: under this condition, the Hankel polynomials $H_1(x),H_2(x),\dots$ are orthogonal with respect to the inner product introduced via this matrix taken as the Gram matrix. In [11], this idea was extended to Hankel matrices with non-zero leading principal minors. Application of the recursive procedure for the computation of sequences of Hankel polynomials to the rational interpolation problem provides one with an opportunity to efficiently compute not only a single interpolant, but the whole family of fractions with all possible combinations of degrees of the numerator and the denominator satisfying the restriction $\deg p(x)+\deg q(x)=N-1$.
One more issue of this scheme should be discussed, namely the evaluation of the constants in (4). For the classical orthogonal polynomials, or for their counterparts from [11], this is done via inner product computations involving the polynomials $H_{k-1}(x)$ and $xH_{k-1}(x)$. However, there exists an alternative algorithm for the computation of the constants in (4). Surprisingly, this version of the identity (4) is also due to Jacobi, with the original idea outlined in his paper [12] dating from 1836, and the complete treatment carried out by his disciple Ferdinand Joachimsthal [13]. This approach is free of the orthogonality concept and is based on a purely determinantal formalism.
The present paper is organized as follows. We first discuss the properties of Hankel polynomials in Section 3, starting with the Jacobi–Joachimsthal result. The first two of the present authors excavated this result and published it with the original Joachimsthal proof in the paper [14]. For the convenience of the reader, we decided to include an English translation of the proof (the original Jacobi paper was written in Latin, Joachimsthal's in German, and the paper [14] is in Russian). Further investigation of the 19th-century heritage results in the discovery of generalizations of the Jacobi–Joachimsthal result (the cases of degeneracy of (4), for instance $A_k=0$) in the works by Kronecker and Frobenius [15]; we include their results in Section 3. The detailed presentation of this classical algebra material (with the exception of Theorem 8, which is due to the present authors) is justified by its having been completely forgotten over the following two centuries. Indeed, we failed to find any references to it either in the monographs [16,17,18] or even in research papers. This state of affairs might seem unfair, but not an extraordinary one, if not for one circumstance. As a matter of fact, the referred results should be treated as the gist of the algorithm nowadays known as the Berlekamp–Massey algorithm [19,20]; it was suggested for the decoding procedure in Bose–Chaudhuri–Hocquenghem (BCH) codes and for finding the minimal polynomial of a linear recurrent sequence.
The deployment of this Hankel polynomial formalism to the main problem of the paper is started in Section 4 with the treatment of the polynomial interpolation. We discuss here some properties of the sums like (3), i.e., symmetric functions of the data set. The rational interpolant construction is dealt with in Section 5 where the efficiency of the two approaches (i.e., Jacobi–Joachimsthal and orthogonality-based) for computation of the Hankel polynomial via (4) is discussed.
The results of Section 6 and Section 7 should be regarded as completely original. In Section 6, we deal with a question related to the uniqueness problem for rational interpolation. Having the data set (1) generated by some rational fraction $p(x)/q(x)$ with $\deg p(x)+\deg q(x)=N-1$, is it possible to establish that the fraction is irreducible (or, equivalently, that the polynomials $p(x)$ and $q(x)$ do not possess a common zero) avoiding, as an intermediate step, their explicit representation? The question is equivalent to the possibility of expressing the resultant of the polynomials $p(x)$ and $q(x)$ in terms of the entries of the data set (1). We prove that the resultant can be expressed in the form of an appropriate Hankel determinant generated by either of the sequences (3).
In Section 7, we present an effective procedure for the recomputation of Hankel polynomials for the data set (1) complemented by an additional point. It turns out that, generically, the $k$th order Hankel polynomial composed for the extended data set can be expressed as a linear combination of two of its counterparts (of the orders $k$ and $k-1$) constructed for the initial set (1).
The relationship of Jacobi's approach to the rational interpolation problem with the one based on the barycentric formula [6,21] is discussed in Section 8.
In Section 9, we briefly outline an application of the Hankel polynomial formalism to the problems of Coding Theory. We address here the Berlekamp–Welch algorithm [22] for the error detection and correction in a data sequence.

2. Algebraic Preliminaries

For the convenience of further references, we recall here some auxiliary results from the Matrix theory and Polynomial Field theory.
Theorem 1
(Kronecker [23,24]). If in a given matrix a certain $r$th order minor is not zero, and all the $(r+1)$th order minors containing that $r$th order minor are zero, then all the $(r+1)$th order minors are zero, and, therefore, the rank of the matrix equals $r$.
Theorem 2
(Sylvester [16]). For a square matrix $A$ of order $n$, denote by

$$A\begin{pmatrix} i_1 & i_2 & \dots & i_k\\ j_1 & j_2 & \dots & j_k\end{pmatrix}$$

its minor standing in the rows $i_1,i_2,\dots,i_k$ and in the columns $j_1,j_2,\dots,j_k$. The following Sylvester identity is valid:

$$A\begin{pmatrix}1&2&\dots&n-2\\ 1&2&\dots&n-2\end{pmatrix}\cdot\det A
= A\begin{pmatrix}1&\dots&n-2&n-1\\ 1&\dots&n-2&n-1\end{pmatrix}\cdot A\begin{pmatrix}1&\dots&n-2&n\\ 1&\dots&n-2&n\end{pmatrix}
- A\begin{pmatrix}1&\dots&n-2&n\\ 1&\dots&n-2&n-1\end{pmatrix}\cdot A\begin{pmatrix}1&\dots&n-2&n-1\\ 1&\dots&n-2&n\end{pmatrix}.\tag{5}$$
We next recall the definition of the resultant of polynomials [8,23,25]. For the polynomials

$$p(x)=p_0x^n+p_1x^{n-1}+\dots+p_n \quad\text{and}\quad q(x)=q_0x^m+q_1x^{m-1}+\dots+q_m$$

with $p_0\ne 0$, $q_0\ne 0$, $n\ge 1$, $m\ge 1$, their resultant is formally defined as

$$\mathcal{R}(p(x),q(x))=p_0^m\prod_{j=1}^n q(\lambda_j),$$

where $\lambda_1,\dots,\lambda_n$ denote the zeros of $p(x)$ (counted in accordance with their multiplicities). Equivalently, the resultant can be defined as

$$\mathcal{R}(p(x),q(x))=(-1)^{mn}q_0^n\prod_{\ell=1}^m p(\mu_\ell),$$

where $\mu_1,\dots,\mu_m$ denote the zeros of $q(x)$ (counted in accordance with their multiplicities). As for constructive methods of computing the resultant as a polynomial function of the coefficients of $p(x)$ and $q(x)$, this can be done with the aid of several determinantal representations (like Sylvester's, Bézout's or Kronecker's).
Example 1.
For the polynomials

$$p(x)=p_0x^3+p_1x^2+p_2x+p_3 \quad\text{and}\quad q(x)=q_0x^5+\dots+q_5$$

with $p_0\ne 0$, $q_0\ne 0$, their resultant in Sylvester's form is the $(3+5)$th order determinant

$$\mathcal{R}(p(x),q(x))=\begin{vmatrix}
p_0 & p_1 & p_2 & p_3 & & & &\\
 & p_0 & p_1 & p_2 & p_3 & & &\\
 & & p_0 & p_1 & p_2 & p_3 & &\\
 & & & p_0 & p_1 & p_2 & p_3 &\\
 & & & & p_0 & p_1 & p_2 & p_3\\
q_0 & q_1 & q_2 & q_3 & q_4 & q_5 & &\\
 & q_0 & q_1 & q_2 & q_3 & q_4 & q_5 &\\
 & & q_0 & q_1 & q_2 & q_3 & q_4 & q_5
\end{vmatrix}_{8\times 8}$$

(five rows built from the coefficients of $p(x)$ and three rows from those of $q(x)$). Entries not indicated in the determinant are set equal to zero.
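Sylvester's construction can be sketched directly in code; the following is a minimal illustration (function names are our own, exact arithmetic via the standard `fractions` module):

```python
from fractions import Fraction

def det(M):
    # Exact determinant via Laplace expansion (small orders only).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def sylvester_resultant(p, q):
    # Resultant of p (degree n) and q (degree m) as the determinant of the
    # (n+m)x(n+m) Sylvester matrix; p and q are coefficient lists in
    # descending powers with nonzero leading coefficients.
    n, m = len(p) - 1, len(q) - 1
    size = n + m
    rows = []
    for i in range(m):                      # m shifted rows of p-coefficients
        rows.append([Fraction(0)] * i + [Fraction(v) for v in p]
                    + [Fraction(0)] * (size - n - 1 - i))
    for i in range(n):                      # n shifted rows of q-coefficients
        rows.append([Fraction(0)] * i + [Fraction(v) for v in q]
                    + [Fraction(0)] * (size - m - 1 - i))
    return det(rows)
```

For example, $p(x)=x^2-1$ and $q(x)=x-1$ share the zero $x=1$, so their resultant vanishes, in accordance with Theorem 3 below.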
Theorem 3.
Polynomials $p(x)$ and $q(x)$ possess a common zero if and only if their resultant vanishes: $\mathcal{R}(p(x),q(x))=0$.
An important particular case, related to the resultant of a polynomial and its derivative, gives rise to a special notion: the expression

$$\mathcal{D}(p(x))=\frac{(-1)^{n(n-1)/2}}{p_0}\,\mathcal{R}\bigl(p(x),p'(x)\bigr)$$

is known as the discriminant of the polynomial $p(x)$. It is a polynomial function of the coefficients $p_0,\dots,p_n$.
Corollary 1.
A polynomial $p(x)$ with $\deg p(x)\ge 2$ possesses a multiple zero if and only if $\mathcal{D}(p(x))=0$.
We conclude the list of algebraic preliminaries with Hermann Weyl’s result known as the Principle of the Irrelevance of Algebraic Inequalities ([26], p. 4, Lemma (1.1.A); in the following original statement, k stands for an infinite integral domain):
Theorem 4
(Weyl H.). A $k$-polynomial $F(x,y,\dots)$ vanishes identically if it vanishes numerically for all sets of rational values $x=\alpha,y=\beta,\dots$ subject to a number of algebraic inequalities

$$R_1(\alpha,\beta,\dots)\ne 0,\quad R_2(\alpha,\beta,\dots)\ne 0,\quad\dots$$

3. Hankel Determinants and Polynomials

For a (finite or infinite) sequence of complex numbers

$$\{c_j\}_{j=0}^{\infty}=c_0,c_1,\dots,\tag{9}$$

a square matrix in which each ascending skew-diagonal from left to right is constant,

$$\left[c_{i+j-2}\right]_{i,j=1}^{k}=\begin{bmatrix}
c_0 & c_1 & c_2 & \cdots & c_{k-1}\\
c_1 & c_2 & c_3 & \cdots & c_k\\
\vdots & & & & \vdots\\
c_{k-2} & c_{k-1} & c_k & \cdots & c_{2k-3}\\
c_{k-1} & c_k & c_{k+1} & \cdots & c_{2k-2}
\end{bmatrix}_{k\times k},$$

is called the Hankel matrix of order $k$ generated by the sequence (9); its determinant will be denoted by $H_k(\{c\})$, or simply $H_k$ if this does not cause misunderstanding.
The following result is a particular case of Sylvester’s identity (5).
Theorem 5.
Hankel determinants of three consecutive orders are linked by the equality (sometimes also referred to as the Jacobi identity [17]):

$$H_{k-2}H_k=H_{k-1}\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-3} & c_{k-1}\\
c_1 & c_2 & \cdots & c_{k-2} & c_k\\
\vdots & & & \vdots & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-6} & c_{2k-4}\\
c_{k-1} & c_k & \cdots & c_{2k-4} & c_{2k-2}
\end{vmatrix}-\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-3} & c_{k-1}\\
c_1 & c_2 & \cdots & c_{k-2} & c_k\\
\vdots & & & \vdots & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-6} & c_{2k-4}\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-5} & c_{2k-3}
\end{vmatrix}^{\,2}.\tag{11}$$
If we replace the last row of the Hankel matrix of order $k+1$ by the row of powers of $x$, then the corresponding determinant

$$H_k(x;\{c\})=\begin{vmatrix}
c_0 & c_1 & c_2 & \cdots & c_k\\
c_1 & c_2 & c_3 & \cdots & c_{k+1}\\
\vdots & & & & \vdots\\
c_{k-1} & c_k & c_{k+1} & \cdots & c_{2k-1}\\
1 & x & x^2 & \cdots & x^k
\end{vmatrix}_{(k+1)\times(k+1)},\tag{12}$$

or simply $H_k(x)$, is called the $k$th Hankel polynomial [17] generated by the sequence (9). Expansion of (12) by its last row yields

$$H_k(x)\equiv h_{k0}x^k+h_{k1}x^{k-1}+h_{k2}x^{k-2}+\dots \quad\text{with}\quad h_{k0}=H_k.$$

Thus, $\deg H_k(x)=k$ iff $H_k\ne 0$. We will also utilize an alternative representation for the Hankel polynomial in the form of a Hankel determinant, resulting from a linear transformation of the columns of the determinant (12):

$$H_k(x;\{c\})=(-1)^k\begin{vmatrix}
c_1-c_0x & c_2-c_1x & \cdots & c_k-c_{k-1}x\\
c_2-c_1x & c_3-c_2x & \cdots & c_{k+1}-c_kx\\
\vdots & & & \vdots\\
c_k-c_{k-1}x & c_{k+1}-c_kx & \cdots & c_{2k-1}-c_{2k-2}x
\end{vmatrix}_{k\times k}=(-1)^kH_k\bigl(\{c_{j+1}-c_jx\}_{j=0}^{\infty}\bigr).$$
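The equivalence of the two determinantal representations is easy to check numerically by evaluating both sides at a rational point; a minimal sketch (helper names are our own, exact arithmetic via the standard `fractions` module):

```python
from fractions import Fraction

def det(M):
    # Exact determinant by Laplace expansion (small orders only).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def hankel_poly_at(c, k, x):
    # H_k(x): the (k+1)x(k+1) determinant with last row (1, x, ..., x^k).
    c = [Fraction(v) for v in c]
    x = Fraction(x)
    M = [[c[i + j] for j in range(k + 1)] for i in range(k)]
    M.append([x ** j for j in range(k + 1)])
    return det(M)

def hankel_poly_alt(c, k, x):
    # (-1)^k times the k-th order Hankel determinant generated by the
    # transformed sequence d_j = c_{j+1} - c_j * x.
    c = [Fraction(v) for v in c]
    x = Fraction(x)
    d = [c[j + 1] - c[j] * x for j in range(2 * k - 1)]
    M = [[d[i + j] for j in range(k)] for i in range(k)]
    return (-1) ** k * det(M)
```

Both functions agree for any generators and any evaluation point, which is a convenient sanity check on the column transformation.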
It turns out that there exists an explicit linear relation between any three consecutive Hankel polynomials generated by an arbitrary sequence (9).

Theorem 6

(Jacobi, Joachimsthal, refs. [12,13]). Any three consecutive Hankel polynomials $H_{k-2}(x)$, $H_{k-1}(x)$, $H_k(x)$ are linked by the following identity

$$H_k^2H_{k-2}(x)+\bigl(H_kh_{k-1,1}-H_{k-1}h_{k1}-H_kH_{k-1}x\bigr)H_{k-1}(x)+H_{k-1}^2H_k(x)\equiv 0,\tag{15}$$

which hereinafter will be referred to as the JJ-identity.

The proof below is a slightly modernized version of the original proof from [13].
Lemma 1.
Let the Hankel polynomial $H_k(x)$ be generated by the sequence

$$c_j=\sum_{\ell=1}^m\lambda_\ell^{\,j}\quad\text{for } j\in\{0,\dots,2k-1\}\tag{16}$$

for arbitrary distinct $\lambda_1,\dots,\lambda_m$ with $m>k$; hence, $c_0=m$. Then, the following equalities are valid:

$$\sum_{\ell=1}^m\lambda_\ell^{\,j}H_k(\lambda_\ell)=\begin{cases}0 & \text{if } j\in\{0,\dots,k-1\},\\ H_{k+1} & \text{if } j=k.\end{cases}\tag{17}$$
Proof. 
$$\lambda_1^{\,j}H_k(\lambda_1)+\lambda_2^{\,j}H_k(\lambda_2)+\dots+\lambda_m^{\,j}H_k(\lambda_m)=\lambda_1^{\,j}\begin{vmatrix}
c_0 & c_1 & \cdots & c_k\\
c_1 & c_2 & \cdots & c_{k+1}\\
\vdots & & & \vdots\\
c_{k-1} & c_k & \cdots & c_{2k-1}\\
1 & \lambda_1 & \cdots & \lambda_1^{\,k}
\end{vmatrix}+\dots+\lambda_m^{\,j}\begin{vmatrix}
c_0 & c_1 & \cdots & c_k\\
c_1 & c_2 & \cdots & c_{k+1}\\
\vdots & & & \vdots\\
c_{k-1} & c_k & \cdots & c_{2k-1}\\
1 & \lambda_m & \cdots & \lambda_m^{\,k}
\end{vmatrix}.$$

Using the linearity of the determinant with respect to its last row, convert this linear combination of determinants into a single one:

$$\begin{vmatrix}
c_0 & c_1 & \cdots & c_k\\
c_1 & c_2 & \cdots & c_{k+1}\\
\vdots & & & \vdots\\
c_{k-1} & c_k & \cdots & c_{2k-1}\\
c_j & c_{j+1} & \cdots & c_{j+k}
\end{vmatrix}.$$

If $j<k$, then the last determinant possesses two identical rows and is therefore zero. For $j=k$, the obtained determinant coincides with $H_{k+1}$. □
Proof of Theorem 6. 

First consider the case where the generating sequence for the polynomials $H_{k-2}(x)$, $H_{k-1}(x)$, $H_k(x)$ is given by (16).

Assuming that $H_{k-1}\ne 0$ (i.e., $\deg H_{k-1}(x)=k-1$), divide $H_k(x)$ by $H_{k-1}(x)$:

$$H_k(x)\equiv Q(x)H_{k-1}(x)+R(x).\tag{18}$$

Here, the coefficients of the quotient $Q(x)$ can be determined from those of $H_k(x)$ and $H_{k-1}(x)$ by equating the coefficients of corresponding powers of $x$ on both sides of (18):

$$Q(x)=Q_0+Q_1x \quad\text{where}\quad Q_1=\frac{H_k}{H_{k-1}},\quad Q_0=\frac{H_{k-1}h_{k1}-H_kh_{k-1,1}}{H_{k-1}^2},\tag{19}$$

or, equivalently,

$$Q(x)\equiv\frac{1}{H_{k-1}^2}\begin{vmatrix}0 & H_{k-1} & H_k\\ H_{k-1} & h_{k-1,1} & h_{k1}\\ 1 & x & 0\end{vmatrix}.$$
To find the coefficients of the remainder

$$R(x)=R_0+R_1x+\dots+R_{k-2}x^{k-2},\tag{21}$$

substitute $x=\lambda_1,\dots,x=\lambda_m$ into (18):

$$H_k(\lambda_\ell)=(Q_1\lambda_\ell+Q_0)H_{k-1}(\lambda_\ell)+R_0+R_1\lambda_\ell+\dots+R_{k-2}\lambda_\ell^{\,k-2},\quad \ell\in\{1,\dots,m\}.\tag{22}$$

Summation of these equalities yields

$$\sum_{\ell=1}^mH_k(\lambda_\ell)=Q_0\sum_{\ell=1}^mH_{k-1}(\lambda_\ell)+Q_1\sum_{\ell=1}^m\lambda_\ell H_{k-1}(\lambda_\ell)+(c_0R_0+c_1R_1+\dots+c_{k-2}R_{k-2}).$$

Due to (17), one gets

$$0=c_0R_0+c_1R_1+\dots+c_{k-2}R_{k-2}.$$

Next, multiply every equality (22) by the corresponding $\lambda_\ell^{\,j}$ and sum the obtained equalities over $\ell$. For $j\in\{1,\dots,k-3\}$, the resulting equality looks similar to the previously deduced one:

$$0=c_jR_0+c_{j+1}R_1+\dots+c_{j+k-2}R_{k-2}.$$

Multiplication of the equalities (22) by $\lambda_\ell^{\,k-2}$ yields something different:

$$0=H_kQ_1+c_{k-2}R_0+c_{k-1}R_1+\dots+c_{2k-4}R_{k-2}.$$

Unifying the obtained relations for $R_0,\dots,R_{k-2}$ with (21), one gets the linear system

$$\begin{cases}
c_0R_0+c_1R_1+\dots+c_{k-2}R_{k-2}=0,\\
c_1R_0+c_2R_1+\dots+c_{k-1}R_{k-2}=0,\\
\qquad\vdots\\
c_{k-3}R_0+c_{k-2}R_1+\dots+c_{2k-5}R_{k-2}=0,\\
c_{k-2}R_0+c_{k-1}R_1+\dots+c_{2k-4}R_{k-2}+H_kQ_1=0,\\
R_0+xR_1+\dots+x^{k-2}R_{k-2}-R(x)=0.
\end{cases}$$
Consider it as a system of homogeneous equations with respect to the quantities $R_0,R_1,\dots,R_{k-2},1$. Since it possesses a nontrivial solution, its determinant necessarily vanishes:

$$\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-2} & 0\\
c_1 & c_2 & \cdots & c_{k-1} & 0\\
\vdots & & & \vdots & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-5} & 0\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-4} & H_kQ_1\\
1 & x & \cdots & x^{k-2} & -R(x)
\end{vmatrix}=0.$$

Expansion of the determinant by its last column yields

$$H_{k-1}R(x)+H_kQ_1H_{k-2}(x)\equiv 0.$$

Together with the already obtained expression (19) for $Q_1$, this confirms the validity of (15) for the particular case of the generating sequence given by (16).
Consider now the case of an arbitrary generating sequence (9). For any given sequence of complex numbers $c_1,\dots,c_{2k-1}$, it is possible to find complex numbers $\lambda_1,\dots,\lambda_m$ with $m>2k-1$ such that the equations (16) become consistent. These numbers can be chosen as the zeros of a polynomial of degree $m$ whose first $2k-1$ Newton sums [25] coincide with $\{c_j\}_{j=1}^{2k-1}$.

To complete the proof of (15), one should fill one gap in the arguments of the previous paragraph. Whereas the numbers $c_1,\dots,c_{2k-1}$ can be chosen arbitrarily, the number $c_0$ takes a positive integer value, namely $m$. Thus, the validity of (15) is proved only for positive integer $c_0$. However, this equality is an algebraic one in $c_0$; being valid for an infinite set of integers, it must be valid for any $c_0\in\mathbb{C}$.

Joachimsthal did not provide a proof of the result for the case $H_{k-1}=0$. Theorem 4 manages this case, since the left-hand side of (15) and $H_{k-1}$ are polynomials in $c_0,\dots,c_{2k-1}$. □
The relationship (15) gives rise to a more symmetric form:

Corollary 1.

If $H_k\ne 0$, $H_{k-1}\ne 0$, then the JJ-identity can be written down as

$$\frac{H_k}{H_{k-1}}H_{k-2}(x)-\left(x-\frac{h_{k-1,1}}{H_{k-1}}+\frac{h_{k1}}{H_k}\right)H_{k-1}(x)+\frac{H_{k-1}}{H_k}H_k(x)\equiv 0.\tag{26}$$
The next result will be used in one of the further proofs.
Corollary 2.
For the coefficient $h_{k-1,2}$, the following representation is valid:

$$h_{k-1,2}=\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-3} & c_{k-1}\\
c_1 & c_2 & \cdots & c_{k-2} & c_k\\
\vdots & & & \vdots & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-6} & c_{2k-4}\\
c_{k-1} & c_k & \cdots & c_{2k-4} & c_{2k-2}
\end{vmatrix}-\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-3} & c_k\\
c_1 & c_2 & \cdots & c_{k-2} & c_{k+1}\\
\vdots & & & \vdots & \vdots\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-5} & c_{2k-2}
\end{vmatrix}.\tag{25}$$
Proof. 
Take the coefficient of $x^{k-2}$ in the left-hand side of the JJ-identity:

$$H_k^2H_{k-2}+\bigl(H_kh_{k-1,1}-H_{k-1}h_{k1}\bigr)h_{k-1,1}-H_kH_{k-1}h_{k-1,2}+H_{k-1}^2h_{k2}=0.$$

Under the assumption that $H_{k-1}\ne 0$, $H_k\ne 0$, it follows that

$$h_{k-1,2}=\frac{H_kH_{k-2}+h_{k-1,1}^2}{H_{k-1}}-\frac{h_{k1}h_{k-1,1}-H_{k-1}h_{k2}}{H_k}.$$

Apply Sylvester's identity (5) to both fractions in the right-hand side. For the first one, we take it in the version (11), while for the second one we apply it to the matrix

$$\begin{bmatrix}
c_0 & c_1 & \cdots & c_{k-2} & c_{k-1} & c_k\\
c_1 & c_2 & \cdots & c_{k-1} & c_k & c_{k+1}\\
\vdots & & & \vdots & \vdots & \vdots\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-4} & c_{2k-3} & c_{2k-2}\\
c_{k-1} & c_k & \cdots & c_{2k-3} & c_{2k-2} & c_{2k-1}\\
0 & 0 & \cdots & 0 & 0 & 1
\end{bmatrix}_{(k+1)\times(k+1)}$$

with determinant equal to $H_k$:

$$\frac{h_{k1}h_{k-1,1}-H_{k-1}h_{k2}}{H_k}=\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-3} & c_k\\
c_1 & c_2 & \cdots & c_{k-2} & c_{k+1}\\
\vdots & & & \vdots & \vdots\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-5} & c_{2k-2}
\end{vmatrix}.$$

To cover the case where $H_{k-1}=0$ or $H_k=0$, one should apply the result of Theorem 4. □
The JJ-identity permits one to organize a recursive procedure for the computation of Hankel polynomials. Indeed, assume that the expressions for $H_{k-2}(x)$ and $H_{k-1}(x)$ are already computed, and

$$H_{k-1}(x)\equiv h_{k-1,0}x^{k-1}+h_{k-1,1}x^{k-2}+\dots+h_{k-1,k-1},\qquad h_{k-1,0}=H_{k-1}.$$

Then all the constants in (15) are also evaluated, except for $H_k$ and $h_{k1}$, for which one has only the determinantal representations

$$H_k=\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-2} & c_{k-1}\\
c_1 & c_2 & \cdots & c_{k-1} & c_k\\
\vdots & & & \vdots & \vdots\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-4} & c_{2k-3}\\
c_{k-1} & c_k & \cdots & c_{2k-3} & c_{2k-2}
\end{vmatrix},\qquad
h_{k1}=-\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-2} & c_k\\
c_1 & c_2 & \cdots & c_{k-1} & c_{k+1}\\
\vdots & & & \vdots & \vdots\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-4} & c_{2k-2}\\
c_{k-1} & c_k & \cdots & c_{2k-3} & c_{2k-1}
\end{vmatrix}.$$

These determinants differ from the transposed determinantal representation of $H_{k-1}(x)$ only in their last columns. Expansions by the entries of these columns have the same values of the corresponding cofactors, and, therefore, the formulas

$$h_{k0}=H_k=c_{k-1}h_{k-1,k-1}+c_kh_{k-1,k-2}+\dots+c_{2k-2}h_{k-1,0},\qquad
h_{k1}=-\bigl(c_kh_{k-1,k-1}+c_{k+1}h_{k-1,k-2}+\dots+c_{2k-1}h_{k-1,0}\bigr)\tag{27}$$

allow one to evaluate $h_{k0}$ and $h_{k1}$ via the already computed coefficients of $H_{k-1}(x)$.
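In the generic case where all the determinants $H_k$ are non-zero, the whole recursion can be sketched as follows; this is our own illustrative Python implementation (exact arithmetic via the standard `fractions` module), combining the JJ-identity (15) with the formulas (27):

```python
from fractions import Fraction

def polymul(a, b):
    # Product of two polynomials given as descending coefficient lists.
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polysub(a, b):
    # Difference of two polynomials (descending coefficient lists).
    n = max(len(a), len(b))
    a = [Fraction(0)] * (n - len(a)) + list(a)
    b = [Fraction(0)] * (n - len(b)) + list(b)
    return [x - y for x, y in zip(a, b)]

def jj_recursion(c, kmax):
    # Hankel polynomials H_1(x), ..., H_kmax(x), as descending coefficient
    # lists, computed recursively; requires the generators c_0, ..., c_{2*kmax-1}
    # and H_k != 0 for all intermediate k (the generic, non-degenerate case).
    c = [Fraction(v) for v in c]
    polys = {0: [Fraction(1)], 1: [c[0], -c[1]]}
    for k in range(2, kmax + 1):
        prev = polys[k - 1]                      # H_{k-1}(x)
        Hk1, hk11 = prev[0], prev[1]             # H_{k-1} and h_{k-1,1}
        # formulas (27): expansions along the last columns
        Hk = sum(c[k - 1 + i] * prev[k - 1 - i] for i in range(k))
        hk1 = -sum(c[k + i] * prev[k - 1 - i] for i in range(k))
        # identity (15), solved for H_k(x):
        # H_{k-1}^2 H_k(x) = (H_k H_{k-1} x + H_{k-1} h_{k1}
        #                     - H_k h_{k-1,1}) H_{k-1}(x) - H_k^2 H_{k-2}(x)
        lin = [Hk * Hk1, Hk1 * hk1 - Hk * hk11]
        t = polysub(polymul(lin, prev), [Hk * Hk * a for a in polys[k - 2]])
        polys[k] = [a / (Hk1 * Hk1) for a in t]
    return polys
```

For instance, for the power-sum generators $c_j=1^j+2^j+3^j$ the recursion yields $H_3(x)=4(x-1)(x-2)(x-3)$, in accordance with Lemma 1.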
However, the just-outlined algorithm for the recursive computation of $H_k(x)$ fails in the case where $H_{k-1}=0$.
Theorem 7.
Let $H_{k-2}\ne 0$, $H_{k-1}=0$. If $h_{k-1,1}=0$, then $H_k=0$,

$$H_{k-1}(x)\equiv 0,$$

and

$$H_k(x)\equiv\frac{h_{k2}}{H_{k-2}}H_{k-2}(x).\tag{28}$$

Otherwise,

$$H_{k-1}(x)\equiv\frac{h_{k-1,1}}{H_{k-2}}H_{k-2}(x)\tag{29}$$

and

$$H_k(x)\equiv\frac{1}{H_{k-2}^3}\left[H_kH_{k-2}h_{k-1,1}H_{k-3}(x)+\begin{vmatrix}
0 & 0 & H_{k-2} & H_k\\
0 & H_{k-2} & h_{k-2,1} & h_{k1}\\
H_{k-2} & h_{k-2,1} & h_{k-2,2} & h_{k2}\\
1 & x & x^2 & 0
\end{vmatrix}H_{k-2}(x)\right].\tag{30}$$
Proof. 
If $H_{k-2}\ne 0$ and $H_{k-1}=0$, $h_{k-1,1}=0$, then all the $(k-1)$th order minors of the matrix

$$\begin{bmatrix}
c_0 & c_1 & \cdots & c_{k-3} & c_{k-2} & c_{k-1}\\
c_1 & c_2 & \cdots & c_{k-2} & c_{k-1} & c_k\\
\vdots & & & \vdots & \vdots & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-6} & c_{2k-5} & c_{2k-4}\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-5} & c_{2k-4} & c_{2k-3}
\end{bmatrix}_{(k-1)\times k}$$

are zero. This follows from Theorem 1. Therefore, all the coefficients of the polynomial $H_{k-1}(x)$ are also zero, since they are (up to sign) $(k-1)$th order minors of this matrix.

To establish the validity of (28), it is sufficient to prove that the remainder of the division of $H_k(x)$ by $H_{k-2}(x)$ is identically zero. This will be done later; for now, we intend to prove (29). Since $H_{k-1}=0$, the identity (15) can be rewritten as

$$H_k\bigl(H_kH_{k-2}(x)+h_{k-1,1}H_{k-1}(x)\bigr)\equiv 0.\tag{31}$$

If $h_{k-1,1}\ne 0$, then the Sylvester identity (5) leads one to

$$H_kH_{k-2}=-h_{k-1,1}^2,$$

from which it follows that

$$H_k\ne 0\quad\text{and}\quad H_k=-h_{k-1,1}^2/H_{k-2}.$$

Consequently, the identity (31) leads one to

$$H_{k-1}(x)\equiv-\frac{H_k}{h_{k-1,1}}H_{k-2}(x)\equiv\frac{h_{k-1,1}}{H_{k-2}}H_{k-2}(x),$$

which proves (29).
We now intend to prove (28) and (30). Divide $H_k(x)$ by $H_{k-2}(x)$:

$$H_k(x)\equiv Q(x)H_{k-2}(x)+R(x).\tag{32}$$

Here, the quotient $Q(x)$ equals $h_{k2}/H_{k-2}$ if $h_{k-1,1}=0$, while for the case $h_{k-1,1}\ne 0$ one has

$$Q(x)\equiv Q_0+Q_1x+Q_2x^2$$

with coefficients determined by the equalities

$$Q_2=\frac{H_k}{H_{k-2}},\quad Q_1=\frac{H_{k-2}h_{k1}-H_kh_{k-2,1}}{H_{k-2}^2},\quad
Q_0=\frac{H_{k-2}^2h_{k2}-H_kH_{k-2}h_{k-2,2}-H_{k-2}h_{k1}h_{k-2,1}+H_kh_{k-2,1}^2}{H_{k-2}^3}.\tag{33}$$

To find the coefficients of the remainder

$$R(x)=R_0+R_1x+\dots+R_{k-3}x^{k-3},\tag{34}$$
use arguments similar to those used in the proof of Theorem 6. First, consider the case where the generating sequence (9) is given by (16). Substitute $x=\lambda_\ell$ into (32):

$$H_k(\lambda_\ell)=(Q_0+Q_1\lambda_\ell+Q_2\lambda_\ell^2)H_{k-2}(\lambda_\ell)+R_0+\dots+R_{k-3}\lambda_\ell^{\,k-3},\quad \ell\in\{1,\dots,m\},\tag{35}$$

and sum up the obtained equalities. Due to (17), one gets

$$0=c_0R_0+c_1R_1+\dots+c_{k-3}R_{k-3}.$$

Similar equalities result from the multiplication of (35) by $\lambda_\ell^{\,j}$ for $j\in\{1,\dots,k-5\}$ and further summation over $\ell$:

$$0=c_1R_0+c_2R_1+\dots+c_{k-2}R_{k-3},\quad\dots,\quad 0=c_{k-5}R_0+c_{k-4}R_1+\dots+c_{2k-8}R_{k-3}.$$

Multiplication of (35) by $\lambda_\ell^{\,k-4}$ and summation yields

$$0=Q_2H_{k-1}+c_{k-4}R_0+c_{k-3}R_1+\dots+c_{2k-7}R_{k-3},$$

and, since by assumption $H_{k-1}=0$, the obtained equality looks similar to the previous ones. Multiplication of (35) by $\lambda_\ell^{\,k-3}$ and summation leads to

$$0=Q_2\sum_{\ell=1}^m\lambda_\ell^{\,k-1}H_{k-2}(\lambda_\ell)+c_{k-3}R_0+c_{k-2}R_1+\dots+c_{2k-6}R_{k-3}$$

with

$$\sum_{\ell=1}^m\lambda_\ell^{\,k-1}H_{k-2}(\lambda_\ell)=\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-2}\\
c_1 & c_2 & \cdots & c_{k-1}\\
\vdots & & & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-5}\\
\sum_\ell\lambda_\ell^{\,k-1} & \sum_\ell\lambda_\ell^{\,k} & \cdots & \sum_\ell\lambda_\ell^{\,2k-3}
\end{vmatrix}=\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-2}\\
c_1 & c_2 & \cdots & c_{k-1}\\
\vdots & & & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-5}\\
c_{k-1} & c_k & \cdots & c_{2k-3}
\end{vmatrix}=-h_{k-1,1}.$$
If $h_{k-1,1}=0$, then the obtained linear system of equalities with respect to $R_0,\dots,R_{k-3}$, namely

$$c_iR_0+c_{i+1}R_1+\dots+c_{k+i-3}R_{k-3}=0\quad\text{for } i\in\{0,\dots,k-3\},$$

implies that $\{R_j=0\}_{j=0}^{k-3}$ (since the determinant of the system equals $H_{k-2}\ne 0$). This proves (28). For the case $h_{k-1,1}\ne 0$, we unify all the obtained relationships with (34) and compose a linear system with respect to the quantities $R_0,\dots,R_{k-3},1$. Since it is consistent, its determinant should vanish:

$$\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-3} & 0\\
c_1 & c_2 & \cdots & c_{k-2} & 0\\
\vdots & & & \vdots & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-6} & -h_{k-1,1}Q_2\\
1 & x & \cdots & x^{k-3} & -R(x)
\end{vmatrix}=0.$$

Expansion of the determinant by its last column and the use of the Formula (33) lead to

$$H_{k-2}R(x)\equiv h_{k-1,1}\frac{H_k}{H_{k-2}}H_{k-3}(x),$$

which completes the proof of (30). □
Remark 1.
The formulas of Theorem 7 allow one to organize the recursive computation of $H_k(x)$ once the polynomials $H_{k-2}(x)$ and $H_{k-3}(x)$ are already computed. The involved factors, such as $h_{k-2,1}$, $h_{k-2,2}$, $h_{k-1,1}$ and $h_{k1}$, are either known as coefficients of $H_{k-2}(x)$, $H_{k-3}(x)$, or can be calculated by Formulas (27). The only exception is the value of $h_{k2}$. For its computation, we suggest the following original result.
Theorem 8.
Under the assumptions $H_{k-2}\ne 0$, $H_{k-1}=0$, one has:

$$h_{k2}=-\frac{1}{H_{k-2}}\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-2}\\
c_1 & c_2 & \cdots & c_{k-1}\\
\vdots & & & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-5}\\
c_k & c_{k+1} & \cdots & c_{2k-2}
\end{vmatrix}^{\,2}-\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-1}\\
c_1 & c_2 & \cdots & c_k\\
\vdots & & & \vdots\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-3}\\
c_{k+1} & c_{k+2} & \cdots & c_{2k}
\end{vmatrix}
=-\frac{1}{H_{k-2}}\left(\sum_{j=0}^{k-2}h_{k-2,j}c_{2k-j-2}\right)^{2}-\sum_{j=1}^{k-1}h_{k-1,j}c_{2k-j}.$$
Proof. 
The proof is based on the equality (25). By replacing there $(k-1)\to k$, we start with

$$h_{k2}=\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-3} & c_{k-1} & c_k\\
c_1 & c_2 & \cdots & c_{k-2} & c_k & c_{k+1}\\
\vdots & & & \vdots & \vdots & \vdots\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-5} & c_{2k-3} & c_{2k-2}\\
c_{k-1} & c_k & \cdots & c_{2k-4} & c_{2k-2} & c_{2k-1}
\end{vmatrix}_{k\times k}
=\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-2} & c_k\\
c_1 & c_2 & \cdots & c_{k-1} & c_{k+1}\\
\vdots & & & \vdots & \vdots\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-4} & c_{2k-2}\\
c_k & c_{k+1} & \cdots & c_{2k-2} & c_{2k}
\end{vmatrix}-\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-2} & c_{k-1}\\
c_1 & c_2 & \cdots & c_{k-1} & c_k\\
\vdots & & & \vdots & \vdots\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-4} & c_{2k-3}\\
c_{k+1} & c_{k+2} & \cdots & c_{2k-1} & c_{2k}
\end{vmatrix}.$$

We represent the first of the two last determinants with the aid of Sylvester's identity (5):

$$\frac{1}{H_{k-2}}\left(H_{k-1}\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-3} & c_k\\
c_1 & c_2 & \cdots & c_{k-2} & c_{k+1}\\
\vdots & & & \vdots & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-6} & c_{2k-3}\\
c_k & c_{k+1} & \cdots & c_{2k-3} & c_{2k}
\end{vmatrix}-\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-2}\\
c_1 & c_2 & \cdots & c_{k-1}\\
\vdots & & & \vdots\\
c_{k-3} & c_{k-2} & \cdots & c_{2k-5}\\
c_k & c_{k+1} & \cdots & c_{2k-2}
\end{vmatrix}^{\,2}\right).$$

Since $H_{k-1}=0$, the expansions of the remaining determinants by their last rows lead to the formula of the theorem. □
As for extensions of the results of the present section in the 19th century, we recall here one generalization due to Kronecker [3] and Frobenius [15]. It relates to the general degenerate case of the vanishing of several entries in the sequence of Hankel determinants $H_1,H_2,\dots$. The key point in their treatment is the number of consecutive zeros in this sequence.
Theorem 9
(Frobenius). Let

$$H_k\ne 0,\quad H_{k+1}=0,\dots,H_{k+r-1}=0,\quad H_{k+r}\ne 0\quad\text{where } r>2.$$

Then

$$H_{k+1}(x)\equiv 0,\dots,H_{k+r-2}(x)\equiv 0$$

and

$$H_{k+r-1}(x)\equiv(-1)^{r(r-1)/2}\frac{B_{0,r-1}^{\,r-1}}{H_k^{\,r-1}}H_k(x),$$

while

$$H_{k+r}(x)\equiv\frac{1}{H_k^{\,r+1}}\left[Q(x)H_k(x)-(-1)^{r(r-1)/2}B_{0,r-1}^{\,r+1}H_{k-1}(x)\right].$$

Here,

$$Q(x)=\begin{vmatrix}
B_{00} & B_{01} & \cdots & B_{0,r-1} & D_0\\
B_{10} & B_{11} & \cdots & B_{1,r-1} & D_0x+D_1\\
\vdots & & & \vdots & \vdots\\
B_{r-1,0} & B_{r-1,1} & \cdots & B_{r-1,r-1} & D_0x^{r-1}+D_1x^{r-2}+\dots+D_{r-1}\\
B_{r0} & B_{r1} & \cdots & B_{r,r-1} & D_0x^r+D_1x^{r-1}+\dots+D_{r-1}x+D_r
\end{vmatrix}\tag{39}$$

with

$$B_{ij}=\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-1} & c_{k+j}\\
c_1 & c_2 & \cdots & c_k & c_{k+j+1}\\
\vdots & & & \vdots & \vdots\\
c_{k-1} & c_k & \cdots & c_{2k-2} & c_{2k+j-1}\\
c_{k+i} & c_{k+i+1} & \cdots & c_{2k+i-2} & c_{2k+i+j}
\end{vmatrix},\qquad
D_i=\begin{vmatrix}
c_0 & c_1 & \cdots & c_{k-1}\\
c_1 & c_2 & \cdots & c_k\\
\vdots & & & \vdots\\
c_{k-2} & c_{k-1} & \cdots & c_{2k-3}\\
c_{k+i-1} & c_{k+i} & \cdots & c_{2k+i-2}
\end{vmatrix}.$$
The $r\times r$ submatrix of (39) standing in the upper left corner happens to possess the Hankel structure, i.e.,

$$B_{i_1j_1}=B_{i_2j_2}\quad\text{if } i_1+j_1=i_2+j_2,\ \{i_1,i_2,j_1,j_2\}\subset\{0,1,\dots,r-1\},$$

with its first generating elements equal to zero: $B_{00}=0,\ B_{01}=0,\dots,B_{0,r-2}=0$.
If the coefficients of $H_{k-1}(x)$ and $H_k(x)$ are known, then the expressions for $D_0,D_1,\dots,D_r$ and $B_{0,r-1}$, $B_{r0}$ can be computed via linear relationships similar to (27). As for the values $B_{ij}$ with $i+j\ge r$, $i>0$, $j>0$, we are confident that there exist formulas for their evaluation similar to that of Theorem 8, i.e., explicit representations as combinations of the coefficients of the polynomials $H_{k-1}(x)$ and $H_k(x)$. We failed to find them in the classical papers, and we have not yet succeeded in deducing them ourselves. Our confidence is based on the coincidence of the results of Theorems 7 and 9 with the algorithm known as the Berlekamp–Massey algorithm [19,20]. Massey suggested it for the problem of finding the minimal polynomial of a linear recurrent sequence. The relationship of this problem to that of finding some characteristics of an appropriate Hankel matrix is due to Kronecker and is partially discussed in [18]; in the papers [27], the backgrounds of this algorithm in terms of systems of linear equations with Hankel matrices have been discussed, with some of the deduced conclusions equivalent to those presented here in terms of Hankel polynomials. The paper [28] extends the orthogonality-oriented approach to the rational interpolation problem initiated in [11] to the case of the vanishing of several leading principal minors of the Hankel matrix. The result is very similar to Theorem 8 (though it is obtained by a different technique).
As mentioned in the Introduction, the results of Theorems 7 and 9 appear to have been lost in the past.

4. Polynomial Interpolation via Symmetric Functions of the Data Set

The classical polynomial interpolation problem consists in finding the polynomial

$$p(x)=p_0x^{N-1}+p_1x^{N-2}+\dots+p_{N-1}$$

satisfying the data set (1), i.e.,

$$p(x_j)=y_j\quad\text{for } j\in\{1,\dots,N\}.\tag{41}$$

This problem always possesses a unique solution, which can be found via resolving the system of linear relationships (41) with the Vandermonde matrix or, in other terms, via the determinant evaluation

$$p(x)\equiv-\frac{1}{\prod_{1\le j<k\le N}(x_k-x_j)}\begin{vmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{N-1} & y_1\\
1 & x_2 & x_2^2 & \cdots & x_2^{N-1} & y_2\\
\vdots & & & & & \vdots\\
1 & x_N & x_N^2 & \cdots & x_N^{N-1} & y_N\\
1 & x & x^2 & \cdots & x^{N-1} & 0
\end{vmatrix}.\tag{42}$$
This computation is usually performed by virtue of some auxiliary constructions, like the representation of the interpolant in either the Newton or the Lagrange form. The latter stems from the expansion of the determinant from (42) by its last column:

$$p(x)\equiv\sum_{j=1}^Ny_j\frac{W_j(x)}{W_j(x_j)}=\sum_{j=1}^Ny_j\frac{W_j(x)}{W'(x_j)},\tag{43}$$

where

$$W(x)=\prod_{j=1}^N(x-x_j)$$

and

$$W_k(x)=\frac{W(x)}{x-x_k}\equiv\prod_{\substack{j=1\\ j\ne k}}^N(x-x_j)\quad\text{for } k\in\{1,\dots,N\}.$$

The following easily deduced equality will be used in some further proofs:

$$\prod_{j=1}^NW_j(x_j)=\prod_{j=1}^NW'(x_j)=(-1)^{N(N-1)/2}\prod_{1\le j<k\le N}(x_k-x_j)^2.$$
It should be noted that the representation (43) does not immediately provide one with the canonical form of the interpolant, i.e., an explicit expression for its coefficients. In order to extract them from this formula, let us first prove a preliminary result.
Theorem 10.
Let G ( x ) G 0 x N 1 + G 1 x N 2 + + G N 1 be a polynomial of a degree at most N 1 . Then the following equalities are valid
j = 1 N G ( x j ) W ( x j ) = 0 i f deg G < N 1 ; G 0 i f deg G = N 1 .
Proof. 
Use the following Lagrange formula:
G ( x ) W ( x ) G ( x 1 ) W ( x 1 ) ( x x 1 ) + G ( x 2 ) W ( x 2 ) ( x x 2 ) + + G ( x N ) W ( x N ) ( x x N )
provided that { G ( x j ) 0 } j = 1 N . From this the expansion of the fraction G ( x ) / W ( x ) in the Laurent series in negative powers of x can be derived:
G ( x ) W ( x ) j = 1 N G ( x j ) W ( x j ) 1 x + j = 1 N G ( x j ) x j W ( x j ) 1 x 2 + + j = 1 N G ( x j ) x j k W ( x j ) 1 x k + 1 +
On the other hand, multiplying the formal expansion
G ( x ) W ( x ) c 0 x + c 1 x 2 + + c k x k + 1 +
by W ( x ) x N + w 1 x N 1 + + w N , one obtains the following formulas for determining the coefficients c 0 , c 1 , recursively:
G 0 = c 0 , G 1 = c 0 w 1 + c 1 , G 2 = c 0 w 2 + c 1 w 1 + c 2 ,
Comparing the two obtained forms for the expansion of G ( x ) / W ( x ) yields the claimed equalities (47).  □
Setting G ( x ) x k in (47) results in
Corollary 1.
The following Euler–Lagrange equalities are valid:
j = 1 N x j k W ( x j ) = 0 i f k { 0 , , N 2 } ; 1 i f k = N 1 .
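The equalities (48) are easy to verify numerically. A small sketch in Python (exact rational arithmetic; the function name and the nodes are an arbitrary illustration of ours, not data from the paper):

```python
from fractions import Fraction

def aleph_sum(k, nodes):
    """Compute sum_j x_j^k / W'(x_j), where W'(x_j) = prod_{i != j} (x_j - x_i)."""
    total = Fraction(0)
    for j, xj in enumerate(nodes):
        wp = Fraction(1)
        for i, xi in enumerate(nodes):
            if i != j:
                wp *= Fraction(xj) - Fraction(xi)
        total += Fraction(xj) ** k / wp
    return total
```

By (48) the sum vanishes for k in {0, …, N−2} and equals 1 for k = N−1; for k ≥ N it yields complete homogeneous symmetric polynomials of the nodes (cf. Remark 2).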
Corollary 2.
Let
G ( x ) G 0 x M + G 1 x M 1 + + G M , F ( x ) F 0 x n + F 1 x n 1 + + F n ,
G 0 0 , F 0 0 , and F ( x ) possesses only simple zeros λ 1 , , λ n not coinciding with x 1 , , x N . Then the following equalities are valid
j = 1 N G ( x j ) x j k F ( x j ) W ( x j ) = = 1 n G ( λ ) λ k F ( λ ) W ( λ ) i f k { 0 , , N + n M 2 } ; G 0 F 0 = 1 n G ( λ ) λ N + n M 1 F ( λ ) W ( λ ) i f k = N + n M 1 .
Theorem 11.
Calculate two sequences of values:
σ k = j = 1 N x j N + k 1 W ( x j ) f o r k { 1 , , N }
and
τ k = j = 1 N y j x j k W ( x j ) f o r k { 0 , , N 1 } .
The following recursive formulas connect the values (50) and (51) with the coefficients of the interpolation polynomial:
τ 0 = p 0 , τ k = p 0 σ k + p 1 σ k 1 + + p k 1 σ 1 + p k f o r k { 1 , , N 1 } .
Proof. 
Due to equalities (48) one has
τ k = j = 1 N x j k y j W ( x j ) = j = 1 N ( p 0 x j N 1 + p 1 x j N 2 + + p k x j N k 1 + + p N 1 ) x j k W ( x j ) = p 0 σ k + p 1 σ k 1 + + p k 1 σ 1 + p k .
 □
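The recursion (52) is triangular, so the coefficients p 0 , , p N 1 can be unwound one by one. A minimal Python sketch in exact arithmetic (the function name and the test data are ours, for illustration only):

```python
from fractions import Fraction

def interp_coeffs(xs, ys):
    """Recover p_0, ..., p_{N-1} (descending powers) via the recursion (52):
    tau_0 = p_0,  tau_k = p_0 sigma_k + ... + p_{k-1} sigma_1 + p_k."""
    N = len(xs)
    # W'(x_j) = prod_{k != j} (x_j - x_k)
    wp = []
    for j in range(N):
        r = Fraction(1)
        for k in range(N):
            if k != j:
                r *= Fraction(xs[j]) - Fraction(xs[k])
        wp.append(r)
    # sigma_k = sum_j x_j^(N+k-1) / W'(x_j),  tau_k = sum_j y_j x_j^k / W'(x_j)
    sigma = [sum(Fraction(xs[j]) ** (N + k - 1) / wp[j] for j in range(N))
             for k in range(N)]
    tau = [sum(Fraction(ys[j]) * Fraction(xs[j]) ** k / wp[j] for j in range(N))
           for k in range(N)]
    p = [tau[0]]
    for k in range(1, N):
        p.append(tau[k] - sum(p[i] * sigma[k - i] for i in range(k)))
    return p
```

For data generated by y = x^2 + 1, the routine returns the coefficients of the generating polynomial (with leading zeros when the interpolant's degree is below N − 1).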
Remark 2.
In the 19th century publications, the expression (50), treated as a symmetric function of x 1 , , x N , was known as the aleph function ((Germ.) Alephfunktion ) and was denoted by ℵ k ( x 1 , , x N ) [29]. Despite its rational-looking definition, it is in fact a complete homogeneous symmetric polynomial, i.e., it equals the sum of all possible monomials of degree k in the variables x 1 , , x N . Thus, for instance,
σ 2 ( x 1 , x 2 , x 3 ) x 1 2 + x 2 2 + x 3 2 + x 1 x 2 + x 1 x 3 + x 2 x 3 .
The expression τ k defined by (51) can also be treated as a symmetric function of the pairs of variables ( x 1 , y 1 ) , ( x 2 , y 2 ) , , ( x N , y N ) , i.e., a function whose value is unchanged when any of the pairs are interchanged. Although the present authors suspect that the result of Theorem 11 is contained in some old algebra textbook, this source has so far remained hidden from them.
We now turn to an alternative representation for the interpolant.
Theorem 12.
Assume that y j ≠ 0 for j { 1 , , N } . Calculate the sequence of values:
τ ˜ k = j = 1 N 1 y j x j k W ( x j ) f o r k { 0 , , 2 N 2 } .
The interpolation polynomial can be represented in the Hankel polynomial form:
p ( x ) = ( 1 ) N ( N 1 ) / 2 j = 1 N y j τ ˜ 0 τ ˜ 1 τ ˜ 2 τ ˜ N 1 τ ˜ 1 τ ˜ 2 τ ˜ 3 τ ˜ N τ ˜ N 2 τ ˜ N 1 τ ˜ N τ ˜ 2 N 3 1 x x 2 x N 1 H N 1 ( x ; { τ ˜ } ) .
Proof. 
Assume that the interpolation polynomial (40) exists. Rewrite the equalities (41) in the form
p N 1 1 y j + p N 2 x j y j + + p 1 x j N 2 y j + p 0 x j N 1 y j = 1 for j { 1 , , N } .
Multiply each of these equalities by the corresponding multiplier 1 / W ( x j ) and sum the obtained results. Due to (48), one gets
p N 1 τ ˜ 0 + p N 2 τ ˜ 1 + + p 1 τ ˜ N 2 + p 0 τ ˜ N 1 = 0 .
Similar equalities result from multiplication of (54) by { x j k / W ( x j ) } k = 1 N 2 :
p N 1 τ ˜ 1 + p N 2 τ ˜ 2 + + p 1 τ ˜ N 1 + p 0 τ ˜ N = 0 , p N 1 τ ˜ 2 + p N 2 τ ˜ 3 + + p 1 τ ˜ N + p 0 τ ˜ N + 1 = 0 , , p N 1 τ ˜ N 2 + p N 2 τ ˜ N 1 + + p 1 τ ˜ 2 N 4 + p 0 τ ˜ 2 N 3 = 0 .
A linear combination of the equalities (54) with the multipliers x j N 1 / W ( x j ) yields a different result:
p N 1 τ ˜ N 1 + p N 2 τ ˜ N + + p 1 τ ˜ 2 N 3 + p 0 τ ˜ 2 N 2 = 1 .
Unifying all the obtained equalities with (40), one obtains a system of linear equations with respect to p N 1 , p N 2 , , p 0 . If the interpolant exists, then it should satisfy the following identity:
τ ˜ 0 τ ˜ 1 τ ˜ 2 τ ˜ N 1 0 τ ˜ 1 τ ˜ 2 τ ˜ 3 τ ˜ N 0 τ ˜ N 2 τ ˜ N 1 τ ˜ N τ ˜ 2 N 3 0 τ ˜ N 1 τ ˜ N τ ˜ N + 1 τ ˜ 2 N 2 1 1 x x 2 x N 1 p ( x ) 0 .
Expansion of the determinant by its last column gives
p ( x ) H N 1 ( x , { τ ˜ } ) H N ( { τ ˜ } )
provided that H N ( { τ ˜ } ) 0 . To prove the latter, represent the determinant as the product:
H N ( { τ ˜ } ) = 1 1 1 x 1 x 2 x N x 1 2 x 2 2 x N 2 x 1 N 1 x 2 N 1 x N N 1 · 1 y 1 W ( x 1 ) 0 0 0 1 y 2 W ( x 2 ) 0 0 0 1 y N W ( x N ) × 1 x 1 x 1 2 x 1 N 1 1 x 2 x 2 2 x 2 N 1 1 x N x N 2 x N N 1 = 1 j < k N ( x k x j ) 2 j = 1 N y j j = 1 N W ( x j ) = ( 46 ) ( 1 ) N ( N 1 ) / 2 j = 1 N y j .
Therefore, the denominator in the right-hand side of (55) does not vanish.
One can utilize the last expression for the direct deduction of the validity of the Formula (53) as a solution to the polynomial interpolation problem. Indeed, let us substitute x = x 1 into the numerator of the fraction in the right-hand side of (53):
H N 1 ( x 1 ; { τ ˜ } ) = τ ˜ 0 τ ˜ 1 τ ˜ 2 τ ˜ N 1 τ ˜ 1 τ ˜ 2 τ ˜ 3 τ ˜ N τ ˜ N 2 τ ˜ N 1 τ ˜ N τ ˜ 2 N 3 1 x 1 x 1 2 x 1 N 1 N × N
( 14 ) ( 1 ) N + 1 τ ˜ 1 x 1 τ ˜ 0 τ ˜ 2 x 1 τ ˜ 1 τ ˜ N 1 x 1 τ ˜ N 2 τ ˜ 2 x 1 τ ˜ 1 τ ˜ 3 x 1 τ ˜ 2 τ ˜ N x 1 τ ˜ N 1 τ ˜ N 1 x 1 τ ˜ N 2 τ ˜ N x 1 τ ˜ N 1 τ ˜ 2 N 3 x 1 τ ˜ 2 N 4 ( N 1 ) × ( N 1 ) .
Consider an entry of the last determinant
τ ˜ k x 1 τ ˜ k 1 = x 1 k y 1 W ( x 1 ) + x 2 k y 2 W ( x 2 ) + + x N k y N W ( x N ) x 1 k y 1 W ( x 1 ) x 2 k 1 x 1 y 2 W ( x 2 ) x N k 1 x 1 y N W ( x N ) = x 2 k 1 ( x 2 x 1 ) y 2 W ( x 2 ) + + x N k 1 ( x N x 1 ) y N W ( x N ) .
It can be easily verified that
x j x 1 W ( x j ) = 1 W 1 ( x j ) for j { 2 , , N } and W 1 ( x ) ( 45 ) k = 2 N ( x x k ) .
Thus, introducing the sequence
T ˜ k = τ ˜ k x 1 τ ˜ k 1 = j = 2 N x j k y j W 1 ( x j ) for k { 0 , , 2 N 4 }
one gets
H N 1 ( x 1 ; { τ ˜ } ) = ( 1 ) N 1 T ˜ 0 T ˜ 1 T ˜ N 2 T ˜ 1 T ˜ 2 T ˜ N 1 T ˜ N 2 T ˜ N 1 T ˜ 2 N 4 = ( 1 ) N 1 H ( T ˜ ) .
The last determinant can be evaluated by Formula (56):
H N 1 ( x 1 ; { τ ˜ } ) = ( 1 ) N 1 ( 1 ) ( N 1 ) ( N 2 ) / 2 / j = 2 N y j .
Combining this equality with (56), one gets in (55): p ( x 1 ) = y 1 . Other nodes x j are dealt with similarly.  □
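Formula (53) can be checked directly in exact arithmetic. The sketch below (Python; the function names and the test data are our own illustration) assumes all y j ≠ 0, as required by Theorem 12:

```python
from fractions import Fraction

def det(M):
    """Exact determinant via fraction-preserving Gaussian elimination."""
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

def hankel_interp_eval(x, xs, ys):
    """Evaluate the interpolant via (53): (-1)^(N(N-1)/2) * prod(y_j) * H_{N-1}(x; {tau~})."""
    N = len(xs)
    wp = [Fraction(1)] * N
    for j in range(N):
        for k in range(N):
            if k != j:
                wp[j] *= Fraction(xs[j]) - Fraction(xs[k])
    # tau~_k = sum_j x_j^k / (y_j W'(x_j)) for k = 0..2N-2
    t = [sum(Fraction(xs[j]) ** k / (Fraction(ys[j]) * wp[j]) for j in range(N))
         for k in range(2 * N - 1)]
    M = [[t[i + j] for j in range(N)] for i in range(N - 1)]
    M.append([Fraction(x) ** j for j in range(N)])
    prod_y = Fraction(1)
    for y in ys:
        prod_y *= Fraction(y)
    sign = -1 if (N * (N - 1) // 2) % 2 else 1
    return sign * prod_y * det(M)
```

For the data generated by y = x^2 + 1 at the nodes 0, 1, 2 the determinant formula reproduces both the nodal values and the value 10 at x = 3.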
At first glance, Formula (53) seems to have no advantage over the algorithm from Theorem 11. We dispel doubts about its practical utility in Section 9. The present section is concluded by a technical result.
Theorem 13.
Under the condition of Theorem 12, one has
H N ( x ; { τ ˜ } ) H N ( { τ ˜ } ) j = 1 N ( x x j ) ( 1 ) N ( N 1 ) / 2 j = 1 N y j j = 1 N ( x x j ) .
Proof. 
Let us prove that
H N ( x j ; { τ ˜ } ) = 0 , for j { 1 , , N } .
The linear combination
j = 1 N x j k W ( x j ) H N ( x j ; { τ ˜ } ) = τ ˜ 0 τ ˜ 1 τ ˜ N τ ˜ 1 τ ˜ 2 τ ˜ N + 1 τ ˜ N 1 τ ˜ N τ ˜ 2 N 1 τ ˜ k τ ˜ k + 1 τ ˜ N + k .
equals zero for k { 0 , , N 1 } . Therefore we have N linear homogeneous equalities
j = 1 N x j k 1 W ( x j ) H N ( x j ; { τ ˜ } ) = 0 , j { 1 , , N }
valid for N values H N ( x j ; { τ ˜ } ) j = 1 N . Since
det x j k 1 W ( x j ) j , k = 1 N = ( 46 ) ( 1 ) N ( N 1 ) / 2 1 j < k N ( x k x j ) 0 ,
all the equalities (60) are valid. Thus, we know all the zeros of the polynomial H N ( x ; { τ ˜ } ) , while its leading coefficient equals to H N ( { τ ˜ } ) . The latter has been already evaluated by (56). This completes the proof.  □

5. Rational Interpolation

We now search for polynomials (6) such that the rational function
r ( x ) = p ( x ) / q ( x )
satisfies the data set (1), i.e.,
r ( x j ) = y j for j { 1 , , N = n + m + 1 } .
Hereinafter, we will not distinguish between solutions to the problem whose numerators and denominators differ by a common numerical factor.
The first solution to the problem was suggested in the form of generalization of the Lagrange representation (43) for the polynomial interpolant.
Theorem 14
(Cauchy, ref. [2]). Denote
W j 1 j 2 j ¯ ( x ) = s = 1 ( x x j s ) a n d W j 1 j 2 j ( x ) = W ( x ) W j 1 j 2 j ¯ ( x ) .
Rational interpolant (61) can be found via the formulas
p ( x ) = ( j 1 , j 2 , , j m + 1 ) s = 1 m + 1 y j s s = 1 m + 1 W j 1 j 2 j m + 1 ( x j s ) W j 1 j 2 j m + 1 ( x )
and
q ( x ) = ( 1 ) m n ( j 1 , j 2 , , j m ) = 1 m y j j { 1 , , N } { j 1 , , j m } W j 1 j 2 j m ¯ ( x j ) W j 1 j 2 j m ¯ ( x ) .
Both sums are taken over all combinations of m + 1 and, respectively, m integers from the set { 1 , , N } .
Being valid generically, Cauchy’s solution fails for some particular choices of the data set (1). Whereas the polynomial interpolation problem always has a solution, its rational counterpart is not always solvable. This defect was first discovered by Kronecker [3] and later discussed by Netto [4]. To exemplify this, we first generate from the condition (62) the system of equations
p ( x j ) = y j q ( x j ) for j { 1 , , N }
or, equivalently,
p n + p n 1 x j + + p 0 x j n = q m y j + q m 1 x j y j + + q 0 x j m y j for j { 1 , , N }
which is linear with respect to the N + 1 coefficients of p ( x ) and q ( x ) . The principal solvability of this system can be established with the aid of linear algebra methods, for instance via the Gaussian elimination procedure.
Example 2.
Given the data set
x 1 0 1 2 3 y 1 1 1 / 3 3 1 / 13
find its rational interpolant in the form r ( x ) = p ( x ) / q ( x ) with deg p ( x ) = 1 , deg q ( x ) = 3 .
Solution. Resolving the system (64), one gets the expressions:
p ( x ) x 2 , q ( x ) x 3 x 2 x 2 .
However, p ( 2 ) = 0 and q ( 2 ) = 0 , and therefore the condition r ( 2 ) = 3 is not satisfied. It is not satisfied even if we cancel the common linear factor from the numerator and denominator.
The explanation for this phenomenon of unattainable points lies in the non-equivalence of the passage from (62) to (63): for some node x j one might obtain a solution to the linear system (64) satisfying both conditions p ( x j ) = 0 and q ( x j ) = 0 .
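The passage from (62) to (63) can be examined computationally: the sketch below (Python, exact arithmetic; all names and the test data are ours) builds the homogeneous system (64) and returns one kernel vector. For a data set with an unattainable point, the vector obtained this way exhibits a common zero of p ( x ) and q ( x ) at the offending node.

```python
from fractions import Fraction

def cauchy_system_kernel(xs, ys, n, m):
    """One nonzero solution of the homogeneous system (64),
    p(x_j) - y_j q(x_j) = 0, unknowns (p_0..p_n, q_0..q_m)."""
    rows = []
    for xj, yj in zip(xs, ys):
        xj, yj = Fraction(xj), Fraction(yj)
        row = [xj ** (n - i) for i in range(n + 1)]           # coefficients of p
        row += [-yj * xj ** (m - i) for i in range(m + 1)]    # coefficients of q
        rows.append(row)
    ncols = n + m + 2
    r, pivots = 0, []
    for c in range(ncols):                  # reduce to reduced row echelon form
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(ncols) if c not in pivots)
    v = [Fraction(0)] * ncols
    v[free] = Fraction(1)
    for i, c in enumerate(pivots):
        v[c] = -rows[i][free]
    return v   # [p_0, ..., p_n, q_0, ..., q_m], up to a scalar factor
```

For the illustrative data generated by ( x + 1 ) / ( x^2 + 1 ) at the nodes 0, 1, 2, 3 (so n = 1, m = 2), the kernel is one-dimensional and the routine recovers the generating fraction.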
On the other hand, the solution to the problem might not be unique.
Example 3.
For the data set
x 1 0 1 2 3 y 1 1 1 / 3 1 / 7 1 / 13
generated by the rational function 1 / ( x 2 + x + 1 ) , there exist infinitely many rational interpolants with deg p ( x ) = 1 , deg q ( x ) = 3 in the form ( x λ ) / ( ( x λ ) ( x 2 + x + 1 ) ) , where λ { 1 , 0 , 1 , 2 , 3 } .
We now pass to an alternative approach to the problem, due to Jacobi.
Theorem 15.
Let y j 0 for j { 1 , , N } . Compute the values
τ k = j = 1 N y j x j k W ( x j ) f o r k { 0 , , 2 m }
and
τ ˜ k = j = 1 N 1 y j x j k W ( x j ) f o r k { 0 , , 2 n 1 } ,
and generate the corresponding Hankel polynomials H m ( x ; { τ } ) and H n ( x ; { τ ˜ } ) . If
H n ( { τ ˜ } ) 0
and
H m ( x j ; { τ } ) 0 f o r j { 1 , , N }
then there exists a unique rational interpolant with deg p ( x ) = n , deg q ( x ) = m = N n 1 , which can be expressed as:
p ( x ) = H m + 1 ( { τ } ) H n ( x ; { τ ˜ } ) = τ 0 τ 1 τ m τ 1 τ 2 τ m + 1 τ m 1 τ m τ 2 m 1 τ m τ m + 1 τ 2 m · τ ˜ 0 τ ˜ 1 τ ˜ n τ ˜ 1 τ ˜ 2 τ ˜ n + 1 τ ˜ n 1 τ ˜ n τ ˜ 2 n 1 1 x x n ,
q ( x ) = H n ( { τ ˜ } ) H m ( x ; { τ } ) = τ ˜ 0 τ ˜ 1 τ ˜ n 1 τ ˜ 1 τ ˜ 2 τ ˜ n τ ˜ n 1 τ ˜ n τ ˜ 2 n 2 · τ 0 τ 1 τ m τ 1 τ 2 τ m + 1 τ m 1 τ m τ 2 m 1 1 x x m .
Proof. 
The proof can be found in [11]; here we recall only the underlying idea leading to the appearance of Hankel polynomials. If the desired interpolant exists, then the equalities (64) are valid. Multiply the jth equality by x j k / W ( x j ) for k { 0 , , m 1 } and sum the obtained equalities over j. Due to the Euler–Lagrange equalities (48), one arrives at the system of equations
q m τ 0 + q m 1 τ 1 + + q 0 τ m = 0 , q m τ 1 + q m 1 τ 2 + + q 0 τ m + 1 = 0 , , q m τ m 1 + q m 1 τ m + + q 0 τ 2 m 1 = 0 .
Therefore, the denominator of the fraction should satisfy the relation
A q ( x ) τ 0 τ 1 τ m τ 1 τ 2 τ m + 1 τ m 1 τ m τ 2 m 1 1 x x m
for some constant factor A.
In a similar way, multiplying the equalities
p n 1 y j + p n 1 x j y j + + p 0 x j n y j = q m + q m 1 x j + + q 0 x j m , j { 1 , , N }
by x j ℓ / W ( x j ) for ℓ { 0 , , n 1 } and summing over j, one gets the equality for the numerator:
B p ( x ) τ ˜ 0 τ ˜ 1 τ ˜ n τ ˜ 1 τ ˜ 2 τ ˜ n + 1 τ ˜ n 1 τ ˜ n τ ˜ 2 n 1 1 x x n H n ( { τ ˜ } ) x n +
for some constant factor B. Due to assumption (67), B ≠ 0 and deg p ( x ) = n . This can be interpreted as a polynomial interpolation problem for the data set { ( x j , 1 / y j ) } j = 1 N . Thereby the rational interpolation problem is reduced to the polynomial interpolation problem discussed in Section 4.  □
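Theorem 15 translates into a short procedure: form the sequences τ and τ ˜, evaluate the two Hankel polynomials, and fix the common scalar factor. Instead of computing H m + 1 ( { τ } ) and H n ( { τ ˜ } ) explicitly, the Python sketch below (exact arithmetic; all names and the test data are our own illustration) matches the first data point:

```python
from fractions import Fraction

def det(M):
    """Exact determinant via fraction-preserving Gaussian elimination."""
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

def hankel_poly_eval(seq, deg, t):
    """Evaluate the Hankel polynomial H_deg(t; {seq}); H_0 = 1 by convention."""
    if deg == 0:
        return Fraction(1)
    M = [[seq[i + j] for j in range(deg + 1)] for i in range(deg)]
    M.append([Fraction(t) ** j for j in range(deg + 1)])
    return det(M)

def rational_interp_eval(x, xs, ys, n, m):
    """Evaluate r = p/q via Theorem 15: q ~ H_m(x;{tau}), p ~ H_n(x;{tau~});
    the common scalar factor is fixed by matching the first data point."""
    N = len(xs)
    assert N == n + m + 1
    wp = [Fraction(1)] * N
    for j in range(N):
        for k in range(N):
            if k != j:
                wp[j] *= Fraction(xs[j]) - Fraction(xs[k])
    tau = [sum(Fraction(ys[j]) * Fraction(xs[j]) ** k / wp[j] for j in range(N))
           for k in range(max(2 * m, 1))]
    ttau = [sum(Fraction(xs[j]) ** k / (Fraction(ys[j]) * wp[j]) for j in range(N))
            for k in range(max(2 * n, 1))]
    P = lambda t: hankel_poly_eval(ttau, n, t)
    Q = lambda t: hankel_poly_eval(tau, m, t)
    C = Fraction(ys[0]) * Q(xs[0]) / P(xs[0])
    return C * P(x) / Q(x)
```

For the data generated by ( x + 1 ) / ( x^2 + 1 ) at the nodes 0, 1, 2, 3 (n = 1, m = 2), the routine reproduces the generating fraction both at the nodes and elsewhere.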
Corollary 1.
The following relationship is valid:
H n ( { τ ˜ } ) H m ( { τ } ) = H n + 1 ( { τ ˜ } ) H m + 1 ( { τ } ) .
Example 4.
Given the data set
x 3 2 1 0 1 2 3 y 3 1 4 1 5 9 3
find all rational functions r ( x ) = p ( x ) / q ( x ) with deg p ( x ) + deg q ( x ) 6 satisfying it.
Solution. Since we do not know a priori the degrees of the numerator and the denominator of r ( x ) , we have to compute the sequences (65) and (66) for the maximal possible number of elements:
τ 0 = 61 720 , τ 1 = 9 80 , τ 2 = 17 240 , , τ 12 = 981007 240 ;
τ ˜ 0 = 77 2880 , τ ˜ 1 = 119 8640 , τ ˜ 2 = 167 8640 , , τ ˜ 12 = 3923929 8640 .
Now, compute the Hankel polynomials of the first and the second order:
H 1 ( x ; { τ } ) = 61 720 x + 9 80 , H 2 ( x ; { τ } ) = 403 21600 h 2 , 0 x 2 + 37 720 h 2 , 1 x + 379 7200 h 2 , 2 .
Computation of H 3 ( x ; { τ } ) can be organized with the aid of the JJ-identity (24):
H 3 ( x ; { τ } ) h 3 , 0 h 2 , 0 2 H 1 ( x ; { τ } ) + h 3 , 0 h 2 , 0 x h 2 , 1 h 2 , 0 + h 3 , 1 h 3 , 0 H 2 ( x ; { τ } )
where all the constants are already known except for h 3 , 0 = H 3 ( { τ } ) and h 3 , 1 . To find the latter, utilize the equalities (27)
h 3 , 0 = H 3 ( { τ } ) = τ 4 h 2 , 0 + τ 3 h 2 , 1 + τ 2 h 2 , 2 = 1379 64800 ,
h 3 , 1 = ( τ 5 h 2 , 0 + τ 4 h 2 , 1 + τ 3 h 2 , 2 ) = 127 10800 .
Therefore,
H 3 ( x ; { τ } ) 1379 64800 x 3 + 127 10800 x 2 + 5111 64800 x 17 1200 .
Continuing the recursive utilization of the JJ-identity (24), we get further:
H 4 ( x ; { τ } ) 1609 21600 x 4 347 10800 x 3 + 368 675 x 2 2201 5400 x 111 160 , H 5 ( x ; { τ } ) 763 1440 x 5 + 301 144 x 4 + 1715 288 x 3 2947 144 x 2 + 301 80 x + 357 16 , H 6 ( x ; { τ } ) 693 16 h 6 , 0 x 6 357 16 h 6 , 1 x 5 9201 16 x 4 + 3489 16 x 3 + 7149 4 x 2 621 4 x 1620 h 6 , 6 .
We need one extra computation, namely
H 7 ( { τ } ) = τ 12 h 6 , 0 + τ 11 h 6 , 1 + + τ 6 h 6 , 6 = 1620 .
Thus, all the potential denominators of the interpolation fractions are computed. In parallel, similar recursive procedure can be organized for the numerator’s computation:
H 1 ( x ; { τ ˜ } ) = 77 2880 x 119 8640 , H 2 ( x ; { τ ˜ } ) = 763 2332800 h ˜ 2 , 0 = H 2 ( { τ ˜ } ) x 2 + 301 233280 h ˜ 2 , 1 x + 37 86400 h ˜ 2 , 2 ,
h ˜ 3 , 0 = H 3 ( { τ ˜ } ) = τ ˜ 4 h ˜ 2 , 0 + τ ˜ 3 h ˜ 2 , 1 + τ ˜ 2 h ˜ 2 , 2 = 1609 34992000 ,
h ˜ 3 , 1 = ( τ ˜ 5 h ˜ 2 , 0 + τ ˜ 4 h ˜ 2 , 1 + τ ˜ 3 h ˜ 2 , 2 ) = 347 17496000 ,
H 3 ( x ; { τ ˜ } ) h ˜ 3 , 0 h ˜ 2 , 0 2 H 1 ( x ; { τ ˜ } ) + h ˜ 3 , 0 h ˜ 2 , 0 x h ˜ 2 , 1 h ˜ 2 , 0 + h ˜ 3 , 1 h ˜ 3 , 0 H 2 ( x ; { τ ˜ } ) 1609 34992000 x 3 347 17496000 x 2 7181 34992000 x + 17 1944000 , H 4 ( x ; { τ ˜ } ) 1379 104976000 x 4 + 127 17496000 x 3 4771 52488000 x 2 13 81000 x 379 11664000 , H 5 ( x ; { τ ˜ } ) 403 34992000 x 5 + 37 1166400 x 4 707 6998400 x 3 13 43200 x 2 13 72000 x 1 14400 , H 6 ( x ; { τ ˜ } ) 61 1166400 x 6 + 1 14400 x 5 + 181 233280 x 4 17 25920 x 3 841 291600 x 2 + 1 3600 x 1 1620 .
Now we are able to compose the set of rational interpolants:
r 0 , 6 ( x ) = H 7 ( { τ } ) H 6 ( x ; { τ } ) 8640 231 x 6 119 x 5 3067 x 4 + 1163 x 3 + 9532 x 2 828 x 8640 , r 1 , 5 ( x ) = h 6 , 0 H 1 ( x ; { τ ˜ } ) h ˜ 1 , 0 H 5 ( x ; { τ } ) 270 ( 33 x + 17 ) 109 x 5 430 x 4 1225 x 3 + 4210 x 2 774 x 4590 , r 2 , 4 ( x ) = h 5 , 0 H 2 ( x ; { τ ˜ } ) h ˜ 2 , 0 H 4 ( x ; { τ } ) 15 ( 763 x 2 + 3010 x + 999 ) 1609 x 4 + 694 x 3 11776 x 2 + 8804 x + 14985 , ,
and finally the polynomial interpolant
r 6 , 0 ( x ) = h 1 , 0 H 6 ( x ; { τ ˜ } ) h ˜ 6 , 0 61 720 x 6 9 80 x 5 181 144 x 4 + 17 16 x 3 + 841 180 x 2 9 20 x + 1 .
The formulation of Theorem 15 is due to the authors of [11]. Jacobi did not concern himself in [9] with the questions of existence or uniqueness of a solution to the interpolation problem. He merely suggested that the denominator of the (potential candidate) rational interpolant can be represented in the form of the Hankel polynomial H m ( x ; { τ } ) . Once it is computed, the rational interpolation problem is reduced to the polynomial interpolation one for the numerator of the fraction. Neither did Jacobi concern himself with computational optimization of the algorithm, such as the one outlined in the solution to Example 4 exploiting his own Formula (24).
We next take a closer look at this JJ-identity. The linear relation of the type (4) connecting three consecutive Hankel polynomials generated by an arbitrary sequence resembles the similar-looking recurrence identity for the classical orthogonal polynomials [30,31]:
P k ( x ) ( A k x + B k ) P k 1 ( x ) + C k P k 2 ( x ) , k { 2 , 3 , }
for some real constants A k , B k , C k . The proof of this identity is traditionally carried out using the orthogonality of the system { P j ( x ) } j = 0 k . For instance, in the case of monic polynomials P j ( x ) , the following formula for the evaluation of B k is suggested:
B k = x P k 1 ( x ) , P k 1 ( x ) / P k 1 ( x ) , P k 1 ( x )
where the inner product for a pair of real polynomials P ( x ) , Q ( x ) is defined via
P ( x ) , Q ( x ) = a b P ( t ) Q ( t ) w ( t ) d t
with a weight function w ( x ) that is positive and integrable on the interval a < x < b .
The relationship of orthogonal polynomials to the Hankel ones is evident from the form of the Gram matrix for the sequence
1 , x , x 2 , , x n .
This matrix is of Hankel type: for
P ( x ) = P 0 + P 1 x + + P n x n , Q ( x ) = Q 0 + Q 1 x + + Q n x n ,
one has
P ( x ) , Q ( x ) = ( P 0 , P 1 , , P n ) M ( Q 0 , Q 1 , , Q n )
where
M = μ j + k 2 j , k = 1 n + 1 ; { μ j = x j , 1 } j = 0 2 n .
The Gram–Schmidt orthogonalization process applied to the sequence (74) can be interpreted as that of successive computation of Hankel polynomials { H k ( x ; { μ } ) } k = 1 n .
Though orthogonal polynomials are Hankel polynomials, the converse is not necessarily true. An immediate counterexample demonstrates this: for the generating sequence { 1 , 1 , 1 , 1 , 161 , 999 } , the first three Hankel polynomials are as follows
H 1 ( x ) x + 1 , H 2 ( x ) 0 , H 3 ( x ) 25600 ( x + 1 ) ,
and H 1 ( x ) , H 3 ( x ) cannot be orthogonal. A necessary condition for the orthogonality of a sequence of Hankel polynomials is the positive definiteness of their Hankel matrix. In this context, the result of Theorem 6 cannot be reduced to the case of orthogonal polynomials; it indeed needs an orthogonality-free proof.
The authors of [11] take a step further. Considering the Hankel matrix M = τ j + k 2 j , k = 1 N generated by (65), they replace the requirement of its positive definiteness by that of non-vanishing of all its leading principal minors H 1 ( { τ } ) , , H N ( { τ } ) . Formula (75) is kept for the definition of the inner product and it becomes equivalent to
P ( x ) , Q ( x ) = j = 1 N y j W ( x j ) P ( x j ) Q ( x j ) .
In this way, some of the traditional properties of the classical inner product are lost. However, a relationship of this type connecting the (quasi-)orthogonal Hankel polynomials H 1 ( x ; { τ } ) , H 2 ( x ; { τ } ) , remains valid, with the new definition of the inner product utilized in formulas like (73) for computation of the involved constants. For finding H k ( x ; { τ } ) (provided that H k 2 ( x ; { τ } ) and H k 1 ( x ; { τ } ) are known), either 2 k + 3 inner product computations in R k + 1 via (75) are needed or, equivalently, the same number of polynomial evaluations for the inner product in the form (76).
Returning to Formula (27) for computation of the constants in the JJ-identity (4), one can notice that its cost is equivalent to only 2 inner products in R k + 1 . We cannot resolve the perplexing question of why the more general and computationally effective result has been completely forgotten; we have not found any reference to it later than 1910.
We next intend to determine which assumption among those posed in Theorem 15 is responsible for the uniqueness of the solution to the rational interpolation problem, i.e., the one that prevents cases like that of Example 3.

6. Resultant Interpolation

Let us clarify the relationship of the Hankel determinants H m + 1 ( { τ } ) and H n ( { τ ˜ } ) , which appeared in Theorem 15, to the problem of existence of a nontrivial common factor for the numerator and the denominator of the rational interpolant. This is closely related to the notion mentioned in Section 2, namely the resultant R ( p ( x ) , q ( x ) ) of the polynomials (6).
Given the polynomials (6) compute their values
p ( x j ) , q ( x j ) for j { 1 , , N = m + n + 1 }
and assume that none of them is zero. Generate the Hankel matrices
H ( τ ) = τ i + j 2 i , j = 1 m + 1 , H ( τ ˜ ) = τ ˜ i + j 2 i , j = 1 n + 1 .
by the sequences
τ k = j = 1 N p ( x j ) q ( x j ) x j k W ( x j ) k = 0 2 m , τ ˜ k = j = 1 N q ( x j ) p ( x j ) x j k W ( x j ) k = 0 2 n
with W ( x ) defined by (44).
Theorem 16.
The following equalities are valid
H n ( { τ ˜ } ) = ( 1 ) m n + n ( n + 1 ) / 2 p 0 j = 1 N p ( x j ) R ( p ( x ) , q ( x ) ) ,
H n + 1 ( { τ ˜ } ) = ( 1 ) m n + n ( n + 1 ) / 2 q 0 j = 1 N p ( x j ) R ( p ( x ) , q ( x ) ) ,
H m ( { τ } ) = ( 1 ) m ( m + 1 ) / 2 q 0 j = 1 N q ( x j ) R ( p ( x ) , q ( x ) ) , H m + 1 ( { τ } ) = ( 1 ) m ( m + 1 ) / 2 p 0 j = 1 N q ( x j ) R ( p ( x ) , q ( x ) ) .
Proof. 
The proof will be illustrated for the particular case n = 3 , m = 5 . Consider first the case where p ( x ) possesses only simple zeros; denote them by λ 1 , λ 2 , λ 3 . Construct a new sequence:
η k = = 1 3 q ( λ ) λ k p ( λ ) W ( λ ) for k { 0 , 1 , } .
From (49) it follows that τ ˜ k = η k for k { 0 , , 5 } and therefore
H 3 ( { τ ˜ } ) = τ ˜ 0 τ ˜ 1 τ ˜ 2 τ ˜ 1 τ ˜ 2 τ ˜ 3 τ ˜ 2 τ ˜ 3 τ ˜ 4 = η 0 η 1 η 2 η 1 η 2 η 3 η 2 η 3 η 4
= 1 1 1 λ 1 λ 2 λ 3 λ 1 2 λ 2 2 λ 3 2 · q ( λ 1 ) p ( λ 1 ) W ( λ 1 ) 0 0 0 q ( λ 2 ) p ( λ 2 ) W ( λ 2 ) 0 0 0 q ( λ 3 ) p ( λ 3 ) W ( λ 3 ) · 1 λ 1 λ 1 2 1 λ 2 λ 2 2 1 λ 3 λ 3 2
= 1 j < k 3 ( λ j λ k ) 2 = 1 3 q ( λ ) = 1 3 W ( λ ) = 1 3 p ( λ ) .
Using the definition of the resultant in the form (7), the last expression can be represented as
1 j < k 3 ( λ j λ k ) 2 = 1 3 q ( λ ) R ( p ( x ) , W ( x ) ) p 0 9 p 0 3 ( 1 ) 3 1 j < k 3 ( λ j λ k ) 2
Finally use the alternative definition of the resultant (8):
p 0 6 = 1 3 q ( λ ) R ( p ( x ) , W ( x ) ) = p 0 R ( p ( x ) , q ( x ) ) ( 1 ) 9 × 3 j = 1 9 p ( x j ) .
Thus, the equality (79) is true.
We have just proved this equality under the additional assumption that the polynomial p ( x ) has all its zeros distinct. To extend this result to the general case, the traditional trick consists in the application of Theorem 4. By Corollary 1, the condition that the zeros of a polynomial with symbolic (indeterminate) coefficients be distinct (simple) can be expressed as an algebraic inequality with respect to these coefficients. In accordance with the referred theorem, an algebraic identity which is valid under an extra assumption in the form of an algebraic inequality must be valid for all values of the indeterminates.
To prove the equality (80), multiply the determinant
H 4 ( { τ ˜ } ) = τ ˜ 0 τ ˜ 1 τ ˜ 2 τ ˜ 3 τ ˜ 1 τ ˜ 2 τ ˜ 3 τ ˜ 4 τ ˜ 2 τ ˜ 3 τ ˜ 4 τ ˜ 5 τ ˜ 3 τ ˜ 4 τ ˜ 5 τ ˜ 6
from the right-hand side by the determinant
1 0 0 p 3 / p 0 0 1 0 p 2 / p 0 0 0 1 p 1 / p 0 0 0 0 1 ,
evidently equal to 1. This results in the determinant which differs from the initial one only in the last column:
1 p 0 τ ˜ 0 τ ˜ 1 τ ˜ 2 p 0 τ ˜ 3 + p 1 τ ˜ 2 + p 2 τ ˜ 1 + p 3 τ ˜ 0 τ ˜ 1 τ ˜ 2 τ ˜ 3 p 0 τ ˜ 4 + p 1 τ ˜ 3 + p 2 τ ˜ 2 + p 3 τ ˜ 1 τ ˜ 2 τ ˜ 3 τ ˜ 4 p 0 τ ˜ 5 + p 1 τ ˜ 4 + p 2 τ ˜ 3 + p 3 τ ˜ 2 τ ˜ 3 τ ˜ 4 τ ˜ 5 p 0 τ ˜ 6 + p 1 τ ˜ 5 + p 2 τ ˜ 4 + p 3 τ ˜ 3 .
Compute the entries of this column:
p 0 τ ˜ 3 + p 1 τ ˜ 2 + p 2 τ ˜ 1 + p 3 τ ˜ 0 = j = 1 9 q ( x j ) p ( x j ) ( p 0 x j 3 + p 1 x j 2 + p 2 x j + p 3 ) W ( x j ) = j = 1 9 q ( x j ) p ( x j ) p ( x j ) W ( x j ) = j = 1 9 q ( x j ) W ( x j ) = ( 47 ) 0 ,
and similarly
p 0 τ ˜ 4 + p 1 τ ˜ 3 + p 2 τ ˜ 2 + p 3 τ ˜ 1 = 0 , p 0 τ ˜ 5 + p 1 τ ˜ 4 + p 2 τ ˜ 3 + p 3 τ ˜ 2 = 0 ,
whereas
p 0 τ ˜ 6 + p 1 τ ˜ 5 + p 2 τ ˜ 4 + p 3 τ ˜ 3 = j = 1 9 q ( x j ) x j 3 W ( x j ) = q 0 .
Therefore,
H 4 ( { τ ˜ } ) = 1 p 0 τ ˜ 0 τ ˜ 1 τ ˜ 2 0 τ ˜ 1 τ ˜ 2 τ ˜ 3 0 τ ˜ 2 τ ˜ 3 τ ˜ 4 0 τ ˜ 3 τ ˜ 4 τ ˜ 5 q 0 = q 0 p 0 H 3 ( { τ ˜ } )
and with already deduced expression (81) for H 3 ( { τ ˜ } ) , one gets the validity of (80).  □
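A qualitative consequence of Theorem 16 — that the Hankel determinant built from the ratios p ( x j ) / q ( x j ) vanishes exactly when p ( x ) and q ( x ) have a common zero (provided the accompanying factors do not vanish) — can be tested directly. A Python sketch in exact arithmetic (names and data are ours):

```python
from fractions import Fraction

def det(M):
    """Exact determinant via fraction-preserving Gaussian elimination."""
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

def hankel_det_from_ratios(xs, ratios, m):
    """H_m({tau}) with tau_k = sum_j ratios[j] * x_j^k / W'(x_j), as in (78);
    by Theorem 16 it is proportional to the resultant R(p, q)."""
    N = len(xs)
    wp = [Fraction(1)] * N
    for j in range(N):
        for k in range(N):
            if k != j:
                wp[j] *= Fraction(xs[j]) - Fraction(xs[k])
    tau = [sum(Fraction(ratios[j]) * Fraction(xs[j]) ** k / wp[j] for j in range(N))
           for k in range(2 * m - 1)]
    return det([[tau[i + j] for j in range(m)] for i in range(m)])
```

In the test below the ratios come from p = (x−1)(x−2) divided by q = (x−1)(x−3) (a shared zero, so the determinant vanishes) and by q = (x−3)(x−4) (coprime, so it does not), evaluated at the nodes 5, …, 9.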
Remark 3.
The number of the interpolation values (77) is nearly twice the number of coefficients of the polynomials p ( x ) and q ( x ) , i.e., the set of interpolation values is redundant for the resultant evaluation. However, from the statement of Theorem 16, one can notice that only the set { p ( x j ) / q ( x j ) } j = 1 n + m + 1 of ratios (or their reciprocals) is involved in the resultant’s computation. This set is not redundant. (As for the extra multiples in the right-hand sides of Formulas (79) and (80), one may ignore the case of their vanishing as practically improbable if the polynomials are randomly chosen over infinite fields.)

7. Hankel Polynomials Computation for Extended Data Set

Let the data set (1) be extended by adding an extra point:
x x 1 x 2 x N x N + 1 y y 1 y 2 y N y N + 1 with x N + 1 { x 1 , , x N } .
Is it possible to organize the computation of the rational interpolant for this set based on the already computed interpolant for the initial set (1)? In other words, we are looking for an analogue of the Newton interpolation polynomial construction procedure.
In the framework of the approach discussed in Section 5, the stated problem can be reformulated as that of effective computation of Hankel polynomials generated by the sequence
τ ^ k = j = 1 N + 1 x j k y j W ^ ( x j ) ; here W ^ ( x ) = j = 1 N + 1 ( x x j ) ( 44 ) W ( x ) ( x x N + 1 ) .
and a similar one obtained on replacing y j by 1 / y j , under the assumption that the corresponding Hankel polynomials for smaller data set (1) are already at our disposal.
Theorem 17.
Hankel polynomial (we slightly simplify here the notation: H k = H k ( { τ } ) , H ^ k = H k ( { τ ^ } ) )
H ^ k ( x ; { τ ^ } ) = τ ^ 0 τ ^ 1 τ ^ k τ ^ 1 τ ^ 2 τ ^ k + 1 τ ^ k 1 τ ^ k τ ^ 2 k 1 1 x x k H ^ k x k + .
generated by the sequence (83) is linked to polynomials H k ( x ; { τ } ) and H k 1 ( x ; { τ } ) generated by the sequence (65) via the following identity
H k H ^ k ( x ; { τ ^ } ) H ^ k H k ( x ; { τ } ) + H ^ k + 1 H k 1 ( x ; { τ } ) .
Proof. 
The proof is based on the following equality connecting the sums (65) and (83):
τ k = τ ^ k + 1 − x N + 1 τ ^ k ,
which can be deduced in exactly the same manner as the equality (57) in the proof of Theorem 12. With its aid, represent H k ( x ; { τ } ) first as the determinant of the order k + 1 :
H k ( x ; { τ } ) = τ ^ 1 x N + 1 τ ^ 0 τ ^ 2 x N + 1 τ ^ 1 τ ^ k + 1 x N + 1 τ ^ k τ ^ 2 x N + 1 τ ^ 1 τ ^ 3 x N + 1 τ ^ 2 τ ^ k + 2 x N + 1 τ ^ k + 1 τ ^ k x N + 1 τ ^ k 1 τ ^ k + 1 x N + 1 τ ^ k τ ^ 2 k x N + 1 τ ^ 2 k 1 1 x x k
and then as the determinant of the order k + 2 :
= ( 1 ) k + 1 τ ^ 0 τ ^ 1 τ ^ k 1 τ ^ 1 τ ^ 2 τ ^ k + 1 x N + 1 τ ^ k τ ^ k + 1 τ ^ 2 k x N + 1 k 1 x x k 0 ( k + 2 ) × ( k + 2 ) .
Application of Sylvester’s identity (5) to the latter results in (85).  □
Corollary 1.
The following equality is valid:
H ^ k + 1 H k 1 ( x N + 1 ) = ( − 1 ) k H k 2 − H k ( x N + 1 ) H ^ k .
Proof. 
Substitution x = x N + 1 into (85) results in
H ^ k ( x N + 1 ) H k H ^ k H k ( x N + 1 ) + H ^ k + 1 H k 1 ( x N + 1 )
Comparing the coefficients of x k in H k ( x ) and in (87) yields that
H k = ( − 1 ) k H ^ k ( x N + 1 ) ,
which completes the proof of (88).  □
Formula (88) gives rise to a recursive procedure for computing the Hankel determinants { H ^ ℓ } ℓ = 1 k + 1 , with the final result as follows:
Theorem 18.
Let { H j ( x N + 1 ) 0 } j = 1 k . One has
H ^ k + 1 = ( 1 ) k H k ( x N + 1 ) j = 1 k H j 2 H j ( x N + 1 ) H j 1 ( x N + 1 ) + τ ^ 0 , w i t h H 0 ( x ) 1 .
Proof. 
Is carried out by induction. For k = 1 , the formula evidently follows from (88). Assume that (89) holds for some k. Let us prove its validity for k + 1 . From (88) one arrives at
H ^ k + 2 = 1 H k ( x N + 1 ) ( 1 ) k + 1 H k + 1 2 H k + 1 ( x N + 1 ) H ^ k + 1
and assuming that H k + 1 ( x N + 1 ) 0 ,
= ( 1 ) k + 1 H k + 1 ( x N + 1 ) H k + 1 2 H k + 1 ( x N + 1 ) H k ( x N + 1 ) + ( 1 ) k H k ( x N + 1 ) H ^ k + 1 .
Application of induction hypothesis (89) completes the proof.  □
Combining the preceding results, we arrive at the computational formula
H ^ k ( x ) = 1 H k 1 ( x N + 1 ) ( 1 ) k H k H k 1 ( x ) H ^ k H k H k ( x N + 1 ) H k 1 ( x ) H k 1 ( x N + 1 ) H k ( x )
under the natural assumption for the non-vanishing of the denominators. It turns out that computation of Hankel polynomials for the extended data set (82) can be performed directly via their counterparts for the initial data set (1) without evaluation of all the symmetric functions (83) (except for τ ^ 0 ).
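The identity (85) can be verified in exact arithmetic by computing all four Hankel quantities directly from their definitions. A Python sketch (the function names and the data set are our own illustration):

```python
from fractions import Fraction

def det(M):
    """Exact determinant via fraction-preserving Gaussian elimination."""
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

def hankel(seq, k, x=None):
    """H_k({seq}) for x=None, else the Hankel polynomial H_k(x;{seq}); H_0 = 1."""
    if k == 0:
        return Fraction(1)
    if x is None:
        return det([[seq[i + j] for j in range(k)] for i in range(k)])
    M = [[seq[i + j] for j in range(k + 1)] for i in range(k)]
    M.append([Fraction(x) ** j for j in range(k + 1)])
    return det(M)

def tau_seq(xs, ys, count):
    """tau_k = sum_j y_j x_j^k / W'(x_j) for k = 0..count-1."""
    N = len(xs)
    wp = [Fraction(1)] * N
    for j in range(N):
        for i in range(N):
            if i != j:
                wp[j] *= Fraction(xs[j]) - Fraction(xs[i])
    return [sum(Fraction(ys[j]) * Fraction(xs[j]) ** k / wp[j] for j in range(N))
            for k in range(count)]

def check_update_identity(xs, ys, x_new, y_new, k, x0):
    """Verify (85): H_k * H^_k(x0) == H^_k * H_k(x0) + H^_{k+1} * H_{k-1}(x0),
    with H built from the original nodes and H^ from the extended set."""
    t = tau_seq(xs, ys, 2 * k)                            # tau_0 .. tau_{2k-1}
    th = tau_seq(xs + [x_new], ys + [y_new], 2 * k + 1)   # tau^_0 .. tau^_{2k}
    lhs = hankel(t, k) * hankel(th, k, x0)
    rhs = hankel(th, k) * hankel(t, k, x0) + hankel(th, k + 1) * hankel(t, k - 1, x0)
    return lhs == rhs
```

The check passes identically in x0; the data set below is arbitrary.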
Example 5.
Add the point x = 4 , y = 7 to the data set of Example 4 and find the rational interpolant r ^ 3 , 4 ( x ) = p ( x ) / q ( x ) with deg p ( x ) = 3 , deg q ( x ) = 4 satisfying the extended set.
Solution. To find the denominator by Formula (90):
H ^ 4 ( x ) = 1 H 3 ( 4 ) H 4 H 3 ( x ) H ^ 4 H 4 H 4 ( 4 ) H 3 ( x ) H 3 ( 4 ) H 4 ( x )
we use the expressions for the Hankel polynomials H 3 ( x ) , H 4 ( x ) already found in the solution of Example 4. The values of the leading coefficients of these polynomials, as well as those of H 1 ( x ) , H 2 ( x ) , complemented with the additional computation of τ ^ 0 = 1 / 112 , permit one to find
H ^ 4 = ( 89 ) H 3 ( 4 ) H 1 2 H 1 ( 4 ) + H 2 2 H 1 ( 4 ) H 2 ( 4 ) + H 3 2 H 2 ( 4 ) H 3 ( 4 ) + τ ^ 0 = 17117 27216000 .
Thus,
H ^ 4 ( x ) = 17117 27216000 x 4 249211 54432000 x 3 52343 27216000 x 2 + 1165699 54432000 x + 53 20160 .
Computation of the numerator of the desired fraction is performed in a similar way, using the expressions for H ˜ 2 ( x ) and H ˜ 3 ( x ) already computed in Example 4. Finally,
r ^ 3 , 4 ( x ) = 924015 x 3 + 642060 x 2 + 5084535 x + 143100 34234 x 4 249211 x 3 104686 x 2 + 1165699 x + 143100 .
We do not discuss here the exceptional cases where some of the polynomials H k ( x ) vanish at x N + 1 . As for the relationship of the obtained result to the Newton polynomial interpolant construction procedure (the former should contain the latter as a particular case), we restrict ourselves to the reference to Theorem 13.

8. Comparison with the Barycentric Representation

We intend here to establish the relationship of Jacobi’s approach to the one based on the so-called barycentric representation.
Theorem 19.
(Ref. [6]). If a solution to the rational interpolation problem (61)–(62) with m n exists, then u = [ u 1 , , u N ] is a vector of weights in one of its barycentric representations
r ( x ) = j = 1 N u j x x j y j / j = 1 N u j x x j
if and only if u belongs to the kernel of the ( N 1 ) × N stack matrix
A = V H
where
V = 1 1 1 x 1 x 2 x N x 1 2 x 2 2 x N 2 x 1 n 1 x 2 n 1 x N n 1 , H = y 1 y 2 y N x 1 y 1 x 2 y 2 x N y N x 1 m 1 y 1 x 2 m 1 y 2 x N m 1 y N .
First, we claim that m + 1 vectors
$$\Xi_j = \left[ x_1^j / W'(x_1), \ldots, x_N^j / W'(x_N) \right] \quad \text{for } j \in \{0, 1, \ldots, m\}$$
belong to the kernel of the Vandermonde matrix $V$. Indeed, this evidently follows from the equalities (48). Next, we look for a linear combination $v_0 \Xi_0 + v_1 \Xi_1 + \cdots + v_m \Xi_m$ which belongs to the kernel of the matrix $H$:
$$\begin{pmatrix} y_1 & y_2 & \cdots & y_N \\ x_1 y_1 & x_2 y_2 & \cdots & x_N y_N \\ \vdots & & & \vdots \\ x_1^{m-1} y_1 & x_2^{m-1} y_2 & \cdots & x_N^{m-1} y_N \end{pmatrix} \begin{pmatrix} 1/W'(x_1) & x_1/W'(x_1) & \cdots & x_1^m/W'(x_1) \\ 1/W'(x_2) & x_2/W'(x_2) & \cdots & x_2^m/W'(x_2) \\ \vdots & & & \vdots \\ 1/W'(x_N) & x_N/W'(x_N) & \cdots & x_N^m/W'(x_N) \end{pmatrix} \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_m \end{pmatrix} = \mathbb{O}_{m \times 1},$$
or, equivalently, in the notation of symmetric functions (65) of the data set,
$$\begin{pmatrix} \tau_0 & \tau_1 & \cdots & \tau_m \\ \tau_1 & \tau_2 & \cdots & \tau_{m+1} \\ \vdots & & & \vdots \\ \tau_{m-1} & \tau_m & \cdots & \tau_{2m-1} \end{pmatrix}_{m \times (m+1)} \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_m \end{pmatrix} = \mathbb{O}_{m \times 1}.$$
Provided that the rank of the matrix in the left-hand side of this equation equals m, the dimension of its kernel equals 1, with the basis vector consisting of the sequence of algebraic complements (cofactors) to the entries of the last row of the matrix
$$\begin{pmatrix} \tau_0 & \tau_1 & \cdots & \tau_m \\ \tau_1 & \tau_2 & \cdots & \tau_{m+1} \\ \vdots & & & \vdots \\ \tau_{m-1} & \tau_m & \cdots & \tau_{2m-1} \\ * & * & \cdots & * \end{pmatrix}_{(m+1) \times (m+1)}.$$
Therefore, this vector must be proportional (collinear) to the vector of the coefficients of the denominator of the rational interpolant represented by (70).
Consequently, the Hankel polynomial approach to the rational interpolation problem can be interpreted as a method for resolving the system of linear equations that determines the vector of weights in the barycentric representation of the interpolant from Theorem 19.
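The construction of Theorem 19 can be sketched numerically: sample a data set from a known rational function, build the stack matrix, extract a kernel vector by SVD, and evaluate the barycentric formula. The generating function, the node set, and the tolerance below are our illustrative choices, not taken from the paper:

```python
import numpy as np

# Data sampled from r(x) = (x^2 + x + 1)/(x^2 + 2); numerator and
# denominator degrees mu = nu = 2, so mu + nu = N - 1 with N = 5.
f = lambda x: (x**2 + x + 1) / (x**2 + 2)
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = f(xs)
N, mu, nu = len(xs), 2, 2

# Stack matrix: sum_j u_j x_j^i = 0 for i < N-1-nu (Vandermonde block)
# and sum_j u_j y_j x_j^i = 0 for i < N-1-mu (the H block).
V = np.vstack([xs**i for i in range(N - 1 - nu)])
H = np.vstack([ys * xs**i for i in range(N - 1 - mu)])
A = np.vstack([V, H])                  # (N-1) x N stack matrix

# A kernel vector of A serves as the vector of barycentric weights
# (last right singular vector, belonging to the smallest singular value).
u = np.linalg.svd(A)[2][-1]

def r(x):
    """Barycentric evaluation of the rational interpolant at a non-node x."""
    w = u / (x - xs)
    return np.dot(w, ys) / np.sum(w)

print(abs(r(0.5) - f(0.5)) < 1e-10)  # True
```

The sign conventions for the block sizes (powers up to $N-2-\nu$ and $N-2-\mu$) are our reading of the kernel conditions; they reduce to the theorem's $(N-1)\times N$ matrix when $\mu + \nu = N - 1$.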

9. Redundant Data Set with Systematic Errors

We now turn to a problem of data storage systems, namely that of locating erroneous blocks in a sequence that contains, along with the data (information) blocks, some specially designed checksums. This problem relates to the theory of error-correcting codes. To explain the idea of one of the practical procedures from this theory, we first present an artificial example.
Example 6.
Consider the data set
$$\begin{array}{c|ccccccc} x & -2 & -1 & 0 & 1 & 2 & 3 & 4 \\ \hline y & 30 & 7 & 8 & 9 & 11 & 35 & 60 \end{array}$$
It is known that initially it was generated by a polynomial of degree two with integer coefficients, but later on, a suspicion arose that up to two y-values are corrupted. Find the erroneous values and restore the initial polynomial.
Solution. We successively compute the Hankel polynomials from Theorem 15 and try to factor them over $\mathbb{Q}$. We succeed at
$$H_4(x; \{\tilde\tau\}) \equiv \frac{1}{1451520000}\, (x+1)(x-2)\left(4x^2 - 3x + 8\right)$$
and at
$$H_2(x; \{\tau\}) \equiv \frac{77}{320}\, (x+1)(x-2).$$
It can easily be verified that the values of the polynomial $f(x) = 4x^2 - 3x + 8$ coincide with the y-values of the data set at all the given x-values except for $x = -1$ and $x = 2$. Thus, the integer zeros of $H_2(x; \{\tau\})$ give the locations of the erroneous values, while the quadratic factor of $H_4(x; \{\tilde\tau\})$ coincides with the “true” polynomial generating the remaining correct y-values.  □
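The error-location step of Example 6 can be reproduced in exact arithmetic. We assume the convention $\tau_i = \sum_j x_j^i y_j / W'(x_j)$ for the symmetric functions; the overall normalization of the Hankel determinant below may differ from the paper's, but its roots locate the corrupted nodes either way:

```python
from fractions import Fraction as F

xs = [-2, -1, 0, 1, 2, 3, 4]
ys = [30, 7, 8, 9, 11, 35, 60]
N = len(xs)

def Wprime(j):
    """W'(x_j) = prod_{i != j} (x_j - x_i)."""
    prod = F(1)
    for i in range(N):
        if i != j:
            prod *= xs[j] - xs[i]
    return prod

# tau_i = sum_j x_j^i y_j / W'(x_j)   (assumed convention)
tau = [sum(F(xs[j])**i * ys[j] / Wprime(j) for j in range(N))
       for i in range(4)]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def H2(x):
    """Error-locator Hankel polynomial H_2(x; {tau})."""
    x = F(x)
    return det3([[tau[0], tau[1], tau[2]],
                 [tau[1], tau[2], tau[3]],
                 [1, x, x * x]])

# The integer zeros of H_2 mark the corrupted nodes.
roots = [x for x in xs if H2(x) == 0]
print(roots)  # [-1, 2]
```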
We now make the problem statement concrete. Let the data set (1) of rational numbers be redundant for the polynomial computation, i.e., the number of true values in it exceeds $k + 1$, where $k = \deg f(x)$ is known a priori. The number of corrupted y-values is unknown, but one may expect that it is much smaller than the number of true ones. We need to find the x-values corresponding to the erroneous y-values. The first step in dealing with this problem should be a test for the presence of errors. The next result is a trivial consequence of Theorem 11.
Theorem 20.
For the polynomial interpolant generated by (1) to be of the degree k < N 1 , it is necessary and sufficient that
$$\tau_0 = 0, \ldots, \tau_{N-k-2} = 0, \quad \tau_{N-k-1} \ne 0.$$
Thus, the uncorrupted data set (1) passes the test $\{\tau_j = 0\}_{j=0}^{N-k-2}$, and then Formula (53) permits one to compute the polynomial. One may expect that a corrupted data set will not pass the test: generically, the degree of the interpolant for a random data set equals $N - 1 > k$. If the number E of erroneous values is given to one by revelation, the following result can be utilized for their location.
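For the uncorrupted version of the data set of Example 6 (all values generated by $f(x) = 4x^2 - 3x + 8$, so $k = 2$, $N = 7$), the test can be sketched as follows, again under the assumed convention $\tau_i = \sum_j x_j^i y_j / W'(x_j)$:

```python
from fractions import Fraction as F

# Uncorrupted data: y_j = f(x_j) with f(x) = 4x^2 - 3x + 8 (degree k = 2).
f = lambda x: 4 * x * x - 3 * x + 8
xs = [-2, -1, 0, 1, 2, 3, 4]
ys = [f(x) for x in xs]
N, k = len(xs), 2

def Wprime(j):
    """W'(x_j) = prod_{i != j} (x_j - x_i)."""
    prod = F(1)
    for i in range(N):
        if i != j:
            prod *= xs[j] - xs[i]
    return prod

tau = [sum(F(xs[j])**i * ys[j] / Wprime(j) for j in range(N))
       for i in range(N - k)]

print(all(t == 0 for t in tau[:N - k - 1]))  # True: the test passes
print(tau[N - k - 1])                        # 4, the leading coefficient of f
```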
Theorem 21.
(Ref. [32]). Let $E \in \{1, 2, \ldots, \lfloor N/2 \rfloor - 1\}$ and let $e_1, \ldots, e_E$ be distinct numbers from $\{1, 2, \ldots, N\}$. Let the polynomial $f(x)$ be of degree $k < N - 2E$. Let the data set satisfy the conditions
(a) 
y j = f ( x j ) for j { 1 , , N } \ { e 1 , , e E } ,
(b) 
$\hat y_{e_s} := f(x_{e_s}) \ne y_{e_s}$ for $s \in \{1, \ldots, E\}$.
Then,
$$H_E(x; \{\tau\}) \equiv \frac{\prod_{s=1}^{E} \left( y_{e_s} - \hat y_{e_s} \right)\, \prod_{1 \le s < t \le E} \left( x_{e_t} - x_{e_s} \right)^2}{\prod_{s=1}^{E} W'(x_{e_s})}\; \prod_{s=1}^{E} \left( x - x_{e_s} \right).$$
This result confirms one empirical conclusion from the solution of Example 6. The other one can also be justified, i.e., under the conditions of Theorem 21, the following identity is valid up to a sign [33]:
$$H_{k+E}(x; \{\tilde\tau\}) \equiv \frac{a_0^{N-k-2E-1}}{\prod_{j=1}^{N} y_j}\; H_E(x; \{\tau\})\, f(x);$$
here, a 0 stands for the leading coefficient of f ( x ) .
Unfortunately, in the practice of error-correcting codes, the exact number of erroneous values, as well as their locations, is seldom known a priori. Therefore, in order to find the polynomial from Theorem 21, one should verify the reducibility of the successive polynomials $H_1(x; \{\tau\}), H_2(x; \{\tau\}), \ldots$. This immediately refers one to the results of Section 3 for an effective procedure for computing the next polynomial in the sequence when the previous ones are already known.
We now recall the idea of the Berlekamp–Welch algorithm [22] for error-correcting codes. Assume the integers $y_1, \ldots, y_k$ are information blocks to be protected against failures when put into computer data storage. To this aim, let us organize the checksum computation in the following way. First, choose arbitrary distinct integers $x_1, \ldots, x_k$, say $\{x_j = j\}_{j=1}^{k}$. Next, compute the polynomial interpolant $f(x)$ of degree $k - 1$ for the data set $\{(x_j, y_j)\}_{j=1}^{k}$. Finally, evaluate $f(x)$ at the points $\{x_j = j\}_{j=k+1}^{N}$. The obtained data set $\{(x_j, f(x_j))\}_{j=1}^{N}$ is redundant for the computation of $f(x)$, but now, in view of the preceding considerations, it can sustain a certain amount of data corruption.
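The encoding step just described can be sketched over the rationals (the practical scheme works over a finite field, as discussed next; the information values below are our illustrative choices):

```python
from fractions import Fraction as F

def lagrange_eval(pts, x):
    """Evaluate the polynomial interpolant of pts = [(x_j, y_j)] at x."""
    x = F(x)
    total = F(0)
    for j, (xj, yj) in enumerate(pts):
        term = F(yj)
        for i, (xi, _) in enumerate(pts):
            if i != j:
                term *= (x - xi) / (xj - xi)
        total += term
    return total

# k information blocks at the nodes x_j = j, extended by N - k checksums.
info = [13, 7, 42]                       # illustrative data
k, N = len(info), 7
pts = list(zip(range(1, k + 1), info))
checksums = [lagrange_eval(pts, j) for j in range(k + 1, N + 1)]

# The redundant set {(j, f(j))}_{j=1..N} now tolerates some corrupted entries.
print([int(c) for c in checksums])  # [118, 235, 393, 592]
```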
Real-life applications of the algorithm are performed in finite fields, say $GF(2^D)$ for a sufficiently large positive integer D. Usually, $\{x_j = a^{j-1}\}_{j=1}^{N}$, where a stands for a primitive element of the field. The polynomial (92) is referred to as the error locator polynomial. In [22], the algorithm of its construction is treated as a consequence of solving the linear system (64). The presence of errors in the data set causes the appearance of a nontrivial common factor of the numerator and the denominator, i.e., the conditions of Theorem 15 are not fulfilled. Theorem 21 demonstrates that it is possible to make a version of the Berlekamp–Welch algorithm that is free from the rational interpolation stage.
Compared with infinite fields, several computational difficulties appear when implementing the algorithms in $GF(2^D)$. We just list some of them: finding the inverse and the discrete logarithm of an element, evaluation and zero finding for a polynomial over $GF(2^D)$, etc. The vanishing of $H_{k-1}$ in the JJ-identity (15) is to be treated as an exception in an infinite field, but it has a non-zero probability in $GF(2^D)$. A complete treatment of these issues is left for further publications on the subject.

10. Conclusions

We have addressed an approach to the solution of the polynomial and rational interpolation problem that originated in a paper by Jacobi. It consists, first, in representing the interpolant by virtue of appropriate Hankel polynomials and, further, in recursively computing the sequence of these polynomials via the Jacobi–Joachimsthal identity and its developments. This gives one an opportunity to compute effectively the whole family of rational interpolants for a given data set.
Problems for further investigation relating to the theoretical background of Jacobi's approach are mentioned in Section 3. Extension of the approach to the rational Hermite interpolation problem [34] (mentioned in the Introduction) looks promising.
We trace some other topics of potential interest related to the approach:
  • The problem of choosing the interpolant with prescribed restrictions on its entries. For the case of polynomials over $\mathbb{R}$, this might be the demand that the denominator have no real zeros, or no zeros lying in the right half-plane of the complex plane (in other words, that the denominator be a stable polynomial). Is it possible to resolve these types of problems within the framework of the Hankel polynomial sequence construction? The positive answer can be guessed from the title of the original Joachimsthal paper [13]. Indeed, it is devoted to the zero separation problem for a univariate polynomial over $\mathbb{R}$.
  • Multivariate interpolation. The treatment of this problem meets more obstacles than its univariate counterpart. For instance, one can select data sets $\{(x_j, y_j, z_j)\}_{j=1}^{9} \subset \mathbb{R}^3$ with distinct pairs $(x_j, y_j)$ for which the polynomial interpolant
    $$f(x, y) \in \mathbb{R}[x, y], \quad \deg f = 3, \quad \{ f(x_j, y_j) = z_j \}_{j=1}^{9}$$
    does not exist. On the other hand, there exist counterparts of the results from Section 4 for multivariate symmetric functions. For instance, an analogue of Theorem 10 is known as (what a surprise!) the Jacobi equality [35]. Therefore, the extendability of Jacobi's approach to the multivariate rational interpolation problem looks intriguing.

Author Contributions

Conceptualization and methodology, A.U.; investigation, I.B. and E.K.; literature search, A.U. and E.K.; algorithm implementation, I.B.; writing—original draft preparation, A.U. and I.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors wish to thank the anonymous reviewers for their very constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Meijering, E. A chronology of interpolation: From ancient astronomy to modern signal and image processing. Proc. IEEE 2002, 90, 319–342.
  2. Bradley, R.E.; Sandifer, C.E. Cauchy’s Cours d’Analyse. An Annotated Translation; Springer: New York, NY, USA, 2009.
  3. Kronecker, L. Zur Theorie der Elimination einer Variabeln aus zwei algebraischen Gleichungen. Mon. Königlichen Preuss. Akad. Wiss. Berl. 1881, 535–600. Available online: https://www.biodiversitylibrary.org/page/39003100 (accessed on 27 April 2021).
  4. Netto, E. Zur Cauchy’schen Interpolationsaufgabe. Math. Ann. 1893, 42, 453–456.
  5. Beckermann, B.; Labahn, G.; Villard, G. Fraction-free computation of matrix rational interpolants and matrix gcd’s. SIAM J. Matrix Anal. Appl. 2000, 22, 114–144.
  6. Berrut, J.-P.; Baltensperger, R.; Mittelmann, H.D. Recent developments in barycentric rational interpolation. In Trends and Applications in Constructive Approximation; Birkhäuser: Basel, Switzerland, 2005; pp. 27–51.
  7. D’Andrea, C.; Krick, T.; Szanto, A. Subresultants, Sylvester sums and the rational interpolation problem. J. Symb. Comput. 2015, 68, 72–83.
  8. Von zur Gathen, J.; Gerhard, J. Modern Computer Algebra, 3rd ed.; Cambridge University Press: Cambridge, UK, 2013.
  9. Jacobi, C.G.J. Über die Darstellung einer Reihe gegebner Werthe durch eine gebrochne rationale Function. J. Reine Angew. Math. 1846, 30, 127–156.
  10. Kumaresan, R. Identification of rational transfer function from frequency response samples. IEEE Trans. Aerosp. Electron. Syst. 1990, 26, 925–934.
  11. Eğecioğlu, Ö.; Koç, Ç.K. A fast algorithm for rational interpolation via orthogonal polynomials. Math. Comp. 1989, 53, 249–264.
  12. Jacobi, C.G.J. De eliminatione variabilis e duabus aequationibus algebraicis. J. Reine Angew. Math. 1836, 15, 101–124.
  13. Joachimsthal, F. Bemerkungen über den Sturm’schen Satz. J. Reine Angew. Math. 1854, 48, 386–416.
  14. Uteshev, A.Y.; Baravy, I. Solution of the rational interpolation problem via the Hankel polynomial construction. Vestn. Sankt-Peterbg. Univ. Seriya 10 2016, 4, 31–43. (In Russian)
  15. Frobenius, G. Über das Trägheitsgesetz der quadratischen Formen. Verl. Königlichen Akad. Wiss. 1894, 241–256, 407–431.
  16. Gantmacher, F.R. The Theory of Matrices; Chelsea: New York, NY, USA, 1959.
  17. Henrici, P. Applied and Computational Complex Analysis: Power Series, Integration, Conformal Mapping, Location of Zeros; John Wiley & Sons: New York, NY, USA, 1974; Volume 1.
  18. Iohvidov, I.S. Hankel and Toeplitz Matrices and Forms: Algebraic Theory; Birkhäuser: Boston, MA, USA, 1982.
  19. Berlekamp, E.R. Algebraic Coding Theory; Aegean Park Press: Laguna Hills, CA, USA, 1984.
  20. Blahut, R.E. Fast Algorithms for Digital Signal Processing; Addison-Wesley Longman Publishing: Boston, MA, USA, 1985.
  21. Trefethen, L.N. Approximation Theory and Approximation Practice; SIAM: Philadelphia, PA, USA, 2013.
  22. Welch, L.R.; Berlekamp, E.R. Error Correction of Algebraic Block Codes. U.S. Patent 4,633,470, 30 December 1986.
  23. Bôcher, M. Introduction to Higher Algebra, 2nd ed.; Dover: Mineola, NY, USA, 2004.
  24. Kronecker, L. Bemerkungen zur Determinanten-Theorie. J. Reine Angew. Math. 1870, 72, 152–175.
  25. Uspensky, J.V. Theory of Equations; McGraw-Hill: New York, NY, USA, 1948.
  26. Weyl, H. The Classical Groups: Their Invariants and Representations, 2nd ed.; Princeton University Press: Princeton, NJ, USA, 1946.
  27. Jonckheere, E.; Ma, C. A simple Hankel interpretation of the Berlekamp–Massey algorithm. Linear Algebra Appl. 1989, 125, 65–76.
  28. Gemignani, L. Rational interpolation via orthogonal polynomials. Comput. Math. Appl. 1993, 26, 27–34.
  29. Vahlen, K.T. Rationale Funktionen der Wurzeln; symmetrische und Affektfunktionen. In Encyklopädie der Mathematischen Wissenschaften mit Einschluss ihrer Anwendungen; Meyer, W.F., Ed.; Teubner: Leipzig, Germany, 1898–1904; Volume 1, pp. 449–479. Available online: https://link.springer.com/chapter/10.1007/978-3-663-16017-5_12#citeas (accessed on 3 June 2021).
  30. Chihara, T.S. An Introduction to Orthogonal Polynomials; Gordon and Breach: New York, NY, USA, 1978.
  31. Ismail, M.E.H. Classical and Quantum Orthogonal Polynomials in One Variable; Cambridge University Press: Cambridge, UK, 2005.
  32. Bogdanov, A.V.; Uteshev, A.Y.; Khvatov, V. Error detection in the decentralized voting protocol. In Proceedings of the ICCSA 2019, Saint Petersburg, Russia, 1–4 July 2019; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11620, pp. 485–494.
  33. Uteshev, A.Y.; Baravy, I. Solution of interpolation problems via the Hankel polynomial construction. arXiv 2016, arXiv:Cs.SC/1603.08752.
  34. Cortadellas, T.; D’Andrea, C.; Montoro, E. Rational interpolation and the Euler–Jacobi formula. Monogr. Real Acad. Cienc. Zaragoza 2018, 43, 75–78.
  35. Uteshev, A.Y.; Shulyak, S.G. Hermite’s method of separation of solutions of systems of algebraic equations and its applications. Linear Algebra Appl. 1992, 177, 49–88.
Share and Cite

Uteshev, A.; Baravy, I.; Kalinina, E. Rational Interpolation: Jacobi’s Approach Reminiscence. Symmetry 2021, 13, 1401. https://doi.org/10.3390/sym13081401