1 Introduction

The Bessel function \(J_{0}(x)\) appears in many applications, such as electrodynamics (Jackson 1998; Blachman and Mousavineezhad 1986; Rothwell 2009), mechanics (Kang 2014), and diffusion in cylinders and waves in kinetic theory in plasma physics (Chen 2010); generalized Bessel functions are investigated in Khosravian-Arab et al. (2017), and other relevant applications of special functions appear in Masjed-Jamei and Dehghan (2006), Dehghan et al. (2011), Lakestani and Dehghan (2010), Dehghan (2006), Abbaszadeh and Dehghan (2020), Watson (1966), Byron and Fuller (1992) and Eyyuboglu et al. (2008). There are several kinds of approximations, including polynomial ones, each limited to a given interval of the variable (Abramovitz and Stegun 1970). The power series of this function converges for every value of the variable, but for large values many terms are needed to obtain good accuracy, and its calculation becomes cumbersome. An analytic and very accurate approximation is presented here, which is valid for every positive value of the variable, including complex values, and very useful in the applications. Furthermore, it is simple to calculate, and it can be differentiated and integrated like the exact function. This approximation is determined using only eight parameters, and it is obtained by a particular method that uses the power and asymptotic expansions of the function simultaneously. In this way, a bridge function between both expansions is built, and the result is very good: the accuracy is high both for the function and for its zeros, which are also very important in the applications. The technique used here is an extension of the so-called Multipoint Quasi-rational Approximation, MPQA (Martin et al. 1992; Diaz and Martin 2018; Martin et al. 2009, 2017, 2018, 2020; Maass and Martin 2018), with a new way to define the auxiliary functions that are combined with the rational ones.
Three kinds of approximations have been determined, all using the same number of parameters. The maximum absolute error of the best one is smaller than \(10^{-4}\). Furthermore, the errors of the approximation for large values of the variable are several orders of magnitude smaller than those in the small interval where the maximum error appears. In Sect. 2, the general theoretical treatment is presented. In Sect. 3, the approximations found here are described and the results are shown, including the calculation of the zeros. Section 4 is devoted to the conclusions.

2 Theoretical treatment

The power series and asymptotic expansion of the Bessel function \(J_{0}(x)\) are well known and given, respectively, by

$$\begin{aligned} J_{0}(x)=\displaystyle \sum _{k=0}^{\infty }\frac{(-1)^k}{k!\Gamma (k+1)}\left( \frac{x}{2}\right) ^{2k} \end{aligned}$$
(1)

and

$$\begin{aligned} J_{0}(x)\sim \sqrt{\frac{2}{\pi x}}\left[ \cos \left( x-\frac{\pi }{4}\right) +\frac{1}{8x}\sin \left( x-\frac{\pi }{4}\right) +\cdots \right] . \end{aligned}$$
(2)

For the present treatment, it is more convenient to write the asymptotic expansion as

$$\begin{aligned} J_{0}(x)\sim \sqrt{\frac{1}{\pi x }}\left[ \left( 1-\frac{1}{8x}+\cdots \right) \cos (x)+\left( 1+\frac{1}{8x}+\cdots \right) \sin (x)\right] . \end{aligned}$$
(3)
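As a quick numerical illustration (added here, not part of the original text), the two asymptotic forms (2) and (3) can be checked to be algebraically equivalent, and already quite accurate at moderate x; the reference value \(J_0(10)\approx -0.2459357645\) is a standard tabulated value.

```python
import math

def asym_eq2(x):
    """Two leading terms of the asymptotic expansion, Eq. (2)."""
    return math.sqrt(2 / (math.pi * x)) * (math.cos(x - math.pi / 4)
                                           + math.sin(x - math.pi / 4) / (8 * x))

def asym_eq3(x):
    """The rewritten form, Eq. (3); algebraically identical to asym_eq2."""
    return math.sqrt(1 / (math.pi * x)) * ((1 - 1 / (8 * x)) * math.cos(x)
                                           + (1 + 1 / (8 * x)) * math.sin(x))
```

Both forms agree to machine precision, and at \(x=10\) they already reproduce \(J_0(10)\) to about \(2\times 10^{-4}\).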

The terms of the power series to be used in the present treatment are

$$\begin{aligned} J_{0}(x)= 1-\frac{1}{4}x^2+\frac{1}{64}x^4-\frac{1}{2304}x^6+\cdots \end{aligned}$$
(4)
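A short sketch (an illustration added here) makes the convergence issue concrete: the partial sums of Eq. (1) converge for every x, but a truncation that is already exact to machine precision at \(x=1\) is still off by orders of magnitude at \(x=20\).

```python
def j0_series(x, n_terms):
    """Partial sum of the power series Eq. (1), built from the ratio of
    consecutive terms, t_{k+1}/t_k = -(x/2)^2 / (k+1)^2."""
    s, term = 0.0, 1.0
    for k in range(n_terms):
        s += term
        term *= -(x / 2) ** 2 / ((k + 1) ** 2)
    return s
```

With 15 terms, `j0_series(1.0, 15)` matches \(J_0(1)=0.7651976865579666\) to better than \(10^{-12}\), whereas at \(x=20\) the same truncation is useless, since the individual terms grow to order \(10^{6}\) before the alternating cancellation sets in.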

The idea of the multipoint quasi-rational technique, MPQA, is to find a combination of a few elementary functions in such a way that this ensemble becomes a bridge between the power and asymptotic expansions, which means that the approximation reproduces several terms of both expansions. To do that, a combination of rational functions with trigonometric functions and fractional powers is used. The structure of the present approximation is

$$\begin{aligned} \tilde{J_{0}}(x)= & {} \frac{1}{(1+\lambda ^4x^2)^{1/4}(1+qx^2)} \left[ \left[ p_{0}+p_{1}x^2+p_{2}(1+\lambda ^4x^2)^{1/2}\right] \cos (x)\right. \nonumber \\&\left. +\, \left[ (\tilde{p}_{0}+\tilde{p}_{1}x^2)(1+\lambda ^4x^2)^{1/2}+\tilde{p}_{2}x^2\right] \frac{\sin (x)}{x}\right] . \end{aligned}$$
(5)

It is important to explain why this structure is the right one. The first requirement to be fulfilled by the approximation function \(\tilde{J}_{0}(x)\) is that its power series should contain only even powers, as for the exact function \(J_{0}(x)\). The second requirement is that its asymptotic behaviour should match that shown in Eq. (3). To fulfil it, the factor \((1+\lambda ^4x^2)^{1/4}\) has been introduced in the denominator, which behaves as \(\lambda \sqrt{x}\) and produces the required factor \(\sqrt{x}\) when x tends to infinity. In the case of \(\cos (x)\), it is clear that the right structure is obtained. In the case of \(\sin (x)\), there is also the additional factor 1/x in the denominator, which is required to obtain only even powers in the power series. Now, identifying the corresponding terms in Eqs. (3) and (5), the following equations are obtained for the parameters:

$$\begin{aligned}&p_{1}=\frac{\lambda }{\sqrt{\pi }}q\qquad \qquad \tilde{p}_{1}=\frac{1}{\lambda \sqrt{\pi }}q \end{aligned}$$
(6)
$$\begin{aligned}&p_{2}=-\frac{1}{8\lambda \sqrt{\pi }}q\qquad \qquad \tilde{p}_{2}=\frac{\lambda }{8\sqrt{\pi }}q, \end{aligned}$$
(7)

where the parameter \(\lambda \) will be taken as a real positive number.
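The structure (5) together with the constraints (6) and (7) can be checked numerically. The sketch below (added here as an illustration; the values \(\lambda =0.9\), \(q=1.3\), \(p_0=0.4\), \(\tilde{p}_0=0.7\) are arbitrary placeholders, not the fitted parameters) verifies that \(\tilde{J}_0\) is an even function and that \(\sqrt{\pi x}\,\tilde{J}_0(x)\) reproduces the bracket of Eq. (3) up to corrections of order \(1/x^2\).

```python
import math

def j0_tilde(x, lam, q, p0, p0t):
    """Eq. (5), with p1, p2, p~1, p~2 eliminated through Eqs. (6)-(7)."""
    sp = math.sqrt(math.pi)
    p1, p1t = lam * q / sp, q / (lam * sp)
    p2, p2t = -q / (8 * lam * sp), lam * q / (8 * sp)
    r = math.sqrt(1 + lam**4 * x * x)        # (1 + lam^4 x^2)^(1/2)
    den = math.sqrt(r) * (1 + q * x * x)     # (1 + lam^4 x^2)^(1/4) (1 + q x^2)
    sinc = math.sin(x) / x if x else 1.0     # sin(x)/x, regular at x = 0
    return ((p0 + p1 * x * x + p2 * r) * math.cos(x)
            + ((p0t + p1t * x * x) * r + p2t * x * x) * sinc) / den

def eq3_bracket(x):
    """Bracket of the asymptotic form, Eq. (3)."""
    return (1 - 1 / (8 * x)) * math.cos(x) + (1 + 1 / (8 * x)) * math.sin(x)

def residual(x, lam=0.9, q=1.3, p0=0.4, p0t=0.7):
    """Mismatch between sqrt(pi*x)*J~0(x) and the bracket of Eq. (3)."""
    return math.sqrt(math.pi * x) * j0_tilde(x, lam, q, p0, p0t) - eq3_bracket(x)
```

The residual decays roughly like \(1/x^{2}\), confirming that Eqs. (6) and (7) enforce the first two orders of Eq. (3) for any choice of the remaining parameters.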

The determination of \(\lambda \) is done with a different procedure than that followed for the parameters q and p\('\)s. There are seven parameters to determine, not including \(\lambda \), and so far there are four equations. Three additional equations are therefore needed. These are obtained by equating the corresponding first three terms of the power expansions of \(J_{0}(x)\) and \(\tilde{J}_{0}(x)\).

2.1 Equations

The power series to be used are

$$\begin{aligned}&f_{1}(x)=(1+\lambda ^4x^2)^{1/4}=1+\frac{1}{4}\lambda ^4x^2-\frac{3}{32}\lambda ^8x^4+\cdots \end{aligned}$$
(8)
$$\begin{aligned}&f_{2}(x)=(1+\lambda ^4x^2)^{1/2}=1+\frac{1}{2}\lambda ^4x^2-\frac{1}{8}\lambda ^8x^4+\cdots \end{aligned}$$
(9)
$$\begin{aligned}&f_{3}(x)=\sin (x)=x-\frac{1}{6}x^3+\frac{1}{120}x^5-\cdots \end{aligned}$$
(10)
$$\begin{aligned}&f_{4}(x)=\cos (x)=1-\frac{1}{2}x^2+\frac{1}{24}x^4-\cdots \end{aligned}$$
(11)
$$\begin{aligned}&f_{5}(x)=J_{0}(x)=1-\frac{1}{4}x^2+\frac{1}{64}x^4-\frac{1}{2304}x^6+\cdots \end{aligned}$$
(12)

Prior to equating terms, it is necessary to rationalize \(\tilde{J}_{0}(x)\) in order to avoid non-linear equations, that is,

$$\begin{aligned} x f_{1}(x)f_{5}(x)(1+qx^2)= & {} x\left[ (p_{0}+p_{1}x^2)+p_{2}f_{2}(x)\right] f_{4}(x)\nonumber \\&+\,\left[ (\tilde{p}_{0}+\tilde{p}_{1}x^2)f_{2}(x)+\tilde{p}_{2}x^2\right] f_{3}(x). \end{aligned}$$
(13)
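The coefficient matching behind Eq. (13) can be mechanized with truncated-series arithmetic. The sketch below (an added illustration; all parameter values are arbitrary placeholders) builds both sides of Eq. (13) as coefficient lists and confirms that they contain only odd powers of x and that equating the \(x^1\) coefficients yields the condition \(p_0+p_2+\tilde{p}_0=1\) (cf. Eqs. (14) and (17)).

```python
def mul(a, b, n):
    """Product of two truncated power series (coefficient lists), mod x^n."""
    c = [0.0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                c[i + j] += ai * bj
    return c

def add(a, b, n):
    """Sum of two coefficient lists, padded to length n."""
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

N = 8
lam, q = 0.8, 1.1                       # arbitrary placeholder values
p0, p1, p2 = 0.3, 0.2, 0.5
p0t, p1t, p2t = 0.4, 0.1, 0.6
u = lam ** 4

f1 = [1, 0, u / 4, 0, -3 * u**2 / 32, 0, 7 * u**3 / 128, 0]   # Eq. (8)
f2 = [1, 0, u / 2, 0, -u**2 / 8, 0, u**3 / 16, 0]             # Eq. (9)
f3 = [0, 1, 0, -1 / 6, 0, 1 / 120, 0, -1 / 5040]              # Eq. (10)
f4 = [1, 0, -1 / 2, 0, 1 / 24, 0, -1 / 720, 0]                # Eq. (11)
f5 = [1, 0, -1 / 4, 0, 1 / 64, 0, -1 / 2304, 0]               # Eq. (12)

# Left-hand side of Eq. (13): x f1 f5 (1 + q x^2)
lhs = mul([0, 1], mul(mul(f1, f5, N), [1, 0, q], N), N)
# Right-hand side of Eq. (13)
rhs = add(mul([0, 1], mul(add([p0, 0, p1], [p2 * c for c in f2], N), f4, N), N),
          mul(add(mul([p0t, 0, p1t], f2, N), [0, 0, p2t], N), f3, N), N)
```

Since the \(x^1\) coefficient of the left-hand side is 1, the matching at this order forces \(p_0+p_2+\tilde{p}_0=1\), exactly as in the text.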

3 Approximation and results

3.1 First approximation

By equating the coefficients of the first three odd powers x, \(x^3\) and \(x^5\) on both sides of Eq. (13) and considering Eqs. (6) and (7), the following relations are obtained:

$$\begin{aligned}&p_{0}+\tilde{p}_{0}-\frac{1}{8}\frac{q}{\lambda \sqrt{\pi }}=1 \end{aligned}$$
(14)
$$\begin{aligned}&q-\frac{1}{4}+\frac{\lambda ^4}{4}\nonumber \\&\quad = \frac{9}{8}\frac{\lambda q}{\sqrt{\pi }}+\frac{17}{16}\frac{q}{\lambda \sqrt{\pi }}-\frac{1}{16}\frac{q\lambda ^3}{\sqrt{\pi }}+\frac{1}{2}\tilde{p}_{0}\lambda ^4-\frac{1}{6}\tilde{p}_{0}-\frac{1}{2}p_{0} \end{aligned}$$
(15)
$$\begin{aligned}&\qquad -\frac{1}{4}q+\frac{1}{64}+\frac{1}{4}\lambda ^4q-\frac{1}{16}\lambda ^4-\frac{3}{32}\lambda ^8=-\frac{25}{48}\frac{\lambda q}{\sqrt{\pi }}-\frac{11}{64}\frac{q}{\lambda \sqrt{\pi }}+\frac{17}{32}\frac{q\lambda ^3}{\sqrt{\pi }}\nonumber \\&\qquad + \frac{1}{64}\frac{q\lambda ^7}{\sqrt{\pi }}-\frac{1}{8}\tilde{p}_{0}\lambda ^8-\frac{1}{12}\tilde{p}_{0}\lambda ^4+\frac{1}{120}\tilde{p}_{0}+\frac{1}{24}p_{0}. \end{aligned}$$
(16)

Thus, there are seven equations: Eq. (6) (two equations), Eq. (7) (two equations), and Eqs. (14), (15) and (16). The parameters to be determined are q, \(p_{0}\), \(p_{1}\), \(p_{2}\), \(\tilde{p}_{0}\), \(\tilde{p}_{1}\) and \(\tilde{p}_{2}\). The parameter \(\lambda \) will be obtained by a different method, as explained later. Therefore, the number of parameters equals the number of equations.

In this way, all the p\('\)s and q can be obtained as functions of \(\lambda \). Introducing these values in Eq. (5), the approximation \(\tilde{J}_{0}(x,\lambda )\) is obtained. The parameter \(\lambda \) is determined as the value that minimizes the maximum absolute error of \(|J_{0}-\tilde{J}_{0}|\). Thus, for each value of \(\lambda \), the maximum absolute error \(\tilde{\epsilon }_{m}(\lambda )\) is computed, and the best value \(\lambda _{0}\) is the one that produces the minimum of \(\tilde{\epsilon }_{m}(\lambda )\); the corresponding error is denoted by \(\tilde{\epsilon }_{m}(\lambda _{0})\). In this way, the value of \(\lambda _{0}\) is found to be 0.8152. The maximum absolute error for this value of \(\lambda _{0}\) is 0.00971, and the maximum relative error for the zeros is 0.000077. Introducing this value of \(\lambda \) in the parameters, the first approximation is obtained. The parameter values are given in Table 1.
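The first approximation can be reproduced end to end. The sketch below (added as an illustration of the procedure, not the authors' implementation) rearranges Eqs. (14)–(16) as a \(3\times 3\) linear system in \((p_0,\tilde{p}_0,q)\), solves it at \(\lambda _0=0.8152\), builds \(\tilde{J}_0(x)\) from Eq. (5) with Eqs. (6)–(7), and checks the result against reference values of \(J_0(x)\) computed from the power series; the reference first zero of \(J_0\) is 2.404825557695773.

```python
import math

def j0_series(x, n_terms=60):
    """Reference J0 from the power series, Eq. (1); accurate for x <~ 12."""
    s, term = 0.0, 1.0
    for k in range(n_terms):
        s += term
        term *= -(x / 2) ** 2 / ((k + 1) ** 2)
    return s

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def first_approximation(lam):
    """Solve Eqs. (14)-(16), written as a linear system in (p0, p~0, q)
    after eliminating p1, p2, p~1, p~2 with Eqs. (6)-(7), and return
    the resulting J~0(x) of Eq. (5)."""
    sp, l = math.sqrt(math.pi), lam
    A = [[1.0, 1.0, -1 / (8 * l * sp)],
         [0.5, 1 / 6 - l**4 / 2,
          1 - 9 * l / (8 * sp) - 17 / (16 * l * sp) + l**3 / (16 * sp)],
         [-1 / 24, l**8 / 8 + l**4 / 12 - 1 / 120,
          -0.25 + l**4 / 4 + 25 * l / (48 * sp) + 11 / (64 * l * sp)
          - 17 * l**3 / (32 * sp) - l**7 / (64 * sp)]]
    b = [1.0, 0.25 - l**4 / 4, l**4 / 16 + 3 * l**8 / 32 - 1 / 64]
    p0, p0t, q = solve3(A, b)
    p1, p2 = l * q / sp, -q / (8 * l * sp)
    p1t, p2t = q / (l * sp), l * q / (8 * sp)

    def approx(x):
        r = math.sqrt(1 + l**4 * x * x)
        den = math.sqrt(r) * (1 + q * x * x)
        sinc = math.sin(x) / x if x else 1.0
        return ((p0 + p1 * x * x + p2 * r) * math.cos(x)
                + ((p0t + p1t * x * x) * r + p2t * x * x) * sinc) / den
    return approx

def first_zero(f, a, b, n=60):
    """Locate a sign change of f in [a, b] by bisection."""
    for _ in range(n):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

j0t = first_approximation(0.8152)
```

With these parameter values, the error over \(0<x\le 12\) stays below the 0.00971 quoted above, and the first zero of \(\tilde{J}_0\) agrees with that of \(J_0\) to well within the stated relative error.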

Table 1 Values of the parameters of the approximate \(\tilde{J}_{0}(x)\), Eq. (5), for \(\lambda =\lambda _0\)

In Fig. 1, the absolute errors as a function of x are shown for \(\lambda _{0}\)= 0.8152.

Fig. 1

\(|J_0(x)-\tilde{J}_0(x)|\) as a function of x, for \(\lambda _0 = 0.8152\)

The values of the exact function \(J_{0}(x)\) are obtained with the program MAPLE, where the Bessel functions can be determined with high precision; MAPLE is a powerful computational tool with well-established algorithms.

In Fig. 2, the relative errors of the zeros are shown for this value of \(\lambda _{0}\). The maximum errors are also clearly visible in this figure.

Fig. 2

Relative errors of the zeros as a function of the cardinal number n, for \(\lambda _{0} = 0.8152\)

It is interesting to point out that in previous works the relative errors were considered, instead of the absolute ones. Here the analysis is different because of the zeros of \(J_{0}(x)\), at which the relative error would become infinite. However, relative errors can still be analyzed for the zeros, by comparing the values of x at which the zeros of \(J_{0}(x)\) and \(\tilde{J}_{0}(x)\) appear.

To understand better the behaviour of the approximation compared with the actual function \(J_{0}(x)\), it is also interesting to calculate the relative errors at the maxima and minima of the function. This is shown in Fig. 3. It can be seen that the maximum error occurs at the first maximum of \(J_{0}(x)\), and it is about 0.01. Here the cardinal number of the maxima and minima is written as \(\tilde{n}\), to avoid confusion with the number n identifying the zeros. The errors in Fig. 3 are relative errors; thus, they are different from those in Fig. 1, which are absolute errors, and this is why \(|J_{0}-\tilde{J}_{0}|\) was used there.

Fig. 3

Relative errors of the maximum and minimum as a function of \(\tilde{n}\), for \(\lambda _{0} = 0.8152\)

3.2 Second approximation

Now let us consider a different way to obtain the equations that determine the parameters. The two Eqs. (7), obtained from the second terms of the asymptotic expansion, are not considered now. Instead, more equations derived from the power series have to be included. The new approximation \(\tilde{J}_{0}(x,\lambda )\) is obtained with the two Eqs. (6) and five additional equations coming from the power series. The five equations in this case are

$$\begin{aligned}&\tilde{p}_{0}+p_{2}+p_{0}=1 \end{aligned}$$
(17)
$$\begin{aligned}&\frac{1}{4}\lambda ^4+q-\frac{1}{4}\nonumber \\&\quad =\frac{\lambda q}{\sqrt{\pi }}+\frac{q}{\lambda \sqrt{\pi }}+\frac{1}{2}p_{2}\lambda ^4+\frac{1}{2}\tilde{p}_{0}\lambda ^4-\frac{1}{6}\tilde{p}_{0}+\tilde{p}_{2}-\frac{1}{2}p_{2}-\frac{1}{2}p_{0} \end{aligned}$$
(18)
$$\begin{aligned}&\frac{1}{4}\lambda ^4q-\frac{3}{32}\lambda ^8-\frac{1}{4}q-\frac{1}{16}\lambda ^4+\frac{1}{64}\nonumber \\&\quad = -\frac{1}{2}\frac{\lambda q}{\sqrt{\pi }}-\frac{1}{6}\frac{q}{\lambda \sqrt{\pi }}+\frac{1}{2}\frac{q\lambda ^3}{\sqrt{\pi }}-\frac{1}{8}\tilde{p}_{0}\lambda ^8-\frac{1}{4}p_{2}\lambda ^4\nonumber \\&\quad - \frac{1}{8}p_{2}\lambda ^8-\frac{1}{12}\tilde{p}_{0}\lambda ^4+\frac{1}{120}\tilde{p}_{0}-\frac{1}{6}\tilde{p}_{2}+\frac{1}{24}p_{2}+\frac{1}{24}p_{0} \end{aligned}$$
(19)
$$\begin{aligned}&\quad -\frac{1}{16}\lambda ^4q-\frac{3}{32}\lambda ^8q+\frac{7}{128}\lambda ^{12}+\frac{1}{64}q+\frac{1}{256}\lambda ^4\nonumber \\&\quad + \frac{3}{128}\lambda ^8-\frac{1}{2304}=\frac{1}{24}\frac{\lambda q}{\sqrt{\pi }}+\frac{1}{120}\frac{q}{\lambda \sqrt{\pi }}-\frac{1}{12}\frac{q\lambda ^3}{\sqrt{\pi }}\nonumber \\&\quad - \frac{1}{8}\frac{q\lambda ^7}{\sqrt{\pi }}+\frac{1}{48}\tilde{p}_{0}\lambda ^8+\frac{1}{16}\tilde{p}_{0}\lambda ^{12}+\frac{1}{16}p_{2}\lambda ^{12}+\frac{1}{16}p_{2}\lambda ^8\nonumber \\&\quad + \frac{1}{240}\tilde{p}_{0}\lambda ^4+\frac{1}{48}p_{2}\lambda ^4-\frac{1}{5040}\tilde{p}_{0}+\frac{1}{120}\tilde{p}_{2}-\frac{1}{720}p_{2}-\frac{1}{720}p_{0} \end{aligned}$$
(20)
$$\begin{aligned}&\frac{1}{256}\lambda ^4q+\frac{3}{128}\lambda ^8q+\frac{7}{128}\lambda ^{12}q-\frac{77}{2048}\lambda ^{16}-\frac{1}{2304}q\nonumber \\&\quad -\frac{1}{9216}\lambda ^4-\frac{3}{2048}\lambda ^8-\frac{7}{512}\lambda ^{12}+\frac{1}{147456}\nonumber \\&\qquad =-\frac{1}{720}\frac{\lambda q}{\sqrt{\pi }}-\frac{1}{5040}\frac{q}{\lambda \sqrt{\pi }}+\frac{1}{240}\frac{q\lambda ^3}{\sqrt{\pi }}+\frac{1}{48}\frac{q\lambda ^7}{\sqrt{\pi }}\nonumber \\&\quad +\frac{1}{16}\frac{q\lambda ^{11}}{\sqrt{\pi }}-\frac{1}{10080}\tilde{p}_{0}\lambda ^4-\frac{1}{960}\tilde{p}_{0}\lambda ^8-\frac{1}{96}\tilde{p}_{0}\lambda ^{12}\nonumber \\&\quad -\frac{5}{128}\tilde{p}_{0}\lambda ^{16}-\frac{1}{1440}p_{2}\lambda ^4-\frac{1}{32}p_{2}\lambda ^{12}-\frac{1}{192}p_{2}\lambda ^8\nonumber \\&\quad -\frac{5}{128}p_{2}\lambda ^{16}+\frac{1}{362880}\tilde{p}_{0}-\frac{1}{5040}\tilde{p}_{2}\nonumber \\&\quad +\frac{1}{40320}p_{2}+\frac{1}{40320}p_{0}. \end{aligned}$$
(21)

The procedure is now like in the preceding case, and the best \(\lambda \) is \(\lambda _{1} = 0.865\). The maximum absolute error is now 0.00009, and the maximum relative error of the zeros is 0.00004. Introducing this value of \(\lambda _{1}\), the parameters q and p\('\)s given in Table 2 are obtained.

Table 2 Values of the parameters of the approximate \(\tilde{J}_{0}(x)\), Eq. (5), for \(\lambda =\lambda _1\)

Figure 4 shows the absolute errors for this approximation. The relative errors of the zeros appear in Fig. 5, where the maximum error and the errors for each zero are shown.

Fig. 4

\(|J_{0}(x)-\tilde{J}_{0}(x)|\) as a function of x, for \(\lambda _{1} = 0.865\)

Fig. 5

Relative errors of the zeros as a function of the cardinal number n, for \(\lambda _{1} = 0.865\)

As in the previous case, the relative errors of the maxima and minima are shown in Fig. 6. The maximum relative error is now 0.00025. Compared with the first approximation, the errors are much lower.

Fig. 6

Relative errors of the maximum and minimum as a function of \(\tilde{n}\), for \(\lambda _{1} = 0.865\)

3.3 Third approximation

As in the previous cases, the same number of parameters has to be determined. However, the way to determine them is different, and the resulting accuracy will be lower. Now the two Eqs. (6), coming from the leading terms of the asymptotic expansion, are not considered. Instead, Eqs. (17)–(21), derived from the power series, are used, together with two additional equations coming from the same series.

There is again the right number of equations when \(\lambda \) is considered as given; that is, all the parameters p\('\)s and q are obtained as functions of \(\lambda \). The determination of \(\lambda \) is done in the same way as in the preceding case, that is, by looking for the smallest of the maximum absolute errors. The value now obtained is \(\lambda _{2} = 0.625\). The values of the parameters p\('\)s and q determined with \(\lambda _{2} = 0.625\) are shown in Table 3. These are also the values used in Fig. 7.

Figure 7 shows the absolute errors as a function of the variable x, with the above value of \(\lambda \). The maximum absolute error is about 0.00025.

Fig. 7

\(|J_{0}(x)-\tilde{J}_{0}(x)|\) as a function of x, for \(\lambda _{2} = 0.625\)

The relative errors of the zeros are now shown in Fig. 8; the maximum relative errors occur at the first and fourth zeros, and they are 0.0006 and 0.00062, respectively. The behaviour of the errors of the zeros agrees with the absolute errors \(|J_{0}(x)-\tilde{J}_{0}(x)|\): the maximum values occur for small values of x, where the relative errors of the zeros are also higher, decreasing rapidly after that (Fig. 8).

Considering the three approximations, it becomes clear that the best results are those obtained with the second procedure, that is, when only the leading terms of the asymptotic expansion are used together with equations coming from the power series.

It is interesting to compare our approximations with those given by other authors. The most important of these are the polynomial approximations given on pages 369 and 370 of Abramovitz and Stegun (1970). It is not necessary to present graphs comparing errors to understand the differences and advantages of each approach. The first difference is that they use two analytic functions, one for \(-3< x < 3\) and the other for \(x > 3\); in our case, only one analytic function is needed. The second difference is that they need 21 parameters (7 parameters for the first interval and 14 for the second one), whereas in our case only 8 parameters are needed. Their advantage is a somewhat higher accuracy, since the error reported by them is \(|\epsilon | < 5\times 10^{-8}\). The third difference is that \(J_{0}(x)\) is an even function, and the analytic functions obtained here are also even, that is, valid for positive and negative values of x. In the approximations presented by Abramovitz and Stegun, the one for \(-3< x < 3\) is even, but the one for \(x > 3\) is not and, therefore, is not valid for negative values of x.
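For concreteness (an illustration added here), the small-interval piece of the Abramowitz–Stegun approximation referred to above is their polynomial 9.4.1, whose 7 coefficients cover only \(-3\le x\le 3\):

```python
def j0_as_small(x):
    """Polynomial 9.4.1 of Abramowitz & Stegun, valid for -3 <= x <= 3
    with |error| < 5e-8; a second, non-even formula is needed for x > 3."""
    t = (x / 3.0) ** 2
    return (1.0 + t * (-2.2499997 + t * (1.2656208 + t * (-0.3163866
                + t * (0.0444479 + t * (-0.0039444 + t * 0.0002100))))))
```

At \(x=1\) this reproduces \(J_0(1)=0.7651976866\) to about \(10^{-8}\), but outside \(|x|\le 3\) a separate formula must be used, which is the point of the comparison above.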

Fig. 8

Relative errors of the zeros as a function of the cardinal number n, for \(\lambda _{2} = 0.625\)

Table 3 Values of the parameters of the approximate \(\tilde{J}_{0}(x)\), Eq. (5), for \(\lambda =\lambda _2\)
Fig. 9

Relative errors of the maximum and minimum as a function of \(\tilde{n}\), for \(\lambda _{2} = 0.625\)

4 Conclusion

Three highly accurate analytic approximations have been determined for the Bessel function \(J_{0}(x)\). The three approximations have the same structure and number of parameters, but different ways of determining the parameters. The determination is performed considering both the power series and the asymptotic expansion, in such a way that the approximation can be considered a bridge between both expansions. The technique followed here is a variant of the MPQA technique, with some differences in the way the parameters are determined. In the best approximation to \(J_{0}(x)\), only the leading terms of the asymptotic expansion are used, and all the other equations come from equating the corresponding terms in the power series of the functions \(J_{0}(x)\) and \(\tilde{J}_{0}(x)\). To avoid non-linear equations in the parameters, it is convenient to multiply both functions by factor functions that rationalize the approximation \(\tilde{J}_{0}(x)\). The zeros of the approximations almost coincide with those of \(J_{0}(x)\). The maximum absolute error of the best approximation is 0.00009, and the maximum relative error of the zeros is 0.00004. Both errors are so low that this approximate function can be used with confidence in almost all the applications of the Bessel function \(J_{0}(x)\). Furthermore, its calculation is very simple and can be done with a simple pocket calculator.

The best approximation (the second one) is obtained using the leading terms of the asymptotic expansion together with the right number of additional terms coming from the power series.