1 Introduction

The conventional Gaussian distribution parameter estimation method applies the basic properties of the normal distribution to estimate its parameters in the ordinary case. When this conventional approach is applied to the normal distribution parameters of carbonation depth, however, its hit ratio is low, because many factors influence carbonation depth [1, 2]. This paper therefore builds a normal distribution parameter estimation model: the least squares method is used to construct the model framework, and a conditional Monte Carlo nonparametric test is used to avoid plugging in unknown parameters when expressing the normal distribution parameters. The parameters of the network signal are then estimated through experimental analysis and calculated from the characteristics of their normal distribution. Based on the calculation of the linear deviation of the normal distribution parameters and the determination of their maximum likelihood values, a Bayesian function of carbonation depth completes the deviation estimation of the normal distribution of carbonation depth. To verify the effectiveness of the designed method, a carbide-type simulation test environment applies two different normal distribution parameter estimation methods to the carbonation depth estimation task. The test results show that the proposed method is highly effective.

The rest of this paper is organized as follows: Section 2 discusses the construction of the normal distribution parameter estimation model, followed by the normal distribution parameter estimation of carbonization depth in Section 3. The example analysis is discussed in Section 4. Section 5 concludes the paper with a summary and future research directions.

2 The proposed algorithm

2.1 Establishing the framework of the normal distribution parameter estimation model

The normal distribution was first proposed in 1733 by Abraham de Moivre, a French-born mathematician and astronomer working in England. Laplace (Pierre-Simon, Marquis de Laplace) and Gauss (Carl Friedrich Gauss) also made contributions to its study. Gauss first applied the normal distribution in astronomical research [3], where it was used to study the theory of errors. Laplace connected it with the central limit theorem and first put forward the theory of elementary errors. After this pioneering work, more and more researchers adopted the normal distribution, and it is now widely used in the parameter estimation of carbonation depth. Through the continued efforts of scholars, the least squares method was eventually developed and applied in probability theory and mathematical statistics; the normal distribution itself is widely used in practice.

Assuming that the random variable x is normally distributed, the probability density function is

\( f(x)=\frac{1}{\sqrt{2\pi }\sigma}\exp \left\{-\frac{{\left(x-\mu \right)}^2}{2{\sigma}^2}\right\},-\infty <x<\infty, \) (1)

in which μ is called the mean parameter and σ the scale (standard deviation) parameter, with −∞ < μ < ∞ and σ > 0; this is written x~N(μ, σ2). The probability density function graph of the normal distribution is shown in Fig. 1. The normal distribution function is [4]

Fig. 1
figure 1

Probability density function graph of normal distribution

\( F(x)=\frac{1}{\sqrt{2\pi }\sigma}{\int}_{-\infty}^x{\mathrm{e}}^{-\frac{{\left(t-\mu \right)}^2}{2{\sigma}^2}} dt \) (2)

As can be seen from the graph of the probability density function in Fig. 1, the f(x) curve is a bell curve symmetric about x = μ: low at both ends, high in the middle, and symmetric on both sides. When x = μ, f(x) attains its maximum \( 1/\left(\sigma\sqrt{2\pi}\right) \). As x approaches ± ∞, f(x) approaches 0. The curve has inflection points at x = μ ± σ. It can be seen from Fig. 2 that the distribution function is a smooth S-shaped curve.
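The properties just listed (symmetry about μ, peak height \( 1/\left(\sigma\sqrt{2\pi}\right) \), vanishing tails) can be checked numerically. The following is a minimal sketch using only the Python standard library; the values μ = 2.0 and σ = 1.5 are illustrative choices, not values from the paper:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of N(mu, sigma^2) at x (Eq. 1)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

mu, sigma = 2.0, 1.5

# Maximum is attained at x = mu and equals 1 / (sigma * sqrt(2*pi)).
peak = normal_pdf(mu, mu, sigma)
assert abs(peak - 1 / (sigma * math.sqrt(2 * math.pi))) < 1e-12

# Symmetry about x = mu: f(mu + d) == f(mu - d).
assert abs(normal_pdf(mu + 0.7, mu, sigma) - normal_pdf(mu - 0.7, mu, sigma)) < 1e-12

# Tails vanish: f(x) -> 0 as |x| grows.
assert normal_pdf(mu + 20 * sigma, mu, sigma) < 1e-80
```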

Fig. 2
figure 2

Distribution function graph of normal distribution

Fig. 3 shows the normal distribution graphs for σ = 0.5 and different values of μ. With the scale parameter fixed, changing the mean parameter translates the curve along the x axis without changing its shape: μ determines the position of the probability density function of the normal distribution.

Fig. 3
figure 3

Normal distribution graphs with scale parameter σ = 0.5 and varying μ

Fig. 4 shows the normal distribution graphs for μ = 0 and different values of σ. With the mean parameter fixed, changing the scale parameter σ leaves the position of the probability density function unchanged and only stretches or compresses it vertically: the curve becomes flatter as σ grows and more peaked as σ shrinks.

Fig. 4
figure 4

Normal distribution graphs with mean parameter μ = 0 and varying σ
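The two effects described for Figs. 3 and 4, with μ translating the curve and σ rescaling it vertically, can be illustrated numerically. A small sketch (the parameter values are illustrative):

```python
import math

def normal_pdf(x, mu, sigma):
    """Probability density of N(mu, sigma^2) at x (Eq. 1)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Changing mu only translates the curve along the x axis:
# the density at distance d from the mean is the same for every mu.
d = 0.8
assert abs(normal_pdf(0 + d, 0, 0.5) - normal_pdf(3 + d, 3, 0.5)) < 1e-12

# Changing sigma rescales the curve vertically: the peak height
# 1/(sigma*sqrt(2*pi)) falls as sigma grows, so the curve flattens.
peaks = [normal_pdf(0, 0, s) for s in (0.5, 1.0, 2.0)]
assert peaks[0] > peaks[1] > peaks[2]
```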

Finally, the normal distribution parameter estimation model framework can be expressed as follows [5]:

If the random variable x~N(μ, σ2), then \( \frac{x-\mu }{\sigma}\sim N\left(0,1\right) \). Let x1, x2, …, xn be a sample from the normal population x~N(μ, σ2).

The normal distribution parameter estimation model framework can be expressed as

$$ \overline{X}=\frac{1}{n}\sum \limits_{i=1}^n{x}_i $$
(3)
$$ {s}^2=\frac{1}{n-1}\sum \limits_{i=1}^n{\left({x}_i-\overline{X}\right)}^2 $$
(4)

These statistics satisfy:

(1) \( \overline{X} \) and s2 are mutually independent,

(2) \( \overline{X}\sim N\left(\mu, {\sigma}^2/n\right) \), and

(3) \( \frac{\left(n-1\right){s}^2}{\sigma^2}\sim {\chi}^2\left(n-1\right) \)
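Properties (1)–(3) can be verified by simulation. The sketch below draws repeated samples and checks the mean and variance of \( \overline{X} \) and of \( \left(n-1\right){s}^2/{\sigma}^2 \) against the stated distributions; the sample size, population parameters, replication count, and tolerances are illustrative choices, not values from the paper:

```python
import random
import statistics

random.seed(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 20000

means, scaled_s2 = [], []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.fmean(xs)
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    means.append(xbar)
    scaled_s2.append((n - 1) * s2 / sigma ** 2)

# (2) Xbar ~ N(mu, sigma^2/n): mean mu, variance sigma^2/n.
assert abs(statistics.fmean(means) - mu) < 0.05
assert abs(statistics.variance(means) - sigma ** 2 / n) < 0.05

# (3) (n-1)s^2/sigma^2 ~ chi^2(n-1): mean n-1, variance 2(n-1).
assert abs(statistics.fmean(scaled_s2) - (n - 1)) < 0.1
assert abs(statistics.variance(scaled_s2) - 2 * (n - 1)) < 1.5
```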

2.2 Determine the expression of normal distribution parameters

Based on the construction of the normal distribution parameter estimation model framework, the asymptotic distribution of the analysis framework depends on the unknown parameter θ. When the sample size is small, a test whose critical value is determined from the limiting distribution has low power. Studies have shown that the conditional Monte Carlo nonparametric test method can be used to avoid plugging in estimates of the unknown parameters and to improve power under small samples; related work includes Zhu & Neuhaus (2000), He & Zhu (2003), Zhu (2003), Ze & Ng (2003), and Zhu (2005) [6, 7]. Let \( {y}_i^{\ast }={V}_{i0}^{-1/2}\left({y}_i-{x}_i{\beta}_0\right)\left(i=1,2,\dots, n\right) \). Following Verbeke & Lesaffre (1996), Eqs. 5, 6, and 7 hold.

$$ \frac{\partial {L}_i\left({\theta}_0\right)}{\partial \beta }={x}_i^{\hbox{'}}{V}_{i0}^{-1/2}{y}_i^{\ast } $$
(5)
$$ \frac{\partial {L}_i\left({\theta}_0\right)}{\partial {\sigma}^2}=-\frac{1}{2}{trV}_{i0}^{-1}+\frac{1}{2}{y_i^{\ast}}^{\hbox{'}}{V}_{i0}^{-1/2}{y}_i^{\ast } $$
(6)
$$ \frac{\partial {L}_i\left({\theta}_0\right)}{\partial {\delta}_j}=-\frac{1}{2} tr\left({V}_{i0}^{-1}{z}_i\frac{\partial D}{\partial {\delta}_j}{z}_i^{\hbox{'}}\right)+\frac{1}{2}{y_i^{\ast}}^{\hbox{'}}{V}_{i0}^{-1/2}{z}_i\frac{\partial D}{\partial {\delta}_j}{z}_i^{\hbox{'}}{V}_{i0}^{-1/2}{y}_i^{\ast } $$
(7)

Therefore,

$$ {Q}_i\left({y}_i^{\ast };{\theta}_0\right)={\left(\frac{\partial {L}_i\left({\theta}_0\right)}{\partial {\beta}^{\hbox{'}}},\frac{\partial {L}_i\left({\theta}_0\right)}{\partial {\sigma}^2},\frac{\partial {L}_i\left({\theta}_0\right)}{\partial {\delta}_1},\dots, \frac{\partial {L}_i\left({\theta}_0\right)}{\partial {\delta}_k}\right)}^{\hbox{'}} $$
(8)

Let \( {u}_{0i}=u\left({\theta}_0\right)={W}_{i0}^{-1/2}{D}_0{z}_i^{\hbox{'}}{V}_{i0}^{-1/2}{y}_i^{\ast } \), that is, \( {u}_{0i}={u}_{0i}\left({y}_i^{\ast}\right) \). From formulas 3, 5, and 8, we can get

$$ {\displaystyle \begin{array}{l}{G}_n(t)=\frac{1}{n}\sum \limits_{i=1}^n\Big[\cos \left({t}^{\hbox{'}}{u}_{0i}\left({y}_i^{\ast}\right)\right)+\sin \left({t}^{\hbox{'}}{u}_{0i}\left({y}_i^{\ast}\right)\right)-\exp \left\{-\frac{{\left\Vert t\right\Vert}^2}{2}\right\}\\ {}+\left\{\frac{1}{2}{\left({t}^{\hbox{'}}{u}_{0i}\left({y}_i^{\ast}\right)\right)}^2-\frac{{\left\Vert t\right\Vert}^2}{2}-{t}^{\hbox{'}}{u}_{0i}\left({y}_i^{\ast}\right)\right\}-\exp \left\{-\frac{{\left\Vert t\right\Vert}^2}{2}\right\}\Big]\\ {}+{a}^{\hbox{'}}\left(t;{\theta}_0\right){\varOmega}_0^{-1}\frac{1}{n}\sum \limits_{i=1}^n{Q}_i\left({y}_i^{\ast };{\theta}_0\right)+ op\left({n}^{-1/2}\right)\end{array}} $$

Under the normality hypothesis, \( {y}_i^{\ast } \) follows the standard normal distribution. Therefore, the following conditional Monte Carlo method is used to express the normal distribution parameter estimation model [8, 9]:

First, a sample set y0n = (y01, y02, …, y0n) is generated, where y01, y02, …, y0n are mutually independent and follow N(0, Il1), …, N(0, Ilm). In other words, y01, y02, …, y0n have the same distribution as \( {y}_1^{\ast },{y}_2^{\ast },\dots, {y}_n^{\ast } \).

Then, the simulated value of Gn(t) is calculated as

$$ {\displaystyle \begin{array}{l}{G}_n\left({Y}_{0n},t\right)=\frac{1}{n}\sum \limits_{i=1}^n\Big[\cos \left({t}^{\hbox{'}}{u}_{0i}\left({y}_{0i}\right)\right)+\sin \left({t}^{\hbox{'}}{u}_{0i}\left({y}_{0i}\right)\right)-\exp \left\{-\frac{{\left\Vert t\right\Vert}^2}{2}\right\}\\ {}+\left\{\frac{1}{2}{\left({t}^{\hbox{'}}{u}_{0i}\left({y}_{0i}\right)\right)}^2-\frac{{\left\Vert t\right\Vert}^2}{2}-{t}^{\hbox{'}}{u}_{0i}\left({y}_{0i}\right)\right\}-\exp \left\{-\frac{{\left\Vert t\right\Vert}^2}{2}\right\}\Big]\\ {}+\frac{1}{n}\sum \limits_{i=1}^n{a}_i^{\hbox{'}}\left(t;\hat{\theta}\right){\hat{\varOmega}}^{-1}{Q}_i\left({y}_i^{\ast };{\theta}_0\right)\frac{1}{n}\sum \limits_{i=1}^n{Q}_i\left({y}_{0i};\hat{\theta}\right)\end{array}} $$

The corresponding test statistic of Gn(t) is \( {T}_{n,r}\left({y}_{0n}\right)=n{\int}_{R^q}{G}_n^2\left({y}_{0n},t\right){\varphi}_r(t) dt \). Repeating the simulation m times gives \( {T}_{n,r}\left({y}_{0n}^{(1)}\right),\dots, {T}_{n,r}\left({y}_{0n}^{(m)}\right) \).

Finally, the 1 − α sample quantile of the \( {T}_{n,r} \) values is calculated. In addition, the estimated p value can be calculated as shown in Eq. 9.

$$ {P}_n=k/\left(m+1\right) $$
(9)

where \( k=\#\left\{{T}_{n,r}\left({Y}_{0n}^{(j)}\right)\ge {T}_{n,r}^{(0)},j=0,1,\dots, m\right\} \) and \( {T}_{n,r}^{(0)}={T}_{n,r} \).

The expression curve of the normal distribution parameters is shown in Fig. 5.

Fig. 5
figure 5

Schematic curves of normal distribution parameters

Based on the framework of the normal distribution parameter estimation model, the expression of normal distribution parameter is determined, and the construction of normal distribution parameter estimation model is realized.

3 The normal distribution parameter estimation of carbonization depth is realized

3.1 Determine the linear deviation of normal distribution parameters

In the normal distribution parameter estimation of carbonization depth, the factors influencing the hit ratio of parameter estimation mainly include the linear deviation of the normal distribution parameters and the maximum likelihood values of the parameters. Determining the linear deviation of the normal distribution parameters means determining the deviation function and estimating its distance. To determine the linear deviation, let x1, x2, …, xn be a sample from the population with probability density function f(x| θ1, θ2, …, θk), and assume that the population origin moments up to order k exist; that is, for every j (0 < j ≤ k), μj exists. Assuming that θ1, θ2, …, θk can be expressed in terms of μ1, μ2, …, μk as θj = θj(μ1, μ2, …, μk), the estimates are given by Eq. 10 [10].

$$ {\hat{\theta}}_j={\theta}_j\left({a}_1,{a}_2,\dots, {a}_k\right),j=1,\dots, k, $$
(10)

In the equation, a1, a2, …, ak are the first k sample origin moments, \( {a}_j=\frac{1}{n}\sum \limits_{i=1}^n{x}_i^j \). Furthermore, to estimate a function η = g(θ1, θ2, …, θk) of θ1, θ2, …, θk, a direct estimate is given by Eq. 11.

$$ \hat{\eta}=g\left({\hat{\theta}}_1,{\hat{\theta}}_2,\dots, {\hat{\theta}}_k\right) $$
(11)

When k = 1, the sample mean is usually used to estimate the unknown parameter. When k = 2, the unknown parameters can be estimated from the first- and second-order origin moments [11, 12].

Assume x1, x2, …, xn are a random sample from x~N(μ, σ2), and define θ1 = μ, θ2 = σ2. Then

$$ {a}_1=\overline{X},{a}_2=\frac{1}{n}\sum \limits_{i=1}^n{x}_i^2 $$
(12)

The moment equations to solve are therefore

$$ \overline{X}=\mu, \frac{1}{n}\sum \limits_{i=1}^n{x}_i^2={\mu}^2+{\sigma}^2 $$
(13)

Solving for μ and σ2 gives the moment estimates:

$$ {\hat{\mu}}_U=\overline{X},{\hat{\sigma}}_U^2=\frac{1}{n}\sum \limits_{i=1}^n{\left({x}_i-\overline{X}\right)}^2 $$
(14)

The estimates are revised to unbiased estimates:

$$ \overline{X}=\frac{1}{n}\sum \limits_{i=1}^n{x}_i,{s}^2=\frac{1}{n-1}\sum \limits_{i=1}^n{\left({x}_i-\overline{X}\right)}^2 $$
(15)

Here \( \overline{X} \) and s2 are mutually independent, with \( \overline{X}\sim N\left(\mu, {\sigma}^2/n\right) \) and \( \frac{\left(n-1\right){s}^2}{\sigma^2}\sim {\chi}^2\left(n-1\right) \). Consequently, \( E\left(\overline{X}\right)=\mu, E\left({s}^2\right)={\sigma}^2 \).

The linear deviation functions of the normal distribution parameters can then be obtained as

$$ {\hat{\mu}}_{UE}=\frac{1}{n}\sum \limits_{i=1}^n{x}_i=\overline{X} $$
(16)
$$ {\hat{\sigma}}_{UE}^2=\frac{1}{n-1}\sum \limits_{i=1}^n{\left({x}_i-\overline{X}\right)}^2={s}^2 $$
(17)
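The moment estimates (Eqs. 12–14) and the unbiased revision (Eqs. 15–17) can be compared directly. A minimal sketch; the data values are illustrative, not from the paper:

```python
import statistics

def moment_estimates(xs):
    """Method-of-moments estimates for N(mu, sigma^2) (Eqs. 12-14)."""
    n = len(xs)
    a1 = sum(xs) / n                      # first sample origin moment
    a2 = sum(x * x for x in xs) / n       # second sample origin moment
    return a1, a2 - a1 * a1               # mu_hat and biased (1/n) sigma^2_hat

def unbiased_estimates(xs):
    """Bias-corrected estimates Xbar and s^2 (Eqs. 15-17)."""
    xbar = sum(xs) / len(xs)
    s2 = sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1)
    return xbar, s2

xs = [2.1, 1.9, 2.4, 2.0, 1.6]
mu_m, var_m = moment_estimates(xs)
mu_u, var_u = unbiased_estimates(xs)

assert abs(mu_m - mu_u) < 1e-12                 # same estimate of the mean
# The unbiased revision rescales the moment estimate: s^2 = n/(n-1) * var_m.
assert abs(var_u - var_m * len(xs) / (len(xs) - 1)) < 1e-12
assert abs(var_u - statistics.variance(xs)) < 1e-12
```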

3.2 Determine the maximum likelihood value of normal distribution parameters

Let f(x, θ) be the probability density function of the population, where θ ∈ Θ is a parameter vector consisting of one or more unknown parameters and Θ is the parameter space. If x1, x2, …, xn are samples from the population, the joint probability density function of the sample, written L(θ; x1, x2, …, xn) and abbreviated L(θ), is given by Eq. 18:

$$ L\left(\theta \right)=L\left(\theta; {x}_1,{x}_2,\dots, {x}_n\right)=f\left({x}_1,\theta \right)f\left({x}_2,\theta \right)\dots f\left({x}_n,\theta \right) $$
(18)

In this equation, L(θ) is called the sample likelihood function. If a statistic \( \hat{\theta}=\hat{\theta}\left({x}_1,{x}_2,\dots, {x}_n\right) \) satisfies \( L\left(\hat{\theta}\right)=\underset{\theta \in \Theta}{\max }L\left(\theta \right) \), then \( \hat{\theta} \) is called the maximum likelihood estimate of θ, abbreviated MLE [13, 14].

Assuming x1, x2, …, xn are samples from x~N(μ, σ2), which is the normal population, the joint probability density function is

$$ L\left(\theta \right)=\prod \limits_{i=1}^nf\left({x}_i;\mu, {\sigma}^2\right)={\left(\frac{1}{\sqrt{2\pi }\sigma}\right)}^n\exp \left\{-\frac{\sum \limits_{i=1}^n{\left({x}_i-\mu \right)}^2}{2{\sigma}^2}\right\} $$
(19)

The logarithmic likelihood function is

$$ \ln L\left(\theta \right)=-\frac{n}{2}\ln \left(2{\pi \sigma}^2\right)-\frac{1}{2{\sigma}^2}\sum \limits_{i=1}^n{\left({x}_i-\mu \right)}^2 $$
(20)

Taking derivatives with respect to the two parameters and setting them to zero gives

$$ \frac{\partial \ln L\left(\theta \right)}{\partial \mu }=\frac{1}{\sigma^2}\sum \limits_{i=1}^n\left({x}_i-\mu \right)=0 $$
(21)
$$ \frac{\partial \ln L\left(\theta \right)}{\partial {\sigma}^2}=\frac{-n}{2{\sigma}^2}+\frac{1}{2{\sigma}^4}\sum \limits_{i=1}^n{\left({x}_i-\mu \right)}^2=0 $$
(22)

The maximum likelihood estimates of the normal distribution parameters are

$$ {\hat{\mu}}_{MLE}=\frac{1}{n}\sum \limits_{i=1}^n{x}_i=\overline{X} $$
(23)
$$ {\hat{\sigma}}_{MLE}^2=\frac{1}{n}\sum \limits_{i=1}^n{\left({x}_i-\overline{X}\right)}^2=\frac{n-1}{n}{s}^2 $$
(24)
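Equations 23–24 can be checked numerically. The sketch below computes the MLEs, confirms the relation \( {\hat{\sigma}}_{MLE}^2=\frac{n-1}{n}{s}^2 \), and verifies that perturbing either estimate lowers the log-likelihood of Eq. 20; the sample values are illustrative:

```python
import math

def normal_mle(xs):
    """Maximum likelihood estimates for N(mu, sigma^2) (Eqs. 23-24)."""
    n = len(xs)
    mu_hat = sum(xs) / n
    var_hat = sum((x - mu_hat) ** 2 for x in xs) / n   # 1/n, not 1/(n-1)
    return mu_hat, var_hat

def log_likelihood(xs, mu, var):
    """Log-likelihood of Eq. 20."""
    n = len(xs)
    return -0.5 * n * math.log(2 * math.pi * var) - sum(
        (x - mu) ** 2 for x in xs
    ) / (2 * var)

xs = [2.1, 1.9, 2.4, 2.0, 1.6]
mu_hat, var_hat = normal_mle(xs)

# The MLE relates to the unbiased s^2 by var_hat = (n-1)/n * s^2 (Eq. 24).
s2 = sum((x - mu_hat) ** 2 for x in xs) / (len(xs) - 1)
assert abs(var_hat - (len(xs) - 1) / len(xs) * s2) < 1e-12

# The MLE maximizes the log-likelihood: nudging either parameter lowers it.
best = log_likelihood(xs, mu_hat, var_hat)
for dmu, dvar in [(0.05, 0), (-0.05, 0), (0, 0.02), (0, -0.02)]:
    assert log_likelihood(xs, mu_hat + dmu, var_hat + dvar) < best
```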

3.3 The Bayesian function of carbonization depth is established to estimate the parameters

Bayesian statistics draws statistical inferences from three kinds of information: prior knowledge about the population, the sample information, and the Bayesian function of the normal distribution of carbonation depth. The inference relies on the linear deviation of the normal distribution parameters, their maximum likelihood values, and the discriminant. The most fundamental Bayesian viewpoint is that any unknown quantity θ can be regarded as a random variable, and the probability distribution describing the uncertainty about θ is called the prior distribution [15, 16]. The form and process of the Bayesian formula are as follows:

First, let f(x; θ) denote the density function of the population, which depends on the parameter θ. Since θ in the parameter space Θ is a random variable, f(x; θ) represents the conditional density of X when θ is fixed. In Bayes' theorem it is written f(x| θ), the conditional probability density function of the population X given a specific value of the random variable θ.

Then, the prior distribution π(θ) is selected according to the prior information of parameters θ.

Next, from the Bayesian point of view, the sample x = (x1, x2, …, xn) is produced in two steps: first, a value θ' is generated from the prior distribution π(θ) selected in the previous step; then x = (x1, x2, …, xn) is generated from f(x; θ'). At this point, the joint conditional probability function of the sample can be obtained:

$$ f\left(x|\theta \hbox{'}\right)=f\left({x}_1,{x}_2,\dots, {x}_n|\theta \hbox{'}\right)=\prod \limits_{i=1}^nf\left({x}_i|\theta \hbox{'}\right) $$
(25)

In Eq. (25), the sample information and the population information are combined, so it is called the likelihood function. Because θ' is unknown and merely hypothesized from the selected prior distribution, all possible values of θ' must be considered, giving the joint distribution of the sample x and the parameter θ:

$$ h\left(x,\theta \right)=f\left(x|\theta \right)\pi \left(\theta \right) $$
(26)

Finally, the expression above combines the three kinds of available information, from which the statistical inference about the unknown parameter θ is computed. With no sample information, the parameter can be judged only from the prior distribution. After the sample observation x = (x1, x2, …, xn) is obtained, inference proceeds from h(x, θ), which is decomposed as

$$ h\left(x,\theta \right)=\pi \left(\theta |x\right)m(x) $$
(27)

where m(x) is the marginal density function:

$$ m(x)={\int}_{\Theta}h\left(x,\theta \right) d\theta ={\int}_{\Theta}f\left(x|\theta \right)\pi \left(\theta \right) d\theta $$
(28)

m(x) contains no information about θ; the inference about θ is made through π(θ| x), which is given by

$$ \pi \left(\theta |x\right)=\frac{h\left(x,\theta \right)}{m(x)}=\frac{f\left(x|\theta \right)\pi \left(\theta \right)}{\int_{\Theta}f\left(x|\theta \right)\pi \left(\theta \right) d\theta}= cf\left(x|\theta \right)\pi \left(\theta \right) $$
(29)

In this equation, c does not depend on θ. Equation 29 is the probability density form of the Bayesian formula. The conditional distribution of the parameter θ given the sample x is called the posterior distribution. It concentrates all prior information about the population, the sample, and the parameters, and excludes all information irrelevant to the parameters. Therefore, statistical inference based on the posterior distribution π(θ| x) is more effective [17].

Let \( {\hat{\theta}}_B \) be the Bayesian estimate of θ; it synthesizes the information of the posterior distribution, extracted from π(θ| x), to obtain \( {\hat{\theta}}_B \). When the loss function is squared loss, the usual standard of Bayesian estimation is to minimize the posterior mean squared error (MSE) criterion:

\( MSE\left({\hat{\theta}}_B|x\right)={E}^{\theta \mid x}\left({\hat{\theta}}_B-\theta \right)\hbox{'}\left({\hat{\theta}}_B-\theta \right) \)

$$ ={\int}_{\Theta}{\left({\hat{\theta}}_B-\theta \right)}^2\pi \left(\theta |x\right) d\theta $$
$$ ={\hat{\theta}}_B^2-2{\hat{\theta}}_B{\int}_{\Theta}\theta \pi \left(\theta |x\right) d\theta +{\int}_{\Theta}{\theta}^2\pi \left(\theta |x\right) d\theta $$

Here Eθ ∣ x denotes expectation with respect to the posterior distribution. The expression is a quadratic trinomial in \( {\hat{\theta}}_B \) with positive leading coefficient, so it has a minimum, attained at

$$ {\hat{\theta}}_B={\int}_{\Theta}\theta \pi \left(\theta |x\right) d\theta ={E}^{\theta \mid x}\left(\theta |x\right) $$
(30)
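Equation 30, which states that the posterior mean minimizes the posterior mean squared error, can be checked on a discretized posterior. The grid and the particular posterior shape below are arbitrary illustrations, not quantities from the paper:

```python
import math

# Discretized check that the posterior mean minimizes the posterior
# expected squared error (Eq. 30), using an arbitrary grid posterior.
thetas = [i / 100 for i in range(-300, 301)]          # grid over Theta
weights = [math.exp(-(t - 0.7) ** 2) for t in thetas]  # unnormalized pi(theta|x)
z = sum(weights)
post = [w / z for w in weights]                        # normalized posterior

post_mean = sum(t * p for t, p in zip(thetas, post))

def mse(est):
    """Posterior expected squared error of a point estimate."""
    return sum((est - t) ** 2 * p for t, p in zip(thetas, post))

# Any other estimate has larger posterior MSE than the posterior mean.
assert mse(post_mean) < mse(post_mean + 0.1)
assert mse(post_mean) < mse(post_mean - 0.1)
assert mse(post_mean) < mse(0.0)
```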

It follows that, under the mean squared error criterion, the Bayesian estimate of the parameter θ is the posterior mean, which minimizes the posterior mean squared error. For the Bayesian estimation of the normal distribution parameters, according to this principle, the estimate is the expectation under the posterior distribution. In this paper, the calculation of the posterior distribution is simplified by making full use of the sample statistics [18]:

$$ \left\{\begin{array}{l}{\hat{\mu}}_B=\iint \mu \pi \left(\theta |Y\right) d\theta \\ {}{\hat{\sigma}}_B^2=\iint {\sigma}^2\pi \left(\theta |Y\right) d\theta \end{array}\right. $$
(31)

Because of the double integral, it is difficult to compute the Bayesian estimates in closed form. Therefore, the MCMC method is used for numerical simulation under different priors [19].

Following Devroye's idea of sampling from the full conditional distributions of the parameters μ and σ2 given the sample, a Gibbs-type sampling algorithm is used to compute the Bayesian function of carbonation depth. The algorithm proceeds as follows [20]:

(1) Give the parameters μ and σ2 initial values, denoted \( {\mu}_0,{\sigma}_0^2 \), and denote the draws at step j by μj and \( {\sigma}_j^2 \);

(2) Generate μj + 1 from \( {\pi}_1\left(\mu |{\sigma}_j^2,Y\right) \);

(3) Generate \( {\sigma}_{j+1}^2 \) from \( {\pi}_1\left({\sigma}^2|{\mu}_{j+1},Y\right) \); and

(4) Repeat step 2 and step 3 N times.

The Bayesian estimate of l(μ, σ2) is then computed as \( \frac{1}{N-{m}_0}\sum \limits_{j={m}_0+1}^Nl\left({\mu}_j,{\sigma}_j^2\right) \), where the first m0 draws are discarded as burn-in. Based on the calculation of the linear deviation of the normal distribution parameters and the determination of their maximum likelihood values, Bayesian parameter estimation of carbonation depth is thus implemented [21].
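Steps (1)–(4) describe an alternating (Gibbs-type) sampling scheme. The sketch below implements it for a normal sample under a flat noninformative prior, for which the full conditionals are normal for μ and inverse-gamma for σ2; the prior choice, sample, seed, and iteration counts are assumptions made for illustration, not the paper's exact setup:

```python
import random

random.seed(2)

def gibbs_normal(y, n_iter=4000, burn=1000):
    """Gibbs sampler for (mu, sigma^2) of a normal sample under a flat
    noninformative prior -- a sketch of the alternating scheme in steps
    (1)-(4); the paper's exact priors are not specified here.
    """
    n = len(y)
    ybar = sum(y) / n
    mu, var = ybar, 1.0                        # step (1): initial values
    mus, vars_ = [], []
    for j in range(n_iter):
        # step (2): mu ~ pi(mu | sigma_j^2, Y) = N(ybar, sigma_j^2 / n)
        mu = random.gauss(ybar, (var / n) ** 0.5)
        # step (3): sigma^2 ~ pi(sigma^2 | mu, Y) = InvGamma(n/2, SS/2),
        # drawn as the reciprocal of a Gamma(n/2, scale=2/SS) variate
        ss = sum((yi - mu) ** 2 for yi in y)
        var = 1.0 / random.gammavariate(n / 2, 2.0 / ss)
        if j >= burn:                           # step (4): discard burn-in m0
            mus.append(mu)
            vars_.append(var)
    # Bayesian estimates: posterior means over the retained draws
    return sum(mus) / len(mus), sum(vars_) / len(vars_)

y = [random.gauss(3.0, 1.5) for _ in range(200)]   # synthetic data, true mu=3, sd=1.5
mu_hat, var_hat = gibbs_normal(y)
assert abs(mu_hat - 3.0) < 0.5
assert abs(var_hat - 1.5 ** 2) < 0.8
```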

4 Experiment test and result analysis

To verify the validity of the carbonation depth deviation estimation of the normal distribution proposed in this paper, simulation experiments were analyzed [22]. In the trial, different carbide types were used as test objects, and the normal distribution parameter estimation of carbonation depth was simulated. Running the simulation over different carbide types and depths, as well as different environments, guarantees the validity of the test. The conventional Gaussian distribution parameter estimation method was used as the comparison object, and the simulation results were compared. The test data are presented in the same charts, and the test conclusion is drawn by calculating the percentage of normal distribution parameters [23].

4.1 Data preparation

To ensure the accuracy of the simulation test, different carbide types were provided as test objects during the simulation. Two different normal distribution parameter estimation methods were applied in the carbonation depth simulation experiment. Because the two methods analyze the data differently, their results differ; the test process therefore keeps the environmental parameters fixed. The parameter settings of the test are shown in Table 1.

Table 1 Simulation test parameter

The two normal distribution estimation methods were run in the operating environment described above. The loaded carbide-type simulation parameters are as follows (Table 2):

Table 2 Carbon-type simulation parameters

4.2 Test process design

To verify the hit ratio of the normal distribution parameters for the two estimation methods, an integrity test and an error rate test of the normal distribution estimation results were carried out, and both sets of results were recorded. The error rate of the normal distribution parameters was calculated according to the probability formula of the normal distribution parameters and then compared.

First, the prepared data were input into the computer simulation system, which was configured according to the requirements in Table 1 to perform the relevant operations.

Then, within the same time period, under the same test environment and the same influence parameters, the integrity test of the normal distribution estimation results was carried out; separately, the error rate test of the normal distribution estimation results was conducted.

Finally, third-party analysis and recording software was used to analyze the relevant data generated by the computer simulation equipment, eliminating uncertainty introduced by operator and equipment factors. The simulation test of the normal distribution parameters of carbonization depth was carried out for the different carbonization types and the two estimation methods. The results are shown in the comparison curves of this test, weighted analysis is applied, and the experimental results are obtained with the normal distribution parameter hit ratio calculation formula.

4.3 Test analysis of normal distribution estimation results

In the test process, two different carbonization types and different normal distribution parameter estimation methods were used to carry out the integral test analysis of normal distribution estimation results. The comparison curve of the overall test results of the normal distribution estimation results is shown in Fig. 6.

Fig. 6
figure 6

The curve of comparison results of the whole test with normal distribution estimation results

According to the comparison curve of the integrity test of the normal distribution estimation results, weighted analysis with the third-party analysis and recording software shows that the overall integrity result of the normal distribution parameter estimation method designed in this paper is 78.42%, while that of the traditional method is 37.68%.

4.4 Error rate test analysis of normal distribution estimation results

At the same time, the error rate test of normal distribution estimation results was conducted for different carbonization types and different normal distribution parameter estimation methods. The error rate test results comparison curve of the normal distribution estimation results is shown in Fig. 7.

Fig. 7
figure 7

Error rate test comparison curve of normal distribution estimation results

According to the comparison curve of the error rate test of the normal distribution estimation results, the weighted analysis by the third-party analysis and recording software gives an error rate of 9.8% for the method designed in this paper, versus 23.7% for the traditional normal distribution parameter estimation method.

4.5 Normal distribution parameter hit ratio calculation

The error rate of normal distribution estimation results and normal distribution estimation results were substituted into the normal distribution parameter hit ratio calculation formula. Its normal distribution parameter hit ratio calculation formula is as follows:

\( \chi =\frac{1}{n}\sum \limits_{i=1}^n\left({C}_i-{KQ}_i\right), \) (32)

in which C represents the integrity test result of the normal distribution estimation, Q represents the error rate of the normal distribution estimation result, K represents the simulation coefficient of the test and is taken as 0.98 in this paper, and n represents the number of trials, taken as 400.

The proposed method is denoted χ1 and the conventional method χ2. If Δχ = χ1 − χ2 is positive, the proposed method improves the hit ratio; if it is negative, the hit ratio decreases. Δχ is computed from Eq. 32 as follows:

$$ {\displaystyle \begin{array}{l}\Delta \chi ={\chi}_1-{\chi}_2\\ {}=\frac{1}{n}\sum \limits_{i=1}^n\left({C}_{1i}-{KQ}_{1i}\right)-\frac{1}{n}\sum \limits_{i=1}^n\left({C}_{2i}-{KQ}_{2i}\right)\\ {}=0.221209\end{array}} $$

Compared with the conventional parameter estimation method, the proposed parameter estimation method increases the hit ratio by 22.12%, which is suitable for the normal distribution parameter estimation of carbonization depth.
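Equation 32 and the comparison Δχ can be sketched as follows. The per-trial series C and Q below are synthetic stand-ins (only K = 0.98 and n = 400 come from the text), so the resulting Δχ is illustrative and will not reproduce the paper's 0.221209:

```python
# Hedged numerical sketch of Eq. (32) and the comparison Delta_chi.
K = 0.98   # simulation coefficient from the text

def hit_ratio(C, Q, k=K):
    """chi = (1/n) * sum_i (C_i - K * Q_i), Eq. (32)."""
    n = len(C)
    return sum(c - k * q for c, q in zip(C, Q)) / n

# Illustrative per-trial integrity results C and error rates Q, n = 400.
C1, Q1 = [0.78] * 400, [0.098] * 400   # proposed method (cf. Sects. 4.3-4.4)
C2, Q2 = [0.38] * 400, [0.237] * 400   # conventional method

delta = hit_ratio(C1, Q1) - hit_ratio(C2, Q2)
# Positive Delta_chi means the proposed method has the higher hit ratio.
assert delta > 0
```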

5 Conclusion

This paper proposes a normal distribution method to determine the deviation estimation of carbonation depth. Based on the construction of the normal distribution parameter estimation model and the calculation of the linear deviation of the normal distribution parameters, the maximum likelihood values of the parameters are determined. The normal distribution parameter estimation of carbonization depth is then realized by using the Bayesian function of carbonization depth. The experimental data show that the proposed method is highly effective. It is hoped that this study can provide a theoretical basis for the normal distribution parameter estimation of carbonization depth.