Article

Estimation of Entropy for Inverse Lomax Distribution under Multiple Censored Data

by Rashad A. R. Bantan 1, Mohammed Elgarhy 2, Christophe Chesneau 3,* and Farrukh Jamal 4
1 Department of Marine Geology, Faculty of Marine Science, King AbdulAziz University, Jeddah 21551, Saudi Arabia
2 Valley High Institute for Management Finance and Information Systems, Obour, Qaliubia 11828, Egypt
3 Department of Mathematics, Université de Caen, LMNO, Campus II, Science 3, 14032 Caen, France
4 Department of Statistics, Govt. S.A Postgraduate College Dera Nawab Sahib, Bahawalpur, Punjab 63100, Pakistan
* Author to whom correspondence should be addressed.
Entropy 2020, 22(6), 601; https://doi.org/10.3390/e22060601
Submission received: 6 April 2020 / Revised: 18 May 2020 / Accepted: 25 May 2020 / Published: 28 May 2020

Abstract:
The inverse Lomax distribution has been widely used in many applied fields such as reliability, geophysics, economics and engineering sciences. In this paper, an unexplored practical problem involving the inverse Lomax distribution is investigated: the estimation of its entropy when multiple censored data are observed. To reach this goal, the entropy is defined through the Rényi and q-entropies, and we estimate them by combining the maximum likelihood and plug-in methods. Then, numerical results are provided to show the behavior of the estimates at various sample sizes, with the determination of the mean squared errors, two-sided approximate confidence intervals and the corresponding average lengths. Our numerical investigations show that, when the sample size increases, the values of the mean squared errors and average lengths decrease. Also, when the censoring level decreases, the considered Rényi and q-entropy estimates approach the true value. The obtained results validate the usefulness and efficiency of the method. An application to two real life data sets is given.

1. Introduction

Entropy is one of the most popular measures of uncertainty. As former mathematical work, Reference [1] proposed a theory on the concept of entropy, with numerical indicators as well. This theory was enhanced by numerous other entropy-like measures, arising from various applied fields. In this regard, a complete survey can be found in Reference [2]. Here, we focus our attention on two of the most famous entropy measures: the Rényi entropy by Reference [3] and the q-entropy by Reference [4] (also called Tsallis entropy). The Rényi entropy finds its source in information theory, while the q-entropy comes from statistical physics, with a plethora of applications in their respective fields. For a random variable X having the probability density function (pdf) f(x; φ), where φ represents the corresponding parameters, these two entropy measures are, respectively, defined by
I_\delta(X; \varphi) = \frac{1}{1 - \delta} \log \int_{-\infty}^{+\infty} f(x; \varphi)^{\delta} \, dx,   (1)
where δ ≠ 1 and δ > 0, and
H_q(X; \varphi) = \frac{1}{q - 1} \left( 1 - \int_{-\infty}^{+\infty} f(x; \varphi)^{q} \, dx \right),   (2)
where q ≠ 1 and q > 0. In particular, the Rényi entropy contains several well-known entropy measures, such as the Hartley entropy given by lim_{δ→0} I_δ(X; φ), the Shannon entropy obtained as lim_{δ→1} I_δ(X; φ) and the collision entropy given as I_2(X; φ).
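To fix ideas, both measures in (1) and (2) can be approximated numerically for any given pdf. The short Python sketch below is our own illustration (the function names and the exponential test density are not part of the original study); the integrals are computed by quadrature and the natural logarithm is used, so the Rényi values are expressed in nats.

import numpy as np
from scipy.integrate import quad

def renyi_entropy(pdf, delta, support=(0.0, np.inf)):
    # I_delta = (1/(1 - delta)) * log( integral of f(x)^delta dx ), with delta > 0, delta != 1
    integral, _ = quad(lambda x: pdf(x) ** delta, *support)
    return np.log(integral) / (1.0 - delta)

def q_entropy(pdf, q, support=(0.0, np.inf)):
    # H_q = (1/(q - 1)) * (1 - integral of f(x)^q dx), with q > 0, q != 1
    integral, _ = quad(lambda x: pdf(x) ** q, *support)
    return (1.0 - integral) / (q - 1.0)

expo_pdf = lambda x: np.exp(-x)        # exponential(1) density, used here as a toy example
print(renyi_entropy(expo_pdf, 0.999))  # close to 1, the Shannon entropy of exponential(1)
print(q_entropy(expo_pdf, 1.5))        # equals 2/3 for this density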
In most of the observed phenomena, at least φ is unknown, and so is the entropy. For this reason, the theoretical and practical statistical treatment of the entropy has received a lot of attention, in various settings. Among the notable studies in this regard, we may refer the reader to Reference [5] discussing the entropy of ordered sequences and order statistics, Reference [6] focusing on the entropy of upper record values, Reference [7] proposing the entropy of hybrid censoring schemes, Reference [8] discussing the entropy of progressively censored samples, Reference [9] investigating the estimation of the entropy for the Weibull distribution under the progressive censoring scheme, Reference [10] using the maximum likelihood and Bayes estimators via doubly-generalized Type II hybrid censored samples to estimate the entropy for the Rayleigh distribution, Reference [11] studying the Bayes estimation of the entropy for the Weibull distribution under the generalized progressive hybrid censoring scheme, Reference [12] providing the maximum likelihood and Bayes estimators of the entropy for the Weibull distribution under a generalized progressive hybrid censoring scheme, Reference [13] studying the estimation of the entropy for the generalized exponential distribution based on record values, Reference [14] discussing the Shannon entropy for the Lomax distribution based on the generalized progressively hybrid censoring scheme and Reference [15] investigating point and interval estimation of the Shannon entropy for the inverse Weibull distribution under multiple censored data.
This paper provides a contribution to the estimation of the Rényi and q-entropies for the inverse Lomax distribution when multiple censored data are observed. Let us now motivate the consideration of the inverse Lomax distribution in this setting, as well as the multiple censored data. First and foremost, the inverse Lomax (IL) distribution is a lifetime distribution, defined as the distribution of the reciprocal of a random variable following the famous Lomax (L) distribution. Mathematically, the L distribution is defined by the cumulative distribution function (cdf) and pdf given as
F_*(x; \alpha, \theta) = 1 - \left(1 + \frac{x}{\theta}\right)^{-\alpha}, \qquad f_*(x; \alpha, \theta) = \frac{\alpha}{\theta} \left(1 + \frac{x}{\theta}\right)^{-\alpha - 1}, \qquad x, \alpha, \theta > 0,
respectively. Here, α > 0 is a shape parameter and θ > 0 is a scale parameter. The essentials of the L distribution can be found in References [16,17,18]. Thus, for a random variable X having the L distribution, the random variable Y = 1/X follows the IL distribution with cdf and pdf given by
F(x; \alpha, \theta) = \left(1 + \frac{x^{-1}}{\theta}\right)^{-\alpha}, \qquad f(x; \alpha, \theta) = \frac{\alpha}{\theta} x^{-2} \left(1 + \frac{x^{-1}}{\theta}\right)^{-\alpha - 1}, \qquad x, \alpha, \theta > 0,   (3)
respectively. Among the advantages of the IL distribution, its probability functions are tractable, it is parsimonious in parameters, and it possesses a non-monotonic hazard rate function with decreasing and upside-down bathtub shapes. The practical usefulness of the related model is illustrated in Reference [19] for its application to the analyses of geophysical data and in Reference [20] for its application in economics and actuarial sciences. Also, we refer to Reference [21] for the estimation of the reliability parameter via Type II censored samples, to Reference [22] for the estimation of the parameters based on hybrid censored samples, and to Reference [23] for the Bayesian estimation of the two-component mixture of the IL distribution under the Type I censoring scheme. Also, recent studies have proposed extensions of the IL distribution for further purposes. In this regard, let us cite Reference [24] for the inverse power Lomax (or power IL) distribution, Reference [25] for the Weibull IL distribution, Reference [26] for the alpha power transformed IL distribution, Reference [27] for the Marshall-Olkin IL distribution and Reference [28] for the odd generalized exponentiated IL distribution.
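For later use, the cdf, pdf and quantile function associated with (3) can be coded directly; solving F(x; α, θ) = u for x gives the quantile function, which in turn provides inverse-transform sampling. The Python sketch below uses our own helper names and is only an illustration of these formulas.

import numpy as np

def il_cdf(x, alpha, theta):
    # F(x; alpha, theta) = (1 + x^(-1)/theta)^(-alpha), for x > 0
    return (1.0 + 1.0 / (theta * x)) ** (-alpha)

def il_pdf(x, alpha, theta):
    # f(x; alpha, theta) = (alpha/theta) * x^(-2) * (1 + x^(-1)/theta)^(-(alpha + 1))
    return (alpha / theta) * x ** (-2.0) * (1.0 + 1.0 / (theta * x)) ** (-(alpha + 1.0))

def il_quantile(u, alpha, theta):
    # solving F(x) = u gives x = 1 / (theta * (u^(-1/alpha) - 1)), for 0 < u < 1
    return 1.0 / (theta * (u ** (-1.0 / alpha) - 1.0))

rng = np.random.default_rng(0)
sample = il_quantile(rng.uniform(size=5), alpha=1.2, theta=2.0)  # inverse-transform IL draws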
However, as far as we know, despite its interest, the estimation of entropy measures for the IL distribution, such as the Rényi and q-entropies, remains an unexplored aspect. This study fills this gap by considering this problem under the realistic scenario of multiple censored data. This scenario commonly occurs when several censoring levels naturally exist, which is often the case in applications in life testing and survival analysis. We refer the reader to References [29,30,31], as well as the recent estimation studies of Reference [32] and Reference [15]. In our statistical framework, after investigating the maximum likelihood estimates of α and θ, estimates for the Rényi and q-entropies are derived. Then, two-sided approximate confidence intervals of the Rényi and q-entropies are discussed. A complete numerical study is performed, showing the favorable behavior of the obtained estimates at various sample sizes. In particular, the mean squared errors and approximate confidence intervals along with the corresponding average lengths are used as benchmarks. Our numerical investigations show that, when the sample size increases, the values of the mean squared errors and average lengths decrease. Also, when the censoring level decreases, the considered Rényi and q-entropy estimates approach the true value. Two real life data sets, one physiological data set and one economic data set, are used to illustrate the findings.
The rest of the article is arranged as follows. The Rényi and q-entropies for the IL distribution are expressed in Section 2. Section 3 studies their estimation under multiple censored data. Simulation and numerical results are given in Section 4. An application to real data sets is presented in Section 5. The article ends with some concluding remarks in Section 6.

2. Expressions of the Rényi and q-Entropies

Let X be a random variable following the IL distribution with parameters α and θ. Then, by (1) and (3) with φ = (α, θ), the Rényi entropy of X is given by
I_\delta(X; \alpha, \theta) = \frac{1}{1 - \delta} \log \int_{0}^{+\infty} \alpha^{\delta} \theta^{-\delta} x^{-2\delta} \left(1 + \frac{x^{-1}}{\theta}\right)^{-\delta(\alpha + 1)} dx,
where δ ≠ 1 and δ > 0. It follows from standard equivalence results and the criterion for Riemann integrability that I_δ(X; α, θ) exists if and only if 2δ − 1 > 0 and δ(α − 1) + 1 > 0. So the final conditions are δ ≠ 1, δ > 1/2 and δ(α − 1) + 1 > 0. Note that the last condition is always satisfied if α ≥ 1, which will be considered in the coming simulation study.
Under these conditions, by applying the change of variables y = x^{-1}/θ, we can rewrite the integral term as
\int_{0}^{+\infty} \alpha^{\delta} \theta^{-\delta} x^{-2\delta} \left(1 + \frac{x^{-1}}{\theta}\right)^{-\delta(\alpha + 1)} dx = \alpha^{\delta} \theta^{\delta - 1} \int_{0}^{+\infty} y^{2(\delta - 1)} (1 + y)^{-\delta(\alpha + 1)} \, dy = \alpha^{\delta} \theta^{\delta - 1} B\big(2\delta - 1, \delta(\alpha - 1) + 1\big),   (4)
where B(x, y) refers to the beta function defined by B(x, y) = \int_{0}^{+\infty} t^{x - 1} (1 + t)^{-(x + y)} \, dt, x, y > 0, or, alternatively, B(x, y) = \int_{0}^{1} t^{x - 1} (1 - t)^{y - 1} \, dt.
Therefore, after some algebraic manipulations, we get
I_\delta(X; \alpha, \theta) = \frac{1}{1 - \delta} \log \left[ \alpha^{\delta} \theta^{\delta - 1} B\big(2\delta - 1, \delta(\alpha - 1) + 1\big) \right] = \frac{\delta}{1 - \delta} \log \alpha - \log \theta + \frac{1}{1 - \delta} \log B\big(2\delta - 1, \delta(\alpha - 1) + 1\big),   (5)
with δ ≠ 1, δ > 1/2 and δ(α − 1) + 1 > 0.
Similarly, based on (2) with φ = (α, θ) and (4) applied with δ = q, the q-entropy of X is given by
H_q(X; \alpha, \theta) = \frac{1}{q - 1} \left( 1 - \int_{0}^{+\infty} \alpha^{q} \theta^{-q} x^{-2q} \left(1 + \frac{x^{-1}}{\theta}\right)^{-q(\alpha + 1)} dx \right) = \frac{1}{q - 1} \left( 1 - \alpha^{q} \theta^{q - 1} B\big(2q - 1, q(\alpha - 1) + 1\big) \right),   (6)
with q ≠ 1, q > 1/2 and q(α − 1) + 1 > 0. For practical purposes, (5) and (6) are the required expressions of the Rényi and q-entropies of X, written as simple functions of the parameters α and θ.
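As a practical complement, the closed-form expressions (5) and (6) are straightforward to evaluate. The Python sketch below is our own code (the log-beta function is used for numerical stability, and the natural logarithm is chosen for the Rényi entropy, whose numerical value depends on the logarithm base); it assumes the stated conditions on δ, q and α hold.

import numpy as np
from scipy.special import betaln

def il_renyi(alpha, theta, delta):
    # Equation (5); requires delta != 1, delta > 1/2 and delta*(alpha - 1) + 1 > 0
    log_beta = betaln(2.0 * delta - 1.0, delta * (alpha - 1.0) + 1.0)
    return (delta / (1.0 - delta)) * np.log(alpha) - np.log(theta) + log_beta / (1.0 - delta)

def il_q_entropy(alpha, theta, q):
    # Equation (6); requires q != 1, q > 1/2 and q*(alpha - 1) + 1 > 0
    beta_val = np.exp(betaln(2.0 * q - 1.0, q * (alpha - 1.0) + 1.0))
    return (1.0 - alpha ** q * theta ** (q - 1.0) * beta_val) / (q - 1.0)

print(il_renyi(1.2, 2.0, 0.8))      # value of (5) in nats for alpha = 1.2, theta = 2, delta = 0.8
print(il_q_entropy(1.2, 2.0, 0.8))  # value of (6) for the same parameters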

3. Estimation of Rényi and q-Entropies

3.1. Mathematical Basics on Multiple Censored Data Setting

For our estimation study, we consider the situation of multiple censored data (including Type I and Type II censoring), following the setting of Reference [30] (Section 1.3.2). We may also refer to Reference [25] in the context of the inverse Weibull distribution. The general framework can be summarized as follows. Let X be a random variable having the cdf and pdf given by F(x; φ) and f(x; φ), respectively. Based on n units under a certain test, we get n values x_1, …, x_n, of which
  • n_f values are (independent) observations of X for the n_f failed units,
  • n_m values are (independent) observations of X for the n_m censored (nonfailed) units,
with n_m + n_f = n. Then, the likelihood function for φ can be expressed as
L(\varphi) = K \prod_{i=1}^{n} f(x_i; \varphi)^{\varepsilon_{i,f}} \left[ 1 - F(x_i; \varphi) \right]^{\varepsilon_{i,m}},
where ε_{i,f} = 1 if the ith unit failed, and 0 otherwise (so Σ_{i=1}^{n} ε_{i,f} = n_f), ε_{i,m} = 1 if the ith unit is censored, and 0 otherwise (so Σ_{i=1}^{n} ε_{i,m} = n_m), and K denotes a secondary constant (independent of φ). Then, the maximum likelihood estimates (MLEs) of φ are obtained by maximizing L(φ) with respect to φ. All the details in this regard can be found in Reference [30] (Section 1.3.2).

3.2. Considered Estimates

Hence, if a random variable X follows the IL distribution with parameters α and θ, based on (3), the likelihood function for φ = (α, θ) can be expressed as
L(\alpha, \theta) = K \prod_{i=1}^{n} f(x_i; \alpha, \theta)^{\varepsilon_{i,f}} \left[ 1 - F(x_i; \alpha, \theta) \right]^{\varepsilon_{i,m}} = K \prod_{i=1}^{n} \left[ \frac{\alpha}{\theta} x_i^{-2} \left(1 + \frac{x_i^{-1}}{\theta}\right)^{-\alpha - 1} \right]^{\varepsilon_{i,f}} \left[ 1 - \left(1 + \frac{x_i^{-1}}{\theta}\right)^{-\alpha} \right]^{\varepsilon_{i,m}}.
The MLEs are thus obtained by maximizing L ( α , θ ) with respect to α and θ . In this regard, the log-likelihood function is useful, and can be expressed as
\log[L(\alpha, \theta)] = \log K + n_f \log \alpha - n_f \log \theta - 2 \sum_{i=1}^{n} \varepsilon_{i,f} \log(x_i) - (\alpha + 1) \sum_{i=1}^{n} \varepsilon_{i,f} \log\left(1 + \frac{x_i^{-1}}{\theta}\right) + \sum_{i=1}^{n} \varepsilon_{i,m} \log\left[ 1 - \left(1 + \frac{x_i^{-1}}{\theta}\right)^{-\alpha} \right].
Then, the first partial derivatives of log [ L ( α , θ ) ] with respect to α and θ , are obtained as follows
\frac{\partial \log[L(\alpha, \theta)]}{\partial \alpha} = \frac{n_f}{\alpha} - \sum_{i=1}^{n} \varepsilon_{i,f} \log\left(1 + \frac{x_i^{-1}}{\theta}\right) + \sum_{i=1}^{n} \varepsilon_{i,m} \frac{\left(1 + \frac{x_i^{-1}}{\theta}\right)^{-\alpha} \log\left(1 + \frac{x_i^{-1}}{\theta}\right)}{1 - \left(1 + \frac{x_i^{-1}}{\theta}\right)^{-\alpha}}
and
\frac{\partial \log[L(\alpha, \theta)]}{\partial \theta} = -\frac{n_f}{\theta} + (\alpha + 1) \sum_{i=1}^{n} \frac{\varepsilon_{i,f}}{\theta^2 x_i + \theta} - \frac{\alpha}{\theta^2} \sum_{i=1}^{n} \varepsilon_{i,m} \, x_i^{-1} \frac{\left(1 + \frac{x_i^{-1}}{\theta}\right)^{-\alpha - 1}}{1 - \left(1 + \frac{x_i^{-1}}{\theta}\right)^{-\alpha}}.
Hence, the MLEs of α and θ are determined by solving the following equations simultaneously: ∂log[L(α, θ)]/∂α = 0 and ∂log[L(α, θ)]/∂θ = 0. These equations cannot be solved analytically, so numerical iterative techniques must be applied in this regard. For the purpose of this study, the MLEs of α and θ are denoted by α̂ and θ̂, respectively. Hence, based on (5) and (6), owing to the plug-in approach, natural estimates for the entropies I_δ(X; α, θ) and H_q(X; α, θ) are, respectively, given by
\hat{I}_\delta(X) = I_\delta(X; \hat{\alpha}, \hat{\theta}) = \frac{\delta}{1 - \delta} \log \hat{\alpha} - \log \hat{\theta} + \frac{1}{1 - \delta} \log B\big(2\delta - 1, \delta(\hat{\alpha} - 1) + 1\big),   (7)
with δ ≠ 1, δ > 1/2 and δ(α̂ − 1) + 1 > 0, and
\hat{H}_q(X) = H_q(X; \hat{\alpha}, \hat{\theta}) = \frac{1}{q - 1} \left( 1 - \hat{\alpha}^{q} \hat{\theta}^{q - 1} B\big(2q - 1, q(\hat{\alpha} - 1) + 1\big) \right),   (8)
with q ≠ 1, q > 1/2 and q(α̂ − 1) + 1 > 0.
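As a complement, the whole fitting step can be mimicked numerically with a general-purpose optimizer. The Python sketch below is a minimal illustration under our own naming conventions (x holds the recorded values and failed the indicators ε_{i,f}); it is not the authors' Mathcad code. The MLEs are obtained by direct maximization of the log-likelihood of Section 3.2, and the plug-in estimates (7) and (8) then follow by evaluating the il_renyi and il_q_entropy helpers sketched in Section 2 at the fitted values.

import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x, failed):
    # negative log-likelihood of Section 3.2 (the constant log K is dropped)
    alpha, theta = params
    if alpha <= 0.0 or theta <= 0.0:
        return np.inf
    u = np.log1p(1.0 / (theta * x))                        # log(1 + x^(-1)/theta)
    log_f = np.log(alpha) - np.log(theta) - 2.0 * np.log(x) - (alpha + 1.0) * u
    log_sf = np.log1p(-np.exp(-alpha * u))                 # log(1 - F(x)) for the censored units
    return -np.sum(failed * log_f + (1 - failed) * log_sf)

def fit_il(x, failed, start=(1.0, 1.0)):
    # MLEs (alpha_hat, theta_hat) by direct maximization; Nelder-Mead avoids derivatives
    res = minimize(neg_log_lik, np.asarray(start, dtype=float), args=(x, failed), method="Nelder-Mead")
    return res.x

# alpha_hat, theta_hat = fit_il(x, failed)
# I_hat = il_renyi(alpha_hat, theta_hat, delta=1.2)        # plug-in estimate (7)
# H_hat = il_q_entropy(alpha_hat, theta_hat, q=1.2)        # plug-in estimate (8)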

3.3. Confidence Intervals

Owing to the invariance property, Î_δ(X) and Ĥ_q(X) are also the MLEs of I_δ(X; α, θ) and H_q(X; α, θ), respectively. Therefore, the well-known theory of the maximum likelihood method can be applied to Î_δ(X) and Ĥ_q(X). In particular, invoking the so-called Delta theorem, under some technical regularity conditions, the underlying asymptotic distribution of Î_δ(X) can be approximated by the normal distribution with mean I_δ(X; α, θ) and variance D J^{-1} D^T, where J^{-1} denotes the inverse of the observed information matrix and D = (∂I_δ(X; α, θ)/∂α, ∂I_δ(X; α, θ)/∂θ) evaluated at α = α̂ and θ = θ̂; both can be determined from (5). Therefore, the two-sided approximate confidence interval for the Rényi entropy at the confidence level 100(1 − ν)% with ν ∈ (0, 1) is given by Υ_{I_δ}(ν) = [L_{I_δ}(ν), U_{I_δ}(ν)] (so that P(I_δ(X; α, θ) ∈ Υ_{I_δ}(ν)) = 1 − ν), where
L_{I_\delta}(\nu) = \hat{I}_\delta(X) - z_{\nu/2} \, \hat{\sigma}_{\hat{I}_\delta(X)}, \qquad U_{I_\delta}(\nu) = \hat{I}_\delta(X) + z_{\nu/2} \, \hat{\sigma}_{\hat{I}_\delta(X)},
\hat{\sigma}_{\hat{I}_\delta(X)} = (D J^{-1} D^T)^{1/2} and z_{ν/2} is the 100(1 − ν/2)th standard normal percentile. A similar result holds for Ĥ_q(X). We also refer to References [15,31] for more detail.
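As an illustration of this recipe (our own sketch, not the authors' implementation), the observed information matrix J can be approximated by a finite-difference Hessian of the negative log-likelihood at the MLEs, and the vector D by central differences applied to the Rényi entropy expression; the neg_log_lik and il_renyi helpers from the earlier sketches are assumed available.

import numpy as np
from scipy.stats import norm

def finite_diff_hessian(f, p, h=1e-4):
    # symmetric finite-difference Hessian of a scalar function f at the point p
    p = np.asarray(p, dtype=float)
    k = p.size
    H = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej) - f(p - ei + ej) + f(p - ei - ej)) / (4.0 * h * h)
    return H

def renyi_confidence_interval(x, failed, delta, alpha_hat, theta_hat, nu=0.05, h=1e-5):
    p_hat = np.array([alpha_hat, theta_hat])
    J = finite_diff_hessian(lambda p: neg_log_lik(p, x, failed), p_hat)   # observed information
    D = np.array([
        (il_renyi(alpha_hat + h, theta_hat, delta) - il_renyi(alpha_hat - h, theta_hat, delta)) / (2.0 * h),
        (il_renyi(alpha_hat, theta_hat + h, delta) - il_renyi(alpha_hat, theta_hat - h, delta)) / (2.0 * h),
    ])
    se = float(np.sqrt(D @ np.linalg.inv(J) @ D))                         # (D J^{-1} D^T)^{1/2}
    z = norm.ppf(1.0 - nu / 2.0)
    estimate = il_renyi(alpha_hat, theta_hat, delta)
    return estimate - z * se, estimate + z * se                           # [L, U] at level 100(1 - nu)%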

4. Simulation Study

Here, a simulation study is performed to investigate the performance of the Rényi and q-entropies estimates given by (7) and (8), respectively. In this regard, we use the mean squared errors (MSEs) and two-sided approximate confidence intervals along with their corresponding average lengths (ALs) (i.e., the AL is the average value of U − L, where L and U denote the lower and upper bounds of the corresponding interval, respectively), based on multiple censored data (or samples). We adopt the methodology of Reference [15]. Thus, the following procedure is conducted:
  • 3000 random samples of sizes n = 50, 100, 150, 200 and 300 are generated from the IL distribution under the multiple censoring scheme (a sketch of one possible generation step is given after this list).
  • The values of the parameters are selected as
    Set 1: (α = 1.2, θ = 2), Set 2: (α = 1.5, θ = 2).
    For the failures at censoring level (CL), we arbitrarily chose CL = 0.5 and 0.7 (for instance, CL = 0.7 means that the observations are based on 30% failed units and 70% censored units).
  • The true values for I_δ(X; α, θ) and H_q(X; α, θ) given by (5) and (6), and the average estimates Î_δ(X) and Ĥ_q(X) given by (7) and (8), are calculated, respectively. Different values for δ and q are considered.
  • Finally, the average of the obtained estimates, MSEs and ALs with level 95 % (so ν = 0.05 ) are computed.
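As announced in the first item above, the sketch below shows one simple way to generate a multiply censored IL sample. It reflects our own reading of the scheme (a fraction CL of the units is randomly flagged as censored at its recorded value, so that the likelihood of Section 3 handles it through 1 − F) and reuses the il_quantile and fit_il helpers sketched earlier.

import numpy as np

def simulate_multiple_censored(n, alpha, theta, cl, rng):
    # n draws from the IL distribution; a proportion cl of the units is flagged as censored
    x = il_quantile(rng.uniform(size=n), alpha, theta)
    failed = np.ones(n, dtype=int)
    censored_idx = rng.choice(n, size=int(round(cl * n)), replace=False)
    failed[censored_idx] = 0                     # e.g. CL = 0.7 -> 70% censored, 30% failed units
    return x, failed

rng = np.random.default_rng(2020)
x, failed = simulate_multiple_censored(100, alpha=1.2, theta=2.0, cl=0.5, rng=rng)
alpha_hat, theta_hat = fit_il(x, failed)         # fitting step feeding the estimates (7) and (8)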
All the numerical results are presented in Table 1, Table 2, Table 3 and Table 4 for the Rényi entropy, and Table 5, Table 6, Table 7 and Table 8 for the q-entropy. All computations are performed with the mathematical software Mathcad.
Figure 1, Figure 2, Figure 3 and Figure 4 provide a graphical representation of the MSEs and ALs of the Rényi entropy estimates, whereas Figure 5, Figure 6, Figure 7 and Figure 8 provide a graphical representation of the MSEs and ALs of the q-entropy estimates.
Here, some remarks can be formulated about the behavior of the Rényi and q-entropies estimates according to Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, and Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8:
  • The MSEs of Î_δ(X) decrease as the sample size increases.
  • The ALs of Î_δ(X) decrease as the sample size increases.
  • The MSEs of Î_δ(X) increase when the value of δ increases.
  • The ALs of Î_δ(X) increase when the value of δ increases.
  • The MSEs of Ĥ_q(X) decrease as the sample size increases.
  • The ALs of Ĥ_q(X) decrease as the sample size increases.
  • The MSEs of Ĥ_q(X) decrease when the value of q increases.
  • The ALs of Ĥ_q(X) decrease when the value of q increases.
  • In almost all situations, the MSE of Î_δ(X) at CL = 0.5 is less than the MSE of Î_δ(X) at CL = 0.7.
  • In almost all situations, the MSE of Ĥ_q(X) at CL = 0.5 is less than the MSE of Ĥ_q(X) at CL = 0.7.
These facts support the good accuracy of our entropy estimates, which can therefore be recommended for practical purposes.

5. Application

In this section, two real life data sets are used to illustrate the findings; both are described below.
The first data set is a physiological data set extracted from Reference [33]. It concerns twenty Duchenne patients (6–18 years of age) with the classical type of muscular dystrophy. The heart rate recorded by electrocardiography for these 20 patients is given in Table 9.
The second data set is an economic data set extracted from the following electronic address: https://tradingeconomics.com/pakistan/consumer-price-index-cpi. It refers to the Consumer Price Index (CPI) in Pakistan from May 2019 to April 2020. The data are collected in Table 10.
Then, based on the data, adopting the multiple censored data scheme, we apply Î_δ(X) and Ĥ_q(X) to estimate I_δ(X; α, θ) and H_q(X; α, θ), where X denotes the random variable of interest, assumed to follow the IL distribution. Different values for CL, δ and q are considered. The obtained numerical results are displayed in Table 11 and Table 12 for the first and second data sets, respectively.
Thus, Table 11 and Table 12 show some numerical values of the estimated entropies in a concrete scenario, following the multiple censored data scheme. We see that the results depend on the entropy parameter (δ or q) and also on the value of CL, beyond the standard complete case (which can be obtained by taking CL = 0).

6. Concluding Remarks

This article studies the estimation of the Rényi and q-entropies for the inverse Lomax distribution under multiple censored data. We propose an efficient estimation strategy by using the maximum likelihood and plug-in methods. The behavior of the Rényi and q-entropies estimates is assessed in terms of their mean squared errors and average lengths (derived from two-sided approximate confidence intervals). Numerical results are provided, showing that, as the sample size increases, the mean squared errors of our estimates decrease. Also, it can be observed that, as the sample size increases, the average lengths of our estimates decrease. Thus, the proposed estimates reveal themselves to be efficient, providing new useful tools with potential applicability in many applied situations dealing with the entropy of the inverse Lomax distribution. The article ends by presenting an application to two real life data sets.

Author Contributions

R.A.R.B., M.E., C.C., and F.J. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Deanship of Scientific Research (DSR), King AbdulAziz University, Jeddah, under grant No. (RG-2-150-37).

Acknowledgments

The authors are very grateful to the two reviewers, who provided very constructive comments, significantly improving some parts of the paper. This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant No. (RG-2-150-37). The authors, therefore, acknowledge with thanks DSR technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  2. Amigo, J.M.; Balogh, S.G.; Hernandez, S. A brief review of generalized entropies. Entropy 2018, 20, 813.
  3. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1961; pp. 547–561.
  4. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
  5. Wong, K.M.; Chan, S. The entropy of ordered sequences and order statistics. IEEE Trans. Inf. Theory 1990, 6, 276–284.
  6. Baratpour, S.; Ahmadi, J.; Arghami, N.R. Entropy properties of record statistics. Stat. Pap. 2007, 48, 197–213.
  7. Morabbi, H.; Razmkhah, M. Entropy of hybrid censoring schemes. J. Stat. Res. 2010, 6, 161–176.
  8. Abo-Eleneen, Z.A. The entropy of progressively censored samples. Entropy 2011, 13, 437–449.
  9. Cramer, E.; Bagh, C. Minimum and maximum information censoring plans in progressive censoring. Commun. Stat. Theory Methods 2011, 40, 2511–2527.
  10. Cho, Y.; Sun, H.; Lee, K. An estimation of the entropy for a Rayleigh distribution based on doubly generalized Type II hybrid censored samples. Entropy 2014, 16, 3655–3669.
  11. Cho, Y.; Sun, H.; Lee, K. Estimating the entropy of a Weibull distribution under generalized progressive hybrid censoring. Entropy 2015, 17, 102–122.
  12. Lee, K. Estimation of entropy of the inverse Weibull distribution under generalized progressive hybrid censored data. J. Korea Inf. Sci. Soc. 2017, 28, 659–668.
  13. Manoj, C.; Asha, P.S. Estimation of entropy for generalized exponential distribution based on record values. J. Indian Soc. Probab. Stat. 2018, 19, 79–96.
  14. Liu, S.; Gui, W. Estimating the entropy for Lomax distribution based on generalized progressively hybrid censoring. Symmetry 2019, 11, 1219.
  15. Hassan, A.S.; Zaki, A.N. Estimation of entropy for inverse Weibull distribution under multiple censored data. J. Taibah Univ. Sci. 2019, 13, 331–337.
  16. Hassan, A.; Al-Ghamdi, A. Optimum step stress accelerated life testing for Lomax distribution. J. Appl. Sci. 2009, 5, 2153–2164.
  17. Kilany, N.M. Weighted Lomax distribution. SpringerPlus 2016, 5, 1862.
  18. Lomax, K.S. Business failures: Another example of the analysis of failure data. J. Am. Stat. Assoc. 1954, 49, 847–852.
  19. McKenzie, D.; Miller, C.; Falk, D.A. The Landscape Ecology of Fire; Springer: Dordrecht, The Netherlands; Heidelberg, Germany; New York, NY, USA, 2011.
  20. Kleiber, C.; Kotz, S. Statistical Size Distributions in Economics and Actuarial Sciences; John Wiley and Sons, Inc.: Hoboken, NJ, USA, 2003.
  21. Singh, S.K.; Singh, U.; Yadav, A.S. Reliability estimation for inverse Lomax distribution under type II censored data using Markov chain Monte Carlo method. Int. Math. Stat. 2016, 17, 128–146.
  22. Yadav, A.S.; Singh, S.K.; Singh, U. On hybrid censored inverse Lomax distribution: Application to the survival data. Statistica 2016, 76, 185–203.
  23. Reyad, H.M.; Othman, S.A. E-Bayesian estimation of two-component mixture of inverse Lomax distribution based on type-I censoring scheme. J. Adv. Math. Comput. Sci. 2018, 26, 1–22.
  24. Hassan, A.S.; Abd-Allah, M. On the inverse power Lomax distribution. Ann. Data Sci. 2018, 6, 259–278.
  25. Hassan, A.S.; Mohamed, R.E. Weibull inverse Lomax distribution. Pak. J. Stat. Oper. 2019, 15, 587–603.
  26. ZeinEldin, R.A.; Haq, M.A.U.; Hashmi, S.; Elsehety, M. Alpha power transformed inverse Lomax distribution with different methods of estimation and applications. Complexity 2020, 2020, 1860813.
  27. Maxwell, O.; Chukwu, A.U.; Oyamakin, O.S.; Khaleel, M.A. The Marshall-Olkin inverse Lomax distribution (MO-ILD) with application on cancer stem cell. J. Adv. Math. Comput. Sci. 2019, 33, 1–12.
  28. Maxwell, O.; Kayode, A.A.; Onyedikachi, I.P.; Obi-Okpala, C.I.; Victor, E.U. Useful generalization of the inverse Lomax distribution: Statistical properties and application to lifetime data. Am. J. Biomed. Sci. Res. 2019, 6, 258–265.
  29. Lawless, F.J. Statistical Models and Methods for Lifetime Data; Wiley: New York, NY, USA, 2003.
  30. Pham, H. Springer Handbook of Engineering Statistics; Springer: London, UK, 2006.
  31. Tobias, P.A.; Trindade, D.C. Applied Reliability, 2nd ed.; Chapman and Hall/CRC: New York, NY, USA, 1995.
  32. Wang, F.K.; Cheng, Y. EM algorithm for estimating the Burr XII parameters with multiple censored data. Qual. Reliab. Eng. Int. 2010, 6, 615–630.
  33. Ansari, N.; Siddiqui, P.Q.R.; Kirmani, S.R. Electrocardiographic studies in the patients of Duchenne muscular dystrophy. J. Pak. Med. Assoc. 1980, 34, 117–121.
Figure 1. (a) Mean squared errors (MSEs) and (b) average lengths (ALs) of Rényi entropy estimates for different sample sizes at Set1 and CL = 0.5.
Figure 2. (a) MSEs and (b) ALs of Rényi entropy estimates for different sample sizes at Set1 and CL = 0.7.
Figure 3. (a) MSEs and (b) ALs of Rényi entropy estimates for different sample sizes at Set2 and CL = 0.5.
Figure 4. (a) MSEs and (b) ALs of Rényi entropy estimates for different sample sizes at Set2 and CL = 0.7.
Figure 5. (a) MSEs and (b) ALs of q-entropy estimates for different sample sizes at Set1 and CL = 0.5.
Figure 6. (a) MSEs and (b) ALs of q-entropy estimates for different sample sizes at Set1 and CL = 0.7.
Figure 7. (a) MSEs and (b) ALs of q-entropy estimates for different sample sizes at Set2 and CL = 0.5.
Figure 8. (a) MSEs and (b) ALs of q-entropy estimates for different sample sizes at Set2 and CL = 0.7.
Table 1. Rényi entropy estimates at Set1 and censoring level (CL) = 0.5. Exact values: 0.9 (δ = 0.8), 0.534 (δ = 1.2), 0.413 (δ = 1.5).
n | δ = 0.8: Estimate / MSE / AL | δ = 1.2: Estimate / MSE / AL | δ = 1.5: Estimate / MSE / AL
50 | 0.841 / 7.301 * / 0.242 | 0.547 / 0.014 / 0.465 | 0.319 / 0.026 / 0.514
100 | 0.853 / 6.014 * / 0.24 | 0.533 / 3.727 * / 0.239 | 0.426 / 6.08 * / 0.301
150 | 0.937 / 4.572 * / 0.223 | 0.534 / 3.123 * / 0.219 | 0.398 / 3.316 * / 0.218
200 | 0.875 / 3.006 * / 0.19 | 0.563 / 3.052 * / 0.185 | 0.431 / 3.242 * / 0.212
300 | 0.896 / 1.217 * / 0.136 | 0.55 / 1.348 * / 0.129 | 0.427 / 1.974 * / 0.165
* indicates that the value is multiplied by 10^−3.
Table 2. Rényi entropy estimates at Set1 and CL = 0.7. Exact values: 0.9 (δ = 0.8), 0.534 (δ = 1.2), 0.413 (δ = 1.5).
n | δ = 0.8: Estimate / MSE / AL | δ = 1.2: Estimate / MSE / AL | δ = 1.5: Estimate / MSE / AL
50 | 1.015 / 0.021 / 0.351 | 0.644 / 0.022 / 0.383 | 0.511 / 0.015 / 0.483
100 | 1.003 / 0.014 / 0.246 | 0.618 / 0.019 / 0.368 | 0.506 / 0.014 / 0.232
150 | 1 / 0.013 / 0.23 | 0.616 / 0.017 / 0.36 | 0.501 / 0.013 / 0.231
200 | 0.964 / 0.01 / 0.205 | 0.612 / 0.013 / 0.333 | 0.483 / 8.4 * / 0.153
300 | 0.993 / 9.959 * / 0.143 | 0.603 / 6.797 * / 0.179 | 0.463 / 6.007 * / 0.093
* indicates that the value is multiplied by 10^−3.
Table 3. Rényi entropy estimates at Set2 and CL = 0.5. Exact values: 1.008 (δ = 0.8), 0.652 (δ = 1.2), 0.535 (δ = 1.5).
n | δ = 0.8: Estimate / MSE / AL | δ = 1.2: Estimate / MSE / AL | δ = 1.5: Estimate / MSE / AL
50 | 1.038 / 7.815 * / 0.326 | 0.739 / 0.018 / 0.41 | 0.555 / 0.015 / 0.466
100 | 1.026 / 4.843 * / 0.238 | 0.618 / 4.152 * / 0.239 | 0.551 / 6.032 * / 0.268
150 | 0.994 / 3.394 * / 0.222 | 0.672 / 3.718 * / 0.191 | 0.55 / 4.025 * / 0.242
200 | 0.999 / 2.418 * / 0.189 | 0.649 / 0.321 * / 0.125 | 0.542 / 3.882 * / 0.162
300 | 1.009 / 1.401 * / 0.084 | 0.65 / 0.237 * / 0.045 | 0.53 / 1.578 * / 0.154
* indicates that the value is multiplied by 10^−3.
Table 4. Rényi entropy estimates at Set2 and CL = 0.7. Exact values: 1.008 (δ = 0.8), 0.652 (δ = 1.2), 0.535 (δ = 1.5).
n | δ = 0.8: Estimate / MSE / AL | δ = 1.2: Estimate / MSE / AL | δ = 1.5: Estimate / MSE / AL
50 | 1.159 / 0.026 / 0.438 | 0.799 / 0.02 / 0.355 | 0.635 / 0.011 / 0.17
100 | 1.099 / 0.018 / 0.389 | 0.765 / 0.019 / 0.294 | 0.626 / 9.926 * / 0.161
150 | 1.059 / 0.014 / 0.323 | 0.732 / 8.262 * / 0.269 | 0.595 / 6.224 * / 0.16
200 | 1.036 / 0.011 / 0.255 | 0.715 / 7.068 * / 0.216 | 0.575 / 2.324 * / 0.158
300 | 1.01 / 9.338 * / 0.135 | 0.712 / 6.514 * / 0.211 | 0.564 / 2.189 * / 0.146
* indicates that the value is multiplied by 10^−3.
Table 5. q-entropy estimates at Set1 and CL = 0.5. Exact values: 8.569 (q = 0.8), −2.91 (q = 1.2), −0.243 (q = 1.5).
n | q = 0.8: Estimate / MSE / AL | q = 1.2: Estimate / MSE / AL | q = 1.5: Estimate / MSE / AL
50 | 8.876 / 0.409 / 2.496 | −2.839 / 0.029 / 0.663 | −0.18 / 0.016 / 0.433
100 | 8.393 / 0.115 / 0.587 | −2.858 / 0.026 / 0.594 | −0.221 / 0.015 / 0.416
150 | 8.508 / 0.042 / 0.566 | −2.95 / 9.99 * / 0.349 | −0.227 / 9.035 * / 0.362
200 | 8.517 / 0.024 / 0.411 | −2.925 / 9.573 * / 0.327 | −0.233 / 3.265 * / 0.22
300 | 8.526 / 0.01 / 0.339 | −2.914 / 7.183 * / 0.275 | −0.251 / 1.895 * / 0.165
* indicates that the value is multiplied by 10^−3.
Table 6. q-entropy estimates at Set1 and CL = 0.7. Exact values: 8.569 (q = 0.8), −2.91 (q = 1.2), −0.243 (q = 1.5).
n | q = 0.8: Estimate / MSE / AL | q = 1.2: Estimate / MSE / AL | q = 1.5: Estimate / MSE / AL
50 | 8.969 / 0.369 / 1.791 | −2.752 / 0.038 / 0.679 | −0.081 / 0.028 / 0.376
100 | 8.943 / 0.274 / 1.769 | −2.762 / 0.033 / 0.543 | −0.144 / 0.015 / 0.357
150 | 8.939 / 0.262 / 1.385 | −2.774 / 0.031 / 0.379 | −0.176 / 0.013 / 0.294
200 | 8.905 / 0.152 / 0.663 | −2.814 / 0.03 / 0.336 | −0.181 / 0.012 / 0.139
300 | 8.834 / 0.142 / 0.43 | −2.887 / 0.016 / 0.305 | −0.223 / 9.603 * / 0.124
* indicates that the value is multiplied by 10^−3.
Table 7. q-entropy estimates at Set2 and CL = 0.5. Exact values: 8.955 (q = 0.8), −2.703 (q = 1.2), −0.08 (q = 1.5).
n | q = 0.8: Estimate / MSE / AL | q = 1.2: Estimate / MSE / AL | q = 1.5: Estimate / MSE / AL
50 | 8.628 / 0.115 / 1.326 | −2.584 / 0.036 / 0.743 | −0.024 / 0.016 / 0.492
100 | 8.727 / 0.113 / 0.969 | −2.641 / 0.028 / 0.61 | −0.027 / 0.015 / 0.438
150 | 9.032 / 0.06 / 0.915 | −2.733 / 0.011 / 0.384 | −0.126 / 5.395 * / 0.183
200 | 8.981 / 0.046 / 0.805 | −2.711 / 3.923 * / 0.244 | −0.101 / 3.845 * / 0.128
300 | 8.972 / 0.022 / 0.581 | −2.698 / 3.703 * / 0.193 | −0.1 / 1.28 * / 0.115
* indicates that the value is multiplied by 10^−3.
Table 8. q-entropy estimates at Set2 and CL = 0.7. Exact values: 8.955 (q = 0.8), −2.703 (q = 1.2), −0.08 (q = 1.5).
n | q = 0.8: Estimate / MSE / AL | q = 1.2: Estimate / MSE / AL | q = 1.5: Estimate / MSE / AL
50 | 9.378 / 0.208 / 1.441 | −2.514 / 0.043 / 0.628 | 0.08 / 0.026 / 0.478
100 | 9.309 / 0.173 / 1.045 | −2.555 / 0.039 / 0.516 | 0.046 / 0.019 / 0.361
150 | 9.292 / 0.16 / 0.67 | −2.557 / 0.024 / 0.205 | 0.036 / 0.018 / 0.27
200 | 9.152 / 0.145 / 0.636 | −2.592 / 0.019 / 0.189 | −4.67 * / 9.329 * / 0.229
300 | 9.147 / 0.108 / 0.333 | −2.649 / 0.01 / 0.146 | −0.05 / 7.114 * / 0.151
* indicates that the value is multiplied by 10^−3.
Table 9. First data set: Heart rate data for twenty Duchenne patients.
80, 90, 90, 94, 100, 90, 103, 100, 116, 102, 112, 140, 120, 120, 100, 100, 120, 80, 120, 100
Table 10. Second data set: Consumer Price Index (CPI) in Pakistan from May 2019 to April 2020.
245.94, 246.82, 252.46, 255.94, 257.87, 260.46, 263.59, 262.82, 266.97, 266.245, 266.87, 267.12
Table 11. Estimates of the Rényi and q-entropies at CL = 0.5 and CL = 0.7 for the first data set.
CL | Rényi entropy (δ = 0.8) | Rényi entropy (δ = 1.2) | q-entropy (q = 0.8) | q-entropy (q = 1.2)
0.5 | −4.768 | −6.806 | 1.017 | 1.002
0.7 | −4.594 | −9.278 | 1.06 | 1.061
Table 12. Estimates of the Rényi and q-entropies at CL = 0.5 and CL = 0.7 for the second data set.
CL | Rényi entropy (δ = 0.8) | Rényi entropy (δ = 1.2) | q-entropy (q = 0.8) | q-entropy (q = 1.2)
0.5 | −2.868 | −3.746 | 1.024 | 1.004
0.7 | −4.063 | −4.411 | 1.033 | 1.015
