Entropy and Information Content of Geostatistical Models
Mathematical Geosciences (IF 2.8). Pub Date: 2020-06-25. DOI: 10.1007/s11004-020-09876-z
Thomas Mejer Hansen

Geostatistical models quantify spatial relations between model parameters and can be used to estimate and simulate properties away from known observations. The underlying statistical model, quantified through a joint probability density, most often consists of both an assumed statistical model and a specific choice of algorithm, including the tuning parameters that control it. Here, a theory is developed that allows one to compute the entropy of the underlying multivariate probability density when it is sampled using sequential simulation. The self-information of a single realization can be computed as the sum of the conditional self-informations, and the entropy is the average of the self-information over many independent realizations. For discrete probability mass functions, a measure of the effective number of free model parameters implied by a specific choice of probability mass function is proposed. Through a few examples, the entropy measure is used to quantify the information content associated with different choices of simulation algorithm and tuning parameters.
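The abstract's recipe — accumulate conditional self-informations along a sequential simulation path, then average over independent realizations to estimate entropy — can be illustrated with a minimal sketch. This is not the paper's code: it assumes a toy first-order binary Markov chain as the spatial model, and the "effective number of free parameters" shown at the end is a hedged reading (entropy divided by the entropy of one free binary parameter), not necessarily the paper's exact definition.

```python
import math
import random

# Toy spatial model: first-order binary Markov chain along a 1-D path.
# P[a][b] = probability of state b at the next node given state a now.
P = [[0.9, 0.1],
     [0.1, 0.9]]
p0 = [0.5, 0.5]  # marginal distribution of the first node

def simulate(n, rng):
    """Sequential simulation of n nodes.
    Returns (realization, self_information), where the self-information
    is the sum of conditional self-informations along the path:
    -log2 p(x_1) - sum_i log2 p(x_i | x_{i-1})."""
    x = 0 if rng.random() < p0[0] else 1
    info = -math.log2(p0[x])
    xs = [x]
    for _ in range(n - 1):
        prev = xs[-1]
        nxt = 0 if rng.random() < P[prev][0] else 1
        info += -math.log2(P[prev][nxt])
        xs.append(nxt)
    return xs, info

rng = random.Random(0)
n, n_real = 200, 5000

# Entropy estimate: average self-information over many realizations.
infos = [simulate(n, rng)[1] for _ in range(n_real)]
H_est = sum(infos) / n_real

# Analytical check for this toy chain: H = H(x_1) + (n-1) * entropy rate
# (the stationary distribution is uniform by symmetry, so the rate is
# just the conditional entropy of one transition).
h_cond = -(0.9 * math.log2(0.9) + 0.1 * math.log2(0.1))
H_true = 1.0 + (n - 1) * h_cond

# Effective number of free (binary) model parameters, read as the
# entropy measured in units of one unconstrained binary parameter.
n_eff = H_est / math.log2(2)

print(f"H_est = {H_est:.1f} bits, H_true = {H_true:.1f} bits, "
      f"n_eff = {n_eff:.1f} of {n} nodes")
```

With the strong spatial coupling above, the effective number of free parameters comes out far below the 200 simulated nodes, which is the kind of algorithm- and tuning-dependent information content the entropy measure is meant to quantify.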


