Information potential for some probability density functions
Applied Mathematics and Computation ( IF 3.5 ) Pub Date : 2021-01-01 , DOI: 10.1016/j.amc.2020.125578
Ana-Maria Acu , Gülen Başcanbaz-Tunca , Ioan Rasa

Abstract: This paper is related to the information theoretic learning methodology, whose goal is to quantify global scalar descriptors (e.g., entropy) of a given probability density function (PDF). In this context, the core concept is the information potential (IP) $S[s](x) := \int_{\mathbb{R}} p^s(t,x)\,dt$, $s > 0$, of a PDF p(t, x) depending on a parameter x; it is naturally related to the Rényi and Tsallis entropies. We present several such PDFs, viewed also as kernels of integral operators, for which a precise relation exists between S[2](x) and the variance Var[p(t, x)]. For these PDFs we determine explicitly the IP and the Shannon entropy. As an application to Information Theoretic Learning we determine two essential indices used in this theory: the expected value E[log p(t, x)] and the variance Var[log p(t, x)]. The latter is an index of the intrinsic shape of p(t, x) having more statistical power than kurtosis. For a sequence of B-spline functions, considered as kernels of Steklov operators and also as PDFs, we investigate the sequence of IPs and its asymptotic behaviour. Another special sequence of PDFs consists of kernels of Kantorovich modifications of the classical Bernstein operators. Convexity properties and bounds of the associated IP, useful in Information Theoretic Learning, are discussed. Several examples and numerical computations illustrate the general results.
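The information potential defined above can be checked numerically. The sketch below is not from the paper; the standard Gaussian kernel, grid, and function names are illustrative choices. It approximates S[2] for a standard normal density by a Riemann sum and compares the result with the known closed form 1/(2σ√π), then recovers the order-2 Rényi entropy as -log S[2]:

```python
import numpy as np

def information_potential(pdf, s, grid):
    """Riemann-sum approximation of S[s] = integral of pdf(t)**s over the grid."""
    dt = grid[1] - grid[0]          # uniform spacing assumed
    return float(np.sum(pdf(grid) ** s) * dt)

# Standard Gaussian density (sigma = 1); a stand-in for the parametric
# kernels p(t, x) studied in the paper.
sigma = 1.0
gauss = lambda t: np.exp(-t ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

grid = np.linspace(-10.0, 10.0, 200001)
ip2 = information_potential(gauss, 2.0, grid)

# Closed form for a Gaussian: S[2] = 1 / (2 * sigma * sqrt(pi))
closed_form = 1.0 / (2.0 * sigma * np.sqrt(np.pi))

renyi2 = -np.log(ip2)  # Rényi entropy of order 2 is -log S[2]
```

With a fine grid the numerical S[2] agrees with the closed form to many digits, which is the kind of sanity check the paper's exact IP formulas enable for other kernel families.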
