Sharper Probabilistic Backward Error Analysis for Basic Linear Algebra Kernels with Random Data
SIAM Journal on Scientific Computing (IF 3.0). Pub Date: 2020-10-22. DOI: 10.1137/20m1314355
Nicholas J. Higham, Theo Mary

SIAM Journal on Scientific Computing, Volume 42, Issue 5, Page A3427-A3446, January 2020.
Standard backward error analyses for numerical linear algebra algorithms provide worst-case bounds that can significantly overestimate the backward error. Our recent probabilistic error analysis, which assumes rounding errors to be independent random variables [SIAM J. Sci. Comput., 41 (2019), pp. A2815--A2835], contains smaller constants but its bounds can still be pessimistic. We perform a new probabilistic error analysis that assumes both the data and the rounding errors to be random variables and assumes only mean independence. We prove that for data with zero or small mean we can relax the existing probabilistic bounds of order $\sqrt{n}\,u$ to much sharper bounds of order $u$, which are independent of $n$. Our fundamental result is for summation and we use it to derive results for inner products, matrix--vector products, and matrix--matrix products. The analysis answers the open question of why random data distributed on $[-1,1]$ leads to smaller error growth for these kernels than random data distributed on $[0,1]$. We also propose a new algorithm for multiplying two matrices that transforms the rows of the first matrix to have zero mean and we show that it can achieve significantly more accurate results than standard matrix multiplication.
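The zero-mean idea in the last sentence of the abstract can be sketched with NumPy. The identity $AB = (A - m\mathbf{1}^T)B + m(\mathbf{1}^T B)$, where $m$ holds the row means of $A$, lets the expensive $O(n^3)$ product be done on shifted, zero-mean data, followed by a cheap rank-1 correction. This is only an illustrative reconstruction from the abstract, not the authors' exact algorithm; in particular, computing the $O(n^2)$ means and correction in double precision while the big product runs in single precision is our assumption here.

```python
import numpy as np

def zero_mean_matmul(A32, B32):
    # Row means of A and column sums of B: O(n^2) work, so computing them in
    # fp64 is cheap relative to the O(n^3) product (our illustrative choice).
    m = A32.astype(np.float64).mean(axis=1, keepdims=True)       # shape (n, 1)
    col_sums = B32.astype(np.float64).sum(axis=0, keepdims=True)  # 1^T B, (1, p)
    # The dominant product is done in fp32 on rows shifted to (near) zero mean,
    # which is where the order-u (instead of sqrt(n)u) error bound applies.
    shifted = (A32 - m.astype(np.float32)) @ B32
    # Rank-1 correction restores A @ B.
    return shifted.astype(np.float64) + m * col_sums

rng = np.random.default_rng(0)
n = 500
A = rng.uniform(0.0, 1.0, (n, n))  # data on [0,1]: nonzero mean, the bad case
B = rng.uniform(0.0, 1.0, (n, n))
A32, B32 = A.astype(np.float32), B.astype(np.float32)

exact = A32.astype(np.float64) @ B32.astype(np.float64)  # fp64 reference
naive = (A32 @ B32).astype(np.float64)                    # plain fp32 product
shifted = zero_mean_matmul(A32, B32)

rel = lambda X: np.max(np.abs(X - exact) / np.abs(exact))
print(f"naive fp32 max rel. error:     {rel(naive):.1e}")
print(f"zero-mean fp32 max rel. error: {rel(shifted):.1e}")
```

On data drawn from $[0,1]$ the shifted variant is typically noticeably more accurate than the plain fp32 product, consistent with the abstract's claim that zero-mean data enjoys $n$-independent error bounds.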


Updated: 2020-12-04