Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations
arXiv - CS - Mathematical Software. Pub Date: 2019-04-25, DOI: arXiv:1904.11263
Michael Hopkins, Mantas Mikaitis, Dave R. Lester and Steve Furber

Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency, memory footprint and memory bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of reduced-precision fixed-point arithmetic types, using examples from an important domain for numerical computation in neuroscience: the solution of Ordinary Differential Equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, fixed-point arithmetic with stochastic rounding consistently results in smaller errors than single-precision floating-point and fixed-point arithmetic with round-to-nearest, across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither, a widely understood mechanism for providing resolution below the least significant bit (LSB) in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by Partial Differential Equations (PDEs).
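To make the rounding schemes concrete, the following is a minimal Python sketch, not the paper's implementation: it rounds onto a fixed-point grid with an illustrative 15 fractional bits, contrasts stochastic rounding (round up or down with probability proportional to the residual, so the result is unbiased in expectation) with a cheaper dither-inspired variant (add uniform noise below one LSB, then truncate), and applies them inside a forward-Euler step of the Izhikevich neuron. All function names, parameter values and the fixed-point format are assumptions chosen for illustration.

```python
import random

F = 15            # fractional bits; illustrative fixed-point format, not the paper's
ULP = 2.0 ** -F   # value of one least significant bit of the target format

def round_to_nearest(x):
    """Deterministic round-to-nearest onto the fixed-point grid."""
    return round(x / ULP) * ULP

def stochastic_round(x, rng=random):
    """Round down or up to the grid with probability proportional to the
    distance from each neighbour, so E[stochastic_round(x)] == x."""
    scaled = x / ULP
    lo = scaled // 1                 # lower grid point (floor, as a float)
    frac = scaled - lo               # residual in [0, 1)
    if rng.random() < frac:
        lo += 1.0
    return lo * ULP

def dither_round(x, rng=random):
    """Cheaper dither-inspired variant: add uniform noise in [0, ULP),
    then truncate -- one add plus a truncation per rounding."""
    return ((x + rng.random() * ULP) // ULP) * ULP

def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                    rnd=stochastic_round):
    """One explicit-Euler step of the Izhikevich neuron, with every state
    update rounded back onto the fixed-point grid. Returns (v, u, spiked)."""
    dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
    du = a * (b * v - u)
    v = rnd(v + dt * dv)
    u = rnd(u + dt * du)
    if v >= 30.0:                    # spike: reset membrane state
        return c, u + d, True
    return v, u, False

# Drive a regular-spiking neuron with a constant input current.
v, u, spiked = -65.0, -13.0, False
for _ in range(1000):
    v, u, s = izhikevich_step(v, u, I=10.0)
    spiked = spiked or s
print(spiked)  # the neuron fires within 1 s of simulated time
```

Averaged over many evaluations, stochastic rounding is unbiased, whereas round-to-nearest commits the same signed error every time a value falls between grid points; it is this accumulated bias in the ODE state updates that degrades spike timing.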

Updated: 2020-01-23