Fast Simulations of Highly-Connected Spiking Cortical Models Using GPUs
Frontiers in Computational Neuroscience (IF 2.1), Pub Date: 2021-01-26, DOI: 10.3389/fncom.2021.627620
Bruno Golosio, Gianmarco Tiddia, Chiara De Luca, Elena Pastorelli, Francesco Simula, Pier Stanislao Paolucci

Over the past decade there has been growing interest in the development of parallel hardware systems for simulating large-scale networks of spiking neurons. Compared to other highly-parallel systems, GPU-accelerated solutions have the advantage of relatively low cost and great versatility, thanks also to the possibility of using the CUDA-C/C++ programming languages. NeuronGPU is a GPU library for large-scale simulations of spiking neural network models, written in the C++ and CUDA-C++ programming languages, based on a novel spike-delivery algorithm. This library includes simple LIF (leaky-integrate-and-fire) neuron models as well as several multisynapse AdEx (adaptive-exponential-integrate-and-fire) neuron models with current-based or conductance-based synapses, different types of spike generators, and tools for recording spikes, state variables, and parameters; it also supports user-definable models. The differential equations governing the AdEx dynamics are solved numerically through a parallel CUDA-C++ implementation of the fifth-order Runge-Kutta method with adaptive step-size control. In this work we evaluate the performance of this library on the simulation of a cortical microcircuit model, based on LIF neurons and current-based synapses, and on balanced networks of excitatory and inhibitory neurons, using AdEx or Izhikevich neuron models and conductance-based or current-based synapses. On these models, we show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity. In particular, using a single NVIDIA GeForce RTX 2080 Ti GPU board, the full-scale cortical-microcircuit model, which includes about 77,000 neurons and 3 · 10⁸ connections, can be simulated at a speed very close to real time, while the simulation time of a balanced network of 1,000,000 AdEx neurons with 1,000 connections per neuron was about 70 s per second of biological activity.
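The abstract states that the AdEx dynamics are integrated with a fifth-order Runge-Kutta method with adaptive step-size control, implemented in CUDA-C++. As a rough, stand-alone illustration of that class of solver (not NeuronGPU's actual code), the C++ sketch below integrates a single AdEx neuron driven by a constant current using the Cash-Karp embedded pair; the choice of Cash-Karp, the parameter values, the tolerance, and the step-size heuristics are all assumptions made for the example.

```cpp
// Minimal stand-alone sketch (not NeuronGPU source code): one AdEx neuron with a
// constant input current, integrated with the Cash-Karp embedded Runge-Kutta pair
// (5th-order solution, 4th-order error estimate) and adaptive step-size control.
// All parameter values and tolerances below are assumptions chosen for illustration.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct AdEx {             // illustrative AdEx parameters
  double C     = 281.0;   // membrane capacitance [pF]
  double gL    = 30.0;    // leak conductance [nS]
  double EL    = -70.6;   // leak reversal potential [mV]
  double VT    = -50.4;   // exponential threshold [mV]
  double DT    = 2.0;     // slope factor [mV]
  double tauw  = 144.0;   // adaptation time constant [ms]
  double a     = 4.0;     // subthreshold adaptation [nS]
  double b     = 80.5;    // spike-triggered adaptation increment [pA]
  double Vr    = -70.6;   // reset potential [mV]
  double Vpeak = 0.0;     // numerical spike-detection cutoff [mV]
};

// Time derivatives of the AdEx state y = {V [mV], w [pA]} for input current I [pA].
static void deriv(const AdEx& p, const double y[2], double I, double dydt[2]) {
  const double V = y[0], w = y[1];
  dydt[0] = (-p.gL * (V - p.EL) + p.gL * p.DT * std::exp((V - p.VT) / p.DT) - w + I) / p.C;
  dydt[1] = (p.a * (V - p.EL) - w) / p.tauw;
}

// One Cash-Karp step of size h: 5th-order estimate in yout, error estimate in yerr.
static void cash_karp(const AdEx& p, const double y[2], double I, double h,
                      double yout[2], double yerr[2]) {
  static const double b21 = 1.0/5.0;
  static const double b31 = 3.0/40.0,        b32 = 9.0/40.0;
  static const double b41 = 3.0/10.0,        b42 = -9.0/10.0,   b43 = 6.0/5.0;
  static const double b51 = -11.0/54.0,      b52 = 5.0/2.0,
                      b53 = -70.0/27.0,      b54 = 35.0/27.0;
  static const double b61 = 1631.0/55296.0,  b62 = 175.0/512.0, b63 = 575.0/13824.0,
                      b64 = 44275.0/110592.0, b65 = 253.0/4096.0;
  static const double c1 = 37.0/378.0, c3 = 250.0/621.0, c4 = 125.0/594.0, c6 = 512.0/1771.0;
  static const double d1 = c1 - 2825.0/27648.0, d3 = c3 - 18575.0/48384.0,
                      d4 = c4 - 13525.0/55296.0, d5 = -277.0/14336.0, d6 = c6 - 0.25;
  double k1[2], k2[2], k3[2], k4[2], k5[2], k6[2], yt[2];
  deriv(p, y, I, k1);
  for (int i = 0; i < 2; ++i) yt[i] = y[i] + h*b21*k1[i];
  deriv(p, yt, I, k2);
  for (int i = 0; i < 2; ++i) yt[i] = y[i] + h*(b31*k1[i] + b32*k2[i]);
  deriv(p, yt, I, k3);
  for (int i = 0; i < 2; ++i) yt[i] = y[i] + h*(b41*k1[i] + b42*k2[i] + b43*k3[i]);
  deriv(p, yt, I, k4);
  for (int i = 0; i < 2; ++i) yt[i] = y[i] + h*(b51*k1[i] + b52*k2[i] + b53*k3[i] + b54*k4[i]);
  deriv(p, yt, I, k5);
  for (int i = 0; i < 2; ++i)
    yt[i] = y[i] + h*(b61*k1[i] + b62*k2[i] + b63*k3[i] + b64*k4[i] + b65*k5[i]);
  deriv(p, yt, I, k6);
  for (int i = 0; i < 2; ++i) {
    yout[i] = y[i] + h*(c1*k1[i] + c3*k3[i] + c4*k4[i] + c6*k6[i]);
    yerr[i] = h*(d1*k1[i] + d3*k3[i] + d4*k4[i] + d5*k5[i] + d6*k6[i]);
  }
}

int main() {
  const AdEx p;
  double y[2] = {p.EL, 0.0};   // initial state: V at rest, no adaptation current
  const double I   = 800.0;    // constant input current [pA]
  const double tol = 1e-6;     // error tolerance per step
  double t = 0.0, h = 0.1;     // time [ms], initial step size [ms]
  int spikes = 0;
  while (t < 500.0) {
    double ynew[2], yerr[2];
    cash_karp(p, y, I, h, ynew, yerr);
    const double err = std::max(std::fabs(yerr[0]) / (std::fabs(y[0]) + 1.0),
                                std::fabs(yerr[1]) / (std::fabs(y[1]) + 1.0)) / tol;
    if (err <= 1.0) {          // step accepted: advance and try a larger step next time
      t += h;
      y[0] = ynew[0]; y[1] = ynew[1];
      h = std::min(1.0, h * std::min(5.0, 0.9 * std::pow(err + 1e-12, -0.2)));
      if (y[0] > p.Vpeak) {    // threshold crossing: count spike, reset V, increment w
        y[0] = p.Vr; y[1] += p.b; ++spikes;
      }
    } else {                   // step rejected (also catches overflow): shrink and retry
      h *= std::max(0.1, 0.9 * std::pow(err, -0.25));
    }
  }
  std::printf("spikes in 500 ms: %d\n", spikes);
  return 0;
}
```

In a GPU library such an update would typically be parallelized across neurons, with each neuron's state advanced independently within every global simulation time step; the sketch above only shows the serial, single-neuron version of the numerical scheme.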



Updated: 2021-02-17