Integration of Leaky-Integrate-and-Fire Neurons in Standard Machine Learning Architectures to Generate Hybrid Networks: A Surrogate Gradient Approach
Neural Computation (IF 2.7), Pub Date: 2021-09-16, DOI: 10.1162/neco_a_01424
Richard C. Gerum, Achim Schilling
Up to now, modern machine learning (ML) has been based on approximating large data sets with high-dimensional functions, taking advantage of huge computational resources. We show that biologically inspired neuron models, such as the leaky-integrate-and-fire (LIF) neuron, provide novel and efficient ways of information processing: they can be integrated into machine learning models and are a potential target for improving ML performance. To this end, we derive simple update rules for LIF units that numerically integrate the underlying differential equations, and we apply a surrogate gradient approach to train the LIF units via backpropagation. We demonstrate that tuning the leak term of the LIF neurons switches them between different operating modes, such as simple signal integrators or coincidence detectors. Furthermore, we show that a constant surrogate gradient, combined with tuning of the leak term, can reproduce the learning dynamics of more complex surrogate gradients.
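The update rule and surrogate gradient described above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: an explicit Euler-style update of a single LIF unit showing how the leak term switches the neuron between integrator and coincidence-detector modes, plus a constant surrogate gradient for the non-differentiable spike. All function names and parameter values are illustrative assumptions.

```python
def lif_forward(inputs, leak=0.9, threshold=1.0):
    """One LIF unit: v <- leak * v + x; spike and reset when v >= threshold."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x                   # membrane decays by `leak`, then integrates input
        s = 1.0 if v >= threshold else 0.0  # non-differentiable spike (Heaviside step)
        spikes.append(s)
        v *= (1.0 - s)                     # reset membrane potential after a spike
    return spikes

def constant_surrogate_grad(v, threshold=1.0):
    """Backward pass: replace the step function's zero-almost-everywhere
    gradient with a constant, so errors can flow through spiking units."""
    return 1.0

# leak = 1: pure integrator -- weak inputs accumulate until threshold
print(lif_forward([0.4] * 5, leak=1.0))        # [0.0, 0.0, 1.0, 0.0, 0.0]
# strong leak: the same weak inputs are forgotten and never reach threshold
print(sum(lif_forward([0.4] * 5, leak=0.5)))   # 0.0
# ...but near-coincident strong inputs still fire (coincidence detection)
print(lif_forward([0.7, 0.7], leak=0.5))       # [0.0, 1.0]
```

The contrast between the last two calls is the point made in the abstract: with a strong leak, the neuron only responds to inputs that arrive close together in time.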

To validate our method, we applied it to established image data sets (the Oxford 102 flower data set and MNIST), implemented various network architectures, used several input data encodings, and demonstrated that the method achieves state-of-the-art classification performance.
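The abstract mentions several input data encodings without naming them. A common choice for feeding static images into spiking networks is rate coding; the sketch below assumes that encoding (the function name and step count are illustrative, not taken from the paper).

```python
import random

def rate_encode(pixel, steps=20, seed=0):
    """Poisson-style rate coding: a pixel intensity in [0, 1] becomes a spike
    train whose per-timestep firing probability equals the intensity."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [1 if rng.random() < pixel else 0 for _ in range(steps)]

# dark pixels stay silent; fully bright pixels fire on every timestep
print(sum(rate_encode(0.0)))   # 0
print(sum(rate_encode(1.0)))   # 20
```

Each image pixel thus becomes a binary time series that LIF units can integrate over the simulation steps.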

We provide our method, together with further surrogate gradient methods for training spiking neural networks via backpropagation, as an open-source KERAS package, making it available to the neuroscience and machine learning communities. To increase the interpretability of the underlying effects, and thus take a small step toward opening the black box of machine learning, we provide interactive illustrations that allow the effects of parameter changes on the learning characteristics to be monitored systematically.




Updated: 2021-09-17