On reaction network implementations of neural networks
Journal of The Royal Society Interface (IF 3.7), Pub Date: 2021-04-14, DOI: 10.1098/rsif.2021.0031
David F. Anderson, Badal Joshi, Abhishek Deshpande

This paper is concerned with the utilization of deterministically modelled chemical reaction networks for the implementation of (feed-forward) neural networks. We develop a general mathematical framework and prove that the ordinary differential equations (ODEs) associated with certain reaction network implementations of neural networks have desirable properties including (i) existence of unique positive fixed points that are smooth in the parameters of the model (necessary for gradient descent) and (ii) fast convergence to the fixed point regardless of initial condition (necessary for efficient implementation). We do so by first making a connection between neural networks and fixed points for systems of ODEs, and then by constructing reaction networks with the correct associated set of ODEs. We demonstrate the theory by constructing a reaction network that implements a neural network with a smoothed ReLU activation function, though we also demonstrate how to generalize the construction to allow for other activation functions (each with the desirable properties listed previously). As there are multiple types of ‘networks’ used in this paper, we also give a careful introduction to both reaction networks and neural networks, in order to disambiguate the overlapping vocabulary in the two settings and to clearly highlight the role of each network’s properties.
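To make the fixed-point mechanism concrete, the following is a minimal numerical sketch, not the paper's exact construction: the reactions, rate constants, the species names X and Y, and the treatment of a signed input are all illustrative assumptions. Under mass-action kinetics, the hypothetical reactions X + Y → X + 2Y, 2Y → Y, and ∅ → Y, with the input concentration x held fixed as a catalyst, give the ODE dy/dt = xy − y² + ε. Its unique positive fixed point, y* = (x + √(x² + 4ε))/2, is a smoothed ReLU of x, and the simulation checks that the ODE converges to y* from an arbitrary positive initial condition.

```python
import numpy as np
from scipy.integrate import solve_ivp


def smoothed_relu_fixed_point(x, eps=1e-3):
    """Positive root of y**2 - x*y - eps = 0, i.e. the fixed point
    of dy/dt = x*y - y**2 + eps: a smoothed ReLU of x."""
    return 0.5 * (x + np.sqrt(x**2 + 4.0 * eps))


def simulate_neuron(x, eps=1e-3, y0=1.0, t_end=100.0):
    """Integrate the mass-action ODE for the hypothetical reactions
    X + Y -> X + 2Y (rate x*y), 2Y -> Y (rate y**2), 0 -> Y (rate eps).
    For simplicity x is treated as a signed parameter, not a concentration."""
    rhs = lambda t, y: [x * y[0] - y[0] ** 2 + eps]
    sol = solve_ivp(rhs, (0.0, t_end), [y0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]


if __name__ == "__main__":
    # Convergence to the closed-form fixed point, regardless of y0 > 0.
    for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
        y_sim = simulate_neuron(x)
        y_star = smoothed_relu_fixed_point(x)
        print(f"x = {x:+.1f}  simulated y = {y_sim:.6f}  fixed point = {y_star:.6f}")
```

A feed-forward layer would then chain such units, letting the equilibrated output species of one neuron act as a catalytic input to the next. Note that a genuine chemical implementation must also encode negative inputs and weights using non-negative concentrations, which this sketch sidesteps by treating x as a signed parameter.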


