Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-09-11, DOI: arXiv:2009.05359
Beren Millidge, Alexander Tschantz, Anil K Seth, Christopher L Buckley

The backpropagation of error algorithm (backprop) has been instrumental in the recent success of deep learning. However, a key question remains as to whether backprop can be formulated in a manner suitable for implementation in neural circuitry. The primary challenge is to ensure that any candidate formulation uses only local information, rather than relying on global signals as in standard backprop. Recently several algorithms for approximating backprop using only local signals have been proposed. However, these algorithms typically impose other requirements which challenge biological plausibility: for example, requiring complex and precise connectivity schemes, or multiple sequential backwards phases with information being stored across phases. Here, we propose a novel algorithm, Activation Relaxation (AR), which is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system. Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, utilises only a single parallel backwards relaxation phase, and can operate on arbitrary computation graphs. We illustrate these properties by training deep neural networks on visual classification tasks, and describe simplifications to the algorithm which remove further obstacles to neurobiological implementation (for example, the weight-transport problem, and the use of nonlinear derivatives), while preserving performance.
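The core idea of the abstract, that the backprop gradient can be recovered as the fixed point of a leaky dynamical system driven only by local signals, can be illustrated with a small numerical sketch. The network below (two tanh layers, squared-error loss, and the dimensions chosen here) is an illustrative assumption, not the paper's experimental setup: each layer holds a relaxation unit x that is updated using only quantities available at that layer (the next layer's x, the outgoing weights, and the local nonlinearity derivative), and after relaxation x matches the gradient that standard backprop would compute.

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_deriv(a):
    # derivative of tanh, evaluated at the stored pre-activation
    return 1.0 - np.tanh(a) ** 2

# Toy network: input (4) -> hidden (5) -> output (3); sizes are arbitrary
W1 = rng.normal(size=(5, 4))
W2 = rng.normal(size=(3, 5))
x0 = rng.normal(size=4)   # input
t = rng.normal(size=3)    # target

# Forward pass, storing pre-activations for the relaxation phase
a1 = W1 @ x0; h1 = np.tanh(a1)
a2 = W2 @ h1; h2 = np.tanh(a2)
# loss L = 0.5 * ||h2 - t||^2

# Reference: backprop gradients w.r.t. pre-activations, via the chain rule
g2 = (h2 - t) * tanh_deriv(a2)
g1 = (W2.T @ g2) * tanh_deriv(a1)

# Activation Relaxation phase: leaky dynamics dx/dt = -x + (local drive),
# run in parallel across layers; the equilibrium is the backprop gradient.
x2 = np.zeros_like(a2)
x1 = np.zeros_like(a1)
eta = 0.1  # Euler step size (assumed value)
for _ in range(300):
    # output layer: drive is the local loss gradient
    x2 = x2 + eta * (-x2 + (h2 - t) * tanh_deriv(a2))
    # hidden layer: drive uses only x2, the outgoing weights, and f'(a1)
    x1 = x1 + eta * (-x1 + (W2.T @ x2) * tanh_deriv(a1))

# The relaxed activations converge to the backprop gradients
assert np.allclose(x2, g2, atol=1e-6)
assert np.allclose(x1, g1, atol=1e-6)
```

Once the x units have relaxed, weight updates are purely local outer products (e.g. dL/dW2 = outer(x2, h1)), which is what makes the scheme a candidate for neural implementation.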

Updated: 2020-10-13