Scaling up learning with GAIT-prop
arXiv - CS - Artificial Intelligence. Pub Date: 2021-02-23. arXiv:2102.11598
Sander Dalm, Nasir Ahmad, Luca Ambrogioni, Marcel van Gerven

Backpropagation of error (BP) is a widely used and highly successful learning algorithm. However, its reliance on non-local information when propagating error gradients makes it seem an unlikely candidate for learning in the brain. Over the last decade, a number of investigations have examined whether alternative, more biologically plausible computations can be used to approximate BP. This work builds on one such local learning algorithm, Gradient Adjusted Incremental Target Propagation (GAIT-prop), which has recently been shown to approximate BP in a biologically plausible manner. The method constructs local, layer-wise weight-update targets in order to enable plausible credit assignment. In deep networks, however, the local weight updates computed by GAIT-prop can deviate from BP for several reasons. Here, we provide and test methods to overcome these sources of error. In particular, we adaptively rescale the locally computed errors and show that this significantly improves the performance and stability of the GAIT-prop algorithm on the CIFAR-10 dataset.
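To make the idea in the abstract concrete, below is a minimal NumPy sketch of layer-wise target-based learning with adaptive error rescaling. It is an illustrative reconstruction, not the authors' exact formulation: the layer widths, the step size `gamma`, and the normalisation scheme are assumptions, and for brevity the reference error is passed downward with weight transposes, whereas GAIT-prop propagates targets through approximate layer inverses so that all computations stay local.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 128, 10]                       # assumed toy widths
Ws = [rng.normal(0.0, 0.05, (m, n))
      for n, m in zip(sizes[:-1], sizes[1:])]

def act(h):                                       # leaky ReLU keeps the
    return np.where(h > 0, h, 0.1 * h)            # layer map invertible

def act_grad(h):
    return np.where(h > 0, 1.0, 0.1)

def forward(x0):
    xs, hs = [x0], []
    for W in Ws:
        hs.append(W @ xs[-1])
        xs.append(act(hs[-1]))
    return xs, hs

def local_target_updates(x0, y_true, gamma=0.1, lr=0.01):
    xs, hs = forward(x0)
    delta = xs[-1] - y_true                       # squared-loss error
    for l in reversed(range(len(Ws))):
        # Local, layer-wise target: a small perturbation of the layer's
        # own activation in the direction that reduces the loss.
        target = xs[l + 1] - gamma * delta
        e = (xs[l + 1] - target) / gamma          # recovered local error
        # Adaptive rescaling: normalise so the error magnitude neither
        # decays nor explodes with depth (assumed scheme).
        e = e / (np.linalg.norm(e) + 1e-8)
        # Pass the error downward before updating this layer. GAIT-prop
        # instead propagates targets through approximate layer inverses
        # to keep everything local; the transpose here is a shortcut.
        delta = Ws[l].T @ (delta * act_grad(hs[l]))
        Ws[l] -= lr * np.outer(e * act_grad(hs[l]), xs[l])

# Example: one update on a random input with a one-hot label.
x, y = rng.normal(size=784), np.eye(10)[3]
local_target_updates(x, y)
```

Without the rescaling step, the locally computed errors shrink with depth in this sketch, which mirrors the stability problem the paper reports for deep networks.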

Updated: 2021-02-24