Front Contribution instead of Back Propagation
arXiv - CS - Machine Learning. Pub Date: 2021-06-10, DOI: arXiv-2106.05569
Swaroop Mishra, Anjana Arunkumar

Deep Learning's outstanding track record across several domains has stemmed from the use of error backpropagation (BP). Several studies, however, have shown that it is impossible to execute BP in a real brain. BP also remains a significant, unresolved bottleneck for memory usage and training speed. We propose a simple, novel algorithm, the Front-Contribution algorithm, as a compact alternative to BP. The contributions of all weights with respect to the final-layer weights are calculated before training commences, and all of these contributions are appended to the weights of the final layer, i.e., the effective final-layer weights become a non-linear function of themselves. Our algorithm then essentially collapses the network, removing the need to update any weights outside the final layer. This reduction in parameters results in lower memory usage and higher training speed. We show that our algorithm produces exactly the same output as BP, in contrast to several recently proposed algorithms that only approximate BP. Our preliminary experiments demonstrate the efficacy of the proposed algorithm. Our work provides a foundation to effectively utilize these presently under-explored "front contributions", and serves to inspire the next generation of training algorithms.
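To make the collapsing intuition concrete, below is a minimal, hypothetical NumPy sketch for a purely linear two-layer network: folding the earlier layer into an "effective" final-layer weight matrix leaves the forward output unchanged, and only that single matrix then needs to be stored and updated during training. This is an illustration of the general idea under the strong assumption of linearity, not the paper's Front-Contribution algorithm, which handles the non-linear case and matches BP exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network: y = W2 @ (W1 @ x)
d_in, d_hidden, d_out = 8, 16, 4
W1 = rng.normal(size=(d_hidden, d_in))
W2 = rng.normal(size=(d_out, d_hidden))

# "Collapse" the network before training: fold W1 into the effective
# final-layer weights (possible here only because the network is linear).
W_eff = W2 @ W1  # shape (d_out, d_in)

x = rng.normal(size=(d_in,))
# The collapsed network produces the same forward output as the full one.
assert np.allclose(W2 @ (W1 @ x), W_eff @ x)

# Training now touches only W_eff: fewer parameters to store and update.
y_target = rng.normal(size=(d_out,))
lr = 0.01
for _ in range(100):
    y_pred = W_eff @ x
    # Gradient of 0.5 * ||y_pred - y_target||^2 with respect to W_eff.
    grad = np.outer(y_pred - y_target, x)
    W_eff -= lr * grad
```

The sketch shows why collapsing reduces memory and update cost: the hidden-layer weights never need to be revisited once their contribution has been folded in before training begins.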

Updated: 2021-06-11