Probabilistic bounds on data sensitivity in deep rectifier networks
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-07-13, DOI: arxiv-2007.06192
Blaine Rister and Daniel L. Rubin

Neuron death is a complex phenomenon with implications for model trainability, but until recently it was measured only empirically. Recent articles have claimed that, as the depth of a rectifier neural network grows to infinity, the probability of finding a valid initialization decreases to zero. In this work, we provide a simple and rigorous proof of that result. Then, we show what happens when the width of each layer grows simultaneously with the depth. We derive both upper and lower bounds on the probability that a ReLU network is initialized to a trainable point, as a function of model hyperparameters. Contrary to previous claims, we show that it is possible to increase the depth of a network indefinitely, so long as the width increases as well. Furthermore, our bounds are asymptotically tight under reasonable assumptions: first, the upper bound coincides with the true probability for a single-layer network with the largest possible input set. Second, the true probability converges to our lower bound when the network width and depth both grow without limit. Our proof is based on the striking observation that very deep rectifier networks concentrate all outputs towards a single eigenvalue, in the sense that their normalized output variance goes to zero regardless of the network width. Finally, we develop a practical sign flipping scheme which guarantees with probability one that for a $k$-layer network, the ratio of living training data points is at least $2^{-k}$. We confirm our results with numerical simulations, suggesting that the actual improvement far exceeds the theoretical minimum. We also discuss how neuron death provides a theoretical interpretation for various network design choices such as batch normalization, residual layers and skip connections, and could inform the design of very deep neural networks.
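The central quantity in the abstract, the fraction of data points that remain "alive" (nonzero output, so gradients can flow) under a random ReLU initialization, is easy to probe numerically. The following is a minimal NumPy sketch, not the authors' code: the width/depth grid, the He-style bias-free initialization, and the zero-output criterion for "death" are illustrative assumptions consistent with the abstract's description of the phenomenon.

```python
# Minimal simulation sketch (illustrative assumptions, not the paper's code):
# estimate the fraction of random input points that stay "alive" (nonzero
# output) after passing through a randomly initialized deep ReLU network.
import numpy as np

def fraction_alive(width, depth, n_points=1000, rng=None):
    """Fraction of random inputs with nonzero output after `depth`
    randomly initialized ReLU layers of the given `width`."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.standard_normal((n_points, width))                # random input batch
    for _ in range(depth):
        w = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)  # He-style init
        x = np.maximum(x @ w, 0.0)                            # ReLU layer, no bias
    # A point is "dead" if every output coordinate is zero, so no gradient
    # reaches the parameters through it; "alive" otherwise.
    return np.mean(np.any(x > 0.0, axis=1))

if __name__ == "__main__":
    for depth in (2, 8, 32, 128):
        for width in (2, 8, 32):
            p = fraction_alive(width, depth)
            print(f"width={width:3d} depth={depth:4d}  alive fraction ~ {p:.3f}")
```

Under these assumptions the simulation shows the trade-off the abstract argues for: at fixed small width the alive fraction collapses toward zero as depth grows, while widening the layers along with the depth keeps it bounded away from zero.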

Updated: 2020-07-14