Predicting the outputs of finite networks trained with noisy gradients
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-04-02, DOI: arxiv-2004.01190
Gadi Naveh, Oded Ben-David, Haim Sompolinsky and Zohar Ringel

A recent line of work studied wide deep neural networks (DNNs) by approximating them as Gaussian Processes (GPs). A DNN trained with gradient flow was shown to map to a GP governed by the Neural Tangent Kernel (NTK), whereas earlier works showed that a DNN with an i.i.d. prior over its weights maps to the so-called Neural Network Gaussian Process (NNGP). Here we consider a DNN training protocol involving noise, weight decay and finite width, whose outcome corresponds to a certain non-Gaussian stochastic process. An analytical framework is then introduced to analyze this non-Gaussian process, whose deviation from a GP is controlled by the finite width. Our contribution is three-fold: (i) In the infinite-width limit, we establish a correspondence between DNNs trained with noisy gradients and the NNGP, not the NTK. (ii) We provide a general analytical form for the finite width correction (FWC) for DNNs with arbitrary activation functions and depth, and use it to predict the outputs of empirical finite networks with high accuracy. Analyzing the FWC behavior as a function of $n$, the training set size, we find that it is negligible both in the very small $n$ regime and, surprisingly, in the large $n$ regime (where the GP error scales as $O(1/n)$). (iii) We flesh out algebraically how these FWCs can improve the performance of finite convolutional neural networks (CNNs) relative to their GP counterparts on image classification tasks.
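To make contribution (i) concrete, below is a minimal NumPy sketch, not the authors' code, of the training protocol the abstract describes: full-batch gradient descent on the squared loss with added i.i.d. Gaussian noise and weight decay, i.e. Langevin dynamics, whose stationary weight distribution is a Bayesian posterior. The toy task, the one-hidden-layer ReLU architecture, and every hyperparameter below are illustrative assumptions; the closed-form arc-cosine kernel is the NNGP kernel of that architecture, and the time-averaged outputs of the noisily trained finite network should approach the NNGP posterior mean as the width grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D regression task (sizes and hyperparameters are illustrative).
n, d = 8, 1
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(2.0 * np.pi * X[:, 0])
X_test = np.linspace(-1.0, 1.0, 50).reshape(-1, d)

sigma_w2 = 2.0   # prior weight variance (per fan-in)
T = 1e-2         # training-noise temperature, playing the role of GP noise

# NNGP kernel of a one-hidden-layer ReLU network (arc-cosine kernel).
def nngp_kernel(A, B):
    k_aa = sigma_w2 * np.sum(A * A, axis=1) / d
    k_bb = sigma_w2 * np.sum(B * B, axis=1) / d
    k_ab = sigma_w2 * (A @ B.T) / d
    norms = np.sqrt(np.outer(k_aa, k_bb)) + 1e-12
    cos_t = np.clip(k_ab / norms, -1.0, 1.0)
    theta = np.arccos(cos_t)
    return sigma_w2 * norms * (np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)

# NNGP posterior mean: the infinite-width prediction.
K = nngp_kernel(X, X)
gp_mean = nngp_kernel(X_test, X) @ np.linalg.solve(K + T * np.eye(n), y)

# Finite network trained with noisy gradients (Langevin dynamics):
#   theta <- theta - eta * (grad + gamma * theta) + sqrt(2 * eta * T) * xi
# Weight decay gamma is set so the stationary prior variance is sigma_w2 / fan-in.
width = 512
W1 = rng.normal(0.0, np.sqrt(sigma_w2 / d), size=(d, width))
W2 = rng.normal(0.0, np.sqrt(sigma_w2 / width), size=(width, 1))
g1, g2 = T * d / sigma_w2, T * width / sigma_w2
eta, steps = 1e-4, 50_000

avg, n_avg = np.zeros(len(X_test)), 0
for t in range(steps):
    h = np.maximum(X @ W1, 0.0)                       # hidden activations
    err = (h @ W2).ravel() - y                        # full-batch residual
    gW2 = h.T @ err[:, None]                          # grad of (1/2)||f - y||^2
    gW1 = X.T @ (err[:, None] * (h > 0.0) * W2.T)
    W1 += -eta * (gW1 + g1 * W1) + np.sqrt(2 * eta * T) * rng.normal(size=W1.shape)
    W2 += -eta * (gW2 + g2 * W2) + np.sqrt(2 * eta * T) * rng.normal(size=W2.shape)
    if t >= steps // 2 and t % 50 == 0:               # time-average the outputs
        avg += (np.maximum(X_test @ W1, 0.0) @ W2).ravel()
        n_avg += 1

print("max |network - NNGP mean|:", np.abs(avg / n_avg - gp_mean).max())
```

Note the two design choices this sketch hinges on: the noise temperature T enters the GP posterior as the observation-noise variance, and the weight-decay coefficients are tied to T so that, absent data, the weights equilibrate to the i.i.d. Gaussian prior underlying the NNGP. Any residual gap printed at finite width is the kind of deviation the paper's finite width correction (FWC) is meant to capture.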

Updated: 2020-10-07