Training highly effective connectivities within neural networks with randomly initialized, fixed weights
arXiv - CS - Machine Learning, Pub Date: 2020-06-30, DOI: arxiv-2006.16627
Cristian Ivan, Razvan Florian

We present some novel, straightforward methods for training the connection graph of a randomly initialized neural network without training the weights. These methods do not use hyperparameters defining cutoff thresholds and therefore remove the need for iteratively searching for optimal values of such hyperparameters. We can achieve performance similar to or higher than that obtained by training all weights, at a computational cost similar to that of standard training techniques. Besides switching connections on and off, we introduce a novel way of training a network by flipping the signs of its weights. When we try to minimize the number of changed connections, changing less than 10% of them is already enough to reach more than 90% of the accuracy achieved by standard training. We obtain good results even with weights of constant magnitude, or when the weights are drawn from highly asymmetric distributions. These results shed light on the over-parameterization of neural networks and on how they may be reduced to their effective size.
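To make the general idea concrete, here is a minimal PyTorch sketch, not the authors' exact algorithm, of the approach the abstract describes: weights are randomly initialized and kept fixed, while one trainable score per connection decides whether that connection is active (or, in the sign-flip variant, whether the weight's sign is flipped). A connection is kept exactly when its score is positive, so no cutoff-threshold hyperparameter is needed; gradients reach the scores through a straight-through estimator. The class names (MaskedLinear, SignFlipLinear), layer sizes, and the straight-through choice are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StraightThroughMask(torch.autograd.Function):
    """Binary connection mask with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, scores):
        # A connection is active exactly when its score is positive,
        # so there is no cutoff-threshold hyperparameter to tune.
        return (scores > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass the gradient to the scores unchanged.
        return grad_output


class StraightThroughSign(torch.autograd.Function):
    """Per-connection sign (+1/-1) with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, scores):
        # The fixed weight's sign is flipped wherever the score is negative.
        return torch.where(scores > 0, torch.ones_like(scores), -torch.ones_like(scores))

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output


class MaskedLinear(nn.Module):
    """Linear layer with fixed random weights and a trainable connection graph."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Randomly initialized weights that are never updated.
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)
        nn.init.kaiming_normal_(self.weight)
        # One trainable score per connection; its sign decides the connectivity.
        self.scores = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.uniform_(self.scores, -1.0, 1.0)

    def forward(self, x):
        mask = StraightThroughMask.apply(self.scores)
        return F.linear(x, self.weight * mask)


class SignFlipLinear(MaskedLinear):
    """Variant: instead of removing connections, flip the signs of fixed weights."""

    def forward(self, x):
        signs = StraightThroughSign.apply(self.scores)
        return F.linear(x, self.weight * signs)


# Usage: only the scores (i.e., the connection graph) receive gradients;
# the randomly initialized weights stay fixed throughout training.
model = nn.Sequential(MaskedLinear(784, 256), nn.ReLU(), MaskedLinear(256, 10))
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.1, momentum=0.9)

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = F.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Swapping MaskedLinear for SignFlipLinear in the model keeps every connection but lets training choose each weight's sign, which corresponds to the sign-flipping idea mentioned in the abstract.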

Updated: 2020-11-18