Hardness of Learning Neural Networks with Natural Weights
arXiv - CS - Computational Complexity. Pub Date: 2020-06-05, DOI: arxiv-2006.03177
Amit Daniely and Gal Vardi

Neural networks are nowadays highly successful despite strong hardness results. The existing hardness results focus on the network architecture and assume that the network's weights are arbitrary. A natural approach to settling this discrepancy is to assume that the network's weights are "well-behaved" and possess some generic properties that may allow efficient learning. This approach is supported by the intuition that the weights in real-world networks are not arbitrary, but exhibit some "random-like" properties with respect to some "natural" distributions. We prove negative results in this regard, and show that for depth-$2$ networks and many "natural" weight distributions, such as the normal and the uniform distribution, most networks are hard to learn. Namely, there is no efficient learning algorithm that is provably successful for most weights and every input distribution. This implies that there is no generic property that holds with high probability in such random networks and allows efficient learning.
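The object of study is easy to make concrete. Below is a minimal Python/NumPy sketch of the random depth-2 networks in question: weights drawn i.i.d. from a "natural" distribution such as the standard normal or the uniform distribution. The ReLU activation and the uniform range $[-1, 1]$ are illustrative assumptions, not fixed by the abstract; the hardness result says that no efficient algorithm provably learns such networks for most draws of the weights and every input distribution.

```python
import numpy as np

def sample_depth2_network(d, k, dist="normal", rng=None):
    """Sample a random depth-2 network: hidden weights W (k x d)
    and output weights v (k,), drawn i.i.d. from `dist`."""
    rng = np.random.default_rng(rng)
    if dist == "normal":
        W = rng.standard_normal((k, d))
        v = rng.standard_normal(k)
    elif dist == "uniform":
        # Uniform on [-1, 1]; the range is an illustrative choice.
        W = rng.uniform(-1.0, 1.0, size=(k, d))
        v = rng.uniform(-1.0, 1.0, size=k)
    else:
        raise ValueError(f"unknown distribution: {dist}")

    def net(x):
        # Depth-2 computation: linear layer, ReLU, linear output.
        return v @ np.maximum(W @ x, 0.0)

    return net

# Usage: draw one random network and evaluate it on a random input.
f = sample_depth2_network(d=10, k=5, dist="normal", rng=0)
x = np.random.default_rng(1).standard_normal(10)
print(f(x))
```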

Updated: 2020-10-15