Training Neural Networks is ∃R-complete
arXiv - CS - Computational Complexity Pub Date : 2021-02-19 , DOI: arxiv-2102.09798 Mikkel Abrahamsen, Linda Kleist, Tillmann Miltzow
Given a neural network, training data, and a threshold, it was known to be NP-hard to find weights for the neural network such that the total error is below the threshold. We determine the algorithmic complexity of this fundamental problem precisely, by showing that it is ∃R-complete. This means that the problem is equivalent, up to polynomial-time reductions, to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution. If, as widely expected, ∃R is strictly larger than NP, our work implies that the problem of training neural networks is not even in NP.
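To make the notion concrete, here is a minimal sketch (not from the paper) of one tiny instance of the existential theory of the reals, the decision problem the abstract refers to: does the system x² − 2 = 0 and x > 0 have a real solution? It does, but the only witness is x = √2, an irrational number; the lack of short rational certificates is informally why such problems are not expected to lie in NP. The helper `has_positive_root` below is a hypothetical illustration using bisection, not a general ∃R decision procedure.

```python
# Toy ETR instance: is the system  x^2 - 2 = 0  and  x > 0  satisfiable
# over the reals? The witness x = sqrt(2) is irrational, so it has no
# short rational certificate.
# `has_positive_root` is an illustrative helper (bisection on a sign
# change), not a general decision procedure for ETR.

def has_positive_root(f, lo=1e-9, hi=2.0, tol=1e-12):
    """Return True if the continuous function f changes sign on
    (lo, hi], which certifies a root in that interval."""
    if f(lo) * f(hi) > 0:
        return False  # no sign change: no root certified here
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in the left half
        else:
            lo = mid  # root lies in the right half
    return True

# The system x^2 - 2 = 0, x > 0 is satisfiable (x = sqrt(2)):
print(has_positive_root(lambda x: x * x - 2))  # True
# The system x^2 + 1 = 0, x > 0 has no real solution:
print(has_positive_root(lambda x: x * x + 1))  # False
```

Deciding general systems of polynomial equations and inequalities is far harder than this one-variable sign check, which is exactly the point of the ∃R-completeness result.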
Updated: 2021-02-22