Reachability Is NP-Complete Even for the Simplest Neural Networks
arXiv - CS - Computational Complexity. Pub Date: 2021-08-30, arXiv:2108.13179
Marco Sälzer, Martin Lange

We investigate the complexity of the reachability problem for (deep) neural networks: does the network compute a valid output for some valid input? It was recently claimed that this problem is NP-complete for general neural networks with conjunctive input/output specifications. We repair some flaws in the original upper- and lower-bound proofs. We then show that NP-hardness already holds for restricted classes of simple specifications and for neural networks with just one layer, as well as for neural networks with minimal requirements on the occurring parameters.
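To make the reachability question concrete, here is a minimal sketch (not from the paper; the tiny network and all names are illustrative). For a single ReLU neuron over an axis-aligned input box, the exact output range is attained at box corners, so a single output-interval specification can be checked easily; the paper's NP-hardness concerns the harder setting of conjunctive specifications over many neurons, where such per-neuron reasoning no longer suffices.

```python
# Reachability sketch for one ReLU neuron: given an input box and an
# output interval, does some input in the box yield an output in the
# interval?  Each output is monotone in each input, so the min/max of
# the pre-activation over the box is attained at a corner chosen by
# the sign of each weight, making the check exact for this special case.

def relu(x):
    return max(0.0, x)

def output_range(weights, bias, box):
    """Exact [lo, hi] of relu(w . x + b) over an axis-aligned input box."""
    lo = bias
    hi = bias
    for w, (xl, xu) in zip(weights, box):
        lo += w * (xl if w >= 0 else xu)   # minimizing corner coordinate
        hi += w * (xu if w >= 0 else xl)   # maximizing corner coordinate
    return relu(lo), relu(hi)

def reachable(weights, bias, in_box, out_interval):
    """True iff some output in out_interval is attainable from in_box."""
    lo, hi = output_range(weights, bias, in_box)
    ol, ou = out_interval
    return max(lo, ol) <= min(hi, ou)      # do the intervals overlap?

# One neuron with inputs ranging over the unit square [0,1]^2.
print(reachable([1.0, -2.0], 0.5, [(0.0, 1.0), (0.0, 1.0)], (1.0, 2.0)))
```

With weights (1, -2) and bias 0.5 the output range over the box is [0, 1.5], which overlaps the target interval [1, 2], so the specification is reachable. The hardness results in the paper show that once conjunctions of such constraints are allowed, even one-layer networks admit no known polynomial-time check.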

Updated: 2021-08-31