Undecidability of Learnability
arXiv - CS - Computational Complexity Pub Date : 2021-06-02 , DOI: arxiv-2106.01382
Matthias C. Caro

Machine learning researchers and practitioners steadily enlarge the multitude of successful learning models. They achieve this through in-depth theoretical analyses and experiential heuristics. However, there is no known general-purpose procedure for rigorously evaluating whether newly proposed models indeed successfully learn from data. We show that such a procedure cannot exist. For PAC binary classification, uniform and universal online learning, and exact learning through teacher-learner interactions, learnability is in general undecidable, both in the sense of independence of the axioms in a formal system and in the sense of uncomputability. Our proofs proceed via computable constructions of function classes that encode the consistency problem for formal systems and the halting problem for Turing machines into complexity measures that characterize learnability. Our work shows that undecidability appears in the theoretical foundations of machine learning: There is no one-size-fits-all algorithm for deciding whether a machine learning model can be successful. We cannot in general automatize the process of assessing new learning models.
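The reduction described above can be illustrated with a toy sketch (this is not the paper's exact construction): a binary function class is built from a simulated machine so that the class keeps realizing new labelings as long as the machine runs, and stops growing once it halts. Deciding whether the class's capacity is bounded would therefore decide halting. The names `runs_for` and `function_class` are illustrative inventions, and "capacity" here is just the number of distinct labelings realized on a fixed point set, a stand-in for the complexity measures used in the actual proofs.

```python
def runs_for(steps_to_halt, t):
    """Toy 'machine': halts after steps_to_halt steps (None = never halts)."""
    return steps_to_halt is None or t < steps_to_halt


def function_class(steps_to_halt, n_points, n_hypotheses):
    """Hypothesis h_t labels point x positive iff x < t AND the machine
    is still running at step t. Once the machine halts, every later
    hypothesis collapses to the constant-0 labeling, so the number of
    distinct labelings stops growing."""
    hyps = set()
    for t in range(n_hypotheses):
        if runs_for(steps_to_halt, t):
            labeling = tuple(1 if x < t else 0 for x in range(n_points))
        else:
            labeling = tuple(0 for _ in range(n_points))  # halted: constant-0
        hyps.add(labeling)
    return hyps


# A machine that halts after 3 steps realizes only a few labelings...
halting = function_class(3, n_points=10, n_hypotheses=10)
# ...while a non-halting machine keeps realizing new ones.
diverging = function_class(None, n_points=10, n_hypotheses=10)
assert len(halting) < len(diverging)
```

Any general procedure that computed a bound on such a class's capacity would answer the halting question for the embedded machine, which is the shape of the uncomputability argument.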

Updated: 2021-06-04