Learning a Deep Generative Model like a Program: the Free Category Prior
arXiv - CS - Programming Languages. Pub Date: 2020-11-22, arXiv: 2011.11063
Eli Sennesh

Humans surpass the cognitive abilities of most other animals in our ability to "chunk" concepts into words, and then combine the words to combine the concepts. In this process, we make "infinite use of finite means", enabling us to learn new concepts quickly and nest concepts within each other. While program induction and synthesis remain at the heart of foundational theories of artificial intelligence, only recently has the community moved forward in attempting to use program learning as a benchmark task in itself. The cognitive science community has thus often assumed that if the brain has simulation and reasoning capabilities equivalent to a universal computer, then it must employ a serialized, symbolic representation. Here we confront that assumption and provide a counterexample in which compositionality is expressed via network structure: the free category prior over programs. We show how our formalism allows neural networks to serve as primitives in probabilistic programs. We learn both program structure and model parameters end-to-end.
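To make the construction concrete, the following is a minimal, hypothetical sketch in Python/PyTorch (the primitive names, dimensions, and the compose helper are illustrative assumptions, not the paper's code): neural networks are typed edges of a directed graph, a program is a composable path of edges (a morphism in the free category generated by the graph), and the parameters of every primitive along the path remain trainable end-to-end.

import torch
import torch.nn as nn

# Hypothetical primitives: neural networks typed as edges
# (source object -> target object) of a small directed graph.
PRIMITIVES = {
    ("z2", "z4"): nn.Sequential(nn.Linear(2, 4), nn.ReLU()),
    ("z4", "z4"): nn.Sequential(nn.Linear(4, 4), nn.ReLU()),
    ("z4", "x"):  nn.Linear(4, 8),
}

def compose(path):
    # A morphism in the free category: the composite of a typed,
    # composable path of primitive edges.
    modules = []
    for i, edge in enumerate(path):
        if i > 0:
            assert path[i - 1][1] == edge[0], "edges must be composable"
        modules.append(PRIMITIVES[edge])
    return nn.Sequential(*modules)

# One "program": the path z2 -> z4 -> z4 -> x in the graph.
program = compose([("z2", "z4"), ("z4", "z4"), ("z4", "x")])
x = program(torch.randn(1, 2))  # every edge's parameters are trainable

Placing a distribution over such paths, for example a random walk on the graph, would then give a prior over programs in the spirit of the free category prior, while gradient descent through the composed modules fits the model parameters.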

Updated: 2020-11-25