Learning an Executable Neural Semantic Parser
Computational Linguistics (IF 3.7), Pub Date: 2019-03-01, DOI: 10.1162/coli_a_00342
Jianpeng Cheng, Siva Reddy, Vijay Saraswat, Mirella Lapata

This article describes a neural semantic parser that maps natural language utterances onto logical forms that can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response. The parser generates tree-structured logical forms with a transition-based approach, combining a generic tree-generation algorithm with a domain-general grammar defined by the logical language. The generation process is modeled by structured recurrent neural networks, which provide a rich encoding of the sentential context and generation history for making predictions. To tackle mismatches between natural language and logical form tokens, various attention mechanisms are explored. Finally, we consider different training settings for the neural semantic parser, including fully supervised training where annotated logical forms are given, weakly supervised training where denotations are provided, and distant supervision where only unlabeled sentences and a knowledge base are available. Experiments across a wide range of data sets demonstrate the effectiveness of our parser.
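To make the transition-based generation of tree-structured logical forms concrete, the sketch below replays a transition sequence into a nested logical form. It is a minimal, illustrative example: the action names NT (open a nonterminal/predicate subtree), TER (emit a terminal argument), and RED (close the current subtree), the execute_transitions helper, and the toy query are assumptions for illustration, not the authors' implementation, which additionally scores each action with structured recurrent neural networks constrained by the grammar.

```python
def execute_transitions(actions):
    """Replay a sequence of (action, symbol) pairs into a nested logical form.

    NT(x)  -- open a new subtree rooted at predicate x
    TER(x) -- attach terminal x (e.g., an entity) to the open subtree
    RED    -- close the most recently opened subtree
    """
    stack = [[]]                      # stack of partially built subtrees
    for action, symbol in actions:
        if action == "NT":
            node = [symbol]
            stack[-1].append(node)    # attach the new subtree to its parent
            stack.append(node)        # and make it the current open subtree
        elif action == "TER":
            stack[-1].append(symbol)
        elif action == "RED":
            stack.pop()
        else:
            raise ValueError(f"unknown action: {action}")
    return stack[0][0]                # the single completed tree


if __name__ == "__main__":
    # Hypothetical derivation for a question like "where did Obama study":
    # NT(education), TER(Obama), RED  ->  ['education', 'Obama']
    actions = [("NT", "education"), ("TER", "Obama"), ("RED", None)]
    print(execute_transitions(actions))
```

In the paper's setting, the same stack discipline guarantees that every completed action sequence corresponds to a well-formed, executable tree, while the neural model decides which action and symbol to emit at each step.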
