How Additional Knowledge can Improve Natural Language Commonsense Question Answering?
arXiv - CS - Information Retrieval. Pub Date: 2019-09-19. arXiv:1909.08855
Arindam Mitra, Pratyay Banerjee, Kuntal Kumar Pal, Swaroop Mishra and Chitta Baral

Recently, several datasets have been proposed to encourage research on question answering in domains where commonsense knowledge is expected to play an important role. Language models such as RoBERTa, BERT, and GPT, pre-trained on Wikipedia articles and books, have shown reasonable performance with little fine-tuning on several such multiple-choice question-answering (MCQ) datasets. Our goal in this work is to develop methods that incorporate additional (commonsense) knowledge into language model-based approaches for better question answering in such domains. We first categorize external knowledge sources and show that performance does improve when such sources are used. We then explore three different strategies for knowledge incorporation and four different models for question answering with external commonsense knowledge. Finally, we analyze our predictions to identify the scope for further improvement.
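
As a concrete illustration of the simplest knowledge-incorporation strategy of this kind, the sketch below concatenates retrieved commonsense facts to each (question, option) pair before scoring the options with a transformer multiple-choice head. This is a minimal sketch, not the authors' exact pipeline: the roberta-base checkpoint, the example question, and the hand-written knowledge string are all placeholder assumptions, and in practice the facts would be retrieved from an external source such as ConceptNet.

```python
# Minimal sketch of the "concatenation" strategy for external knowledge:
# prepend retrieved facts to every (question, option) pair, then let a
# multiple-choice head score all options jointly.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# NOTE: the multiple-choice classifier head of a raw roberta-base
# checkpoint is randomly initialized; fine-tuning on the target MCQ
# dataset is required before the scores are meaningful.
model = AutoModelForMultipleChoice.from_pretrained("roberta-base")

question = "Where would you find a seashell?"          # placeholder example
options = ["beach", "library", "kitchen"]
# Hand-written placeholder for facts retrieved from an external source.
knowledge = "Seashells are found on beaches. A beach is next to the sea."

# One (knowledge, question + option) sequence pair per candidate answer.
first = [knowledge] * len(options)
second = [f"{question} {opt}" for opt in options]
enc = tokenizer(first, second, padding=True, truncation=True,
                return_tensors="pt")
# Reshape to (batch_size=1, num_choices, seq_len), the layout the
# multiple-choice head expects.
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits   # shape: (1, num_choices)
print(options[logits.argmax(dim=-1).item()])
```

Other incorporation strategies explored in work of this kind differ mainly in where the knowledge enters the model (input concatenation, as above, versus separate encoding and fusion), which is why the concatenation variant makes a useful baseline sketch.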

Updated: 2020-04-20