ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning
arXiv - CS - Computation and Language. Pub Date: 2021-02-25, arXiv:2102.12828
Xin Xie, Xiangnan Chen, Xiang Chen, Yong Wang, Ningyu Zhang, Shumin Deng, Huajun Chen

This paper presents our systems for the three subtasks of SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning (ReCAM). We explain the algorithms used to train our models, as well as the process of tuning them and selecting the best model. Inspired by the similarity between the ReCAM task and language model pre-training, we propose a simple yet effective technique, namely, negative augmentation with a language model. Evaluation results demonstrate the effectiveness of the proposed approach. Our models rank 4th on the official test sets of both Subtask 1 and Subtask 2, with accuracies of 87.9% and 92.8%, respectively. We further conduct a comprehensive model analysis and observe interesting error cases, which may inform future research.
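The abstract does not spell out how negative augmentation works; the sketch below shows one *hypothetical* reading, in which options that a language model rates as likely, but that are not the gold answer, are appended as extra negative candidates for a cloze-style question. The unigram "LM", the corpus, and the function names are toy stand-ins for illustration only, not the authors' actual method or a real masked LM such as BERT.

```python
import math
from collections import Counter

# Toy "language model": unigram counts over a tiny corpus stand in for
# a real masked LM (an assumption made purely for illustration).
corpus = ("the model reads the passage and predicts the masked word "
          "the answer is abstract meaning of the passage").split()
unigram = Counter(corpus)
total = sum(unigram.values())

def lm_score(word):
    """Add-one-smoothed log-probability of a candidate under the toy LM."""
    return math.log((unigram[word] + 1) / (total + len(unigram)))

def negative_augmentation(options, k=2):
    """Append the k most LM-probable words not already among the answer
    options as extra negative candidates (hypothetical scheme)."""
    negatives = [w for w, _ in unigram.most_common() if w not in options][:k]
    return options + negatives

# A two-option cloze question gains two LM-generated distractors,
# making the training signal harder than the original candidate set.
augmented = negative_augmentation(["meaning", "banana"])
print(augmented)  # the gold answer plus original and LM-sourced negatives
```

The design intuition, under these assumptions, is that distractors the language model itself finds plausible are harder negatives than random words, so training against them should sharpen the model's ability to reject superficially fluent but wrong options.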

Updated: 2021-02-26