IIE-NLP-Eyas at SemEval-2021 Task 4: Enhancing PLM for ReCAM with Special Tokens, Re-Ranking, Siamese Encoders and Back Translation
arXiv - CS - Computation and Language Pub Date : 2021-02-25 , DOI: arxiv-2102.12777
Yuqiang Xie, Luxi Xing, Wei Peng, Yue Hu

This paper introduces our systems for all three subtasks of SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning. To help our model better represent and understand abstract concepts in natural language, we carefully design several simple and effective approaches built on the backbone model (RoBERTa). Specifically, we formalize the subtasks as multiple-choice question answering and add special tokens around abstract concepts; the final question-answering prediction then serves as the subtask result. Additionally, we employ several fine-tuning tricks to further improve performance. Experimental results show that our approaches achieve significant improvements over the baseline systems, ranking eighth on subtask-1 and tenth on subtask-2.
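As a concrete illustration of the multiple-choice formulation and special-token idea described in the abstract, the following is a minimal sketch, not the authors' released code, using the Hugging Face transformers library. The marker tokens `<con>`/`</con>`, the example passage, and the candidate options are hypothetical; the "@placeholder" slot mirrors the ReCAM question format. The model here is untrained on the task, so the prediction is only illustrative.

```python
import torch
from transformers import RobertaTokenizer, RobertaForMultipleChoice

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMultipleChoice.from_pretrained("roberta-base")

# Hypothetical marker tokens for abstract concepts; the paper's exact
# token strings are not given in the abstract.
tokenizer.add_special_tokens({"additional_special_tokens": ["<con>", "</con>"]})
model.resize_token_embeddings(len(tokenizer))

passage = "The committee finally reached a <con>consensus</con> after the debate."
question = "The committee finally reached a @placeholder after the debate."
options = ["agreement", "building", "river", "melody", "hammer"]

# Build one (question-with-option, passage) pair per candidate answer,
# as in the multiple-choice QA formulation.
encoded = tokenizer(
    [question.replace("@placeholder", opt) for opt in options],
    [passage] * len(options),
    padding=True,
    truncation=True,
    return_tensors="pt",
)
# RobertaForMultipleChoice expects tensors of shape
# (batch_size, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(options[logits.argmax(dim=-1).item()])
```

In this formulation, each option is scored jointly with the passage and the highest-scoring option is taken as the answer, which is what lets a single question-answering prediction serve as the subtask result.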

Updated: 2021-02-26