LabelPrompt: Effective Prompt-based Learning for Relation Classification
arXiv - CS - Machine Learning Pub Date : 2023-02-16 , DOI: arxiv-2302.08068
Wenjie Zhang, Xiaoning Song, Zhenhua Feng, Tianyang Xu, Xiaojun Wu

Recently, prompt-based learning has become a popular solution for many Natural Language Processing (NLP) tasks: by inserting a template into the model input, it converts the task into a cloze-style one, smoothing out the differences between the Pre-trained Language Model (PLM) and the downstream task. In relation classification, however, it is difficult to map the masked output to the relation labels because of their rich semantic content, e.g. "org:founded_by". As a result, a pre-trained model still needs sufficient labelled data to fit the relations. To mitigate this challenge, we present a novel prompt-based learning method, namely LabelPrompt, for the relation classification task. It is an intuitive approach driven by a simple motivation: "GIVE MODEL CHOICES!". First, we define additional tokens to represent the relation labels, treat these tokens as the verbalizer with semantic initialisation, and construct them with a prompt template method. Second, to address the inconsistency between the predicted relation and the given entities, we design an entity-aware module based on contrastive learning. Finally, we apply an attention query strategy in the self-attention layers to distinguish the two types of tokens, prompt tokens and sequence tokens. The proposed strategy effectively improves the adaptation capability of prompt-based learning for relation classification when only a small amount of labelled data is available. Extensive experimental results on several benchmark datasets demonstrate the superiority of the proposed LabelPrompt method, particularly in the few-shot scenario.
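To make the "give model choices" idea concrete, the sketch below shows a minimal cloze-style setup in which relation labels are added as extra tokens, initialised from the embeddings of the words in their label names, and scored at the mask position. This is only an illustration under assumed names (RELATIONS, predict_relation, a RoBERTa backbone), not the authors' implementation or template.

```python
# Minimal sketch (assumption-based, not the paper's code) of cloze-style relation
# classification with label tokens restricted to a fixed choice set.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

RELATIONS = ["org:founded_by", "per:employee_of", "no_relation"]  # illustrative label set

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# One new vocabulary token per relation label, so the verbalizer maps each
# relation to a single token rather than a multi-word phrase.
label_tokens = [f"[REL_{i}]" for i in range(len(RELATIONS))]
tokenizer.add_special_tokens({"additional_special_tokens": label_tokens})
model.resize_token_embeddings(len(tokenizer))

# Semantic initialisation: start each label-token embedding from the mean
# embedding of the words in its label name (e.g. "org", "founded", "by").
with torch.no_grad():
    emb = model.get_input_embeddings()
    for tok, name in zip(label_tokens, RELATIONS):
        word_ids = tokenizer(name.replace(":", " ").replace("_", " "),
                             add_special_tokens=False)["input_ids"]
        emb.weight[tokenizer.convert_tokens_to_ids(tok)] = emb.weight[word_ids].mean(dim=0)

def predict_relation(sentence: str, head: str, tail: str) -> str:
    # Cloze-style template (illustrative wording): the model fills the mask.
    prompt = f"{sentence} The relation between {head} and {tail} is {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    logits = model(**inputs).logits[0, mask_pos]
    # Score only the label tokens: the model chooses among the given options.
    label_ids = tokenizer.convert_tokens_to_ids(label_tokens)
    return RELATIONS[logits[label_ids].argmax().item()]

print(predict_relation("Apple was founded by Steve Jobs.", "Apple", "Steve Jobs"))
```

Restricting the prediction to the label tokens is what turns an open-vocabulary cloze task into a classification over a fixed label set; without fine-tuning on labelled examples, the output of this sketch would of course not yet be reliable.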

Updated: 2023-02-17