Few-shot relation classification by context attention-based prototypical networks with BERT
EURASIP Journal on Wireless Communications and Networking ( IF 2.6 ) Pub Date : 2020-06-08 , DOI: 10.1186/s13638-020-01720-6
Bei Hui , Liang Liu , Jia Chen , Xue Zhou , Yuhui Nian

Human-computer interaction on cloud computing platforms is important, but the semantic gap limits interaction performance, so it is necessary to understand the semantic information in various scenarios. Relation classification (RC) is an important method for formalizing semantics: it aims to classify the relation between two specified entities in a sentence. Existing RC models typically rely on supervised learning or distant supervision. Supervised learning requires large-scale labeled training datasets, which are not readily available; distant supervision introduces noise, and many long-tail relations still suffer from data sparsity. Few-shot learning, which is widely used in image classification, is an effective method for overcoming data sparsity. In this paper, we apply few-shot learning to the relation classification task. However, not all instances contribute equally to the relation prototype in a text-based few-shot learning scenario, which causes the prototype deviation problem. To address this problem, we propose context attention-based prototypical networks: a context attention mechanism highlights the crucial instances in the support set to generate a more representative prototype. We also explore applying BERT, a recently popular pre-trained language model, to few-shot relation classification tasks. The experimental results demonstrate that our model outperforms state-of-the-art models and converges faster.
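The core idea behind the attention-weighted prototype can be illustrated with a minimal sketch. This is an assumption-level illustration, not the authors' exact architecture: support instances are weighted by their dot-product relevance to the query (via a softmax), instead of taking a plain mean, so uninformative support instances pull the prototype less.

```python
import numpy as np

def attention_prototype(support: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Query-conditioned prototype for one relation class.

    support: (K, d) embeddings of the K support instances (e.g. from an encoder
             such as BERT; here they are just plain vectors for illustration).
    query:   (d,) embedding of the query instance.
    Returns a (d,) prototype: a softmax-attention-weighted sum of the support
    vectors, rather than their unweighted mean.
    """
    scores = support @ query                 # (K,) dot-product relevance scores
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ support                 # (d,) convex combination of support

def classify(query: np.ndarray, class_supports: list[np.ndarray]) -> int:
    """Assign the query to the class whose (query-conditioned) prototype is
    nearest in Euclidean distance, as in prototypical networks."""
    dists = [np.linalg.norm(query - attention_prototype(s, query))
             for s in class_supports]
    return int(np.argmin(dists))
```

In an N-way K-shot episode, `class_supports` holds one `(K, d)` support matrix per relation; because the attention weights sum to one, each prototype stays inside the convex hull of its support embeddings.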




Updated: 2020-06-08