Attention-Based LSTM with Filter Mechanism for Entity Relation Classification
Symmetry (IF 2.940), Pub Date: 2020-10-19, DOI: 10.3390/sym12101729
Yanliang Jin , Dijia Wu , Weisi Guo

Relation classification is an important task in natural language processing (NLP) that aims to recognize the relationship between two tagged entities in a sentence. Noise from irrelevant words and the word distance between the tagged entities can degrade classification accuracy. In this paper, we present a novel model, a multi-head attention long short-term memory (LSTM) network with a filter mechanism (MALNet), to extract text features and classify the relation between two entities in a sentence. In particular, we combine an LSTM with an attention mechanism to obtain shallow local information, and we introduce an attention-based filter layer to strengthen the useful information. In addition, we design a semantic rule for marking the keyword between the target entities and construct a keyword layer to extract its semantic information. We evaluated the performance of our model on the SemEval-2010 Task 8 and KBP-37 datasets, achieving F1-scores of 86.3% and 61.4%, respectively, which shows that our method outperforms previous state-of-the-art methods.
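The core idea described above — attention weights pooling the LSTM hidden states, with a filter stage suppressing low-relevance words — can be sketched roughly as follows. This is an illustrative toy in numpy, not the paper's exact formulation: the hidden states are random stand-ins for LSTM outputs, and the threshold-based filter gate (`tau`) is an assumed simplification of the attention-based filter layer.

```python
import numpy as np

np.random.seed(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H, w):
    # H: (T, d) hidden states; w: (d,) learned query vector (assumed).
    # Returns the attention-weighted sentence vector and the weights.
    scores = softmax(H @ w)          # (T,) attention distribution over words
    return scores @ H, scores        # pooled vector of shape (d,)

def filter_layer(H, scores, tau=0.1):
    # Hypothetical filter: zero out states whose attention weight
    # falls below tau, so irrelevant-word noise is suppressed.
    gate = (scores >= tau).astype(float)
    return H * gate[:, None]

T, d = 6, 4                  # sentence length, hidden size (toy values)
H = np.random.randn(T, d)    # stand-in for LSTM hidden states
w = np.random.randn(d)       # stand-in for a learned attention query

pooled, scores = attention_pool(H, w)
H_filtered = filter_layer(H, scores)
pooled_filtered, _ = attention_pool(H_filtered, w)
```

In the real model the pooled vector would feed a softmax classifier over relation labels; here the sketch only shows how the filter reshapes the representation before pooling.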

Updated: 2020-10-19