Syntactically-Meaningful and Transferable Recursive Neural Networks for Aspect and Opinion Extraction
Computational Linguistics (IF 9.3). Pub Date: 2020-01-01, DOI: 10.1162/coli_a_00362
Wenya Wang, Sinno Jialin Pan

In fine-grained opinion mining, extracting aspect terms (a.k.a. opinion targets) and opinion terms (a.k.a. opinion expressions) from user-generated texts is the most fundamental task for generating structured opinion summaries. Existing studies have shown that the syntactic relations between aspect and opinion words play an important role in aspect and opinion term extraction. However, most prior work either relies on pre-defined rules or separates relation mining from feature learning. Moreover, these works focus only on single-domain extraction and fail to adapt well to other domains of interest, where only unlabeled data are available. In real-world scenarios, annotated resources are extremely scarce for many domains, motivating knowledge transfer strategies from labeled source domain(s) to any unlabeled target domain. We observe that syntactic relations among the target words to be extracted are not only crucial for single-domain extraction, but also serve as invariant "pivot" information to bridge the gap between different domains. In this paper, we explore constructing recursive neural networks based on the dependency tree of each sentence to associate syntactic structure with feature learning. Furthermore, we construct transferable recursive neural networks to automatically learn the domain-invariant fine-grained interactions among aspect words and opinion words. The transferability is built on an auxiliary task and a conditional domain adversarial network, which together effectively reduce the domain distribution difference in the hidden space at the word level through syntactic relations. Specifically, the auxiliary task builds structural correspondences across domains by predicting the dependency relation for each path of the dependency tree in the recursive neural network. The conditional domain adversarial network helps to learn a domain-invariant hidden representation for each word, conditioned on the syntactic structure.
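To make the tree-structured feature learning concrete, here is a hypothetical minimal sketch (not the authors' exact model) of recursive composition over a dependency tree: each word's hidden vector is computed bottom-up from its embedding plus relation-specific transforms of its children's hidden vectors. The relation inventory, dimensions, and toy parse are all illustrative assumptions.

```python
import math
import random

random.seed(0)
DIM = 4
RELATIONS = ["nsubj", "amod", "dobj"]  # toy relation inventory (assumption)

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

# One weight matrix per dependency relation, plus one for the word itself.
W_rel = {r: rand_matrix(DIM, DIM) for r in RELATIONS}
W_word = rand_matrix(DIM, DIM)

def compose(node, embeddings):
    """Bottom-up composition over a dependency subtree.

    node = (word_index, [(relation, child_node), ...])
    Returns the hidden vector for the subtree rooted at `node`.
    """
    idx, children = node
    h = matvec(W_word, embeddings[idx])
    for rel, child in children:
        child_h = compose(child, embeddings)
        h = [a + b for a, b in zip(h, matvec(W_rel[rel], child_h))]
    return [math.tanh(x) for x in h]

# Toy parse of "service was great": root "great" (index 2) with
# "service" (index 0) attached via nsubj; arcs are illustrative only.
embeddings = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(3)]
tree = (2, [("nsubj", (0, []))])
root_hidden = compose(tree, embeddings)
print(len(root_hidden))  # 4
```

Because the weights are tied to dependency relations rather than word identity, the same composition applies unchanged to sentences from any domain, which is the property the transfer setup relies on.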
Finally, we integrate the recursive neural network with a sequence labeling classifier on top that models contextual influence in the final predictions. Extensive experiments and analyses on three benchmark datasets demonstrate the effectiveness of the proposed model and each of its components.
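The final labeling step can be sketched as a per-word classifier over hidden vectors, emitting BIO tags for aspect and opinion terms. This is a hypothetical greedy-decoding stand-in, not the paper's actual sequence labeling head; the label set and random weights are illustrative.

```python
import random

random.seed(1)
DIM = 4
# BIO tag set for aspect (ASP) and opinion (OPN) terms (assumed inventory).
LABELS = ["O", "B-ASP", "I-ASP", "B-OPN", "I-OPN"]

W_out = [[random.uniform(-0.1, 0.1) for _ in range(DIM)] for _ in LABELS]

def label_sequence(hidden_states):
    """Assign each word its highest-scoring BIO label (greedy decoding)."""
    tags = []
    for h in hidden_states:
        scores = [sum(w * x for w, x in zip(row, h)) for row in W_out]
        tags.append(LABELS[scores.index(max(scores))])
    return tags

# Stand-in hidden vectors, e.g. produced by a recursive network over the
# dependency tree of a 4-word sentence.
hidden = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(4)]
tags = label_sequence(hidden)
print(tags)
```

A real sequence labeler would additionally model tag-to-tag transitions (e.g. forbidding `I-ASP` after `O`), which is the "contextual influence" the classifier on top is meant to capture.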

Updated: 2020-01-01