Negative-supervised capsule graph neural network for few-shot text classification
Journal of Intelligent & Fuzzy Systems (IF 1.7), Pub Date: 2021-08-28, DOI: 10.3233/jifs-210795
Ling Ding, Xiaojun Chen, Yang Xiang

Few-shot text classification aims to learn a classifier from very few labeled text data. Existing studies on this topic mainly adopt prototypical networks and focus on interactive information between support set and query instances to learn generalized class prototypes. However, in the process of encoding, these methods only pay attention to the matching information between the support set and query instances, and ignore much useful information about intra-class similarity and inter-class dissimilarity among all support samples. Therefore, in this paper we propose a negative-supervised capsule graph neural network (NSCGNN) which explicitly makes use of the similarity and dissimilarity between samples to pull the text representations of the same type closer to each other and push those of different types farther apart, leading to representative and discriminative class prototypes. We first construct a graph to obtain text representations in the form of node capsules, where both intra-cluster similarity and inter-cluster dissimilarity between all samples are explored through information aggregation and negative supervision. Then, in order to induce generalized class prototypes from the node capsules obtained by the graph neural network, the dynamic routing algorithm is utilized in our model. Experimental results demonstrate the effectiveness of the proposed NSCGNN model, which outperforms existing few-shot approaches on three benchmark datasets.
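
The abstract outlines two mechanisms that a short sketch can make concrete: a negative-supervision signal over the support samples, and dynamic routing to induce class prototypes from node capsules. Below is a minimal PyTorch sketch, assuming each support sample has already been encoded by the graph network into a d-dimensional node capsule; the function names (squash, negative_supervised_loss, induce_class_prototype), the cosine-similarity margin loss, and the margin value are illustrative assumptions rather than the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def squash(v, dim=-1, eps=1e-8):
    """Capsule squashing non-linearity: keeps direction, maps length into [0, 1)."""
    norm_sq = (v ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * v / torch.sqrt(norm_sq + eps)

def negative_supervised_loss(capsules, labels, margin=0.5):
    """Illustrative negative-supervision objective: pull same-class node
    capsules together and push different-class ones apart.
    The cosine formulation and margin are assumptions, not the paper's."""
    z = F.normalize(capsules, dim=-1)            # (n, d) unit-length capsules
    sim = z @ z.t()                              # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    not_self = ~torch.eye(len(labels), dtype=torch.bool)
    pos = sim[same & not_self]                   # intra-class pairs
    neg = sim[~same]                             # inter-class pairs
    return (1.0 - pos).mean() + F.relu(neg - margin).mean()

def induce_class_prototype(node_capsules, num_iters=3):
    """Induce one class prototype from the support node capsules of a class
    via routing-by-agreement (Sabour et al., 2017).

    node_capsules: (k, d) tensor, one capsule per support sample of the class.
    Returns a (d,) prototype capsule.
    """
    k, _ = node_capsules.shape
    logits = torch.zeros(k)                      # routing logits b_i
    for _ in range(num_iters):
        c = torch.softmax(logits, dim=0)         # coupling coefficients
        s = (c.unsqueeze(-1) * node_capsules).sum(dim=0)
        proto = squash(s)                        # squashed prototype capsule
        logits = logits + node_capsules @ proto  # agreement update
    return proto
```

In an N-way K-shot episode, induce_class_prototype would presumably be applied once per class to that class's K support capsules, with queries then scored against the resulting prototype capsules.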

Updated: 2021-09-03