Concept Representation by Learning Explicit and Implicit Concept Couplings
IEEE Intelligent Systems ( IF 5.6 ) Pub Date : 2020-09-02 , DOI: 10.1109/mis.2020.3021188
Wenpeng Lu 1 , Yuteng Zhang 1 , Shoujin Wang 2 , Heyan Huang 3 , Qian Liu 3 , Sheng Luo 4

Generating a precise semantic representation of a word or concept is a fundamental task in natural language processing. Recent studies that incorporate semantic knowledge into word embeddings have shown potential for improving the semantic representation of a concept. However, existing approaches achieve only limited performance improvement because they usually 1) model a word’s semantics from some explicit aspects while ignoring the intrinsic aspects of the word, 2) treat semantic knowledge as a supplement to word embeddings, and 3) consider only partial relations between concepts while ignoring the rich coupling relations between them, such as explicit concept co-occurrences in descriptive texts in a corpus, concept hyperlink relations in a knowledge network, and implicit couplings between concept co-occurrences and hyperlinks. In human consciousness, a concept is always associated with various couplings that exist within and between descriptive texts and knowledge networks, which inspires us to capture as many concept couplings as possible to build a more informative concept representation. We thus propose a neural coupled concept representation (CoupledCR) framework and its instantiation: a coupled concept embedding (CCE) model. CCE first learns two types of explicit couplings, based on concept co-occurrences and hyperlink relations, respectively, and then learns a type of high-level implicit coupling between these two types of explicit couplings for better concept representation. Extensive experimental results on six real-world datasets show that CCE significantly outperforms eight state-of-the-art word embedding and semantic representation methods.
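The abstract describes a two-stage idea: learn separate concept embeddings from two explicit relation types (text co-occurrences and knowledge-network hyperlinks), then learn an implicit coupling across the two. The paper's actual model architecture is not given here, so the following is only a minimal illustrative sketch of that general pattern, using a toy vocabulary, count-based PPMI/SVD embeddings for each explicit view, and a second SVD over the concatenated views as a crude stand-in for learning the implicit coupling. All names and data are hypothetical, not from the paper.

```python
import numpy as np

# Toy concept vocabulary (hypothetical; stands in for Wikipedia-style concepts).
concepts = ["python", "java", "snake", "coffee", "programming", "reptile"]
n = len(concepts)
idx = {c: i for i, c in enumerate(concepts)}

# Explicit coupling 1: concept co-occurrences in descriptive texts (toy pairs).
cooc_pairs = [("python", "programming"), ("java", "programming"),
              ("python", "snake"), ("snake", "reptile"), ("java", "coffee")]
# Explicit coupling 2: hyperlink relations in a knowledge network (toy pairs).
link_pairs = [("python", "programming"), ("java", "programming"),
              ("snake", "reptile"), ("python", "reptile")]

def embed(pairs, dim=3):
    """Embed concepts from one relation type via SVD of a PPMI-weighted matrix."""
    M = np.zeros((n, n))
    for a, b in pairs:          # symmetric relation counts
        M[idx[a], idx[b]] += 1.0
        M[idx[b], idx[a]] += 1.0
    # Positive PMI re-weighting, as commonly used in count-based embeddings.
    total = M.sum()
    row = M.sum(1, keepdims=True) + 1e-9
    ppmi = np.maximum(np.log(M * total / (row * row.T) + 1e-9), 0.0)
    U, S, _ = np.linalg.svd(ppmi)
    return U[:, :dim] * np.sqrt(S[:dim])

E_text = embed(cooc_pairs)   # embedding from explicit co-occurrence couplings
E_link = embed(link_pairs)   # embedding from explicit hyperlink couplings

# "Implicit coupling" across the two explicit views: here just an SVD over the
# concatenated views that mixes their dimensions (the real CCE learns this).
joint = np.concatenate([E_text, E_link], axis=1)
U, S, _ = np.linalg.svd(joint, full_matrices=False)
E_final = U[:, :3] * S[:3]

def sim(a, b, E):
    """Cosine similarity between two concepts under embedding E."""
    va, vb = E[idx[a]], E[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

print(sim("python", "snake", E_final))
```

The fused representation draws on both views, so a concept like "python" can sit near both "programming" (shared by text and links) and "snake" (text only); the actual CCE model replaces the final SVD with learned neural couplings.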

Updated: 2020-09-02