A Hybrid Neural Network BERT-Cap Based on Pre-Trained Language Model and Capsule Network for User Intent Classification
Complexity (IF 2.3), Pub Date: 2020-11-21, DOI: 10.1155/2020/8858852
Hai Liu 1,2, Yuanxia Liu 1, Leung-Pun Wong 3, Lap-Kei Lee 3, Tianyong Hao 1,4

User intent classification is a vital component of question-answering systems and task-based dialogue systems. To understand the goal behind a user's question or utterance, the system categorizes user text into a set of pre-defined intent categories. User questions and utterances are usually short and lack sufficient context, making it difficult to extract deep semantic information from such text, which can reduce the accuracy of user intent classification. To better identify user intents, this paper proposes BERT-Cap, a hybrid neural network model trained with focal loss, to capture user intents in dialogue. The model encodes user utterances with multiple transformer encoder blocks whose parameters are initialized from a pre-trained BERT, and then extracts essential features from the encoded utterances using a capsule network with dynamic routing. Experimental results on four publicly available datasets show that BERT-Cap achieves an F1 score of 0.967 and an accuracy of 0.967, outperforming a number of baseline methods and demonstrating its effectiveness for user intent classification.
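
The abstract outlines a three-part pipeline: BERT transformer encoders produce contextual token vectors, a capsule layer with dynamic routing aggregates them into one capsule per intent class, and focal loss drives training. Below is a minimal PyTorch sketch of such an architecture, assuming the Hugging Face transformers library; the class names, routing iteration count, capsule dimension, and the use of capsule lengths as classification scores are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel


def squash(s, dim=-1, eps=1e-8):
    # Capsule non-linearity: shrinks vector length into (0, 1) while keeping direction.
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)


class CapsuleLayer(nn.Module):
    # Maps input capsules (token vectors) to one output capsule per intent class
    # via dynamic routing-by-agreement (Sabour et al., 2017).
    def __init__(self, in_dim, num_classes, out_dim, num_iterations=3):
        super().__init__()
        self.num_classes = num_classes
        self.num_iterations = num_iterations
        self.W = nn.Parameter(0.01 * torch.randn(num_classes, in_dim, out_dim))

    def forward(self, u):                        # u: (batch, num_tokens, in_dim)
        # Prediction vectors u_hat: (batch, num_classes, num_tokens, out_dim)
        u_hat = torch.einsum('bnd,kde->bkne', u, self.W)
        b = torch.zeros(u.size(0), self.num_classes, u.size(1), device=u.device)
        for _ in range(self.num_iterations):
            c = F.softmax(b, dim=1)              # coupling coefficients over classes
            s = (c.unsqueeze(-1) * u_hat).sum(dim=2)
            v = squash(s)                        # (batch, num_classes, out_dim)
            b = b + torch.einsum('bkne,bke->bkn', u_hat, v)  # agreement update
        return v


class FocalLoss(nn.Module):
    # Focal loss down-weights well-classified examples (Lin et al., 2017).
    def __init__(self, gamma=2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, target):
        logp = F.log_softmax(logits, dim=-1)
        logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)
        p_t = logp_t.exp()
        return (-((1.0 - p_t) ** self.gamma) * logp_t).mean()


class BertCap(nn.Module):
    def __init__(self, num_classes, caps_dim=16, bert_name='bert-base-uncased'):
        super().__init__()
        # Transformer encoder blocks initialized from a pre-trained BERT.
        self.bert = BertModel.from_pretrained(bert_name)
        self.capsules = CapsuleLayer(self.bert.config.hidden_size, num_classes, caps_dim)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        v = self.capsules(hidden)                # one capsule per intent class
        # Using capsule lengths as class scores is a simplification for this sketch.
        return v.norm(dim=-1)                    # (batch, num_classes)
```

In use, the capsule-length scores would be passed to FocalLoss along with the gold intent labels, e.g. `loss = FocalLoss()(BertCap(num_classes=5)(input_ids, attention_mask), labels)`; the paper's actual training details (loss weighting, routing iterations, capsule dimensions) are not specified in the abstract.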

Updated: 2020-11-22