Channel-Aware Decoupling Network for Multiturn Dialog Comprehension
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2) Pub Date: 2022-11-14, DOI: 10.1109/tnnls.2022.3220047
Zhuosheng Zhang, Hai Zhao, Longxiang Liu

Training machines to understand natural language and interact with humans is one of the major goals of artificial intelligence. Recent years have witnessed an evolution from matching networks to pretrained language models (PrLMs). In contrast to the plain-text modeling on which PrLMs focus, dialog texts involve multiple speakers and exhibit special characteristics, such as topic transitions and structural dependencies between distant utterances. However, related PrLM models commonly represent dialogs sequentially, processing the pairwise dialog history as a whole. Thus, the hierarchical information on utterance interrelations and speaker roles coupled in such representations is not well addressed. In this work, we propose compositional learning for holistic interaction across utterances, beyond the sequential contextualization from PrLMs, to capture the utterance-aware and speaker-aware representations entailed in a dialog history. We decouple the contextualized word representations via masking mechanisms in a transformer-based PrLM, making each word focus only on the words in the current utterance, in other utterances, and in the two speaker roles (i.e., the utterances of the sender and the utterances of the receiver), respectively. In addition, we employ domain-adaptive training strategies to help the model adapt to the dialog domains. Experimental results show that our method substantially boosts strong PrLM baselines on four public benchmark datasets, achieving new state-of-the-art performance over previous methods.
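To make the decoupling idea concrete, the sketch below builds the four channel-specific attention masks described in the abstract (current utterance, other utterances, sender's utterances, receiver's utterances) from per-token utterance indices and speaker roles. This is a minimal, hypothetical reconstruction based only on the abstract: the function names, tensor layout, and the 0/1 sender-receiver encoding are assumptions, not the authors' released implementation.

```python
import torch

def channel_attention_masks(utt_ids: torch.Tensor, spk_ids: torch.Tensor):
    """Build four boolean masks of shape [batch, seq, seq]; True marks the
    key positions a query token may attend to in that channel.

    utt_ids: [batch, seq] utterance index of each token
    spk_ids: [batch, seq] speaker role of each token (0 = sender, 1 = receiver;
             this encoding is an assumption for illustration)
    """
    # True where query token i and key token j belong to the same utterance.
    same_utt = utt_ids.unsqueeze(2) == utt_ids.unsqueeze(1)  # [batch, seq, seq]
    current = same_utt                    # channel 1: words in the current utterance
    others = ~same_utt                    # channel 2: words in other utterances
    # Speaker channels depend only on the key token's role, so one row pattern
    # is broadcast across all query positions.
    sender = (spk_ids == 0).unsqueeze(1).expand_as(same_utt)    # channel 3
    receiver = (spk_ids == 1).unsqueeze(1).expand_as(same_utt)  # channel 4
    return current, others, sender, receiver

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention restricted to one channel.

    Assumes each channel is non-empty for every query row; an all-False row
    would yield NaNs from the softmax.
    """
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

if __name__ == "__main__":
    # Toy dialog: utterance 0 spoken by the sender, utterance 1 by the receiver.
    utt = torch.tensor([[0, 0, 0, 1, 1]])
    spk = torch.tensor([[0, 0, 0, 1, 1]])
    cur, oth, snd, rcv = channel_attention_masks(utt, spk)
    print(cur.shape)  # torch.Size([1, 5, 5])
```

In the paper's framing, the four channel outputs would then be fused with the original sequential PrLM representation; the abstract does not specify the fusion operator (e.g., concatenation plus a linear projection, or a gate), so any concrete choice there would likewise be an assumption.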

Updated: 2024-08-26