Automated Quality Assessment of Cognitive Behavioral Therapy Sessions Through Highly Contextualized Language Representations
arXiv - CS - Computation and Language. Pub Date: 2021-02-23, DOI: arxiv-2102.11573
Nikolaos Flemotomos, Victor R. Martinez, Zhuohao Chen, Torrey A. Creed, David C. Atkins, Shrikanth Narayanan

During a psychotherapy session, the counselor typically adopts techniques that are codified along specific dimensions (e.g., 'displays warmth and confidence' or 'attempts to set up collaboration') to facilitate evaluation of the session. These constructs, traditionally scored by trained human raters, reflect the complex nature of psychotherapy and depend heavily on the context of the interaction. Recent advances in deep contextualized language models offer an avenue for accurate in-domain linguistic representations, which can lead to robust recognition and scoring of such psychotherapy-relevant behavioral constructs and support quality assurance and supervision. In this work, a BERT-based model is proposed for automatic behavioral scoring of a specific type of psychotherapy, Cognitive Behavioral Therapy (CBT), where prior work is limited to frequency-based language features and/or short text excerpts that do not capture the unique elements of a long, spontaneous conversational interaction. The model is trained in a multi-task manner to achieve higher interpretability. The BERT-based representations are further augmented with available therapy metadata, providing relevant non-linguistic context and leading to consistent performance improvements.
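To make the described setup concrete, below is a minimal sketch (not the authors' released code) of a BERT-based multi-task scorer in PyTorch with HuggingFace transformers: the pooled session representation is concatenated with non-linguistic metadata features and passed to one head per behavioral code. The class name CBTSessionScorer, the number of codes, and the metadata dimensionality are illustrative placeholders, and long-session handling is omitted.

```python
# Hedged sketch of a multi-task, metadata-augmented BERT scorer.
# Assumes PyTorch and HuggingFace transformers; all names and sizes are illustrative.
import torch
import torch.nn as nn
from transformers import BertModel


class CBTSessionScorer(nn.Module):
    def __init__(self, num_codes=11, metadata_dim=4, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # One regression head per behavioral code, trained jointly (multi-task).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden + metadata_dim, 1) for _ in range(num_codes)]
        )

    def forward(self, input_ids, attention_mask, metadata):
        # metadata: (batch, metadata_dim) encoded non-linguistic context,
        # e.g. session-level attributes (illustrative features only).
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.pooler_output                     # (batch, hidden)
        fused = torch.cat([pooled, metadata], dim=-1)  # augment with metadata
        # Concatenate per-code scores into a (batch, num_codes) tensor.
        return torch.cat([head(fused) for head in self.heads], dim=-1)
```

In this kind of setup, each head would typically be optimized against its own human-rated score with a shared encoder, which is one common way to realize the multi-task training the abstract refers to.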

Updated: 2021-02-24