Applying Attention-Based Models for Detecting Cognitive Processes and Mental Health Conditions
Cognitive Computation (IF 5.4) Pub Date: 2021-07-17, DOI: 10.1007/s12559-021-09901-1
Esaú Villatoro-Tello, Shantipriya Parida, Sajit Kumar, Petr Motlicek

According to the psychological literature, implicit motives allow for the characterization of behavior, subsequent success, and long-term development. In contrast to personality traits, implicit motives are deemed to be rather stable personality characteristics. Implicit motives are normally assessed through operant motives, unconscious intrinsic desires measured by the Operant Motive Test (OMT). The OMT asks participants to write free-form descriptions associated with a set of provided images and questions. In this work, we explore recent machine learning techniques and various text representation techniques to address the OMT classification task. We focus on advanced language representations (e.g., BERT, XLM, and DistilBERT) and deep supervised autoencoders. We perform an exhaustive analysis and compare their performance against fully connected neural networks and traditional support vector classifiers. Our comparative study highlights the importance of BERT, which outperforms the traditional machine learning techniques by a relative improvement of 7.9% over the competitive baseline proposed as part of the GermEval 2020 challenge. In addition, we analyze how the BERT attention mechanism is modified during fine-tuning. Our findings indicate that writing-style features become more important than content-based words when accurately identifying the different OMT categories. To our knowledge, this is the first study to evaluate the performance of different transformer-based architectures on the OMT task, and likewise the first to propose deep supervised autoencoders for OMT classification.
Some of these findings show strong connections to previously reported behavioral research on implicit psychometrics theory.
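To make the attention analysis concrete: the mechanism the study inspects inside BERT is scaled dot-product attention, where each token's output is a weighted mixture of value vectors and the weight matrix reveals which input positions (e.g., stylistic versus content words) the model attends to. The following is a minimal NumPy sketch of that formula, not the authors' code; the shapes and random inputs are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (n_queries, n_keys)
    weights = softmax(scores, axis=-1)     # each row sums to 1
    return weights @ V, weights

# Toy example: 4 query tokens attending over 6 key/value tokens, dim 8.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((6, 8))
V = rng.standard_normal((6, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```

Inspecting rows of `w` (which token attends to which) is the kind of signal the paper's analysis aggregates to argue that writing-style features gain importance after fine-tuning.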




Updated: 2021-07-18