In your face? Exploring multimodal response patterns involving facial responses to verbal and gestural stance-taking expressions
Journal of Pragmatics (IF 1.860). Pub Date: 2022-01-19. DOI: 10.1016/j.pragma.2022.01.002
Kurt Feyaerts 1, Christian Rominger 2, Helmut Karl Lackner 3, Geert Brône 1, Annelies Jehoul 1, Bert Oben 1, Ilona Papousek 2

In the present study, informed by insights from cognitive and interactional linguistics, we set out to explore how facial expressions systematically occur as responses in interactional sequences. More specifically, we use FACS analyses (Facial Action Coding System) to study which Action Units (AUs) on the part of the listener co-occur with multimodal stance-taking acts by speakers. Based on a data set of 24 dyadic interactions, we show that different types of stance acts (e.g. marking obviousness vs. using expressive amplifiers) reveal different patterns of facial responses. In addition, there is systematic variation in facial responses even within a single type of stance act. For example, listeners displayed significantly different AU patterns in reaction to verbal obviousness markers compared to non-verbal obviousness markers. Together, these observations highlight that, analogous to verbal responses in interactional sequences, facial motor responses also appear to be systematic and highly dependent on conversational context. As such, the AUs under scrutiny serve as intersubjectively aligned response turns completing a situationally designed stance-taking act. With this interdisciplinary study, combining linguistics with psychology and physiology, we aim for a better understanding of the multimodal complexity that constitutes the process of meaning making in spontaneous conversation.
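As a rough illustration of the kind of co-occurrence analysis described in the abstract, the sketch below tallies which listener AUs overlap in time with speaker stance acts of each type. The data layout, interval labels, and function names are illustrative assumptions for exposition only, not the authors' actual FACS coding or statistical workflow.

```python
# Minimal sketch (assumed data layout, not the authors' pipeline):
# count which listener Action Units (AUs) overlap in time with speaker
# stance-taking acts, grouped by stance-act type.
from __future__ import annotations
from collections import Counter, defaultdict
from typing import NamedTuple

class Interval(NamedTuple):
    label: str     # stance-act type (e.g. "verbal_obviousness") or an AU code (e.g. "AU12")
    start: float   # onset in seconds
    end: float     # offset in seconds

def overlaps(a: Interval, b: Interval) -> bool:
    """True if the two annotation intervals share any time span."""
    return a.start < b.end and b.start < a.end

def au_patterns_by_stance_type(stance_acts: list[Interval],
                               listener_aus: list[Interval]) -> dict[str, Counter]:
    """For each stance-act type, count the listener AUs that co-occur with it."""
    patterns: dict[str, Counter] = defaultdict(Counter)
    for act in stance_acts:
        for au in listener_aus:
            if overlaps(act, au):
                patterns[act.label][au.label] += 1
    return dict(patterns)

# Toy example with invented annotations:
stance_acts = [Interval("verbal_obviousness", 2.0, 3.1),
               Interval("gestural_obviousness", 7.4, 8.0)]
listener_aus = [Interval("AU12", 2.3, 3.0),   # lip corner puller
                Interval("AU4", 7.5, 8.2)]    # brow lowerer
print(au_patterns_by_stance_type(stance_acts, listener_aus))
# {'verbal_obviousness': Counter({'AU12': 1}), 'gestural_obviousness': Counter({'AU4': 1})}
```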



Updated: 2022-01-19