In your face? Exploring multimodal response patterns involving facial responses to verbal and gestural stance-taking expressions
Introduction
When interacting, interlocutors draw on multiple resources, both verbal and non-verbal, to jointly construe and sustain a process of meaning making, which in its socio-cognitive essence relies on the activation of intersubjective mechanisms such as audience design, common ground, intercorporeality and theory of mind.1 With this characterization, we situate the present research at the crossroads of interactional and cognitive linguistics, as both advocate an account of communicative exchange as situated social interaction, thus steering away from a logocentric analysis in which verbal structures qualify as self-contained elements of an inherent linguistic system deemed primary.
In our view, instead, communicative interaction is pervasively multimodal, involving both verbal and non-verbal resources as equally essential parts of an integrated process of meaning making. With this multimodal view of interaction, the present paper aligns with a substantial body of studies in different yet related paradigms such as conversation analysis, cognitive linguistics and interactional linguistics, which have embraced the embodied and material turn as a key aspect of their research objectives and analytical methods (Nevile, 2015; Nevile et al., 2015; Feyaerts et al., 2017a: 152–156). This means that, next to all kinds of verbal and paraverbal elements, every aspect of the participants' bodies, but also so-called materialities that form part of the interactional setting, are considered relevant resources that may contribute to the complex, essentially multimodal meaning of an interactional usage event (Day and Wagner, 2015).
In its most recent development, the multimodal analytical perspective on interaction has expanded into the still rather underexplored realm of 'multimodal gestalts' (Mondada, 2014), in which all sorts of sensorial and (inter)corporeal resources, along with situated materialities, are taken into account (see, among others, Oben and Brône, 2015; Meyer et al., 2017; Goodwin, 2017; Goodwin and Cekaite, 2018; Schoonjans, 2018; Mondada, 2019). Along with the increasing diversity of multimodal dimensions that constitute the semantic complexity of interactional usage events, the issue of sequentiality among these components calls for further analytical investigation. Of specific interest are questions about the order in which different resources feed the overall process of interactional meaning making: certain (non-)verbal actions tend to occur prior to others, thus projecting and to some extent constraining the appearance of a following action (Schegloff, 2007; Brône et al., 2017; Dale et al., 2014; Deppermann and Streeck, 2018).
In line with these recent explorations in the field of multimodality in interaction, the present study aims to identify systematic co-occurrences of facial expressions with other (non-)verbal resources across interactional turns. More specifically, we investigate the expression of four different stance-taking phenomena by a first participant in a dyadic conversation, along with the adjacent reaction by the second participant in the same interaction. Traditionally, stance-taking expressions have been treated as speaker-bound verbal utterances, such as evidentials, deictics and modals, that overtly express a speaker's inner state, such as an attitude, an emotion or an evaluation. Nowadays, most scholars advocate a fundamentally dialogic perspective on stance (Kärkkäinen, 2003, 2006; Haddington, 2007; Du Bois, 2007), which is accordingly seen as a joint socio-cognitive act that does not require the construal of explicit verbal expressions. Following the sociocultural approach of Du Bois (2007: 163), stance-taking acts reside in a stance-taker's evaluation of (some aspect of) a situation, thereby positioning the subject (usually the self) on a relevant scale and, in doing so, aligning with other interlocutors (Feyaerts, 2013: 212–215). In line with current research in multimodal interaction analysis, stance now covers a range of verbal (verbs, adjectives, negation, constructions, etc.) and non-verbal expressions (gesture, eye gaze, etc.), which, as Dancygier (2012) has observed, may cluster in multimodal 'stance-stacking' constructions (Dancygier et al., 2019).
As far as the modalities in the present study are concerned, the analytical focus is narrowed down to pairings of a verbal or gestural stance-taking utterance with the facial expression of the 'listening' participant. Our focus on facial expression, which does not count among the traditionally investigated co-speech actions, derives from the premise that any facial motor response qualifies as a response marker with respect to the preceding utterance of the 'other' participant. Unlike previous research, which focused on facial displays of particular emotions accompanying or foreshadowing responses to a conversation partner (Kaukomaa et al., 2013, 2014, 2015), the present study uses the original Facial Action Coding System (FACS) to measure the full range of visually distinguishable facial muscle movements, so-called Action Units (AUs), without any a priori theoretical assumptions about their possible meaning (Ekman and Friesen, 1976; Ekman et al., 2002). FACS provides an objective technique for describing facial movement in terms of anatomically based minimal action units, which are not to be confounded with inferential terms (Ekman and Friesen, 1976; Ekman et al., 2002). The coding system is widely used in psychological and biological research (for a recent overview see, e.g., Waller et al., 2020).
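To make the coding scheme concrete: FACS annotates each facial event as a combination of numbered Action Units, conventionally written as a '+'-separated string (e.g. '1+2+12'), optionally with an intensity grade from A (trace) to E (maximum). The following is a minimal, hypothetical sketch of how such codes could be parsed for quantitative analysis; the notation follows published FACS conventions, but the function and its interface are our own illustration, not part of the study's actual tooling.

```python
import re

# One summand of a FACS event code: an Action Unit number with an
# optional intensity grade A (trace) to E (maximum).
CODE = re.compile(r"^(\d+)([A-E]?)$")

def parse_facs(event: str):
    """Parse an event code such as '1A+2B+12C' into (AU, intensity) pairs.

    Intensity is None when the code gives no grade (e.g. '6+12').
    """
    units = []
    for part in event.split("+"):
        m = CODE.match(part.strip())
        if not m:
            raise ValueError(f"unrecognized AU code: {part!r}")
        units.append((int(m.group(1)), m.group(2) or None))
    return units
```

For instance, `parse_facs("1A+2B+12C")` yields the pairs (1, 'A'), (2, 'B') and (12, 'C'), i.e. inner and outer brow raise at low intensity combined with a lip corner pull.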
This article is structured in four main sections. Section 2 describes the experimental set-up of our empirical study, focusing on the four stance-taking phenomena under investigation (i.e., the verbal and gestural expression of obviousness, verbal amplifiers as expressions of intensification, and comical hypotheticals). Section 3 elaborates on the methodological aspects of the experiment, with special attention to the integration of facial motor responses. Section 4 presents the quantitative and qualitative results of our study, followed, in Section 5, by a broader discussion of the results.
Experimental design
In this contribution we report the findings of an experimental corpus study, in which we observed 48 male participants in 24 dyads, each consisting of two participants who had not met prior to the experiment (Lackner et al., 2019). Participants were invited to take part in a cartoon-rating experiment, which only served as a cover story: having the participants wait for the alleged experiment provided time for free conversation between the members of the dyad.
Participants
The study sample comprised 48 male university students from various disciplines, aged between 18 and 38 years (mean age 22.9 years). They were invited to participate in a cartoon-rating experiment. Participants were grouped into dyads; we ensured that the two members of each dyad had never met before. The study was performed in accordance with the Declaration of Helsinki and the American Psychological Association's Ethics Code, and the study protocol was registered by the local privacy commission. All …
Results
If we compare the results of our analysis of real-life conversational data with a baseline of surrogate data made up of fictive dyads, we observe that, in reaction to the utterance of the selected stance-taking phenomena, four of the more than forty coded AUs stand out with a significantly higher frequency. The relevant action units are AU12 (in Ekman's terms the 'lip corner puller'), AU6 (the 'cheek raiser'), AU1 (the 'inner brow raiser') and AU2 (the 'outer brow raiser'), as illustrated …
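The comparison against the surrogate baseline can be illustrated with a small sketch. Assuming each listener response has been coded as a set of Action Units, one can compare per-AU response frequencies in the real dyads against frequencies obtained from surrogate (re-paired) dyads. All data and names below are invented for illustration and do not reproduce the study's actual figures or analysis pipeline.

```python
from collections import Counter

def au_frequencies(events):
    """Relative frequency of each AU across coded response events."""
    counts = Counter(au for event in events for au in event)
    return {au: c / len(events) for au, c in counts.items()}

# Invented toy data: listener responses coded in real dyads versus a
# surrogate baseline built from fictive (re-paired) dyads.
real = [{"AU12", "AU6"}, {"AU12"}, {"AU1", "AU2"}, {"AU12", "AU6"}]
surrogate = [{"AU12"}, set(), {"AU4"}, set()]

real_f = au_frequencies(real)
base_f = au_frequencies(surrogate)

# AUs that are more frequent in real responses than in the baseline.
elevated = {au for au in real_f if real_f[au] > base_f.get(au, 0.0)}
```

In an actual analysis, the raw frequency difference would of course be subjected to a significance test (for instance a permutation test over re-pairings) rather than a simple greater-than comparison; the sketch only shows the shape of the baseline contrast.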
Discussion
In the present study, we have identified specific facial expressions by means of a detailed FACS analysis. The coded AUs occur in response to the production of different stance-taking phenomena with a significantly higher frequency than in a baseline of surrogate data. As such, this finding highlights a multimodal co-occurrence pattern involving particular verbal, gestural, and facial forms across adjacent turns within a communicative interaction. Yet, identifying the exact communicative …
Conclusion
In this interdisciplinary study we have extended both the traditional resource scope and the methods of a multimodal corpus study of dyadic interactions by including a detailed FACS analysis of facial expressions. In doing so, we aimed for a better understanding of the multimodal complexity that constitutes the process of meaning making in interaction. Indeed, despite its specific experimental design, this study tackles two of the central challenges raised by …
Declaration of competing interest
The authors have no conflicts of interest to declare.
Acknowledgments
We are grateful to Gabriele Gierlinger for help with the experiment and to Ellen Hofer for valuable help with the FACS-analysis.
References (69)
- On the subjectivity of intensifiers. Lang. Sci. (2007).
- Turn-opening smiles: facial expression constructing emotional transition in conversation. J. Pragmat. (2013).
- Foreshadowing a problem: turn-opening frowns in conversation. J. Pragmat. (2014).
- Phase synchronization of hemodynamic variables and respiration during mental challenge. Int. J. Psychophysiol. (2011).
- Contemporary issues in conversation analysis: embodiment and materiality, multimodality and multisensoriality in social interaction. J. Pragmat. (2019).
- Almost certainly and most definitely: degree modifiers and epistemic stance. J. Pragmat. (2008).
- Measuring the evolution of facial 'expression' using multi-species FACS. Neurosci. Biobehav. Rev. (2020).
- Less differentiated facial responses to naturalistic films of another person's emotional expressions in adolescents and adults with High-Functioning Autism Spectrum Disorder. Prog. Neuropsychopharmacol. Biol. Psychiatry (2019).
- Fuzziness – vagueness – generality – ambiguity. J. Pragmat. (1998).
- Intensitätspartikeln.
- Eye gaze and viewpoint in multimodal interaction management. Cognit. Linguist.
- Generic statements require little evidence for acceptance but have powerful implications. Cognit. Sci.
- Using Language.
- The self-organization of human interaction.
- Negation, stance, and subjectivity.
- Stance-stacking in Language and Multimodal Communication.
- Objects as tools for talk.
- Some uses of head tilts and shoulder shrugs during human interaction, and their relation to stancetaking.
- Specifying and animating facial signals for discourse in embodied conversational agents. Comput. Animat. Virtual Worlds.
- It's all about you in Dutch. J. Pragmat.
- Time in Embodied Interaction: Synchronicity and Sequentiality of Multimodal Resources.
- The stance triangle.
- Measuring facial movement. Environ. Psychol. Nonverbal Behav.
- The Duchenne smile: emotional expression and brain physiology II. J. Pers. Soc. Psychol.
- Facial Action Coding System. Manual and Investigator's Guide.
- Functions of three open-palm hand gestures. Multimodal Commun.
- A cognitive grammar of creativity.
- Multimodality in interaction.
- Alignment and empathy as viewpoint phenomena: the case of verbal amplifiers and comical hypotheticals. Cognit. Linguist.
- The development of intensification scales in noun-intensifying uses of adjectives: sources, paths and mechanisms of change. Engl. Lang. Linguist.
- Emotionale und physiologische Synchronisation zwischen Personen.
- Metzler Lexikon Sprache.
- Co-Operative Action.
- Embodied Family Choreography: Practices of Control, Care, and Mundane Creativity.
Cited by (3)
- Towards a corpus-based description of speech-gesture units of meaning: The case of the circular gesture. International Journal of Corpus Linguistics (2023).
- Multimodal stance-taking in interaction—A systematic literature review. Frontiers in Communication (2023).
- Politeness Conceptualization in Iranian Social Interactions: An Ethnographic Study. Brno Studies in English (2022).
Kurt Feyaerts. I am a full professor at the Department of Linguistics at the University of Leuven, where I teach courses on Aspects of German grammar, Structural features of spoken German, (Multimodal) Constructions of spoken German and Dutch, and Multimodality and Interaction (Digital Humanities program). Since 2007, I have also taught a course on 'Humor & Creativity in Language', in which students are acquainted with corpus research techniques for analyzing humorous spontaneous interactions.
After my MA degree in Germanic linguistics (1990), I studied at the Westfälische Wilhelmsuniversität in Münster (1991) and worked as a research and teaching assistant at the University of Antwerp. In 1997, I obtained my PhD at KU Leuven with a corpus study on the role of metonymy as a basic mechanism of conceptual creativity.
As a member of the research group 'Multimodality, Interaction & Discourse' (MIDI), my research is situated at the intersection of cognitive and interactional linguistics. Against that background I adopt a multimodal perspective on the analysis of face-to-face interaction, hence advocating the integration of visual, acoustic and biometric resources in the analysis of language in (inter)action. Accordingly, most of my current studies focus on (one of) the following topics:
• the systematic co-occurrence of gestures, facial expressions, gaze patterns (eye tracking) and verbal expressions in interaction(s);
• aspects of embodied meaning as apparent in different types of verbal and musical interaction;
• design of multimodal corpora;
• multimodal and linguistic aspects of spoken German and Dutch;
• multimodal and linguistic aspects of humorous and creative interaction(s).
In line with my multimodal focus on interaction, I also actively participate in interdisciplinary studies that explore research fields beyond the boundaries of linguistics and involve cooperation with psychologists and physiologists (University of Graz) and musicologists/musical performers (KU Leuven/LUCA School of Arts).