Short report
Sound context modulates perceived vocal emotion
Section snippets
Background
When animals are highly aroused there can be many effects on their bodies and behaviors. One important behavioral consequence of physiological arousal is the introduction of nonlinear features in the structure of vocalizations (Briefer, 2012; Fitch et al., 2002; Wilden et al., 1998). These acoustic correlates of arousal include deterministic chaos, subharmonics, and other non-tonal characteristics that can give vocalizations a rough, noisy sound quality. Nonlinear phenomena are effective in
Participants
Twenty-three young adults (12 women, mean age = 20.8 years, SD = 1.3; 11 men, mean age = 25.1 years, SD = 3.2) with self-reported normal hearing participated in the experiment. All participants provided written informed consent prior to the experiment and were paid 15€ for their participation. Participants were tested at the Centre Multidisciplinaire des Sciences Comportementales Sorbonne Université-Institut Européen d’Administration des Affaires (INSEAD), and the protocol of this experiment was approved by the
Results
To verify that stimuli with increasing levels of portrayed vocal arousal indeed contained more nonlinearities, we subjected the 18 vocal stimuli to acoustic analysis in Praat (Boersma, 2011), using three measures of voice quality commonly associated with auditory roughness and noise: jitter, shimmer, and the noise-to-harmonics ratio. All three measures scaled as predicted with portrayed vocal arousal (Fig. 1, top).
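Jitter and shimmer, in Praat's "local" variants, quantify cycle-to-cycle variability of the glottal period and of the peak amplitude, respectively. The following is a minimal, illustrative sketch of those two ratios on synthetic pulse data; it is not Praat's full implementation, which adds period-selection constraints (floor, ceiling, maximum period factor) not reproduced here:

```python
def local_jitter(periods):
    """Mean absolute difference between consecutive glottal periods,
    divided by the mean period (cf. Praat's 'jitter (local)')."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """The same ratio, applied to the peak amplitudes of
    consecutive periods (cf. Praat's 'shimmer (local)')."""
    diffs = [abs(b - a) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# A perfectly periodic voice has zero jitter; perturbing the period
# train, as physiological arousal does, raises it.
steady = [0.005] * 10  # constant 5 ms periods -> 200 Hz
perturbed = [0.005, 0.0055, 0.0048, 0.0057, 0.0046, 0.0054]
print(local_jitter(steady))      # → 0.0
print(local_jitter(perturbed) > local_jitter(steady))  # → True
```

With real recordings, the period and amplitude trains would first be extracted from a pitch-synchronous pulse analysis, as Praat does internally.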
Repeated-measures ANOVAs revealed a main effect of portrayed vocal arousal on judgments of
Discussion
As expected, voices with higher levels of portrayed anger were judged as more negative and more emotionally aroused than the same voices produced with less vocal arousal. Both the perceived valence and emotional arousal of voices with high vocal arousal were significantly affected by both musical and non-musical contexts. However, contrary to what would be predicted e.g. by the aesthetic enjoyment of nonlinear vocal sounds in rough musical textures by death metal fans (Thompson et al., 2018) or
Data accessibility
Matlab files and stimuli to run the experiment, R files and a Python notebook to analyze the results, as well as a .fxb preset file for the guitar distortion plugin, are available at the following URL:
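The snippet does not describe the distortion settings themselves, but as an illustration of the general technique such a guitar-distortion plugin applies, here is a minimal tanh soft-clipping waveshaper. This is an assumption about the effect class (waveshaping distortion), not the authors' actual plugin or preset:

```python
import math

def soft_clip(sample, drive=5.0):
    """tanh waveshaping: compresses peaks and adds odd harmonics,
    the basic mechanism behind guitar-style distortion/roughness."""
    return math.tanh(drive * sample) / math.tanh(drive)

# Apply to one cycle of a 200 Hz sine at 44.1 kHz.
sr, f0 = 44100, 200.0
clean = [math.sin(2 * math.pi * f0 * n / sr) for n in range(int(sr / f0))]
rough = [soft_clip(s) for s in clean]
```

Normalizing by `tanh(drive)` keeps the output within [-1, 1], so the roughness of the signal increases without clipping the output range.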
Funding
This study was supported by ERC Grant StG 335536 CREAM to JJA, and by a Fulbright Visiting Scholar Fellowship to ML.
CRediT authorship contribution statement
Marco Liuni: Conceptualization, Methodology, Software, Validation, Formal analysis, Writing - original draft, Writing - review & editing. Emmanuel Ponsot: Conceptualization, Methodology, Software, Validation, Formal analysis, Writing - original draft, Writing - review & editing. Gregory A. Bryant: Conceptualization, Visualization, Supervision, Writing - review & editing. J.J. Aucouturier: Conceptualization, Formal analysis, Funding acquisition, Writing - review & editing.
Acknowledgements
The authors thank Hugo Trad for help running the experiment. All data were collected at the Sorbonne-Université INSEAD Center for Behavioural Sciences.
References
- et al. (2015). Human screams occupy a privileged niche in the communication soundscape. Curr. Biol.
- et al. (2015). Neurobiology: sounding the alarm. Curr. Biol.
- et al. (1994). Measuring emotion: the self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry.
- et al. (2008). Multisensory interplay reveals crossmodal influences on ‘sensory-specific’ brain regions, neural responses, and judgments. Neuron.
- et al. (2002). Calls out of chaos: the adaptive significance of nonlinear phenomena in mammalian vocal production. Anim. Behav.
- et al. (2012). Processing of angry voices is modulated by visual load. Neuroimage.
- et al. (2004). Nonlinear phenomena in contemporary vocal music. J. Voice.
- et al. (2015). Neural mechanisms underlying contextual dependency of subjective values: converging evidence from monkeys and humans. J. Neurosci.
- (2008). Forensic analysis of the audibility of female screams. Audio Engineering Society Conference: 33rd International Conference: Audio Forensics-Theory and Practice. Audio Engineering Society.
- et al. (2012). The sound of arousal in music is context-dependent. Biol. Lett.
- The sound of arousal: the addition of novel non-linearities increases responsiveness in marmot alarm calls. Ethology.