Updating spatial hearing abilities through multisensory and motor cues
Introduction
Spatial hearing is the ability to discriminate the position of sounds. It plays a key role in our interactions with the physical and social environment and influences auditory scene analysis. As such, spatial hearing contributes to our capacity to discern signal from noise (Best et al., 2008; Kitterick et al., 2010; Middlebrooks, 2015a; Shinn-Cunningham et al., 2016; Yost, 2017) and to our ability to detect and respond to relevant sounds. In addition, it plays a fundamental role in overt and covert orienting to multisensory events (Heffner, 2004; Masterton et al., 1969; Pavani et al., 2017). Spatial hearing relies on systematic associations between the auditory input reaching the ears, the perceived position of the head, and the coordinates of external space (Wallach, 1940; Middlebrooks, 2015b; Middlebrooks and Green, 1991). Although spatial hearing is more plastic early in life (Knudsen, Knudsen, & Esterly, 1984), recent studies have shown that training can update the associations between auditory cues and space throughout life, both in animal models (Kacelnik et al., 2006; Keating and King, 2013; Popescu and Polley, 2010) and in humans (Carlile et al., 2014; Rabini et al., 2019; Keating and King, 2015; Strelnikov et al., 2011; Van Wanrooij et al., 2005).
In individuals with normal hearing (i.e., hearing thresholds of 20 dB or better in both ears), the updating of sound-space correspondences can be studied in contexts in which binaural or monaural auditory cues are temporarily altered (for a review see Mendonça, 2014). Binaural auditory cues are fundamental for localizing sounds in the horizontal dimension (Brughera et al., 2013; Rayleigh, 1907), whereas monaural cues primarily contribute to front-back disambiguation as well as sound localization in elevation and depth (Angell and Fite, 1901; Carlile et al., 2005; Colburn and Kulkarni, 2005). Plugging one ear alters binaural cues (Strelnikov et al., 2011; Van Wanrooij and Van Opstal, 2007), whereas changing the shape of the external ear using ear-molds affects spectral auditory cues (Hofman et al., 1998). Atypical listening conditions can also occur when individuals with normal hearing are exposed to synthesized virtual sounds, particularly when non-individualized head-related transfer functions (HRTFs) are used (e.g., see Parseihian and Katz, 2012). All of these auditory-cue alterations have been used to test the effects of training on the updating of sound-space correspondences.
The general finding is that updating of sound-space correspondences is indeed possible (e.g., Bauer et al., 1966; Irving and Moore, 2011; Kacelnik et al., 2006; Keating et al., 2016; Knudsen, Esterly, & Knudsen, 1984; Kumpik et al., 2010; Rabini et al., 2019; Strelnikov et al., 2011; Trapeau and Schönwiesner, 2015). Moreover, there is growing awareness that training of sound localization abilities can benefit from multisensory stimulation. For instance, Strelnikov et al. (2011) compared the efficacy of audio-visual training, auditory-only training, and training based on semantic visual feedback about performance ('correct' vs. 'incorrect' verbal messages, delivered centrally depending on whether the response was within 5 degrees of the correct speaker). The group exposed to audio-visual training (a flash delivered simultaneously with a sound on most trials) showed greater improvements than the group trained only with auditory stimulation or the group trained only with semantic visual feedback. These results highlight the importance of multisensory cues in improving spatial hearing. Similar results have been documented in ferrets fitted with bilateral cochlear implants (CIs). Isaiah et al. (2014) showed that the sound localization abilities of ferrets with CIs improved after multisensory training based on systematic associations between auditory and visual stimuli. This audio-visual training led to greater improvements than an auditory-only training or a control condition in which no training was performed. It is typically assumed that the more reliable visual information helps the brain to optimally calibrate the associations between auditory cues and spatial locations (Isaiah et al., 2014).
While the contribution of visual information to calibrating auditory cues has already been exploited for updating sound-space correspondences, much less is known about the contributions that can arise from other sensory systems, such as kinesthesia – the sense of limb position and limb movement in space (Proske and Gandevia, 2012). When moving a sound source with our own body, such as when we handle a ringing phone, kinesthesia contributes to defining the position of the sound in space. Can this sensory input, which is also linked to our intention to act in space, contribute to tuning our sound-space correspondences? To address this question, Parseihian and Katz (2012) conceived a multisensory-motor training in which participants were involved in an active game-like scenario. They placed participants in a virtual auditory environment, experienced through non-individualized HRTFs. When using non-individualized HRTFs, spatial hearing is initially approximate and inaccurate; yet normal-hearing listeners can refine sound-space correspondences and improve their sound localization with experience. The task consisted of actively searching for animal sounds hidden around the participant, scanning space with a tracked hand-held ball. This ball appeared to emit continuous bursts of alternating pink and white noise. The temporal interval between bursts decreased as a function of the angular distance between the hand-held ball and the target – hence, the sound acted like a sonar. When participants found the target, the noise bursts were replaced by a random animal call, rendered at the position of the hand-held ball. To measure to what extent participants benefited from this sensorimotor adaptation and learned to localize sounds with non-individualized HRTFs, a sound localization task (hand pointing to sounds) was administered before and after the game-like task.
Results showed that a single 12-min training session was ineffective, whereas three game-like training sessions improved vertical sound localization and reduced front/back confusions.
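The sonar-like feedback in this game can be illustrated with a minimal sketch. The parameters below (a 50-1000 ms interval range, a 5-degree capture radius, and a linear mapping over angular distance) are illustrative assumptions for demonstration, not the values used by Parseihian and Katz (2012):

```python
import math

def angular_distance(az1, el1, az2, el2):
    """Great-circle angle (degrees) between two directions given as
    azimuth/elevation pairs in degrees."""
    a1, e1, a2, e2 = map(math.radians, (az1, el1, az2, el2))
    # Dot product of the two unit direction vectors on the sphere.
    cos_angle = (math.sin(e1) * math.sin(e2)
                 + math.cos(e1) * math.cos(e2) * math.cos(a1 - a2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

def inter_burst_interval(dist_deg, min_ms=50, max_ms=1000, capture_deg=5):
    """Map the hand-target angular distance to the gap between noise
    bursts: the closer the hand, the faster the bursts (hypothetical
    parameter values). Within the capture radius the target counts as
    'found' and the bursts give way to the animal call."""
    if dist_deg <= capture_deg:
        return 0  # target found: switch from noise bursts to the animal call
    # Linear interpolation between min and max interval over 0..180 degrees.
    frac = min(dist_deg, 180) / 180
    return round(min_ms + frac * (max_ms - min_ms))
```

On each frame of the game loop, the angular distance between the tracked ball and the hidden target would be recomputed and fed to the interval mapping, so that burst rate continuously signals proximity.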
Although the study by Parseihian and Katz (2012) provides initial evidence that actively moving a sound source attached to one's body may result in the updating of sound-space correspondences, several questions remain open. First, it is unclear whether merely tracking a sound moving in space is sufficient for updating, even when the sound is not attached to one's body. If this were the case, a contribution of kinesthesia could not be unequivocally claimed. Second, it would be important to assess whether kinesthetic training can improve sound localization in altered listening conditions other than non-individualized HRTFs, such as monaural ear-plugging or ear-molding. Finally, it is important to understand to what extent training can influence auditory spatial tasks that entail allocentric coding (e.g., judging the relative position of two sounds) or more complex spatial metrics (e.g., comparing the relative distances between successive sounds).
In the present work, we examined whether normal-hearing individuals listening with one ear plugged and muffed can update sound-space correspondences by moving a sound source with their own hand. We addressed these open questions by studying the effect of active vs. passive training in two groups of normal-hearing adults who underwent simulated monaural listening. Participants in the active training group were instructed to move an audio-bracelet attached to their own wrist, which emitted amplitude-modulated white noise. To ensure that the sound was continuously attended and tracked in space, occasional pure tones were interspersed in the white-noise stimulus; participants were instructed to detect these tones and report their spatial position. In the passive training group, participants performed an identical perceptual task, continuously paying attention to a moving sound. Yet the audio-bracelet was now attached to the experimenter's wrist, eliminating all kinesthetic cues for this trained group. A third group of participants, who did not perform any training, was also included in the study.
Critically, the audio-bracelet emitted a continuous sound only when it was moved (Cappagli et al., 2019; Finocchietti et al., 2015; Gori et al., 2016; Porquis et al., 2017). Thus, participants in the active group were fully aware of the correspondence between the kinesthetic cues related to their own arm movements and the sound movements. Previous work has effectively exploited this kinesthetic-sound association to help blind individuals build better representations of the position of their own body acting in space (Cappagli et al., 2017). Here, we used it as a tool to link spatial coding of sounds to one's kinesthetic sensations in the active training group. Following Parseihian and Katz (2012), the training was repeated for three sessions.
To assess the impact of training on spatial hearing, we tested participants before and after training on three auditory spatial tasks (see Fig. 1): (1) localization of single sounds (i.e., identify the direction of single sounds); (2) minimum audible angle task (i.e., distinguish the relative position of two sounds presented in succession; henceforth MAA task); (3) space bisection task (i.e., given three consecutive sounds at different spatial positions, compare the distance between the first and second sounds with the distance between the second and third). In addition, a non-spatial task was added to the battery to assess any unspecific test-retest effect. This was a time bisection task (i.e., given three consecutive sounds at identical spatial positions, compare the temporal interval between the first and second sounds with the interval between the second and third).
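The decision rules of the two bisection tasks can be sketched as follows. This is a hedged illustration only: the response labels and the use of azimuth positions and onset times are assumptions for demonstration, not the response format of the actual experiment:

```python
def space_bisection_answer(p1, p2, p3):
    """Correct response in a space bisection trial: is the second sound
    spatially closer to the first or to the third? Positions are
    azimuths in degrees (hypothetical coding)."""
    return "first" if abs(p2 - p1) < abs(p3 - p2) else "third"

def time_bisection_answer(t1, t2, t3):
    """Correct response in a time bisection trial: is the second sound's
    onset temporally closer to the first or to the third? Onsets in
    seconds (hypothetical coding)."""
    return "first" if (t2 - t1) < (t3 - t2) else "third"
```

Note the structural parallel: both tasks require comparing two intervals bounded by the middle stimulus, but only the space bisection requires building a spatial metric of sound positions, which is why the time bisection serves as a non-spatial control.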
We expected that both trainings (active and passive) could improve performance in the auditory tasks performed monaurally, compared to the untrained group. However, if kinesthetic cues play an essential role in updating sound-space correspondences, active training should induce greater improvements than passive training. Finally, we expected that training improvements could generalize across all the spatial tasks we proposed (i.e., single sound localization, MAA, and space bisection), but not to the non-spatial temporal task (time bisection).
Section snippets
Experiment 1
Before addressing our main research question on the effect of active vs. passive training, we validated our experimental protocol by testing the impact of monaural plugging on our battery of auditory tasks. Although several previous works have assessed the consequences of monaural listening for single sound localization tasks (Angell and Fite, 1901; Flannery and Butler, 1981; Musicant and Butler, 1980; Slattery and Middlebrooks, 1994; Van Wanrooij and Van Opstal, 2007) and minimum audible angle
Experiment 2
Having examined the effect of monaural plugging on single sound localization and on auditory tasks that require the construction of spatial metrics (minimum audible angle, space bisection), we turned to our main experimental question – namely, the possibility that actively moving a sound source with one's hand could promote updating of sound-space correspondences, more than just monitoring its changing position through vision and hearing. To recap, we compared an active group, whose
Conclusions
In conclusion, our findings extend and support previous works that documented adaptation to unilateral hearing alterations, even in the mature auditory system (Kumpik and King, 2019; Mendonça, 2014). Understanding which factors can promote fast adaptation to new auditory cues has theoretical as well as translational implications. The results of the present study show that the benefits of kinesthetic cues to sound position during training may be additive with respect to the role of auditory and
Supplementary material
All data for the study can be retrieved from osf.io/zwj2n.
CRediT authorship contribution statement
Chiara Valzolgher: Conceptualization, Methodology, Writing - review & editing, Writing - original draft, Investigation, Formal analysis, Data curation. Claudio Campus: Conceptualization, Methodology, Writing - review & editing, Formal analysis, Data curation. Giuseppe Rabini: Conceptualization, Methodology, Writing - review & editing. Monica Gori: Conceptualization, Methodology, Writing - review & editing. Francesco Pavani: Conceptualization, Methodology, Writing - review & editing, Writing - original
Acknowledgements
We are grateful to three anonymous reviewers and to Nick Holmes for their very constructive comments on a previous version of this manuscript. C.V. was supported by a grant of the Università Italo-Francese (UIF)/Université Franco-Italienne (UFI) and the Zegna Founder's Scholarship. F.P. was supported by a grant of the Agence Nationale de la Recherche (ANR-16-CE17-0016, VIRTUALHEARING3D, France), by a prize of the Foundation Medisite (France), by the Neurodis Foundation (France) and by a grant
References (71)
- The ventriloquist effect results from near-optimal bimodal integration. Current Biology (2004)
- Spatial cues influence time estimations in deaf individuals. iScience (2019)
- Spectral information in sound localization. International Review of Neurobiology (2005)
- Temporal cues influence space estimations in visually impaired individuals. iScience (2018)
- Devices for visually impaired people: High technological devices with low user acceptance and no adaptability for children. Neuroscience & Biobehavioral Reviews (2016)
- Transfer effects on sound localization performances from playing a virtual three-dimensional auditory game. Applied Acoustics (2007)
- Training sound localization in normal hearing listeners with and without a unilateral ear plug. Hearing Research (2011)
- Spatially localized distortions of event time. Current Biology (2006)
- Sound localization in a changing world. Current Opinion in Neurobiology (2015)
- A review of the effects of unilateral hearing loss on spatial hearing. Hearing Research (2019)
- Spatial and non-spatial multisensory cueing in unilateral cochlear implant users. Hearing Research
- Monaural deprivation disrupts development of binaural selectivity in auditory midbrain and cortex. Neuron
- Monaural sound localization: Acute versus chronic unilateral impairment. Hearing Research
- Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. NeuroImage
- The neural basis of the egocentric and allocentric spatial frame of reference. Brain Research
- From the Psychological Laboratory of the University of Chicago: The monaural localization of sound. Psychological Review
- Noise localization after unilateral attenuation. The Journal of the Acoustical Society of America
- Effects of sensorineural hearing loss on visually guided attention in a multitalker environment. JARO - Journal of the Association for Research in Otolaryngology
- Human interaural time difference thresholds for sine tones: The high-frequency limit. The Journal of the Acoustical Society of America
- Multisensory rehabilitation training improves spatial perception in totally but not partially visually deprived children. Frontiers in Integrative Neuroscience
- Audio motor training improves mobility and spatial cognition in visually impaired children. Scientific Reports
- Accommodating to new ears: The effects of sensory and sensory-motor feedback. The Journal of the Acoustical Society of America
- Models of sound localization
- Evaluation of the audio bracelet for blind interaction for improving mobility and spatial cognition in early blind children - A pilot study
- Unilateral hearing loss: Understanding speech recognition and localization variability - implications for cochlear implant candidacy. Ear and Hearing
- Spectral cues provided by the pinna for monaural localization in the horizontal plane. Perception & Psychophysics
- Development of visuo-auditory integration in space and time. Frontiers in Integrative Neuroscience
- Sound localization in subjects with impaired hearing: Spatial-discrimination and interaural-discrimination tests. Acta Oto-Laryngologica
- Primate hearing from a mammalian perspective. The Anatomical Record Part A: Discoveries in Molecular, Cellular, and Evolutionary Biology
- Visual factors in sound localization in mammals. Journal of Comparative Neurology
- Relearning sound localization with new ears. Nature Neuroscience
- Multisensory training improves auditory spatial processing following bilateral cochlear implantation. Journal of Neuroscience
- Training-induced plasticity of auditory localization in adult mammals. PLoS Biology
- Developmental plasticity of spatial hearing following asymmetric hearing loss: Context-dependent cue integration and its clinical implications. Frontiers in Systems Neuroscience
- Behavioral training promotes multiple adaptive processes following acute hearing loss. eLife