Cognition

Volume 204, November 2020, 104409

Updating spatial hearing abilities through multisensory and motor cues

https://doi.org/10.1016/j.cognition.2020.104409

Highlights

  • Simulated monaural listening (one ear plugged and muffed) reduces performance in localization of single sounds.

  • Monaural plugging also impacts more complex auditory tasks, such as minimum audible angle and space bisection.

  • Spatial hearing difficulties decrease after three consecutive days of multisensory training.

  • Active training updates spatial hearing more than passive training.

  • Partial updating of sound-space correspondences does not extend to all aspects of spatial hearing.

Abstract

Spatial hearing relies on a series of mechanisms for associating auditory cues with positions in space. When auditory cues are altered, humans, as well as other animals, can update the way they exploit these cues and partially compensate for their spatial hearing difficulties. In two experiments, we simulated monaural listening in hearing adults by temporarily plugging and muffing one ear, to assess the effects of active and passive training conditions. During active training, participants moved an audio-bracelet attached to their wrist, while continuously attending to the position of the sounds it produced. During passive training, participants received identical acoustic stimulation and performed exactly the same task, but the audio-bracelet was moved by the experimenter. Before and after training, we measured adaptation to monaural listening in three auditory spatial tasks (single sound localization, minimum audible angle (MAA), and space bisection) and one non-spatial control task (time bisection). We also tested an untrained group twice; this group completed the same auditory tasks but received no training. Results showed that participants significantly improved in single sound localization across three consecutive days, more so in the active than in the passive training group. This reveals that the benefits of kinesthetic cues are additive with respect to those of attending to, and/or seeing, the positions of sounds when updating spatial hearing. The observed adaptation did not generalize to the other auditory spatial tasks (space bisection and MAA), suggesting that partial updating of sound-space correspondences does not extend to all aspects of spatial hearing.

Introduction

Spatial hearing is the ability to discriminate the position of sounds. It plays a key role in our interactions with the physical and social environment and influences auditory scene analysis. As such, spatial hearing contributes to our capacity to discern signal from noise (Best et al., 2008; Kitterick et al., 2010; Middlebrooks, 2015a; Shinn-Cunningham et al., 2016; Yost, 2017) and to the ability to detect and respond to relevant sounds. In addition, it plays a fundamental role in overt and covert orienting to multisensory events (Heffner, 2004; Masterton et al., 1969; Pavani et al., 2017). Spatial hearing relies on systematic associations between the auditory input reaching the ears, the perceived position of the head, and the coordinates in external space (Wallach, 1940; Middlebrooks, 2015b; Middlebrooks and Green, 1991). Although spatial hearing is more plastic early in life (Knudsen, Knudsen, Esterly, 1984), recent studies have shown that training can update the associations between auditory cues and space throughout life, both in animal models (Kacelnik et al., 2006; Keating and King, 2013; Popescu and Polley, 2010) and in humans (Carlile et al., 2014; Rabini et al., 2019; Keating and King, 2015; Strelnikov et al., 2011; Van Wanrooij et al., 2005).

In individuals with normal hearing (i.e., hearing thresholds of 20 dB or better in both ears), updating of sound-space correspondences can be studied in contexts in which binaural or monaural auditory cues are temporarily altered (for a review, see Mendonça, 2014). Binaural auditory cues are fundamental for localizing sounds in the horizontal dimension (Brughera et al., 2013; Rayleigh, 1907), whereas monaural cues primarily contribute to front-back disambiguation as well as sound localization in elevation and depth (Angell and Fite, 1901; Carlile et al., 2005; Colburn and Kulkarni, 2005). Plugging one ear alters binaural cues (Strelnikov et al., 2011; Van Wanrooij and Van Opstal, 2007), whereas changing the shape of the external ear using ear-molds affects spectral auditory cues (Hofman et al., 1998). Atypical listening conditions can also occur when individuals with normal hearing are exposed to synthesized virtual sounds, particularly when non-individualized head-related transfer functions (HRTFs) are used (e.g., see Parseihian and Katz, 2012). All these alterations of auditory cues have been adopted to test the effects of training on the updating of sound-space correspondences.

The general finding is that updating of sound-space correspondences is indeed possible (e.g., Bauer et al., 1966; Irving and Moore, 2011; Kacelnik et al., 2006; Keating et al., 2016; Knudsen, Esterly, Knudsen, 1984; Kumpik et al., 2010; Rabini et al., 2019; Strelnikov et al., 2011; Trapeau and Schönwiesner, 2015). Moreover, there is growing awareness that training of sound localization abilities can benefit from multisensory stimulation. For instance, Strelnikov et al. (2011) compared the efficacy of an audio-visual training, an auditory-only training, and a training based on semantic visual feedback about performance (‘correct’ vs. ‘incorrect’ verbal messages, delivered centrally depending on whether the response was within 5 degrees of the correct speaker). The group exposed to audio-visual training (a flash delivered simultaneously with a sound on most trials) showed greater improvements than the group trained only with auditory stimulation or the group trained only with semantic visual feedback. These results highlight the importance of multisensory cues in improving spatial hearing. Similar results have been documented in ferrets fitted with bilateral cochlear implants (CIs). Isaiah et al. (2014) showed that the sound localization abilities of ferrets with CIs improved after multisensory training based on systematic associations between auditory and visual stimuli. This audio-visual training led to greater improvements than an auditory-only training or a control condition in which no training was performed. It is typically assumed that the more reliable visual information helps the brain to optimally calibrate the associations between auditory cues and spatial locations (Isaiah et al., 2014).

While the contribution of visual information in calibrating auditory cues has already been exploited for updating sound-space correspondences, much less is known about the contributions that can arise from other sensory systems, such as kinesthesia – the sense of limb position and limb movement in space (Proske and Gandevia, 2012). When moving a sound source with our own body, such as when we handle a ringing phone, kinesthesia contributes to defining the position of the sound in space. Can this sensory input, which is also linked to our intention to act in space, contribute to tuning our sound-space correspondences? To address this question, Parseihian and Katz (2012) conceived a multisensory-motor training in which participants were involved in an active game-like scenario. They placed participants in a virtual auditory environment experienced through non-individualized HRTFs. With non-individualized HRTFs, spatial hearing is initially approximate and inaccurate, yet normal-hearing listeners can refine sound-space correspondences and improve their sound localization with experience. The task consisted of actively searching for animal sounds hidden around the participant by scanning space with a tracked hand-held ball. This ball appeared to emit continuous bursts of alternating pink and white noise. The temporal interval between bursts decreased as a function of the angular distance between the hand-held ball and the target – hence, the sound acted like a sonar. When participants found the target, the noise bursts were replaced by a random animal call, rendered at the position of the hand-held ball. To measure to what extent participants benefited from this sensorimotor adaptation and learned to localize sounds with non-individualized HRTFs, a sound localization task (hand pointing to sounds) was administered before and after the game-like task. Results showed that a single 12-minute training session was ineffective, but three game-like training sessions improved vertical sound localization and reduced front/back confusions.
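To make the sonar-like feedback used by Parseihian and Katz (2012) concrete, the Python sketch below maps the angular distance between the hand-held ball and the hidden target onto the silent gap between noise bursts, so that the bursts speed up as the hand approaches the target. The specific mapping, interval bounds, and function name are our own illustrative assumptions, not the settings reported in that study.

    def inter_burst_interval(angular_distance_deg,
                             min_interval_s=0.05,
                             max_interval_s=1.0,
                             max_angle_deg=180.0):
        """Map the angular distance (degrees) between the hand-held ball and
        the hidden target onto the silent gap between noise bursts: smaller
        distances give shorter gaps, so bursts accelerate near the target."""
        proportion = min(angular_distance_deg, max_angle_deg) / max_angle_deg
        return min_interval_s + proportion * (max_interval_s - min_interval_s)

    # Far from the target the gap is close to 1 s; near the target it shrinks
    # towards 50 ms, giving the sonar-like impression described in the text.
    print(inter_burst_interval(170.0))  # ~0.95 s
    print(inter_burst_interval(5.0))    # ~0.08 s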

Although the study by Parseihian and Katz (2012) provides initial evidence that actively moving a sound source attached to one's body may result in the updating of sound-space correspondences, several questions remain open. First, it is unclear whether merely tracking a sound moving in space is sufficient for updating, even when the sound is not attached to one's body. If this were the case, a contribution of kinesthesia could not be unequivocally claimed. Second, it would be important to assess whether kinesthetic training can improve spatial hearing in altered listening conditions other than non-individualized HRTFs, such as monaural ear-plugging or ear-molding. Finally, it is important to understand to what extent training can influence auditory spatial tasks that entail allocentric coding (e.g., judging the relative position of two sounds) or more complex spatial metrics (e.g., comparing the relative distances between successive sounds).

In the present work, we examined whether normal-hearing individuals listening with one ear plugged and muffed can update sound-space correspondences by moving a sound source with their own hand. We addressed these open questions by studying the effect of active vs. passive training in two groups of normal-hearing adults who underwent simulated monaural listening. Participants in the active training group were instructed to move an audio-bracelet attached to their own wrist, which emitted amplitude-modulated white noise. To ensure that the sound was continuously attended and tracked in space, occasional pure tones were interspersed in the white noise, and participants were instructed to detect these tones and report their spatial position. In the passive training group, participants performed an identical perceptual task, continuously paying attention to a moving sound; however, the audio-bracelet was attached to the experimenter's wrist, eliminating all kinesthetic cues for this group. A third group of participants, who did not perform any training, was also included in the study.
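As an illustration of this kind of training stimulus, the Python sketch below generates amplitude-modulated white noise with an optional brief pure-tone probe to be detected and localized. All parameter values (sampling rate, modulation rate, probe frequency and duration) and the function name are illustrative assumptions, not the settings used in the study.

    import numpy as np

    def training_stimulus(duration_s=2.0, fs=44100, am_rate_hz=4.0,
                          probe=False, probe_freq_hz=1000.0, probe_dur_s=0.2):
        """Amplitude-modulated white noise, optionally containing a brief
        pure-tone probe that the listener must detect and localize.
        Parameter values are illustrative only."""
        t = np.arange(int(duration_s * fs)) / fs
        noise = np.random.randn(t.size)
        envelope = 0.5 * (1.0 + np.sin(2.0 * np.pi * am_rate_hz * t))  # slow AM
        signal = noise * envelope
        if probe:
            n_probe = int(probe_dur_s * fs)
            start = np.random.randint(0, t.size - n_probe)
            idx = np.arange(start, start + n_probe)
            signal[idx] += np.sin(2.0 * np.pi * probe_freq_hz * t[idx])
        return signal / np.max(np.abs(signal))  # normalize to [-1, 1]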

Critically, the audio-bracelet emitted a continuous sound only when it was moved (Cappagli et al., 2019; Finocchietti et al., 2015; Gori et al., 2016; Porquis et al., 2017). Thus, participants in the active group were fully aware of the correspondence between kinesthetic cues related to their own arm movements and sound movements. Previous works effectively exploited this kinesthetic-sound association to help blind individuals create better representations of the position of their own body acting in space (Cappagli et al., 2017). Here, we used it as a tool to link spatial coding of sounds to one's kinesthetic sensations in the active training group. Following Parseihian and Katz (2012), the training was repeated for three sessions.

To assess the impact of training on spatial hearing, we tested participants before and after training on three auditory spatial tasks (see Fig. 1): (1) localization of single sounds (i.e., identify the direction of a single sound); (2) minimum audible angle task (i.e., distinguish the relative positions of two sounds presented in succession; hereafter, MAA task); (3) space bisection task (i.e., given three consecutive sounds at different spatial positions, compare the distance between the first and the second sound with the distance between the second and the third). In addition, a non-spatial task was added to the battery to assess any unspecific test-retest effect. This was a time bisection task (i.e., given three consecutive sounds at the same spatial position, compare the temporal interval between the first and the second sound with the interval between the second and the third).
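To make the two bisection judgments concrete, here is a minimal Python sketch of the comparison each task requires. The function names and example values are hypothetical and only serve to illustrate the logic of the tasks.

    def space_bisection_judgment(az1_deg, az2_deg, az3_deg):
        """Given the azimuths of three consecutive sounds, report which spatial
        interval is larger: first-to-second or second-to-third."""
        first = abs(az2_deg - az1_deg)
        second = abs(az3_deg - az2_deg)
        return "first" if first > second else "second"

    def time_bisection_judgment(t1_s, t2_s, t3_s):
        """Same comparison in time for three sounds at one location: compare
        the first-to-second interval with the second-to-third interval."""
        return "first" if (t2_s - t1_s) > (t3_s - t2_s) else "second"

    # Example: sounds at -20, +5 and +15 degrees of azimuth; the first spatial
    # interval (25 deg) exceeds the second (10 deg).
    print(space_bisection_judgment(-20, 5, 15))  # "first"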

We expected that both training conditions (active and passive) would improve performance in the auditory tasks performed monaurally, compared to the untrained group. However, if kinesthetic cues play an essential role in updating sound-space correspondences, active training should induce greater improvements than passive training. Finally, we expected training improvements to generalize across all the spatial tasks we proposed (i.e., single sound localization, MAA, and space bisection), but not to the non-spatial temporal task (time bisection).

Section snippets

Experiment 1

Before addressing our main research question on the effect of active vs. passive training, we validated our experimental protocol by testing the impact of monaural plugging on our battery of auditory tasks. Although several previous works have assessed the consequences of monaural listening for single sound localization tasks (Angell and Fite, 1901; Flannery and Butler, 1981; Musicant and Butler, 1980; Slattery and Middlebrooks, 1994; Van Wanrooij and Van Opstal, 2007) and minimum audible angle

Experiment 2

Having examined the effect of monaural plugging on single sound localization and on auditory tasks that require the construction of spatial metrics (minimum audible angle, space bisection), we turned to our main experimental question – namely, the possibility that actively moving a sound source with one's hand could promote updating of sound-space correspondences, more than just monitoring its changing position through vision and hearing. To recap, we compared an active group, whose

Conclusions

In conclusion, our findings extend and support previous works that documented adaptation to unilateral hearing alterations, even in the mature auditory system (Kumpik and King, 2019; Mendonça, 2014). Understanding which factors can promote fast adaptation to new auditory cues has theoretical as well as translational implications. The results of the present study show that the benefits of kinesthetic cues to sound position during training may be additive with respect to the role of auditory and

Supplementary material

All data for the study can be retrieved from osf.io/zwj2n.

CRediT authorship contribution statement

Chiara Valzolgher: Conceptualization, Methodology, Writing - review & editing, Writing - original draft, Investigation, Formal analysis, Data curation. Claudio Campus: Conceptualization, Methodology, Writing - review & editing, Formal analysis, Data curation. Giuseppe Rabini: Conceptualization, Methodology, Writing - review & editing. Monica Gori: Conceptualization, Methodology, Writing - review & editing. Francesco Pavani: Conceptualization, Methodology, Writing - review & editing, Writing - original draft

Acknowledgements

We are grateful to three anonymous reviewers and to Nick Holmes for their very constructive comments on a previous version of this manuscript. C.V. was supported by a grant of the Università Italo-Francese (UIF)/Université Franco-Italienne (UFI) and the Zegna Founder's Scholarship. F.P. was supported by a grant of the Agence Nationale de la Recherche (ANR-16-CE17-0016, VIRTUALHEARING3D, France), by a prize of the Foundation Medisite (France), by the Neurodis Foundation (France) and by a grant

References (71)

  • F. Pavani et al. Spatial and non-spatial multisensory cueing in unilateral cochlear implant users. Hearing Research (2017)
  • M.V. Popescu et al. Monaural deprivation disrupts development of binaural selectivity in auditory midbrain and cortex. Neuron (2010)
  • W.H. Slattery et al. Monaural sound localization: Acute versus chronic unilateral impairment. Hearing Research (1994)
  • R. Trapeau et al. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. NeuroImage (2015)
  • T. Zaehle et al. The neural basis of the egocentric and allocentric spatial frame of reference. Brain Research (2007)
  • J.R. Angell et al. From the Psychological Laboratory of the University of Chicago: The monaural localization of sound. Psychological Review (1901)
  • R.W. Bauer et al. Noise localization after unilateral attenuation. The Journal of the Acoustical Society of America (1966)
  • V. Best et al. Effects of sensorineural hearing loss on visually guided attention in a multitalker environment. JARO - Journal of the Association for Research in Otolaryngology (2008)
  • A. Brughera et al. Human interaural time difference thresholds for sine tones: The high-frequency limit. The Journal of the Acoustical Society of America (2013)
  • G. Cappagli et al. Multisensory rehabilitation training improves spatial perception in totally but not partially visually deprived children. Frontiers in Integrative Neuroscience (2017)
  • G. Cappagli et al. Audio motor training improves mobility and spatial cognition in visually impaired children. Scientific Reports (2019)
  • S. Carlile et al. Accommodating to new ears: The effects of sensory and sensory-motor feedback. The Journal of the Acoustical Society of America (2014)
  • H.S. Colburn et al. Models of sound localization
  • S. Finocchietti et al. Evaluation of the audio bracelet for blind interaction for improving mobility and spatial cognition in early blind children - A pilot study
  • J.B. Firszt et al. Unilateral hearing loss: Understanding speech recognition and localization variability - implications for cochlear implant candidacy. Ear and Hearing (2017)
  • R. Flannery et al. Spectral cues provided by the pinna for monaural localization in the horizontal plane. Perception & Psychophysics (1981)
  • M. Gori et al. Development of visuo-auditory integration in space and time. Frontiers in Integrative Neuroscience (2012)
  • R. Häusler et al. Sound localization in subjects with impaired hearing: Spatial-discrimination and interaural-discrimination tests. Acta Oto-Laryngologica (1983)
  • R.S. Heffner. Primate hearing from a mammalian perspective. The Anatomical Record Part A: Discoveries in Molecular, Cellular, and Evolutionary Biology (2004)
  • R.S. Heffner et al. Visual factors in sound localization in mammals. Journal of Comparative Neurology (1992)
  • P.M. Hofman et al. Relearning sound localization with new ears. Nature Neuroscience (1998)
  • A. Isaiah et al. Multisensory training improves auditory spatial processing following bilateral cochlear implantation. Journal of Neuroscience (2014)
  • O. Kacelnik et al. Training-induced plasticity of auditory localization in adult mammals. PLoS Biology (2006)
  • P. Keating et al. Developmental plasticity of spatial hearing following asymmetric hearing loss: Context-dependent cue integration and its clinical implications. Frontiers in Systems Neuroscience (2013)
  • P. Keating et al. Behavioral training promotes multiple adaptive processes following acute hearing loss. eLife (2016)