Cortex

Volume 133, December 2020, Pages 309-327

Research Report
Multimodal comprehension in left hemisphere stroke patients

https://doi.org/10.1016/j.cortex.2020.09.025

Abstract

Hand gestures, imagistically related to the content of speech, are ubiquitous in face-to-face communication. Here we used lesion-symptom mapping to investigate how people with aphasia (PWA) process speech accompanied by gestures. Twenty-nine PWA and 15 matched controls were shown a picture of an object/action and then a video clip of a speaker producing speech and/or gestures in one of the following combinations: speech-only, gesture-only, congruent speech-gesture, and incongruent speech-gesture. Participants' task was to indicate, in different blocks, whether the picture and the word matched (speech task), or whether the picture and the gesture matched (gesture task). Multivariate lesion analysis with Support Vector Regression Lesion-Symptom Mapping (SVR-LSM) showed that the benefit from congruent speech-gesture pairings was associated with 1) lesioned voxels in anterior fronto-temporal regions including inferior frontal gyrus (IFG), and sparing of posterior temporal cortex and lateral temporal-occipital regions (pTC/LTO), for the speech task, and 2) conversely, lesions to pTC/LTO and sparing of anterior regions for the gesture task. The two tasks did not share overlapping voxels. Costs from incongruent speech-gesture pairings were associated with lesioned voxels in these same anterior (for the speech task) and posterior (for the gesture task) regions, but crucially, also with shared voxels in superior temporal gyrus (STG) and middle temporal gyrus (MTG), including the anterior temporal lobe. These results suggest that IFG and pTC/LTO contribute to extracting semantic information from speech and gesture, respectively; however, they are not causally involved in integrating information from the two modalities. In contrast, regions in anterior STG/MTG are associated with performance in both tasks and may thus be critical to speech-gesture integration. These conclusions are further supported by associations between performance in the experimental tasks and performance in tests assessing lexical-semantic processing and gesture recognition.
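
The multivariate lesion analysis mentioned above (SVR-LSM) regresses a continuous behavioural score onto binary voxel-wise lesion status with a support vector regression model and then assesses the resulting voxel weights against a permutation-derived null distribution. The sketch below illustrates that general logic with scikit-learn on toy data; the variable names, toy dimensions, kernel settings, and permutation scheme are illustrative assumptions and do not reproduce the exact SVR-LSM pipeline (e.g., lesion-volume correction and cluster-level thresholding) used in the study.

```python
# Minimal sketch of SVR-based lesion-symptom mapping (toy data, illustrative only).
# X: patients-by-voxels binary lesion matrix; y: behavioural score per patient
# (e.g., a congruency benefit). Published SVR-LSM pipelines additionally regress
# out lesion volume and apply cluster-level correction, omitted here.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_patients, n_voxels = 29, 500
X = rng.integers(0, 2, size=(n_patients, n_voxels)).astype(float)  # toy lesion maps
y = rng.normal(size=n_patients)                                     # toy scores

def voxel_weights(X, y):
    """Fit an SVR and back-project its dual coefficients onto voxel space."""
    model = SVR(kernel="rbf", C=30.0, gamma="scale", epsilon=0.1).fit(X, y)
    # Common SVR-LSM heuristic: approximate a voxel-wise weight map as the
    # dual coefficients projected onto the support vectors' lesion patterns.
    return model.dual_coef_.ravel() @ model.support_vectors_

observed = voxel_weights(X, y)

# Permutation test: shuffle the behavioural scores to build a null distribution
# of voxel weights, then compute uncorrected voxel-wise p-values.
n_perm = 200
null = np.stack([voxel_weights(X, rng.permutation(y)) for _ in range(n_perm)])
p_vox = (np.abs(null) >= np.abs(observed)).mean(axis=0)
print("voxels with p < .05 (uncorrected):", int((p_vox < 0.05).sum()))
```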

Section snippets

The neural substrate of processing gestures accompanying speech

Shared processing, or integration, between speech and gestures has been argued to involve left inferior frontal gyrus (IFG) and, to different extents, left (or bilateral) posterior temporal cortices (pTC). Some previous imaging studies reported overlap between the processing of speech and gestures in left IFG and bilateral posterior middle temporal gyrus (pMTG) (Straube, Green, Weis, & Kircher, 2012; Xu, Gannon, Emmorey, Smith, & Braun, 2009). However, these results do not tell us whether the

Patient studies of speech-gesture processing

Studies that have examined PWA's performance in tasks combining speech and gesture indicate that PWA show congruence and incongruence effects when presented with speech-gesture pairings. For example, Eggenberger et al. (2016) asked PWA and control participants to judge if a spoken word and a co-speech gesture matched. Stimuli were either congruent (same meaning), incongruent (different meaning) or baseline (words produced in the context of a meaningless gesture). PWA showed both an

Left IFG and pMTG involvement in verbal and action semantics

Left IFG and pMTG have long been considered key regions for semantic processing of language (words and sentences) and action (gesture recognition), respectively. Many imaging studies have shown left IFG involvement in a wide variety of tasks requiring the processing of semantic information from verbal (spoken, written or signed) material, in both production and comprehension (Binder, Desai, Graves, & Conant, 2009; Hickok, 2012; Hickok & Poeppel, 2007). In particular, IFG activation has been

The present study

We use a case series approach to characterize the behavioral and anatomical profile of PWA's comprehension of speech and of gestures when presented in combination and in isolation. We compare multimodal speech-gesture pairings to unimodal baselines (speech-only or gesture-only) to establish benefits (difference between congruent speech-gesture pairings in which both the speech and the gesture refer to the same meaning and unimodal baseline) and costs (difference between incongruent
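
Spelled out (a paraphrase of the design just described, with the sign convention for costs assumed by symmetry rather than quoted from the full text):

\[
\mathrm{benefit} = \mathrm{Acc}(\text{congruent speech-gesture}) - \mathrm{Acc}(\text{unimodal baseline}), \qquad
\mathrm{cost} = \mathrm{Acc}(\text{unimodal baseline}) - \mathrm{Acc}(\text{incongruent speech-gesture}),
\]

where the unimodal baseline is speech-only for the speech task and gesture-only for the gesture task.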

Participants

Forty-five right-handed native speakers of American English participated in the study: 30 chronic aphasic/apraxic patients and 15 healthy controls, who were equivalent in age [t(42) = −1.59, p = .12] and education [t(42) = −1.59, p = .12].

All subjects were recruited from the Moss Rehabilitation Research Institute (MRRI) Research Registry (Schwartz, Brecher, Whyte, & Klein, 2005) and tested in the MRRI

Comparison between PWA and controls in the speech and in the gesture tasks

The first set of analyses comprised two models, one for the speech task and one for the gesture task. Both contained the main effects of group and condition, as well as the group-by-condition interaction. Performance was highly accurate in both groups, with controls at or near ceiling. Fig. 3 and Table 2 show the results.
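
For concreteness, the kind of mixed-effects logistic model implied by this description (the reference list includes lme4; Bates et al., 2015) can be sketched as

\[
\operatorname{logit} P(\mathrm{correct}_{ij}) = \beta_0 + \beta_1\,\mathrm{group}_i + \beta_2\,\mathrm{condition}_{ij} + \beta_3\,(\mathrm{group}_i \times \mathrm{condition}_{ij}) + u_i + v_j,
\]

where \(u_i\) and \(v_j\) are by-participant and by-item random intercepts; the random-effects structure shown here is an assumption on our part, as the snippet does not specify it.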

For the speech task, there was a main effect of group [χ2(1) = 27.12, p < .001, −2.35 ± .4] with patients performing less accurately than controls.
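
A χ²(1) statistic of this kind is typically obtained either as a Wald test on the fixed-effect coefficient or as a likelihood-ratio test between nested models; for reference, the likelihood-ratio form is

\[
\chi^2 = -2\,(\ell_{\mathrm{reduced}} - \ell_{\mathrm{full}}), \qquad df = k_{\mathrm{full}} - k_{\mathrm{reduced}},
\]

where \(\ell\) is the maximized log-likelihood and \(k\) the number of fixed-effect parameters of each model (a standard definition, not a detail reported in this snippet).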

Discussion

This study is the first investigation of the neural systems engaged in comprehending words accompanied by gestures and gestures accompanied by words in aphasic patients. Moreover, we considered for the first time the influence of the ability to derive meaning from lexical and gestural input on the pattern of benefits and costs of multimodal versus unimodal processing in PWA. Overall, PWA showed larger effects of multimodal congruency and incongruency than controls, although both groups showed

Conclusions

In this first lesion study of multimodal word comprehension in people with aphasia (PWA), who presented with varying degrees of deficits in lexical-semantics and gesture recognition, we have provided new insight into the role of specific nodes of the language and/or action networks (IFG, pTC/LTO, and anterior STG/MTG) in the semantic processing of spoken words and gestures.

CRediT author statement

Gabriella Vigliocco: conceptualization, supervision, writing first draft, methodology, funding acquisition, revision.

Anna Krason: formal analysis, investigation, visualization, revision.

Harrison Stoll: formal analysis, investigation, visualization, revision.

Alessandro Monti: methodology, software, investigation, revision.

Laurel Buxbaum: conceptualization, supervision, methodology, funding acquisition, resources, revision.

Open practices

The study in this article earned an Open Materials badge for transparent practices. Materials for the study are available at https://osf.io/pvube and https://github.com/cognition-action-lab/Vigliocco_etal.

Acknowledgments

This research was supported by National Institutes of Health grant R01-NS099061 awarded to Laurel Buxbaum, by the Economic and Social Research Council (ESRC) of Great Britain (grant no. RES-620-28-6002) and European Research Council grant 743035 awarded to Gabriella Vigliocco, and by the Moss Rehabilitation Research Institute. We thank H. Branch Coslett and Olu Faseyitan for assistance with lesion image segmentation and warping.

References (106)

  • A.D. Friederici

    Pathways to language: Fiber tracts in the human brain

    Trends in Cognitive Sciences

    (2009)
  • G. Gainotti et al.

    Comprehension of symbolic gestures in aphasia

    Brain and Language

    (1976)
  • A.M. García et al.

    How meaning unfolds in neural time: Embodied reactivations can precede multimodal semantic effects during language processing

    Neuroimage

    (2019)
  • G. Goldenberg et al.

    Shared neural substrates of apraxia and aphasia

    Neuropsychologia

    (2015)
  • G. Herbet et al.

    Rethinking voxel-wise lesion-deficit analysis: A new challenge for computational neuropsychology

    Cortex

    (2015)
  • G. Hickok

    The cortical organization of speech processing: Feedback control and predictive coding the context of a dual-stream model

    Journal of Communication Disorders

    (2012)
  • H. Holle et al.

    Neural correlates of the processing of co-speech gestures

    Neuroimage

    (2008)
  • H. Holle et al.

    Integration of iconic gestures and speech in left superior temporal areas boosts speech comprehension under adverse listening conditions

    Neuroimage

    (2010)
  • S. Kalénine et al.

    Thematic knowledge, artifact concepts, and the left posterior temporal lobe: Where action and object semantics converge

    Cortex

    (2016)
  • D. Kemmerer et al.

    Behavioral patterns and lesion sites associated with impaired processing of lexical and conceptual knowledge of actions

    Cortex

    (2012)
  • A. Kertesz

    Apraxia and aphasia. Anatomical and clinical relationship

  • J.M. Kilner

    More than one pathway to action understanding

    Trends in Cognitive Sciences

    (2011)
  • K. Marinkovic et al.

    Spatiotemporal dynamics of modality-specific and supramodal word processing

    Neuron

    (2003)
  • A. Martin et al.

    Is a single ‘hub’, with lots of spokes, an accurate description of the neural architecture of action semantics?: Comment on “action semantics: A unifying conceptual framework for the selective use of multimodal and modality-specific object knowledge” by van Elk, van Schie and Bekkering

    Physics of Life Reviews

    (2014)
  • C. Obermeier et al.

    The benefit of gestures during communication: Evidence from hearing and hearing-impaired individuals

    Cortex

    (2012)
  • A. Özyürek et al.

    Gesture, brain, and language

    Brain and Language

    (2007)
  • G. Pobric et al.

    Category-specific versus category-general semantic impairment induced by transcranial magnetic stimulation

    Current Biology

    (2010)
  • D. Pustina et al.

    Improved accuracy of lesion to symptom mapping with multivariate sparse canonical correlations

    Neuropsychologia

    (2018)
  • G. Robinson et al.

    Conceptual proposition selection and the LIFG: Neuropsychological evidence from a focal frontal group

    Neuropsychologia

    (2010)
  • M.F. Schwartz et al.

    A patient registry for cognitive rehabilitation research: A strategy for balancing patients' privacy rights with researchers' need for access

    Archives of Physical Medicine and Rehabilitation

    (2005)
  • J.I. Skipper et al.

    Gestures orchestrate brain networks for language understanding

    Current Biology

    (2009)
  • M. van Elk et al.

    Action semantics: A unifying conceptual framework for the selective use of multimodal and modality-specific object knowledge

    Physics of Life Reviews

    (2014)
  • G. Vigliocco et al.

    Nouns and verbs in the brain? A review of behavioural, electrophysiological, neuropsychological and imaging studies

    Neuroscience & Biobehavioral Reviews

    (2011)
  • N. Vukovic et al.

    Primary motor cortex functionally contributes to language comprehension: An online rTMS study

    Neuropsychologia

    (2017)
  • R.M. Willems et al.

    Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language

    Neuroimage

    (2009)
  • M.W. Alibali et al.

    Assessing knowledge conveyed in gesture: Do teachers have the upper hand?

    Journal of Educational Psychology

    (1997)
  • D. Bates et al.

    Fitting linear mixed-effects models using lme4

    Journal of Statistical Software

    (2015)
  • M. Bedny et al.

    Semantic adaptation and competition during word comprehension

    Cerebral Cortex

    (2008)
  • C.M. Bennett et al.

    The principled control of false positives in neuroimaging

    Social Cognitive and Affective Neuroscience

    (2009)
  • J.R. Binder et al.

    Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies

    Cerebral Cortex

    (2009)
  • R.J. Binney et al.

    The ventral and inferolateral aspects of the anterior temporal lobe are crucial in semantic memory: Evidence from a novel direct comparison of distortion-corrected fMRI, rTMS, and semantic dementia

    Cerebral Cortex

    (2010)
  • N. Cocks et al.

    Integration of speech and gesture in aphasia

    International Journal of Language & Communication Disorders

    (2018)
  • N. Cocks et al.

    Gesture and speech integration: An exploratory study of a man with aphasia

    International Journal of Language & Communication Disorders

    (2009)
  • A.S. Dick et al.

    Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech

    Human Brain Mapping

    (2014)
  • N. Eggenberger et al.

    Comprehension of Co-speech gestures in aphasic patients: An eye movement study

    Plos One

    (2016)
  • K.V. Embleton et al.

    Distortion correction for diffusion-weighted MRI tractography and fMRI in the temporal lobes

    Human Brain Mapping

    (2010)
  • D.C. Finkelnburg

    Niederrheinische Gesellschaft, Sitzung vom 21. März 1870 in Bonn [Lower Rhine Society, meeting of 21 March 1870]

    Berliner Klinische Wochenschrift

    (1870)
  • F. Garcea et al.

    Reduced competition between tool action neighbors in left hemisphere stroke

    bioRxiv

    (2019)
  • A. Green et al.

    Neural integration of iconic and unrelated coverbal gestures: A functional MRI study

    Human Brain Mapping

    (2009)
  • T.C. Gunter et al.

    Inconsistent use of gesture space during abstract pointing impairs language comprehension

    Frontiers in Psychology

    (2015)