Representations of conceptual information during automatic and active semantic access
Introduction
When we see an object, we automatically and effortlessly understand its significance and meaning. Likewise, as evidenced by the Stroop task (Stroop, 1935), the meaning of a word is automatically and obligatorily retrieved when we read or hear it. This reflects the passive, automatic access to conceptual representations that allows us to interact effectively with the world. At the same time, specific tasks and behavioural goals can trigger the retrieval of richer information about an object. Owls are not just birds; they are nocturnal, hunt mice, and can rotate their necks 270° in either direction. Such forms of active semantic access underlie much of the human capacity for thought and higher understanding. The degree to which automatic conceptual access and active access to deeper associated meaning are served by the same or different neural representations remains an open question.
Conceptual representations can be accessed regardless of input modality: the representation evoked when a concept is presented as a spoken word is generally similar to that evoked when the same concept is presented as a written word. Accordingly, brain areas that represent such conceptual knowledge are supramodal in nature. In the human brain, conceptual knowledge is represented by a distributed semantic system. This includes regions that respond more strongly to semantically richer stimuli: the angular gyrus, lateral and ventral temporal cortex, ventromedial prefrontal cortex, inferior frontal gyrus, dorsal medial prefrontal cortex and the precuneus/posterior cingulate gyrus (Binder et al., 2009). Studies employing multivariate pattern analysis (MVPA) have demonstrated that elements of the semantic system are sensitive to semantic content over and above perceptual properties (Fairhall and Caramazza, 2013; Devereux et al., 2013; Clark and Tyler, 2014; Simanova et al., 2014; Bruffaerts et al., 2013; Liuzzi et al., 2015, 2017, 2019, 2020; Borghesani et al., 2016; Martin et al., 2018). Nevertheless, all of these studies adopted active semantic tasks: naming (Devereux et al., 2013; Clark and Tyler, 2014), judgment of semantic consistency (Simanova et al., 2014), property verification (Bruffaerts et al., 2013; Liuzzi et al., 2015, 2017, 2019; Martin et al., 2018), semantic decision (Borghesani et al., 2016), or typicality judgment (Fairhall and Caramazza, 2013; Liuzzi et al., 2020). Representations of word meaning have also been studied during the naturalistic presentation of narratives (Huth et al., 2016; Deniz et al., 2020), suggesting that such representations may be widespread under these conditions. However, the relationship of these representations to single-word processing, and the relationship between active and passive conceptual access, remain uncertain.
Magnetoencephalographic studies employing word stimuli indicate that access to semantic content during active semantic access is a multistage process. Using representational similarity analysis and a semantic typicality task, Giari et al. (2020) demonstrated that access to the conceptual content of word stimuli occurs in two stages, the first ranging from 230 to 335 msec, the second from 360 to 585 msec. This finding supports the possibility that semantic access proceeds in an initial, rapid, automatic phase, followed by a later phase in which semantic representations are actively accessed. Furthermore, the initial peak of the representation of conceptual content common to all modalities was observed at 360 msec, which coincides with the N400 potential. The N400 is an index of semantic processing whose amplitude is influenced by the predictability of a stimulus (Lau et al., 2008) and by the integration of semantic information with the working context (Hagoort, 2007).
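The logic of representational similarity analysis referred to above can be illustrated with a minimal sketch: a model representational dissimilarity matrix (RDM) predicting the semantic structure of the stimulus set is rank-correlated with the neural RDM computed from response patterns. All data here are simulated and the categorical model RDM is an assumption for illustration, not the model used by Giari et al. (2020).

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulated data: 40 word stimuli x 100 sensors at one time point.
patterns = rng.normal(size=(40, 100))

# Hypothetical model RDM: 0 within a semantic category, 1 between.
categories = np.repeat(np.arange(4), 10)        # 4 categories x 10 words
model_rdm = pdist(categories[:, None], metric="hamming")

# Neural RDM: correlation distance between stimulus patterns.
neural_rdm = pdist(patterns, metric="correlation")

# RSA statistic: rank correlation between the two RDM upper triangles.
rho, p = spearmanr(neural_rdm, model_rdm)
print(round(rho, 3))
```

In a time-resolved analysis, this correlation is recomputed at each time point, yielding the kind of temporal stages (e.g. 230–335 and 360–585 msec) described above.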
The mechanism by which active semantic access is instantiated is thought to be mediated through control circuitry that guides access to task- and goal-relevant semantic representations. Due to its involvement in selecting between multiple competing semantic responses and in making infrequent semantic associations, the left inferior frontal gyrus (IFG) has been attributed a key role in semantic control (Thompson-Schill et al., 1997; Martin and Chao, 2001; Wagner et al., 2001; Thompson-Schill, 2003; Lambon Ralph et al., 2017). Recent models have also posited that the control circuitry may extend to include additional regions, such as the posterior middle temporal gyrus (pMTG; Lambon Ralph et al., 2017).
In the current study, we address the question of whether the semantic system codes for conceptual information in the same way when we interact with the world as when we internally think about meaning. We do so by adopting a phonetic decision task and a typicality task, which require automatic and active semantic access, respectively. By means of a whole-brain decoding Multivariate Pattern Analysis (MVPA), as well as Region-Of-Interest (ROI) analysis, we determined whether—and to what degree—representations of semantic category during passive or active semantic access rely on shared or distinct neural patterns. Results reveal that the semantic system is sensitive to semantic class during active tasks and that, among those regions composing this system, the left pMTG, pVTC and IFG share common semantic representations between active and passive semantic access.
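The cross-task decoding logic can be sketched as follows: a classifier is trained to discriminate semantic categories from response patterns recorded under one task and tested on patterns from the other task; above-chance transfer implies a category code shared across active and passive access. The data below are simulated and the classifier choice (a linear SVM) is an assumption for illustration, not necessarily the pipeline used in the study.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_per_class, n_voxels = 30, 200

# A category signal shared across tasks (assumption for illustration).
signal = rng.normal(size=n_voxels)

def simulate_task(n, noise=2.0):
    """Simulate ROI patterns for two categories under one task."""
    X = np.vstack([rng.normal(size=(n, n_voxels)) * noise + s * signal
                   for s in (-0.5, 0.5)])
    y = np.repeat([0, 1], n)
    return X, y

X_active, y_active = simulate_task(n_per_class)
X_passive, y_passive = simulate_task(n_per_class)

# Train on the active task, test on the passive task: accuracy above
# the 0.5 chance level indicates a task-general category code.
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_active, y_active)
acc = clf.score(X_passive, y_passive)
print(round(acc, 2))
```

In practice, such transfer accuracies are computed per subject and per ROI and then tested against chance at the group level.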
Section snippets
Participants
Seventeen participants took part in this study. All participants were native Italian speakers, right-handed and free of neurological or psychiatric disorders. All procedures were approved by the Human Research Ethics Committee on the Use of Human Subjects in Research of the University of Trento and the experiments were performed in accordance with the approved guidelines. Participants confirmed that they understood the experimental procedure and gave their written informed consent.
Stimulus dataset
One hundred
Pre-processing
Data were pre-processed with Statistical Parametric Mapping - SPM12 (Wellcome Trust Centre for Neuroimaging, University College London, UK). Functional images were realigned and resliced, and a mean functional image was created. Next, the structural image was co-registered with the mean functional image and segmented. Functional images were normalized to the Montreal Neurological Institute (MNI) T1 space, resampled to a voxel size of 2 × 2 × 2 mm³ and spatially smoothed with an 8 mm FWHM kernel.
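The smoothing step amounts to convolving each volume with a 3D Gaussian kernel whose width is specified as a full width at half maximum (FWHM), related to the Gaussian sigma by FWHM = σ·√(8 ln 2). A minimal sketch of the equivalent operation on a toy volume (not the SPM implementation itself):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_MM = 8.0
VOXEL_MM = 2.0  # voxel size after resampling to 2 x 2 x 2 mm

# Convert the kernel width from FWHM in mm to a sigma in voxel units.
sigma_vox = FWHM_MM / (VOXEL_MM * np.sqrt(8 * np.log(2)))

# Toy volume: a unit impulse at the centre of a 32^3 grid.
volume = np.zeros((32, 32, 32))
volume[16, 16, 16] = 1.0

smoothed = gaussian_filter(volume, sigma=sigma_vox)

# Smoothing preserves total signal while spreading it over neighbours,
# which is what improves inter-subject overlap at the cost of resolution.
print(round(smoothed.sum(), 4), round(smoothed.max(), 4))
```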
Univariate analysis
A
Behavioural results
A 3-way repeated-measures ANOVA on reaction times was computed. As expected, there was a strong main effect of task (F(1,16) = 28.2, p < .00001) and a main effect of input modality (F(1,16) = 24.2, p < .00001). Subjects were faster during the phonetic task (mean = 1.19s, SEM = 0.019) than during the typicality task (mean = 1.32s, SEM = 0.017), and faster in the written modality (mean = 1.18s, SEM = 0.017) than in the spoken modality (mean = 1.33s, SEM = 0.018). A less
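For a within-subject factor with two levels, a main effect of this kind with one numerator degree of freedom is equivalent to a squared paired t-test. A minimal sketch on simulated per-subject mean RTs (the values below are illustrative, not the study's data):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
n_subjects = 17

# Simulated per-subject mean RTs in seconds (illustrative only).
rt_phonetic = rng.normal(1.19, 0.08, n_subjects)
rt_typicality = rt_phonetic + rng.normal(0.13, 0.05, n_subjects)

# For a two-level repeated-measures factor, F(1, n-1) = t^2,
# where t is the paired t statistic over subjects.
t, p = ttest_rel(rt_typicality, rt_phonetic)
F = t ** 2
print(round(F, 2), p < .05)
```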
Discussion
In this study, we investigated the representation of conceptual information during automatic and active semantic access to address whether overlapping or distinct neural populations underlie these processes. We observed that active access enhanced representation in pMTG, pVTC and BA45 (Fig. 5). At the same time, we found that neural representations in these same cortical regions show a subtle pattern of category representation that is common to both active and passive conceptual access (Fig. 6).
Conclusions
In the current study we investigated whether active and passive semantic access rely on shared or distinct neural substrates. We found that active access enhanced representation in left pMTG, pVTC and BA45, and through a cross-task decoding analysis, we showed that these same regions exhibit a common neural representation for both types of access, active and passive. Collectively, these results show that the same cortical regions code for conceptual information both when we interact with the world and when we internally think about meaning.
Data availability
Data, stimuli and analysis scripts are available upon request from the corresponding author.
Credit author statement
Antonietta Gabriella Liuzzi: Methodology, Formal analysis, Writing – original draft, Writing - review & editing. Silvia Ubaldi: Investigation, Resources. Scott L. Fairhall: Conceptualization, Methodology, Formal analysis, Writing – original draft, Writing - review & editing, Supervision, Funding acquisition.
Acknowledgements
The project was funded by the European Research Council (ERC) Starting Grant CRASK - Cortical Representation of Abstract Semantic Knowledge, awarded to Scott Fairhall under the European Union's Horizon 2020 research and innovation program (grant agreement no. 640594).
References (30)
- Borghesani et al. (2016). Word meaning in the ventral visual path: a perceptual to conceptual gradient of semantic coding. Neuroimage.
- Davey et al. (2016). Exploring the role of the posterior middle temporal gyrus in semantic cognition: integration of anterior temporal lobe with executive processes. Neuroimage.
- Hickok and Poeppel (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition.
- Liuzzi et al. (2015). Left perirhinal cortex codes for similarity in meaning between written words: comparison with auditory word input. Neuropsychologia.
- Liuzzi et al. (2017). Cross-modal representation of spoken and written word meaning in left pars triangularis. Neuroimage.
- Liuzzi et al. (2019). Left perirhinal cortex codes for semantic similarity between written words defined from cued word association. Neuroimage.
- Martin and Chao (2001). Semantic memory and the brain: structure and processes. Curr. Opin. Neurobiol.
- Stoeckel et al. (2003). A fronto-parietal circuit for tactile object discrimination: an event-related fMRI study. Neuroimage.
- Thompson-Schill (2003). Neuroimaging studies of semantic memory: inferring "how" from "where". Neuropsychologia.
- Wagner et al. (2001). Recovering meaning: left prefrontal cortex guides controlled semantic retrieval. Neuron.