Automaticity in the reading circuitry
Introduction
Mastering spoken language is natural, but learning a written language is not (Saffran et al., 2001, Wandell et al., 2012). Infants learn to understand spoken language through statistical regularities in natural speech starting from the earliest stages of development (Saffran, Aslin, & Newport, 1996). Indeed, encoding phonetic information during speech perception (Mesgarani et al., 2014, Yi et al., 2019) seems to be an automatic process in infants, and specialized circuits for processing spoken language located in the superior temporal gyrus (STG) are activated by speech sounds irrespective of attention, and even during sleep in infants as young as three months (Dehaene-Lambertz, Dehaene, & Hertz-Pannier, 2002). However, learning to read is an effortful process, which requires formal instruction on how to map arbitrary visual symbols (i.e., letters or graphemes) onto speech sounds (i.e., phonemes). Only after years of practice does this association become automatic and effortless, allowing for fluid and deep reading (Norton and Wolf, 2012, Wolf, 2018).
Behaviorally, the difference between a child who struggles to apply knowledge of grapheme-phoneme correspondence to decode a word and a child who fluidly reads a paragraph of text is striking. But neurally, what it means to automate the grapheme-to-phoneme conversion process is less clear. Cognitive models of reading have proposed that, for the literate brain, viewing printed words produces widespread and automatic activation of phonological and semantic representations (Harm and Seidenberg, 1999, Harm and Seidenberg, 2004, Seidenberg and McClelland, 1989, Van Orden and Goldinger, 1994). These models posit that literacy involves automatizing the connections between orthographic (visual), phonological, and semantic codes in the brain. Consistent with the predictions of these models, skilled adult readers show activation in canonical language processing areas such as the left inferior frontal gyrus (i.e., Broca’s area, IFG) and superior temporal gyrus (i.e., Wernicke’s area, STG) in response to visually-presented words regardless of whether or not the task requires them to actively read the words (Klein et al., 2015, Pattamadilok et al., 2017, Paulesu, 2001, Price, 2012, Turkeltaub et al., 2003, Wilson et al., 2004). Furthermore, there is ample behavioral evidence suggesting automatic involvement of phonological processing in response to printed words (Perfetti et al., 1988, Perfetti and Bell, 1991, Stroop, 1935). Indeed, in a series of studies examining the construction of “audiovisual objects” from text, Blomert and colleagues have suggested that automatization of letter-sound knowledge is a hallmark of skilled reading and that a lack of automatization is a critical component of the struggles observed in children with developmental dyslexia (Blau et al., 2009, Blomert, 2011, van Atteveldt et al., 2004).
An intriguing conjecture is that neurons throughout the reading circuitry become automatically responsive to text, regardless of whether a subject intends to read the text, as a result of long-term simultaneous neural activity occurring in visual and language regions – akin to Hebbian learning – over the course of schooling (Hebb, 1949). Thus, becoming a skilled reader might involve automatizing the information transfer between visual and language circuits such that canonical speech processing regions in the STG start responding to written language even in the absence of attention.
In the present study, we define automaticity as the presence of evoked responses to visual stimuli in the absence of attention. To examine automaticity in the visual word recognition circuitry, we compare the response evoked by words to the response evoked by visually matched stimuli (scramble) under two task conditions in which attention is either focused on the stimuli (lexical decision task) or diverted away from them (color judgment on a fixation dot). Other studies have measured task effects within the reading circuitry (Chen et al., 2013, Chen et al., 2015, Mano et al., 2013). For example, Chen and colleagues (Chen et al., 2013) had subjects view words while performing (a) a semantic judgment task, (b) a lexical decision task, or (c) silent reading. They found that the cognitive task affected the evoked response to words as early as 150 ms after stimulus onset, indicating flexibility in the reading circuitry. In later work, they argued that the existence of task effects early in word processing is evidence against automaticity in word recognition (Chen et al., 2015). However, the question of automaticity need not be an either-or distinction: some computations in the reading circuitry might occur automatically while others might flexibly change based on the demands of the cognitive task (Kay & Yeatman, 2017). For example, a large body of studies has examined automatic audio-visual integration of visual symbols and speech sounds during letter processing (Brem et al., 2010, Raij et al., 2000, Taylor et al., 2019). Our concept of automaticity is distinct from these studies; we examine the potential role of attention in gating information flow between visual and language cortex. We set out to ask whether neurons in the reading circuitry respond to visual word stimuli when visual attention is directed away from the stimuli. This is a classic manipulation used to dissociate bottom-up (task-independent) visual responses from top-down (task-dependent) responses in visual cortex (Fang et al., 2008, Kay and Yeatman, 2017).
Previous studies suggesting automaticity of word processing have not diverted attention from the stimuli. For example, although the Stroop task asks subjects to make an orthogonal judgment (color naming) rather than reading the word, attention is still directed toward the word stimuli (Strijkers et al., 2015, Stroop, 1935). The same is true for incidental reading tasks that direct attention to orthographic and shape features of the words (Klein et al., 2015, Pattamadilok et al., 2017, Paulesu, 2001, Price, 2012, Turkeltaub et al., 2003, Wilson et al., 2004).
To test our hypothesis, it is essential to disentangle bottom-up, visually-driven responses from top-down, task-related responses, and to assess whether and how components of the reading circuitry are activated in an automatic manner by bottom-up signals from visual cortex. We used identical word stimuli in two tasks: in one, subjects read each word and decide whether it is a made-up word (lexical decision task); in the other, they direct attention to the fixation mark and respond to rapid color changes (fixation task). By comparing responses to identical stimuli across these two tasks, we could assess the extent to which word-selective responses require visual attention to words and whether the development of automaticity in the reading circuitry is related to children’s reading abilities.
We used magnetoencephalography (MEG) and source localization to define brain regions that were activated during a lexical decision task (active reading) and, within those regions, we characterized the time course of neural responses to text during a reading-irrelevant task in which words were placed outside the focus of attention. Using this paradigm, we first tested whether canonical speech processing regions show automatic responses to printed words. We then assessed whether the strength of automaticity in those regions depends on an individual’s reading skill.
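The comparison above hinges on testing whether responses evoked by identical stimuli differ between the attended and unattended tasks. The exact statistical pipeline is not shown in this snippet, but the reference list includes the standard nonparametric approach of Maris and Oostenveld (2007); the sketch below illustrates its core permutation logic on simulated per-trial response amplitudes. The arrays `attended` and `unattended` are hypothetical stand-ins, not the study's MEG data.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(a, b, n_perm=5000, rng=rng):
    """Two-sample permutation test on the absolute difference of means.

    Under the null hypothesis that task condition does not matter,
    trial labels are exchangeable, so we repeatedly shuffle the pooled
    trials and ask how often the shuffled difference is at least as
    large as the observed one. Returns a two-sided p-value.
    """
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = abs(perm[:len(a)].mean() - perm[len(a):].mean())
        if diff >= observed:
            count += 1
    # +1 in numerator and denominator keeps the p-value strictly positive
    return (count + 1) / (n_perm + 1)

# Hypothetical per-trial evoked amplitudes (arbitrary units) for one
# region, in the attended (lexical decision) vs. unattended (fixation) task
attended = rng.normal(1.0, 0.5, 40)
unattended = rng.normal(0.7, 0.5, 40)
p = permutation_test(attended, unattended)
```

This label-shuffling scheme makes no distributional assumptions about the MEG amplitudes, which is why it is the common choice for evoked-response comparisons; in practice it is applied across time points or sensors with a cluster-based correction rather than to a single mean.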
Participants
A total of 45 native English-speaking children ages 7–12 participated. We discarded data from 3 participants because their MEG signals were noisy, and included data from the remaining 42 participants (age = 7.16–12.7 years, mean ± sd = 9.6 ± 1.5) in our analysis. Children without histories of neurological or sensory disorders were recruited from a database of volunteers in the Seattle area (University of Washington Reading & Dyslexia Research Database; http://ReadingAndDyslexia.com). Parents or
Behavioral results
Table A summarizes participants’ age, behavioral data, and IQ scores for typical and struggling readers. D′ scores for the lexical decision task suggest that all participants performed the task as instructed (typical readers: 1.74 ± 0.23; struggling readers: 1.08 ± 0.17, mean ± sem; p = 0.03, independent t-test; Bayes factor B10 = 3.05). All word stimuli were four-letter words with high lexical frequency to encourage our young participants, including struggling readers, to do the task. Despite the
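The snippet does not show how D′ was computed; as a sketch, d′ for a lexical decision task is conventionally z(hit rate) − z(false-alarm rate), with a log-linear correction so that perfect hit or false-alarm rates do not produce infinite z-scores. The trial counts below are hypothetical, not taken from the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    Adds 0.5 to each count and 1 to each total (log-linear correction)
    so rates of exactly 0 or 1 cannot occur.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one participant: 80/100 real words endorsed,
# 30/100 made-up words incorrectly endorsed
dp = d_prime(80, 20, 30, 70)
```

Because d′ contrasts sensitivity against the false-alarm rate, it separates genuine word/pseudoword discrimination from a simple bias toward answering "real word," which matters when comparing typical and struggling readers.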
Discussion
We have demonstrated that a part of canonical circuitry for processing spoken language, the left superior temporal gyrus (STG), is automatically engaged when skilled readers view text. It has long been hypothesized that automating the association between printed symbols and spoken language is at the foundation of skilled reading (Blau et al., 2009, Blau et al., 2010, Blomert, 2011, Harm and Seidenberg, 1999, Seidenberg and McClelland, 1989, van Atteveldt et al., 2004) and our results formalize
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
We would like to thank Samu Taulu, Eric Larson, Patricia Kuhl and the rest of the I-LABS MEG team for helpful discussion and input throughout this project. This work was supported under the framework of the international cooperation program managed by the National Research Foundation of Korea (NRF) (Nos. 2018K2A9A2A20088926 and 2019R1C1C1009383) to SJJ. This work was also funded by NSF BCS 1551330, NICHD R21HD092771 and R01HD09586101, and a Jacobs Foundation Research Fellowship to JDY. SC was funded by
References (73)
- Reduced neural integration of letters and speech sounds links phonological and reading deficits in adult dyslexia. Current Biology (2009).
- The neural signature of orthographic–phonological binding in successful and failing reading development. NeuroImage (2011).
- Word and object recognition during reading acquisition: MEG evidence. Developmental Cognitive Neuroscience (2017).
- Task modulation of brain responses in visual word recognition as studied using EEG/MEG and fMRI. Frontiers in Human Neuroscience (2013).
- Dynamic statistical parametric mapping: Combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron (2000).
- Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals. NeuroImage (2015).
- Attention-dependent representation of a size illusion in human V1. Current Biology (2008).
- Good practice for conducting and reporting MEG research. NeuroImage (2013).
- Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods (2007).
- The visual word form area: Expertise for reading in the fusiform gyrus. Trends in Cognitive Sciences (2003).
- Automaticity of phonological and semantic processing during visual word recognition. NeuroImage.
- Phonemic activation during the first 40 ms of word identification: Evidence from backward masking and priming. Journal of Memory and Language.
- Automatic (prelexical) phonetic activation in silent word reading: Evidence from backward masking. Journal of Memory and Language.
- A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. NeuroImage.
- Neurobiological studies of reading and reading disability. Journal of Communication Disorders.
- Audiovisual integration of letters in the human brain. Neuron.
- Visual feature-tolerance in the reading network. Neuron.
- Integration of letters and speech sounds in the human brain. Neuron.
- The anatomical localisation of selective impairment of auditory verbal short-term memory. Neuropsychologia.
- Processes in word recognition. Cognitive Psychology.
- The development of cortical sensitivity to visual word forms. Journal of Cognitive Neuroscience.
- Human brain language areas identified by functional magnetic resonance imaging. The Journal of Neuroscience.
- Functional magnetic resonance imaging of human auditory cortex. Annals of Neurology.
- Deviant processing of letters and speech sounds as proximate cause of reading failure: A functional magnetic resonance imaging study of dyslexic children. Brain.
- Brain sensitivity to print emerges when children learn letter–speech sound correspondences. Proceedings of the National Academy of Sciences.
- Explicit and implicit processing of words and pseudowords by adult developmental dyslexics: A search for Wernicke’s Wortschatz? Brain: A Journal of Neurology.
- The effect of spatial attention on contrast response functions in human visual cortex. The Journal of Neuroscience.
- Activation of the left inferior frontal gyrus in the first 200 ms of reading: Evidence from magnetoencephalography (MEG). PLoS One.
- Cerebral mechanisms of word masking and unconscious repetition priming. Nature Neuroscience.
- High-resolution intersubject averaging and a coordinate system for the cortical surface. Human Brain Mapping.
- MEG and EEG data analysis with MNE-Python. Frontiers in Neuroscience.
- Phonology, reading acquisition, and dyslexia: Insights from connectionist models. Psychological Review.