
Brain and Language

Volume 214, March 2021, 104906

Automaticity in the reading circuitry

https://doi.org/10.1016/j.bandl.2020.104906

Highlights

  • Active reading activates conventional language processing areas.

  • Only skilled readers show strong automatic activation in the left STG.

  • The strength of automaticity correlates with reading skill.

  • Automatic activation of STG in children might be a hallmark of skilled reading.

Abstract

Skilled reading requires years of practice associating visual symbols with speech sounds. Over the course of the learning process, this association becomes effortless and automatic. Here we test whether automatic activation of spoken-language circuits in response to visual words is a hallmark of skilled reading. Magnetoencephalography was used to measure word-selective responses under multiple cognitive tasks (N = 42, 7–12 years of age). Even when attention was drawn away from the words by performing an attention-demanding fixation task, strong word-selective responses were found in a language region (i.e., superior temporal gyrus) starting at ~300 ms after stimulus onset. Critically, this automatic word-selective response was indicative of reading skill: the magnitude of word-selective responses correlated with individual reading skill. Our results suggest that automatic recruitment of spoken-language circuits is a hallmark of skilled reading; with practice, reading becomes effortless as the brain learns to automatically translate letters into sounds and meaning.

Introduction

Mastering spoken language is natural, but learning a written language is not (Saffran et al., 2001, Wandell et al., 2012). Infants learn to understand spoken language through statistical regularities in natural speech starting from the earliest stages of development (Saffran, Aslin, & Newport, 1996). Indeed, encoding phonetic information during speech perception (Mesgarani et al., 2014, Yi et al., 2019) seems to be an automatic process in infants, and specialized circuits for processing spoken language located in the superior temporal gyrus (STG) are activated by speech sounds irrespective of attention, and even during sleep in infants as young as three months (Dehaene-Lambertz, Dehaene, & Hertz-Pannier, 2002). However, learning to read is an effortful process, which requires formal instruction on how to map arbitrary visual symbols (i.e., letters or graphemes) onto speech sounds (i.e., phonemes). Only after years of practice does this association become automatic and effortless, allowing for fluid and deep reading (Norton and Wolf, 2012, Wolf, 2018).

Behaviorally, the difference between a child who struggles to apply knowledge of grapheme-phoneme correspondence to decode a word and a child who fluidly reads a paragraph of text is striking. But neurally, what it means to automate the grapheme to phoneme conversion process is less clear. Cognitive models of reading have proposed that, for the literate brain, viewing printed words produces widespread and automatic activation of phonological and semantic representations (Harm and Seidenberg, 1999, Harm and Seidenberg, 2004, Seidenberg and McClelland, 1989, Van Orden and Goldinger, 1994). These models posit that literacy involves automatizing the connections between orthographic (visual), phonological, and semantic codes in the brain. Consistent with the prediction of these models, skilled adult readers show activation in canonical language processing areas such as the left inferior frontal gyrus (i.e., Broca’s area, IFG) and superior temporal gyrus (i.e., Wernicke’s area, STG) in response to visually-presented words regardless of whether the task requires them to actively read the words (Klein et al., 2015, Pattamadilok et al., 2017, Paulesu, 2001, Price, 2012, Turkeltaub et al., 2003, Wilson et al., 2004). Furthermore, there is ample behavioral evidence suggesting automatic involvement of phonological processing in response to printed words (Perfetti et al., 1988, Perfetti and Bell, 1991, Stroop, 1935). Indeed, in a series of studies examining the construction of “audiovisual objects” from text, Blomert and colleagues have suggested that automatization of letter-sound knowledge is a hallmark of skilled reading and that the lack of automatization is a critical component of the struggles observed in children with developmental dyslexia (Blau et al., 2009, Blomert, 2011, van Atteveldt et al., 2004).

An intriguing conjecture is that neurons throughout the reading circuitry become automatically responsive to text, regardless of whether a subject intends to read the text, as a result of long-term simultaneous neural activity occurring in visual and language regions – akin to Hebbian learning – over the course of schooling (Hebb, 1949). Thus, becoming a skilled reader might involve automatizing the information transfer between visual and language circuits such that canonical speech processing regions in the STG start responding to written language even in the absence of attention.

In the present study, we defined automaticity as the evoked response to visual stimuli in the absence of attention. To examine automaticity in the visual word recognition circuitry, we compare the response evoked by words to the response evoked by visually matched stimuli (scrambled words) under two different task conditions where attention is either focused on the stimuli (lexical decision task) or diverted away from the stimuli (color judgment on a fixation dot). Other studies have measured task effects within the reading circuitry (Chen et al., 2013, Chen et al., 2015, Mano et al., 2013). For example, Chen and colleagues (Chen et al., 2013) had subjects view words while performing (a) a semantic judgment task, (b) a lexical decision task, or (c) silent reading. They found that the cognitive task affected the evoked response to words as early as 150 ms after stimulus onset, indicating flexibility in the reading circuitry. In later work, they argued that the existence of task effects early in word processing is evidence against automaticity in word recognition (Chen et al., 2015). However, the question of automaticity need not be an either-or distinction: some computations in the reading circuitry might occur automatically while others might flexibly change based on the demands of the cognitive task (Kay & Yeatman, 2017). For example, a large body of studies has examined automatic audio-visual integration of visual symbols and speech sounds during letter processing (Brem et al., 2010, Raij et al., 2000, Taylor et al., 2019). Our concept of automaticity is distinct from these other studies; we examine the potential role of attention in gating information flow between visual and language cortex. We set out to ask whether neurons in the reading circuitry respond to visual word stimuli when visual attention is directed away from the stimuli. This is a classic manipulation used to dissociate bottom-up (task-independent) visual responses from top-down (task-dependent) responses in visual cortex (Fang et al., 2008, Kay and Yeatman, 2017).

Previous studies suggesting automaticity of word processing have not diverted attention from the stimuli. For example, although the Stroop task asks subjects to make an orthogonal judgment (color naming) rather than reading the word, attention is still directed toward the word stimuli (Strijkers et al., 2015, Stroop, 1935). The same is true for incidental reading tasks that direct attention to orthographic and shape features of the words (Klein et al., 2015, Pattamadilok et al., 2017, Paulesu, 2001, Price, 2012, Turkeltaub et al., 2003, Wilson et al., 2004).

To test our hypothesis, it is essential to disentangle bottom-up, visually-driven responses from top-down, task-related responses, and assess whether and how components of the reading circuitry are activated in an automatic manner by bottom-up signals from visual cortex. We used identical word stimuli in two tasks: one requires reading the word and deciding whether it is a made-up word (lexical decision task); the other requires directing attention to the fixation mark and responding to rapid color changes (fixation task). By comparing responses to identical stimuli in these two tasks, we could assess the extent to which word-selective responses require visual attention to words and whether the development of automaticity in the reading circuitry is related to children’s reading abilities.
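The logic of this two-task contrast can be sketched with hypothetical numbers (all condition labels and values below are illustrative, not data from the study):

```python
# Hypothetical mean evoked responses (arbitrary units) for the
# 2 (task) x 2 (stimulus) design described above. Values are illustrative.
responses = {
    ("lexical_decision", "word"): 12.0,
    ("lexical_decision", "scramble"): 5.0,
    ("fixation", "word"): 9.0,
    ("fixation", "scramble"): 5.0,
}

# Word selectivity in each task: response to words minus response to
# visually matched scrambled stimuli.
word_selectivity = {
    task: responses[(task, "word")] - responses[(task, "scramble")]
    for task in ("lexical_decision", "fixation")
}

# A word-selective response that survives in the fixation task (attention
# diverted away from the words) is the operational signature of automaticity.
print(word_selectivity)  # {'lexical_decision': 7.0, 'fixation': 4.0}
```

The key quantity is the fixation-task difference: because the stimuli are identical across tasks, any word-selective response remaining when attention is on the fixation dot must be driven bottom-up.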

We used magnetoencephalography (MEG) and source localization to define brain regions that were activated during a lexical decision task (active reading) and, within those regions, we characterized the time course of neural responses to text during a reading-irrelevant task in which words were placed outside the focus of attention. Using this paradigm, we first tested whether canonical speech processing regions show automatic responses to printed words. We then assessed whether the strength of automaticity in those regions depends on an individual’s reading skill.


Participants

A total of 45 native English-speaking children ages 7–12 participated. We discarded data from 3 participants because of excessive noise in their MEG signals and analyzed data from the remaining 42 participants (age = 7.16–12.7 years, mean ± sd = 9.6 ± 1.5). Children without histories of neurological or sensory disorders were recruited from a database of volunteers in the Seattle area (University of Washington Reading & Dyslexia Research Database; http://ReadingAndDyslexia.com). Parents or

Behavioral results

Table A summarizes participants’ age, behavioral data, and IQ scores in typical and struggling readers. D′ values for the lexical decision task suggest that all our participants performed the task as instructed (typical readers: 1.74 ± 0.23; struggling readers: 1.08 ± 0.17, mean ± sem; p = 0.03, independent t-test; Bayes factor BF10 = 3.05). All word stimuli were four letter words with high lexical frequency to encourage our young participants, including struggling readers, to do the task. Despite the
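The d′ reported above is the standard signal-detection sensitivity index, d′ = z(hit rate) − z(false-alarm rate), where z is the inverse standard-normal CDF. A minimal sketch of the computation (the function name and example rates are illustrative, not taken from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Example: a reader who correctly flags 80% of made-up words and
# false-alarms on 20% of real words.
print(round(d_prime(0.80, 0.20), 2))  # 1.68
```

In practice, hit and false-alarm rates of exactly 0 or 1 are adjusted (e.g., by a small correction) before taking z, since the inverse CDF is undefined at those extremes.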

Discussion

We have demonstrated that a part of canonical circuitry for processing spoken language, the left superior temporal gyrus (STG), is automatically engaged when skilled readers view text. It has long been hypothesized that automating the association between printed symbols and spoken language is at the foundation of skilled reading (Blau et al., 2009, Blau et al., 2010, Blomert, 2011, Harm and Seidenberg, 1999, Seidenberg and McClelland, 1989, van Atteveldt et al., 2004) and our results formalize

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

We would like to thank Samu Taulu, Eric Larson, Patricia Kuhl and the rest of the I-LABS MEG team for helpful discussion and input throughout this project. This work was supported under the framework of the international cooperation program managed by the National Research Foundation of Korea (NRF) (No. 2018K2A9A2A20088926 and 2019R1C1C1009383) to SJJ. This work was also funded by NSF BCS 1551330, NICHD R21HD092771 and R01HD09586101, and a Jacobs Foundation Research Fellowship to JDY. SC was funded by

References (73)

  • C. Pattamadilok et al.

    Automaticity of phonological and semantic processing during visual word recognition

    NeuroImage

    (2017)
  • C.A. Perfetti et al.

    Phonemic activation during the first 40 ms of word identification: Evidence from backward masking and priming

    Journal of Memory and Language

    (1991)
  • C.A. Perfetti et al.

    Automatic (prelexical) phonetic activation in silent word reading: Evidence from backward masking

    Journal of Memory and Language

    (1988)
  • C.J. Price

    A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading

    NeuroImage

    (2012)
  • K.R. Pugh et al.

    Neurobiological studies of reading and reading disability

    Journal of Communication Disorders

    (2001)
  • T. Raij et al.

    Audiovisual integration of letters in the human brain

    Neuron

    (2000)
  • Andreas M. Rauschecker et al.

    Visual feature-tolerance in the reading network

    Neuron

    (2011)
  • N. van Atteveldt et al.

    Integration of letters and speech sounds in the human brain

    Neuron

    (2004)
  • E.K. Warrington et al.

    The anatomical localisation of selective impairment of auditory verbal short-term memory

    Neuropsychologia

    (1971)
  • D.D. Wheeler

    Processes in word recognition

    Cognitive Psychology

    (1970)
  • Bedo, N., Ribary, U., & Ward, L. M. (2014). Fast dynamics of cortical functional and effective connectivity during word...
  • M. Ben-Shachar et al.

    The development of cortical sensitivity to visual word forms

    Journal of Cognitive Neuroscience

    (2011)
  • J.R. Binder et al.

    Human brain language areas identified by functional magnetic resonance imaging

    The Journal of Neuroscience

    (1997)
  • J.R. Binder et al.

    Functional magnetic resonance imaging of human auditory cortex

    Annals of Neurology

    (1994)
  • V. Blau et al.

    Deviant processing of letters and speech sounds as proximate cause of reading failure: A functional magnetic resonance imaging study of dyslexic children

    Brain

    (2010)
  • S. Brem et al.

    Brain sensitivity to print emerges when children learn letter–speech sound correspondences

    Proceedings of the National Academy of Sciences

    (2010)
  • N. Brunswick et al.

    Explicit and implicit processing of words and pseudowords by adult developmental dyslexics: A search for Wernicke’s Wortschatz?

    Brain: A Journal of Neurology

    (1999)
  • G.T. Buracas et al.

    The effect of spatial attention on contrast response functions in human visual cortex

    The Journal of Neuroscience

    (2007)
  • Chen, Y., Davis, M. H., Pulvermüller, F., & Hauk, O. (2015). Early visual word processing is flexible: evidence from...
  • P.L. Cornelissen et al.

    Activation of the left inferior frontal gyrus in the first 200 ms of reading: Evidence from magnetoencephalography (MEG)

    PLoS ONE

    (2009)
  • Dehaene-Lambertz, G., Dehaene, S., & Hertz-Pannier, L. (2002). Functional neuroimaging of speech perception. In...
  • S. Dehaene et al.

    Cerebral mechanisms of word masking and unconscious repetition priming

    Nature Neuroscience

    (2001)
  • DeWitt, I., & Rauschecker, J. P. (2012). Phoneme and word recognition in the auditory ventral stream. Proceedings of...
  • B. Fischl et al.

    High-resolution intersubject averaging and a coordinate system for the cortical surface

    Human Brain Mapping

    (1999)
  • A. Gramfort et al.

    MEG and EEG data analysis with MNE-Python

    Frontiers in Neuroscience

    (2013)
  • M.W. Harm et al.

    Phonology, reading acquisition, and dyslexia: Insights from connectionist models

    Psychological Review

    (1999)