Neuropsychologia

Volume 137, 3 February 2020, 107299

Interaction of top-down category-level expectation and bottom-up sensory input in early stages of visual-orthographic processing

https://doi.org/10.1016/j.neuropsychologia.2019.107299

Highlights

  • Expectations are based on prior knowledge and current goals in the environmental context.

  • Category-level expectation affects early visual-orthographic processing.

  • Prior expectation starts to affect familiar Chinese character processing from the N1 offset onward.

  • A delayed N250 facilitation effect was found for unfamiliar Korean character processing.

Abstract

How and when top-down information modulates visual-orthographic processing is an essential question in reading research. In a previous study, we showed that task modulation of print tuning started at around 170 ms after stimulus presentation, in the N1 offset of the ERP, while the N1 onset remained unaffected. Here we test how prior category-level expectation affects visual-orthographic processing. Familiar, left/right-structured Chinese characters and stroke-number-matched, unfamiliar Korean characters were presented, while expectation about the upcoming stimuli was manipulated with green and blue colored frames (high Chinese vs. high Korean expectation). EEG data of 18 native Chinese speakers were recorded while participants performed an expectation judgment task. Results from occipito-temporal and whole-map analyses revealed that effects of prior expectation changed throughout the N1. In the N1 onset, a print-tuning main effect was found, with a stronger N1 to Chinese than to Korean characters, irrespective of expectation. In the N1 offset, an expectation-by-character interaction was observed at the whole-map level, with a more negative N1 to Korean than to Chinese characters when expecting a Chinese character, but no such difference when expecting a Korean character. Moreover, the expectation-by-character interaction continued into the N250, with similar responses to Chinese and Korean characters under the Chinese expectation condition, but a less negative N250 to Korean than to Chinese characters under the Korean expectation condition. Taken together, the current study provides evidence that prior category-level expectation starts to take effect at an early stage, within 200 ms, by facilitating the processing of expected stimuli, suggesting that category-level expectation can influence early visual-orthographic processing during word recognition.

Introduction

Neural processes underlying reading must be fast and efficient, because skilled readers can read up to 250 words per minute, as revealed by eye-tracking studies (Dimigen et al., 2011). While many neuroimaging studies have investigated reading, it is still debated whether these rapid brain processes reflect bottom-up visual tuning for written words (Dehaene et al., 2005), or instead reflect the integration of visual input and top-down expectations (Gordon et al., 2017; Hsu et al., 2014). Generally, top-down expectations, which are based on our prior knowledge (Cheung and Bar, 2012) and current goals (Beck and Kastner, 2009; Chalk et al., 2010) in the environmental context, are prepared brain states that reflect foreknowledge about what is probable or possible in the upcoming sensory environment (Summerfield and Egner, 2009).

A growing number of studies provide support for the assumption that visual input and top-down expectations are integrated, although mostly in the domain of visual object perception. These studies revealed that expectations can reduce the computational demands of perception, allowing an efficient and rapid interpretation of the environment (Aru et al., 2016; Melloni et al., 2011). Foreknowledge-based expectations can be rich in configuration information (e.g., expecting the likely configuration of furniture in a familiar room; Clark, 2013), color features (e.g., expecting the color of a banana to be yellow; Witzel et al., 2011), and even specific object content (e.g., expecting to see a microwave in a kitchen; Cheung and Bar, 2012). For example, observers' expectation about the category of an upcoming target object (either a face or a house) was manipulated in an fMRI study (Esterman and Yantis, 2009). The results showed that visual expectation facilitates subsequent perception by activating the face- and house-selective regions of the visual cortex in advance, with increased brain activity in the corresponding category-selective regions of the temporal cortex before stimulus onset and faster reaction times in behavioral tasks. Although such fMRI paradigms can provide information about anticipatory brain activity in category-selective regions (e.g., Esterman and Yantis, 2009), inferences about predictive processes may be heavily biased. One of the most important concerns is the requirement for relatively long intervals (>5 s) given the sluggish nature of the BOLD fMRI signal, which makes the method unsuitable for examining circumstances with more typical processing intervals or more rapid presentation sequences.

Like most “objects” in the visual world, written words can be recognized rapidly and efficiently as well (Maurer et al., 2005a). Considering the accumulating evidence for effects of expectations on visual object perception, the question arises whether similar neural mechanisms of expectation effects occur in visual word processing. In fact, over the past several decades, a large number of studies have examined this issue, but the manipulations of top-down expectation were mostly achieved by using sentence context (Federmeier et al., 2007; Wang et al., 2015). Specifically, strongly and weakly constraining sentence frames were used in order to compare brain activity to expected and unexpected target words embedded in the sentences. However, using sentence context, most of these studies focused on the later N400 component, an event-related brain potential component that has been linked to the access and integration of semantic information between target words and sentence context (Kutas and Hillyard, 1980), while the influence of expectation on early visual-orthographic processing remains unclear.

Indeed, rather than using sentence context, a few recent studies have started to examine expectation effects on early stages of word processing by using different task demands (Chen et al., 2013, 2015; Strijkers et al., 2015; Wang and Maurer, 2017; Yang et al., 2012). For example, Yang et al. (2012) compared responses to stimuli that varied parametrically in their wordlikeness using different tasks (i.e., an explicit lexical decision task and an implicit symbol detection task). Task-by-wordlikeness interactions were found throughout the reading system (Yang et al., 2012). Moreover, evidence from spatiotemporal brain dynamics revealed early task modulation (i.e., within 250 ms) for several psycholinguistic variables, such as orthographic typicality and frequency (Chen et al., 2015).

While both spatial and temporal evidence of task-induced top-down modulation on the processing of early visual-orthographic information has been provided, the effect of category-level expectation/modulation is less clear. Studies of task effects mainly investigated expectations at the level of detailed features, which were targeted based on the task at hand, rather than expectations at the level of more abstract categories. Specifically, in order to elicit effects of different task demands, several tasks were typically used and compared. For example, in Strijkers et al.'s (2015) study, semantic categorization and color detection tasks were employed to examine task effects on visual word processing. In the semantic categorization task, participants were instructed to press a button when a visually presented word corresponded to an animal name, while in the color detection task, participants were instructed to press a button when a stimulus was presented in blue font (Strijkers et al., 2015). In other words, participants were instructed to respond when a target feature (e.g., font color in the color detection task) occurred in an ongoing stimulation stream. Foreknowledge of the target stimulus features corresponding to different task demands leads to greater sensitivity to task-relevant features than to irrelevant ones during processing of the visual input (here, visual words). More abstract category-level expectation, instead, relates to the category of an upcoming target object/print (Egner et al., 2010), rather than to particular features of the stimuli. Prior knowledge of the stimulus type leads to a response bias towards the expected category membership (Summerfield and Egner, 2016). In fact, no studies so far have explicitly examined how expectations about abstract category-level visual word representations influence visual-orthographic processing, even though category-level sensitivity is often investigated in studies on the neural mechanisms of visual expertise in reading (e.g., Brem et al., 2006; Maurer et al., 2005a,b; Xue et al., 2008; Zhao et al., 2012).

In these studies, different stimulus categories were used that corresponded to different levels of reading expertise of the participants. For example, in Maurer et al. (2005a), responses to two stimulus categories, familiar words and unfamiliar symbol strings, were compared. Importantly, as these categories were presented in separate blocks, different category-level expectations for words and symbol strings were elicited. However, only neural tuning for print was examined in these studies, while the influence of category-level expectations on print tuning has not been explicitly explored.

Therefore, in the current study, instead of using different tasks to examine top-down modulation of visual word processing by foreknowledge of target stimulus features, we aimed to extend the investigation of top-down modulation to more abstract category-level expectations.

Additionally, although a growing number of studies have started to explore the integration of expectation and sensory input during object and word recognition, the time course of expectation effects remains elusive. To our knowledge, only a few studies to date have addressed this issue, and mixed results were found (Aru et al., 2016; Dambacher et al., 2006; Dambacher et al., 2009; Gagl et al., 2018; Johnston et al., 2016; Sherwell et al., 2016). For example, Johnston and colleagues examined the modulation of expectations on image processing, including facial expression changes, rigid rotations, and visual field location (Johnston et al., 2016). The mismatch between perceptual expectations and visual input was indexed by the N170 component. The N170 (N1) is a robust electrophysiological marker of visual expertise for face, object, and print stimuli (Brem et al., 2005; Gauthier et al., 2003; Tanaka and Curran, 2001; Maurer et al., 2005b). The N1 peaks around 140–180 ms after stimulus onset, with negativity over occipito-temporal and positivity over fronto-central electrodes (Maurer et al., 2005a). However, studies that focused on conscious visual perception revealed earlier expectation effects in the time window of 80–95 ms after stimulus onset. For studies of expectation effects on word processing, results were also mixed. Results from sentence context manipulations suggested that expectation effects occurred either late, at around 200–500 ms (Dambacher et al., 2006), or early, within 90 ms (Dambacher et al., 2009). With regard to the time course of task expectation effects on word recognition, in our recent ERP study we found that the influence of task on visual-orthographic processing changes throughout the N1. Specifically, print tuning (indicated by greater sensitivity to words/characters compared to control stimuli) was present in the N1 onset across all tasks, whereas tasks modulated print tuning only later, in the N1 offset (Wang and Maurer, 2017).
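
As a concrete illustration of how such an occipito-temporal N1 is typically quantified, the sketch below computes mean amplitudes in the 140–180 ms window mentioned above using the MNE-Python library. It is a minimal sketch, not the authors' analysis pipeline: the epoch file name, condition labels, and electrode selection are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): mean N1 amplitude over
# occipito-temporal electrodes, using the 140-180 ms window mentioned above.
import mne

# Hypothetical epoched data file and condition labels.
epochs = mne.read_epochs("sub-01_epo.fif", preload=True)

# Placeholder occipito-temporal electrode selection (10-10 labels assumed).
ot_channels = ["P7", "P8", "PO7", "PO8", "O1", "O2"]

n1_amplitudes = {}
for condition in ["expected_chinese", "unexpected_chinese",
                  "expected_korean", "unexpected_korean"]:
    evoked = epochs[condition].average().pick(ot_channels)
    # Crop to the assumed N1 window (140-180 ms) and average over time and channels.
    window = evoked.copy().crop(tmin=0.140, tmax=0.180)
    n1_amplitudes[condition] = window.data.mean() * 1e6  # volts -> microvolts

for cond, amp in n1_amplitudes.items():
    print(f"{cond}: {amp:.2f} µV")
```

A mean-amplitude measure of this kind is one common choice for N1 analyses; peak-based or whole-map (microstate) measures, as used in the present study, are alternatives.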

It therefore seems important to explore whether the N170 (N1) also reflects effects of category-level expectation on visual-orthographic processing. Moreover, considering the previously reported finding that only the late part of the N1, but not the early part, was sensitive to top-down task modulation, it also seems important to explore the onset and/or offset of any potential category-level expectation modulation of visual-orthographic processing.

Taken together, in this study we are particularly interested in two questions: (1) whether early visual-orthographic processing can be influenced by abstract category-level expectations, and if so, (2) how such abstract category-level expectation effects unfold within the stages of early visual-orthographic processing. To this end, we compared Chinese native speakers' electroencephalography (EEG) activity to different categories of characters (i.e., familiar Chinese and unfamiliar Korean characters). In particular, rather than using different tasks that evoke more detailed feature-level expectations (Wang and Maurer, 2017), we used a design in which expectation about the character category was explicitly established by cues (i.e., colored frames) presented before the stimuli. Specifically, we manipulated the probabilistic association between the frame color and the following character type and informed the participants of all probabilistic contingencies before the experiment. The participants could thus expect which type/category of character was more likely to occur based on the frame color seen earlier. Based on the literature about categorical expectation effects on object perception (Egner et al., 2010; Gamond et al., 2011), we expected that word perception might also be facilitated by categorical expectations, which may result in smaller amplitudes of the N170 component. This hypothesis is based on empirical evidence provided by Johnston and colleagues, who suggested that the N170 can index general processes relating to expectations irrespective of stimulus object category (Johnston et al., 2016), with a smaller N170 on predictable than on unpredictable trials. Moreover, given the growing number of studies providing support for the assumption that several processes coincide within the N1 component in response to visual words (Brem et al., 2006; Cohen et al., 2000; Eberhard-Moscicka et al., 2016; Wang and Maurer, 2017), we further hypothesized that the modulation of categorical expectation may change throughout the N1 component. We did not have a strong hypothesis about expectation effects on the N1 latency. In the literature on expectation effects, little is known about the influence of expectation on N170 latency. Most previous studies of categorical expectation effects focused on object categories using fMRI (e.g., Egner et al., 2010). Previous EEG studies on reading expertise effects on visual word processing (e.g., Maurer et al., 2005a,b) did not explicitly compare N170 latencies between expected and unexpected stimuli.
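
To illustrate the logic of the cue–category manipulation described above, the following Python sketch generates a trial list in which a colored frame probabilistically predicts the category of the upcoming character. The 75%/25% contingency, the color-to-category mapping, and the trial counts are assumptions made for illustration only; the actual probabilities and counts are reported in the full methods section of the paper.

```python
# Illustrative sketch of the cue-category contingency (assumed 75/25 split;
# the actual probabilities and trial counts are in the full methods section).
import random

random.seed(1)

CUE_TO_LIKELY_CATEGORY = {"green": "chinese", "blue": "korean"}  # assumed mapping
P_EXPECTED = 0.75  # assumed probability that the cued category actually follows

def make_trials(n_trials_per_cue=100):
    trials = []
    for cue, likely in CUE_TO_LIKELY_CATEGORY.items():
        unlikely = "korean" if likely == "chinese" else "chinese"
        for _ in range(n_trials_per_cue):
            category = likely if random.random() < P_EXPECTED else unlikely
            trials.append({"cue_color": cue,
                           "character_category": category,
                           "expected": category == likely})
    random.shuffle(trials)
    return trials

trials = make_trials()
print(sum(t["expected"] for t in trials), "of", len(trials), "trials match the cue")
```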

Section snippets

Participants

Data from 18 right-handed, native Mandarin speakers (4 males, aged between 21 and 32 years) are reported in this paper. None of the subjects had reading disabilities, and all had normal or corrected-to-normal vision. Data from 5 additional subjects were excluded from the analysis due to low signal-to-noise ratios and from 2 additional subjects due to excessive eye blinks. Written consent forms were presented to participants before recording began. After the study, each participant received cash
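
The methods excerpt above is truncated, but the exclusion of subjects for excessive eye blinks and the citation of Jung et al. (2000) in the reference list suggest ICA-based handling of ocular artifacts. As a generic illustration of that approach (not the authors' actual preprocessing), the MNE-Python sketch below removes EOG-related components; the file name, EOG channel label, and ICA settings are assumptions.

```python
# Generic ICA-based ocular artifact removal (a sketch of the approach described
# by Jung et al., 2000, as implemented in MNE-Python; settings are illustrative).
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("sub-01_raw.fif", preload=True)   # hypothetical file
raw.filter(l_freq=1.0, h_freq=None)                          # high-pass aids ICA fitting

ica = ICA(n_components=20, method="infomax", random_state=97)
ica.fit(raw)

# Identify components correlated with the EOG channel (channel label assumed).
eog_indices, eog_scores = ica.find_bads_eog(raw, ch_name="EOG")
ica.exclude = eog_indices

raw_clean = ica.apply(raw.copy())
```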

Behavioral results

The average reaction times and hit rates to target stimuli for expected Chinese characters (ec), unexpected Chinese characters (uc), expected Korean characters (ek), and unexpected Korean characters (uk) are reported in Table 1.

For reaction times, a repeated-measures ANOVA revealed that the Chinese expectation condition (expecting Chinese) had shorter reaction times than the Korean expectation condition (expecting Korean) (script expectation: F(1, 17) = 39.99, p < .001). Moreover, the script
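
For readers who want to run this kind of analysis on their own data, the sketch below fits a 2 (expectation) × 2 (character script) repeated-measures ANOVA on per-subject mean reaction times with statsmodels. The file name and column names are placeholders, not the study's data.

```python
# Sketch of the 2 (expectation: Chinese vs. Korean) x 2 (character script)
# repeated-measures ANOVA on mean reaction times. The CSV layout is assumed.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one mean RT per subject and condition.
# Columns: subject, expectation ("chinese"/"korean"),
#          script ("chinese"/"korean"), rt (ms)
rt = pd.read_csv("mean_rt_long.csv")

aov = AnovaRM(data=rt, depvar="rt", subject="subject",
              within=["expectation", "script"]).fit()
print(aov)  # F and p values for the two main effects and their interaction
```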

Discussion

The present study investigated whether category-level expectation about upcoming characters could influence early visual-orthographic processing. To this end, we selected Chinese characters that were familiar to the Chinese participants and Korean characters that were unfamiliar. We presented them after frame cues whose colors evoked expectations regarding the category of the upcoming characters. The ERP results showed an interaction effect between category-level expectation and character

Conclusion

This study is among the first to investigate whether and when early visual-orthographic processing can be influenced by category-level expectations. Using familiar Chinese and unfamiliar Korean characters, and manipulating the probabilistic association between colored frame cues (presented before stimulus onset) and character categories, the findings demonstrated that category-level expectation already takes effect during early orthographic processing stages, and that the degree of such expectation effects depends on the

CRediT authorship contribution statement

Fang Wang: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing - original draft. Urs Maurer: Conceptualization, Formal analysis, Supervision, Writing - review & editing.

Declaration of competing interests

None.

Acknowledgements and funding

We thank all subjects for participating in this study. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

References (60)

  • L. Gamond et al. (2011). Early influence of prior experience on face perception. Neuroimage.
  • T.P. Jung et al. (2000). Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clin. Neurophysiol.
  • D. Lehmann et al. (1980). Reference-free identification of components of checkerboard-evoked multichannel potential fields. Electroencephalogr. Clin. Neurophysiol.
  • S.E. Lin et al. (2011). Left-lateralized N170 response to unpronounceable pseudo but not false Chinese characters—the key role of orthography. Neuroscience.
  • U. Maurer et al. (2006). Coarse neural tuning for print peaks when children learn to read. Neuroimage.
  • C.M. Michel et al. (2004). EEG source imaging. Clin. Neurophysiol.
  • C.J. Price et al. (2011). The interactive account of ventral occipitotemporal contributions to reading. Trends Cogn. Sci.
  • K. Rayner (1975). The perceptual span and peripheral cues in reading. Cogn. Psychol.
  • C. Summerfield et al. (2009). Expectation (and attention) in visual cognition. Trends Cogn. Sci.
  • C. Summerfield et al. (2016). Feature-based attention and feature-based expectation. Trends Cogn. Sci.
  • G. Xue et al. (2005). Cerebral asymmetry in children when reading Chinese characters. Brain Res. Cogn. Brain Res.
  • G. Xue et al. (2008). Language experience shapes early electrophysiological responses to visual stimuli: the effects of writing system, stimulus length, and presentation duration. Neuroimage.
  • J. Yang et al. (2012). Task by stimulus interactions in brain responses during Chinese character processing. Neuroimage.
  • J. Aru et al. (2016). Early effects of previous experience on conscious perception. Neurosci. Conscious.
  • S. Brem et al. (2005). Neurophysiological signs of rapidly emerging visual expertise for symbol strings. Neuroreport.
  • M. Chalk et al. (2010). Rapidly learned stimulus expectations alter perception of motion. J. Vis.
  • Y. Chen et al. (2013). Task modulation of brain responses in visual word recognition as studied using EEG/MEG and fMRI. Front. Hum. Neurosci.
  • Y. Chen et al. (2015). Early visual word processing is flexible: evidence from spatiotemporal brain dynamics. J. Cogn. Neurosci.
  • A. Clark (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci.
  • L. Cohen et al. (2000). The visual word form area. Brain.