Introduction

Although the mechanisms underlying visual attention remain a topic of active research, it is widely accepted that the focal area of attention can vary in size (Castiello & Umiltà, 1992; C. Eriksen & St James, 1986; Henderson, 1991; Jonides, 1983; LaBerge, 1983; Lavie, 1995) and that processing is more efficient when the focal area is small, compared to when it is large (C. Eriksen & Schultz, 1979; C. Eriksen & Yeh, 1985; Pan & Eriksen, 1993; Umiltà, 1998). The Eriksen flanker task (B. Eriksen & Eriksen, 1974), which requires participants to respond to a target while ignoring target-congruent or -incongruent distractor stimuli, has proven to be a particularly useful tool for studying attention. Notably, Gratton, Coles, Sirevaag, Eriksen, and Donchin (1988) used the flanker task to reveal a time-related component to attentional processing efficiency, such that incongruent distractors cause markedly less processing interference as response times (RTs) increase. These and other behavioral and electrophysiological findings suggest that distractor stimuli have an influence on processing early in the trial, but that it decreases through time (Burle, Possamaï, Vidal, Bonnet, & Hasbroucq, 2002; Czernochowski, 2015; Nigbur, Schneider, Sommer, Dimigen, & Stürmer, 2015; Ridderinkhof, 2002). In their seminal theory, Eriksen and St. James (1986) provided the explanation that attention behaves as a zoom lens or shrinking spotlight that starts out wide and diffuse at the beginning of a trial and gradually focuses on the target.

The size and shape of the attentional spotlight has been extensively studied using mixtures of horizontally and vertically arranged flanker stimuli (Chen & Tyler, 2002; Cohen & Shoup, 1993; Livne & Sagi, 2011; Vejnović & Zdravković, 2015) and visual search paradigms (Hüttermann, Memmert, & Simons, 2014; Luck, Hillyard, Mangun, & Gazzaniga, 1989; Panagopoulos, Von Grünau, & Galera, 2004). By manipulating the spatial distance, position, and stimulus onset asynchrony of distractors relative to targets, for example, Pan and Eriksen (1993) concluded that the dimensions of the spotlight dynamically adjust from trial to trial based on the spatial configuration of the stimulus at hand. In line with these results, subsequent work showed that the spotlight can take on the shape of a ring (Müller & Hübner, 2002) or a Mexican hat (Müller, Mollenhauer, Rösler, & Kleinschmidt, 2005) or can be divided among non-contiguous locations (Dubois, Hamker, & VanRullen, 2009; Müller, Malinowski, Gruber, & Hillyard, 2003; Treue & Martinez-Trujillo, 2012) depending on the spatial arrangement of the stimuli and the demands of the task. Through group-level analyses of speed and accuracy, however, other studies have concluded that there is a dimensional bias to the spotlight, such that it is elliptical in shape, broadly distributed along the horizontal plane and narrowly distributed along the vertical plane (Andersen & Kramer, 1993; Feng, Jiang, & He, 2007; Hüttermann, Memmert, Simons, & Bock, 2013). While we acknowledge that the notions of a stimulus-dependent spotlight and a horizontal attention bias are not mutually exclusive, we contend that the extent to which these processing features trade off within individual participants has not been thoroughly investigated.

In the current study, we used a two-dimensional flanker task paradigm and a corresponding computational model to investigate the hypothesis that individuals vary in dimensional biases related to attentional allocation when controlling for the spatial configuration of stimuli across conditions. As shown in Fig. 1, stimuli were designed to be identical in spatial distribution across conditions in an effort to limit stimulus-dependent modulation of the spotlight. Within each condition, we manipulated the arrangement of target-congruent and -incongruent distractors that participants must ignore in order to indicate the direction of the center target. We fit two variants of a sequential sampling model of within-trial decision processing during the flanker task to each participant’s data, which allowed us to calculate parameter estimates based on trial-level choices and RTs (Weichart, Turner, & Sederberg, 2020). Both model variants contain an attentional spotlight, implemented as the density function for a bivariate normal distribution that narrows onto the target throughout the decision process. The shape of the spotlight is specified by separate horizontal and vertical standard deviation (SD) parameters. In the circular spotlight model, the shape parameters were constrained to be equal in order to reflect the horizontally and vertically symmetric spatial configuration of the stimuli. The alternative elliptical spotlight variant subsumes the circular spotlight model, and allows the horizontal and vertical shape parameters to take on different values to optimally fit each participant’s data, if needed. Our results show that an elliptical rather than a circular spotlight is favored for most subjects, and demonstrate notable variability between subjects in terms of horizontal or vertical biases.

Fig. 1

Examples of stimuli. Colors are used to highlight the contrasting configurations of left (red) and right (blue) arrows. All stimuli shown here contain a left-facing target, but analogous stimuli with right-facing targets were included in the experiment as well

Methods

Participants

Twenty-six undergraduate students were recruited from the University of Virginia to participate in exchange for partial course credit. All participants provided informed consent in accordance with the requirements of the Institutional Review Board at the university.

Stimuli and apparatus

A custom program using the State Machine Interface Library for Experiments (SMILE; https://github.com/compmem/smile) was written to present stimuli, track timing, and log responses. The experiment was administered on a desktop computer with Windows 10, connected to a 24-in., 1,920 × 1,080-pixel LED display with a refresh rate of 120 Hz. Stimuli were presented in white text on a dark gray background. Participants made responses using the outer two keys of a four-key Black Box ToolKit response pad. Stimulus arrays consisted of 13 left- or right-facing arrows arranged in a diamond formation. Distractor arrows took on one of five configurations, as illustrated in Fig. 1, and participants were asked to indicate the direction of the arrow in the center of the array while ignoring all distractors. The stimuli were designed in consideration of research demonstrating that distractor interference is positively correlated with the proximity of distractors to the target (Andersen & Kramer, 1993; Feng et al., 2007; Pan & Eriksen, 1993). In the easy condition, all distractors were congruent to the target. In the moderate and hard conditions, distractors were incongruent to the target in the outer and inner layers of the array, respectively. Horizontal and vertical conditions were included to test for asymmetries in dimension-level response competition. On each trial, the array was presented in one of eight locations around the screen. Possible locations were equidistant from the center of the screen in increments of 45°. Task condition (easy, moderate, hard, horizontal, or vertical), target direction (left or right), and screen location (0, 45, 90, 135, 180, 225, 270, or 315°) were counterbalanced and randomized within block. Across eight blocks each consisting of 80 trials, participants completed a total of 640 trials.

Procedure

Participants provided informed consent and were seated in individual testing rooms. Written instructions and example stimuli appeared on the screen, and instructions were also provided verbally by an experimenter. A practice module with visual feedback for correct and incorrect responses was administered until the experimenter verified that the participant understood the task (~1 min). Prior to beginning the main task, the experimenter provided the following information: “You will complete eight blocks of the task, each lasting about 2 minutes. At the end of each block, you will receive a score based on speed and accuracy. Please try to get the highest score that you can.” Once the task began, a fixation cross appeared in the center of the screen and remained present for the duration of the block. Stimuli appeared on the screen and remained until a response was made, or for a maximum of 3,000 ms. Participants responded by pressing the leftmost key on the response pad if the arrow in the center of the array pointed left, and the rightmost key if the center arrow pointed right. Only responses made at least 150 ms after the stimulus appeared were recorded. At the end of each block, participants received a composite score between 0 and 100 that was calculated as shown in Equations 1–3:

$$ accuracy=\frac{\frac{N_{correct}}{N_{total}}-0.5}{0.5} $$
(1)
$$ speed=\frac{\sum \limits_{i\in I}\frac{\log\ \left({RT}_{max}+1.0\right)-\log\ \left(i+1.0\right)}{\log\ \left({RT}_{max}+1.0\right)-\log\ \left({RT}_{min}+1.0\right)}}{N_{total}} $$
(2)
$$ score= speed\ast accuracy\ast 100 $$
(3)

where I is a vector of RTs in seconds. Within this scoring metric, performance across conditions was scaled between chance (0.5) and perfect accuracy (1.0), and RTs were scaled to fall within an expected range of RTmin = 350 ms to RTmax = 1,350 ms. Log transforms were used in Equation 2 to correct for rightward skew in the RT distributions, and 1.0 was added to prevent negative log RT values. To earn a high score, participants needed to respond both quickly and accurately in all conditions.
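As a concrete illustration, the composite score in Equations 1–3 can be computed as follows. This is a minimal sketch: the function name and interface are ours, not part of the published experiment code.

```python
import numpy as np

def block_score(rts, n_correct, rt_min=0.35, rt_max=1.35):
    """Composite block score from Equations 1-3.

    rts: response times (in seconds) for every trial in the block.
    n_correct: number of correct responses in the block.
    """
    rts = np.asarray(rts, dtype=float)
    n_total = len(rts)
    # Equation 1: accuracy rescaled so chance (0.5) -> 0 and perfect (1.0) -> 1
    accuracy = (n_correct / n_total - 0.5) / 0.5
    # Equation 2: log-scaled speed, averaged over trials
    num = np.log(rt_max + 1.0) - np.log(rts + 1.0)
    den = np.log(rt_max + 1.0) - np.log(rt_min + 1.0)
    speed = np.sum(num / den) / n_total
    # Equation 3: composite score between 0 and 100
    return speed * accuracy * 100.0
```

A participant responding perfectly at the fastest expected RT would score 100, while responses at the slowest expected RT, or accuracy at chance, drive the score to 0.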

Computational models

The base model in our current investigation was designed after the zoom lens theory of Eriksen and St James (1986), with decision and attention mechanisms implemented within a leaky-competing accumulator (LCA; Usher & McClelland, 2001) model framework. Specifically, we modified the LCA-control model of the flanker task described by Weichart et al. (2020) to accommodate two-dimensional stimuli. Details of the LCA-control model are provided in the original article, but will be summarized here. In LCA-control and other accumulator models, trial-level decisions are thought to result from the noisy build-up of evidence for competing response options up to a threshold (α). Evidence accumulation is governed by drift rates that reflect the strength of information provided by the stimulus, lateral inhibition (β), and passive decay through time (κ). Each accumulator i with drift rate ρi and current evidence xi is updated continuously as shown in Equation 4.

$$ {\displaystyle \begin{array}{c}{dx}_i=\left({\rho}_i-\kappa {x}_i-\beta \sum \limits_{j\ne i}{x}_j\right)\frac{dt}{\Delta t}+\xi \sqrt{\frac{dt}{\Delta t}}\\ {}{x}_i\to \mathit{\max}\left({x}_i,0\right)\end{array}} $$
(4)
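A single simulated race under Equation 4 might be sketched as below. The parameter values and function interface are illustrative assumptions rather than fitted estimates, and the drift rates are held constant here for simplicity; in the full LCA-control model they change within the trial as the spotlight shrinks.

```python
import numpy as np

def lca_trial(rho, alpha=1.0, kappa=0.2, beta=0.2, tau=0.3,
              dt=0.01, delta_t=0.1, max_t=3.0, rng=None):
    """Simulate one race between leaky, competing accumulators (Equation 4).

    Returns (choice_index, RT); choice_index is None if no accumulator
    reaches threshold alpha before the deadline max_t.
    """
    rng = np.random.default_rng() if rng is None else rng
    rho = np.asarray(rho, dtype=float)
    x = np.zeros(len(rho))
    step = dt / delta_t
    t = 0.0
    while t < max_t:
        inhibition = beta * (x.sum() - x)   # lateral inhibition from the other accumulators
        noise = rng.standard_normal(len(x)) * np.sqrt(step)
        x = np.maximum(x + (rho - kappa * x - inhibition) * step + noise, 0.0)
        t += dt
        if x.max() >= alpha:                # first accumulator to exceed threshold wins
            return int(np.argmax(x)), t + tau
    return None, t + tau                    # no response before the deadline
```

With a strongly dominant drift rate, the corresponding accumulator wins on nearly every simulated trial, and RTs equal decision time plus the non-decision time τ.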

In our implementation, time is discretized via the Euler method, using a step size of dt = 0.01 modified by a time constant of Δt = 0.1. The degree of noise is represented as a driftless Wiener process distributed as \( \xi \sim \mathcal{N}\left(0,1\right) \). Responses are made in favor of the first accumulator to exceed α, and RTs are equal to the duration of the decision process plus a non-decision time parameter (τ). Similar to the shrinking spotlight model designed by White, Ratcliff, and Starns (SSP; 2011), LCA-control features an attentional spotlight that gradually focuses on the target throughout the trial. Unlike the original SSP, the spotlight shrinks due to the dynamics of the decision process and not simply due to the passage of time. In previous work, the LCA-control model outperformed alternative time-based spotlight models in terms of fits to data from two variants of the flanker task, uniquely demonstrated an ability to capture nuanced differences in behavior between subjects, and accurately predicted trial-level buildup in EEG activity related to visual attention (Weichart et al., 2020). For our current purposes, the spotlight takes the form of a density function for a bivariate normal distribution centered on the target stimulus with initial horizontal and vertical SDs of sd0(h) and sd0(υ), respectively. The spotlight shrinks as a function of an online measure of cognitive control (c), modified by a rate parameter (rd) as shown in Equations 5 and 6.

$$ {sd}_a(h)={sd}_0(h)-{r}_dc $$
(5)
$$ {sd}_a\left(\upsilon \right)={sd}_0\left(\upsilon \right)-{r}_dc $$
(6)

A ratio parameter Θ governs the relationship between sd0(h) and sd0(υ), such that \( \Theta =\frac{sd_0(h)}{sd_0\left(\upsilon \right)} \). Θ was fixed to 1.0 when fitting the circular spotlight model, but was a free parameter when fitting the elliptical spotlight model. The behaviors of the attentional spotlights in the circular and elliptical spotlight models are illustrated in Fig. 2.
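In code, the shrinking of the two shape parameters (Equations 5 and 6) under the ratio constraint Θ = sd0(h)/sd0(υ) might be sketched as follows. The positivity floor is our assumption, since the equations alone would permit negative SDs for large values of c.

```python
def spotlight_sds(sd0_h, theta, r_d, c, floor=0.001):
    """Current spotlight SDs after shrinking by control signal c (Equations 5-6).

    theta = sd0(h) / sd0(v); theta = 1.0 recovers the circular spotlight model.
    The floor keeping SDs positive is an illustrative assumption.
    """
    sd0_v = sd0_h / theta                 # initial vertical SD from the ratio parameter
    sd_h = max(sd0_h - r_d * c, floor)    # Equation 5
    sd_v = max(sd0_v - r_d * c, floor)    # Equation 6
    return sd_h, sd_v
```

Because both SDs shrink by the same amount r_d·c, the spotlight's absolute asymmetry is preserved as it narrows onto the target.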

Fig. 2

Representation of the shrinking two-dimensional spotlight of visual attention. Over the course of a trial, illustrated from left to right, the spotlight shrinks and focuses on the center arrow of a stimulus array, rendered in each subplot at z = 0. The strength of visual attention allocated to each arrow in a stimulus array is calculated from the density of a bivariate normal distribution within the corresponding unit square. The top panels show a circular spotlight, such that the shape parameters of the bivariate normal are constrained to be equal. The bottom panels show an elliptical spotlight, such that the two shape parameters are free to vary

The mechanism for cognitive control is based on descriptions of reactive control discussed in Braver’s dual mechanisms of control framework (Braver, 2012; Braver, Gray, & Burgess, 2008; De Pisapia & Braver, 2006), and is calculated as the cumulative distance between the total evidence and a conflict resolution threshold, δ. The continuous change in c is given by Equation 7.

$$ dc=\left(\delta -\sum \limits_{i\in \left\{1,2\right\}}{x}_i\right)\frac{dt}{\Delta t} $$
(7)

Each individual arrow in a stimulus array occupies one square unit of perceptual space, and the spotlight is centered on the target. Drift rates corresponding to correct (ρ1) and incorrect (ρ2) responses are calculated as the total volume of the spotlight allocated to target-congruent and target-incongruent arrows, respectively. Within each unit square of the 5 × 5 stimulus array, we used the trapezoidal method to estimate the bivariate probability density at 100 equally spaced points (Kalambet, Kozmin, & Samokhin, 2018). The spotlight volume allocated to each unit square was then estimated from the integral of the density values over the range of interest, multiplied by the perceptual strength of a single arrow (p), as shown in Equation 8.

$$ V=p{\int}_n^{n+1}{\int}_m^{m+1}\frac{1}{2\pi {sd}_a(h){sd}_a\left(\upsilon \right)}\mathit{\exp}\left(-\frac{1}{2}\left[\frac{x^2}{sd_a{(h)}^2}+\frac{y^2}{sd_a{\left(\upsilon \right)}^2}\right]\right) dydx $$
(8)

Values of dx and dy were set to 0.1, and (x, y) coordinates fell within the unit square occupied by an arrow with vertices (n, m), (n + 1, m), (n + 1, m + 1), (n, m + 1), such that n, m ∈ {−2.5, −1.5, −0.5, 0.5, 1.5}. A summary of free parameters in the two models and their respective prior distributions is shown in Table 1.
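The volume calculation in Equation 8 can be sketched as below, using a composite trapezoidal rule over corner points at 0.1 spacing within each unit square (close to the grid described in the text). The function names and interface are our choices for illustration.

```python
import numpy as np

def cell_volume(n, m, sd_h, sd_v, p=1.0, step=0.1):
    """Spotlight volume over the unit square with lower-left corner (n, m) (Equation 8).

    Approximates the double integral of the bivariate normal density
    (centered on the target at the origin) with a 2-D trapezoidal rule,
    scaled by the perceptual strength p of a single arrow.
    """
    xs = np.arange(n, n + 1.0 + step / 2, step)
    ys = np.arange(m, m + 1.0 + step / 2, step)
    X, Y = np.meshgrid(xs, ys)
    dens = np.exp(-0.5 * ((X / sd_h) ** 2 + (Y / sd_v) ** 2)) / (2 * np.pi * sd_h * sd_v)
    # composite trapezoid: halve the weights on the grid boundary
    w = np.ones_like(dens)
    w[0, :] *= 0.5; w[-1, :] *= 0.5; w[:, 0] *= 0.5; w[:, -1] *= 0.5
    return p * np.sum(dens * w) * step * step

def drift_rates(congruent_cells, incongruent_cells, sd_h, sd_v, p=1.0):
    """Drift rates as total spotlight volume over target-congruent (rho1)
    and target-incongruent (rho2) arrow locations."""
    rho1 = sum(cell_volume(n, m, sd_h, sd_v, p) for n, m in congruent_cells)
    rho2 = sum(cell_volume(n, m, sd_h, sd_v, p) for n, m in incongruent_cells)
    return rho1, rho2
```

Because the density is greatest near the target, cells adjacent to the center contribute more to the drift rates than cells in the outer layer, which is what makes the hard condition (incongruent inner layer) more difficult than the moderate condition.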

Table 1 Model free parameters and priors

Model fitting and assessment

We used the model-fitting procedures described in detail by Weichart et al. (2020) with probability density approximation methods (PDA; Turner & Sederberg, 2014) that were implemented via custom programs in RunDEMC (https://github.com/compmem/RunDEMC). Broadly, fitting a model to a set of trial-level data from an individual subject was a six-step process: First, we specified the relevant model as a system of equations, prior distributions that were determined through a series of pilot investigations, and starting values for each free parameter. Next, the model was simulated 30,000 times using the starting set of parameter values. This step generated distributions of data in each task condition. The probability density function for the simulated data was then approximated using an Epanechnikov kernel (Turner & Sederberg, 2014; Turner, Sederberg, & McClelland, 2016). The estimated functional form of the simulated data is an approximation of the likelihood function of each observed response under the current set of parameter values. The posterior density of the parameter set was calculated as a combination of the likelihood function and the priors. A new proposal parameter set was then selected using differential evolution with Markov chain Monte Carlo (DE-MCMC; Ter Braak, 2006; Turner & Sederberg, 2012; Turner, Sederberg, Brown, & Steyvers, 2013), a genetic algorithm that selects parameter values according to the success of previous proposals. This procedure was implemented for 800 iterations across 90 chains. Goodness of fit was assessed using estimates of Bayes factor, which allowed us to make inferences about the strength of evidence in favor of one model over the other using the scale described by Kass and Raftery (1995). We first calculated the Bayesian information criterion (BIC; Schwarz, 1978) value for each model using the equation

$$ BIC=-2\log \left(L\left(\hat{\theta}\mid D\right)\right)+p\log (N), $$
(9)

where \( L\left(\hat{\theta}\mid D\right) \) is the likelihood of the data D evaluated at the maximum likelihood estimate of the parameter set \( \hat{\theta} \), N is the number of trials, and p is the number of free parameters. We then approximated the Bayes factor by comparing the BIC values for the two candidate models using the following equation (Kass & Raftery, 1995):

$$ {BF}_{i,j}\approx \mathit{\exp}\left[-\frac{1}{2}\left({BIC}_i-{BIC}_j\right)\right]. $$
(10)
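Equations 9 and 10 are simple to express in code (a minimal sketch; the function names are ours):

```python
import math

def bic(max_log_like, n_trials, n_params):
    """Equation 9: BIC from a model's maximum log likelihood."""
    return -2.0 * max_log_like + n_params * math.log(n_trials)

def log_bayes_factor(bic_i, bic_j):
    """Natural log of the Bayes factor approximation in Equation 10.

    Positive values favor model i, i.e., the model with the lower BIC.
    """
    return -0.5 * (bic_i - bic_j)
```

Note that the second free parameter in the elliptical model (Θ) increases its BIC penalty by log(N), so the elliptical model is only favored when the added flexibility yields a sufficiently large improvement in likelihood.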

Results

Behavioral results

Performance scores based on speed and accuracy were calculated for each participant and task condition using Equations 1–3. As shown in panels A, B, and C of Fig. 3, we observed the expected pattern of decreasing performance from the easy to the moderate condition (accuracy: t(25) = 3.48, p = 0.0018; speed: t(25) = -10.75, p < 0.0001; composite score: t(25) = 9.65, p < 0.0001) and from the moderate to the hard condition (accuracy: t(25) = 5.54, p < 0.0001; speed: t(25) = -6.21, p < 0.0001; composite score: t(25) = 8.74, p < 0.0001). Performance was also better across participants in the horizontal compared to the vertical condition when considering composite scores, and this effect was driven by faster RTs rather than a difference in accuracy (accuracy: t(25) = 1.01, p = 0.3244; speed: t(25) = -4.20, p = 0.0003; composite score: t(25) = 3.92, p = 0.0013). Panels D, E, and F of Fig. 3 provide more nuanced insight into the latter comparison. The majority (20 out of 26) of participants performed better in the horizontal compared to the vertical condition, but some participants (six out of 26) displayed the opposite pattern of results. This indicates that most participants were better at ignoring incongruent distractors that were placed immediately above and below the target, compared to those that were placed immediately to the left and right. Model predictions shown in panels A, B, and C of Fig. 3 are discussed in the Model results section below.

Fig. 3

Results. Top row: Observed and model-predicted scores within condition. Bars show observed mean accuracy (A), mean response times (RTs) (B), and composite scores (C). Triangular and circular points show mean scores predicted by the elliptical and circular spotlight models, respectively. Error bars show 95% confidence intervals across subjects. Bottom row: Observed performance differences, horizontal vs. vertical conditions. (D) Points show χ2 statistics from tests comparing subject-level frequencies of correct and incorrect responses in the horizontal and vertical conditions. Values were plotted negatively if raw vertical accuracy exceeded raw horizontal accuracy. Points outside of the gray panel denote significance (α = 0.05, critical value = ±3.841). (E) Points show t statistics from independent-samples t-tests comparing trial-level RTs in the horizontal and vertical conditions for each subject. Points outside of the gray panel denote significance (α = 0.025, critical value = ±1.969). (F) Differences in composite scores in the horizontal and vertical task conditions, calculated within-subject. Subjects in panels D and E are sorted according to panel F

Model results

Before interpreting our results, we compared maximum log-likelihood (MLL) estimates from the circular and elliptical spotlight models at the level of each subject as a check on our fitting procedures. Because the elliptical model subsumes the circular variant, the MLL for the elliptical model should always be greater than or approximately equal to that of the circular model. We indeed found this to be true for all subjects. We then estimated Bayes factor values to compare the two models, in order to account for model complexity in addition to MLL when assessing model performance. Results are shown in Fig. 4. Log Bayes factor values favor the circular model (negative values) in cases where the addition of the free parameter Θ did not improve the fit of the model (six out of 26 subjects). For 20 out of 26 subjects, however, the additional flexibility of the elliptical model provided improved fits as determined by Bayes factor estimates. Evidence was “strong” or “very strong” for 16 out of 26 subjects in favor of the elliptical model, and for one out of 26 subjects in favor of the circular model.

Fig. 4

Subject-level differences in Bayes factor estimates comparing the elliptical spotlight and circular spotlight models. The elliptical model outperforms the circular model for the majority of subjects (higher estimated values indicate a stronger win for the elliptical model). Points that fall outside of the light gray range (− log (10) < log (BF) < log (10)) indicate “strong” evidence for one model over the other, and points that fall outside of the dark gray range (− log (100) < log (BF) < log (100)) indicate “very strong” evidence (Kass & Raftery, 1995)

We next wanted to gain insight into the range of spotlight dimensions calculated within the elliptical model. We first determined the scaled difference between horizontal and vertical standard deviations (Dh, υ) from each subject’s best-fitting parameter values in the elliptical spotlight model using Equation 11:

$$ {D}_{h,\upsilon }=\frac{sd_0(h)-{sd}_0\left(\upsilon \right)}{\frac{sd_0(h)+{sd}_0\left(\upsilon \right)}{2}} $$
(11)

where sd0(υ) = sd0(h)/Θ. The left panel of Fig. 5 shows the range of best-fitting spotlight shapes for each subject: data from 17 out of 26 subjects were best fit by a horizontally biased spotlight (points above y = 0.0), and data from nine out of 26 subjects were best fit by a vertically biased spotlight (points below y = 0.0). As shown in the right panel of Fig. 5, we identified a positive correlation between the extent of horizontal bias in spotlight shape and the extent of performance benefit in the horizontal relative to the vertical task condition, when considering composite scores (R2 = 0.31). The direction of this relationship indicates that participants with a horizontally biased attentional spotlight are better equipped to ignore distractors placed above and below the target compared to those placed to the left and right of the target, and participants with a vertically biased spotlight show the opposite performance benefit. When considering the relationship between horizontal bias and performance in terms of speed and accuracy separately, we find that the shape of the spotlight has a stronger correlation with horizontal versus vertical differences in speed (R2 = 0.25) compared to accuracy (R2 = 0.07).
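Given the ratio definition Θ = sd0(h)/sd0(υ), the scaled difference in Equation 11 can be computed directly from the fitted sd0(h) and Θ. This is a minimal sketch with a function name of our choosing:

```python
def scaled_dimension_difference(sd_h, theta):
    """Equation 11: scaled difference between horizontal and vertical SDs.

    theta = sd0(h) / sd0(v), so sd0(v) = sd0(h) / theta.
    Positive values indicate a horizontally biased spotlight,
    negative values a vertically biased one, and 0 a circular one.
    """
    sd_v = sd_h / theta
    return (sd_h - sd_v) / ((sd_h + sd_v) / 2.0)
```

Dividing by the mean of the two SDs makes the measure comparable across subjects whose spotlights differ in overall size.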

Fig. 5

Model results. Left panel: Subject-level scaled differences in best-fitting horizontal and vertical shape parameters for the spotlight in the elliptical spotlight model. Right panel: Behavior (x-axis) vs. scaled differences in parameter estimates (y-axis). Spotlight shape asymmetry predicts behavioral performance differences in the horizontal and vertical conditions

To investigate why the elliptical model tended to fit better than the circular variant, we simulated data from each model and compared it to the behavior that our participants actually produced. We first found best-fitting parameters for each model and subject by identifying the particle in the joint posterior with the maximum weight. Using best-fitting parameters and the relevant model, we generated 10,000 choice-RT trials in each condition. We were then able to calculate a performance score for each model and subject using Equations 1–3. Ranges of condition-level scores generated by each model are shown in Fig. 3, alongside the observed data. Panels A, B, and C of Fig. 3 show that both models predict the observed pattern of decreasing accuracy, speed, and composite performance as we move from the easy, to the moderate, to the hard conditions. The elliptical model also captures the observed pattern of better performance in the moderate and horizontal conditions, compared to the hard and vertical conditions. Given that data from most participants favored a horizontally biased spotlight (see Fig. 5), elliptical model predictions reflect the fact that the moderate and horizontal stimuli contained identical configurations of distractors along the horizontal midline, as did the hard and vertical stimuli (see Fig. 1). While the elliptical model made accurate predictions in the moderate and horizontal conditions, it was unable to capture the pattern of better performance in the vertical condition compared to the hard condition. We suspect this is due to perceptual continuity or grouping effects that are not currently accounted for in the model, but that would disproportionately affect performance in the hard condition due to the deliberately “interrupted” configuration of the distractors in the array (Livne & Sagi, 2010; Logan, 1996; Manassi, Sayim, & Herzog, 2012).
Problems arising from a lack of perceptual continuity-related mechanisms appear to be exacerbated by over-constraint of the spotlight dimensions in the circular model. In addition to overestimating performance in the hard condition, the circular model underestimates performance in both the horizontal and the vertical conditions. Because the spotlight in the circular model is constrained to be round, it is unable to predict the observed differences in performance between the horizontal and vertical conditions that are shown in panels D, E, and F of Fig. 3.

Discussion

Here, we used model-based analyses to investigate individual differences in the shape of the attended visual area while subjects responded to two-dimensional flanker task stimuli. Given evidence of an attentional spotlight that dynamically adjusts to the spatial configuration of stimuli from trial to trial (Pan & Eriksen, 1993), we developed stimulus arrays that were perceptually identical across conditions in order to identify subject-level biases in spotlight dimensions. We constructed two variants of a model that was designed after the zoom lens hypothesis of visual attention (C. Eriksen & St James, 1986; Weichart et al., 2020). Consistent with previous results (Andersen & Kramer, 1993; Feng et al., 2007; Hüttermann et al., 2013), we found that most participants use an elliptical spotlight, specifically one with a horizontal bias. This was not the case for all subjects, however, and we identified a widely variable range of spotlight configurations.

Previous studies investigating dimensional biases in visual attention have inferred the shape of the spotlight based on group-level differences in responses to horizontally and vertically configured stimuli. To explore individual differences in attentional biases, our model-based methods allowed us to account for the spotlight configuration’s critical effects on the cascade of mechanisms that ultimately results in a specific pattern of behavior. Our results therefore add to the existing literature on attentional allocation, highlighting individual differences in dimensional biases that should be considered in future work. One avenue for further study will be to test the stability of the baseline spotlight shape in the context of different task demands. Because all of the stimuli in the current study contained left- or right-facing arrows, participants were potentially biased to implement a horizontally oriented spotlight. Follow-up work will additionally use stimuli consisting of up- or down-facing arrows to help determine the extent to which task demands influence the baseline shape of the spotlight, and the extent to which spotlight malleability varies between subjects.

Our results present compelling evidence of dimension-dependent differences in attentional processes that have not been considered in existing mechanistic models of the flanker task. All other within-trial flanker decision models have been designed to fit data from stimulus arrays oriented along a single horizontal plane (Hübner, Steinhauser, & Lehle, 2010; Ulrich, Schröter, Leuthold, & Birngruber, 2015; White et al., 2011). Real-world challenges to visual attention, however, require integrated processing across multiple spatial dimensions. Our results indicate that, by failing to consider multidimensional stimuli, these models are potentially missing an important source of variability between individuals in visual processing mechanisms.

Open practices statement

The data, stimuli, and experiment code used in the current study are available via the Center for Open Science at https://osf.io/nef6j/ (DOI: 10.17605/OSF.IO/NEF6J).