Open Access | Short Research Article

Effects of Input Modality on Vocal Effector Prioritization in Manual–Vocal Dual Tasks

Published Online: https://doi.org/10.1027/1618-3169/a000479

Abstract

Doing two things at once (vs. one in isolation) usually yields performance costs. Such decrements are often distributed asymmetrically between the two actions involved, reflecting different processing priorities. A previous study (Huestegge & Koch, 2013) demonstrated that the particular effector systems associated with the two actions can determine the pattern of processing priorities: Vocal responses were prioritized over manual responses, as indicated by smaller performance costs (associated with dual-action demands) for the former. However, this previous study only involved auditory stimulation (for both actions). Given that previous research on input–output modality compatibility in dual tasks suggested that pairing auditory input with vocal output represents a particularly advantageous mapping, the question arises whether the observed vocal-over-manual prioritization was merely a consequence of auditory stimulation. To resolve this issue, we conducted a manual–vocal dual task study using either only auditory or only visual stimuli for both responses. We observed vocal-over-manual prioritization in both stimulus modality conditions. This suggests that input–output modality mappings can (to some extent) attenuate, but not abolish/reverse effector-based prioritization. Taken together, effector system pairings appear to have a more substantial impact on capacity allocation policies in dual-task control than input–output modality combinations.

Typical experiments in multitasking research often focus on tasks involving a rather restricted range of effector systems (mostly manual key presses; see, e.g., Pashler, 1994). While such restrictions can be helpful to ensure a highly controlled experimental situation, everyday life often confronts us with challenges requiring the coordination of different effector systems simultaneously (cross-modal action; see Huestegge & Hazeltine, 2011; Huestegge, Pieczykolan, & Koch, 2014). However, the impact of (combinations of) effector systems on multiple-action (or dual-task) control has largely been disregarded in previous research and theories (e.g., Logan & Gordon, 2001; Meyer & Kieras, 1997; Navon & Miller, 2002; Tombu & Jolicœur, 2003).

A study that explicitly focused on the impact of effector system combinations on multiple-action control was conducted by Huestegge and Koch (2013). They had participants respond to a single auditory stimulus (presented to the left/right ear) with either a single oculomotor, vocal, or manual response, or with two of these responses simultaneously. An analysis of the pattern of performance costs (i.e., response time [RT] difference between single- and dual-response conditions for each effector system) revealed an asymmetrical distribution of these costs across all pairwise combinations of effector systems: Oculomotor responses were associated with smaller costs than vocal and manual responses, while vocal costs were only large when combined with oculomotor (but not with manual) responses. Finally, manual costs were substantial throughout. Interestingly, this pattern could not be explained in terms of the overall response time levels of the effector systems (e.g., vocal responses were slower than manual responses but were nevertheless associated with smaller performance costs). Therefore, Huestegge and Koch (2013) interpreted these findings as evidence for a generic capacity allocation policy among responses based on an ordinal effector system hierarchy (see also Pieczykolan & Huestegge, 2014): Oculomotor responses are assumed to be prioritized over vocal and manual responses, while vocal responses are prioritized over manual responses. The specific effector systems are probably anticipated early during task processing, and a corresponding capacity allocation policy is implemented accordingly to eventually select and execute these responses (i.e., an anticipatory mechanism similar to action effect anticipation as assumed by ideomotor theories; see, e.g., Badets, Koch, & Philipp, 2016; Pfister, 2019, for reviews).

However, a potential alternative explanation of these findings comes from studies on input–output modality compatibility (IOMC) effects. IOMC effects refer to the influence of the combination of sensory systems and effector systems on dual-task performance. A dual-task setting involving a visual–manual task in combination with an auditory–vocal task (referred to as compatible mapping) has been reported to yield smaller dual-task costs than a dual-task setting with a reversed (incompatible) modality mapping (i.e., visual–vocal and auditory–manual; Göthe, Oberauer, & Kliegl, 2016; Halvorson, Ebner, & Hazeltine, 2013; Stelzel & Schubert, 2011; Stelzel, Schumacher, Schubert, & D'Esposito, 2006). Analogous findings have been observed in other multitasking paradigms such as psychological refractory period studies (Maquestiaux, Ruthruff, Defer, & Ibrahime, 2018) and task-switching studies with respect to switch costs (Stephan & Koch, 2010, 2011; Stephan, Koch, Hendler, & Huestegge, 2013) and mixing costs (Hazeltine, Ruthruff, & Remington, 2006; Schacherer & Hazeltine, 2019). The advantage of compatible mappings has usually been explained by referring to the similarity of stimuli to typical effects associated with certain actions: For example, vocal actions are typically followed by auditory effects (ideomotor account; see Greenwald, 1972, 2003; Stephan & Koch, 2010, 2011).

Given that Huestegge and Koch (2013) only utilized auditory stimuli, it is possible that this setting created a particular IOMC-like advantage for vocal (vs. manual) action demands, which may have resulted in vocal-over-manual prioritization. This potential alternative explanation gains further credibility from other previous reports: Some studies, in fact, reported greater dual-task costs for vocal than for manual responses (e.g., Fagot & Pashler, 1992; Holender, 1980; Schumacher et al., 2001), and these studies involved visual stimuli (or only one fixed assignment of stimulus-to-response modalities in the case of Schumacher et al., 2001). Therefore, a systematic examination of the role of stimulus modality in effector prioritization in vocal–manual dual tasks is still lacking.

The present study was conducted to rule out that the vocal-over-manual prioritization reported previously (e.g., Huestegge & Koch, 2013) was simply due to the use of auditory stimuli (i.e., due to IOMC-like effects). Therefore, we conducted a study requiring manual and vocal responses, in which we explicitly manipulated the input modality by using only visual stimuli in one condition and only auditory stimuli in another condition. In principle, two study design options appeared feasible: First, it is possible to closely replicate the setup used by Huestegge and Koch (2013), in which one aspect of a single stimulus determined both actions in dual conditions (e.g., a left tone requires participants to always respond with both a left key press and uttering the word left). Second, it is possible to implement a more typical dual-task setup, in which two stimulus aspects independently determine the actions required in the two effector systems (e.g., a high-frequency tone in the left ear requires pressing the right key but uttering left). As we recently demonstrated that the same effector-based hierarchy can be observed in both types of setups (Hoffmann, Pieczykolan, Koch, & Huestegge, 2019), we decided to follow the second approach here. Such a typical dual-task setup also maps more directly onto most current dual-task research and theories, which usually assume two independent response selection processes.

Across blocks of trials, participants responded with a single vocal response, a single manual response, or both responses to either a visual or an auditory stimulus. Single-task blocks involved responding to only one dimension of a stimulus, while dual-task blocks involved responding to two different dimensions of that same stimulus. In the auditory stimulus condition, a tone was presented, and tone pitch and location were each assigned to one of the two effector systems. In the visual stimulus condition, the letters “L” and “R” were presented either in correct or in mirrored orientation, so that letter identity and orientation were distinct visual stimulus dimensions, each assigned to one effector system (see Table 1 for an illustration of all possible stimuli in the visual or auditory domain and the possible instructed responses).

Table 1 Overview of possible stimuli and responses (left/right manual key press or vocal utterance “left”/“right”).

If IOMC-like effects were the main reason behind vocal-over-manual prioritization, one would expect manual-over-vocal prioritization (indexed by relatively smaller dual-task costs) when visual stimuli trigger both responses. However, if effector system pairings are stronger determinants for capacity allocation than IOMC mappings, one would expect vocal-over-manual prioritization, irrespective of stimulus modality. Nevertheless, in the latter case, it is still possible that IOMC-like effects attenuate the strength of vocal-over-manual prioritization.

Method

Participants

A power analysis (based on the effect size and the between-measurement correlation of .014 observed by Huestegge & Koch, 2013, in the relevant vocal–manual combination group; α = 5%, 1 − β = 95%) suggested a minimum sample size of 13 participants. Due to counterbalancing and because we were also interested in a potential interaction of effector system, task condition, and stimulus modality, 32 participants took part. Four participants were excluded because they produced too many (>33%) invalid trials (i.e., trials involving omission/commission errors, outliers, or trials in which a saccade was executed prior to the required response, as such eye movements were shown to affect response latencies in other effector systems; see Huestegge, 2011; Huestegge & Adam, 2011). To ensure full counterbalancing, we recollected these data by testing four new participants. The final sample (26 females) had a mean age of 29.5 years (SD = 10.2). All had normal or corrected-to-normal vision and hearing, were right-handed, and were naïve regarding the purpose of the study. Participants gave informed consent and received a monetary reward or course credits for participation.
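The effect size entered into this power analysis did not survive the text extraction above, so the exact computation cannot be reproduced here. Purely as an illustration of how such a sample-size estimate can be derived for a paired comparison, the following sketch uses a placeholder effect size together with the reported correlation of .014; apart from the correlation, α, and power, the numerical inputs are not the authors' values.

```python
# Illustrative sketch only: sample-size estimation for a paired t-test.
# The between-condition effect size is a placeholder, not the authors' input.
from math import sqrt, ceil
from statsmodels.stats.power import TTestPower

d_between = 1.5   # hypothetical standardized difference between conditions
r = 0.014         # between-measurement correlation reported in the text
alpha, power = 0.05, 0.95

# Convert the between-condition effect size into d_z for difference scores,
# assuming equal standard deviations in both conditions.
d_z = d_between / sqrt(2 * (1 - r))

n = TTestPower().solve_power(effect_size=d_z, alpha=alpha, power=power,
                             alternative="two-sided")
print(f"required sample size: {ceil(n)} participants")
```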

Apparatus and Stimuli

Participants were seated 67 cm in front of a 21″ cathode ray tube screen (temporal resolution: 100 Hz, spatial resolution: 1,024 × 768 pixels) with a standard German QWERTZ keyboard and in front of a Sennheiser e835-S microphone (Sennheiser electronic GmbH & Co. KG, Wedemark, Germany). Participants wore supra-aural headphones (Sennheiser PMX 95). Experiment Builder (version 2.1.140, SR Research Ltd., Ottawa, Ontario, Canada) was used to run the experiment and to log response events (left and right arrow key presses, both operated by the participant’s right index finger, and vocal RTs via the integrated voice key functionality). The content of the vocal response was recorded and registered online by the experimenter. An eye tracker (EyeLink 1000, SR Research) with a sampling rate of 1,000 Hz registered movements of the right eye in order to control for saccade occurrence (see the Participants section). During all blocks, a green fixation cross (approximate size: 0.4° of visual angle) on a black background was present at the screen center. To its left and right, two green squares (also approximately 0.4° in size) were displayed at an eccentricity of 8.5°, but these were irrelevant in the context of the present study (they were included to allow comparison of the present results with similar experiments from our lab involving instructed eye movements). The capital letters R and L served as visual stimuli (size: 0.6°, displayed about 0.4° above the fixation cross). They were either mirrored (pointing to the left side: ⅃ and Я) or not (pointing to the right side: L and R). Auditory stimuli were easily distinguishable sinusoidal tones of either high (1,000 Hz) or low (400 Hz) frequency, presented to either the left or the right ear.

Procedure and Design

At the beginning of each block, participants received both written and oral instructions, followed by a three-point horizontal calibration routine of the eye tracker. In each block, visual or auditory stimuli were presented, and participants were instructed to respond either vocally, manually, or with both responses (each as fast and accurately as possible, but without any instructions regarding response order, grouping, or prioritization). The instructions included information about the stimulus modality and the assignment of dimension to effector system (i.e., which stimulus component was assigned to which effector, e.g., responding manually to letter orientation and vocally to letter identity). In each trial, the stimulus was presented for 80 ms. All stimulus components (L vs. R, mirrored vs. not, high vs. low frequency, and presented to the left vs. right ear) occurred equally often in random order in each experimental condition. Participants responded by pressing the right or left arrow key, by uttering the word links or rechts (German for left/right), or both, depending on the current block. Responses always had to be given in a spatially congruent manner to the respective stimulus component (e.g., a left response to L, to a mirrored letter orientation, to a sound presented to the left ear, or to a low-frequency tone, the latter analogous to pitch–location mappings on a piano keyboard). Half of the dual-task trials involved response–response compatibility in the sense that a left key press was combined with uttering “left”, while the other half was incompatible. Trials were separated by an inter-stimulus interval of 3,000 ms. All participants experienced all 12 different block types twice: 3 (single manual, single vocal, dual task) × 2 (auditory, visual stimuli) × 2 (two possible assignments of stimulus component to effector system per stimulus modality). In total, each participant completed 24 blocks, each consisting of 32 trials.
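To make the resulting block and trial structure concrete, the following sketch reconstructs it in Python. This is an illustrative reconstruction only, not the original Experiment Builder script; the labels are our own, and the eight repetitions per stimulus combination simply follow from 32 trials divided by four equally frequent combinations.

```python
# Illustrative reconstruction of the block and trial structure (not the
# original Experiment Builder implementation; labels are made up here).
import itertools
import random

TASKS = ["single_manual", "single_vocal", "dual"]
MODALITIES = ["auditory", "visual"]
ASSIGNMENTS = ["assignment_1", "assignment_2"]  # stimulus dimension -> effector

# 3 x 2 x 2 = 12 block types, each administered twice -> 24 blocks.
block_types = list(itertools.product(TASKS, MODALITIES, ASSIGNMENTS)) * 2

def make_trials(modality, repeats_per_combination=8):
    """Fully cross the two stimulus dimensions of one modality and shuffle.

    4 stimulus combinations x 8 repeats = 32 trials per block, so each
    component occurs equally often in random order.
    """
    if modality == "visual":
        combos = itertools.product(["L", "R"], ["normal", "mirrored"])
    else:
        combos = itertools.product(["high", "low"], ["left_ear", "right_ear"])
    trials = [{"dim1": a, "dim2": b}
              for a, b in combos
              for _ in range(repeats_per_combination)]
    random.shuffle(trials)
    return trials

print(len(block_types), "blocks,", len(make_trials("visual")), "trials each")
```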

The sequence of conditions was counterbalanced across participants, apart from three restrictions intended to reduce confusion for participants. All participants started with one of the single-task conditions, followed either by the other single-task condition and then the dual-task condition, or by the dual-task condition and then the other single-task condition, all involving the same stimulus modality and the same assignment of stimulus component to task. Then, these three conditions were repeated once. This was followed by six blocks in the respective other stimulus modality condition with the same sequence of task conditions (e.g., single manual – single vocal – dual task); the sequence of task conditions stayed constant within participants. Next, the stimulus modality switched again, but now the assignment of stimulus components to effector systems was reversed compared with the first six blocks. The same applied to the six final blocks in the second stimulus modality.

The experimental 2 × 2 × 2 design involved three independent within-subject variables: effector system (manual vs. vocal), task condition (single vs. dual), and stimulus modality (auditory vs. visual). RTs and error rates served as dependent variables.

Results

Data Treatment

Trials involving omission or commission errors (in manual or vocal responses, 2.1%) and all trials in which a saccade was registered prior to the execution of the required manual and/or vocal response (8.4%) were defined as invalid and discarded. The same applied to outliers that were defined as responses executed faster or slower than two SDs of the individual mean in each condition (5.3%). This resulted in 84.4% valid data. Finally, directional errors (4.6% of valid data) in manual and/or vocal responses (e.g., uttering right instead of left) were excluded from RT data analyses.
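As a rough sketch of this exclusion pipeline (the column and file names are hypothetical; the authors' actual preprocessing scripts are not part of this article):

```python
# Sketch of the trial exclusion steps described above (column names are
# hypothetical; adapt them to the actual trial-level data file).
import pandas as pd

df = pd.read_csv("trials.csv")  # one row per trial and response

# 1. Drop omission/commission errors and trials with a saccade before the response.
valid = df[~df["omission_or_commission"] & ~df["saccade_before_response"]].copy()

# 2. Trim outliers: RTs deviating more than 2 SDs from the participant's mean
#    within each condition (effector x task condition x stimulus modality).
cell = ["participant", "effector", "task_condition", "modality"]
grouped = valid.groupby(cell)["rt"]
mean_rt, sd_rt = grouped.transform("mean"), grouped.transform("std")
valid = valid[(valid["rt"] - mean_rt).abs() <= 2 * sd_rt]

# 3. Directional errors stay in the error-rate analysis but are excluded
#    from the RT analyses.
rt_data = valid[~valid["directional_error"]]
rt_data.to_csv("rt_data.csv", index=False)
```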

Response Times

Absolute RT data are illustrated in Figure 1, while dual-task costs are depicted in Figure 2. RT data and error rates including dual-task costs are reported in Table 2. Data are publicly available under https://doi.org/10.5281/zenodo.3756790. Results of 2 × 2 × 2 analyses of variance (ANOVAs) on RT data and error rates are depicted in Table 3. The analysis of RT data revealed a significant main effect of the effector system, indicating that manual responses (765 ms) were overall faster than vocal responses (994 ms). There was a significant main effect of task condition, suggesting that dual-task conditions yielded performance costs of 373 ms overall (dual-task RTs: 1,066 ms vs. single-task RTs: 693 ms). We also observed a significant main effect of stimulus modality, indicating overall lower RTs in response to a visual (850 ms) than to an auditory stimulus (909 ms).

Figure 1 Mean RTs as a function of effector system, task condition, and stimulus modality. Error bars represent mean standard errors. RT = response time.
Figure 2 Dual-task costs as a function of effector system and stimulus modality. Error bars represent mean standard errors. RT = response time.
Table 2 Mean RTs, error rates, and dual-task costs (including SE) across effector systems, stimulus modalities, and task conditions.
Table 3 Overview of statistical test results (three-way ANOVAs) regarding RTs and error rates.
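The omnibus tests summarized in Table 3 correspond to a standard 2 × 2 × 2 repeated-measures ANOVA on per-participant condition means. A minimal sketch of such an analysis, assuming the hypothetical trial-level file from the preprocessing sketch above and a task-condition coding of single versus dual for each response:

```python
# Sketch of the 2 x 2 x 2 repeated-measures ANOVA on mean RTs (hypothetical
# column names; "task_condition" coded as single vs. dual for each response).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rt_data = pd.read_csv("rt_data.csv")

# Aggregate to one mean RT per participant and design cell.
means = (rt_data
         .groupby(["participant", "effector", "task_condition", "modality"],
                  as_index=False)["rt"]
         .mean())
means.to_csv("condition_means.csv", index=False)

anova = AnovaRM(means, depvar="rt", subject="participant",
                within=["effector", "task_condition", "modality"]).fit()
print(anova)
```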

Crucially, the interaction of effector system and task condition was significant, indicating a difference in dual-task costs between effector systems. Specifically, we observed smaller dual-task costs for vocal (306 ms, single-task RTs: 841 ms vs. dual-task RTs: 1,147 ms) than for manual responses (440 ms, single-task RTs: 545 ms vs. dual-task RTs: 985 ms). This was further qualified by a significant three-way interaction: Dual-task cost differences between effector systems varied between stimulus conditions. Note that this pattern persisted when excluding trials with an inter-response-interval below 100 ms, ensuring that the results were not driven by trials in which response grouping may have occurred.

Post-hoc paired sample t-test comparisons revealed significantly smaller vocal than manual dual-task costs in both stimulus conditions. In the auditory condition, we observed a difference of 178 ms, t(31) = 4.45, p < .001, d = 0.95, while in the visual condition, the difference in dual-task costs between effector systems amounted to 88 ms, t(31) = 2.35, p = .026, d = 0.60. Moreover, post-hoc paired sample t-test comparisons revealed that manual dual-task costs were significantly greater in the auditory than in the visual stimulus condition, t(31) = 4.58, p < .001, d = 0.63, while vocal dual-task costs did not significantly differ as a function of stimulus modality, t(31) = 1.54, p = .134, d = 0.29.
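The dual-task costs and post-hoc comparisons reported above follow directly from the condition means: costs are the dual-minus-single RT difference per effector, and the effector contrast within each stimulus modality is a paired t test on these costs. A minimal sketch under the same hypothetical data layout (the effect size is computed as dz on the difference scores, which may not match the authors' exact formula):

```python
# Sketch of dual-task cost computation and the post-hoc paired comparison
# between effector systems within one stimulus modality (illustrative only).
import pandas as pd
from scipy.stats import ttest_rel

means = pd.read_csv("condition_means.csv")  # participant, effector, task_condition, modality, rt

wide = (means
        .pivot_table(index=["participant", "modality", "effector"],
                     columns="task_condition", values="rt")
        .reset_index())
wide["cost"] = wide["dual"] - wide["single"]  # dual-task cost per design cell

aud = wide[wide["modality"] == "auditory"]
manual_cost = aud[aud["effector"] == "manual"].sort_values("participant")["cost"].to_numpy()
vocal_cost = aud[aud["effector"] == "vocal"].sort_values("participant")["cost"].to_numpy()

t, p = ttest_rel(manual_cost, vocal_cost)
diff = manual_cost - vocal_cost
d_z = diff.mean() / diff.std(ddof=1)
print(f"auditory condition: t = {t:.2f}, p = {p:.3f}, d_z = {d_z:.2f}")
```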

Additionally, the interaction of effector system and stimulus modality and the interaction of task condition and stimulus modality were significant. Post-hoc contrasts showed faster vocal RTs to visual stimuli (954 ms) than to auditory stimuli (1,033 ms), F(1, 31) = 16.76, p < .001, which was also observed for manual RTs, F(1, 31) = 4.71, p = .038 (784 ms for auditory, 746 ms for visual stimuli). The interaction of task condition and stimulus modality revealed overall smaller dual-task costs in the visual stimulus condition (331 ms, single-task RTs: 685 ms vs. dual-task RTs: 1,016 ms) than in the auditory stimulus condition (415 ms, single-task RTs: 701 ms vs. dual-task RTs: 1,116 ms).

Error Rates

Errors were relatively rare (4.6% in total). Nevertheless, there was a significant main effect of effector system, indicating more errors for manual (4.9%) than for vocal (3.8%) responses, and a main effect of task condition, indicating more errors in dual-task conditions (6.9%) than in single-task conditions (1.7%). The three-way interaction of effector system, task condition, and stimulus modality was also significant. Post-hoc paired sample t-test comparisons revealed greater dual-task costs for manual responses than for vocal responses in the visual stimulus condition, t(31) = 3.21, p = .003, d = 0.41, while there was no significant difference in dual-task costs between the two effector systems in the auditory stimulus condition, t(31) = 0.14, p = .887, d = 0.02.

Discussion

We compared dual-task costs associated with manual and vocal responses between conditions involving either only visual or only auditory stimuli. We used stimuli with two independent aspects (i.e., identity and orientation of letters in the visual domain, location and pitch in the auditory domain) in order to trigger the two responses independently. Therefore, we were able to examine and compare effector-based task prioritization effects for visual and auditory stimulation conditions.

Generally, our findings revealed significant dual-task costs in both effector systems throughout all conditions. The observation of significant dual-task costs for vocal responses (which were executed second in 74% of trials) differs from Huestegge and Koch (2013), who did not observe any performance costs for vocal responses, most likely because in that study the vocal response always had the same identity as the manual response (e.g., left), and hence there was no need to independently select the correct spatial code for the vocal response.

Most importantly, the present results confirm the effector-based prioritization pattern reported in Huestegge and Koch (2013) across all conditions. Specifically, there was a significant prioritization of vocal-over-manual responses (as indexed by greater dual-task costs in RTs for the latter) in both stimulus modality conditions, and this observation was not compromised by any reversed pattern in the error data. Thus, we can rule out that vocal-over-manual prioritization can only be observed with auditory stimuli as a result of IOMC-like effects. Therefore, the present results also suggest that effector system pairings have a greater effect on dual-task capacity allocation policies than input–output modality mappings, as the latter only had a negligible effect on the performance cost pattern. We assume that this prioritization is rooted in an effector-based allocation of capacity to response selection processes. As the effector system associated with a response is essentially an execution-related characteristic, it appears likely that it is already anticipated at an early stage during task processing, prior to assigning capacity to the individual response selection processes (see also Hoffmann et al., 2019). This view is in line with other suggestions implying that response-related features (e.g., proximal and distal effects associated with responses, as in ideomotor theories) are anticipated and thereby influence response selection (see, e.g., Badets et al., 2016; Pfister, 2019, for reviews).

While our general observation of greater dual-task costs for manual (vs. vocal) responses is well in line with Huestegge and Koch (2013), there remains a discrepancy with other previous studies that observed a reversed manual versus vocal dual-task cost pattern using either visual or (simultaneous) visual and auditory stimuli (e.g., Fagot & Pashler, 1992; Holender, 1980; Schumacher et al., 2001; Stelzel et al., 2006). However, note that there are numerous methodological differences between our present setting and these earlier studies (as well as among them), and it would take a large set of experiments to pinpoint the crucial differences that may turn vocal-over-manual prioritization into manual-over-vocal prioritization. Most importantly, however, our present results clearly demonstrate that IOMC effects are not a central causal factor in this context.

Our results support theoretical accounts that suggest capacity sharing or resource scheduling between tasks in dual-task control (Meyer & Kieras, 1997; Navon & Miller, 2002; Tombu & Jolicœur, 2003). Moreover, the results further specify such models in that they suggest that allocation of capacity is also determined by task characteristics such as the associated (anticipated) effector systems. Specifically, it appears conceivable to incorporate effector-based attentional weighting parameters in computational theories of dual-task control such as executive control of the theory of visual attention (ECTVA; Logan & Gordon, 2001; see Hoffmann et al., 2019; Huestegge & Koch, 2013; Pieczykolan & Huestegge, 2019, for further discussion).

It should be noted that vocal-over-manual prioritization in RTs was more pronounced in the auditory (vs. visual) stimulus condition. At first sight, this might indicate that IOMC-like effects (e.g., Hazeltine et al., 2006; Stelzel et al., 2006) at least modulate the extent of the effector-based prioritization effect. However, one issue speaks against such a clear conclusion here: Our analysis of the error rates pointed in the opposite direction. Specifically, while the difference in dual-task costs was more pronounced in the auditory (vs. visual) stimulus condition in the RT data, it was more pronounced in the visual (vs. auditory) stimulus condition in the error rates. Therefore, a speed-accuracy trade-off compromises any clear conclusion regarding the direction of a potential modulation of effector prioritization by IOMC-like phenomena. Additionally, it is important to keep in mind that our present study differs from typical IOMC studies in that we compared one dual-task condition involving only visual stimuli with another dual-task condition involving only auditory stimuli (intra-modal stimulation), whereas typical IOMC studies usually compare two different input–output modality mapping conditions using bimodal stimulation (see also Hoffmann et al., 2019).

Thus, we conclude that specific input–output modality mappings can (to some extent) modulate dual-task performance (at least by affecting speed-accuracy policies), but not abolish or reverse effector-based prioritization (here: of vocal-over-manual responses) in dual-task control. Overall, effector system pairings appear to have a more substantial impact on capacity allocation policies in dual-task control than input–output modality mappings.

References

  • Badets, A., Koch, I., & Philipp, A. M. (2016). A review of ideomotor approaches to perception, cognition, action, and language: Advancing a cultural recycling hypothesis. Psychological Research, 80, 1–15. 10.1007/s00426-014-0643-8

  • Fagot, C., & Pashler, H. (1992). Making two responses to a single object: Implications for the central attentional bottleneck. Journal of Experimental Psychology: Human Perception and Performance, 18, 1058–1079. 10.1037/0096-1523.18.4.1058

  • Göthe, K., Oberauer, K., & Kliegl, R. (2016). Eliminating dual-task costs by minimizing crosstalk between tasks: The role of modality and feature pairings. Cognition, 150, 92–108. 10.1016/j.cognition.2016.02.003

  • Greenwald, A. G. (1972). On doing two things at once: Time sharing as a function of ideomotor compatibility. Journal of Experimental Psychology, 94, 52–57. 10.1037/h0032762

  • Greenwald, A. G. (2003). On doing two things at once: III. Confirmation of perfect timesharing when simultaneous tasks are ideomotor compatible. Journal of Experimental Psychology: Human Perception and Performance, 29, 859–868. 10.1037/0096-1523.29.5.859

  • Halvorson, K. M., Ebner, H., & Hazeltine, E. (2013). Investigating perfect timesharing: The relationship between IM-compatible tasks and dual-task performance. Journal of Experimental Psychology: Human Perception and Performance, 39, 413–432. 10.1037/a0029475

  • Hazeltine, E., Ruthruff, E., & Remington, R. (2006). The role of input and output modality pairings in dual-task performance: Evidence for content-dependent central interference. Cognitive Psychology, 52, 291–345. 10.1016/j.cogpsych.2005.11.001

  • Hoffmann, M. A., Pieczykolan, A., Koch, I., & Huestegge, L. (2019). Motor sources of dual-task interference: Evidence for effector-based prioritization in dual-task control. Journal of Experimental Psychology: Human Perception and Performance, 45, 1355–1374. 10.1037/xhp0000677

  • Holender, D. (1980). Interference between a vocal and a manual response to the same stimulus. In G. E. Stelmach & J. Requin (Eds.), Tutorials in motor behavior (Advances in Psychology, Vol. 1, pp. 421–431). Amsterdam, the Netherlands: North-Holland. 10.1016/S0166-4115(08)61959-7

  • Huestegge, L. (2011). The role of saccades during multitasking: Towards an output-related view of eye movements. Psychological Research, 75, 452–465. 10.1007/s00426-011-0352-5

  • Huestegge, L., & Adam, J. J. (2011). Oculomotor interference during manual response preparation: Evidence from the response-cueing paradigm. Attention, Perception, & Psychophysics, 73, 702–707. 10.3758/s13414-010-0051-0

  • Huestegge, L., & Hazeltine, E. (2011). Crossmodal action: Modality matters. Psychological Research, 75, 445–451. 10.1007/s00426-011-0373-0

  • Huestegge, L., & Koch, I. (2013). Constraints in task-set control: Modality dominance patterns among effector systems. Journal of Experimental Psychology: General, 142, 633–637. 10.1037/a0030156

  • Huestegge, L., Pieczykolan, A., & Koch, I. (2014). Talking while looking: On the encapsulation of output system representations. Cognitive Psychology, 73, 72–91. 10.1016/j.cogpsych.2014.06.001

  • Logan, G. D., & Gordon, R. D. (2001). Executive control of visual attention in dual-task situations. Psychological Review, 108, 393–434. 10.1037/0033-295x.108.2.393

  • Maquestiaux, F., Ruthruff, E., Defer, A., & Ibrahime, S. (2018). Dual-task automatization: The key role of sensory-motor modality compatibility. Attention, Perception, & Psychophysics, 80, 752–772. 10.3758/s13414-017-1469-4

  • Meyer, D. E., & Kieras, D. E. (1997). A computational theory of executive cognitive processes and multiple-task performance: Part I. Basic mechanisms. Psychological Review, 104, 3–65. 10.1037/0033-295x.104.1.3

  • Navon, D., & Miller, J. (2002). Queuing or sharing? A critical evaluation of the single-bottleneck notion. Cognitive Psychology, 44, 193–251. 10.1006/cogp.2001.0767

  • Pashler, H. (1994). Dual-task interference in simple tasks: Data and theory. Psychological Bulletin, 116, 220–244. 10.1037/0033-2909.116.2.220

  • Pfister, R. (2019). Effect-based action control with body-related effects: Implications for empirical approaches to ideomotor action control. Psychological Review, 126, 153–161. 10.1037/rev0000140

  • Pieczykolan, A., & Huestegge, L. (2014). Oculomotor dominance in multitasking: Mechanisms of conflict resolution in cross-modal action. Journal of Vision, 14, 18. 10.1167/14.13.18

  • Pieczykolan, A., & Huestegge, L. (2019). Action scheduling in multitasking: A multi-phase framework of response-order control. Attention, Perception, & Psychophysics, 81, 1464–1487. 10.3758/s13414-018-01660-w

  • Schacherer, J., & Hazeltine, E. (2019). How conceptual overlap and modality pairings affect task-switching and mixing costs. Psychological Research, 83, 1020–1032. 10.1007/s00426-017-0932-0

  • Schumacher, E. H., Seymour, T. L., Glass, J. M., Fencsik, D. E., Lauber, E. J., Kieras, D. E., & Meyer, D. E. (2001). Virtually perfect time sharing in dual-task performance: Uncorking the central cognitive bottleneck. Psychological Science, 12, 101–108. 10.1111/1467-9280.00318

  • Stelzel, C., & Schubert, T. (2011). Interference effects of stimulus-response modality pairings in dual tasks and their robustness. Psychological Research, 75, 476–490. 10.1007/s00426-011-0368-x

  • Stelzel, C., Schumacher, E. H., Schubert, T., & D’Esposito, M. (2006). The neural effect of stimulus-response modality compatibility on dual-task performance: An fMRI study. Psychological Research, 70, 514–525. 10.1007/s00426-005-0013-7

  • Stephan, D. N., & Koch, I. (2010). Central cross-talk in task switching: Evidence from manipulating input–output modality compatibility. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 1075–1081. 10.1037/a0019695

  • Stephan, D. N., & Koch, I. (2011). The role of input–output modality compatibility in task switching. Psychological Research, 75, 491–498. 10.1007/s00426-011-0353-4

  • Stephan, D. N., Koch, I., Hendler, J., & Huestegge, L. (2013). Task switching, modality compatibility, and the supra-modal function of eye movements. Experimental Psychology, 60, 90–99. 10.1027/1618-3169/a000175

  • Tombu, M., & Jolicœur, P. (2003). A central capacity sharing model of dual-task performance. Journal of Experimental Psychology: Human Perception and Performance, 29, 3–18. 10.1037/0096-1523.29.1.3

Mareike A. Hoffmann, Institute of Psychology, University of Würzburg, Röntgenring 11, 97070 Würzburg, Germany,