Cognitive Psychology

Volume 53, Issue 3, November 2006, Pages 195-237

Modeling response signal and response time data

https://doi.org/10.1016/j.cogpsych.2005.10.002

Abstract

The diffusion model (Ratcliff, 1978) and the leaky competing accumulator model (LCA, Usher & McClelland, 2001) were tested against two-choice data collected from the same subjects with the standard response time procedure and the response signal procedure. In the response signal procedure, a stimulus is presented and then, at one of a number of experimenter-determined times, a signal to respond is presented. The models were fit to the data from the two procedures simultaneously under the assumption that responses in the response signal procedure were based on a mixture of decision processes that had already terminated at response boundaries before the signal and decision processes that had not yet terminated. In the latter case, decisions were based on partial information in one variant of each model or on guessing in a second variant. Both variants of the diffusion model fit the data well and both fit better than either variant of the LCA model, although the differences in numerical goodness-of-fit measures were not large enough to allow decisive selection between the models.

Introduction

Since the 1970s, the response signal paradigm has been attractive to cognitive psychologists because it tracks, in a manner that appears to be quite direct, the time course with which information becomes available to decision processes. Other paradigms, such as those using standard response time (RT) measures, only allow information growth as a function of time to be determined indirectly, for example, through models of processing. In this article, I explore how current models for two-choice decisions can explain response signal data.

In a response signal experiment, a test item is presented and followed, at some time lag, by a signal that tells the subject to respond. Several different lags are used in random order across trials, ranging from lags so short that performance is at chance to lags long enough that performance has reached asymptote. Subjects are asked to respond within 200–300 ms of the signal, and the dependent variable is accuracy. Because the method provides snapshots of accuracy across the lags, it yields a map of the growth of accuracy as a function of time. Usually, the measure of accuracy is d′, with one experimental condition taken as baseline and the other conditions scaled against it. The response signal paradigm has been used to examine a number of experimental questions in two-choice tasks, and Reed (1976) argued that it is superior to deadline procedures, which hold the lag between stimulus and signal constant within blocks of trials: because subjects know the deadline ahead of time, they can alter their retrieval strategy as a function of it (see also Corbett and Wickelgren, 1978, Dosher, 1976, Dosher, 1979, Dosher, 1981, Dosher, 1982, Dosher, 1984, McElree and Dosher, 1993, Ratcliff, 1980, Ratcliff, 1981, Reed, 1973, Reed, 1976, Schouten and Bekker, 1967, Wickelgren, 1977; Wickelgren and Corbett, 1977).
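Concretely, accuracy at each lag can be converted to d′ by differencing z-transformed response proportions. A minimal sketch in Python, using made-up hit and false-alarm rates at three hypothetical lags (not values from this experiment):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """d' is the difference of the z-transformed hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Made-up response proportions at three response signal lags (ms).
lags = [100, 400, 1200]
hit_rates = [0.52, 0.75, 0.90]
fa_rates = [0.50, 0.35, 0.20]

# d' grows from near chance (0) toward its asymptote as lag increases.
d_values = [d_prime(h, f) for h, f in zip(hit_rates, fa_rates)]
```

Equal hit and false-alarm rates give d′ = 0 (chance), and the d′ values grow across lags, tracing out the time-course function described above.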

The response signal procedure has been used to measure three characteristics of processing: the point in time at which the amount of information available to the decision process is sufficient for accuracy to begin to grow above chance, the rate at which the amount of information grows toward asymptotic accuracy, and the level of asymptotic accuracy. In contrast, the standard RT procedure provides an estimate of the time to make a decision but, in the absence of a theory of processing, it cannot be used to determine the time at which information begins to be available, the rate of growth, or the level of asymptotic accuracy. It has also been argued that the standard RT procedure provides an estimate of only a single point on the function that maps the growth of accuracy over time (e.g. Dosher, 1984). In the experiment presented below, I collected both response signal and standard RT data from the same subjects in order to examine whether and how models can simultaneously explain the growth of accuracy over time in the response signal paradigm and all the data from the standard RT paradigm (accuracy and RT distributions for both correct and error responses). The main question was how the constraints of accounting for both kinds of data jointly might impact theoretical interpretations of how information becomes available to decision processes over time.

Often in previous research, the analysis of response signal data has not been theory-based (e.g. Corbett and Wickelgren, 1978, Dosher, 1976, Dosher, 1979, Dosher, 1981, Dosher, 1982, Dosher, 1984, Hintzman and Curran, 1997, McElree and Dosher, 1993, Ratcliff, 1980, Ratcliff, 1981, Reed, 1973, Reed, 1976, Schouten and Bekker, 1967, Wickelgren, 1977, Wickelgren et al., 1980). Usually, an exponential function for growth to asymptote over time is fit to the d′ values for the response signal lags:

d′(t) = d_a (1 − exp(−(t − T0)/τ)), for t > T0 (and 0 otherwise),

where d_a is the asymptotic level of accuracy, T0 is the time intercept at which accuracy begins to grow above chance, and τ is the time constant for the exponential rate of growth to asymptote. Differences among the values of intercept, rate, and asymptote across experimental conditions are then used to assess the effects of various independent variables on the time course of information accumulation and also to evaluate predictions about how these features of performance should behave (Corbett and Wickelgren, 1978, Dosher, 1976, Dosher, 1979, Dosher, 1981, Dosher, 1982, Dosher, 1984, McElree and Dosher, 1993, Ratcliff, 1981, Reed, 1973, Reed, 1976). The exponential function generally provides good fits to response signal data. McElree and Dosher (1989) compared the exponential to the expression for growth of accuracy derived from the diffusion model proposed by Ratcliff (1978) and found that the exponential was slightly superior, but the differences were small. Wagenmakers, Ratcliff, Gomez, and Iverson (2004) used Monte Carlo simulations of the two expressions to compute how many observations would be needed to discriminate between them, and found that, for typical data, about 2000 observations per experimental condition would be required at a .95 probability level.
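To make the fitting procedure concrete, the sketch below fits this exponential to d′ values by a coarse grid search over d_a, T0, and τ. The grid search is only an illustrative stand-in for the nonlinear least-squares routines actually used in this literature, and the parameter ranges are assumptions:

```python
import math

def exp_growth(t: float, d_a: float, T0: float, tau: float) -> float:
    """d'(t) = d_a * (1 - exp(-(t - T0)/tau)) for t > T0, and 0 otherwise."""
    return d_a * (1.0 - math.exp(-(t - T0) / tau)) if t > T0 else 0.0

def fit_by_grid(times_ms, d_obs):
    """Minimize summed squared error over a coarse parameter grid."""
    best, best_err = None, float("inf")
    for d_a in [x / 10 for x in range(10, 31)]:      # asymptote: 1.0 to 3.0
        for T0 in range(100, 401, 10):               # intercept: 100 to 400 ms
            for tau in range(50, 501, 10):           # time constant: 50 to 500 ms
                err = sum((exp_growth(t, d_a, T0, tau) - d) ** 2
                          for t, d in zip(times_ms, d_obs))
                if err < best_err:
                    best, best_err = (d_a, T0, tau), err
    return best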

The major problem with the exponential function as a summary of the d′ growth function is that it is not theoretically based. There is no model of underlying cognitive processes that gives rise to the exponential function and no obvious theoretical way to relate the exponential function from response signal data to data from the standard RT task.

The two-choice models explored in this article are sequential sampling models, the diffusion model (Ratcliff, 1978, Ratcliff, 1981, Ratcliff, 1985, Ratcliff, 1988, Ratcliff, 2002, Ratcliff and Rouder, 1998, Ratcliff and Rouder, 2000, Ratcliff and Smith, 2004, Ratcliff and Tuerlinckx, 2002, Ratcliff et al., 2004, Ratcliff et al., 1999) and the leaky competing accumulator model (the LCA model, Usher & McClelland, 2001). These two models were chosen because they have been applied to response signal data previously and because Usher and McClelland argued that the two models can be discriminated with response signal data. Both models assume that noisy information is accumulated over time from a starting point toward one of two decision criteria, or boundaries. In a standard RT experiment, a response is initiated when the amount of accumulated information reaches one or the other of the boundaries. In a response signal experiment, on many trials, a response is required before the accumulated information reaches a boundary. To handle this, Ratcliff (1978) assumed that the decision process proceeds without boundaries, and a decision is based on the position of the process at the time of the response signal, that is, whether the amount of accumulated information is above or below the starting point. Later, Ratcliff (1988) proposed that the decision boundaries are retained and that responses are based on a mixture of processes, those that have already hit a boundary at the time of the response signal and those that have not; in the latter case, a decision is based on the position of the process. Usher and McClelland (2001) made the same assumption as Ratcliff (1978), that decisions in the response signal paradigm are based on the position of the process at the time of the response signal, with decision criteria removed.

In this article, I jointly fit data for the same subjects from the response signal paradigm and the standard RT paradigm for both the diffusion model and the LCA model. The task used for both paradigms was a signal detection task: arrays of between 13 and 87 dots were presented to subjects and they were asked to decide for each array whether the number of dots was large or small. The goal was that the models account for all aspects of both kinds of data with as many parameters as possible kept the same across the two tasks.

Section snippets

The diffusion model

The diffusion model was developed to explain the processes by which two-choice decisions are made. The model applies only to relatively fast two-choice decisions (mean RTs less than about 1000–1500 ms) and only to decisions that are a single-stage decision process (as opposed to the multiple-stage processes that might be involved in, for example, reasoning tasks or card sorting tasks). The model has been successful in explaining the data in a number of areas including perceptual tasks (Ratcliff,

The leaky competing accumulator model

The LCA model (Usher & McClelland, 2001) was developed as an alternative to the diffusion model with the aim of implementing neurobiological principles that the authors felt should be incorporated into RT models, especially mutual inhibition mechanisms and decay of information across time.

In the LCA model (Usher & McClelland, 2001), like the diffusion model, information is accumulated continuously over time. There are two accumulators, one for each response, as shown in the bottom panel of Fig.

Modeling the response signal paradigm

For the standard RT paradigm, responses are made when the amount of accumulated information reaches a response boundary. For the response signal paradigm, responses must often be made before a boundary is reached. To model this situation, researchers in the past have made the simple assumption that the response boundaries are removed from the decision process and so no decision process can ever reach a boundary. When the signal to respond is given, subjects make their decision according to the

Availability of partial information

The issue of whether partial information can be used by decision processes when subjects are asked to respond at a signal was examined in a set of papers in the late 1980s by Meyer and colleagues (Kounios et al., 1987, Meyer et al., 1988; see also Meyer et al., 1985, Ratcliff, 1988). They used a paradigm in which on any given trial, either subjects were asked to respond before or at a signal, or there was no signal and they completed a normal response; these two kinds of trials were randomly

Experiment

The aim was to use the diffusion model and the LCA model (in their partial information versions and their guessing versions) to simultaneously account for data from the standard RT task and response signal data. For the standard task, accuracy, correct and error RTs, and their distributions for each experimental condition were the targets for modeling. For the response signal data, the probabilities of each response alternative for each condition were the targets for modeling. In most previous

Results

The data are presented in the context of fitting the diffusion and LCA models to them. The models were fit to the data from each subject individually and goodness of fit was evaluated for each model for each subject. For the reaction time task, RTs were eliminated from analyses if they were below 280 ms and above 1500 ms (responses below 280 ms were at chance); this eliminated less than 8% of the data. For the response signal procedure, responses below 100 ms and above 500 ms were eliminated (about

General discussion

There were two goals for the research described in this article: first, to test whether two current sequential sampling models could simultaneously account for data from the response signal paradigm and the standard RT paradigm; and second, to explore what could be learned from response signal data in the theoretical context of models that account for how information is accumulated over time.

This research is the first in which data from both the response signal and standard RT paradigms were

Conclusions

The first conclusion is that applying models to multiple tasks simultaneously produces powerful constraints on the models that (if the models can successfully account for the data) lead to new understandings of how the tasks are performed. Here, in the context of sequential sampling models, this approach yielded a new view of response signal performance: responses increase in accuracy over time mainly because the proportion of terminated processes increases; the increase in accuracy does not

References (72)

  • J.F. Schouten et al.

    Reaction time and accuracy

    Acta Psychologica

    (1967)
  • P.L. Smith

    Stochastic dynamic models of response time and accuracy: A foundational primer

    Journal of Mathematical Psychology

    (2000)
  • P.L. Smith et al.

    The accumulator model of two-choice discrimination

    Journal of Mathematical Psychology

    (1988)
  • P.L. Smith et al.

    Attention, visual short term memory, and the representation of stimuli in psychophysical decisions

    Vision Research

    (2004)
  • D. Vickers et al.

    Discriminating between the frequency of occurrence of two alternative events

    Acta Psychologica

    (1971)
  • E.-J. Wagenmakers et al.

    Assessing model mimicry using the parametric bootstrap

    Journal of Mathematical Psychology

    (2004)
  • E.-J. Wagenmakers et al.

    A model for evidence accumulation in the lexical decision task

    Cognitive Psychology

    (2004)
  • W.A. Wickelgren

    Speed–accuracy tradeoff and information processing dynamics

    Acta Psvchologica

    (1977)
  • W.A. Wickelgren et al.

    Priming and retrieval from short-term memory: A speed accuracy trade-off analysis

    Journal of Verbal Learning and Verbal Behavior

    (1980)
  • F.G. Ashby et al.

    Decision rules in the perception and categorization of multidimensional stimuli

    Journal of Experimental Psychology: Learning, Memory, and Cognition

    (1988)
  • J.R. Busemeyer et al.

    Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment

    Psychological Review

    (1993)
  • A.T. Corbett et al.

    Semantic memory retrieval: Analysis by speed–accuracy tradeoff functions

    Quarterly Journal of Experimental Psychology

    (1978)
  • R. De Jong

    Partial information or facilitation? Different interpretations of results from speed–accuracy decomposition

    Perception & Psychophysics

    (1991)
  • B.A. Dosher

    Effect of sentence size and network distance on retrieval speed

    Journal of Experimental Psychology: Learning, Memory, and Cognition

    (1982)
  • B. Espinoza-Varas et al.

    Effects of decision criterion on response latencies of binary decisions

    Perception & Psychophysics

    (1994)
  • W. Feller

    An introduction to probability theory and its applications

    (1968)
  • Gomez, P., Perea, M., & Ratcliff, R. (2006). A model of the go/no-go lexical decision task...
  • S.D. Gronlund et al.

    The time-course of item and associative information: Implications for global memory models

    Journal of Experimental Psychology: Learning, Memory and Cognition

    (1989)
  • H. Heuer

    Visual discrimination and response programming

    Psychological Research

    (1987)
  • D.L. Hintzman et al.

    Retrieval constraints and the mirror effect

    Journal of Experimental Psychology: Learning, Memory, and Cognition

    (1994)
  • D.L. Hintzman et al.

    Comparing retrieval dynamics in recognition memory and lexical decision

    Journal of Experimental Psychology: General

    (1997)
  • H. Jeffreys

    Theory of probability

    (1961)
  • J. Kounios et al.

    Structure and process semantic memory: New evidence based on speed–accuracy decomposition

    Journal of Experimental Psychology: General

    (1987)
  • D.R.J. Laming

    Information theory of choice reaction time

    (1968)
  • W. Lee et al.

    Categorizing externally distributed by stimulus samples for three continua

    Journal of Experimental Psychology

    (1964)
  • S.W. Link et al.

    A sequential theory of psychological discrimination

    Psychometrika

    (1975)
  • Cited by (158)

    • Speeded response tasks with unpredictable deadlines

      2023, Journal of Mathematical Psychology
    • Gray matter analysis of MRI images: Introduction to current research practice

      2021, Encyclopedia of Behavioral Neuroscience: Second Edition
    View all citing articles on Scopus

    This research was supported by NIMH grants R37-MH44640 and K05-MH01891. I thank Marius Usher, Douglas Hintzman, and especially Gail McKoon for comments on this article.

    View full text