Correspondence

A primer on assessing intelligence in laboratory studies
Background
Ever since the groundbreaking work by Hunt and his colleagues (e.g., Hunt, Frost, and Lunneborg, 1973), numerous researchers have attempted to investigate the relationships between intellectual abilities, on the one hand, and constructs from experimental cognitive psychology, on the other. However, there is a mismatch between the standard paradigm used in experimental psychology and the procedures that are optimal for investigating individual differences in intellectual abilities. Some of …
Reliability and validity
The first difficulty in this area of research is that experimental and correlational orientations create a fundamental conflict. In the context of cognitive research, the experimental approach typically involves using stimuli that are either believed to be substantially “overlearned” by most or all study participants (e.g., letters, numbers, high-frequency words, simple polygons), or entirely novel stimuli (e.g., artificial grammars, random number sequences), such that either way, the …
Validity
3. Do consider the validity of selected intelligence tests
In experimental cognitive psychology, there is ongoing debate about what particular laboratory tasks are suitable for assessing a particular construct (e.g., memory search speed, decision time, reaction time or movement time, verbal or spatial working memory). The “validity” of such tasks is established through discussion in the literature, leading to a set of relatively standardized procedures to establish that a particular set of task …
Data are theory-laden
5. Do understand that data are theory-laden
Thomas Kuhn (1977), physicist-turned-philosopher of science, proposed the concept of a “paradigm” to describe the amalgamation of assumptions and methods used within individual laboratories. Although there was a lively argument in the literature about whether psychology could be considered paradigmatic, it is clear that within the domain of intelligence research, there are multiple paradigms, loosely associated with the Spearman and Cattell-Horn-Carroll …
Standardized intelligence tests
6. Don't use intelligence tests in a non-standard manner without norms and validation
The traditional ‘gold standard’ assessments of intelligence (e.g., Stanford-Binet, Wechsler) are considered standardized tests, in that the examiner's words, the apparatus and procedure, and the scoring rules are all fixed. The same is true for college admissions tests, such as the SAT or ACT, and for other tests administered in the manner dictated by test manuals. However, there are numerous variations that have …
Regression-to-the-mean
7. Don't use extreme-group designs
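To make the problem with extreme-group designs concrete, the following is a minimal simulation sketch (Python standard library only; all variable names are ours, not from the article). It draws a large bivariate-normal sample with a true correlation of .30, then keeps only the top and bottom quartiles on the predictor, as an extreme-group design would, and shows that the observed correlation is inflated:

```python
import math
import random

# Illustrative simulation: true correlation rho = .30 in the full population.
random.seed(1)
rho = 0.30
xs, ys = [], []
for _ in range(20000):
    x = random.gauss(0, 1)
    y = rho * x + math.sqrt(1 - rho**2) * random.gauss(0, 1)
    xs.append(x)
    ys.append(y)

def pearson_r(a, b):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = math.sqrt(sum((v - ma) ** 2 for v in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (sa * sb)

# Extreme-group selection: keep only the bottom and top quartiles on x.
ranked = sorted(xs)
lo_cut, hi_cut = ranked[len(xs) // 4], ranked[3 * len(xs) // 4]
extreme = [(x, y) for x, y in zip(xs, ys) if x <= lo_cut or x >= hi_cut]

full_r = pearson_r(xs, ys)
extreme_r = pearson_r([x for x, _ in extreme], [y for _, y in extreme])
print(f"full-range r = {full_r:.2f}, extreme-group r = {extreme_r:.2f}")
```

Because discarding the middle of the distribution enlarges the variance of the selection variable, the extreme-group correlation comes out noticeably larger than the full-range value (theoretically about .39 versus .30 here), overstating the strength of the underlying relationship.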
Restriction-of-range in talent
9. Do adjust correlations to account for restriction-of-range in talent
When a researcher chooses a sample that is restricted in range-of-talent – a typical situation when the study participants are students from a moderately to highly selective college/university – correlations are expected to be attenuated relative to those that would be observed in the population-at-large. Corrections can be made to the correlations on the basis of comparing the variances of the sample on the variables of interest to the …
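One standard adjustment of this kind is Thorndike's Case II correction for direct range restriction. The sketch below (function name and example values are ours, for illustration; it assumes the unrestricted standard deviation is available, e.g., from test norms) implements the correction:

```python
import math

def correct_for_range_restriction(r_restricted, sd_restricted, sd_unrestricted):
    """Thorndike Case II correction for direct restriction of range.

    u is the ratio of the unrestricted (population) SD to the
    restricted (sample) SD on the selection variable.
    """
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / math.sqrt(
        1 - r_restricted**2 + r_restricted**2 * u**2
    )

# Example: r = .30 observed in a selective sample whose SD on the
# selection variable is half the population SD (u = 2).
r_corrected = correct_for_range_restriction(0.30, 7.5, 15.0)  # about .53
```

With no restriction (u = 1) the function returns the observed correlation unchanged; the more severe the restriction, the larger the upward correction.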
Differences between correlations
10. Don't use samples too small to detect differences in correlations
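A common way to test whether two independent correlations differ (shown here as an illustrative sketch; the snippet does not specify a procedure) is Fisher's r-to-z transformation. The example below shows why sample size matters: with n = 100 per group, even an r = .50 versus r = .30 difference fails to reach conventional significance, whereas n = 300 per group detects it:

```python
import math

def compare_independent_correlations(r1, n1, r2, n2):
    """Two-tailed test of H0: rho1 == rho2 via Fisher's r-to-z transform."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    # Normal-approximation two-tailed p value, using the error function.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

z_small, p_small = compare_independent_correlations(0.50, 100, 0.30, 100)
z_large, p_large = compare_independent_correlations(0.50, 300, 0.30, 300)
```

Here p_small comes out around .09 (non-significant) while p_large is well below .01, even though the underlying difference in correlations is identical.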
Significant vs. meaningful correlations
11. Don't confuse statistically significant with meaningful magnitude correlations
12. Do pre-specify expected correlations
Over the past few decades, scholarly organizations and journal editors have encouraged researchers to move beyond null hypothesis significance testing, in favor of reporting effect sizes (e.g., Wilkinson and Task Force on Statistical Inference, 1999). This movement is especially important for laboratory studies of experimental tasks and intellectual abilities, because it may …
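The significance-versus-magnitude distinction can be made concrete with a small numerical sketch (illustrative values of our choosing): with n = 1000, a correlation of r = .10 is highly "significant" by the usual t test, yet accounts for only 1% of shared variance:

```python
import math

def corr_t_statistic(r, n):
    """t statistic for testing H0: rho = 0 with n paired observations."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r**2)

r, n = 0.10, 1000
t = corr_t_statistic(r, n)  # roughly 3.2, beyond the two-tailed .01 cutoff (~2.58)
shared_variance = r**2      # 0.01, i.e., only 1% of variance accounted for
```

A reader deciding whether such a correlation is *meaningful* should weigh the 1% shared variance, not the small p value that a large n all but guarantees.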
‘Method’ factors
As noted by Campbell and Fiske (1959), “Each test or task employed for measurement purposes is a trait-method unit, a union of a particular trait content with measurement procedures not specific to that content.” (p. 81). In the assessment of intellectual abilities, “method” can refer to a variety of different characteristics of a particular test, such as the kinds of question formats (multiple choice, open-ended, sentence completion), administration procedures (e.g., speed vs. power tests), …
Content as overlapping ‘Method’ factor
13. Don't ignore common content as method variance
One of the reasonably well-replicated findings from research conducted under the heading of “elementary information process” correlates of intelligence (e.g., Ackerman, 1986; Kyllonen, 1985), is that common stimulus content (e.g., numbers as stimuli/test items) on both the task and ability assessments will lead to higher correlations than when there are different stimulus contents (e.g., a spatial stimulus task and a verbal ability test). This …
Speed vs. power method issues
14. Do take account of speed vs. power of reference tests
As the information processing framework in experimental psychology grew out of information theory (e.g., see Atkinson & Shiffrin, 1967; Attneave, 1959; Shannon and Weaver, 1949), the two major dependent variables for such investigations were the speed and accuracy of responding, often with a major emphasis on speed.
Approaches to assessing intellectual abilities
The following is a set of recommendations, and their underlying justifications, for assessing intellectual abilities in the context of laboratory experimental studies aimed at determining the relations between tasks and intellectual abilities. Examples are provided to underscore these recommendations.
Contrasting a single ability measure with multiple ability measures
21. Don't assume that adequate assessment with multiple measures requires excessive administration time
Ackerman et al. (2000) administered a large set of intellectual ability tests, including the Raven's Advanced Progressive Matrices (APM) test, and a battery of 7 working memory (WM) ability tests to a group of 135 young adults, in order to determine the relationships between intelligence, perceptual speed, and working memory ability. The APM was administered in the manner specified by the …
Conclusions
Many of the “don'ts” and too few of the “do's” can be found in the extant literature on the relationship between constructs from experimental cognitive psychology and intellectual ability. While these problems are prominent in laboratory experimental studies that attempt to relate task performance to individual differences in intellectual abilities, they are also pervasive in the wider literature beyond the study of intelligence. It is important to note that there are few, if any, “perfect” studies in the …
References (92)
- Individual differences in information processing: An investigation of intellectual abilities and task performance during practice. Intelligence (1986)
- et al. Explorations of crystallized intelligence: Completion tests, cloze tests and knowledge. Learning and Individual Differences: A Multidisciplinary Journal in Education (2000)
- The future of intelligence research. Intelligence (1989)
- Why expert performance is special and cannot be extrapolated from studies of performance in the general population: A response to criticisms. Intelligence (2014)
- et al. Separating power and speed components of standardized intelligence measures. Intelligence (2017)
- et al. More dissociations and interactions within executive functioning: A comprehensive latent-variable analysis. Acta Psychologica (2008)
- Raven's is not a pure measure of general intelligence: Implications for g factor theory and the brief measurement of g. Intelligence (2015)
- et al. Individual differences in cognition: A new approach to intelligence
- et al. Evidence of stable individual differences in implicit learning. Cognition (2019)
- et al. Becoming an expert in the musical domain: It takes more than just practice. Intelligence (2008)