How interventionist accounts of causation work in experimental practice and why there is no need to worry about supervenience

Published in Synthese (topical collection: Humeanisms).

Abstract

It has been argued that supervenience generates unavoidable confounding problems for interventionist accounts of causation, to the point that we must choose between interventionism and supervenience. According to one solution, the dilemma can be defused by excluding non-causal determinants of an outcome as potential confounders. I argue that this solution undermines the methodological validity of causal tests. Moreover, we don’t have to choose between interventionism and supervenience in the first place. Some confounding problems are effectively circumvented by experimental designs routinely employed in science. The remaining confounding issues concern the physical interpretation of variables and cannot be solved by choosing between interventionism and supervenience.


Notes

  1. For example: the diagnosis and pathology of psychiatric conditions encompass physiological and psychological symptoms, risk factors and causal determinants; epidemiological studies reveal that many diseases are influenced by diverse variables, such as genes, diet, lifestyle and socioeconomic status; biopsychosocial models of disease emphasize multiple levels of medical intervention; biopsychosocial models of pain and fear are explained in terms of interactions among biological, psychological and social variables.

  2. Woodward reiterates the fixability requirement in his response to Baumgartner: “M requires that for X to be a direct or contributing cause of Y, it must be true both that it is possible to intervene on X (there must “be” such an intervention) and that under this intervention, Y changes when other specified variables be held fixed” (Woodward 2015, p. 312, emphasis added).

  3. The methodological literature repeatedly emphasizes that the logic of causal inference is one of systematic elimination of alternative explanations: “Determining whether there is a causal relationship between variables, A and B, requires that the variables covary, the presence of one variable preceding the other (e.g., A → B), and ruling out the presence of a third variable, C, which might mitigate the influence of A on B” (Leighton 2010, p. 622). Or again: “The temporal structure of the experiment ensures that cause precedes effect. Whether cause covaries with effect is easily checked in the data within known probabilities. The remaining task is to show that most alternative explanations of the cause-effect relationship are implausible” (Shadish et al. 2002, p. 249).

  4. Following Pearl (2000), Woodward (2003, pp. 95–96) assumes that an intervention fixes the values of the independent variable, thus simulating the effect of a random allocation intervention. This requirement is rather restrictive. Most experiments in biomedical research don’t fix variable values. For instance, overexpressing a gene doesn’t fix the concentration of a gene product. Nevertheless, by choosing comparable test and control systems, researchers routinely demonstrate the causal relevance of gene expression to biological activity. The assumption of graph acyclicity is also a major idealization in the case of molecular mechanisms, which involve chemical equilibria and feedback structures. The implication here is that Woodward’s formulations of (M) and (IV) are too narrow, since they tie the definitions of manipulation and ideal intervention to causal modelling assumptions which cannot be satisfied in many experiments. This is not a fatal shortcoming for interventionism, but rather a matter of fine-tuning. For example, Eberhardt and Scheines (2007) provide a formal treatment of ‘soft interventions,’ which, unlike Woodward’s ‘hard interventions,’ change the value of a variable without breaking the causal arrows into that variable.
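The hard/soft distinction can be illustrated with a toy structural equation model (a sketch under assumed equations and parameters, not Eberhardt and Scheines’ formalism): a hard intervention replaces a variable’s structural equation with a constant, breaking the causal arrow into it, whereas a soft intervention shifts the variable while leaving its dependence on its causes intact.

```python
import random

# Toy causal graph C -> X (illustrative equations; coefficients are made up).
def sample_x(c, intervention, rng):
    if intervention == "hard":
        return 5.0                      # X clamped to a value; arrow C -> X broken
    if intervention == "soft":
        return 2.0 * c + 5.0            # X shifted; dependence on C preserved
    return 2.0 * c + rng.gauss(0, 0.1)  # no intervention

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((v - mb) ** 2 for v in b) / n
    return cov / (va * vb) ** 0.5 if va > 0 and vb > 0 else 0.0

rng = random.Random(0)
cs = [rng.gauss(0, 1) for _ in range(2000)]
xs_hard = [sample_x(c, "hard", rng) for c in cs]
xs_soft = [sample_x(c, "soft", rng) for c in cs]

print(correlation(cs, xs_hard))  # 0.0: hard intervention severs C's influence on X
print(correlation(cs, xs_soft))  # ~1.0: soft intervention preserves it
```

Under the hard intervention, X carries no information about its former cause C; under the soft intervention it still does, which is why soft interventions are better suited to experiments, like gene overexpression, that perturb rather than fix a variable.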

  5. Fixability is a special case of comparability. If the test and control conditions correspond to an ‘after’ and a ‘before intervention’ state of a system, comparability dictates that the confounder-variables must take the same values in the ‘before/control’ and ‘after the intervention/test’ conditions, which is to say that they must stay constant over time.

  6. Randomization is said to equate groups on the expected value of all variables at pretest, which simply means that test and control groups will differ only by chance.
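This expected-value sense of ‘equating’ can be made concrete with a small simulation (a hedged sketch; the covariate values and group sizes are invented for illustration): any single random allocation leaves a chance difference between groups, but the difference averages out to roughly zero over repeated re-randomizations.

```python
import random

def group_difference(covariate, seed):
    """Randomly split subjects in two; return the difference in group means."""
    rng = random.Random(seed)
    shuffled = covariate[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    test, control = shuffled[:half], shuffled[half:]
    return sum(test) / len(test) - sum(control) / len(control)

# A fixed pretest covariate (e.g., age) for 20 hypothetical subjects:
ages = [23, 31, 45, 52, 28, 60, 34, 41, 39, 55,
        26, 48, 33, 57, 29, 44, 38, 50, 36, 42]

# A single allocation differs by chance alone...
print(group_difference(ages, seed=1))

# ...but averaged over many re-randomizations the difference shrinks toward 0.
diffs = [group_difference(ages, seed=s) for s in range(5000)]
print(round(sum(diffs) / len(diffs), 2))  # close to 0
```

The point of the simulation is exactly the footnote’s: randomization guarantees balance in expectation, not in any particular experiment.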

  7. INF stands for ‘independent fixability’ and presupposes the fixability clause stated in (M).

  8. Consider, for example, Ronald Melzack’s (2001, p. 1378) proposal that pain “is produced by the output of a widely distributed neural network in the brain,” dubbed the ‘neuromatrix,’ which “is the primary mechanism that generates the neural pattern that produces pain. Its output pattern is determined by multiple influences, of which the somatic sensory input is only a part, that converge on the neuromatrix.” According to this proposal, pain is produced by a complex biological mechanism, which immediately suggests that pain cannot non-causally supervene on that which produces it. Thus, the recently discovered pattern of fMRI activity predicting whether a subject will report a heat stimulus as painful (Wager et al. 2013) refers solely to the mechanism causing pain, not to pain itself. It tells us which structures in the brain should be monitored to measure pain or targeted by interventions to alter pain experience, but it tells us nothing about the supervenience of pain on a biological state.

  9. Causal interpretations paradoxically coexist with supervenience interpretations. For instance, physics textbooks are also in the habit of defining temperature as a measure of the average translational kinetic energy of the molecules of a gas. Under a causal-realist interpretation of measurement of the sort typically endorsed in experimental science, this definition entails that temperature is a measurable effect of kinetic energy in a specific experimental setup.
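The textbook relation mentioned here has a standard quantitative form: the mean translational kinetic energy of a gas molecule is (3/2)·k_B·T. A minimal illustration (the constant is the exact SI value; whether the relation is read definitionally or causally is precisely the issue this note raises, and the code takes no side):

```python
# Kinetic theory of gases: mean translational kinetic energy per molecule
# is <E_k> = (3/2) * k_B * T, with k_B Boltzmann's constant.
K_B = 1.380649e-23  # J/K (exact since the 2019 SI redefinition)

def mean_translational_ke(temperature_kelvin):
    return 1.5 * K_B * temperature_kelvin

print(mean_translational_ke(298.0))  # roughly 6.17e-21 J at room temperature
```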

  10. “[…] the requirement that in order for M1 to have a causal effect on M2, there must be a possible intervention that changes M1 while P1 is held fixed and under which M2 changes, is inappropriate. […] the requirements in the definition (IV) are understood as applying only to those variables that are causally related to X and Y or are correlated with them but not to those variables that are related to X and Y as a result of supervenience relations or relations of definitional dependence. Call this characterization of interventions (IV*) and an intervention meeting these conditions an IV*-intervention” (Woodward 2015, pp. 333–34).

  11. Since the human body can metabolize cholesterol in many other ways, TC is not, strictly speaking, a conserved quantity. Woodward’s example probably refers to the Friedewald equation (TC = HD + LD + 20% triglyceride blood concentration), which is a method for estimating LD given measurements of the other variables. The practical value of the equation lies in the fact that it can substitute for direct, but costlier, measurements of LD. The method is known to yield inaccurate estimates for certain patients, such as those with metabolic syndrome, hence the recommendation to opt for the direct test. Moreover, pharmacological agents designed to inhibit cholesterol synthesis (statins) are known to interfere with metabolic pathways regulating the balance between LD and HD. If a relationship between variables of the sort illustrated by the Friedewald equation were indeed a definitional constraint, we would have to conclude that hereditary metabolic abnormalities and enzyme inhibitors can override matters of logical necessity.
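The equation’s estimation role can be sketched in a few lines (a hedged illustration; the panel values are hypothetical, the formula is the standard mg/dL form of the Friedewald approximation, and the note’s HD/LD correspond to HDL/LDL below, with TG/5 standing in for VLDL cholesterol):

```python
# Friedewald approximation (mg/dL): LDL is estimated from directly
# measured quantities rather than measured itself; TG/5 approximates VLDL.
def friedewald_ldl(total_cholesterol, hdl, triglycerides):
    return total_cholesterol - hdl - triglycerides / 5

# Hypothetical lipid panel:
print(friedewald_ldl(total_cholesterol=200, hdl=50, triglycerides=150))  # 120.0
```

As the note observes, this is an estimation heuristic rather than a definitional constraint: it is known to be unreliable for certain patients, which is why direct LDL measurement is sometimes recommended instead.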

  12. I am not arguing that experimental practice is, can or should be insulated from prior assumptions. For instance, a statistical model embodies substantive claims about how data is generated (e.g., deterministically or probabilistically), thus offering a putative explanation of why data scatter in a particular way (Jaynes 2003). The key point, however, is that these assumptions are not dogmatically assumed to be true, but are adopted as working hypotheses, the consequences of which can be eventually tested (e.g., consistency or inconsistency with actual variations of measured values, the fact that measurement precision can or cannot be increased by improving measurement techniques and experimental setups, etc.). A similar argument can be made about thresholds of statistical significance. Changes in methodological standards in response to the replicability crisis (Ioannidis 2005) demonstrate that conventions are revised in light of empirical results. In contrast, the definitional and metaphysical constraints advocated by Woodward do not admit empirical testing.

  13. In doing so, we may find out that some psychological variables don’t refer or don’t constitute natural kinds. As noted above, the variable ‘reported pain’ is multidimensional and some of these dimensions can be manipulated independently. This is compatible with the view that ‘pain unpleasantness’ may be associated with sensory dimensions of pain, such as ‘pain intensity,’ to the same extent that visual experiences are associated with auditory ones (Hardcastle 1999). We may also find something entirely new and surprising. For example, deCharms et al. (2005) used real-time functional MRI (rtfMRI) to train subjects to control ACC activation. This is what they found: “When subjects deliberately induced increases or decreases in rACC fMRI activation, there was a corresponding change in the perception of pain caused by an applied noxious thermal stimulus. Control experiments demonstrated that this effect was not observed after similar training conducted without rtfMRI information, or using rtfMRI information derived from a different brain region, or sham rtfMRI information derived previously from a different subject. […] These findings show that individuals can gain voluntary control over activation in a specific brain region given appropriate training, that voluntary control over activation in rACC leads to control over pain perception, and that these effects were powerful enough to impact severe, chronic clinical pain” (2005, p. 18626).

References

  • Ankeny, R. (2001). Model organisms as models: Understanding the ‘lingua franca’ of the human genome project. Philosophy of Science, 68, S251–S261.

  • Baetu, T. M. (2019). Mechanisms in molecular biology. In G. Ramsey & M. Ruse (Eds.), Elements in the philosophy of biology. Cambridge: Cambridge University Press.

  • Baetu, T. M. (2020). Causal inference in biomedical research. Biology and Philosophy, 35, 43.

  • Baumgartner, M. (2009). Interventionist causal exclusion and non-reductive physicalism. International Studies in the Philosophy of Science, 23(2), 161–178.

  • Bechtel, W., & Richardson, R. (2010). Discovering complexity: Decomposition and localization as strategies in scientific research. Cambridge: MIT Press.

  • Bickle, J. (1998). Psychoneural reduction: The new wave. Cambridge: MIT Press.

  • Campbell, J. (2008). Causation in psychiatry. In K. Kendler & J. Parnas (Eds.), Philosophical issues in psychiatry (pp. 196–216). Baltimore: Johns Hopkins University Press.

  • Chalmers, T. C., Celano, P., et al. (1983). Bias in treatment assignment in controlled clinical trials. New England Journal of Medicine, 309(22), 1359–1361.

  • Craver, C. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford: Clarendon Press.

  • deCharms, R. C., Maeda, F., et al. (2005). Control over brain activation and pain learned by using real-time functional MRI. Proceedings of the National Academy of Sciences of the United States of America, 102(51), 18626–18631.

  • Eberhardt, F., & Scheines, R. (2007). Interventions and causal inference. Philosophy of Science, 74, 981–995.

  • Evidence Based Medicine Working Group. (1992). Evidence-based medicine: A new approach to teaching the practice of medicine. Journal of the American Medical Association, 268, 2420–2425.

  • Fisher, R. A. (1947). The design of experiments (4th ed.). Edinburgh: Oliver and Boyd.

  • Hardcastle, V. (1999). The myth of pain. Cambridge: MIT Press.

  • Hill, A. B. (1955). Principles of medical statistics (6th ed.). New York: Oxford University Press.

  • Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945–960.

  • Howick, J. (2011). The philosophy of evidence-based medicine. Oxford: BMJ Books.

  • Ioannidis, J. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

  • Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge: Cambridge University Press.

  • Langreth, R., & Waldholz, M. (1999). New era of personalized medicine: Targeting drugs for each unique genetic profile. Oncologist, 4(5), 426–427.

  • Leighton, J. P. (2010). Internal validity. In N. J. Salkind (Ed.), Encyclopedia of research design. Thousand Oaks: SAGE.

  • Melzack, R. (2001). Pain and the neuromatrix in the brain. Journal of Dental Education, 65(12), 1378–1382.

  • Mill, J. S. (1843). A system of logic, ratiocinative and inductive. London: John W. Parker.

  • Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge: Cambridge University Press.

  • Pearl, J., Glymour, M., et al. (2016). Causal inference in statistics: A primer. Chichester: Wiley.

  • Rainville, P., Duncan, G. H., et al. (1997). Pain affect encoded in human anterior cingulate but not somatosensory cortex. Science, 277(5328), 968–971.

  • Rubin, D. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5), 688–701.

  • Sekhon, J. S. (2008). The Neyman-Rubin model of causal inference and estimation via matching methods. In J. M. Box-Steffensmeier, H. E. Brady, & D. Collier (Eds.), The Oxford handbook of political methodology (pp. 271–299). New York: Oxford University Press.

  • Shadish, W. R., Cook, T. D., et al. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

  • Spirtes, P., Glymour, C., et al. (1993). Causation, prediction and search. New York: Springer.

  • Wager, T. D., Atlas, L. Y., et al. (2013). An fMRI-based neurologic signature of physical pain. New England Journal of Medicine, 368(15), 1388–1397.

  • Winch, R. F., & Campbell, D. T. (1969). Proof? No. Evidence? Yes. The significance of tests of significance. The American Sociologist, 4(2), 140–143.

  • Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford: Oxford University Press.

  • Woodward, J. (2008). Cause and explanation in psychiatry: An interventionist perspective. In K. Kendler & J. Parnas (Eds.), Philosophical issues in psychiatry: Explanation, phenomenology and nosology. Baltimore: Johns Hopkins University Press.

  • Woodward, J. (2015). Interventionism and causal exclusion. Philosophy and Phenomenological Research, 91(2), 303–347.

Acknowledgements

I would like to thank the editors of this volume, as well as the anonymous reviewers, for their comments on previous versions of this paper. This research was supported by SSHRC Grant # 430-2020-0654.

Author information

Correspondence to Tudor M. Baetu.



Cite this article

Baetu, T.M. How interventionist accounts of causation work in experimental practice and why there is no need to worry about supervenience. Synthese 199, 4601–4620 (2021). https://doi.org/10.1007/s11229-020-02993-6
