Brain states matter. A reply to the unfolding argument

https://doi.org/10.1016/j.concog.2020.102981

Highlights

  • Several promising theories of consciousness exist.

  • Recently, it has been claimed that a large class of them is empirically problematic.

  • We formalize this “unfolding argument” and show that a much stronger result applies.

  • However, one premise of the argument is unwarranted, so its conclusion does not follow.

Abstract

Recently, it has been claimed that Integrated Information Theory and other theories of its type cannot explain consciousness (“unfolding argument”). We unravel this argument mathematically and prove that the premises of the argument imply a much stronger result according to which the observed problem holds for almost all theories of consciousness. We find, however, that one of the premises is unwarranted and show that if this premise is dropped, the argument ceases to work. Thus our results show that the claim of the unfolding argument cannot be considered valid. The premise in question is that measures of brain activity cannot be used in an empirical test of theories of consciousness.

Introduction

In its modern form, the scientific study of consciousness aims to uncover the laws or regularities that link conscious experience with physical systems, such as the brain. Among the remarkable successes of this young field is the creation of various theories of consciousness, also referred to as models of consciousness (Seth, 2007). The most prominent examples are Integrated Information Theory (Marshall et al., 2016, Mayner et al., 2018, Oizumi et al., 2014), Recurrent Processing Theory (Lamme, 2006) and Global Neuronal Workspace Theory (Dehaene, Changeux, & Naccache, 2011). These theories are supplemented by a large body of philosophical theories about how conscious experience might relate to the physical domain, e.g. the many variants of functionalism, representationalism or type identity theory.

The unfolding argument presented in Doerig, Schurger, Hess, and Herzog (2019) claims that some of the leading models of consciousness “are either false or outside the realm of science” (p. 56). The models in question are so-called causal structure theories, which define a system’s experience in terms of the mutual interaction of its parts. If this claim is true, it has far-reaching consequences for the scientific study of consciousness, as Integrated Information Theory and Recurrent Processing Theory, two of the most promising contemporary models of consciousness, both qualify as causal structure theories.

In Sections 2.1 (Setup), 2.2 (The problem) and 2.3 (The argument), we review the unfolding argument and state it in clear formal language. In Section 2.4, we prove that the premises used in the unfolding argument imply a much stronger result, namely that the problem identified in the unfolding argument holds for all models of consciousness that depend non-trivially on physical systems. This includes causal structure theories such as Integrated Information Theory, but also Global Neuronal Workspace Theory and any other model of consciousness which is functionalist or representationalist in nature or based on type identity theory. Finally, in Section 3, we discuss one premise of the argument in detail and show that it is unwarranted both in a theoretical and practical sense. Readers who are not interested in the mathematical details of the unfolding argument can jump straight to Section 3.

Section snippets

Setup

We start by explaining the setting in which the unfolding argument, and our generalization thereof, is staged. This setting indeed underlies many contemporary research programs in the scientific study of consciousness. A summary is given in Fig. 1. Before introducing the underlying assumptions, we fix terminology and notation.

In what follows, we denote by P a class of configurations of physical systems. Any p ∈ P denotes a model of the physical system, and we assume that this model contains all …
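The notation introduced here suggests the following formal skeleton (our own hedged sketch, not the paper’s verbatim definitions: the symbols E and M, and the reading of a model of consciousness as a mapping from physical configurations to predicted experiences, are assumptions on our part):

```latex
% Sketch of the setup: a model of consciousness is treated as a map
%
%     M : P \longrightarrow E, \qquad p \mapsto M(p),
%
% where P is the class of physical configurations introduced above,
% E is a set of (descriptions of) conscious experiences, and M(p)
% is the experience the model attributes to the system in state p.
\[
  M : P \longrightarrow E, \qquad p \mapsto M(p)
\]
```

On this reading, a model “depends non-trivially on physical systems” when M is not constant across physically distinct configurations that produce the same input–output behaviour, which is the property the generalization in Section 2.4 turns on.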

Discussion

In the previous section we have mathematically shown that the problem identified in Doerig et al. (2019) does not only affect causal structure theories, but indeed all models of consciousness that depend non-trivially on physical systems. This includes Integrated Information Theory and Recurrent Processing Theory, but also Global Neuronal Workspace Theory or functionalist models of consciousness, to name just a few. Our results show that if the assumptions of the unfolding argument are valid,

Summary and conclusion

In this contribution, we have subjected the unfolding argument to a thorough formal analysis. Based on this analysis, we have shown that the premises of the unfolding argument entail a much stronger result than the one obtained in Doerig et al. (2019). If the premises are valid, the problem identified in the unfolding argument affects not only causal structure theories, but in fact all models of consciousness that depend non-trivially on physical systems. This includes, e.g., Integrated

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Acknowledgements

I would like to thank Robin Lorenz and Robert Prentner for discussions on this topic and Adrien Doerig for discussion of parts of this manuscript. This analysis of the unfolding argument was originally conceived of as part of a larger contribution.
