Brain states matter. A reply to the unfolding argument
Introduction
In its modern form, the scientific study of consciousness aims to uncover the laws or regularities that link conscious experience with physical systems, such as the brain. Among the remarkable successes of this young field is the creation of various theories of consciousness, also referred to as models of consciousness (Seth, 2007). The most prominent examples are Integrated Information Theory (Marshall et al., 2016, Mayner et al., 2018, Oizumi et al., 2014), Recurrent Processing Theory (Lamme, 2006) and Global Neuronal Workspace Theory (Dehaene, Changeux, & Naccache, 2011). These theories are supplemented by a large body of philosophical theories about what the relation between conscious experience and the physical domain could be, e.g. the many variants of functionalism, representationalism or type identity theory.
The unfolding argument presented in Doerig, Schurger, Hess, and Herzog (2019) claims that some of the leading models of consciousness “are either false or outside the realm of science” (p. 56). The models in question are so-called causal structure theories, which define a system’s experience in terms of the mutual interaction of its parts. If this claim is true, it has far-reaching consequences for the scientific study of consciousness, as Integrated Information Theory and Recurrent Processing Theory, two of the most promising contemporary models of consciousness, both qualify as causal structure theories.
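The technical observation behind the unfolding argument is that any recurrent network can be “unfolded” into a feedforward network with identical input-output behaviour over a finite number of time steps, even though the two differ in causal structure. The following minimal sketch illustrates this idea; the toy network, its dimensions and the unrolling scheme are our own illustration and are not taken from the original paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5                             # number of time steps
W_in = rng.normal(size=(4, 3))    # input -> hidden weights
W_rec = rng.normal(size=(4, 4))   # hidden -> hidden weights (the recurrent loop)
xs = rng.normal(size=(T, 3))      # an input sequence of length T

def recurrent(xs):
    """Recurrent network: one hidden layer feeding back onto itself."""
    h = np.zeros(4)
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

def unfolded(xs):
    """Feedforward 'unfolding': T distinct layers with copied weights.
    No unit is ever revisited, so there is no causal feedback loop."""
    layers = [(W_in.copy(), W_rec.copy()) for _ in range(T)]
    h = np.zeros(4)
    for (A, B), x in zip(layers, xs):
        h = np.tanh(A @ x + B @ h)
    return h

# Identical input-output behaviour despite different causal structure:
print(np.allclose(recurrent(xs), unfolded(xs)))  # True
```

Because the two functions agree on every input sequence of length T, no input-output experiment can distinguish them; the unfolding argument exploits exactly this gap between behaviour and internal causal structure.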
In Sections 2.1 (Setup), 2.2 (The problem) and 2.3 (The argument), we review the unfolding argument and state it in clear formal language. In Section 2.4, we prove that the premises used in the unfolding argument imply a much stronger result, namely that the problem identified in the unfolding argument holds for all models of consciousness that depend non-trivially on physical systems. This includes causal structure theories such as Integrated Information Theory, but also Global Neuronal Workspace Theory and any other model of consciousness which is functionalist or representationalist in nature or based on type identity theory. Finally, in Section 3, we discuss one premise of the argument in detail and show that it is unwarranted both in a theoretical and practical sense. Readers who are not interested in the mathematical details of the unfolding argument can jump straight to Section 3.
Section snippets
Setup
We start by explaining the setting in which the unfolding argument, and our generalization thereof, is staged. This setting indeed underlies many contemporary research programs in the scientific study of consciousness. A summary is given in Fig. 1. Before introducing the underlying assumptions, we fix terminology and notation.
In what follows, we denote by P a class of configurations of physical systems. Any p ∈ P denotes a model of the physical system, and we assume that this model contains all
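The setting described here can be sketched schematically. The symbol E for the class of experiences, the map m for a model of consciousness, and the notation io for input-output behaviour are our labels, assumed for illustration rather than taken from the paper's exact definitions:

```latex
% P : class of configurations of physical systems (as modelled)
% E : class of conscious experiences
% A model of consciousness assigns an experience to each configuration:
\[
  m \colon P \longrightarrow E, \qquad p \longmapsto m(p).
\]
% The unfolding argument concerns pairs p, p' \in P that are
% input-output equivalent, io(p) = io(p'), yet satisfy m(p) \neq m(p');
% such a difference cannot be detected by input-output experiments.
```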
Discussion
In the previous section we have mathematically shown that the problem identified in Doerig et al. (2019) does not only affect causal structure theories, but indeed all models of consciousness that depend non-trivially on physical systems. This includes Integrated Information Theory and Recurrent Processing Theory, but also Global Neuronal Workspace Theory or functionalist models of consciousness, to name just a few. Our results show that if the assumptions of the unfolding argument are valid, all of these models are either false or outside the realm of science.
Summary and conclusion
In this contribution, we have subjected the unfolding argument to a thorough formal analysis. Based on this analysis, we have shown that the premises of the unfolding argument entail a much stronger result than the one obtained in Doerig et al. (2019). If the premises are valid, the problem identified in the unfolding argument affects not only causal structure theories, but in fact all models of consciousness that depend non-trivially on physical systems. This includes, e.g., Integrated Information Theory, Recurrent Processing Theory and Global Neuronal Workspace Theory.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Acknowledgements
I would like to thank Robin Lorenz and Robert Prentner for discussions on this topic and Adrien Doerig for discussion of parts of this manuscript. This analysis of the unfolding argument was originally conceived of as part of a larger contribution.
References (23)
- Doerig, A., Schurger, A., Hess, K., & Herzog, M. H. (2019). The unfolding argument: Why IIT and other causal structure theories cannot explain consciousness. Consciousness and Cognition.
- et al. (2018). Estimating the integrated information measure phi from high-density electroencephalography during states of consciousness in humans. Frontiers in Human Neuroscience.
- Lamme, V. A. F. (2006). Towards a true neural stance on consciousness. Trends in Cognitive Sciences.
- et al. (2015). Consciousness and complexity during unresponsiveness induced by propofol, xenon, and ketamine. Current Biology.
- et al. (2009). The Oxford companion to consciousness.
- Block, N. (1981). Psychologism and behaviorism. The Philosophical Review.
- Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: basic evidence and a workspace...
- Dehaene, S., Changeux, J. P., & Naccache, L. (2011). The global neuronal workspace model of conscious access: From neuronal architectures to clinical applications.
- et al. (1998). A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences.
- Hanson, J. R., & Walker, S. I. (2019). Integrated information theory and isomorphic feed-forward philosophical zombies....
Cited by (13)
- Towards a structural turn in consciousness science. Consciousness and Cognition (2024).
- A bibliometric evaluation of the impact of theories of consciousness in academia and on social media. Consciousness and Cognition (2022).
  Citation excerpt: “In particular, given the wide scope of phenomena that consciousness science as a discipline entails, one problem for the field is that theories for consciousness could generate claims that are not empirically testable. Recently, some researchers have argued that it is impossible to empirically falsify certain theories or types of theories of consciousness (Hanson & Walker, 2020, Doerig et al., 2019, but see Kleiner (2020) and Tsuchiya et al. (2019) for replies). As Michel et al. (2019) argued, if they were indeed unfalsifiable, attributing funding for work on these theories could harm the field as a whole in the long run, especially when such funding opportunities could have supported the development of other more empirically testable theories.”
- First-person experience cannot rescue causal structure theories from the unfolding argument. Consciousness and Cognition (2022).
  Citation excerpt: “We will also explain why the specific feedforward and recurrent networks discussed by Usher are not relevant for the UA, since they are not in fact functionally equivalent. As mentioned, all sides of the debate (except Usher) seem to agree that the UA holds if consciousness science is strictly based on third-person data gathered by i/o experiments (Albantakis, 2020; Doerig et al., 2019; Hanson & Walker, 2020; Kleiner, 2020; Kleiner & Hoel, 2021; Tsuchiya et al., 2020). Therefore, the majority of counterarguments to the UA propose that there is more to consciousness research than just i/o experiments: in contrast to other sciences, first-person experiences are needed (Albantakis, 2020; Kleiner, 2020; Kleiner & Hoel, 2021; Negro, 2020; Tsuchiya et al., 2020; see also Chalmers, 1996; Goff, 2019).”