Abstract
Philosophers interested in the theoretical consequences of predictive processing often assume that predictive processing is an inferentialist and representationalist theory of cognition. More specifically, they assume that predictive processing revolves around approximated Bayesian inferences drawn by inverting a generative model. Generative models, in turn, are said to be structural representations: representational vehicles that represent their targets by being structurally similar to them. Here, I challenge this assumption, claiming that, at present, it lacks an adequate justification. I examine the only argument offered to establish that generative models are structural representations, and argue that it does not substantiate the desired conclusion. Having done so, I consider a number of alternative arguments aimed at showing that the relevant structural similarity obtains, and argue that all these arguments are unconvincing for a variety of reasons. I then conclude the paper by briefly highlighting three themes that might be relevant for further investigation on the matter.
Notes
A reader might contest this, noting that numerous accounts of generative models as structural representations have been offered (e.g. Kiefer and Hohwy 2018, 2019; Wiese 2018). I am aware of the existence of such accounts. However, to me they all seem to presuppose the success of Gładziejewski’s (2016) original argument, to then improve on it in various ways.
Notice that this is a theoretical assumption that can be theoretically contested (e.g. Orlandi, 2016).
Importantly, model inversion is not essentially an approximate process. So, by saying that a generative model is inverted, one has not yet shown how the intractability problem is solved. Since the technical details are fairly complex (see Bogacz, 2017) and will not matter for my argument, I will not sketch them here. An anonymous referee has my gratitude for having noticed this issue.
Many thanks to the anonymous reviewer who noticed that the original formulation of this point was too strong.
And in fact, according to PP, action too requires the inversion of a generative model (see Friston 2011).
Here, “relevant” means “the one adopted by Gładziejewski”. Other definitions of structural similarity are surely possible (e.g. Shea, 2018, p. 117). However, since my focus here is Gładziejewski's argument, I will stick to the definition Gładziejewski favors.
Alternatively, structural representations can be defined as: “A collection of representations in which a relation on representational vehicles represents a relation on the entities they represent” (Shea, 2018, p. 118). This definition stresses the important fact that each element of the structural representation is also a representational vehicle, whose content is determined by the relevant structural similarity in which it participates. For instance, each object on a map stands for (i.e. represents) an environmental landmark, and spatial relations among objects on a map represent spatial relations holding among the corresponding landmarks. Notice that such a nesting of representational vehicles is entirely unproblematic: after all, both a sentence and the words forming it are representational vehicles in an entirely intelligible sense. Notice further that according to both Shea’s and Gładziejewski’s definition, the relevant structural representation is the entire structure of related objects, rather than any single part of that structure. That is, the elements (V and ℜV) of a structural representation need not be, on their own, structural representations.
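The definition above can be made concrete with a toy sketch (mine, not the paper's: the map elements, landmarks, and relations are invented purely for illustration). On this definition, a structural similarity in the relevant sense holds when a one-to-one mapping carries the relation on vehicle elements exactly onto the corresponding relation on their targets:

```python
# Toy illustration of a minimal structural representation in the sense
# of Shea's/Gładziejewski's definitions. Vehicle: objects on a map;
# target: environmental landmarks. All names are invented examples.

# One-to-one mapping from vehicle elements (members of V) to targets.
mapping = {"dot_a": "church", "dot_b": "lake", "dot_c": "hill"}

# A relation on vehicle elements (e.g. "is left of" on the map) ...
vehicle_rel = {("dot_a", "dot_b"), ("dot_b", "dot_c")}
# ... and the corresponding relation on targets (e.g. "is west of").
target_rel = {("church", "lake"), ("lake", "hill")}

def structurally_similar(mapping, vehicle_rel, target_rel):
    """True iff the mapping carries the vehicle relation exactly onto
    the target relation, i.e. the structure is preserved."""
    mapped = {(mapping[x], mapping[y]) for (x, y) in vehicle_rel}
    return mapped == target_rel

print(structurally_similar(mapping, vehicle_rel, target_rel))  # True

# Distort the map by swapping two relata: the similarity fails, which
# is what makes misrepresentation possible on this account.
bad_rel = {("dot_b", "dot_a"), ("dot_b", "dot_c")}
print(structurally_similar(mapping, bad_rel, target_rel))  # False
```

The sketch also displays the nesting noted above: each map object is itself a vehicle standing for a landmark, while the whole mapped structure is the structural representation.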
This might or might not require a representational consumer. Gładziejewski asserts that a consumer is necessary in his (2015); but his (2016) does not mention consumers. Shea's definition of exploitable structural similarity (to which Gładziejewski adheres) does not require consumers, so I will skip them here. Notice that I adapted the notation in Shea's definition for the sake of orthographic consistency.
In its original formulation, the definition of decouplability also mentions representational consumers (see Gładziejewski, 2015). Here, I omit them for the reasons given in the previous footnote.
Notice that I'm not claiming that graphical models are not structurally similar to their targets. They are. As clarified above, a structural similarity might hold among any pair of entities. Yet, the relevant class of structural similarities that can be used to vindicate (a) is the class of structural similarities holding between representational vehicles and their targets; and graphs are not representational vehicles.
Or, at least, so it seems. To be honest, I believe that Kiefer is no longer committed to the claim that generative models are structural representations. Rather, it seems to me that Kiefer is committed to some form of functional role semantics. To be precise, Kiefer (2017, p. 12) seems to endorse the claim that generative models are structural representations. However, he seems to have quickly changed his mind about this point, as, in numerous later publications (Kiefer and Hohwy, 2018, p. 2393; 2019, p. 401–403; Kiefer, 2020, footnote 19) he takes the content of generative models to be determined by internal functional roles rather than by the structural similarity holding between a generative model and its target. I will more directly confront this issue in the main text, when dealing with the fourth alternative argument for (a). Many thanks to an anonymous referee for having pressed me on this issue.
Notice that the scope of my claim is restricted to PP and the usage of graphical models in the PP literature. I make no claim on how graphical models are used in the rest of cognitive neuroscience (and related disciplines). Many thanks to the reviewer who advised me to be more cautious on this point.
Importantly, this seems to be exactly how Kiefer interpreted these models; see Kiefer (2017, pp. 12–16).
The same two points seem to apply even if these models are intended to capture computational processes more generally, given that computational processes are often defined in terms of representations (see Fodor, 1981; Shagrir, 2001; Ramsey, 2007, pp. 68–77; Sprevak, 2010; Rescorla, 2012). This latter point, however, is not entirely uncontested (e.g. Piccinini, 2008).
Or both. The formulation in terms of “either (i) or (ii)” is due to the fact that it seems to me that one might interpret weighted connections either as parts of a structural representation (i.e. as members of V) or as relations among parts (i.e. as relations in ℜV).
Notice that I’m not denying that weight matrices encode the invariant relations that hold among the elements of the domain upon which the network has been trained to operate. I am only denying that there is a mapping from weight matrices (that is, from individual weights or sets of weights) to relations such that the mapping satisfies (i) or (ii). In simpler terms, I’m not denying that weight matrices represent invariant relations, I’m only denying that weight matrices represent invariant relations by being structurally similar to the target domain (or by participating in some relevant structural similarity with the target domain). Notice, importantly, that not all invariant relations need to be encoded in a vehicle that is structurally similar to its target. We might, for instance, stipulate that the sign “§” represents the fact that my father is n years older than me. If we do so, then “§” encodes an invariant relation holding between me and my father, and yet there just seems to be no structural similarity holding between “§” and the target it represents. Many thanks to an anonymous reviewer for having pressed me on this point.
Many thanks to an anonymous referee for having raised these objections.
And even if my confidence were misplaced, I would concede the point for the sake of discussion.
At this point, it might be tempting to wonder whether the relevant definition of structural similarity could be relaxed, so as to allow connections to be elements in the structural similarity in spite of the lack of any intelligible one-to-one mapping holding between them and the elements of the target domains. As an anonymous reviewer aptly noticed, O'Brien and Opie’s (2004) definition of structural similarity is not the only one on the market, and at least some alternative formulations do not require a one-to-one mapping (e.g. Kiefer and Hohwy, 2019, p. 400; Shea, 2018, p. 117). As far as I can see, the mapping can be relaxed so as to allow many elements of the vehicle to map onto one element of the target. However, I believe the mapping cannot be relaxed so as to allow one element of the vehicle to map onto many elements of the target. To see why this is the case, consider a minimal structural representation constituted by two objects a* and b* in a relation R*. Suppose that R* corresponds to a relation R, that a* corresponds to an element a and that b* maps onto two elements b and c. Now, given this mapping, the representation is accurate when aRb is the case. It is also accurate when aRc is the case. Hence, misrepresentation occurs only when both aRb and aRc are not the case. But, if this is correct, then the representation represents (aRb or aRc), and its content is disjunctive and thus indeterminate. Yet, it is widely assumed that a successful theory of content must deliver us determinate content. So, it seems to me that, in order for a structural-resemblance based theory of content to be successful, it must exclude one-to-many mappings. Now, the issue with weights in connectionist systems is that they seem to map one-to-many: each weight encodes information about many targets (see Clark, 1993, pp. 13–17; Van Gelder, 1991, pp. 42–47; Ramsey, Stich and Garon 1991, pp. 215–217 for early renditions of this point).
Hence, it seems that each weight is bound to map onto many targets, generating the problem with content determinacy. Notice, importantly, that the same line of reasoning holds even when relations map one-to-many. To see why, consider a modified version of the minimal structural representation considered above, in which a* maps onto a, b* maps onto b, and R* maps onto two relations R and F. Again, given this mapping, misrepresentation occurs only when both aRb and aFb fail to obtain, and so the representation represents (aRb or aFb). In both cases, the disjunction problem is brought about by the claim that one-to-many mappings might constitute structural similarities, invoked so as to circumvent the problems raised by superpositionality. Hence, we should not allow one-to-many mappings to constitute structural similarities. Thanks to an anonymous referee for having pressed me on this point.
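The disjunction worry can be made vivid with a small sketch (the letters a, b, c and the relation R are the footnote's placeholders; the code itself is my illustration, not the paper's). When a vehicle element maps one-to-many, the representation's accuracy condition becomes a disjunction over all admissible target facts:

```python
# Toy illustration of the footnote's minimal case: a vehicle fact a*R*b*
# where the element b* maps one-to-many onto targets b and c.

vehicle_fact = ("a*", "b*")                # the tokened vehicle relation R*
mapping = {"a*": {"a"}, "b*": {"b", "c"}}  # one-to-many mapping on b*

def accurate(world_facts):
    """The representation counts as accurate iff SOME admissible mapping
    of its elements yields a fact that obtains. Its content is therefore
    the disjunction (aRb or aRc)."""
    x, y = vehicle_fact
    candidates = {(t1, t2) for t1 in mapping[x] for t2 in mapping[y]}
    return bool(candidates & world_facts)

print(accurate({("a", "b")}))   # True: aRb obtains
print(accurate({("a", "c")}))   # True: aRc also makes it accurate
print(accurate({("a", "d")}))   # False: only now does it misrepresent
```

Misrepresentation occurs only when every disjunct fails, which is exactly the content-indeterminacy the footnote argues a structural-resemblance theory must exclude.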
More precisely, it is common in the PP literature most heavily influenced by Friston’s free energy principle. Many thanks to an anonymous referee for having noticed this imprecision.
One might contend that this verdict is premature. For the elements (i.e. objects of V and relations of ℜV) of structural representations are representational vehicles in their own right (e.g. Shea, 2018, p. 118; Ramsey, 2007, p. 79, footnote 3). Thus, claiming that the brain as a whole is a structural representation might in principle justify the claim that the relevant elements of the structural similarity (i.e. patterns of activation) are representations too, leading to a vindication of epistemic representationalism. I believe the problem with this line of reasoning is the following: the brain-world structural similarity Friston envisages is not defined over patterns of activation in the brain. Rather, it is defined over the anatomical structure of the brain. The relevant elements in the structural similarity are thus not patterns of activation. Hence, this way of vindicating (a) fails to properly vindicate the epistemic representationalist claim.
Here, one might be tempted to simply reject condition (c) and accept that entire brains are structural representations of the environment. As far as I can see, this is a legitimate move. However, it seems quite an ad hoc move. There are good independent reasons to hold that representations are necessarily decouplable from their targets (see Grush, 1997; Webb, 2006; Pezzulo, 2008; Orlandi, 2014, pp. 120–134). Moreover, abandoning (c) would likely make Gładziejewski’s account of structural representations far too liberal, as Gładziejewski himself acknowledges (Gładziejewski, 2016, p. 571).
To be clear, Kiefer and Hohwy do not explicitly set out to defend “whole brain” representations. However, it seems to me that their account entails that the whole brain is a structural representation, at least insofar as they take the entire causal network instantiated by the brain to be the relevant structural representation. A reviewer noticed that this characterization of Kiefer and Hohwy’s position might be ungenerous, since, strictly speaking, Kiefer and Hohwy speak only of connections among cortical regions. Hence, their position is best described as a form of “whole cortex”, rather than “whole brain”, representationalism. However (and the reviewer seems to agree), noticing this does not substantially alter the dialectical situation. So, I will continue to speak of Kiefer and Hohwy as endorsing a form of “whole brain” representationalism, mainly for the sake of simplicity.
Notice, importantly, that Kiefer and Hohwy seem to consider decouplability a necessary feature of representations, see (Kiefer and Hohwy 2019, p. 400).
Of course, individual patterns of activation can be decoupled from the individual target they represent in virtue of the overall brain-world structural similarity. However, to be satisfied, point (c) requires that the entire vehicle of structural representation (in this case, the whole brain) is decoupled from its target (in this case, the world). Thus, noticing that in some cases (e.g. during dreaming) certain patterns of activation are tokened in a way that is functionally independent from the incoming sensory stimulation is not sufficient to vindicate point (c). This is because individual patterns of activations are not the entire vehicle of the structural representations, but rather elements of that vehicle. Thanks to an anonymous referee for having pressed me to clarify this point.
One might object that Kiefer and Hohwy (2018, 2019) should be counted as defending structural representations because they stress that the relevant structural similarity is relevant for the system’s success. As I understand it, the problem with this line of argument is that the same holds true also for causal theories of content (see Nirshberg and Shapiro, 2020, pp. 6–7; Facchin 2021, pp. 9–12).
To be precise, Friston suggests that the dorsal horn of the spinal cord embodies an inverse model. But an inverse model still seems to me to count as a model.
Presumably, single, well-identified regions of the cortical hierarchy.
Arguably, Kiefer and Hohwy’s (2018, 2019) account is one such account, given Kiefer and Hohwy’s commitment to functional role semantics. However, given that they seem to take (wrongly, in my opinion) functional role semantics as a kind of structural resemblance, it is very hard to evaluate their proposal as an alternative to structural representations-based accounts of PP.
A reviewer noticed that structural representations are less popular in the philosophy of mind, where teleosemantic theories of content still appear to dominate. It might be worth noticing, at this point, that teleosemanticists are increasingly willing to incorporate some forms of structural similarity in their accounts (e.g. Millikan, 2020; Neander, 2017). Moreover, the standard notion of exploitable structural similarity has been elaborated within a roughly teleosemantic framework (Shea, 2018). Yet, nothing in my argument hinges on this.
References
Adams, R., et al. (2013). Predictions, not commands: Active inference in the motor system. Brain Structure and Function, 218(3), 611–643.
Albers, A. M., et al. (2013). Shared representations for working memory and mental imagery in early visual cortex. Current Biology, 23(15), 1427–1431.
Allen, M., & Friston, K. (2018). From cognitivism to autopoiesis: Towards a computational framework for the embodied mind. Synthese, 195(6), 2459–2482.
Baltieri, M., et al. (2020). Predictions in the eye of the beholder: An active inference account of Watt governors. Artificial Life Conferences. https://doi.org/10.1162/isal_a_00288.
Bastos, A. M., et al. (2012). Canonical microcircuits for predictive coding. Neuron, 76(4), 695–711.
Bickhard, M. H. (1999). Interaction and representation. Theory and Psychology, 9, 435–458.
Bogacz, R. (2017). A tutorial on the free energy framework for modeling perception and learning. Journal of Mathematical Psychology, 76, 198–211.
Bruineberg, J., et al. (2020). The emperor’s new Markov blankets [preprint]. http://philsci-archive.pitt.edu/18467/. Accessed 15 Dec 2020.
Buckley, C. L., et al. (2017). The free energy principle for action and perception: A mathematical review. Journal of Mathematical Psychology, 81, 55–79.
Chemero, A. (2009). Radical embodied cognitive science. The MIT Press.
Churchland, P. M. (1986). Some reductive strategies in cognitive neurobiology. Mind, 95(379), 279–309.
Churchland, P. M. (2012). Plato’s camera: How the physical brain captures a landscape of abstract universals. The MIT Press.
Clark, A. (1993). Associative engines. The MIT Press.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Clark, A. (2015). Predicting peace: The end of the representation wars. In T. Metzinger & J. M. Windt (Eds.), Open MIND: 7. Frankfurt am Main: The MIND Group. https://doi.org/10.15502/9783958570979.
Clark, A. (2016). Surfing uncertainty. Oxford University Press.
Clark, A. (2017). Busting out: predictive brains, embodied minds, and the puzzle of the evidentiary veil. Noûs, 51(4), 727–753.
Colombo, M., Elkin, L., & Hartmann, S. (2018). Being realist about Bayes, and the predictive processing theory of the mind. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy059.
Danks, D. (2014). Unifying the mind. The MIT Press.
Dayan, P., & Hinton, G. (1996). Varieties of Helmholtz machine. Neural Networks, 9(8), 1385–1403.
De Vries, B., & Friston, K. (2017). A factor graph description of deep temporal active inference. Frontiers in Computational Neuroscience, 11, 95.
Dolega, K. (2017). Moderate predictive processing. In T. Metzinger & W. Wiese (Eds.), Philosophy and predictive processing: 10. Frankfurt am Main: The MIND Group. https://doi.org/10.15502/9783958573116.
Dolega, K., & Dewhurst, J. E. (2020). Fame in the predictive brain: A deflationary approach to explaining consciousness in the prediction error minimization framework. Synthese. https://doi.org/10.1007/s11229-020-02548-9.
Donnarumma, F., et al. (2017). Action perception as hypothesis testing. Cortex, 89, 45–60.
Downey, A. (2018). Predictive processing and the representation wars: a victory for the eliminativists (via fictionalism). Synthese, 195(12), 5115–5139.
Facchin, M. (2021). Structural representations do not meet the job description challenge. Synthese. https://doi.org/10.1007/s11229-021-03032-8.
Fodor, J. (1981). The mind body problem. In J. Heil (Ed.), (2004), Philosophy of mind: A guide and anthology. (pp. 162–182). Oxford University Press.
Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society B, 360(1456), 815–836.
Friston, K. (2009). The free-energy principle: A rough guide to the brain? Trends in Cognitive Sciences, 13(7), 293–301.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
Friston, K. (2011). What is optimal about motor control? Neuron, 72(3), 488–498.
Friston, K. (2013a). Active inference and free-energy. Behavioral and Brain Sciences, 36(3), 132–133.
Friston, K. (2013b). Life as we know it. Journal of The Royal Society Interface, 10(86), 20130475.
Friston, K. (2019). Beyond the desert landscape. In M. Colombo, E. Irvine, & M. Stapleton (Eds.), Andy clark and his critics. (pp. 174–190). Oxford University Press.
Friston, K., et al. (2010). Action and behavior, a free-energy formulation. Biological Cybernetics, 102(3), 227–260.
Friston, K., et al. (2017a). The graphical brain: belief propagation and active inference. Network Neuroscience, 1(4), 381–414.
Friston, K., et al. (2017b). Active inference: A process theory. Neural Computation, 29(1), 1–49.
Friston, K., et al. (2017c). Active inference, curiosity and insight. Neural Computation, 29(10), 2633–2683.
Gładziejewski, P. (2015). Explaining cognitive phenomena with internal representations: A mechanistic perspective. Studies in Logic, Grammar and Rhetoric, 40(1), 63–90.
Gładziejewski, P. (2016). Predictive coding and representationalism. Synthese, 193(2), 559–582.
Gładziejewski, P. (2017). Just how conservative is conservative predictive processing? Internetowy Magazyn Filozofinczny Hybris, 38, 98–122.
Gładziejewski, P., & Miłkowski, M. (2017). Structural representations: Causally relevant and different from detectors. Biology and Philosophy, 32(3), 337–355.
Goodman, N. (1969). The languages of art. Oxford University Press.
Grush, R. (1997). The architecture of representation. Philosophical Psychology, 10(1), 5–23.
Haykin, S. (2009). Neural networks and learning machines. Pearson.
Hinton, G. (2007a). To recognize shapes, first learn to generate images. Progress in Brain Research, 165, 535–547.
Hinton, G. (2007b). Learning multiple layers of representations. Trends in Cognitive Sciences, 11(10), 428–434.
Hinton, G. E. (2014). Where do features come from? Cognitive Science, 38(6), 1078–1101.
Hinton, G. E., & Sejnowski, T. E. (1983). Optimal perceptual inference. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Vol. 448.
Hohwy, J. (2013). The predictive mind. Oxford University Press.
Hohwy, J. (2015). Prediction, agency, and body ownership. In A. K. Engel, K. Friston, & D. Kragic (Eds.), The pragmatic turn. (pp. 109–138). The MIT Press.
Hohwy, J. (2016). The self-evidencing brain. Noûs, 50(2), 259–285.
Hohwy, J. (2017). How to entrain your evil demon. In T. Metzinger & W. Wiese (Eds.), Philosophy and predictive processing: 2. Frankfurt am Main: The MIND Group. https://doi.org/10.15502/9783958573048.
Hohwy, J. (2018). The predictive processing hypothesis. In A. Newen, L. De Bruin, & S. Gallagher (Eds.), The Oxford handbook of 4E cognition. (pp. 129–146). Oxford University Press.
Hohwy, J. (2019). Prediction error minimization in the brain. In M. Sprevak & M. Colombo (Eds.), The routledge handbook of the computational mind. (pp. 159–172). New York: Routledge.
Hohwy, J. (2020). New direction in predictive processing. Mind & Language. https://doi.org/10.1111/mila.12281.
Huang, Y., & Rao, R. P. N. (2011). Predictive coding. Wiley Interdisciplinary Reviews: Cognitive Science, 2(5), 580–593.
Kandel, E. R., Schwartz, J. H., Jessel, T. M., Siegelbaum, S. A., & Hudspeth, A. J. (Eds.). (2012). Principles of neural science (5th ed.). McGraw-Hill.
Kiefer, A. (2017). Literal perceptual inference. In T. Metzinger & W. Wiese (Eds.), Philosophy and predictive processing: 17. Frankfurt am Main: The MIND Group. https://doi.org/10.15502/9783958573185.
Kiefer, A. (2020). Psychophysical identity and free energy. Journal of the Royal Society Interface. https://doi.org/10.1098/rsif.2020.0370.
Kiefer, A., & Hohwy, J. (2018). Content and misrepresentation in hierarchical generative models. Synthese, 195(6), 2387–2415.
Kiefer, A., & Hohwy, J. (2019). Representation in the prediction error minimization framework. In S. Robins, J. Symons, & P. Calvo (Eds.), The Routledge companion to philosophy of psychology. (2nd ed., pp. 384–410). Routledge.
Kilner, J., Friston, K., & Frith, C. (2007). Predictive coding: An account of the mirror neuron system. Cognitive Processing, 8(3), 159–166.
Kirchhoff, M. D., & Robertson, I. (2018). Enactivism and predictive processing: a non-representational view. Philosophical Explorations, 21(2), 264–281.
Knill, D., & Pouget, A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712–719.
Koski, T., & Noble, J. (2009). Bayesian networks: An introduction. Wiley.
Lee, J. (2018). Structural representations and the two problems of content. Mind & Language, 34(5), 606–626.
Leitgeb, H. (2020). On non-eliminative structuralism: Unlabeled graphs as a case study, part A. Philosophia Mathematica. https://doi.org/10.1093/philmat/nkaa001.
Matsumoto, T., & Tani, J. (2020). Goal directed planning for habituated agents by active inference using a variational recurrent neural network. Entropy, 22(5), 564.
McClelland, J., & Rumelhart, D. (1986). Parallel distributed processing. (Vol. II). The MIT Press.
Mesulam, M. (2008). Representation, inference, and transcendent encoding in neurocognitive networks of the human brain. Annals of Neurology, 64(4), 367–378.
Millikan, R. G. (2020). Neuroscience and teleosemantics. Synthese. https://doi.org/10.1007/s11229-020-02893-9.
Morgan, A. (2014). Representations gone mental. Synthese, 191(2), 213–244.
Neander, K. (2017). A mark of the mental. The MIT Press.
Nirshberg, G., & Shapiro, L. (2020). Structural and indicator representations: a difference in degree, not in kind. Synthese. https://doi.org/10.1007/s11229-020-02537-y.
O'Brien, G. (2015). How does the mind matter? In T. Metzinger & J. M. Windt (Eds.), Open MIND: 28. Frankfurt am Main: The MIND Group. https://doi.org/10.15502/9783958570146.
O’Brien, G., & Opie, J. (2001). Connectionist vehicles, structural resemblance, and the phenomenal mind. Communication and Cognition, 34(1/2), 13–38.
O’Brien, G., & Opie, J. (2004). Notes towards a structuralist theory of mental representations. In H. Clapin, P. Staines, & P. Slezak (Eds.), Representation in mind: New approaches to mental representation (pp. 1–20). Elsevier.
Orlandi, N. (2014). The innocent eye. Oxford University Press.
Orlandi, N. (2016). Bayesian perception is ecological perception. Philosophical Topics, 44(2), 327–352.
Pezzulo, G. (2008). Coordinating with the future: The anticipatory nature of representation. Minds and Machines, 18(2), 179–225.
Piccinini, G. (2008). Computation without representation. Philosophical Studies, 137(2), 205–241.
Poldrack, R. (2020). The physics of representation. Synthese. https://doi.org/10.1007/s11229-020-02793-y.
Ramsey, W. (2007). Representation reconsidered. Cambridge University Press.
Ramsey, W. (2020). Defending representational realism. In J. Smortchkova, K. Dolega, & T. Schlich (Eds.), What are mental representations? (pp. 54–78). Oxford University Press.
Ramsey, W., Stich, S. P., & Garon, J. (1991). Connectionism, eliminativism and the future of folk psychology. In W. Ramsey, S. P. Stich, & D. E. Rumelhart (Eds.), Philosophy and connectionist theory. (pp. 199–228). Routledge.
Ramstead, M., Kirchhoff, M. D., & Friston, K. (2019). A tale of two densities: Active inference is enactive inference. Adaptive Behavior, 1059712319862774.
Rao, R., & Ballard, D. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive field effects. Nature Neuroscience, 2(1), 79–87.
Rescorla, M. (2012). How to integrate representations in computational modeling, and why we should. Journal of Cognitive Science, 13(1), 1–38.
Rogers, T. T., & McClelland, J. L. (2014). Parallel Distributed Processing at 25: Further explorations in the microstructure of cognition. Cognitive Science, 38(6), 1024–1077.
Rumelhart, D., & McClelland, J. (1986). Parallel distributed processing. (Vol. I). The MIT Press.
Seth, A. (2014). A predictive processing theory of sensorimotor contingencies: Explaining the puzzle of perceptual presence and its absence in synesthesia. Cognitive Neuroscience, 5(2), 97–118.
Seth, A. (2015). The cybernetic Bayesian brain. In T. Metzinger & J. M. Windt (Eds.), Open MIND: 35. Frankfurt am Main: The MIND Group. https://doi.org/10.15502/9783958570108.
Seth, A., & Friston, K. (2016). Active interoceptive inference and the emotional brain. Philosophical Transactions of the Royal Society B, 371(1708), 20160007.
Shagrir, O. (2001). Content, computation and externalism. Mind, 110(438), 369–400.
Shea, N. (2013). Perception versus action: the computations might be the same but the direction of fit differs. Behavioral and Brain Sciences, 36(3), 228–229.
Shea, N. (2014). VI: Exploitable isomorphism and structural representation. Proceedings of the Aristotelian Society, 114(22), 123–144.
Shea, N. (2018). Representations in cognitive science. Oxford University Press.
Shipp, S. (2016). Neural elements for predictive coding. Frontiers in Psychology, 7, 1792.
Sims, A. (2017). The problems with prediction. In T. Metzinger & W. Wiese (Eds.), Philosophy and predictive processing: 23. Frankfurt am Main: The MIND Group. https://doi.org/10.15502/9783958573246.
Sporns, O. (2010). Networks in the brain. The MIT Press.
Spratling, M. W. (2016). Predictive coding as a model of cognition. Cognitive Processing, 17(3), 279–305.
Sprevak, M. (2010). Computation, individuation and the received view on representation. Studies in History and Philosophy of Science Part A, 41(3), 260–270.
Sprevak, M. (2013). Fictionalism about neural representations. The Monist, 96(4), 539–560.
Tani, J. (2016). Exploring robotic minds. Oxford University Press.
van Es, T. (2020). Living models or life modelled? On the use of models in the free energy principle. Adaptive Behavior. https://doi.org/10.1177/1059712320918678.
Van Gelder, T. (1991). What is the “D” in “PDP”? A survey of the concept of distribution. In W. Ramsey, S. P. Stich, & D. E. Rumelhart (Eds.), Philosophy and connectionist theory (pp. 33–61). Routledge.
Van Gelder, T. (1992). Defining distributed representations. Connection Science, 4(3–4), 175–191.
Webb, B. (2006). Transformation, encoding and representation. Current Biology, 16(6), 184–185.
Wiese, W. (2017). What are the contents of representations in predictive processing? Phenomenology and the Cognitive Sciences, 16(4), 715–736.
Wiese, W. (2018). Experienced wholeness: Integrating insights from gestalt theory, cognitive neuroscience and predictive processing. The MIT Press.
Williams, D. (2017). Predictive processing and the representation wars. Minds And Machines, 28(1), 141–172.
Williams, D. (2018a). Predictive coding and thought. Synthese, 197(4), 1749–1775.
Williams, D. (2018b). Predictive minds and small-scale models: Kenneth Craik’s contribution to cognitive science. Philosophical Explorations, 21(2), 245–263.
Williams, D., & Colling, L. (2017). From symbols to icons: The return of resemblance in the cognitive science revolution. Synthese, 195(5), 1941–1967.
Yuille, A., & Kersten, D. (2006). Vision as Bayesian inference: Analysis by synthesis? Trends in Cognitive Science, 10(7), 301–308.
Acknowledgments
The author wishes to thank the participants in the IUSS WIP seminars for useful feedback on the essay. Thanks also to Niccolò Negro and Giacomo Zanotti for their useful comments on a previous version of this essay. A special thanks goes to Eleonora, for her moral support.
Funding
This work has been funded by the PRIN Project “The Mark of Mental” (MOM), 2017P9E9N, active from 9.12.2019 to 28.12.2022, financed by the Italian Ministry of University and Research.
Contributions
MF is the sole author of the paper.
Ethics declarations
Conflict of interests
The author declares no conflict of interests.
Cite this article
Facchin, M. Are Generative Models Structural Representations?. Minds & Machines 31, 277–303 (2021). https://doi.org/10.1007/s11023-021-09559-6