By “phenomenology” we usually mean a style of thought and a variegated array of authors who originate, directly or indirectly, from the theoretical project promoted by Edmund Husserl. Since the family resemblances among authors and positions do not immediately allow the identification of a common nucleus, in order to delimit the sense of “phenomenology” intended here, we will associate the term as closely as possible with the Husserlian reading.

The term “complexity” is applied in a multiplicity of fields and in forms that are often difficult to compare with each other, and its use is frequently more suggestive than clearly defined. However, we can recall some fundamental insights behind the introduction of the idea of complexity. First and foremost, “complexity” names a limitation of the claims of the modern conception of nature, seen as deterministically (linearly) computable and as reducible to the composition of elementary processes within a unitary predictable process. The mention of complexity therefore evokes anti-reductionist, anti-determinist (stochastic) and emergentist claims (cf. Holland, 2014). “Complexity” is thus placed halfway between ontology and epistemology: it expresses some characters of being (of nature) that constitutively elude epistemic claims such as determinism, reductionism, etc.

The intent of the following pages is to show that some substantial ontological conclusions, consistent with the idea of “complexity”, can be demonstrated through a series of elementary observations produced from a phenomenological perspective. In particular, it will be shown that, on a phenomenological basis, it is necessary to acknowledge an ontology where the forms of ontological efficacy are irreducible to efficient causality, where the relations between properties are irreducible to deduction, where raw qualities must exist originally, where qualities generate further qualities that emerge from them, and where no explanatory model can be adequate to the explanation of reality unless it includes the functions of consciousness.

1 Phenomenology and Complexity

That phenomenology has the theoretical characteristics required to speak on the ontological level has been, and is, a controversial issue, since the meaning that Husserl attributes to the notion of “ontology” is by no means immediately intuitive. Phenomenology as such is designated by Husserl as “wahre und echte universale Ontologie” (“true and authentic universal ontology”) (Hua I, 181). But the problem of the nature of ontology in phenomenology has the same degree of difficulty as the age-old problem of the “idealist” or “realist” character of the phenomenological approach. The phenomenological project aims at founding knowledge, where well-founded knowledge is legitimately represented only by meanings capable of grasping reality. From the Husserlian point of view, a rigid contrast between epistemology and ontology is therefore meaningless. For Husserl, ontology is above all “eidetic phenomenology” (Hua IX, 298), i.e., an investigation of formal essences (categories valid for “something in general”) (Hua XVII, 82) or of the material essences of the “life-world” (Hua VI, 145). Therefore, in Husserlian terms, we speak of “ontology” when we address the question of the essential structures that govern the encounter between consciousness and the transcendent world. By “transcendent” in phenomenology we mean “that which consciousness recognizes as subsisting beyond the acts of consciousness”; “transcendent”, therefore, is what consciousness encounters and acknowledges as other than itself. Although the phenomenological method largely relies on procedures of reflective analysis, such as “eidetic variation” (Hua XLI), the essential structures manifest themselves in a privileged way in the encounter between consciousness and transcendent otherness, in the first instance at the perceptual level.

The “constitution” (Konstitution)¹ of intentional objects for consciousness therefore occurs primarily in the sphere of perceptive acts, where perceiver and perceived are inseparable in their contribution to the emergence of the phenomenon. The founding priority of perception over acts of remembrance and imagination is an essential point for understanding the profound union of epistemology and ontology in Husserl’s phenomenology. The percept is at once the first source of what we call “reality” and of what we consider “knowledge”. The phenomenological method enables us to examine the structuring of what manifests itself (the “phenomena”) without severing in advance what pertains to reality from what pertains to appearance, what is subjective from what is objective.

Phenomenology is first of all a method. It starts both from a problem of the foundation of knowledge and from a primitive, simple, yet crucial observation: that knowledge is possible. This approach was already present at the roots of Kantian transcendental reflection, when Kant was trying to understand how disciplines that presented themselves as a priori, i.e. independent of experience, such as logic, arithmetic, and geometry, could synthesize the empirical data examined by the science of nature. The problem of a “transcendental affinity” between the subject and the world was at the centre of Kant’s reflection, and in a different form it is also at the centre of Husserl’s. For Husserl, the fact that knowledge is possible says first and foremost that the main units we deal with in thought and action are not particulars. These units are not irreducible individuals, but have the characteristics of intertemporal identity and generality (that is, instantiability on an indefinite number of occasions): they are therefore called “essences” (Wesen). If we wanted to understand an instance of knowledge as conformity between two individual facts (for example, a particular brain state and a particular extracorporeal event), then we would find it impossible to define conformity or non-conformity, adequacy or inadequacy: two absolutely individual entities have, by definition, nothing in common. When we talk about “commonalities” we are already talking about “essences”, i.e. units endowed with identity, stability, and repeatability over time. The phenomenological method is therefore intended as a method capable of bringing to light “essences” or “forms”, stable meaningful units that belong to thought as well as to reality.

How does the phenomenological method proceed? The original move of phenomenology is meant to bring forth the ground of “phenomena” as the first ontological ground. But in what sense can we say that a “phenomenon” is an element of an ontology? If ontology is the science of being, then how can something that by definition involves appearing (phenomenon) belong to it? Indeed, the “phenomenon” of which phenomenology speaks is no more appearance than reality. It is self-evident that in a primary sense whatever manifests itself, in whatever way it may manifest itself, is something that “appears to us”. In the space of what appears to us, we normally, and often unreflectively, make attributions of reality: certain forms of appearance (percepts, but also “folk theories”) are posited as conveying reality. The phenomenological method questions the obviousness of this passage, and it does so by first creating the space of phenomena.

The space of phenomena is brought to light when we exercise the Epoché, that is, a suspension of judgment on the reality status of what appears. Consequently, the Epoché also suspends any explanation aimed at causally tracing an appearance back to a reality. Every ordinary explanation assumes some reality as known and proceeds to reduce the unknown (the apparent) to that assumed reality. This way of proceeding is never radical, because it relies on a provisionally shared opinion about what a “known reality” would be. Phenomenology shifts the playing field by bringing it back to the sphere of phenomena, that is, to the sphere where everything that manifests itself is granted citizenship: what we call perception and what we call illusion, what we define as thought and what we define as being. This is the largest conceivable field, the one where all givenness takes place. Starting from this non-bypassable sphere, phenomenological analysis begins to expose the ways in which relations of dependence between phenomena manifest themselves. Thus, for example, phenomenological analysis shows how the contents of imagination are tributary to perception, or how the structure of objective time is grounded in the structure of time immanent in consciousness. These “essential legalities” (Wesensgesetzlichkeiten), as Husserl calls them (Hua I, 106), are the first and most radical evidential basis on which any knowledge can be built. No raw empirical datum and no scientific theory can replace such an evidential source.

Phenomenological evidence does not guarantee absolute “truthfulness”, but it is the highest level of certainty that we can draw upon. Any other conclusion, verification, or inference must presuppose what the phenomenological exercise brings out.

Therefore, the phenomena of which phenomenology speaks are already fully part of ontology, as they have a form of existence. A phenomenon is everything that appears as a possible intentional object, as a possible object of consciousness: not only sensible appearances and their preconditions are phenomena, but also the logical forms that allow us to draw conclusions from this or that appearance, as well as the intentional modes (asserting, doubting, believing, etc.).² Not all phenomena can be placed at the same foundational level. What determines the foundational level of phenomena is the analysis of the dependencies among them; this is how Husserl can identify some phenomenal orderings as primary and irreducible evidences, and others as constructs, derivations, implications, with a reduced degree of certainty compared to the former, primary level.

To define the criteria of truth, of the real, of the grounded, and of what is inferentially correct is not and cannot be the task of any particular fact or theory, because facts and theories have always made use of those criteria. Phenomenology brings to light these criteria as “essential legality”.

The crucial point in phenomenological conceptuality, which is a decisive point for the considerations to come, is that phenomena and the “essential legality” that phenomenology brings to light must not, and cannot, be considered more subjective than objective. Phenomena are first of all articulations of our “being-in-the-world”: they are neither primarily “thoughts” (even if only thought recognizes them), nor “facts” (even if they share “self-givenness” with facts).

2 The Irreducibility of the Qualitative and Emergent Properties

The first question we want to address concerns an elementary character of physical ontology. The classical model of nature, which lies at the origins of modern science, postulated a deep affinity between the quantitative logic of mathematics and the essence of physical reality. Metaphysical visions, such as the Galilean image of the universe written in mathematical characters, rather than demonstrative arguments, are at the basis of this postulate on the quantitative essence of nature. And yet this assumption deeply influenced the historical development of modern science. The remote origin of this vision can be traced back to the Democritean idea of atoms devoid of qualitative characteristics and distinguishable only by shape, order and position. Galileo reformulated that vision by conceiving nature as a great book written in mathematical characters (Galilei, 1964: 631–632), and by placing the “primary qualities” (shape, size, motion, position) at the foundation of natural being. This approach found a crucial realization in the Cartesian idea of a reduction of geometry (the theory of material solids) to arithmetic, through “analytic geometry”. More or less openly, this conception of nature as fundamentally akin to classical mathematics, and therefore as deterministic and governed by deductive laws, permeates the eighteenth and nineteenth centuries, down to the quantum revolution. But even after that theoretical turning point, the implicit assumption of a fundamentally quantitative essence of nature has remained dominant. This happened because methodological reasons were transferred to the ontological level: since mathematical tools are and remain crucial, their adoption is spontaneously combined with the idea that the reality to which they are applied must have a “number-like” character.

Some might believe that this opaque postulate has no great implications, and that it is perhaps limited to supporting a scientistic mindset where mathematical modelling is crucial. However, while on the operational level – the level of scientific practice – this tacit ontological postulate does not produce any problem, things change when the ontological representation is conveyed outside the scientific sphere. Here we are faced with a manifest discrepancy between our primary knowledge of the world (ordinary actions and perceptions) and what that tacit ontological postulate supports, with the authority of scientific judgment. What we have in front of us is an implicit image of the world that in effect declares delusional everything that appears to us as primary evidence: the life-world. Perhaps this discrepancy does not produce damage at the level of scientific practice, but it certainly does so at the ethical and existential level, as it creates the conditions for a drying up and impoverishment of our conception of the world.

Now, let us ask ourselves: how can we judge such a quantitative ontological postulate from the radically foundational perspective of phenomenology? Do we have the tools to question it? The answer is affirmative and follows a rather straightforward line of reasoning.

Let’s start again from the Husserlian concept of “phenomenon”. Husserlian reflection has its roots in the model of Descartes’ meditations, while modifying it in a decisive way. Descartes tried to provide certainty to the foundation of knowledge through the process of methodical doubt, which arrived at a single ultimate evidence: the existence of the ego as a thinking thing. It is well known how difficult it was for Descartes to get out of that foundational corner and return from that first foundation to the knowledge of the perceptual world and its articulations. In Husserl the process differs on one essential point: the cogitationes as such, the phenomena, are certain. Every phenomenon exists, in its own forms and limits, and this is absolutely irrefutable. We do not know their specific form of existence: we do not know whether we must attribute existence to them in objective space or not. We do not know whether they are physical, logical or psychological facts. However, it is important to keep in mind this primary fact: in our world, phenomena such as Vermeer’s paintings and Sibelius’ symphonies, hopes for the future and nostalgia for the past, pain from a wound and pleasure from satisfying a need, do exist.

Now, the first question we must ask ourselves is the following: in an allegedly quantitative ontology, one that is reductionist in the sense of “primary qualities” and atomistic in the Democritean sense, could we ever make room for the phenomenal world of which we have first-hand certainty?

It is difficult to imagine how this could ever be possible. No world made up of number-like entities, of mere quantities, could ever generate a world of qualitative phenomena such as the one we live in. Even if we wanted to attribute a merely “subjective” status to a symphony, a pain or a thought, it remains clear that symphonies, pains and thoughts also have a form of existence, which must be justified.

Here we are faced with a first conclusion of ontological character: the world we live in must be made up of irreducible qualities and cannot have an originally quantitative character. To reach this conclusion we do not need experimental investigations, nor do we need to abandon the sphere of phenomena.

This step opens up some simple yet important corollaries.

In the first place, how are we to conceive of the combinatorial processes between qualitative units? In a world made up of ultimate quantitative elements, it would be logical to expect the general applicability of linear, deterministic, and deductive computations, even if, occasionally, no effective computation is available. In a quantitative ontology, the characteristics of the sum of the elements, of their unification into any whole, are always attributable to the characteristics of the elements as they are given before entering the sum. The whole must coincide with the sum of the parts. There is no room for critical thresholds or for effects that cannot be deduced from knowledge of the premises.
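In a schematic notation of our own (the text gives no formulas), the quantitative postulate amounts to requiring that every property of a whole be additive, i.e. linear, in its parts:

```latex
% Our schematic rendering (not the author's notation): in a purely
% quantitative ontology every property F of a composite is additive,
\[
F(a \oplus b) \;=\; F(a) + F(b),
\]
% so composition can never produce thresholds or emergent properties:
% the whole is exactly the sum of its parts.
```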

The methodological reasons for advocating such an ontological model are obvious: such an ontology lends itself to being understood and explained a priori. A quantitative ontology is clearly desirable for reaching maximum predictability and control over causal chains, but beyond its desirability for these ends, very little speaks in favour of such a perspective. If, on the contrary, we have to admit the existence of ultimate qualitative elements, then we must envisage different predictive scenarios. Specifically, we must expect two effects: (1) a sum of elements can generate a change in the properties of the whole, and (2) the composition of qualitatively different elements can generate properties that differ from those of the starting elements.

The first point states that, while there are no reasons to suppose that the addition of purely quantitative elements produces a change in the properties of the sum, variations in the quantity of elements can lead to a change in properties when irreducible qualities are involved.

An increase in brightness can increase the visibility of an event, up to the point where too much light blinds us, making the event invisible. An increase in kinetic energy in a material can increase its temperature, until a change of state (to liquid, or gas) takes place. An increase in mass can build up the force of gravity affecting events governed by ordinary laws, until the gravitational increase gives rise to a singularity (a black hole) where ordinary laws are no longer recognizable.

Threshold effects in nature are the norm, not the exception. But in principle, our strictly deductive forecasting capacity works only in those spaces where there are no qualitative “thresholds”; the prototype of natural reality where we can work in an “a priori” mode is something like this: five kilos of iron plus five kilos of iron make ten kilos of iron, where iron preserves exactly the same properties, whether it appears in a set of lower or higher mass. The ideal reference model for the quantitative conception of nature is given by all those circumstances in which we do not encounter threshold effects. Of course, whenever a threshold effect takes place, provided that its characteristics have been studied, the novel traits can also enter a predictable computation (the new characteristics of ice or vapor can enter further predictions, after discovering what happens to water when it changes state).
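The contrast can be made vivid with a minimal sketch in Python (our illustration, not part of the original argument), using the text’s own example of water changing state: the input varies purely quantitatively, while the property of the whole changes qualitatively at thresholds.

```python
# Minimal illustrative sketch (our addition): a smoothly varying quantity
# (temperature) yields qualitatively distinct states at threshold points.

def state_of_water(temp_celsius: float) -> str:
    """Map a purely quantitative input to a qualitatively distinct state."""
    if temp_celsius <= 0:
        return "solid (ice)"
    elif temp_celsius < 100:
        return "liquid"
    else:
        return "gas (vapor)"

# Equal quantitative increments, qualitatively unequal outcomes:
for t in (-10, 40, 90, 110):
    print(t, "->", state_of_water(t))
# Between 40 and 90, a +50 increment changes nothing qualitative;
# between 90 and 110, a mere +20 crosses a threshold and properties
# appear (those of vapor) that were absent below it.
```

The point of the sketch is the one made above: once a threshold has been studied, it can enter a predictable computation; but nothing in the below-threshold behaviour allowed its deduction a priori.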

The second point recalls an idea similar to the previous one, but from a different perspective. The first point emphasizes the fact that a purely quantitative growth of a qualitative factor can produce qualitative changes, that is, changes in the properties of the whole. The second point, on the other hand, emphasizes the fact that the union of two qualities, when it reaches a fusion and is not just an extrinsic juxtaposition (such as oil and water), can produce properties that cannot be anticipated on the basis of the mere knowledge of the component qualities. From the visual knowledge of the primary colours green and red we cannot deduce a priori that their fusion will produce yellow. From the arrangement of the stars in the sky we cannot deduce a priori the visual formation of configurations (constellations). From the separate existence of individual notes, we cannot deduce a priori the melodic effect of their succession in musical clusters. From the analysis of the seed we cannot deduce a priori the characteristics of a tree. From the knowledge of the properties of hydrogen and oxygen we cannot deduce a priori the properties of water, just as we could not deduce a priori the properties of table salt from those of chlorine and sodium (Rothschild, 2006: 152–153).
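The colour example can be rendered as a toy computation in Python (our illustration; the additive RGB model is an assumption of ours, not discussed in the text): the arithmetic of composition is trivial, while the resulting quality is not contained in either component as experienced.

```python
# Illustrative sketch (our addition): additive mixing of red and green light.
red = (255, 0, 0)
green = (0, 255, 0)

# Componentwise addition, clipped to the displayable range:
mix = tuple(min(r + g, 255) for r, g in zip(red, green))
print(mix)  # (255, 255, 0): the RGB code conventionally displayed as yellow
# The numbers compose deductively; the experienced hue "yellow" is a new
# quality, not deducible from the visual knowledge of red and green alone.
```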

Of course, the assumption of the constancy and uniformity of nature allows for reliable a posteriori inferences. The physicist who knows the elements of hydrogen and oxygen, and how they combine in water, will be able to use this previous knowledge in all subsequent instances in a predictive way, assuming that what was valid in the past will continue to be valid in the future. Furthermore, previous experience can guide us in developing abductive skills, that is, it can train us to produce “educated guesses”. The talented composer, on the basis of previous experience, can foresee the possible effectiveness of some musical solutions before playing them, even if he or she will become fully aware of their effect only by playing them. After learning how hydrogen and oxygen combine in water, it became possible to use that knowledge to conjecture which further compounds and effects hydrogen and oxygen might produce, even though the empirical reproduction of the effect was always required to ascertain those hypotheses.

The “emergent” character of a qualitative ontology has interesting consequences. It suggests that even a limited set of ultimate qualities can in principle be generative of an infinite number of properties. This is the case because, if the combination of two qualities can generate a third that is endowed with new properties, and if the quantitative increase of the same qualitative entity can generate threshold effects, which again disclose different properties, then the number of potentially different properties in the universe is virtually infinite. The ontology that we must recognize is such as to allow, in principle, an indefinite number of properties in nature, different from and additional to those we are used to and aware of.
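A toy model in Python (our illustration, under the stated assumption that each fusion of two qualities may yield a new one) shows how quickly even two primitive qualities generate an open-ended family of derived ones:

```python
# Illustrative sketch (our addition): if every pairwise combination of
# qualities may yield a new quality, the space of properties is unbounded.
from itertools import combinations

qualities = {"A", "B"}
for generation in (1, 2, 3):
    new = {f"({x}+{y})" for x, y in combinations(sorted(qualities), 2)}
    qualities |= new
    print(generation, len(qualities))  # prints 1 3, 2 5, 3 12: unbounded growth
```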

Finally, one last point should be noted, which has only metaphysical implications, but which nevertheless deserves to be pointed out. An ontology that considers processes able to generate new properties on the basis of changes in relations and quantities cannot warrant the ordinary physical assumption of a general uniformity of nature over time. Such an assumption remains methodologically indispensable for drawing any kind of sustainable inference and therefore for making our knowledge work. However, it must be clear that, as the world appears to us through its primary phenomenal manifestations, the uniformity of nature can only be a methodological assumption and never an ultimate reality. The state of affairs exemplified by Nelson Goodman, where emeralds may at some point turn out to be “emerubies” (Goodman, 1983: 73 ff.), that is, to have the properties of emeralds up to a certain point in time and thereafter those of rubies, is a structural possibility of the world we live in.

3 The Explanatory Insufficiency of Efficient Causality

At this point we must focus on a further step, concerning the analysis of causal processes. When we contrast causal theories of complexity with classical theories we usually find ourselves faced with theories that go beyond the linear causality model and include stochastic processes, interactions among the parts of the whole, and forms of feedback and self-organization of the system (Bickhard & Campbell, 2000: 342). The challenge in these models lies in being able to elaborate predictive forms for more realistic and broader systems, which are more similar to the realities we meet outside laboratory conditions.

Here too we want to ask whether a rigorously phenomenological analysis is able to make an autonomous conceptual contribution to the questioning of causality, as that idea is applied in our conception of the natural world. Let us begin with a brief analysis of the form of the ordinary concept of causality, which corresponds to Aristotelian efficient causality. Its ideal basic case is a sort of projection of the deductive model onto the physical sphere: in the presence of necessary and sufficient conditions, the physical effect must be produced in a deterministic way. Here logical necessity and physical necessity seem to meet ideally.

However, in the normality of real circumstances we deal with states of affairs where we basically never have sufficient conditions, but at most necessary ones: we may know some necessary premises for an effect to occur, but we never have complete knowledge of what would guarantee the effect. Here the ideal determinism of efficient causality can be translated into probabilistic consequentiality: given certain conditions, the possible outcomes have a certain probability distribution (in the ideal optimal case, with necessary and sufficient conditions, a single outcome has probability 1).
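Schematically, in a notation of our own (the text gives none), the contrast reads:

```latex
% Our schematic rendering: with jointly sufficient conditions C the effect E
% is determined; with merely necessary, incomplete conditions, only a
% probability distribution over outcomes remains.
\[
P(E \mid C_{\mathrm{sufficient}}) = 1,
\qquad
0 < P(E \mid C_{\mathrm{necessary\ only}}) < 1 .
\]
```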

When we talk about causality we tend to envisage a primary, intuitive and very generic idea, in which a “cause” is just anything that “makes something be”, while in fact the implicitly assumed picture coincides with Aristotelian efficient causality. The first idea, intuitively indisputable, is rooted in our first-person experience, and specifically in the sphere of action: a cause is something that acts on something else. We can refer to this idea of generic causality as “efficacy”. The second idea (efficient causality) implies a (more or less) sophisticated explanatory model.

The only side of causality that is directly manifest to us is efficacy, where we act by producing effects, or something else appears to act and to produce effects. As von Wright observed, the reference to agency in causation cannot be circumvented (von Wright, 1971). If we are faced with a stable space-time correlation, such that event A is constantly followed by event B, then this is a case of indirect causal evidence: in fact, we imagine that A “acts” on B.

That this evidence is merely derivative becomes clear when we observe that, in principle, any space-time correlation between A and B may always be due to both A and B having a common underlying cause (C), unknown to us, without A directly causing B. This situation of indeterminacy can be replicated for any further space-time correlation (therefore also for C). The situation changes only when we can intervene on the system: if, in the face of a correlation, we intervene on the upstream state of affairs A and we see that the downstream events B come out modified, then we have reasons to say that A caused B. This happens because our intervention, in our eyes, is endowed with a special status, the status of an irreducible originating cause. The identification of something as a “cause” rests on an intuitive model, most evidently provided by first-person experience, in which we as agents understand ourselves as irreducible causes or sources, while the connection of facts in regular forms provides us only with correlations (as in the Humean analysis), which can be interpreted in various ways.
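This argument has a well-known formal parallel, which we add here purely for illustration (the text does not invoke it): in the interventionist notation of contemporary causal modelling, observation and intervention come apart exactly as described.

```latex
% Illustrative parallel (our addition): an observed correlation is equally
% compatible with direct causation and with a hidden common cause,
\[
P(B \mid A) \neq P(B)
\quad \text{fits both} \quad
A \rightarrow B
\quad \text{and} \quad
A \leftarrow C \rightarrow B ,
\]
% whereas the interventional quantity discriminates between them:
\[
P(B \mid \mathrm{do}(A)) \neq P(B) \quad \text{only if } A \text{ acts on } B .
\]
```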

If we now look closely at how our imagination is articulated when we read efficacy as efficient cause, then we find, in addition to the reference to agency, some fundamental phenomenological elements: the articulation of causation into circumscribed units and the temporal ordering of cause and effect.

Above all, at the basis of the idea of efficient causality lies the isolation of a pair of events, posed as cause and effect. Of course, this is not an obvious natural fact, an objective instance of the physical world. There are no circumscribed events in nature. In an objective sense, everything is, in various forms, connected with everything, in every sense and direction. There are no events naturally endowed with limits; events exist only for our interest as living, sentient, thinking beings. A road accident is an event for us, an event on which we can focus, and whose causes and effects we can investigate. But in an objective physical sense, if we were to disregard our distribution of interests, such an event would simply be a section of physical processes that has connections before, after, and around it, without any end point or ultimate threshold. It is our interest that defines the limits of an “event”. And this is true here both for the upstream event, the cause, and for the downstream one, the effect. The identification of causes and effects is therefore a purely axiological operation, guided by interests that are rooted in the life of consciousness, in the sensorimotor system, in the structure of mediated, theoretical interests, etc.

At this point we must also bring the last fundamental trait of consciousness into the picture. When we describe an efficient causal link, we must rely primarily on a situation of temporal ordering such that the cause precedes the effect, and from the effect it is possible to trace back the cause through a chain of steps. By definition, cause and effect do not coexist. When the cause is there, the effect does not yet exist: it is in the future. When the effect occurs, the cause is no longer there: it is in the past. Now, the phenomenologically essential point to be stressed here is that succession relations, and the concepts of future and past, can exist only by virtue of specific functions of consciousness. There is no place where we could go and inspect the past, or the future, other than the dimension of semantic units, of the meanings by which we understand each other.

Everything that manifests itself as primary evidence belongs to what we call “presence”, and what is present is “presence to a consciousness”. A fossil or an archaeological find is a present datum, which “stands for” a past and refers to it through the mediation of a theory. A “memory trace”, a neuronal process in the hippocampus, as well as a magnetic trace on a hard disk, can “represent” a past event, but this can only happen on the basis of an acting living consciousness (Zhok, 2017: 80–84). It is not “present things” such as brain processes or magnetic supports that are past: they come to stand for the past when they signify it, and this signification takes place only for a conscious process.

This inescapability of the role of consciousness does not mean that the past, the temporal sequences, time itself are “inventions”, much less “illusions”. However, the form of the structures of succession and the connective tissue that binds past and future have no reality without a reference to operations of consciousness. And this means, again, that the memory of the past and the anticipation of the future depend on structural “interests” of consciousness, on selective and connective activities.

In each causal sequence, the temporal ordering that defines it requires, alongside the selection of relevant events, their articulation in relations of meaning, where past and future are precisely semantic units, irreducible to present givenness.

If we now line up the elements that we have detected on a phenomenological basis, we find that the form of efficient causality, far from being a sort of “natural givenness”, is the outcome of a series of activities of consciousness. This series selects the units that count as “events”, places them as ideally connected in a temporal order, and conceives the cause as an “agent” and the effect as something “done”. The givenness of “efficacious relations” in the world is something that we can consider primary evidence, but the fact that these relations follow the efficient-cause model is a matter of a particular explanatory model. In order to function, the causal model presupposes the ability to rely on a series of intentional acts, which in turn presuppose “values” (interests) and orderings of consciousness.

This fact has a necessary and very relevant implication for the question of “complexity” as we consider it. By essence, explanations based on an efficient-causality model can be neither ontologically ultimate explanations nor exhaustive explanations. The “linearity” of ordinary causal explanations is superimposed on reality and is demonstrably insufficient to account for what is at work on the ontological level. In the terms we have introduced, we must say that ontological efficacy, that is, real relations, must necessarily include aspects that go beyond any possible account in terms of efficient cause.

As we have seen, the specific intentions of scientific method tend to favour a quantitative approach in the description of natural phenomena, and the explanatory form conforming to this approach is a version of efficient causality. Efficient causality, in fact, is functional to the purposes of an agent who tries to anticipate, univocally compute and govern a physical process.

The “cause” is the ideal locus of application of a potential agent (even if the agent is not there, as in the case of causes of a geological or cosmological nature). The “effect” is the outcome ideally anticipated by an agent. Phenomena unfold in the horizon of temporal succession in which our consciousness lives. Events are subjectively salient subsets of the physical world. From this perspective, efficient causality has nothing to do with a form of apprehension of the world “faithful to nature and phenomena”, but responds to our specific needs as living beings to govern natural processes for the sake of our lives. All this is perfectly justifiable and not arbitrary, provided that every ontological thesis is kept away from this eminently methodological instance. At the ontological level, the lesson we must draw is that reality must necessarily possess a level of complexity higher than that which any account in terms of efficient causality, however comprehensive, may provide.

4 Conclusions

Phenomenological analysis therefore leads us to an ontology governed by “complexity”, that is, an ontology where the forms of ontological efficacy are irreducible to efficient causality, where the relations between properties are irreducible to deduction, where we must admit the original subsistence of irreducible qualities and where qualities generate further qualities that emerge from them. From this point of view, “complexity” appears as the historical awareness, applicable in a plurality of fields, of the insufficiency of the deterministic and quantitative paradigm that inspired the birth of modern science. “Complexity” therefore cannot represent a new unitary paradigm, because it includes plural forms of phenomenal irreducibility, which by essence resist unification.

From a phenomenological perspective, the most important outcome of facing the question of “complexity” consists in realizing that any explanatory level which is less comprehensive than the fullness of the functions of consciousness is inadequate to represent reality. This implies that naturalistic (objectivistic) representations of the world are fatally inadequate as ontological representations, and preserve their value only as methodological inspirations.