1 Introduction

QM represents the conclusion of a process of unification of the fundamental laws and concepts of physics, begun by Galileo and Newton in the case of matter, through the discovery of universal laws valid both for heavenly and earthly bodies, and continued by Maxwell in the case of radiation with a unified theory of optical, electric and magnetic phenomena. The corpuscular theory of Einstein’s light quanta and the wave theory of matter of de Broglie and Schrödinger led to a breakdown of the distinction between the two previous theories of classical physics, and between their different ontologies of matter and radiation.

This unification of the two main theories and the concepts of classical physics was carried out through a mathematical structure characterized on the one hand by an extraordinary predictive power and a great breadth of application areas, and on the other by an unsatisfactory explanatory scope and by the presence of contradictions and unsolved problems.

The ongoing debate on the foundations of QM, which involved the very founding fathers of the theory, focused from the outset on two major philosophical questions: realism and causality, principles that appeared to be in conflict with the standard interpretation of the theory, better known as the Orthodox or Copenhagen interpretation.

The mysterious nature of QM and its inability to tell us anything clear about reality have been reiterated by many authors, such as Mermin:

…the unambiguous calculation method that has underlain all the explosive growth and flowering that physical science has enjoyed from 1925 right up to the present moment. But the quantum theory remains deeply mysterious. It is no harder to use than any other branch of physics, and thousands—indeed hundreds of thousands—of people have mastered its computational intricacies since it was first put forth. It is capable, in principle, of predicting the outcome of any experiment one can describe precisely enough to apply the mathematical apparatus of the theory. What makes it mysterious is that in general the quantum theory refuses to offer any picture of what is actually going on out there. (Mermin, 1988)

The explanatory limits of quantum theory, due to its acausal nature, have been lamented from the beginning by its founding fathers. For instance, de Broglie wrote in the Foreword to a book by Bohm:

Those who have studied the development of modern physics know that the progress of our knowledge of microphysical phenomena has led them to adopt in their theoretical interpretation of these phenomena an entirely different attitude to that of classical physics. Whereas with the latter, it was possible to describe the course of natural events as evolving in accordance with causality in the framework of space and time (or relativistic space-time), and thus to present clear and precise models to the physicist’s imagination, quantum physics at present prevents any representations of this type and makes them quite impossible. It allows no more than theories based on purely abstract formulae, discrediting the idea of a causal evolution of atomic and corpuscular phenomena; it provides no more than laws of probability: it considers these laws of probability as having a primary character and constituting the ultimate knowable reality: it does not permit them to be explained as resulting from a causal evolution which works at a still deeper level in the physical world. (de Broglie, 1957, p. IX; italics ours)

On the other hand, the role of the principle of causality in the very constitution of Western philosophical thought—where the notion of rational explanation in terms of causes, scire per causas, emerged as one of the main tools for developing and consolidating the independence and individuality of philosophical knowledge against mythical and religious thinking—can hardly be overestimated.

In this paper we intend to continue to adopt a philosophical approach to science, obviously including QM, which one of us (G. T.) has already pursued in various other circumstances. At variance with a widespread point of view, the possibility of suitable reformulations of several metaphysical questions, principles and concepts, endowed with meaning according to empirical confirmation criteria, has been stressed. It has been proposed to apply these criteria—through which neo-positivists believed they had eliminated all metaphysics as meaningless, concluding that the only meaningful statements were those of science—to show how those same criteria, far from characterizing scientific principles, which must satisfy the more stringent requirements of Popperian falsifiability, represent an important tool for discriminating between meaningless speculative metaphysics and factually meaningful metaphysics.

This has been shown in particular in the case of the metaphysical principle of reality refuted by Carnap and Ayer, and in the case of Heidegger’s concept of nothing, rejected by Carnap as a pseudo-proposition and a pseudo-concept. In the former case, the meaningful reformulation was obtained by moving the notion of reality from the object to its predictable propertiesFootnote 1 and, in the latter, by replacing the concept of absolute nothing with that of relative nothing (more on this in the following pages).

These results emerged in connection with the debate on the foundations of QM. In this context, the existence of different versions of the causality principle, directly endowed with meaning already in their original formulations and violated by the principles of the standard interpretation of QM, has been demonstrated.Footnote 2

In the present paper, we aim to show the existence of other meaningful versions of causality contradicted by non-standard interpretations of QM. In the first place, we suggest that Descartes’ strong metaphysical principle of the non-inferiority of causes, according to which a cause cannot have less reality than the effects it produces, appears endowed with meaning and is violated not only by standard QM, but also by another realistic interpretation (in two versions) which tries to solve its most serious paradoxes by attributing some sort of (weaker) reality to the quantum mechanical wave function.

Second, we shall argue that even the well-known principle ex nihilo nihil—implied in Descartes’ philosophy as a consequence of his principle of the non-inferiority of causes, and representing, since it lies at the roots of the concept of rational explanation, a sort of precondition for the application of the law of causality—has factual meaning when reformulated by replacing absolute metaphysical nothing with empirical nothing.Footnote 3

We stress, moreover, that if we do not commit ourselves to the assumption of the reality of de Broglie waves as capable of producing or destroying interference, as shown by two recently discussed experiments,Footnote 4 the Renninger paradox imposes on us either the subjectivistic conclusions of von Neumann’s and Wigner’s perspectives or the attribution of physical properties to nothingness instead of quantum waves.

The paper is organized as follows. In Sect. 2 we offer a brief overview of the approaches born from denying physical reality to the wave function. Sections 3, 4, and 5 examine three paradoxes deriving from so-called negative-result experiments. We introduce the two variants of a non-standard realist interpretation of QM in Sect. 6. Section 7 is dedicated to a historico-philosophical digression on causality, which includes Descartes’ notion, and in Sect. 8 we show how orthodox QM violates, because of radioactive decay, the principle of rational explanation. In Sect. 9 we explain how those variants of the non-standard interpretation violate Cartesian causality but not the principle of rational explanation. Concluding remarks follow in Sect. 10.

2 Consequences of the Denial of the Reality of the Quantum Mechanical Wave Function

The dual behavior of atomic objects represents the fruitful experimental evidence on which QM was born and has developed, overcoming the bipartition of the physical objects of the macroscopic world into two different categories, matter and radiation, described in classical physics by two distinct theories: Newton’s mechanics and Maxwell’s electromagnetism.

It was the original emergence of the discontinuous, corpuscular character of radiation processes, violating the old classical dictum natura non facit saltus, and then de Broglie’s daring extension of duality from radiation to matter, that led to Schrödinger’s wave mechanics. The deterministic evolution of the Schrödinger wave function was soon identified with the fundamental law of motion of the new QM, while the wave function was at the same time deprived of physical meaning, its square modulus being simply related to the probability density of finding a particle in a given region, according to Born’s probabilistic interpretation. The latter constitutes one of the main pillars of the orthodox formulation of the theory, together with Bohr’s principle of complementarity, which sought to reintroduce the wave-like feature, albeit within an ambiguous coexistence with the corpuscular aspect in which each manifested itself at the expense of the other.

The denial of physical reality to the wave function, however, entailed a subjectivistic solution to the problem of measurement, in which the transition from an initial state of superposition to a well-defined final recorded state, corresponding to the so-called collapse or reduction of the wave function, was explained by von Neumann as an active intervention of an extraphysical entity, such as the mind or consciousness of a human observer.Footnote 5

There have been, however, several valuable attempts to find a realistic interpretation of measurement not accompanied by an ontological commitment to the physical reality of the wave function, assuming only the reality of the macroscopic measuring apparatus. In this case, we should speak of macrorealistic theories of measurement, a large class of theories according to which the process of wave function reduction occurs in the transition from the microscopic to the macroscopic level (placing the disappearance of the superposition at an intermediate level, called mesoscopic). According to these theories, the breaking of the von Neumann chain is due neither to a privileged status of the measuring apparatus differentiating it from all other ordinary macroscopic systems, as in Bohr’s perspective, nor to the intervention of the observer’s consciousness, as in von Neumann’s theory, but is simply produced by the macroscopic nature of the measuring apparatus (since it seemed inconceivable to think of a superposition of macroscopic states, as Schrödinger had pointed out with his famous cat paradox). Such a hypothesis involves restricting the domain of application of the quantum formalism to atomic objects, on the assumption that macroscopic apparatuses are complex systems whose description requires either recourse to classical and semiclassical theories, including, of course, classical thermodynamics, or the elaboration of a new quantum macro-dynamics.

According to some authors, the measuring apparatus is to be considered a thermodynamical system and the measurement act an irreversible recording process in a macroscopic apparatus triggered by a microscopic event. This hypothesis, investigated first by Jordan, has led to the measurement theories of Ludwig, Prigogine and Daneri-Loinger-Prosperi, in which the problem of measurement is identified with the problem of the evolution of a complex macroscopic system towards its state of thermodynamical equilibrium.

The fundamental idea on which the previous approaches are based is that in an apparatus the state preceding the measurement must be metastable, in such a way that even a very small perturbation, like the one produced by the interaction with the measured atomic system, causes it to evolve towards a stable state dependent on that of the measured system.Footnote 6

The previous macrorealistic approaches, which consider the apparatus as a macroscopic system not describable by the quantum formalism and subject to an irreversible evolution, not only appear to be the most thorough attempts to provide an interpretation of the measurement process able to confine the subjectivist implications of the standard interpretation to the microscopic level, but also have the merit of having clarified the impossibility of reconciling the idea of a reversible evolution, like the one implied by the Schrödinger equation, with the notion of a disturbing measurement in QM. Such an incompatibility already emerged in connection with the paradoxes of classical thermodynamics, where both the postulate of the existence of Maxwell’s demon, i.e. of an ideal non-disturbing measuring apparatus, and the assumption of the general validity of Poincaré’s recurrence theorem, maintaining the intrinsic reversibility of any mechanical process, imply a violation of Boltzmann’s H-theorem and a consequent conflict with the irreversible nature of macroscopic processes.

There is, however, a very serious objection against the previous macrorealistic theories of measurement, due to their inability to explain negative-result experiments, i.e. physical situations in which the reduction of the wave function occurs even in the absence of any detection process by the measuring apparatus. In such situations it is not necessary to detect a particle to have a quantum measurement: the very lack of a particle detection can constitute a measurement.

These conceptual experiments give rise to the following three paradoxes.

3 Renninger’s Paradox

A thought experiment concerning a negative-result measurement was first posed in 1953 by Mauritius Renninger.Footnote 7

Let us consider, following Renninger’s thought experiment, a weak source P emitting photons isotropically in all directions, partially surrounded by a hemispheric screen E1 of center P and radius R1, subtending a solid angle \({\Omega }\) around P, and completely surrounded by a second, in this case spherical, screen E2 of radius \({R}_{2}>{R}_{1}\), both covered with a photon-sensitive substance.

In this way, each photon emitted by P can be absorbed either by E1 or by E2, with probabilities respectively given by

$$ \omega_{1} = \Omega/4\pi \quad\text{and}\quad \omega_{2} = \left(4\pi - \Omega\right)/4\pi $$
(1)

The quantum mechanical wave function describing the initial state at the time \({t}_{0}\) will therefore be

$$ \left| {\psi _{{t_{0} }} } \right\rangle = \sqrt {\omega _{1} } \left| {\psi _{1} } \right\rangle + \sqrt {\omega _{2} } \left| {\psi _{2} } \right\rangle $$
(2)

At the subsequent time \({t}_{1}={R}_{1}/c\), two different events may occur. The first event is the detection of the photon on the first screen E1. According to the reduction postulate, the new physical situation will be described by

$$ \left| {\psi _{{t_{1} }} } \right\rangle = \left| {\psi _{1} } \right\rangle $$
(3)

implying that \({\omega }_{2}\) is nullified and \({\omega }_{1}\) becomes unity.

In this case, no particular contrast seems to arise between our mathematical description and its physical interpretation: the reduction of the superposition (2) to the pure state (3) can in some way be explained as the result of the physical process of interaction between the emitted photon and the detecting screen E1. One could, of course, object that if the apparatus E1, like any other physical system, must be described by the quantum formalism, one would not yet have the reduction process and would be led, according to von Neumann’s chain, to another state of superposition

$$ \left| {\psi _{{t_{0} }} } \right\rangle = \sqrt {\omega _{1} } \left| {\psi _{1} } \right\rangle \left| {\varphi _{1} } \right\rangle + \sqrt {\omega _{2} } \left| {\psi _{2} } \right\rangle \left| {\varphi _{2} } \right\rangle $$
(4)

where \(\left| {\varphi _{1} } \right\rangle\) and \(\left| {\varphi _{2} } \right\rangle\) are the states of the measuring device, registering simultaneously two different results in the absence of the observer’s consciousness.

But we are faced in this case with the standard problem of wave function reduction in the theory of measurement, for which there are alternative solutions to von Neumann’s subjectivist interpretation, based on the idea that the quantum formalism cannot be applied to the description of measuring apparatuses and, more generally, to macroscopic systems.

The second possibility is that at the time \({t}_{1}\) there is no detection in E1, implying the immediate occurrence of the reduction process (since now \({\omega }_{1}\) = 0 and \({\omega }_{2}\) = 1)

$$ \left| {\psi _{{t_{1} }} } \right\rangle = \left| {\psi _{2} } \right\rangle $$
(5)

before the detection event of the photon on E2 at the time \({t}_{2}={R}_{2}/c\).
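In probabilistic terms (a gloss we add here for clarity, not part of Renninger’s original presentation), this negative-result reduction amounts to a conditionalization of the initial weights on the only surviving branch:

$$ P\left( E_{2} \mid \text{no detection at } t_{1} \right) = \frac{\omega_{2}}{1 - \omega_{1}} = \frac{\left(4\pi - \Omega\right)/4\pi}{\left(4\pi - \Omega\right)/4\pi} = 1, $$

so that (5) holds with certainty, although no physical interaction whatsoever has taken place on E1.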

Nevertheless, in this second case, the reduction of the wave function seems in no way related to any observable physical process of detection of some physical event, but merely appears as a consequence of the knowledge we, as human observers, have obtained by not observing the occurrence of a given phenomenon. We cannot, for this reason, appeal to the idea of a physical interaction between microscopic object and macroscopic instrument, according to which the reduction would have to occur at the later time \({t}_{2}\), when the photon is absorbed by the second detecting screen E2, which would imply

$$ \left| {\psi _{{t_{2} }} } \right\rangle = \left| {\psi _{2} } \right\rangle $$
(6)

whereas at the time \({t}_{1}\) the same superposition (4) of the initial time \({t}_{0}\) would persist, in conflict with the description (5) given by standard QM.

This last aspect highlights the reasons why Renninger’s paradox constitutes a crucial point in the debate on the foundations of QM, reducing the problem of measurement to the conflict between two interpretations: von Neumann’s subjectivistic one, which denies physical reality to the wave function, and de Broglie’s realistic one, which attributes to the latter some form of reality. As a matter of fact, the possibility that the collapse of the wave function could occur, even in the absence of a detection process, simply in the transition between the microscopic and macroscopic domain severely questions macrorealistic theories of measurement.

Renninger’s paradox was subsequently taken up by Jauch et al. (1967) as a strong argument supporting von Neumann’s subjectivist theory and refuting the alternative macrorealistic theories of measurement elaborated in the spirit of Bohr’s philosophy, according to which “the microscopic part of the measuring acts merely as a triggering device, while the essential macroscopic part of the measuring process, that part which wipes out the phase relations, is ‘related to a process taking place in the latter apparatus after all interaction with the atomic system has ceased’ (Rosenfeld, l.c.)” (Jauch et al., 1967, pp. 149–50).

But against this possibility,

it is quite easy to give counter-examples of measurements which do not proceed according to the scheme of a triggering device followed by an ergodic amplification in a macroscopic system. The most startling examples of this kind are for instance the so called ‘negative-result measurements’ discussed by Renninger. It follows from these examples that the macroscopic and ergodic systems are useful (and practically indispensable) devices to raise the events to the level of data […], but that they do not touch the basic aspect of the dilemma. (Jauch et al., 1967, p. 150)

4 de Broglie’s Paradox Revisited

Renninger’s paradox was also discussed by de Broglie (1973) to show the necessity of going beyond the contradictory features of orthodox QM.

He in turn presented another variation of Renninger’s argument, through a paradox concerning the problem of the localization of a micro-object, in which he showed how the subjectivist implications deriving from a denial of the reality of the wave function lead to acausal non-local effects over large distances.

In the following we will discuss a stronger argument, deriving from the combination of de Broglie’s paradox with another one, related to the status in QM of Carnap’s principle of empiricism (“If all minds disappear from the universe, stars still go on their courses”), which has been proposed as a variant of Renninger’s paradox. This argument represents an objection against macrorealistic theories of measurement.

Let us consider, as in de Broglie’s paradox, a box B, with perfectly reflecting walls, which can be divided into two parts B1 and B2 by a double sliding wall. Suppose that B contains initially an electron, whose wave function \(\phi (xyzt)\) is defined in the volume V of B. The probability density of observing the electron at point x, y, z at time t is then given by \(\left| {\phi \left( {xyzt} \right)} \right|^{2}\).

Next, B is divided into the two parts B1 and B2: B1 is delivered to the observer O1, who remains in the laboratory on Earth, whereas B2 is connected with a detecting device A2 in such a way that the presence of the electron in B2 activates the delayed explosion of a 1000-megaton nuclear bomb; everything is then placed inside a missile which, immediately after, is launched towards the planet Venus. The explosion would cause a disturbance in the orbit of Venus, which would, in turn, produce a (small) displacement of the entire planetary system: if the set of macroscopic observables \(P = F\left[ {q_{i} \left( t \right),~p_{i} \left( t \right),~t} \right]\) corresponds to the ordinary configuration of the planetary system at time t, \(P^{'} = F\left[ {q_{i}^{'} \left( t \right),~p_{i}^{'} \left( t \right),~t} \right]\) will express the perturbed one.

After the division of the box, the physical situation is described by QM with two wave functions, \({\phi }_{1}(xyzt)\) defined in the volume V1 of B1 and \({\phi }_{2}(xyzt)\) defined in the volume V2 of B2. The probabilities \({\omega }_{1}\) and \({\omega }_{2}\) of finding the electron in B1 and B2, respectively, are given by

$$ \omega _{1} = \int\limits_{{V_{1} }} {\left| {\phi _{1} \left( {xyzt} \right)} \right|^{2} } dV $$
(7a)
$$ \omega _{2} = \int\limits_{{V_{2} }} {\left| {\phi _{2} \left( {xyzt} \right)} \right|^{2} } dV $$
(7b)

with \({\omega }_{1}+{\omega }_{2}=1\).

The state of the electron will thus be described by the initial superposition:

$$ \left| {\psi _{i} } \right\rangle = \sqrt {\omega _{1} } \left| {\phi _{1} } \right\rangle + \sqrt {\omega _{2} } \left| {\phi _{2} } \right\rangle $$
(8)

If, according to standard QM, we attribute two wave functions \(\left| P \right\rangle\) and \(\left| {P^{'} } \right\rangle\) to the normal and the perturbed state of the planetary system, respectively, the initial state of the global system “de Broglie’s box + planetary system” becomes

$$ \left| {\psi _{i} } \right\rangle = \sqrt {\omega _{1} } \left| {\phi _{1} } \right\rangle \left| P \right\rangle + \sqrt {\omega _{2} } \left| {\phi _{2} } \right\rangle \left| {P^{'} } \right\rangle . $$
(9)

The state (9), corresponding to a very strange superposition between the states of a disturbed and an undisturbed universe, is not accepted as a description of “de Broglie’s box + planetary system” by any of the macrorealistic interpretations of the theory of measurement, which maintain that superpositions of macroscopic states disappear.Footnote 8 These interpretations assume, however, that description (8) is the correct one for de Broglie’s box before the occurrence of any physical interaction between the electron and a macrosystem, such as a measuring apparatus for its detection.

Let us now consider an apparatus A1 controlled by the observer O1, who, at any instant preceding the one in which the nuclear explosion might occur on Venus, can connect it with B1, detecting in this way the electron if it is present in this box. The absence of a detection by A1 will, instead, inform us that the electron is contained in B2.

As a consequence of the measurement on B1, we can have, therefore, according to the Copenhagen interpretation, the reduction of (9) to one of the states:

$$ \left| {\psi _{1} } \right\rangle = \sqrt {\omega _{1} } \left| {\phi _{1} } \right\rangle \left| P \right\rangle , $$
(10)
$$ \left| {\psi _{2} } \right\rangle = \sqrt {\omega _{2} } \left| {\phi _{2} } \right\rangle \left| {P^{'} } \right\rangle , $$
(11)

where (10) is a consequence of the detection of the electron in B1, while (11) is due to the absence of any detection, i.e. a typical case of negative-result measurement. The only difference between von Neumann’s and Bohr’s approaches is that the reduction of (9) to (10) or (11) occurs for the latter at the level of the measuring apparatus A1 and for the former at the level of the observer O1.

We are faced, in both cases, with very strange consequences:

  a. in the first case, the detection of the electron by A1, or at least the observation of this event by O1, modifies, through an instantaneous action at a distance, the physical situation inside B2 (which is evolving spontaneously, separated by a few million kilometers from B1), producing, in this case, the collapse to the state of the non-perturbed universe: we are faced, therefore, with a very strong form of macroscopic non-locality;

  b. in the second case, it is the absence of any detection by A1, informing O1 that the electron is contained in B2, which produces the reduction: it is in this way the non-occurrence of any physical process that generates the transition from (9) to the state of the perturbed universe given by (11).

In this way the observation or non-observation of the electron on Earth changes the wave function on Venus, reducing it to zero or to unity.

But the most paradoxical situation, for the orthodox approach, is the one connected with the impossibility of making any measurement or observation, implying the persistence of a state of superposition between states of the universe. We therefore have a direct conflict between this interpretation and the macrorealistic hypothesis of Lewis-Carnap: if all minds disappeared from the universe and, as an obvious consequence of such an event, no measurement or observation could be performed, stars would not continue on their courses but would remain in the undefined state expressed by (9).

5 The Paradox of the Physical Properties of (Relative) Nothing

There is a third possibility in addition to the two previously proposed: that of attributing physical reality to nothing, in order to consider the wave function collapse as a consequence neither of the intrusion of the observer’s consciousness, nor of the interaction of the de Broglie wave with the measuring device, but of the detection of nothing, understood, as we shall see shortly, as the negation of the presence of the particle.

Our argument is based on the idea of describing a single photon,Footnote 9 that can be found at one or another of two distant places, through an entangled stateFootnote 10 replacing the standard superposition state (2) of both the original and of the modified version of Renninger’s paradox.

As the photon is indivisible and cannot appear partly here and partly there, if it is found here, it will not be there, and vice versa. We will use \(\left| 1 \right\rangle\) to denote the presence of the photon and \(\left| 0 \right\rangle\) to denote its absence; the product \(\left| 0 \right\rangle \otimes \left| 1 \right\rangle\), which we can write \(\left| {01} \right\rangle\), will accordingly indicate that there is a photon there and nothing (no photon) here. Similarly, \(\left| {10} \right\rangle\) indicates photon here and no photon there. If we consider the physical situation of de Broglie’s paradox, here and there would correspond to Paris and Tokyo, respectively.

The two possibilities \(\left| {01} \right\rangle\) and \(\left| {10} \right\rangle\) can be combined in the superposition

$$ \left| \psi \right\rangle = \frac{1}{\sqrt{2}}\left( \left| {01} \right\rangle - \left| {10} \right\rangle \right) $$
(12)

whose fundamental aspect lies in its coherence, expressed by the “–” sign between the two terms, which means that the two products are physically related and communicate with one another. This coherence means that both possibilities, \(\left| {01} \right\rangle\) and \(\left| {10} \right\rangle\), are present before an observation or a measurement operation produces the collapse to one or the other.

This “communication” or interaction between \(\left| {01} \right\rangle\) and \(\left| {10} \right\rangle\) through the phase relation is preserved inasmuch as the superposition \(\left| \psi \right\rangle\) is statistically distinguishable from the corresponding incoherent state, the mixture

$$ \rho = \frac{1}{2}\left( \left| {01} \right\rangle \left\langle {01} \right| + \left| {10} \right\rangle \left\langle {10} \right| \right) $$
(13)

where the two possibilities \(\left| {01} \right\rangle\) and \(\left| {10} \right\rangle\) are not connected by a phase relation, and the impossibility of describing the physical situation through a precise state vector is simply due to our ignorance.

Gleason’s theorem has shown that the different states (12) and (13) can always be told apart statistically and that there are moreover sensitive observables,Footnote 11 ensuring the distinguishability of a superposition of products, like \(\left| \psi \right\rangle\), from a mixture of the same products, like \(\rho \). A typical example is given by the observables involved in the violation of Bell’s inequalities.
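As a minimal numerical illustration of this statistical distinguishability (a sketch of ours, not the construction of the text: we use the projector onto \(\left| \psi \right\rangle\) as a particularly simple sensitive observable, rather than the operator \(S_{\Theta}\) built below), one can check the two expectation values directly:

```python
import numpy as np

# Basis: |0> = no photon, |1> = photon; first slot "here", second slot "there".
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

ket01 = np.kron(ket0, ket1)          # no photon here, photon there
ket10 = np.kron(ket1, ket0)          # photon here, no photon there

psi = (ket01 - ket10) / np.sqrt(2)   # coherent superposition, Eq. (12)

# Corresponding incoherent mixture, Eq. (13)
rho = 0.5 * (np.outer(ket01, ket01) + np.outer(ket10, ket10))

# A simple sensitive observable: the projector onto |psi> itself
P_psi = np.outer(psi, psi)

print(psi @ P_psi @ psi)             # 1.0 for the superposition
print(np.trace(rho @ P_psi))         # 0.5 for the mixture
```

Any observable with non-vanishing off-diagonal matrix elements between \(\left| {01} \right\rangle\) and \(\left| {10} \right\rangle\) would likewise “see” the phase relation that the mixture lacks.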

In our physical situation, a sensitive observable can be constructed in the following way. There will be a “photon number” basisFootnote 12 \(\left| {0^{l} } \right\rangle\), \(\left| {1^{l} } \right\rangle\) here, and a similar basis there. We will need the states

$$ \left| \pm \right\rangle = \frac{1}{\sqrt{2}}\left( \left| {0^{l} } \right\rangle \pm \left| {1^{l} } \right\rangle \right) $$
(14)

here and states

$$ \left| {\Theta _{ \pm } } \right\rangle = \cos \frac{\Theta }{2}\left| {0^{r} } \right\rangle \pm \sin \frac{\Theta }{2}\left| {1^{r} } \right\rangle $$
(15)

there; and self-adjoint operators

$$ \sigma ^{l} = \left| {1^{l} } \right\rangle \left\langle {1^{l} } \right| - \left| {0^{l} } \right\rangle \left\langle {0^{l} } \right|,\,\,\,\sigma = \left| + \right\rangle \left\langle + \right| - \left| - \right\rangle \left\langle - \right| $$
(16)

here and

$$ \Theta _{ \pm } = \left| { \pm \Theta _{ + } } \right\rangle \left\langle { \pm \Theta _{ + } } \right| - \left| { \pm \Theta _{ - } } \right\rangle \left\langle { \pm \Theta _{ - } } \right| $$
(17)

there, to define the operator

$$ S_{\Theta } = \sigma ^{l} \otimes \Theta _{ + } - \sigma ^{l} \otimes \Theta _{ - } + \sigma \otimes \Theta _{ + } + \sigma \otimes \Theta _{ - } . $$
(18)

Since \(\left\langle \psi \right|S_{\pi /4} \left| \psi \right\rangle = 2\sqrt 2\) is not equal to \(\mathrm{Tr}\left(\rho S_{\pi /4}\right) = 2\), the operator \(S_{\pi /4}\) represents a sensitive observable, which can “see” coherence by telling \(\left| \psi \right\rangle\) and \(\rho \) apart.

The observable \(\sigma ^{r} = \left| {1^{r} } \right\rangle \left\langle {1^{r} } \right| - \left| {0^{r} } \right\rangle \left\langle {0^{r} } \right|\), which we can call photon-there, represents the photon’s presence or absence, in other words, the “photon number”, there. Its expectation \(\alpha = \left\langle \psi \right|I \otimes \sigma ^{r} \left| \psi \right\rangle\) for state \(\left| \psi \right\rangle\) vanishes, unlike the expectations

$$ \alpha _{0} = \left\langle {10} \right|I \otimes \sigma ^{r} \left| {10} \right\rangle = - 1 $$
(19)
$$ \alpha _{1} = \left\langle {01} \right|I \otimes \sigma ^{r} \left| {01} \right\rangle = + 1 $$
(20)

for the two terms superposed in \(\left| \psi \right\rangle\).

If we make an observation or a measuring operation on the photon here and do not find it, its absence will produce a collapse of the superposition to its first term \(\left| {01} \right\rangle\), while the expectation of the photon there jumps from 0 to 1. The jump takes place once we have found out that the photon is not here, where we have detected or registered nothing.
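The expectation values (19) and (20), and the negative-result jump just described, can be verified in a few lines (a numerical sketch of ours; the NumPy encoding and the variable names are our own assumptions):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                   # |0> : no photon
ket1 = np.array([0.0, 1.0])                   # |1> : photon

ket01 = np.kron(ket0, ket1)                   # no photon here, photon there
ket10 = np.kron(ket1, ket0)                   # photon here, no photon there
psi = (ket01 - ket10) / np.sqrt(2)            # the superposition of Eq. (12)

I2 = np.eye(2)
sigma_r = np.outer(ket1, ket1) - np.outer(ket0, ket0)   # photon number "there"
obs_there = np.kron(I2, sigma_r)                        # I (x) sigma^r

print(psi @ obs_there @ psi)      # alpha   =  0   for |psi>
print(ket10 @ obs_there @ ket10)  # alpha_0 = -1,  Eq. (19)
print(ket01 @ obs_there @ ket01)  # alpha_1 = +1,  Eq. (20)

# Negative-result measurement: project onto "no photon here" and renormalize
P0_here = np.kron(np.outer(ket0, ket0), I2)
collapsed = P0_here @ psi
collapsed = collapsed / np.linalg.norm(collapsed)       # -> |01>
print(collapsed @ obs_there @ collapsed)                # jumps from 0 to +1
```

Nothing has been registered here, and yet the expectation value there has changed: this is exactly the point on which the paradox turns.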

But what does the discovery of the absence of the photon involve? If one wants to avoid a subjectivistic solution like that of von Neumann and Wigner’s interpretation of the measuring process, which assumes that a change of knowledge can act on physical reality by modifying it, one is forced to attribute some sort of physical reality to the state corresponding to “no-photon here”, and to consider the detection or observation of the no-photon, in other terms of no-thing, as the cause of the wave function collapse.

6 The Non-standard Interpretation of Empty Waves

As we have seen in the last paradox, our no-thing does not correspond to an absolute no-being or nothingness, but simply to a relative no-photon. In this way, one attributes the collapse of the wave function, and the corresponding modification of the physical situation, to the registration of the absence of the photon (namely, in our formalism, \(\left| {01} \right\rangle\): no-photon here and photon there), or, in other terms, to the failure to register the photon here and its consequent registration there.

Thus, if there is no photon, and if one wants to avoid von Neumann’s subjectivist interpretation, which in turn leads to the solipsist outcomes of the Wigner paradox,Footnote 13 one can explain the collapse of the wave function and the corresponding modification of the physical situation by appealing to the physical properties of nothing, here understood as the absence of the photon (no-photon).

This is a rather radical interpretation (from which we will draw some consequences later) stimulated by the analysis of the last paradox. Actually, it can also be considered a sort of extremization of another non-standard realist interpretation on which we now want to dwell briefly: that of Selleri.

In an attempt to overcome the difficulties connected with de Broglie’s interpretation of the pilot wave, Selleri proposed, starting from the 1970s, a new realistic interpretation of the wave function based on the introduction of a new concept, that of empty wave.

Even if this interpretation is one of the “roads not taken”, as Holland (2014) calls it, we think it is an interesting precursor of the so-called wavefunction realism introduced by Albert (1996) and later defended in particular by Lewis (2004). Although they are different interpretations, both are based on considering the wave function as an existing physical individual, as Lewis says: “The quantum mechanical wavefunction is not just a convenient predictive tool, but is a real entity figuring in physical explanations of our measurement results” (Lewis, 2004, p. 713).Footnote 14

In the wake of the realistic conception of Einstein and de Broglie, according to which waves and particles exist objectively, and on the basis of the observation that the experiments carried out in this field show beyond any reasonable doubt that all energy, momentum, angular momentum, and charge are closely associated with particles, Selleri posed the question: how can we hypothesize the existence of an entity with no (directly) observable physical property associated with it? Considering de Broglie’s answer unsatisfactory—according to which the previous physical quantities were mainly associated with particles, with only an infinitely small fraction of them, so small as to escape all possible observation, associated with the wave—he proposed a new hypothesis according to which,

even without any physical quantity associated with it, the wave function could give rise to physically observable phenomena. In fact we do not only measure energies, momenta, and so on. We also measure probabilities, e.g. the lifetime of an unstable system. (Selleri, 1969, p. 910)

The wave function could therefore

acquire reality, independently of the particles associated with it, if it could give rise to changes in the transition probabilities of the systems with which it interacts. (ibid.)

Selleri proposed experiments for testing the physical properties of empty waves, aiming to demonstrate that they have the property of producing stimulated emission in systems of excited atoms whose excitation energy is the same as that possessed by the particles.

The basic idea behind these experiments was the acceptance of the physical dualism of waves and particles, but not of its symmetric nature. Empty waves imply some kind of “ontological priority” of particles with respect to waves, in the sense that waves without particles cannot be characterized through the basic properties possessed by all other physical objects, like energy, momentum, charge, and mass, but only through relational properties with the particles: the observable properties of producing interference and stimulated emission. This means that quantum waves would have to belong to a weaker level of physical reality, containing objects which are sensible carriers of exclusively relational predicates in the language of quantum mechanical events. Unlike de Broglie’s pilot waves, which possess a (very) small amount of energy–momentum, Selleri’s empty waves are a zero-energy undulatory phenomenon.Footnote 15

The experiments carried out so far (also by others)Footnote 16 have failed not only to refute the reduction postulate by obtaining particle detection and interference fringes at the same time, as predicted by the strong de Broglie-Vigier realist perspective,Footnote 17 but also to confirm the realistic interpretation of the wave function supported by Selleri.Footnote 18

In the following, we will try to show how the two weak realistic interpretations discussed in the present and the previous section behave towards causality. To do this, a brief historico-philosophical digression (with no claim either to exhaustiveness or to chronological rigor) on the concept of nothing—considered as the beating heart of a significant notion of causality, which is, in turn, the root of what a rational explanation is—is necessary. A short discussion of the two prevailing senses attributable to nothing will also follow.

7 The Principle that Nothing can come from (Absolute) Nothing as a Pillar of Rational Explanation

Discussions about the nature of causality and the idea that everything must have a cause accompany the evolution of philosophical and scientific thought, within which the causal explanation has always been considered one of the building blocks of each model of knowledge.

In order to come to grips with the nature of the world and its processes, the early Greek thinkers formulated the idea/principle that nothing cannot be a cause of something. Mourelatos underlines: “Aristotle was convinced that the principle was as old as philosophy itself. He frequently speaks of it as the ‘common assumption’ […] of all who wrote ‘on nature’” (1981, p. 649). About those early thinkers who studied science, Aristotle indeed affirms: “They say that none of the things that are either comes to be or passes out of existence, because what comes to be must do so either from what is or from what is not, both of which are impossible. For what is cannot come to be (because it is already), and from what is not nothing could have come to be (because something must be present as a substratum)” (Physics I.8.191a30-31).

Among those pre-Socratic thinkers, the earliest statement of the philosophical idea that nothing comes from nothing—which later became famous in the Latin version ex nihilo nihil fit—can be found in Parmenides, whose idea “may be interpreted as constituting the statement that there is no coming-to-be out of what-is” (Mourelatos, 1981, p. 651). Parmenides insisted on the absolute dichotomy between the being that is and the nothing that is not, concluding that being is “whole and immovable and complete” (in Ford, 1983) and that it neither emerges nor perishes.

Melissus was very close to Parmenides: “There always was whatever was, and it always will be. For if it came to be, then it is necessary that before it came to be it was nothing; and if it were nothing, in no way could anything come to be out of nothing” (in Mourelatos, 1981, p. 655).

Empedocles also maintained a sort of principle of conservation by saying that what exists now has always existed. No new matter can come into existence where there was none before, and nothing can pass away into nothing: “For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed” (Fr. 12; see Kirk et al. 1983, pp. 291–292).Footnote 19

In his Letter to Herodotus, Epicurus, too, offered a similar argument: “Nothing comes to be out of what-is-not; for otherwise any thing would come to be from anything without the need of seeds” (in Mourelatos, 1981, p. 664).

Apart from these Greek origins, the most famous version of this principle is the aforementioned Latin version of Lucretius. In the first century B.C., in his De rerum natura, the idea that nothing cannot generate anything and that things cannot spring forth without reasonable cause became a general principle of nature: “…the inner law of nature; whose first rule shall take its start for us from this, that nothing is ever begotten of nothing by divine will” (Lucretius, 1910, p. 31).

By the seventeenth century an important change had occurred in the old debate on the existence of the vacuum or void, a debate strictly related to that on the presumed causal properties of nothingness. The discussion on the vacuum was born around the fifth century B.C., when, according to the Greek atomists, the motion of hard and impenetrable atoms required a void space to move into, namely a real empty space identified with “nothing”. From its inception—and in particular since Aristotle, who firmly opposed both the existence of any vacuum and its coherence as a concept—and at least until the eighteenth century, the nature of the void was a matter of endless philosophical controversy, yet no empirically significant results were obtained until the seventeenth century. Such controversies were rooted in the ancient enigma of the contradictory nothing-something double nature that lies at the basis of the principle of causality:

Described and defined as nothing by the terms that came to represent it—kenon in Greek; inane, vacuum, and nihil in Latin—the void was from the outset, and almost inevitably, subjected to a double entendre. Was it an unintelligible, total privation incapable of existence—a true “nothing”? Or was it a nothing conceived of as a something, a something with definite properties that could range from a pure dimensionless emptiness to a three-dimensional magnitude, and even be conceived of as God’s infinite and omnipresent immensity? (Grant, 1981, p. 3)

In brief, is empty space nothing or something? If something, empty space is not really empty, but if nothing, how could it be said to exist at all?

These semantic puzzles of ancient discussions continued during the Middle Ages and the Renaissance, when most authors denied the concrete existence in the world of vacua, while others favored the hypothetical existence of an extra-cosmic void.Footnote 20

The beginning of the seventeenth century saw empty space become the necessary theoretical substratum of all physical processes, while, from the empirical point of view, it saw the first attempts at quantitative measurements of partial vacuum, in particular with Evangelista Torricelli’s mercury barometer of 1643, which produced the first laboratory vacuum, Blaise Pascal’s experiments, and Otto von Guericke’s first vacuum pump of 1654. These showed that the Aristotelian dictum “nature abhors a vacuum” was false.

In that same period, Descartes developed his idea of causality, which is central in this paper.

According to Descartes, the cause can never be “inferior” to its effect: a “more real” thing cannot descend from a “less real” one. Hence it follows that no thing whatsoever can be made out of nothing, since nothing is the “least real” thing of all. This view is similar to the principles already expressed by Parmenides and Lucretius.

The extremely importantFootnote 21 principle of “non-inferiority of causes” is outlined in Descartes’ Third Meditation:

But Now, it is evident by the Light of Nature that there must be as much at least in the Total efficient Cause, as there is in the Effect of that Cause; For from Whence can the effect have its Reallity, but from the Cause? and how can the Cause give it that Reallity, unless it self have it?

And from hence it follows, that neither a Thing can be made out of Nothing, Neither a Thing which is more Perfect (that is, Which has in it self more Reallity) proceed from That Which is Less Perfect.

[…] That is to say, for Example of Illustration, it is not only impossible that a stone, Which was not, should now begin to Be, unless it were produced by something, in Which, Whatever goes to the Making a Stone, is either Formally or Virtually; neither can heat be Produced in any Thing, which before was not hot, but by a Thing which is at least of as equal a degree of Perfection as heat is.

[…] But that this Idea has this or that objective reallity, rather then any other, proceeds clearly from some cause, in which there ought to be at least as much formal reallity, as there is of objective reallity in the Idea it self. For if we suppose any thing in the Idea, which was not in its cause, it must of necessity have this from nothing; but (tho it be a most Imperfect manner of existing, by which the thing is objectively in the Intellect by an Idea, yet) it is not altogether nothing, and therefore cannot proceed from nothing. (in Gaukroger, 2006, pp. 216-217)

It is important to note that the Cartesian “nothing” is a form of metaphysically absolute nothingness, namely the complete absence of any property or determination of being. This is even more explicit in the Fourth Meditation, in which he stresses that nothing is a negative idea and an absolute no-being (the antipode of the perfect and absolute being, which is God):

… when I return to the Contemplation of my self, I find my self liable to Innumerable Errors. Enquiring into the cause of which, I find in my self an Idea, not only a real and positive one of a God, that is, of a Being infinitely perfect, but also (as I may so speak) a Negative Idea of Nothing; that is to say, I am so constituted between God and Nothing or between a perfect Being and No-being, that as I am Created by the Highest Being, I have nothing in Me by which I may be deceived or drawn into Error; but as I pertake in a manner of Nothing, or of a No-Being, that is, as I my self am not the Highest Being, and as I want many perfections, ’tis no Wonder that I should be Deceived. (ibid., p. 223)

Therefore, according to the rationalist Descartes, this denial of being has strongly negative connotations. Nothing is a non-being and nothing else.

An interesting objection to this principle was raised by Marin Mersenne:

But, you say, an effect cannot have any degree of reality or perfection that was not previously present in its cause. Yet it is quite otherwise: we see swarms of flies, other animals, and even plants brought forth by the sun, the rain, and the earth, in which there is no life, and life is nobler than any merely corporeal grade of being; hence an effect may receive from its proximate cause some reality which is nevertheless not present in that cause; all the more so when the idea is nothing but a figment of the mind, which is no nobler than the mind conceiving it. (Descartes, 1904, p. 124)

According to Mersenne, causes can be “inferior” to their effects. This happens, for example, in nature, when, in spontaneous generation, living creatures arise from nonliving matter.

This objection did not particularly trouble Descartes, for whom living beings could be considered automata, and who regarded his principle as indisputable. Nevertheless, Mersenne’s objection, although scientifically naïve by today’s standards, goes in the direction that we will take (somehow QM suggests to us that something ontologically “superior” can arise from something “inferior”).

Descartes’ position on nothing continues one of the two philosophical traditions of Western thought: the one which can be ascribed to Parmenides, who conceived nothing as the absolute absence of any determination of being, a complete deprivation of every positive property.Footnote 22

These principles of causality played a decisive role not only in the birth of philosophical thought in the ancient world but also in all its subsequent evolution, including the foundation of modern science, and its developments based on the formulation of the great principles of conservation.

A classical example is the law of conservation of matter, on which Antoine-Laurent de Lavoisier based modern chemistry. de Lavoisier’s fundamental postulate states: “Nothing is lost. Nothing is created. Everything is transformed.” In his Traité élémentaire de chimie, he writes: “Nothing is created by human action or in natural operations. It is a fundamental truth that in all operations there is the same quantity of matter before and afterward and that the quality and quantity of the material principles are the same; there are only alterations and modifications” (de Lavoisier, 1789, p. 107).

The philosophical approaches to the concept of causality had a fundamental historical moment in the criticism raised by Hume, who stood in contrast with all previous philosophical traditions: he dissociated the order of causes from the order of reasons, arguing that no a priori reasoning can infer that from a given thing the existence of another must necessarily follow, and maintaining that experience alone can tell us what will really follow. His conception led to four different non-metaphysical reformulations of the principle of causality:

  • the first (and weakest), advanced by Hume himself in terms of the perception of a constant and orderly, but not necessary, connection between cause and effect; causality was thus regarded as ordered connection, based on the impossibility of any reversal of the temporal order (the occurrence of an effect before its cause);

  • the second by Kant, who, in his attempt to give a more solid foundation to the principle, which he regarded as the only guarantee of the existence of natural science, identified causality with conformity to a rule or a law; causality was thus interpreted as lawfulness, according to his second analogy of experience, which claims: “everything that happens … presupposes something upon which it follows according to a rule”;

  • the third by Laplace, in the strongest form of mechanistic determinism, exemplified by the famous demon which, from the knowledge of the initial state (coordinates) of the universe, can predict its future state and retrodict its past history at any instant; it was thus a deterministic causality;

  • the fourth by John Stuart Mill, in the form of the principle of the uniformity of nature, which laid the foundation of the inductive method; causality was thus identified with uniformity between cause and effect, in the sense that the same causes produce the same effects.

In more recent times, a conception of nothing as negation is found in Henri Bergson’s Creative Evolution:

… there is no absolute void in nature. But admit that an absolute void is possible: it is not of this void that I am thinking when I say that the object, once annihilated, leaves its place unoccupied [...] The void of which I speak, therefore, is, at bottom, only the absence of some definite object, which was here at first, is now elsewhere [our italics] and, in so far it is no longer in its former place, leaves behind it, so to speak, the void of itself. A being unendowed with memory or prevision would not use the words “void” or “nought”; he would express only what is and what is perceived; now, what is, and what is perceived, is the presence of one thing or of another, never the absence of anything. (Bergson, 1922, pp. 296–297)

Bergson maintains that nothingness is precluded by the positive nature of reality. The absence of a thing is not a brute fact. Only the positive fact (the existence of that thing) and the notion of negation allow us to derive the negative fact of its absence. In general, “there is nothing” is just a contingent and negative fact that should be grounded on some positive reality.

In brief, Bergson claims that nothing is a pseudo-idea originated by the linguistic faculty of negation. His nothing has no absoluteness and no pervasiveness. It is a kind of ontologically local absence of an object which, for some reason, is no longer where it was before.

The Parmenidean metaphysical idea of an absolute nothing finds, after Hegel, its most radical expression in Heidegger,Footnote 23 in particular in What is Metaphysics?, the inaugural lecture he gave at the University of Freiburg in 1929. It contains his thesis of the inauthenticity of science, attributed to its inability to describe no-thing [das Nichts], which according to Heidegger is at the ground of metaphysics:

But why do we trouble ourselves about this no-thing? In fact, no-thing is indeed turned away by science and given up as the null and void. But if we give up no-thing in such a way, do we not indeed accept it? But can we talk about an acceptance if we accept nothing? Yet maybe all this back and forth has already turned into empty verbal wrangling. Science must then renew its seriousness and assert its soberness in opposition to this, so that it has only to do with be-ing [um das Seiende geht]. No-thing—what can it be for science except a horror and a phantasm? If science is right, then one thing is for certain: science wants to know nothing of no-thing [vom Nichts nichts wissen]. In the end, this is the scientifically strict comprehension of no-thing. We know it in wanting to know nothing about the no-thing. (Heidegger, 1998)

Unlike Descartes, for the irrationalist Heidegger, this denial of being proper to nothing has positive connotations, and it is precisely on nothing that he builds an anti-scientific metaphysics.

Bergson’s identification of nothing with negation was explicitly rejected by Heidegger in the name of his metaphysical nothing:

Yet is the Not[hing] given only because the “Not” and negation are given? Or are denial and negation given only when the Nothing is there? This question has never yet been posed, let alone decided. We assert: the Nothing is more primordial than denial and negation. (Heidegger, ibid.)

A point of view similar to that of Bergson, sharing the idea of relative nothing as the absence of some particular property or determination, was taken up by Rudolf Carnap. His critique of Heidegger’s metaphysics involved above all the meaningless conception of nothingness, or of no-thing, understood as the absence of being. Carnap refuted the previous claims in a famous essay, highlighting the complete absence of meaning of the above statements, which derives from two basic linguistic errors: first, the use of empty pseudo-concepts, devoid of any referential coupling, such as precisely Nothing; second, the construction of pseudo-propositions which appear grammatically correct and contain meaningful terms, but which violate the logical syntax of the language (such as “Caesar is a prime number” or “the adjectives love the analysis”). He believed that a correct and meaningful concept of nothing would imply its identification with logical negation.

Even the great mathematician David Hilbert dismissed Heidegger’s notion of no-thing in a peremptory way:

At a recent philosophical conference, I find this expression: “The nothing is the complete negation of the totality of the being”. This proposition is instructive for the fact that, in spite of its brevity, it exemplifies all the major violations that can be committed against the principles established by my axiomatic theory. (Hilbert, 1931, p. 485)

The last author we want to mention here, in this necessarily incomplete overview, is a physicist of the twentieth century: David Bohm. In his 1957 book, he too argues that causality, as the idea that everything comes from other things and that nothing can surge up out of nothing, is at the foundation of the possibility of a rational understanding of nature: “This general characteristic of the world can be expressed in terms of a principle […]; namely, everything comes from other things and gives rise to other things. This principle is not yet a statement of the existence of causality in nature. Indeed, it is even more fundamental than is causality, for it is at the foundation of the possibility of our understanding nature in a rational way” (Bohm, 1957, p. 1).

8 Does Orthodox QM Clash with the Principle of Rational Explanation?

Elsewhere it has been argued that the aforementioned four “post-Humean” formulations of the principle of causality are endowed with empirical meaning and contradict the orthodox interpretation of QM.Footnote 24 Indeed, Heisenberg refuted Laplace’s deterministic causality with his uncertainty relations, Bohr challenged Kantian causality with his complementarity principle, von Neumann rejected Mill’s causality with his proof of the impossibility of completing QM, and Pauli and Wheeler showed that even Hume’s causality is strongly questioned by QM.

Here we aim to give a hint (a more detailed analysis will be given in a forthcoming article) of how QM also violates the previously presented principle of rational explanation. Let us show it with a physical example, namely radioactive decay, starting with a simple consideration of Norton, according to which the best that standard quantum theory can deliver are

probabilities for future occurrences. The most complete specification of the state of the universe now cannot determine whether some particular Radium-221 atom will decay over the next 30 seconds (its half life); the best we can say is that there is a chance of 1/2 of decay. (Norton, 2007, p. 17)
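
To fix ideas, Norton’s “chance of 1/2” follows from the standard exponential decay law; the lines below are our textbook illustration, not Norton’s own formalism:

```latex
% Survival probability of a single unstable atom, with the decay
% constant \lambda fixed by the half-life t_{1/2}:
\[
  P_{\mathrm{survive}}(t) = e^{-\lambda t},
  \qquad
  \lambda = \frac{\ln 2}{t_{1/2}} .
\]
% At t = t_{1/2} (30 s in Norton's example):
%   P_survive = e^{-ln 2} = 1/2,
% i.e. exactly a chance of 1/2 of decay, and nothing more, for any
% individual atom.
```

The law fixes the probability completely, yet says nothing about which particular atom will actually decay within those 30 seconds.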

The quantum law of radioactive decay is only capable of giving an average lifetime for a certain class of atomic particles; it is not able to explain the different individual behavior of each single particle, identical to all the others, belonging to that class. Quantum physics, therefore, cannot explain the causes of this phenomenon, as Franco Selleri clearly states:

Today’s physics does not provide an understanding of these causes and accepts in fact an acausal philosophy: every decay is a spontaneous process and does not admit a causal explanation. The question about the different individual lives of similar unstable systems, like neutrons, will according to this line of thought remain forever without answer and should indeed be categorized as a ‘non-scientific’ question. (Selleri, 1990, pp. 33–34)

Quantum orthodoxy states that decay is a “spontaneous” phenomenon. This is a rather curious adjective, one that recalls human consciousness, its volitional inclinations, and in general the absence of constraints or ulterior motives behind certain attitudes, far more than it recalls a phenomenon obeying a physical law. Such spontaneity actually means that, in QM, identical systems prepared in the same initial conditions can produce different effects.

If we consider QM a complete theory (admitting neither hidden variables nor any non-statistical completion), as the orthodox interpretation claims, a simple and direct philosophical consequence is that decay goes against Mill’s aforementioned principle of the uniformity of nature, according to which the same causes should produce the same effects. In radioactive decay, indeed, identical particles, devoid of any intrinsic initial differentiation, can live much shorter or much longer than their average lifetime.Footnote 25
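
A minimal numerical sketch may make this vivid. Assuming only the exponential lifetime distribution that QM prescribes (the half-life value is borrowed from Norton’s example; the script is our illustration), identically prepared atoms receive wildly different individual lifetimes, with nothing distinguishing them beforehand:

```python
import math
import random

# Half-life from Norton's example (seconds); mean lifetime tau = t_half / ln 2.
HALF_LIFE = 30.0
TAU = HALF_LIFE / math.log(2)

random.seed(1)  # reproducible illustration

# Ten "identical" atoms: same preparation, same wave function.
# QM supplies only the exponential distribution of their lifetimes.
lifetimes = [random.expovariate(1.0 / TAU) for _ in range(10)]

for i, t in enumerate(lifetimes, start=1):
    print(f"atom {i:2d}: decays after {t:6.1f} s")
print(f"theoretical mean lifetime: {TAU:.1f} s")
```

In such a run some atoms decay within a few seconds while others survive for minutes; the theory supplies the distribution and remains silent on every individual case.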

However, the association of identical initial conditions (here, the identicality of the particles) with the sameness of causes is not a proper identification, also because it is not necessarily the case that the causes lie in the particles themselves. On the other hand, QM tells us that there are no causes in decay, as already expressed by Selleri, neither intrinsic (the wave functions of the particles are the same) nor extrinsic (no relational or stochastic effects, for instance, are expected). For this reason, we also want to reflect on the plausibility of a stronger violation of causality, one that goes far beyond Mill’s principle.

Let us start by taking literally the meaning of spontaneity as the absence of constraints. This seems quite natural because the causal chain of decay, traced backward in time, is broken before the decay itself begins, since no physical reason explains which particle will decay and which will not. The pre-decay causal past is simply cut off: it does not exist, at least in its “proactive” role. There is nothing capable of instantiating actions that constrain the atoms in any way. It therefore makes more sense to speak of the absence of causes rather than of their identity.

Thus QM does not allow us to assume anything—no event, no happening, no property of something, and presumably no extra-physical entity such as Gods or fate—(temporally) behind the different behavior of every single particle. Can we then say that an absolute nothingness is in a sense the cause of something? Is the following statement just an innocuous word pun: since spontaneous decay does not originate from anything, then it originates from the nothing?

If our tentative, slippery “equivalence”, “without cause = out of nothing”, is true, QM must take upon itself the violation of the principle of rational explanation, at least in its ex nihilo nihil form, because there is nothing, if not the absolute nothing itself, producing the different behaviors of the particles. Obviously, even admitting that absolute nothingness plays a role in decay (a physical role, let us say, verging on incoherence), this is not enough to explain why some atoms decay and others do not. We would be forced to say, superficially, that the nothing makes the differentiations, while the real reasons would remain buried in unfathomable metaphysical territory.Footnote 26

We are indeed aware that we are moving in a philosophical minefield (concepts as individually explosive as causation and nothingness become deadly together!) and that “there is no cause” and “nothing is the cause” are statements whose equivalence is surely open to criticisms fed by countless subtleties.Footnote 27

Nevertheless, it seems to us that, even if one does not want to accept that hazardous equivalence, one cannot but accept the fact that QM, at least in the case of decay, violates our basic and general idea of causal explanation: if a process or physical phenomenon is made up of individual events, each one related to the previous ones by a causal nexus, what happens to that connection between the single instants of a particle’s life when there is no real event capable of “choosing” its lifespan? That kind of amputation of the causal chain cannot but have negative repercussions on the meaning to be attributed to any explanation of decay.

We could also say, in a somewhat more pictorial and imaginative way, that orthodox QM, by making itself impotent in the face of the explanation of decay, also traces the boundaries, at least in that context, of the validity of physics itself; beyond those boundaries of physics, if one insists on tracing explanatory causes, one can only find them in metaphysical entities, such as absolute nothingness.

9 Relative No-Thing + QM = Ex Nihilo Aliquid Fit

From the quantum paradoxes of measurement seen above it is evident that this theory leads us to a junction of three roads, none of which is easy to follow.

The first leads to accepting the intrusion of the observer’s consciousness.

The second leads to attributing a weak level of physical reality to the wave function.

The third leads to recognizing some kind of reality to nothing.

As mentioned, in our opinion the first road is hardly practicable because of its strong subjectivist, even solipsist, consequences, whereas the second cannot be maintained in the absence of experimental confirmation. However, it is interesting to note that the latter is in disagreement with Cartesian causality. The reason is evident: the lesser causes embodied in empty waves would give rise to more “real”, in a sense more manifest, effects embodied in interferences and stimulated emissions of particles, so that a weaker level of reality would produce a stronger, detectable one, contrary to the principle of the non-inferiority of causes with respect to their effects. This does not mean, however, that this interpretation conflicts with the principle of rational explanation, according to which, as mentioned, nothing can derive from nothing. Here too the reason is intuitive: empty waves are something, even if they have zero energy and are devoid of those intrinsic properties possessed by all other physical objects. Therefore, it is still true that only from something can something originate.

Both of these consequences are perfectly in line also with the aforementioned third road, even if here the arguments, directly concerning nothingness, are more subtle and slippery. Let us try to take that road, explaining why.

The nothing implied by the third paradox (see Sect. 5) can only be a kind of Bergsonian, that is, relative or partial, nothing, regarded as a no-photon. It is therefore a nothing understood not as the absence of a metaphysical being (or better, as the presence of a pre-existing Heideggerian metaphysical being), but as the absence of a physical object that could be identified by the measurement process, an object to which, before that process, QM attributes a sort of potential reality through the wave function. Recall Bergson’s words: “The void of which I speak, therefore, is, at bottom, only the absence of some definite object, which was here at first, is now elsewhere and, in so far it is no longer in its former place, leaves behind it, so to speak, the void of itself” (ibid., p. 296; our italics).

We are aware, however, that in this way we assign some degree of reality to no-thing, precisely to the no-photon, detaching ourselves from Bergson and partially moving towards Heidegger, while still remaining halfway between them. Indeed, the no-photon is not merely the sterile product of our thought in its faculty of denying a presence, as Bergson’s philosophy would claim, insofar as the no-photon state implies something; but at the same time, neither is it the absolute Heideggerian nothingness that comes before (pre-exists) the absence of a thing (the photon). Therefore, our no-photon is fundamentally the Bergsonian absence of a specific thing, but “reinforced” with the capacity of the Heideggerian nothing to produce effects.
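
The sense in which the no-photon state “implies something” can be made explicit with a standard negative-result (Renninger-type) state update. The two-path notation below is our illustrative choice, not the formalism used in Sect. 5:

```latex
% A single photon in a superposition of two paths A and B:
\[
  |\psi\rangle = \alpha\,|A\rangle + \beta\,|B\rangle ,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
% A perfectly efficient detector on path A registers nothing during
% the transit time. This null outcome (the "no-photon" on A)
% nevertheless updates the state:
\[
  |\psi\rangle \;\longmapsto\; |B\rangle ,
\]
% an event occurring with probability |\beta|^2. The mere absence of
% a click acts causally, collapsing the wave function without any
% energy exchange at the detector.
```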

A brief digression on the concept of “degree of reality” is now necessary. In Sect. 6 we said that, according to Selleri, empty waves belong to a weaker (with respect to particles) level of physical reality insofar as they carry only relational properties. We can generalize this idea by arguing, with Busch and Jaeger, that: “As an element of empirical reality, an actual property has the capacity to act, to actualize an indicative measurement outcome if a measurement is performed. By contrast, when a property is absent it has no capacity to act.” (2010, p. 1349). So, there is one extreme of full actuality and another extreme of the complete absence of a property. Between them there can be indeterminate properties, to which Busch and Jaeger assign a limited degree of actuality (reality), these being neither fully real nor completely absent. This makes sense, according to them, because an indeterminate property has a quantifiable, although limited, capacity—a potentiality—to cause an indicative measurement outcome.
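
One natural way to read Busch and Jaeger’s “quantifiable capacity” is through the Born rule; the following gloss is our reconstruction, not a formula quoted from their paper:

```latex
% For a property represented by a projector P and a system in the
% state |\psi\rangle, the Born rule assigns the capacity
\[
  p \;=\; \langle \psi |\, P\, | \psi \rangle \;\in\; [0,1] .
\]
% p = 1: the property is actual; p = 0: it is absent;
% 0 < p < 1: the property is indeterminate, possessing a limited
% degree of actuality, i.e. a potentiality p to produce an
% indicative measurement outcome.
```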

Although the vagueness of the concept of “degree of reality” is thereby only attenuated, what matters is to conceive it as a “quantification” of the capacity to cause an event. The fact that a given outcome occurs entails that a certain property was not completely absent, even if it may have been present without a clear determinateness, only as a potentiality. For instance, in the simple case of presumed spontaneous generation, living creatures were supposed to arise from non-living matter, whose properties concerning life, as far as scientists knew at the time, had, being not manifest, a weaker degree of reality, if any, with respect to those of living beings. In fact, the non-living matter appeared inanimate; nevertheless, some of its unknown properties were mysteriously capable of generating beings endowed with fully actualized properties concerning life.

Let us go back to the no-photon. If we want to bring Descartes’ philosophy into play, the no-photon is not the Cartesian nothing understood as a form of metaphysical absolute nothingness, as a non-being given by the complete absence of any property or determination of being. Thus, overturning Descartes’ definition in his fourth Meditation, we could say that the no-photon, insofar as it is not a metaphysical non-being but a being with physical properties producing the quantum wave collapse, is a positive idea, not a negative one. It is a kind of empirical no-thing able to manifest itself actively, not merely through the passive absence of a particle.

Such reasoning, however, is not a philosophical free lunch: the attribution of some sort of reality (we could paradoxically say: of presence) to the absence of the photon entails a significant violation of Cartesian causality, in its more general form seen before, corresponding to the principle of the non-inferiority of causes: the no-photon state, being fundamentally a relative nothing devoid of all the physical characters of normal things, has a weaker degree of reality than the consequences it originates. But from this no strict violation of the principle of rational explanation follows: the ex nihilo nihil fit principle is still valid if we consider that such a nihil, namely the no-photon state, is not the absolute metaphysical nihil invoked by the founding fathers of this principle. Indeed, as already said, it is a partial or relative nihil, that is, a particular state of being able to cause a sort of interaction, so that, in such a physical context, which does not properly affect the metaphysical one, it would be more appropriate to speak of something out of no-thing: ex nihilo aliquid fit. And such a view, in a sense, curiously echoes the aforementioned doctrine of Mersenne, though with different physical and metaphysical subtleties.

This interpretation also leads to another result, which is very important for our philosophical idea of science. The reformulation of the metaphysical concept of nothing, now endowed not only with meaning but also with precise physical properties, has allowed us to use it in the Cartesian formulation of causality, enabling us to obtain a principle with empirical meaning which, like its other formulations, is violated by QM. But if the results of QM conflict with those of Cartesian philosophy, this means that we are not faced with empty metaphysics, as the neo-positivists believed, but with principles perfectly meaningful according to their own criterion.

10 Conclusions

The first conclusion is epistemological. It turns out that if we attribute physical properties to the absolute nothing, we do not violate the very concept of causal explanation as it emerged from the origins of rational thought in ancient philosophy through the well-known ex nihilo nihil. Indeed, the non-standard interpretation of quantum measurement seen before tells us that both of the entities on which it rests in its two versions (empty waves and no-thing states) are actually manifestations of something influential. Even the no-photon state, which seems even closer to a pure metaphysical nothing than the evanescence of empty waves, is still a sort of empirical nothing, i.e. a “causing” nothing, which we have defined as relative or partial, borrowing Bergson’s lexicon.

On the contrary, what both versions violate is the Cartesian principle of the non-inferiority of causes, a stronger formulation of causality with respect to those of Hume, Kant, Laplace and Mill, which are contradicted by the standard interpretation of QM. In fact, both of those entities, even if endowed with a very weak degree of reality, at least from the point of view of standard physical properties (energy, momentum, and so on), are considered able to produce effects belonging to a stronger level of reality, as seen in our proposed solutions of the paradoxes, which instead remain unexplained by the orthodox interpretation of QM, whose metaphysics is anything but captivating. The orthodoxy, in fact, violates not only the aforementioned four forms of the principle of causality but also the principle of rational explanation, in its original metaphysical interpretation, without our replacement of absolute nothingness by the relative (Bergsonian) no-thing.

The second conclusion concerns the foundations of QM, which, even if interpreted in a non-standard, more realistic way, still remains an acausal theory. One is faced, however, with a less severe form of acausality than the one present in the standard interpretation.

Excluding the empty-wave hypothesis, at least until it receives experimental confirmation, what remains is a partial nothing. This is no small pill to swallow, and we do not delude ourselves that such a perspective is easily digestible. On the other hand, it is simply a fact that quantum intricacies are a mystery that forces us to choose the least bizarre hypothesis. From this standpoint, the partial-nothing hypothesis still seems like a bargain, especially when compared with the expensive subjectivistic and solipsistic outcomes of von Neumann’s and Wigner’s views, which are particularly fragile in explaining negative-result experiments.

Last but not least, we believe that our analysis confirms how strongly the implications of QM contribute to reopening all the great metaphysical issues, showing also that these are not always pseudo-problems but questions that involve concepts and principles perfectly endowed with meaning in a factual sense.